
Measuring the Mind: Conceptual Issues in Contemporary Psychometrics by Denny Borsboom


I read Borsboom's book on the plane on the way to France -- I can't generally sleep on planes, so I read the whole book in one sitting and it was the only book I brought with me to France. I first came across Denny Borsboom of the University of Amsterdam when he had the lead article in the journal, Psychometrika.



If you don't know, Psychometrika is about as mathematical as it gets for psychology journals, with the possible exception of the British Journal of Mathematical and Statistical Psychology, and often the articles are too technical for me to read from start to finish, though I usually get something out of them. The lead article in each issue is usually thought of as the best, and many quantitative psychologists will go their entire careers without a first-author Psychometrika article.

However, he had such an article straight out of graduate school, entitled "Attack of the Psychometricians," in September of 2006, and there were comments on the article by numerous psychologists whom I respect a great deal, some of whom were not even quantitative psychologists (most notably Lee Anna Clark -- I did my dissertation expanding David Watson and Lee Anna Clark's tripartite model of anxiety and depression). I don't like it when people publish such articles so early in their careers, as it makes me feel like a bit of a letdown.

Anyway, to see Borsboom give a talk at the University of Illinois at Chicago, check out this link:

In "Measuring the Mind," Borsboom addresses the philosophical underpinnings of applied psychological measurement. Specifically, he seeks to rest psychological measurement on secure underpinnings and to clarify explicitly how those underpinnings differ from classical test theory. The four-part punch line is that (1) one must think of measurement in the way that structural equation modeling represents measurement; (2) that way is fundamentally different from Classical Test Theory (CTT); (3) that way is also fundamentally different from the fundamental measurement model that conforms to the measurement axioms of physics, as explicated in works like Krantz, Luce, Suppes, and Tversky's classic text, Foundations of Measurement [there were three volumes to this text, with the first being the most important and influential; all are now out of print -- I managed to get my hands on volume one, but haven't been able to find the other two, so if anyone can, let me know]; and (4) adopting this view requires rethinking validity. Duncan Luce, the second author, received a Ph.D.

Borsboom does an excellent job explicating the first three of these points. His fourth point, which is more new ideas than explication of older ones, is the least developed. It's still good, but Paul Meehl is my favorite psychologist of all time and Lee Cronbach was my intellectual great-grandfather (Matthew taught me multivariate statistics, Matthew's advisor was Larry Hubert, and Larry's advisor was Lee Cronbach), so it's going to take more than "good" on Borsboom's part to overturn the ideas in a paper like theirs -- Cronbach and Meehl's classic paper on construct validity -- and perhaps Borsboom will spend much of his future writing (I know he's started doing some of this) on this very point.

However, in CTT the concept of error is supposed to apply to an individual measurement. Unfortunately, in practice, the way psychologists estimate error is by looking at a group of people as a whole. What is supposed to be meant, if measurement is to make sense, is that the error comes from the individual -- maybe he or she stayed up too late before the psychological testing, had too much caffeine, had a hangover, etc. The only way to get at an error distribution for an individual is to measure that individual several times.

The expected value of the error distribution is zero, but the probability of obtaining an error of exactly zero on a single observation is itself zero, so a single observation wouldn't provide a proper measurement of a person. So Borsboom comes up with a humorous and purposely ridiculous philosophical scenario: an individual takes a test, then has her brain washed to prevent the effects of learning (most of our statistics assume multiple measurements come from random variables that are i.i.d.), takes the test again, and so on.

That way, an individual error distribution would be meaningful. But this has no resemblance to the real practice of psychological measurement.


AND, if it did, then the expected error would always be zero, thereby equating the expected observed measurement with the construct of interest itself. This in and of itself is problematic, as a measurement is of a construct rather than being the construct (this is also an issue for the measurement fundamentalists).
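To make the formal point concrete, here is the classical test theory decomposition in standard textbook notation (my own gloss, not a quotation from the book):

```latex
% Per person i, the true score is defined as the expected observed score
% over that person's own (hypothetical) repeated-testing distribution:
\[
  \tau_i = E(X_i), \qquad e_i = X_i - \tau_i, \qquad E(e_i) = 0 .
\]
% Nothing in this definition ties \tau_i to a construct such as
% intelligence; the "true score" is fixed by the test itself, which is
% why true score and construct score must not be conflated.
```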



The whole discussion can be thought of in terms of construct realism and constructivism, and he does explicate that point, but it seems almost superfluous to his basic idea. The CTT explication has relevance to those working on issues of ergodicity in measurement (he specifically thanks Peter Molenaar for input on this text, but does not rely much on Molenaar's work) and briefly touches on issues relating to P-technique and dynamic factor analysis.

In terms of the structural equation modeling (SEM) approach to measurement, the idea he presents is that there must be a construct in which one is interested a priori.

The measurements, regardless of whether they are for an individual or a group, are indicators of the construct but not synonymous with the construct itself, as the correlation between measurement and construct is not perfect. The point here is that the error terms in SEM models are not identical to the error terms in CTT models, and the face-value similarity of the two is misleading. The problematic issue from this chapter is the idea of positing constructs a priori. It seems difficult to think of a construct separately from the measurement thereof. Then again, this is a problem I may have developed by studying statistics too much -- I can't think of research independently of the statistics used in the research -- but I know many people who do every day, so this would likely be possible for many people.
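To pin down the contrast between the two kinds of error terms mentioned above, here is a side-by-side in standard notation (my own sketch, not Borsboom's):

```latex
% Classical test theory vs. a one-factor (SEM-style) measurement model:
\[
  \mbox{CTT: } X = \tau + e, \ \ \tau = E(X)
  \qquad\qquad
  \mbox{Factor model: } x_j = \lambda_j \theta + \varepsilon_j
\]
% In CTT, e is whatever separates an observation from that test's own
% expectation. In the factor model, \theta is a latent variable posited
% a priori, and \varepsilon_j is the part of indicator j not explained by
% \theta, which can include perfectly reliable item-specific variance.
% The two "error" terms are therefore not the same quantity.
```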

If one posits constructs a priori, then this opens up an area of research on testing the relationship of indicators to constructs. However, it means that one must be a "construct realist," to use Borsboom's term, and that one must be willing to make very concrete, testable hypotheses about which indicators relate to which constructs, and how they do so. With regard to the fundamental measurement approach, Borsboom explicates this position, the mathematically most rigorous of them all, as actually being a constructivist position.

For example, in traditional Guttman scaling, one has a matrix with ones on and below the main diagonal and zeros above it. Each row corresponds to a case. Each column corresponds to an item. A zero indicates that an individual has not endorsed the item (or not answered it correctly), whereas a one indicates that the individual has endorsed the item (or answered it correctly).
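A tiny illustration of what such a perfect response pattern looks like; this is my own toy example in Python, not something from the book:

```python
import numpy as np

# Perfect Guttman pattern: person i (row) endorses exactly the i easiest
# items (columns ordered from easiest to hardest), so the response matrix
# is lower triangular.
n = 5
responses = np.tril(np.ones((n, n), dtype=int))
print(responses)

def is_perfect_guttman(pattern):
    # With items ordered from easiest to hardest, every row must read as a
    # block of ones followed by a block of zeros (non-increasing left to
    # right); a single 0 followed by a 1 breaks the scale.
    return bool(np.all(np.diff(pattern, axis=1) <= 0))

print(is_perfect_guttman(responses))  # True
```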

This type of scaling provides all of the fundamental properties that are present in measurement in physics, even if the items are as disparate as "picking one's nose" and "voting for Bush" (no political comment intended here). In the Rasch model, one models the probability curves of the items as a function of ability, and there is no place for different items being easier or harder for different people.
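For reference, here are the two item response models at issue in the next paragraph, written in their usual textbook form (standard notation, not a quotation from Borsboom):

```latex
\[
  \mbox{Rasch: }\;
  P(X_{ij}=1 \mid \theta_i) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)},
  \qquad
  \mbox{2PL: }\;
  P(X_{ij}=1 \mid \theta_i) = \frac{\exp(a_j(\theta_i - b_j))}{1 + \exp(a_j(\theta_i - b_j))}.
\]
% In the Rasch model every item curve is a horizontal shift of the same
% logistic, so the curves never cross and every person faces the same
% difficulty ordering of the items. The 2PL's discrimination parameter
% a_j lets curves cross, which is what Rasch-camp objections of the kind
% mentioned below are aimed at.
```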

However, Borsboom explicates how something like a 2PL IRT model might be able to meet the measurement conditions, despite the criticisms to the contrary -- made with appeals to fundamental measurement -- from individuals like Ben Wright (a famous Rasch modeler from the University of Chicago). Regardless of this tangent about meeting fundamental measurement properties, Borsboom returns to the idea that the Guttman scaling idea (which generalizes to IRT) leaves one in a position where the items themselves are synonymous with the construct, which is also a usually unintended consequence of the CTT approach and, incidentally, of Principal Components Analysis (PCA). Anyway, this leaves one in a situation where what you measure IS the thing.

On this view, there is no such thing as intelligence other than a WAIS score. This seems a ridiculous position to be in, and it means our science works not at establishing relationships between constructs but at establishing relationships between measures -- a task that I find totally uninteresting.


Borsboom has a quote about the three umpires -- the first said, "I call them as I see them"; the others, in the usual telling, call them as they are, or insist they ain't nothing till they call them. With regard to the implications of these ideas for construct validity, it seems that the traditional approach to establishing validity has been to correlate a measure with similar measures of the construct. However, this correlational approach, at least for Borsboom, seems flawed: too much of a catch-all for all types of validity. Borsboom would like to stop using construct validity as an umbrella term and to elaborate on other types of validity instead.


Specifically, the ability to predict something in the future might be very different from the ability to predict things concurrently -- something that is nearly completely masked by the use of a bunch of correlations between measures, as in the Multi-Trait Multi-Method Matrix (MTMM) so common in establishing validity in psychology. I don't really understand what Borsboom is offering here, other than saying that one should clearly delineate the temporal element of the statistical relationship; that seems acceptable, but also almost already done, especially with the recent explosion of methods for longitudinal data analysis in psychology and the multiple papers on the "bias" that creeps in when one uses concurrent measurement to estimate prospective occurrence (I'd argue that it's not bias per se, but rather that one is examining different constructs and measuring one of them unbiasedly).
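To see how concurrent and predictive relationships can come apart, here is a toy simulation (my own, with made-up numbers; it is not drawn from the book or from any real data set):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The trait drifts over time (AR(1)-style), so a test taken now relates
# more strongly to a concurrent criterion than to the same criterion
# measured later, even though nothing about the test itself has changed.
trait_now = rng.normal(size=n)
trait_later = 0.6 * trait_now + np.sqrt(1 - 0.36) * rng.normal(size=n)

test_score = 0.8 * trait_now + 0.6 * rng.normal(size=n)
criterion_now = 0.7 * trait_now + 0.7 * rng.normal(size=n)
criterion_later = 0.7 * trait_later + 0.7 * rng.normal(size=n)

print(round(np.corrcoef(test_score, criterion_now)[0, 1], 2))    # ~0.57 (concurrent)
print(round(np.corrcoef(test_score, criterion_later)[0, 1], 2))  # ~0.34 (predictive)
```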

Overall, I encourage anyone interested in psychological measurement to read this book. I think the construct-realist approach Borsboom puts forth would do a great deal for psychologists, both in clarifying what it is they are thinking about and in making explicit how their measurements relate to what they are thinking about. However, it might be very difficult to do. I guess I think of what Raymond Cattell used to call the inductive-hypothetico-deductive spiral as the position Borsboom's book leaves us in, and that's not an altogether bad place to be.

As is typical of popular statistical procedures, classical test theory is prone to misinterpretation. One reason for this is the terminology used: the infelicitous use of the adjective "true" invites the mistaken idea that the true score on a test must somehow be identical to the "real," "valid," or "construct" score. This chapter has hopefully proved the inadequacy of this view beyond reasonable doubt.


This chapter discusses the theory behind latent variables in psychometrics, particularly with regard to item response theory.

This chapter discusses measurement scales as the central concept of representational measurement theory. It looks at the history behind psychological measurement scales and also at attempts to formalise measurement properties, such as additive conjoint measurement. If the ability to construct a homomorphic representation were a necessary condition for measurement, it would follow that we should be able to gather data that fit the measurement model perfectly.

This is because, strictly speaking, models like the conjoint model are refuted by a single violation of the axioms. Since we can safely assume that we will not succeed in gathering error-free data -- certainly not in psychology -- we must choose between two conclusions: either psychological measurement is impossible, or the demand for a perfect, error-free fit to the axioms is too strong. If we accept the former, we may just as well stop the discussion right now. If we accept the latter, then we have to invent a way to deal with error.
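For context, this is the kind of additive conjoint representation being discussed, in the standard formulation one finds in Krantz, Luce, Suppes, and Tversky (my summary, not a quotation from the book):

```latex
% An ordering \succsim on pairs (a, x) admits an additive conjoint
% representation when (requires amssymb for \succsim):
\[
  (a, x) \succsim (b, y)
  \quad\Longleftrightarrow\quad
  \phi_1(a) + \phi_2(x) \;\ge\; \phi_1(b) + \phi_2(y).
\]
% The axioms that guarantee such a representation (independence, double
% cancellation, and so on) are deterministic, so a single observed
% violation refutes the model -- which is exactly why error-laden
% psychological data force the dilemma described above.
```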