‘Product-orientation’ among psychometric criteria in development and application
- 👤 Speaker: Professor Mark Haggard, Department of Psychology, University of Cambridge
- 📅 Date & Time: Tuesday 25 February 2014, 16:30 - 17:30
- 📍 Venue: Seminar Room, Department of Psychology, Downing Site, Cambridge
Abstract
Like any good science, psychometrics comprises an interplay of principles with facts, theories with data. Part of the theory of contemporary psychometrics is provided by general applied statistics, but other parts, such as IRT or Rasch scaling, are specific to the application area. Not all bulk users of measures of human responses are up to date with technical and theoretical developments in psychometrics, although there is sometimes an elementary 1940s version of psychological measurement on which to build. This is seen in three main ways. First, a very limited box-ticking concept of ‘validity’ (a criterion form plus a consistency index mixing reliability and validity) prevails. Second, amid universal pressures for brevity, the importance of instrument length goes largely unmentioned, so it is left to wishful thinking, even though the number of trials or questions is as important for statistical power as the number of participants. Third, the need to examine equal-interval properties is often ignored, even though these properties are fundamental to meaningfully applying the most powerful (parametric) statistical modelling techniques and to interpreting linearity issues and interaction terms.
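The point about instrument length can be made concrete with the classical Spearman-Brown prophecy formula, which predicts the reliability of a test lengthened (or shortened) by a factor k. The sketch below is my own illustration, not part of the talk; the reliability value 0.85 is a hypothetical figure, and the 14/32 ratio simply echoes the short- and medium-form lengths mentioned later in the abstract.

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability of a test whose length is scaled by factor k,
    given the reliability r of the original-length test."""
    return k * r / (1 + (k - 1) * r)

# Hypothetical example: shortening a 32-item measure with reliability 0.85
# to 14 items (k = 14/32) noticeably reduces predicted reliability.
print(round(spearman_brown(0.85, 14 / 32), 3))
```

The formula makes explicit what "wishful thinking" about brevity ignores: reliability, and with it statistical power, degrades predictably as items are dropped.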
Many small-sample psychological studies face a particular kind of circularity in attempting to optimise measurement. Optimising a measure for a particular purpose can overestimate effect sizes when the data are obtained using a measure developed specifically for that purpose. Large N is a splendid cure for this problem; failing that, cross-testing from a development sample to a different generalisation (test) sample can be used. Additionally, to avoid circularity in optimisation, a sufficiently rich data set may offer various internal ad hoc opportunities, i.e. validation paradigms, for construct validation in the selection and optimisation phases, that is, to guide item selection, scaling or weighting; this is again the use of a different subset of the data, but by variable rather than by case. I illustrate an eclectic approach to measure development by describing the nested phases in developing a long-form set of questionnaire-based health outcome measures for a randomised clinical trial, from three datasets; these have led to the subsequent specification of medium-length (32-item) and short-form (14-item) measures for use by others. Providing both reconciles the competing objectives that arise in application (and hence in development), allowing diverse uses. This strategy is not overly rule-bound, but seeks always to maximise prior constraint and replication, and to minimise capitalisation upon chance.
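The circularity described above can be demonstrated in a few lines. The simulation below is my own illustration, not the speaker's analysis: items and outcome are pure noise, yet selecting the items most correlated with the outcome in the development sample produces an inflated apparent effect there, while the cross-tested estimate in a held-out generalisation sample stays near zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_items = 100, 40

# Pure-noise items and outcome: the true item-outcome correlation is zero.
items = rng.normal(size=(2 * n_cases, n_items))
outcome = rng.normal(size=2 * n_cases)
dev, gen = slice(0, n_cases), slice(n_cases, 2 * n_cases)

# "Optimise" the measure: keep the 5 items most correlated (in absolute
# value) with the outcome in the development sample, sign-aligned.
r_dev = np.array([np.corrcoef(items[dev, j], outcome[dev])[0, 1]
                  for j in range(n_items)])
best = np.argsort(-np.abs(r_dev))[:5]
score = (items[:, best] * np.sign(r_dev[best])).mean(axis=1)

r_in = np.corrcoef(score[dev], outcome[dev])[0, 1]    # circular estimate
r_out = np.corrcoef(score[gen], outcome[gen])[0, 1]   # cross-tested estimate
print(f"development r = {r_in:+.2f}, generalisation r = {r_out:+.2f}")
```

The development-sample correlation capitalises on chance; the generalisation-sample figure is the honest one, which is exactly why cross-testing (or large N) is prescribed above.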
Series This talk is part of the Cambridge Psychometrics Centre Seminars series.
Included in Lists
- Biology
- Cambridge Neuroscience Seminars
- Cambridge Psychometrics Centre Seminars
- Cambridge talks
- Chris Davis' list
- Department of Psychiatry talks stream
- dh539
- Featured lists
- Life Science
- Life Sciences
- Neuroscience
- Neuroscience Seminars
- Psychology talks and events
- Seminar Room, Department of Psychology, Downing Site, Cambridge
- Stem Cells & Regenerative Medicine
- Yishu's list
Note: Ex-directory lists are not shown.