r/psychometrics

Has anyone applied taxometric methods to motivational typologies beyond the Big Five framework?

The "pseudoscience" label gets applied to type-based personality models as a single verdict, but it seems to conflate three separable questions:

  1. Do the described behavioral patterns actually exist as discrete clusters? This is an open empirical question for most typological systems. Not disproven; largely untested with appropriate methods.

  2. Can the patterns be measured reliably? For the Enneagram specifically, cross-instrument agreement sits around 42%. This is a known measurement failure, but the failure appears to be format-specific. Self-report instruments hit a structural ceiling when the construct being measured shapes how respondents describe themselves. The Big Five's measurement advantage came partly from format alignment: the constructs were extracted from self-report data, so self-report instruments naturally recover them. Constructs not derived from self-report factor analysis may require different measurement formats entirely.

  3. Has anyone studied them rigorously? For most typological systems outside the Big Five, the answer is almost entirely no. The absence is striking, but it reads more as neglect than disconfirmation.

Nick Haslam's work on taxometric methods showed that categorical vs. dimensional structure is an empirically testable question, not a theoretical assumption. The methods exist to determine whether behavioral data clusters categorically. Whether anyone has systematically applied these methods to motivational typologies that describe behavioral patterns (rather than self-reported trait endorsements) is the question I keep running into.
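For anyone unfamiliar with what these procedures actually do, here's a rough sketch of one classic taxometric procedure, MAMBAC (mean above minus below a cut), run on simulated data. Everything here is made up for illustration (the sample sizes, group parameters, and cut grid are arbitrary choices, not a validated implementation): the point is just that taxonic data tend to produce a peaked curve while dimensional data produce a flatter, dish-shaped one.

```python
import numpy as np

def mambac_curve(input_ind, output_ind, n_cuts=50):
    """MAMBAC: sort cases on one indicator, then at each cut compare the
    other indicator's mean above vs. below the cut. A peaked curve
    suggests a latent taxon; a flat/dish-shaped curve suggests a dimension."""
    y = output_ind[np.argsort(input_ind)]
    n = len(y)
    # interior cut points only, skipping the extreme 10% on each side
    cuts = np.linspace(int(0.1 * n), int(0.9 * n), n_cuts).astype(int)
    return np.array([y[c:].mean() - y[:c].mean() for c in cuts])

rng = np.random.default_rng(0)

# taxonic case: a mixture of two groups separated on both indicators
taxon = rng.normal(2, 1, (1200, 2))
complement = rng.normal(0, 1, (2800, 2))
tax = np.vstack([taxon, complement])

# dimensional case: one continuous latent factor driving both indicators
f = rng.normal(0, 1, 4000)
dim = np.column_stack([f + rng.normal(0, 1, 4000),
                       f + rng.normal(0, 1, 4000)])

curve_tax = mambac_curve(tax[:, 0], tax[:, 1])
curve_dim = mambac_curve(dim[:, 0], dim[:, 1])
# the taxonic curve rises to a clear peak; the dimensional one stays
# comparatively flat, so the peak prominence separates the two structures
```

In the real procedure you'd average curves across many indicator pairs and compare against simulated taxonic and dimensional comparison data, but even this toy version shows the logic: categorical vs. dimensional structure leaves a visible signature in the curve shape.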

The bridge position that seems unoccupied: the described patterns may be real, the dominant measurement approach may be structurally wrong for them, and better-suited methods exist in behavioral observation traditions that haven't been applied. This is different from both "the system works, trust the practitioners" and "it's pseudoscience, reject it entirely."

Has the community encountered taxometric or behavioral observation approaches applied to personality constructs outside the standard factor-analytic framework? Interested in what's actually being done, not what's theoretically possible.

reddit.com
u/eSorghum — 1 day ago

Prospective Measurement & Evaluation PhD Student

Hi everyone,

I’m currently preparing for PhD applications in Measurement & Evaluation for Fall 2027. I’m a first-year master’s student in Student Affairs & Higher Education with interests in assessment.

So far, I’ve shortlisted schools, started preparing for the GRE, and planned for LoRs. This summer, I’m hoping to gain more exposure to research in the field.

I wanted to ask if anyone here is working on a project related to psychometrics, assessment, validation, survey design, or statistical modelling that I could shadow or assist with remotely in some capacity. Even informal exposure to research workflows or methods would be very valuable as I refine my interests and prepare for doctoral study.

I am passionate about the field, but I’d love more practical insight into how research projects operate.

I’d also appreciate any advice on experiences or skills that would strengthen a PhD application in this area. Thank you!

u/Kumatsia — 1 day ago

i’m a newly minted psychometrician (passed my licensure a few months ago), and i happen to have developed an interest in survey validation and survey methods (i’m really excited to take this course in my MA this august).

in my current project in Microsoft Excel, i am exploring CVI computations, specifically the I-CVI and UA for each item in my questionnaire, then computing the S-CVI/Ave and S-CVI/UA; i have no concerns as long as there is only one domain/construct with a set of items reflecting that single domain/construct.

my concern arises when i think of a scenario where i have to validate a survey questionnaire with two or more domains measuring different constructs. do i simply average the I-CVIs of all items and call that scale-level content validity, do i compute the CVI for each domain separately, or do i do both?

i think what i’m really worried about here is the first option: whether it’s psychometrically defensible and logical to average the CVIs of all items in one go when they reflect different constructs.
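to make my question concrete, here's a small sketch of the computations i mean (the ratings and the two-domain split are made up; the formulas are the standard ones: I-CVI = proportion of experts rating an item 3 or 4, S-CVI/Ave = mean of the I-CVIs, S-CVI/UA = proportion of items every expert endorsed):

```python
import numpy as np

# hypothetical data: 5 experts x 6 items, rated on a 4-point relevance
# scale where 3 or 4 counts as "relevant"
ratings = np.array([
    [4, 4, 3, 4, 2, 4],
    [3, 4, 4, 3, 3, 4],
    [4, 3, 4, 4, 4, 3],
    [4, 4, 4, 3, 3, 4],
    [3, 4, 4, 4, 4, 4],
])
# hypothetical split of the 6 items into two domains
domains = {"domain_A": [0, 1, 2], "domain_B": [3, 4, 5]}

relevant = ratings >= 3                # expert judged the item relevant
i_cvi = relevant.mean(axis=0)          # I-CVI per item
ua = (i_cvi == 1.0).astype(float)      # universal agreement per item

# the "one go" version: pool every item regardless of domain
s_cvi_ave = i_cvi.mean()
s_cvi_ua = ua.mean()

# the per-domain version: one S-CVI/Ave and S-CVI/UA per construct
for name, idx in domains.items():
    print(name, i_cvi[idx].mean().round(3), ua[idx].mean().round(3))
```

the pooled numbers are what i'd get by averaging everything in one go; the loop is the per-domain alternative. my worry is whether the pooled version means anything when the items reflect different constructs.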

i’d really love to hear your thoughts on this!

u/thoughtalchemyst — 13 days ago