Future state of Academic HCI and the impact of AI
TL;DR: AI is supercharging publish-or-perish without a matching upgrade in review or supervision. The risk isn't obvious copy-paste; supervisors (should) catch that. It's quieter: methodological thinking delegated to LLMs while the work still passes review. I wonder what happens to a discipline that risks producing more research than anyone can keep up with: academics talking to a wall, unable to process what comes next.
Curious how others in HCI are handling this.
-----
I'm coming at this from the academic side, with some reviewing and service work, and I'm soon moving to industry. The view from here has been making me uneasy, and I wanted to see how others are processing it.
Publish-or-perish has been the dominant incentive in HCI for years, and we've all tolerated a layer of mediocre papers because the human bottleneck kept volume manageable. That bottleneck is being lifted. AI is a real productivity multiplier, and the review system doesn't seem set up for what's coming through.
What worries me isn't the obvious failure mode (PhD students copy-pasting generated text; supervisors usually catch that). It's the subtler delegation of thinking: using LLMs to pick baselines, generate hypotheses, choose theoretical frameworks, and design pilots. The output reads well, the stats are clean, the writing is fluent, but no one (not the student, not the advisor) has actually defended the methodological choices. And it often makes it through review.
The supervision side worries me too, and not just because of workload. There's a generational asymmetry I keep noticing: many PIs don't use these tools much, or use them superficially, and PhD students are often more AI-fluent than their advisors. The traditional "I know more than you because I've been here longer" mentorship model gets strained when the student can produce competent-looking output in areas the supervisor doesn't deeply master. So it isn't only the PI with 10+ students drowning in workload, it's that many advisors may not be well-positioned to spot where the LLM hallucinated a reference, suggested a confounded design, or stitched together a methodologically thin narrative.
One rough prediction: bifurcation. Top venues push toward formats AI struggles with (in-the-wild deployments, longitudinal studies, working artifacts, replications) and tighten methodological requirements (pre-registration as default, maybe). Smaller venues get flooded and lose signal. Industry pulls further ahead of academia on anything requiring data and infrastructure. A replication crisis 2.0 within 3–5 years wouldn't surprise me, and it might actually be what the field needs to avoid a slower death.
I want to leave room for a counterpoint, though: the gap between researchers who use AI rigorously and those who don't is widening, and that's actually a good thing for people who care about quality. If you read the literature properly, defend your methodology, and catch the LLM when it confabulates, you have a real accelerator that doesn't degrade quality. Publish-or-perish with AI punishes weaker thinking more visibly and rewards rigorous thinking more visibly. It doesn't fix the systemic problem, but at the individual level it feels more like an opportunity than a threat.
Curious what others are seeing. How is your lab actually handling AI in PhD work? Any explicit policies, or is it mostly informal? Supervisors, are you keeping up, and how? PhD students, where do you personally draw the line between using a tool and delegating thinking? And as reviewers, are you flagging anything different yet?