[Q] How would you test whether mass AI use explains any residual variation in recent crime declines?
I’m trying to think through a causal-inference question and would appreciate statistical guidance. To be clear up front, I am not claiming causation. The question: how would you test whether mass generative-AI adoption explains any residual variation in recent U.S. crime declines, after accounting for the obvious confounders?
The motivating observation: over roughly the period that AI use became widespread, FBI national data showed substantial 2024 declines: violent crime down 4.5%, murder down 14.9%, robbery down 8.9%, rape down 5.2%, and aggravated assault down 3.0%. Pew also reported in 2025 that 62% of U.S. adults say they interact with AI at least several times a week.
Hypothesis to test: for some users, conversational AI may serve as a channel for behavioral displacement, emotional regulation, loneliness buffering, conflict rehearsal, fantasy discharge, cognitive interruption, or impulse delay, any of which could plausibly reduce impulsive offending at the margin.
Major confounders: post-pandemic normalization, policing changes, reporting and measurement changes (e.g., the FBI's NIBRS transition), demographics, economic shifts, school/routine restoration, local policy, violence-intervention programs, substance-use trends, and regional differences in baseline crime risk.
What statistical design would be strongest here?
Ideas I’m considering:
- difference-in-differences across high- vs. low-AI-adoption regions (a two-way fixed-effects sketch follows this list)
- age/sex cohort analysis, focused on the younger cohorts where adoption is heaviest
- interrupted time-series analysis around adoption surges (second sketch below)
- negative controls: crime categories AI should not plausibly affect (folded into the first sketch)
- comparing crimes sensitive to the hypothesized outlet mechanism (impulsive violence) against crimes AI arguably enables, such as fraud and cybercrime
- natural experiments from uneven access, outages, model changes, or institutional adoption
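
To make the DiD idea concrete, here is a minimal two-way fixed-effects sketch in Python with statsmodels, run on simulated data. Every column name (`ai_adoption`, `violent_rate`, `negcontrol_rate`, `unemployment`) is a placeholder; a real analysis would use FBI UCR/NIBRS crime rates and a survey- or telemetry-based adoption measure. The negative-control idea is folded in: the same specification is run on an outcome AI should not plausibly affect, and a "significant" coefficient there flags residual confounding rather than a real effect.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel: one row per (region, month). All columns are
# stand-ins for real data (FBI NIBRS rates, survey adoption shares).
rng = np.random.default_rng(0)
regions = [f"r{i}" for i in range(50)]
months = [str(m) for m in pd.period_range("2021-01", "2024-12", freq="M")]
df = pd.DataFrame([(r, m) for r in regions for m in months],
                  columns=["region", "month"])
df["ai_adoption"] = rng.uniform(0, 1, len(df))    # share of adults using AI weekly
df["unemployment"] = rng.uniform(3, 10, len(df))  # example time-varying control
df["violent_rate"] = rng.poisson(40, len(df)).astype(float)     # per 100k
df["negcontrol_rate"] = rng.poisson(40, len(df)).astype(float)  # a category with no plausible AI mechanism

# Two-way fixed effects: region FE absorb stable baseline differences,
# month FE absorb national shocks (post-pandemic normalization, etc.).
# Standard errors clustered by region for within-region serial correlation.
for outcome in ["violent_rate", "negcontrol_rate"]:
    m = smf.ols(
        f"{outcome} ~ ai_adoption + unemployment + C(region) + C(month)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
    print(f"{outcome}: beta={m.params['ai_adoption']:.3f} "
          f"se={m.bse['ai_adoption']:.3f}")
```

One caveat: with only ~50 regions, asymptotic clustered standard errors can be unreliable, so a wild-cluster bootstrap would be a safer inference procedure.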
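For the adoption-surge idea, segmented-regression interrupted time series is the standard starting point. In the sketch below the break date (November 2022, the ChatGPT release) is an assumption, and a single national break is weak evidence on its own; it mainly quantifies whether the post-surge slope departs from the pre-existing trend.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated monthly national series; the break (Nov 2022, the ChatGPT
# release) is an assumed adoption shock, not an established one.
months = pd.period_range("2019-01", "2024-12", freq="M")
t = np.arange(len(months), dtype=float)
post = (months >= pd.Period("2022-11", freq="M")).astype(int)
t0 = int(post.argmax())  # index of the first post-break month
rng = np.random.default_rng(2)
crime = 400 - 0.5 * t - 3.0 * post * (t - t0) + rng.normal(0, 5, len(t))

its = pd.DataFrame({
    "crime": crime,
    "t": t,                     # pre-existing linear trend
    "post": post,               # level shift at the break
    "t_post": post * (t - t0),  # slope change after the break
})
# HAC (Newey-West) errors to handle serial correlation in the series.
m = smf.ols("crime ~ t + post + t_post", data=its).fit(
    cov_type="HAC", cov_kwds={"maxlags": 12}
)
print(m.summary().tables[1])
```

In a real version I would add month-of-year seasonality controls and run placebo break dates to see how often a "significant" slope change appears by chance.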
What datasets, controls, or methods would make this test least vulnerable to overclaiming?
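
On overclaiming specifically: whatever design is used, an event-study version of the DiD that interacts a baseline adoption-intensity measure with leads and lags around the assumed surge date is the cheapest pre-trend check. Nonzero "effects" before the surge mean parallel trends fails and the DiD estimate should not be trusted. A sketch on simulated data (all names hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Event-study pre-trend check on simulated data. `ai_adoption` here is a
# fixed region-level (baseline) intensity; the surge date is an assumption.
rng = np.random.default_rng(1)
regions = [f"r{i}" for i in range(50)]
periods = pd.period_range("2021-01", "2024-12", freq="M")
df = pd.DataFrame([(r, p) for r in regions for p in periods],
                  columns=["region", "period"])
adopt = {r: rng.uniform(0, 1) for r in regions}
df["ai_adoption"] = df["region"].map(adopt)
df["violent_rate"] = rng.poisson(40, len(df)).astype(float)

surge = pd.Period("2022-11", freq="M")  # assumed surge date
idx = {p: i for i, p in enumerate(periods)}
df["event_time"] = (df["period"].map(idx) - idx[surge]).clip(-12, 12)
df["month"] = df["period"].astype(str)

# Build lead/lag interactions by hand, omitting event_time == -1 as the
# reference period (names offset by +12 to keep them valid identifiers).
terms = []
for k in range(-12, 13):
    if k == -1:
        continue
    name = f"ll_{k + 12}"
    df[name] = (df["event_time"] == k).astype(float) * df["ai_adoption"]
    terms.append(name)

m = smf.ols(
    "violent_rate ~ " + " + ".join(terms) + " + C(region) + C(month)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
# Pre-surge coefficients (ll_0 .. ll_10) should be indistinguishable from 0.
print(m.params[[t for t in terms if int(t.split("_")[1]) < 11]])
```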