The honest answer is no, not with certainty. Human behavior is too contingent, too contextual, and too sensitive to missing information to support that kind of claim.
The more useful answer is that AI agents can help model reactions in a structured way, under explicit assumptions.
What they can do well
They can help surface:
- likely first-order reactions,
- incentive conflicts,
- amplification patterns,
- weak assumptions in a scenario brief.
That is already valuable. It turns "predict the future" into a reviewable workflow.
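As a minimal sketch of that reviewable workflow, the structure below captures those four outputs as explicit fields rather than free-form prose. The field names and the prompt wording are illustrative assumptions, and no specific LLM client is assumed.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioReview:
    """Structured output for one simulation pass over a scenario brief."""
    first_order_reactions: list[str] = field(default_factory=list)   # likely immediate responses
    incentive_conflicts: list[str] = field(default_factory=list)     # actors whose interests collide
    amplification_patterns: list[str] = field(default_factory=list)  # loops that could magnify a reaction
    weak_assumptions: list[str] = field(default_factory=list)        # claims in the brief that need evidence

def build_prompt(brief: str) -> str:
    """Ask the agent for the four reviewable outputs, not a single point forecast."""
    return (
        "Given this scenario brief, list separately: "
        "(1) likely first-order reactions, (2) incentive conflicts, "
        "(3) amplification patterns, (4) the weakest assumptions in the brief.\n\n"
        + brief
    )
```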
What they cannot do well
They cannot guarantee:
- perfect realism,
- access to missing facts,
- immunity from framing mistakes,
- correct handling of every hidden variable.
This matters because the most dangerous failure mode is false precision: a confident, specific-sounding forecast whose hidden assumptions were never examined.
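One way to guard against that failure mode, sketched here with hypothetical names, is to refuse bare point forecasts and require every claim to carry its assumptions and a confidence label:

```python
from dataclasses import dataclass

@dataclass
class HedgedClaim:
    """A forecast claim that cannot be stated without its supporting context."""
    claim: str
    assumptions: list[str]               # what must hold for the claim to apply
    confidence: str                      # e.g. "low", "medium", "high"
    evidence_that_would_change_it: str   # what new fact would overturn the claim

def validate(claim: HedgedClaim) -> None:
    """Reject output that looks precise but carries no stated assumptions."""
    if not claim.assumptions:
        raise ValueError("Claim has no stated assumptions: likely false precision.")
```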
Prompt template
Based on the uploaded scenario, show the most plausible reaction path,
the weakest assumptions behind it, and the evidence that would most
change the forecast.
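A minimal sketch of wiring that template to an agent call follows. The template text comes from above; call_agent is a placeholder for whatever LLM client you actually use, not a real API.

```python
TEMPLATE = (
    "Based on the uploaded scenario, show the most plausible reaction path, "
    "the weakest assumptions behind it, and the evidence that would most "
    "change the forecast."
)

def call_agent(prompt: str) -> str:
    # Placeholder: swap in your LLM client here.
    raise NotImplementedError

def run_forecast(scenario_text: str) -> str:
    # Keep the instruction and the scenario separate so reviewers can see
    # exactly what the agent was asked.
    prompt = f"{TEMPLATE}\n\n--- SCENARIO ---\n{scenario_text}"
    return call_agent(prompt)
```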
How to use the output responsibly
Use the simulation to expand the analysis surface, not to outsource the decision. A strong operator treats the output as structured evidence, then checks the reaction graph, the scenario boundary, and the pressures the scenario leaves out.
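The sketch below turns that review pass into an explicit checklist. The three checks mirror the prose; the names are illustrative and not part of any MiroFish API.

```python
# Human review pass over a simulation output, under the assumptions above.
REVIEW_CHECKLIST = [
    "Does the reaction graph connect causes to effects you can defend?",
    "Is the scenario boundary explicit about what was modeled and what was left out?",
    "Which real-world pressures (regulatory, financial, reputational) are missing?",
]

def review(answers: dict[str, bool]) -> bool:
    """The forecast informs a decision only if every check passes."""
    return all(answers.get(item, False) for item in REVIEW_CHECKLIST)
```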
Related guides: LLM Social Simulation Explained and How to Review a MiroFish Forecast.
Limits
If a decision carries material policy, financial, or reputational downside, a human review layer is not optional.