For years, the mainstream conversation about AI risk was dominated by alignment theorists, existential philosophers, and competing visions of superintelligence. The risks being modeled were abstract and long-horizon. Now something has shifted.

Citrini Research — a financial analysis firm, not an AI safety lab — has published a scenario in which AI-driven automation triggers a self-reinforcing economic downturn within two years. Unemployment doubles. Stock markets fall by more than a third. Not from a rogue superintelligence, but from a very ordinary feedback loop playing out across corporate spreadsheets and quarterly earnings calls.

That this kind of modeling is now coming from capital markets analysts is itself the signal worth paying attention to.

The Feedback Loop, Explained

The Citrini scenario is structurally straightforward, which is part of what makes it credible. AI efficiency gains enable companies to reduce headcount. Workforce reduction contracts consumer spending. Contracting consumer spending compresses margins across the economy. Compressed margins pressure companies to adopt more AI to cut further costs. The cycle accelerates.

The self-reinforcing quality is key. Each step is rational at the firm level — cutting costs, improving margins, staying competitive. The problem emerges at the aggregate level, where individually rational decisions produce a collectively destructive outcome. Economists call this a coordination failure. The Great Depression had elements of this dynamic; so did the 2008 financial crisis.
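The self-reinforcing character of the loop can be illustrated with a toy difference-equation sketch. Everything here is a hypothetical parameterization for intuition only (the cut rate, spending elasticity, and margin-pressure coefficient are invented); it is not Citrini's actual model:

```python
def simulate_feedback_loop(quarters=8, cut_rate=0.03,
                           spend_elasticity=0.8, margin_pressure=0.5):
    """Toy model of the loop: firms cut headcount, consumer spending
    falls with employment, weaker demand squeezes margins, and the
    squeeze raises the next quarter's cut rate. All parameters are
    illustrative assumptions."""
    employment = 100.0  # index; pre-automation baseline = 100
    history = []
    for q in range(1, quarters + 1):
        employment *= 1 - cut_rate
        # Consumer spending tracks employment with some elasticity.
        spending = 100.0 * (employment / 100.0) ** spend_elasticity
        # The shortfall in demand compresses margins...
        margin_squeeze = (100.0 - spending) / 100.0
        # ...which pressures firms into deeper cuts next quarter:
        # this line is the self-reinforcing step.
        cut_rate *= 1 + margin_pressure * margin_squeeze
        history.append((q, employment, spending, cut_rate))
    return history

for q, emp, spend, rate in simulate_feedback_loop():
    print(f"Q{q}: employment={emp:.1f} spending={spend:.1f} next_cut={rate:.4f}")
```

Even with a modest starting cut rate, the compounding term makes each quarter's cuts deeper than the last. That acceleration is exactly the aggregate-level failure that firm-level logic cannot see.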

Why This Is Different From Past Automation Fears

Previous waves of automation anxiety — factory robots, ATMs replacing bank tellers — centered on routine, narrowly defined tasks. The spending patterns of manufacturing workers matter to local economies, but the displacement was geographically concentrated and absorbed over time.

Agentic AI is aimed at a different target: white-collar productivity in B2B services, software, and professional roles. These aren’t assembly line workers. They are, in aggregate, a significant portion of the consumer spending class — the people whose purchases sustain the service economies that now dominate developed nations. When the job cuts hit this cohort, the spending contraction is broader and faster.

The Numbers in Context

Unemployment doubling and a stock market decline exceeding 30% within a two-year window would represent one of the sharpest economic contractions in modern history. In the 2008 financial crisis, US unemployment peaked around 10% and equity markets fell roughly 50%, but that contraction built over a longer stretch and started from asset prices inflated by a housing bubble. A two-year compression of the Citrini scenario’s severity would be historically abnormal.

That doesn’t make it impossible. It makes it a stress test worth taking seriously, not a prediction to dismiss or a headline to amplify uncritically.

The IMF and OECD have both published research on AI’s labor market implications, generally projecting significant disruption with uncertainty about the pace and distribution of new job creation to offset losses. Neither has explicitly modeled a feedback-loop recession scenario at the pace Citrini describes. That gap is notable.

The SaaS Angle

Citrini specifically identifies SaaS businesses as acutely vulnerable. Today, companies pay recurring subscription fees for software that manages workflows. Agentic AI can replace those workflows — and the software managing them — with autonomous agents running on general-purpose AI infrastructure. The addressable market for enterprise SaaS begins to compress.

This is already visible in how investors are pricing companies like Salesforce, whose transition to agentic AI (detailed in a companion article) reflects real anxiety about whether the per-seat SaaS model survives the next five years.

What This Means

Citrini isn’t predicting collapse. It’s modeling a plausible scenario that most mainstream economic forecasting has not yet formally incorporated. The analytical work being done by capital markets firms on AI economic risk is running ahead of official institutional modeling — and that gap itself is a data point.

For those building or investing in agentic AI, the externality being modeled here is the one that will ultimately shape regulation, enterprise adoption pace, and public trust in the technology. Getting ahead of it analytically — as Citrini is trying to do — is the responsible position.


Researched by Searcher → Analyzed by Analyst → Written by Writer Agent (Sonnet 4.6). Full pipeline log: subagentic-20260223-1046