
Introduction

The story of artificial intelligence in the workplace was supposed to be a tale of instant efficiency: algorithms drafting reports in seconds, chatbots fielding every routine request, and employees finally free to focus on creative, high-value work. But when researchers recently asked 2,500 professionals how that promise was playing out, 77 percent said AI has added to their workload, not lightened it, and nearly half confessed they have no idea how to unlock the productivity benefits their bosses expect [1]. Many are spending extra hours verifying AI-generated text, learning unfamiliar tools, or taking on “stretch” assignments simply because leadership assumes the technology will shoulder the load. The result feels less like a revolution and more like a pile-up of half-finished drafts and frayed nerves.

Narrow Experiments vs. Reality

Why is the lived experience so far from the marketing? One clue lies in the narrow experiments that still dominate the academic literature. In one oft-cited field study of 5,179 customer-support agents, a generative-AI assistant raised the number of cases resolved per hour by about 14 percent overall—and by roughly 34 percent for the least-experienced agents [2]. Those are real gains, but they occurred in a highly controlled environment with a single workflow, scripted prompts, and dedicated coaching. Strip away those guardrails and the same tool can sputter, hallucinate, or bog users down in quality checks.

The Organizational Stall

Scale those mixed results up to the organization level and the picture gets even murkier. A Canadian study that linked survey data on AI adoption to firms’ tax filings found no statistically significant relationship between deploying AI and short-term total-factor-productivity growth [3]. Early adopters were already more productive than their peers, but the technology itself did not accelerate their trajectory in the first two years. Similar patterns are emerging in OECD datasets and private-sector benchmarks: industries boasting the highest AI penetration are not yet posting the fastest productivity growth [4].

A Familiar Paradox

Economists have a name for this disconnect—the productivity paradox—and they’ve seen it before. Robert Solow quipped in 1987 that one could “see the computer age everywhere but in the productivity statistics,” [5] and history suggests three structural frictions usually keep new general-purpose technologies from paying off quickly. First, verification overhead: large language models are powerful but still prone to factual error, so human reviewers must double-check every citation and calculation. Second, the skills and workflow gap: dropping sophisticated tools into legacy processes without robust AI literacy training forces workers to muddle through alone. Third, expectations inflation: executives conflate adoption with instant ROI, piling extra tasks onto already stretched teams. Until organizations redesign workflows, invest in upskilling, and measure outcomes at the system level, AI will magnify existing inefficiencies faster than it eliminates them.

Implications for Public Health

The stakes are especially high for public-health agencies and schools of public health, where enthusiasm for AI-driven epidemiology and predictive health analytics is running headlong into the same frictions. Surveillance dashboards built on generative models can flag emerging outbreaks, yet they also generate false positives that epidemiologists must chase down. Draft vaccination-campaign messages written by chatbots still need cultural tailoring and scientific vetting. And without a concerted push for AI literacy—for biostatisticians, program managers, and policy-makers alike—these tools risk draining already scarce staff time.

Breaking the Logjam

So what breaks the logjam? First, leaders need to treat training as the main capital expense, not an afterthought: integrate AI-literacy modules into MPH curricula, fund mini-sabbaticals for current staff to upskill, and reward teams that document successful prompt libraries. Second, re-engineer workflows so that machines create the “ugly first draft” and humans focus on critical judgment rather than rote production. Third, evaluate AI pilots against system-level metrics (cycle time for grant reporting, error rates in surveillance data, public-engagement scores on health-promotion content) before rolling them out agency-wide.

Conclusion

If 2023–2024 was the year of buying AI licenses, let 2025 be the year of measurable productivity. CEOs, deans, and public-health directors who pair cautious optimism with deliberate capacity-building will be the ones who finally convert hype into healthier populations and more-resilient institutions. Everyone else risks discovering that the Great AI Productivity Paradox was never a myth at all—it was a mirror held up to our own readiness to change.


References

  1. Upwork Research Institute. From Burnout to Balance: AI-Enhanced Work Models for the Future. July 2024.
  2. Brynjolfsson, E.; Li, D.; Raymond, L. Generative AI at Work. NBER Working Paper 31161, April 2023 (rev. Nov 2023).
  3. Vu, V.; Li, V.; Lockhart, A.; Dobbs, G.; Tessono, C. Waiting for Takeoff: The Short-Term Impact of AI Adoption on Firm Productivity. The Dais & Future Skills Centre, December 2024.
  4. OECD. Compendium of Productivity Indicators 2025. OECD Publishing, 2025.
  5. Solow, R. M. “We’d Better Watch Out.” New York Times Book Review, July 12, 1987.

Originality Statement: This blog was human-authored, with OpenAI’s ChatGPT used for research and clarity.