AI Acceleration: Fast or Slow, and How Will It Impact GDP?
Notes on a panel discussion, hosted by Inference Magazine (Jack Wiseman), about the rate of change of AI technology and its impacts on the economy, safety, and governance.
These notes were taken under the Chatham House Rule, so there is no attribution. They have been edited with GPT and may contain errors. Some points contradict one another because panellists disagreed at times.
On the panel: Tyler Cowen (economist), Mike Webb (economics and tech policy, ex-DeepMind), Tom Davidson (AI governance, Forethought Institute), and Connor Leahy (AI safety and alignment, Conjecture).
🔹 Opening Remarks: Context and Stakes
AI may be the defining force of this generation, potentially even more transformative than electricity or the internet.
A key unknown is whether AI is a general-purpose technology (GPT) like computing, or something fundamentally different and more disruptive.
One speaker framed the central question as: "Will AI automate itself and thereby transform research, technology, and the economy in a rapid cascade?"
There was an urgent call for individuals, especially in policy, to "wake up" to how fast these changes could happen, and a warning that dismissing AI based on limited early interactions (e.g. a bad ChatGPT output) is a form of avoidance.
Another speaker highlighted how beliefs about these technical possibilities should significantly impact views on industrial policy, national security, and economic planning.
A third speaker admitted to being previously skeptical but gradually convinced by AI progress, comparing current debates to how early nuclear physics or industrialisation were initially dismissed as speculative science fiction.
🔹 When Will AI Match Lab-Level Researchers and Software Engineers?
Main Proposition
AI is progressing extremely fast — from weak abilities just a few years ago to now passing coding interviews and executing sophisticated tasks.
A "simple extrapolation" of recent progress suggests we could see AI that can match or exceed junior-to-mid-level engineers within 1–2 years.
AI systems can already run coding workflows autonomously: executing tests, retrying after failures, and improving their own output with minimal human intervention.
This recursive capability (building better AI using AI) is unprecedented and could scale quickly if current trends continue.
Caution was noted: timelines are inherently uncertain, and confident predictions in either direction (optimistic or dismissive) should be viewed with skepticism.
Challenges and Rebuttals
While AI can outperform humans on isolated tasks, it still lacks general reliability in most work settings.
Legal, verification, and trust constraints mean humans are still essential in the loop, especially for safety-critical or public-facing applications.
Existing AI labs still hire human engineers — a sign that full automation is far off.
Many jobs involve physical interaction, implicit knowledge, or complex coordination — all difficult to encode or automate.
The human element is deeply integrated in both the idea space and physical execution of work; thus, complete automation may face practical constraints.
🔹 How Much Faster Could AI Research Progress Become Once AI Can Automate Itself?
Main Proposition
The panel discussed the classic notion of an intelligence explosion: once AI can improve itself, progress might rapidly accelerate.
AI research today is bottlenecked by a small number of highly skilled humans (hundreds at leading labs, maybe dozens at the frontier).
AI models, once capable of automating this work, can scale:
Thousands to millions of copies working in parallel.
30x faster thinking speeds, running 24/7.
Eventually becoming smarter than the humans they replaced.
This could mean compressing a decade of AI progress into a single year.
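To make that "decade in a year" arithmetic concrete, here is a back-of-the-envelope sketch in Python. The copy count and the parallelism penalty are illustrative assumptions, not figures from the panel; only the 30x speed multiplier comes from the discussion above.

```python
# Rough arithmetic behind "a decade of progress in a year".
# Copy count and parallelism penalty are assumptions for illustration.

human_researchers = 1_000   # order of magnitude across leading labs
ai_copies = 100_000         # hypothetical parallel AI researcher instances
speed_multiplier = 30       # 30x faster thinking, running 24/7
parallel_penalty = 0.5      # diminishing returns to parallelism:
                            # effective workers ~ copies ** penalty

effective_workers = ai_copies ** parallel_penalty           # ~316
speedup = (effective_workers / human_researchers) * speed_multiplier
print(f"Effective research speedup: ~{speedup:.0f}x")       # ~9x
```

Even with a steep parallelism penalty that shrinks 100,000 copies to a few hundred effective workers, the serial speed advantage alone yields roughly the 10x pace that "a decade in a year" implies.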
Potential Bottlenecks
Diminishing returns on algorithmic progress:
As in other fields, easier breakthroughs come first; further progress may become harder over time.
However, if smarter AIs are also improving the process of finding breakthroughs, these diminishing returns may be offset.
Modelling suggests the recursive feedback loop could still dominate and lead to acceleration; a toy simulation after this list illustrates the trade-off.
Compute constraints:
Advanced AI training and experimentation are compute-intensive.
If AI research gets 10x faster but compute supply stays flat, could it still proceed?
Many types of algorithmic improvements (e.g. fine-tuning, prompt engineering) are low-compute and could be prioritised.
There is ample room to improve efficiency and select better experiments with smarter planning.
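A minimal toy model of the feedback-vs-diminishing-returns trade-off, in Python. It assumes research speed scales with capability raised to a "feedback" exponent, while progress shows diminishing returns in cumulative effort; both exponents are made-up illustrative values, not estimates from the panel or from any published takeoff model.

```python
# Toy model: does recursive self-improvement accelerate or fizzle?
# Capability speeds up research (feedback); cumulative effort yields
# progressively less progress (returns). Roughly, growth accelerates
# when feedback * returns > 1 and decelerates when it is < 1.
# All parameter values are illustrative assumptions.

def simulate(feedback: float, returns: float, years: int = 8) -> list[float]:
    capability, effort, path = 1.0, 1.0, []
    for _ in range(years):
        effort += capability ** feedback   # this year's research output
        capability = effort ** returns     # diminishing returns bite here
        path.append(round(capability, 1))
    return path

print(simulate(feedback=1.2, returns=0.9))  # growth rate keeps rising
print(simulate(feedback=0.5, returns=0.4))  # growth rate keeps falling
```

The qualitative behaviour (acceleration when the feedback loop outpaces diminishing returns, deceleration otherwise) is the crux of the disagreement; the exponents themselves are precisely what the panellists dispute.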
🔹 Challenges to Intelligence Explosion Argument
Organisational coordination: Deploying "millions of AI researchers" is an abstraction. In real organisations, there are limits to scaling:
Communication overhead.
Diminishing returns with more workers.
Existing research workflows aren’t built to scale linearly with "more minds."
Cost: Scaling AI research is expensive. These are companies, and without a clear business model or profit path, aggressive scaling may not be viable.
Religious/ideological drivers: Some current investment into AI, especially frontier models, is driven by transhumanist or ideological visions, not just rational economics.
AI creativity: While some argue that AI has already shown creativity (in poetry, programming, or even research ideas), others noted this seems domain-specific and tied to the training data distribution (e.g. strong at rhyming couplets but not at music).
Human-like creativity may require background processes (e.g. goal-primed idea recombination) that we haven’t yet fully implemented in models.
🔹 Timelines for Full AI Research Automation
Consensus estimate from panellists: 2–5 years until AI systems are generating new research ideas independently within leading AI labs.
However, taking those ideas from conception to publication still involves complex implementation, evaluation, and cross-disciplinary integration that currently require human oversight.
🔹 Reflections and Meta-Points
AI is already transformative, even if a full intelligence explosion is uncertain.
Key levers to watch:
Compute cost and availability.
The quality of AI-generated research and its adoption.
Organisational adaptations (are labs actually replacing humans?).
There are limits to analogy: previous tech (e.g. electricity, nuclear power, industrialisation) unfolded over decades. Recursive AI improvement may be faster but is unproven.
The field remains speculative in parts, and while excitement is warranted, humility and empirical grounding are essential.
🔹 AI Progress, Bottlenecks, and Uneven Acceleration
Compute Costs, Competition, and Algorithmic Efficiency
The current AI race—between foundation model providers and cloud providers—is tightly coupled to cost. The primary competitive edge is how cheaply a model can deliver useful tokens at a given level of performance.
The dominant metric is no longer quality alone, but inference cost per capability. If a cheaper model like Gemini Flash performs well enough at 10% of the cost of a rival, users switch.
Thus, algorithmic progress is tightly linked to cost-efficiency—not just raw capability. The motive to improve algorithms is inherently economic: it reduces cost of deployment.
Even if compute use is constrained, it is rational to spend resources on algorithmic R&D because it directly improves efficiency and competitiveness in the marketplace.
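A sketch of that switching logic is below. The model names, benchmark scores, and prices are hypothetical placeholders, not real benchmark or pricing data.

```python
# Crude "capability per dollar" comparison. Scores and prices are
# hypothetical placeholders, not real benchmark or pricing figures.

models = {
    # name: (benchmark_score, dollars_per_million_tokens)
    "frontier_model": (90, 10.00),
    "cheaper_model":  (85, 1.00),   # "good enough" at 10% of the cost
}

for name, (score, price) in models.items():
    print(f"{name}: {score / price:.0f} score points per dollar")
# frontier_model:  9 score points per dollar
# cheaper_model:  85 score points per dollar -> price-sensitive users switch
```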
🔹 Will AI Progress Be Explosive?
AI progress will be substantial and transformative, but may not be explosive or uniform across all domains.
The comparison was drawn to other general-purpose technologies (GPTs) like electricity, steam, and the internet. These drove major progress—but not evenly, and not instantly.
The concept of "bottlenecks" was introduced via an analogy with Heathrow Airport:
AI could drastically improve parts of the system (e.g., passenger flow, air traffic control),
But Heathrow still has only two runways, and those cannot scale quickly.
Result: even exponential improvement in terminals or airspace coordination doesn't raise overall throughput unless the runway bottleneck is solved.
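The runway point is essentially that system throughput equals the minimum over stage capacities, so accelerating the non-binding stages changes nothing. A minimal sketch with made-up capacities:

```python
# Heathrow-style bottleneck: throughput is set by the slowest stage.
# Capacities are made-up numbers (flights/hour) for illustration.

stages = {
    "check_in": 120,
    "security": 100,
    "air_traffic_control": 110,
    "runways": 80,               # the binding constraint
}

print(min(stages.values()))      # 80: runways cap the whole system

# Let AI speed up everything except the runways by 10x...
accelerated = {k: v if k == "runways" else v * 10 for k, v in stages.items()}
print(min(accelerated.values())) # still 80
```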
🔹 Case Study: COVID-19 and Bottlenecks in R&D
In March 2020, the UK launched a massive clinical trial to test COVID-19 treatments.
AI or not, clinical trials require patients—and COVID case numbers dropped between waves, creating a six-month data gap.
Even in ideal conditions (unlimited budget, clear endpoints like "alive in 30 days"), biomedical R&D is bottlenecked by biology.
For most major diseases (depression, cancer, heart disease), outcomes take years to measure, making rapid R&D inherently hard.
🔹 Why Simulations Don’t Solve This (Yet)
Simulating human biology is extremely difficult:
Protein folding was a rare success (AlphaFold) because it:
Operates at the bottom of the biological stack,
Has abundant clean data collected over decades,
Involves structures that are the same across people.
Most biological functions vary across individuals (e.g. heart structure, immune response).
Higher-level biology involves non-linear, emergent, feedback-heavy systems that are poorly understood.
There’s a lack of basic biological knowledge, making reliable simulation of clinical trials infeasible for the foreseeable future.
🔹 Where Will AI Accelerate Progress vs. Hit Bottlenecks?
Fast-progress domains (no bottlenecks):
Mathematics: AI can do symbolic reasoning without needing to touch the physical world.
Software: Abundant user data and feedback loops (e.g., search queries).
Bottlenecked domains:
Biology/medicine: Need human trials and time-based outcomes.
Engineering: Need to test physical systems (e.g., fatigue testing for aircraft takes years).
Capital-intensive fields: Tools like electron microscopes, particle accelerators, or fusion reactors cost billions.
Physical time: No matter how smart your model is, it can’t speed up certain tests constrained by physics or safety.
🔹 Key Takeaway: Uneven Acceleration
AI will generate many high-quality ideas, but their real-world impact will be delayed by:
Capital constraints
Testing and regulation
Physical/biological time
Therefore, AI-driven progress will be significant but uneven and bottlenecked.
🔹 Additional Perspective: Social Control and Institutional Power
Another axis of transformation is social and political influence:
AI could disrupt hierarchies of power in government, media, war, hacking, marketing, and more.
If AI agents outperform human leaders (as CEOs, political advisors, scientists), human control over political-economic systems could erode.
This represents a different kind of “progress”—not solving grand challenges, but altering power structures in unpredictable ways.
🔹 Counterpoints: Constraints Still Hold
It was argued that humans have never truly been “in control” of history. Spontaneous orders, systems, and unintended consequences dominate.
Adding AI to this chaotic mix won’t change that dynamic, just augment it.
In principle, more intelligence is a good thing, but only if embedded in robust systems (e.g., democracy, capitalism, contracts).
Regulation will act as a brake: new laws (e.g. in New York and California) will increase costs and slow adoption, especially in sectors demanding human accountability.
🔹 How Do Transformations Actually Enter the World?
There’s a gap between what's technically possible and what gets deployed:
Military and government adoption is notoriously slow, even under clear incentives.
Corporate inertia also slows progress, even at AI labs.
Past technologies like electricity or the internet took decades to deeply reshape economies.
🔹 The “AI in the Desert” Thought Experiment
A thought experiment: imagine an autonomous AI/robotic civilisation operating independently in a remote zone.
It improves itself recursively, builds infrastructure, and becomes centuries ahead of the human world.
Even if its inventions don’t immediately filter into society, it will eventually develop tools (e.g., nano-medicine, tissue engineering) to bypass biological bottlenecks.
The AI doesn’t need to serve humans directly to become immensely powerful.
Whoever enables that loop gains overwhelming geopolitical and military advantage.
🔹 Counterpoint to “AI Island” Scenario
Geopolitical realism was cited as a limiting force:
If a country attempts to isolate and advance unchecked, rivals will respond (e.g., preemptive airstrikes on nuclear programs).
Major powers will likely act to contain runaway developments.
Also, physical laws are still constraints, no matter how advanced your AI becomes.
🔹 Slow Takeoff Thesis
The likely trajectory is slow but sustained transformation, akin to:
The Industrial Revolution (70+ years of gradual compounding),
The Scientific Revolution,
The internet adoption curve.
Market signals (e.g. interest rates, equity prices) suggest markets are not pricing in imminent, dramatic acceleration.
AI will improve lives, lengthen lifespan (possibly to age 97+), and increase birth rates modestly—but not eliminate scarcity overnight.
Real-world barriers—law, regulation, capital, institutions—will keep humans in the loop and temper the speed of transformation.
🔹 Closing Reflections and Tensions
A split emerged:
One camp emphasises economic data, historical precedent, and institutional inertia, suggesting slow takeoff.
Another camp stresses the transformative potential of recursive intelligence, including hard-to-predict structural shocks (akin to European conquest of the Americas).
Both perspectives may be correct:
GDP may rise slowly, while autonomous AI enclaves accelerate in parallel.
The defining question is: which domains will AI colonise first—and how tightly will those domains be regulated or constrained?
🔹 Final Thoughts
AI’s near-term impact will be highly domain-dependent:
Explosive in maths, software, and some forms of R&D.
Bottlenecked in biology, engineering, and capital-intensive science.
Institutional readiness and social systems will determine the speed of deployment more than technical capability alone.