Resources
Start here
We're Not Ready for Superintelligence, from 80,000 Hours' AI in Context channel, is a video walkthrough of the AI 2027 scenario, one expert view of where things might be headed.
Courses & programs
Two tracks: a general strategy introduction, and the technical research pipeline.

BlueDot — The Future of AI
Finish in an evening with a clearer picture of frontier AI, the risks, and what's being done about them. The standard on-ramp into the field.
More on the strategy track
BlueDot — AGI Strategy
25 hours · Cohort · Pay-what-you-want
The serious follow-up to Future of AI: development trajectories, possible outcomes, and what strategic interventions might steer things well.
Astra Fellowship — Strategy & Governance
Constellation · 5 months · Berkeley · Paid
In-person stream for catastrophic-risk policy, AI governance, and field-building projects with senior advisors.

ARENA — Alignment Research Engineer Accelerator
Hands-on technical curriculum: transformers, RL, mech interp, and evals. The standard pipeline for engineers entering alignment research.
Research fellowships
SPAR
Supervised Program for Alignment Research
Remote, part-time, mentor-led research projects. Lower barrier to entry than MATS — great as a first research experience.
MATS
ML Alignment & Theory Scholars
Selective ~10-week paid research program. You're matched with a senior alignment researcher and ship a real project.
Essential reading
The eleven pieces 80,000 Hours considers the best on-ramp to understanding AI risk and where the field is going.
1. Report · William MacAskill & Fin Moorhouse
Preparing for the Intelligence Explosion
How rapid AI advancement could compress a century of progress into a decade, and why that demands preparation now.
2. Report · Kokotajlo, Alexander, Larsen, Lifland, Dean
AI 2027
A concrete, near-term AGI scenario built around AI-automated research, with explicit forecasts.
3. Essay · Leopold Aschenbrenner
Situational Awareness: The Decade Ahead
The case that AGI may arrive sooner than widely anticipated, with transformative global consequences.
4. Article · Ege Erdil (Epoch AI)
The Case for Multi-Decade AI Timelines
A counterweight to short-timeline takes: why intelligence-explosion scenarios may be overconfident.
5. Essay · Holden Karnofsky
The Most Important Century
Argues that transformative AI could make the coming decades the most pivotal in human history.
6. Article · Ajeya Cotra & Arvind Narayanan
Does AI Progress Have a Speed Limit?
Two researchers with contrasting views debate how fast AI is really moving — and what that means.
7. Report · Sevilla et al. (Epoch AI)
Can AI Scaling Continue Through 2030?
Projects continued scaling through 2030, constrained primarily by power and chip manufacturing.
8. Paper · Joe Carlsmith
Is Power-Seeking AI an Existential Risk?
The canonical careful argument that sufficiently capable, goal-directed AI could pose an existential threat.
9. Paper · Kulveit, Douglas, Ammann, Turan, Krueger, Duvenaud
Gradual Disempowerment
How advanced AI could erode human agency slowly through institutional drift, not sudden takeover.
10. Paper · Robert Long, Jeff Sebo, et al.
Taking AI Welfare Seriously
The case that future AI systems may be moral patients — and what we owe them if so.
11. Essay · Dario Amodei
Machines of Loving Grace
Anthropic's CEO sketches a positive vision: what powerful AI could do for the world if it goes well.
Curated by 80,000 Hours. Read the original list →
Looking to work on this?
Browse open roles on the 80,000 Hours AI safety & policy job board.