Resources

Start here

We're Not Ready for Superintelligence, a video from AI in Context by 80,000 Hours, is a walkthrough of the AI 2027 scenario — one expert view of where things might be headed.

Courses & programs

Two tracks: a general introduction, and the technical research pipeline.

Essential reading

The eleven pieces 80,000 Hours considers the best on-ramp to understanding AI risk and where the field is going.

  1. Preparing for the Intelligence Explosion
     Report by William MacAskill & Fin Moorhouse

     How rapid AI advancement could compress centuries of progress into decades — and why that demands preparation now.

  2. AI 2027
     Report by Kokotajlo, Alexander, Larsen, Lifland & Dean

     A concrete, near-term AGI scenario built around AI-automated research, with explicit forecasts.

  3. Situational Awareness: The Decade Ahead
     Essay by Leopold Aschenbrenner

     The case that AGI may arrive sooner than widely anticipated, with transformative global consequences.

  4. The Case for Multi-Decade AI Timelines
     Article by Ege Erdil (Epoch AI)

     A counterweight to short-timeline takes: why intelligence-explosion scenarios may be overconfident.

  5. The Most Important Century
     Essay by Holden Karnofsky

     Argues that transformative AI could make the coming decades the most pivotal in human history.

  6. Does AI Progress Have a Speed Limit?
     Article by Ajeya Cotra & Arvind Narayanan

     Two researchers with contrasting views debate how fast AI is really moving — and what that means.

  7. Can AI Scaling Continue Through 2030?
     Report by Sevilla et al. (Epoch AI)

     Projects continued scaling through 2030, constrained primarily by power and chip manufacturing.

  8. Existential Risk from Power-Seeking AI
     Paper by Joe Carlsmith

     The canonical careful argument that sufficiently capable, goal-directed AI could pose an existential threat.

  9. Gradual Disempowerment
     Paper by Kulveit, Douglas, Ammann, Turan, Krueger & Duvenaud

     How advanced AI could erode human agency slowly through institutional drift, not sudden takeover.

  10. Taking AI Welfare Seriously
      Paper by Robert Long, Jeff Sebo, et al.

      The case that future AI systems may be moral patients — and what we owe them if so.

  11. Machines of Loving Grace
      Essay by Dario Amodei

      Anthropic's CEO sketches a positive vision: what powerful AI could do for the world if it goes well.

Curated by 80,000 Hours — read the original list →

Looking to work on this?

Browse open roles on the 80,000 Hours AI safety & policy job board.

See jobs →