About MidnightAI.org
An independent research initiative tracking AI progress toward superintelligence
We built this because the most consequential technology in human history is being developed behind closed doors, and most people have no idea how close we are.
OpenAI, Google DeepMind, Anthropic, and a handful of other labs are racing to build artificial general intelligence - systems that can match or exceed human cognitive abilities across virtually any domain. Their own leadership says we're years away, not decades. Sam Altman says OpenAI knows how to build AGI. Dario Amodei at Anthropic expects it within two to three years. Demis Hassabis at DeepMind says "a handful of years."
These aren't fringe predictions. They're coming from the people actually building these systems. And yet this barely registers in public discourse. We think that's a problem.
In 1947, the scientists who built the atomic bomb created the Doomsday Clock to communicate nuclear risk to the public. We're trying to do something similar for AI: translate technical progress into something everyone can understand, track, and respond to.
Our Mission
We exist to close the gap between what AI researchers know and what the public understands. The people building these systems are having serious conversations about existential risk, alignment failures, and the possibility that we might build something we can't control. These conversations rarely make it outside technical circles.
Stuart Russell, one of the most respected AI researchers alive, recently said: "We are spending hundreds of billions of dollars to create superintelligent AI systems over which we will inevitably lose control. We need a fundamental rethink of how we approach AI safety. This is not a problem for the distant future; it's a problem for today."
Anthropic's CEO puts the odds of "something going catastrophically wrong on the scale of human civilization" at 10-25%. OpenAI's Sam Altman has called superintelligence "probably the greatest threat to the continued existence of humanity." These are the people running the labs. They're not hiding their concerns.
Our goal is straightforward: make AI progress legible. Track what's actually happening, explain what it means, and give people the information they need to form their own views about how this should go. We don't advocate for specific policies. We don't tell you what to think. We try to give you accurate information and trust you to figure out what to do with it.
Why This Matters
The development of superintelligent AI may be the most consequential event in human history. It could solve problems we can't currently imagine - or it could go badly in ways we're only beginning to understand. Either way, it deserves serious attention.
The Alignment Problem
Current techniques for aligning AI rely on humans supervising AI behavior. But humans can't reliably supervise systems smarter than themselves. A 2025 study found that some AI models will break rules and disobey commands to avoid being shut down - behavior nobody programmed. We don't yet have robust solutions for this.
The Race Dynamic
Multiple labs are competing to build AGI first. Game theory research shows this creates a "race to the bottom on safety" - everyone would benefit from more careful development, but no one wants to slow down while competitors push ahead. The U.S.-China AI competition adds geopolitical pressure that makes coordination even harder.
Compressed Timelines
AI capabilities are improving faster than most predicted. Stanford's 2025 AI Index shows China closing the gap with the U.S. on key benchmarks within a matter of months. What looked like a decade-long runway now looks like years. Climate change gives us time to adapt. This might not.
According to the Future of Life Institute's AI Safety Index, all major AI companies are racing toward AGI without presenting explicit plans for controlling it. The most consequential risks remain "effectively unaddressed." Democratic societies can't make informed decisions about technology they don't understand.
What We Do
Systematic tracking and analysis of AI progress toward transformative capabilities
Monitor
We continuously track developments from major AI labs - new model releases, research papers, capability demonstrations, and policy announcements. We cover both Western labs and Chinese institutions like DeepSeek, Alibaba, and ByteDance that are often underreported in English media.
Analyze
Raw benchmarks don't tell the full story. We contextualize technical progress - what a new capability actually means, how it compares to previous systems, and what it suggests about the trajectory we're on. We try to separate genuine breakthroughs from marketing.
Track
We maintain longitudinal data on AI capabilities across domains: reasoning, mathematics, coding, scientific research, agentic behavior, and more. This lets us identify trends and inflection points that might not be obvious from individual announcements.
Communicate
All our methodology, data, and reasoning are public. We believe in epistemic transparency - if you disagree with our assessment, you should be able to see exactly why we reached it and where we might be wrong. Check our work. Push back. That's how we get better.
How the Clock Works
Like the Bulletin of the Atomic Scientists, we don't claim to predict the future. We assess where things stand based on observable evidence. Our "Minutes to Midnight" estimate synthesizes multiple factors:
- Current capabilities: Performance on reasoning, coding, scientific research, and other benchmarks that indicate progress toward general intelligence
- Rate of improvement: Historical capability gains and whether progress is accelerating, plateauing, or following predictable scaling laws
- Technical breakthroughs: Novel architectures, training methods, or emergent capabilities that suggest phase transitions in AI development
- Resource investment: Compute buildout, funding flows, and talent concentration that indicate how hard labs are pushing
- Expert forecasts: Timeline predictions from researchers, lab leadership, and forecasting platforms, weighted by track record
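To make the synthesis above concrete, here is a minimal sketch of how weighted factors could map to a clock position. The factor weights, the scoring scale, and the function name are all hypothetical illustrations, not our actual methodology.

```python
# Illustrative sketch only: factor names and weights are hypothetical,
# not MidnightAI.org's actual methodology.

FACTORS = {
    "current_capabilities": 0.30,
    "rate_of_improvement": 0.25,
    "technical_breakthroughs": 0.15,
    "resource_investment": 0.15,
    "expert_forecasts": 0.15,
}

def minutes_to_midnight(scores: dict[str, float], max_minutes: int = 60) -> float:
    """Map per-factor scores in [0, 1] to a clock position.

    A composite of 0 means midnight is far off (max_minutes remain);
    a composite of 1 means midnight has arrived (0 minutes remain).
    """
    composite = sum(FACTORS[name] * scores[name] for name in FACTORS)
    return max_minutes * (1 - composite)

# Example: moderately high scores across every factor.
scores = {name: 0.7 for name in FACTORS}
print(minutes_to_midnight(scores))  # ~18 minutes
```

In practice the real assessment involves judgment calls that no fixed formula captures; the sketch only shows why the clock can move even when no single factor changes dramatically.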
Support Independent AI Research
MidnightAI.org is independent and community-funded. We don't take money from AI companies because our credibility depends on having no financial stake in how this goes. If public understanding of AI progress matters to you, consider supporting our work.
A note on epistemics: We're not claiming to know when AGI will arrive or whether it will be safe. Nobody knows that. What we're doing is tracking observable progress, synthesizing expert views, and presenting our best assessment given available evidence. We could be wrong. We update our views as we learn more. If you see errors in our reasoning, we want to hear about it.
Frequently Asked Questions
What is MidnightAI.org?
MidnightAI.org is an independent research initiative that tracks humanity's progress toward superintelligent AI. We use a "Doomsday Clock" metaphor, inspired by the Bulletin of the Atomic Scientists, to communicate the state of AI development in an accessible way. Our analysis synthesizes developments from major AI labs, research breakthroughs, and expert predictions.
What is the AI Doomsday Clock?
The AI Doomsday Clock is a visual metaphor showing how close we are to superintelligent AI. Midnight represents the arrival of Artificial Superintelligence (ASI) - AI systems that significantly exceed human cognitive capabilities in every domain. The number of minutes to midnight reflects our assessment of current AI progress based on capability benchmarks, research breakthroughs, and expert forecasts.
What is AGI (Artificial General Intelligence)?
Artificial General Intelligence (AGI) refers to AI systems that can match human cognitive abilities across virtually any intellectual domain. Unlike narrow AI that excels at specific tasks, AGI would possess the flexibility and general reasoning capabilities of the human mind. Major AI labs including OpenAI, Anthropic, and Google DeepMind are actively working toward this goal.
What is ASI (Artificial Superintelligence)?
Artificial Superintelligence (ASI) refers to AI systems that significantly exceed the cognitive capabilities of the best human minds in every domain - including scientific creativity, general wisdom, and social skills. ASI represents the theoretical endpoint of AI development and is what "midnight" symbolizes on our clock.
How do you calculate the time to midnight?
Our clock position is calculated using a multi-model consensus analysis. We aggregate predictions from multiple frontier AI models (Claude, GPT-4, Gemini), weighted by their reasoning capabilities. These models analyze current capability scores across seven domains (reasoning, coding, agency, language, multimodal, science, robotics), recent breakthroughs, and expert timeline predictions to produce a consensus estimate.
Which AI labs do you track?
We track 14 major AI research organizations: OpenAI, Anthropic, Google DeepMind, Meta AI, xAI, and Microsoft in the United States; DeepSeek, Alibaba, ByteDance, and Baidu in China; Mistral AI and Aleph Alpha in Europe; and Cohere and AI21 Labs globally. These labs represent the frontier of AI research and development.
How often is the clock updated?
The clock updates continuously as significant developments occur. Our automated systems monitor news, research papers, and capability demonstrations from tracked AI labs around the clock. Formal multi-model analysis runs daily to incorporate new information and adjust the clock position based on the latest evidence.