The Superintelligence Rush Is Here. This Is What Comes Next.
AGI? As in, artificial “general” intelligence? That’s so 2024.
The AI industry just stopped pretending AGI is “someday” off in the future. Now, everyone’s racing to build artificial “superintelligence” by 2027.
Just look at the marketing language:
Microsoft just claimed it’s on the path to “medical superintelligence.” Meta launched “Meta Superintelligence Labs” with Zuck’s goal of “personal superintelligence for everyone.”
Even OpenAI CEO Sam Altman says we’re already in a “gentle singularity,” which is basically a reference to superintelligence. Anthropic’s CEO Dario Amodei predicts a “country of geniuses in a data center” by 2026–2027. Superintelligence by another name, no?
So what gives? Did we hit AGI and nobody told us? Where was the headline, fancy blog post, and podcast gauntlet to confirm it? Or is there something else going on entirely?
It’s like the Turing test all over again. First, no one could pass it. Then, it was up for debate. Then, the debate just sorta… moved on?
First, the Race (Now–2027): Clearly, whatever is happening right now comes down to one of the following…
- The science is actually showing superintelligence is likely.
- AGI has been achieved behind closed doors and they just haven’t released it yet.
- No one can agree, and therefore never will agree, on what AGI looks like (rebrand!).
- Superintelligence is just a stickier (and therefore cooler) idea.
*There’s also the theory that “ASI” is actually just a guy named Soham who works at 3–4 startups at once (AGI = “A Genius Indian”, ASI = “A Soham Indian” lol).
Mark Zuckerberg proved Meta is super serious about superintelligence by spending just shy of $15B on Scale AI (a company central to training new AI models), and by hiring 8 (and counting) top researchers from OpenAI, including Trapit Bansal (co-creator of the o1 reasoning model), Shengjia Zhao (co-creator of ChatGPT and GPT-4), and Shuchao Bi (co-creator of GPT-4o voice mode).
OpenAI, naturally, did not like this. Chief Research Officer Mark Chen sent an urgent internal memo that said, “I feel a visceral feeling right now, as if someone has broken into our home and stolen something.” SamA called the move “not the craziest thing that would happen in OpenAI history” and said “missionaries will beat mercenaries.” And of course, OpenAI is reassessing its researchers’ comp…
Put another way: we’re in the lawless, cutthroat season of the AI race, where companies believe they’re approaching a winner-take-all moment and are willing to go all in to get there.
So what happens if any of these cutthroats actually achieve superintelligence?
Vinod Khosla, an extremely prescient and legendary tech investor, offered this vision in a new interview with Jack Altman (keep in mind, Vinod is an investor in OpenAI, but still).
- 2025–2030: The AI Intern Era. Superintelligent systems become your coworkers. Every professional gets AI assistants smarter than Stanford grads. AI handles 80% of the work in 80% of all jobs.
- The 2030s: Corporate Extinction Event. Fortune 500 companies die faster than ever. Someone builds a billion-dollar company with 10 employees. The superintelligent “interns” surpass their human bosses.
- 2040+: Work Becomes Optional. You work because you want to, not because you need rent money. Humanoid robots handle physical labor. All expertise—medical, legal, educational—becomes free.
In sum, Khosla’s 80% sure the real show ends with humanity never needing to work again.
His biggest fear? Not rogue AI. It’s China using “good AI” (free healthcare, education) to export its politics globally. But TBH, when you hear VCs talk about AI’s benefits, they sound preeeetty communist themselves with all their talk of cheap goods and no jobs. Sam Lessin had a good meme where he said “AI = communist, crypto = capitalist.”
As for AGI… It’s clear that we’re in a “Jagged AGI” stage, where AI systems are superhuman in some areas but fail at seemingly simple tasks.
OpenAI’s o3 exemplifies this perfectly: it achieved 87.5% on ARC-AGI (exceeding the 85% “AGI threshold”), and can create complete business plans from single prompts, yet consistently fails simple riddles that humans solve easily.
Similarly, Google’s AlphaFold predicts protein structures “to within the width of an atom” but cannot (yet) apply this knowledge to other molecular problems.
This jagged profile explains why companies can claim AGI-level achievements while acknowledging significant limitations. And because we may already have AGI-level capabilities in many domains, superintelligence (where AI is smarter than humans) really is the next meaningful milestone.