The video comes from a YouTuber who seems very knowledgeable about the latest advancements in artificial intelligence (AI). They start by defining AGI (artificial general intelligence) as an AI system that can perform any task as well as or better than humans across different domains.
The video presents some compelling evidence that AGI may have already been achieved by companies like OpenAI, just not released publicly yet. Key points include:
- Recent AI models like OpenAI’s DALL-E 2, Nvidia’s Omniverse, and Google’s AlphaFold demonstrate capabilities that seem to approach AGI by modeling real-world physics and biochemistry and by learning autonomously across domains.
- Rapid hardware advances like specialized AI chips from Nvidia, AMD, and others are providing the computing power needed to develop AGI systems. Sam Altman seeking $7 trillion to build an “AI chip company” suggests massive hardware requirements.
- Surveys of AI experts place the expected timeline for AGI between 2030 and 2075, but accelerating progress suggests it could happen much sooner.
- An anonymous leaker, “Jimmy Apples,” claims OpenAI achieved AGI internally but is withholding its release due to safety concerns. Evidence cited includes the leaker’s accurate predictions about GPT-4, DALL-E 2, and delays in releasing powerful models.
- A 4chan leak titled “Recon” details a system called “Q*” that exhibits metacognition, cross-domain learning, and the ability to break advanced encryption schemes like AES-192, a capability that, if true, could destabilize global economies.
- Sam Altman’s firing and quick rehiring at OpenAI may relate to the Q*/AGI situation described in the leaks.
The video provides some startling quotes and technical details to back up its central premise. For example, it directly quotes the leaked Qualia document saying it “provided a plaintext from a given AES-192 ciphertext by using [analysis] in a way we do not fully understand.” The narrator ominously notes, “If this is true, it could destroy society.”
Whether or not these claims are legitimate, the video paints a picture of AGI development being further along than most experts predict, with concerning implications if a superintelligent system were released before proper safety measures are in place. It’s a provocative thought piece that makes you question just how “intelligent” our latest AI breakthroughs truly are.