I'll stake my reputation on it. Gemini benchmarks beating competition. End-to-end SWE agent (Jules). End-to-end media packages (Flow, Whisk). Personal agents (Mariner, Astra). The largest dataset on the planet. Deep personalization already aggregated. TPUs outshining Nvidia, and creating a moat. Such top talent, they're historical figures. Self-driving cars (Waymo) beating Tesla to market. And finally, AlphaEvolve - the literal singularity.
In my opinion, this was the biggest Google I/O in history. Not just exciting, but humbling. Terrifying. Something crucial just happened.
If you've been following Gemini, especially as a coder, you know they've been leading the pack for a month or two. That already had many thinking "show's over." Frontier-model competition has always been a leap-frog arms race - nobody could call any current winner a show-stopper - but Google is different, due to everything below (talent, money, TPUs, data, etc). Their models have historically been shite, which is unfathomable given they invented LLMs. The assumption was they were resting on their laurels. Fat. Making enough $ on ads to simply exist forevermore. Whether ChatGPT forced them into motion by threatening their bottom line, or Google was simply playing "slow and steady wins the race," I don't know. But here we are.
I'm listing this feature top because, of the industries about to become disrupted (toppled? deleted?), I think this will be the most self-feeding. It's the closest to the metal - it impacts those working on the problem itself.
Most vibe coders have already been using Gemini, with Claude 3.7 and OpenAI's models (o3, o4, 4.1) as runners-up - by way of IDEs like Cursor, Roo Code (my favorite), Cline, Windsurf, etc. But a 3rd-party tie-in to a model's capabilities definitively falls shy of that model's true capabilities. If the company behind a model controls the end-to-end vibe-coding ecosystem, that SWE agent will simply be better. And you can see this is true of Jules, by way of LiveCodeBench, SWE-bench, etc.
Anthropic and OpenAI just released similar products. Indeed, those who've used them have reported stronger output than from 3rd-party IDEs / plugins using the same models. This was a big deal; I'd only just begun to see the widening eyes of Reddit users of these tools. But it was too late - Google's launched only moments later.
New versions of prior models, now clobbering competing models (OpenAI, Stable Diffusion, Suno, etc). Watch this compilation of Veo 3. I don't have a non-Veo comparison on hand, but trust me - they're worse.
Further, they launched software studios for these tools: Flow, Whisk, and more.
And I don't even know what else, I'm still catching up, Jesus... Two things are important here. (1) These studios are all-in-one. E.g. Flow generates frames, audio, music, speech, everything together - unified, multimodal. Before this, 3rd-party packages were required to integrate these various elements. (2) Similar to Jules, results are better when a unified family of models coordinates on multimodal output, and that pipeline is controlled by the creator of those models.
Watch that video above. Wrap your mind around what this means for the entire content-creation industry. Funny enough, I just started leaning into video creation, and dusting off my podcast, to diversify away from programming because I saw the signs. Hah! Thought I could escape. Oh, and now Google has a podcast generator... great...
Google has the deepest dataset on the planet. That is undisputed. All major LLM rivals are partnering with Microsoft (Bing) to try and approximate a consolation prize. But Google has 25 years of this, and the lion's share of the market besides. People target SEO for Google first, always. Google Search Console, Analytics, Ads.
And speaking of ads, they have personalization. That's who they are - that's their business model. Learning what makes you tick, through Chrome, through Search, to target you with ads. Every business has bought into this through Analytics, AdSense, and AdWords. In fact, Google's entire business has been AI-first since inception; they pioneered so much of the field and its research.
So they're not only omniscient of "things" - but also of you. OpenAI added the memory feature to learn you over time. But Google already knows you.
This is as important as anything above. Google has proprietary hardware called Tensor Processing Units (TPUs): task-specific chips for AI computation. Recall that Nvidia is a gaming company. It just so happened that Graphics Processing Units (GPUs) worked well for AI computations too - parallelizable linear algebra, by way of SDKs called CUDA & cuDNN. Frameworks like TensorFlow and PyTorch were built on top of those SDKs. A moat formed on the software side, only because re-building those SDKs & frameworks for alternative hardware (AMD, Intel, etc) was more work than it was worth. Or at least so it seems - I've seen many attempted challengers, hardware and software, over the decades - but none that stick.
Except Google's. TPU hardware; TensorFlow software support for TPUs. Used (probably exclusively) internally by Google for their AI. And externally by some companies via Google Cloud Platform (GCP).
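To make the software-moat point concrete, here's a minimal sketch - my own illustration, not Google's code - of why the framework layer matters. It assumes JAX is installed and some accelerator is attached; the same model code compiles for whichever backend is present, an Nvidia GPU via CUDA/cuDNN or a TPU via XLA.

```python
# Minimal sketch, assuming JAX is installed and a GPU or TPU backend is available.
# The point: the framework abstracts the accelerator, so the same "parallelizable
# linear algebra" runs on Nvidia silicon (CUDA/cuDNN) or Google's TPUs (XLA).
import jax
import jax.numpy as jnp

print(jax.devices())  # e.g. [TpuDevice(id=0), ...] on a TPU VM; a CUDA device on an Nvidia box

@jax.jit  # XLA compiles this for whatever backend it finds: TPU, GPU, or CPU
def dense_layer(x, w, b):
    return jax.nn.relu(x @ w + b)  # one dense layer: matrix multiply + bias + ReLU

x = jnp.ones((8, 512))
w = jnp.ones((512, 256))
b = jnp.zeros((256,))
print(dense_layer(x, w, b).shape)  # (8, 256), computed on the detected accelerator
```

Note that nothing in that code names the hardware. Swap a GPU VM for a TPU VM on GCP and it runs unchanged - which is exactly the self-reliance Google has and everyone renting Nvidia capacity doesn't.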
TPUs, from all indications, are faster and cheaper than Nvidia's - even Nvidia's AI-specific chips. Contested, but that's what I gather. At the very least, their recent launch - Ironwood - is 3600x more performant than their first; and their first was already challenging SOTA Nvidia. For Google, this means they can run their AI more efficiently than the competition, which lets them offer stronger AI at the same price, or similar AI at a lower price. We're currently seeing the former, per Gemini Ultra's $250/mo tag. This higher efficiency gives Google a moat: self-reliance, plus lower cost for external businesses (to use GCP over AWS). That's a veritable monopoly. What came to mind recently: they could bring their manufacturing to the USA to avoid tariffs. They have the money, and they control the ecosystem - a major advantage over those reliant on another company (Nvidia).
Their talent (employees / partners / ex-employees) is a roster of history-book legends: Geoffrey Hinton, Ian Goodfellow, Andrew Ng, Jeff Dean, Demis Hassabis, Richard Sutton. Hell, even Ray Kurzweil as a mascot.
Many of these names are top figures in Reinforcement Learning specifically, and RL is the next frontier of AI agents now that LLMs are "good enough." That is, they have an army not only of those who pioneered this space in the first place, but of people whose current expertise is exactly AI's essential next steps.
And finally, AlphaEvolve. Think vibe-coding, but for model algorithm self-improvement. There are three key armchair terms in AI:
AlphaEvolve is explicitly AI technology for improving AI technology. This is different from vibe coding (a la Jules), which is automated programming. The latter remixes known solutions into products - it's not maximally "creative," and it's outward-focused. AlphaEvolve applies the RL lessons of AlphaGo, AlphaFold, etc to find truly novel solutions, focused "internally." This is big, my friends. In fact, this is terrifying.
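To ground what "AI improving AI" means mechanically, here's a toy caricature of the propose-evaluate-select loop such a system implies. This is my own illustration, not Google's pipeline; `llm_propose_variant` and `benchmark` are hypothetical stand-ins for an LLM that rewrites candidate code and an automated evaluator that scores it.

```python
# Toy caricature of an evolve loop: propose a variant, score it, keep improvements.
# Hypothetical sketch only - not AlphaEvolve's actual architecture.
import random

def benchmark(candidate: str) -> float:
    """Hypothetical stand-in: run the candidate algorithm, return a score (higher is better)."""
    return random.random()  # placeholder for a real, objective evaluation (speed, accuracy, etc.)

def llm_propose_variant(candidate: str) -> str:
    """Hypothetical stand-in: ask an LLM to mutate / rewrite the candidate algorithm."""
    return candidate + " (variant)"

def evolve(seed: str, generations: int = 100) -> str:
    best, best_score = seed, benchmark(seed)
    for _ in range(generations):
        child = llm_propose_variant(best)   # propose a novel rewrite
        score = benchmark(child)            # measure it against an objective metric
        if score > best_score:              # keep only strict improvements
            best, best_score = child, score
    return best

# The unsettling part: if the seed is a piece of the training or inference stack itself,
# the loop's output feeds back into the very system running the loop.
print(evolve("baseline matrix-multiply kernel"))
```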
So Google comes late and wins. Fine. But this time it's fundamental. AI isn't the next smartphone - it's the next species. China and the USA are in an arms race here, often compared to the Cold War and nuclear technology, because whoever wins the AI race - well look, screw my reputation - controls the world.
