10/29/2024 | Press release
We continue to invest in state-of-the-art infrastructure to support our AI efforts, from the U.S. to Thailand to Uruguay. We're also making bold clean energy investments, including the world's first corporate agreement to purchase nuclear energy from multiple small modular reactors, which will enable up to 500 megawatts of new 24/7 carbon-free power.
We're also doing important work inside our data centers to drive efficiencies, while making significant hardware and model improvements.
For example, we shared that since we first began testing AI Overviews, we've lowered machine costs per query significantly. In 18 months, we reduced costs by more than 90% for these queries through hardware, engineering, and technical breakthroughs, while doubling the size of our custom Gemini model.
And of course, we use - and offer our customers - a range of AI accelerator options, including multiple classes of NVIDIA GPUs and our own custom-built TPUs. We're now on the sixth generation of TPUs - known as Trillium - and continue to drive efficiencies and better performance with them.
Turning to research, our team at Google DeepMind continues to drive our leadership.
Let me take a moment to congratulate Demis Hassabis and John Jumper on winning the Nobel Prize in Chemistry for their work on AlphaFold. This is an extraordinary achievement that underscores the incredible talent we have and how critical our world-leading research is to the modern AI revolution and to our future progress. Congratulations as well to Geoff Hinton, who spent over a decade here, on winning the Nobel Prize in Physics.
Our research teams also drive our industry-leading Gemini model capabilities, including long context understanding, multimodality, and agentic capabilities. By any measure - token volume, API calls, consumer usage, business adoption - usage of the Gemini models is in a period of dramatic growth. And our teams are actively working on performance improvements and new capabilities for our range of models. Stay tuned!
And they're building out experiences where AI can see and reason about the world around you. Project Astra is a glimpse of that future. We're working to ship experiences like this as early as 2025.
We then work to bring those advances to consumers and businesses: Today, all seven of our products and platforms with more than 2 billion monthly users use Gemini models. That includes the latest product to surpass the 2 billion user milestone, Google Maps. Beyond Google's own platforms, following strong demand, we're making Gemini even more broadly available to developers. Today we shared that Gemini is now available on GitHub Copilot, with more to come.
To support our investments across these three pillars, we are organizing the company to operate with speed and agility.
We recently moved the Gemini app team to Google DeepMind to speed up deployment of new models and streamline post-training work. This follows other structural changes that have unified teams in research, machine learning infrastructure, and our developer teams, as well as our security efforts and our Platforms and Devices team. This is all helping us move faster. For instance, it was a small, dedicated team that built NotebookLM, an incredibly popular product that has so much promise.
We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.
I am energized by our progress and the opportunities ahead. And we continue to be laser-focused on building great products.