Pure Storage Inc.

21/08/2024 | Press release | Distributed by Public on 21/08/2024 23:07

Rob Lee Discusses the Impact of Storage on Enterprise AI

Recently, Rob Lee, CTO of Pure Storage, sat down with Lynn Lucas, Pure Storage's CMO, for a wide-ranging conversation about the impact of enterprise AI on businesses and their data centers. Hear his insights on the subject and what organizations can do moving forward.

Lynn Lucas: You cannot have a discussion today, whether in business or in our personal lives, without hearing about AI. It's everywhere, and organizations of every size are grappling with how to seize this tremendous yet very complex opportunity. What are you hearing?

Rob Lee: As I talk to our customers, I hear a lot of the same things everywhere. Top down, they are trying to figure out where to go with AI. They're getting a lot of pressure to do more with AI. They're getting 50 projects thrown at them from every direction. And they're really trying to figure out where to get started and where they can add the most value to their organizations with this great new technology.

LL: Where are organizations starting with AI? What are they trying to accomplish and why is this so difficult with this particular technology transition?

RL: Many organizations that are working to deploy AI are aiming to build better customer experiences, to operate more efficiently, or just to build great, new things.

The challenges arise when it comes to the details. Expertise is scarce, and AI technology is evolving very, very quickly. Many organizations and teams are still scrambling to figure out where best to start when they have 50 projects coming at them.

This means asking:

  • How do you prioritize?
  • Where do you add the most value immediately to your company?
  • How do you attract and build that expertise and talent?
  • What tools do you deploy?
  • How do you build these systems and environments?

Once you've got that figured out, how do you get started?

Do you:

  • Build it all from scratch?
  • Look for buy options?
  • Partner with leading providers in the space?

I think people are in a scramble to deploy, almost ahead of figuring out what these answers look like.

LL: For those who are leading the data infrastructure for their companies, what opportunities and pressures do you think they're facing right now? Do you have any advice for them?

RL: The first step is just getting their data house in order. AI is really all about the data, whether it's the rich data sets that are fed into building these great models, or taking an existing model and applying it to your firm's proprietary data to make better decisions (a technique called retrieval-augmented generation, or RAG).
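To make the RAG idea above concrete, here is a minimal sketch of the pattern: retrieve the most relevant pieces of proprietary data for a question, then assemble them into the prompt a language model would answer from. This is a toy illustration, not any vendor's implementation; the keyword-overlap retriever stands in for a real vector database, and all names (`retrieve`, `build_prompt`, the sample documents) are hypothetical.

```python
# Toy retrieval-augmented generation (RAG) sketch.
# A real pipeline would use embeddings and a vector store; here,
# simple word overlap stands in for semantic retrieval.

def tokenize(text):
    """Lowercase and split text into a set of words."""
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    return sorted(
        documents,
        key=lambda doc: len(tokenize(query) & tokenize(doc)),
        reverse=True,
    )[:k]

def build_prompt(query, documents):
    """Assemble the prompt a RAG pipeline would send to a language model:
    retrieved proprietary context first, then the user's question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Illustrative records of the kind that often sit in separate silos.
docs = [
    "HR policy: new hires receive 15 vacation days per year.",
    "Finance report: Q2 revenue grew 12 percent year over year.",
    "IT runbook: reset passwords through the self-service portal.",
]

print(build_prompt("How many vacation days do new hires get?", docs))
```

The point of the sketch is the shape of the data flow: the model's answer quality depends entirely on what the retrieval step can find, which is why consolidated, accessible data comes first.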

LL: AI is all about the data, and AI is only as good as the data and its availability, right?

RL: Absolutely. They call it a data center, not a networking center or a computing center. AI is putting a lot of pressure on organizations to figure out whether their data strategy is even ready for them to get started.

When I talk to a lot of customers, I'll say, "You want to deploy all this AI technology out here using operational data. Well, where does that data sit?" Oftentimes, the answer is that it sits in 20 different systems. HR has a database. Finance has a database. In a lot of cases, just getting started means modernizing those data systems and breaking down those silos. They just need to make their data ready to be accessed by AI-powered systems.

LL: Can you talk about the training and inference aspect of AI and how it makes a difference in what's required in the infrastructure?

RL: When we talk about deploying AI, there's not one thing or one way to deploy. There's a wide spectrum of build versus buy versus deploy options. These range from the folks who are building gigantic large language models (LLMs) from scratch to others that are taking that great work and fine-tuning it to specialize the AI for a particular problem domain, be it customer support, HR, or marketing. Then, there are those that are just taking those models and applying them to their own specific, proprietary data to make better business decisions.

The wide range of options is something that's really developed over the last six months. Customers now have many more choices, whether they need to build it from scratch or partner with best-of-breed providers to deploy this new technology really quickly.

LL: We see this rapid change, and we know businesses want to invest in infrastructure and have it work for them over the long haul. So if we bring this to Pure Storage and our platform, from your lens, what sets our platform apart from the rest of the market in helping customers both today and two to five years into the future?

RL: A few things set the Pure Storage platform apart.

I'll start with performance. It's no secret that AI and performance are somewhat synonymous. What is maybe less appreciated is that performance for AI isn't just one thing. Training, fine-tuning, and inference all have very different performance profiles. The reality is that as the technology develops, it's really important for a customer to have access to a wide range of performance. It's like a Formula One race, right? The car with the fastest straight-line speed doesn't always win. You've got to have a good balance of cornering speed and straight-line speed and speed off the line. That agility is something that Pure Storage is uniquely good at.

The second thing is enterprise capability. You talked about preparing ourselves two, three, four, or five years into the future. Well, AI is definitely going mainstream. Mainstream means you can't have a potpourri of one-off, science-project environments. It also means that reliability, security, and all of these other enterprise requirements remain important.

Lastly, I would add flexibility. Two to five years is a long time in tech, but in AI, it's an eternity. The idea that you can perfectly predict what your infrastructure should look like in two, three, four, or five years in the AI space is impossible. Nobody's going to get that right. Flexibility is of utmost importance. You want to invest in infrastructure that gives you all the optionality and flexibility to grow and adapt with the technology.

LL: What are you most excited about when it comes to Pure's AI-related Launch announcements?

RL: Two things. The first is our deepening partnership with NVIDIA. We first started working with NVIDIA back in 2017. Now, we've announced our NVIDIA DGX SuperPOD certification to come later this year, and we have validated reference architectures with NVIDIA across the entire spectrum of training with AIRI, NVIDIA DGX BasePOD, and the NVIDIA OVX reference architecture on the inference side.

No matter where you are on that spectrum, today and in the future, you've got an easy path right out of the box with Pure Storage. It's easy to get started with reference architectures that just work for all of these use cases.

Second would be Evergreen//One for AI. I talked about flexibility before, and Evergreen//One is an ideal vehicle for customers to get their hands on that flexibility as their needs change. Whether they need more performance or more capacity, Evergreen//One gives them the ultimate vehicle to grow, shift, and flex in different directions, and I'm really excited to see it deepen our involvement in these AI projects.

Customers are already leaning on these and other advancements from Pure Storage to maximize their utilization and run faster in the AI space.

Discover How the Pure Storage Platform Helps You Tackle AI Initiatives

As your organization embarks on its AI journey, make sure that you're working with a platform that's going to give you the optionality, the flexibility, and, of course, the performance you need for various AI use cases. Learn more about AI solutions from Pure Storage.