DCF Show: VAST Data's Andy Pernsteiner On the Underpinnings of Data-Intensive AI/ML Compute Strategies

Oct. 24, 2023

For this episode of the Data Center Frontier Show Podcast, we sat down for a chat with Andy Pernsteiner, Field CTO of VAST Data.

The VAST Data Platform embodies a revolutionary approach to data-intensive AI computing which the company says serves as "the comprehensive software infrastructure required to capture, catalog, refine, enrich, and preserve data" through real-time deep data analysis and deep learning.

  • In September, VAST Data announced a strategic partnership with CoreWeave, whereby CoreWeave will employ the VAST Data Platform to build a global, NVIDIA-powered accelerated computing cloud for deploying, managing and securing hundreds of petabytes of data for generative AI, high performance computing (HPC) and visual effects (VFX) workloads.
  • That announcement followed news in August that Core42 (formerly G42 Cloud), a leading cloud provider in the UAE, and VAST Data had joined forces in an ambitious strategic partnership to build a central data foundation for a global network of AI supercomputers that will store and learn from hundreds of petabytes of data.
  • This week, VAST Data announced another strategic partnership, this time with Lambda, an Infrastructure-as-a-Service and compute provider for public and private NVIDIA GPU infrastructure, to enable a hybrid cloud dedicated to AI and deep learning workloads. The partners will build an NVIDIA GPU-powered accelerated computing platform for generative AI across both public and private clouds. Lambda selected the VAST Data Platform to power its On-Demand GPU Cloud, which provides customer GPU deployments for LLM training and inference workloads.

With the Lambda, CoreWeave and Core42 announcements, three burgeoning AI cloud providers have, within the space of three months, chosen to standardize on VAST Data as the scalable data platform behind their respective clouds. The company contends that such key partnerships position VAST Data to pioneer a new category of data infrastructure for building the next-generation public cloud.

As Field CTO at VAST Data, Andy Pernsteiner helps the company's customers build, deploy, and scale some of the world's largest and most demanding computing environments. Andy has spent the past 15 years supporting and building large-scale, high-performance data platform solutions.

As his biographical statement recounts, from his humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical ninjas at MapR, Andy has consistently been on the front lines of solving some of the toughest challenges customers face when implementing big data analytics and new-generation AI technologies.

Here's a timeline of key points discussed on the podcast:

0:00 - 4:12 - Introducing the VAST Data Platform; recapping VAST Data's latest news announcements; and introducing VAST Data's Field CTO, Andy Pernsteiner.

4:45 - History of the VAST Data Platform. Observations on the growing "stratification" of AI computing practices.

5:34 - Notes on implementing the evolving VAST Data managed platform, both now and in the future.

6:32 - Andy Pernsteiner: "It won't be for everybody...but we're trying to build something that the vast majority of customers and enterprises can use for AI/ML and deep learning."

7:13 - Reading the room, when very few inside that room have heard of "a GPU..." or know what its purpose and role is inside AI/ML infrastructure.

7:56 - Andy Pernsteiner: "The fact that CoreWeave exists at all is proof that the market doesn't yet have a way of solving for this big gap between where we are right now, and where we need to get to in terms of generative AI and in terms of deep learning."

8:17 - How VAST started as a data storage platform, and was extended to include an ambitious database geared for large-scale AI training and inference.

9:02 - How another aspect of VAST is consolidation, "considering what you'd have to do to stitch together a generative AI practice in the cloud."

9:57 - On how the biggest customer bottleneck now is partly the necessary infrastructure, but also partly the necessary expertise.

10:25 - "We think that AI shouldn't just be for hyperscalers to deploy" - and how CoreWeave fits that model.

11:15 - Additional classifications of VAST Data customers are reviewed.

12:02 - Andy Pernsteiner: "One of the unique things that CoreWeave does is they make it easy to get started with GPUs, but also have the breadth and scale to achieve a production state - versus deploying at scale in the public cloud."

13:15 - VAST Data sees itself as bridging the gap between on-prem and cloud deployments.

13:35 - Can we talk about NVIDIA for a minute?

14:13 - Notes on NVIDIA's GPUDirect Storage, which VAST Data is one of only a few vendors to enable.

15:10 - More on VAST Data's "strong, fruitful" years-long partnership with NVIDIA.

15:38 - DCF asks about the implications of recent reports that NVIDIA has asked about leasing data center space for its DGX Cloud service.

16:39 - Bottom line: NVIDIA wants to give customers an easy way to use their GPUs.

18:13 - Is VAST Data being positioned as a universally adopted AI computing platform?

19:22 - Andy Pernsteiner: "The goal was always to evolve into a company and into a product line that would allow the customer to do more than just store the data."

20:24 - Andy Pernsteiner: "I think that in the space that we're putting much of our energy into, there isn't really a competitor."

21:12 - How VAST Data is unique in its support of both structured and unstructured data.

22:08 - Andy Pernsteiner: "In many ways, what sets companies like CoreWeave apart from some of the public cloud providers is they focused on saying, we need something extremely high performance for AI and deep learning. The public cloud was never optimized for that - they were optimized for general purpose. We're optimized for AI and deep learning, because we started from a place where performance, cost and efficiency were the most important things."

23:03 - Andy Pernsteiner: "We're unique in this aspect: we've developed a platform from scratch that's optimized for massive scale, performance and efficiency, and it marries very well with the deep learning concept."

24:20 - DCF revisits the question of bridging the perceptible gap in industry knowledge surrounding AI infrastructure readiness.

25:01 - Comments on the necessity of VAST partnering with organizations to build out infrastructure.

26:12 - Andy Pernsteiner: "It's very fortunate that Nvidia acquired Mellanox in many ways, because it gives them the ability to be authoritative on the networking space as well. Because something that's often overlooked when building out AI and deep learning architectures is that you have GPUs and you have storage, but in order to feed it, you need a network that's very high speed and very robust, and that hasn't been the design for most data centers in the past."

27:43 - Andy Pernsteiner: "One of the unique things that we do, is we can bridge the gap between the high performance networks and the enterprise networks."

28:07 - Andy Pernsteiner: "No longer do people have to have separate silos for high performance and AI and for enterprise workloads. They can have it in one place, even if they keep the segmentation for their applications, for security and other purposes. We're the only vendor that I'm aware of that can bridge the gaps between those two worlds, and do so in a way that lets customers get the full value out of all their data."

28:58 - DCF asks: Armed with VAST Data, is a company like CoreWeave ready to go toe-to-toe with the big hyperscale clouds - or is that not what it's about?

30:38 - Andy Pernsteiner: "We have an engineering organization that's extremely large now that is dedicated to building lots of new applications and services. And our focus on enabling these GPU cloud providers is one of the top priorities for the company right now."

32:26 - DCF asks: Does a platform like VAST Data's address the power availability dilemma that's going to be involved with data centers' widespread uptake of AI computing?

Did you like this episode? Be sure to subscribe to the Data Center Frontier show at Podbean to receive future episodes on your app.

Keep pace with the fast-moving world of data centers and cloud computing by connecting with Data Center Frontier on LinkedIn, following us on X/Twitter and Facebook, and signing up for our weekly newsletters.

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
