Operationalizing AI at Scale: Google Cloud on Data Infrastructure, Search, and Enterprise AI

Google Cloud’s Sailesh Krishnamurthy explains why the future of AI success hinges less on model breakthroughs and more on how enterprises connect structured and unstructured data, govern access, and deliver reliable, low-latency systems at scale.
Feb. 3, 2026
8 min read

Key Highlights

  • Connecting unstructured data with operational systems remains a key challenge for enterprise AI applications.
  • Vector search is evolving to be co-located with relational data, enabling more efficient and relevant queries.
  • Shifting from static to continuously optimized retrieval systems requires a new mindset akin to search engineering.
  • Governance and security are critical as AI systems move into operational environments, ensuring data accuracy and access control.
  • Building reliable, low-latency, and unified data infrastructure is essential for supporting large-scale AI initiatives.

The AI conversation has been dominated by model announcements, benchmark races, and the rapid evolution of large language models. But in enterprise environments, the harder problem isn’t building smarter models. It’s making them work reliably with real-world data.

On the latest episode of the Data Center Frontier Show Podcast, Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, pulled back the curtain on the infrastructure layer where many ambitious AI initiatives succeed, or quietly fail.

Krishnamurthy operates at the intersection of databases, search, and AI systems. His perspective underscores a growing reality across enterprise IT: AI success increasingly depends on how organizations manage, integrate, and govern data across operational systems, not just how powerful their models are.

The Disconnect Between LLMs and Reality

Enterprises today face a fundamental challenge: connecting LLMs to real-time operational data.

Search systems handle documents and unstructured information well. Operational databases manage transactions, customer data, and financial records with precision. But combining the two remains difficult.

Krishnamurthy described the problem as universal.

“Inside enterprises, knowledge workers are often searching documents while separately querying operational systems,” he said. “But combining unstructured information with operational database data is still hard to do.”

Externally, customers encounter the opposite issue. Portals expose personal data but struggle to incorporate broader contextual information.

“You get a narrow view of your own data,” he explained, “but combining that with unstructured information that might answer your real question is still challenging.”

The result: AI systems often operate with incomplete context.

Vector Search Moves Into the Database

Vector search has emerged as a bridge between structured and unstructured worlds. But its evolution over the past three years has changed how enterprises deploy it.

Early use cases focused on semantic search: finding meaning rather than matching exact keywords. Bug tracking systems, for example, began identifying duplicate tickets through semantic similarity rather than exact wording.

But enterprises quickly discovered a new problem.

If vector search operates in a separate system from operational databases, combining results becomes inefficient. Queries might return semantically similar products, only to fail when inventory or pricing constraints are applied.

Krishnamurthy noted that this led to a shift in architecture.

“The challenge becomes that you need all of these pieces of information in a single system that you can combine efficiently.”

Google’s approach, implemented in AlloyDB, co-locates vector embeddings with relational data. Even if original files remain in object storage, their embeddings can live alongside structured data, allowing unified queries across both worlds.

The goal is minimizing data movement while maximizing query relevance.
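What that looks like in practice is a single query that ranks by semantic similarity while filtering on live operational fields. The sketch below is illustrative rather than drawn from the podcast: it assumes a hypothetical products table with embedding, in_stock, and price columns in a PostgreSQL-compatible database such as AlloyDB, using the pgvector extension's distance operator.

```python
# Minimal sketch: one query combining vector similarity with relational
# filters. Schema, column names, and connection details are hypothetical.
import psycopg2

def find_similar_in_stock(conn, query_embedding, max_price, k=10):
    # pgvector expects a '[x,y,z]' literal; build it from the Python list.
    vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
    sql = """
        SELECT id, name, price
        FROM products
        WHERE in_stock                        -- operational filter
          AND price <= %s                     -- pricing constraint
        ORDER BY embedding <=> %s::vector     -- cosine distance (pgvector)
        LIMIT %s
    """
    with conn.cursor() as cur:
        cur.execute(sql, (max_price, vec, k))
        return cur.fetchall()
```

Because the embeddings live next to the rows they describe, the filter and the similarity ranking happen in one query plan instead of across two systems.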

A Shift in Mindset: Think Like a Search Engineer

Traditional databases prioritize exact answers. Search systems prioritize relevance.

AI systems increasingly operate in shades of gray between the two.

“For fifty years, databases focused on exact results,” Krishnamurthy said. “Now, applications need the best possible results, not just exact ones.”

This shift requires continuous improvement, evaluation techniques, and relevance tuning, practices more familiar to search engineers than to database administrators.

Enterprises must move from static queries to continuously optimized retrieval systems feeding AI agents and applications.

The metric shifts from correctness alone to usefulness.
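In practice, "continuously optimized" means running the kind of evaluation loop search teams have always run. Below is a minimal Python sketch of one common retrieval metric, recall@k; the retriever interface and the labeled query-to-relevant-results judgments are assumptions for illustration.

```python
# Sketch of an offline retrieval-quality check. 'labeled_queries' is an
# assumed dataset of (query, relevant_ids) judgments.
def recall_at_k(retrieved_ids, relevant_ids, k=10):
    """Fraction of relevant items that appear in the top-k results."""
    if not relevant_ids:
        return 0.0
    hits = len(set(retrieved_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids)

def evaluate(retriever, labeled_queries, k=10):
    """Mean recall@k across a labeled query set."""
    scores = [recall_at_k(retriever(query), relevant, k)
              for query, relevant in labeled_queries]
    return sum(scores) / len(scores)
```

Tracking a metric like this across releases is what turns a static query into a tuned retrieval system.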

Governance Becomes Non-Optional

As AI systems move from experiments to operational workloads, governance moves from a compliance checkbox to a foundational requirement.

Krishnamurthy frames trust in two parts:

  • Getting the right information.
  • Ensuring users only access data they are authorized to see.

Even small schema details matter.

“If you annotate that one column is billing address and another is shipping address,” he explained, “AI models can generate better queries automatically.”
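In a PostgreSQL-compatible database, one standard place to put that kind of annotation is a column comment, which text-to-SQL tooling can read from the catalog. The sketch below is illustrative only; the table and column names are hypothetical, and the podcast does not specify the mechanism Google uses.

```python
# Illustrative: annotate two easily confused columns so a model
# generating SQL can tell them apart. Names are hypothetical.
def annotate_address_columns(conn):
    with conn.cursor() as cur:
        cur.execute("COMMENT ON COLUMN orders.addr_1 IS "
                    "'Billing address used for payment and invoicing'")
        cur.execute("COMMENT ON COLUMN orders.addr_2 IS "
                    "'Shipping address where the order is delivered'")
    conn.commit()
```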

But governance extends beyond schemas. Many operational systems rely on application-level access controls. AI agents querying databases must enforce the same rules dynamically.

“Security and accuracy are both very important when connecting operational databases to LLMs,” he said.

Without governance embedded in pipelines, AI systems risk returning incorrect or unauthorized data.
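One established pattern for enforcing such rules at the data layer, so that even an agent-generated query sees only authorized rows, is row-level security. The sketch below uses PostgreSQL's RLS with a hypothetical schema; it is a generic illustration, not a description of Google's implementation.

```python
# One-time setup: enable row-level security and key a policy to a
# session variable. Table, policy, and setting names are hypothetical.
SETUP_RLS = """
ALTER TABLE customer_orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY per_customer ON customer_orders
    USING (customer_id = current_setting('app.customer_id')::int);
"""

def query_as_user(conn, customer_id, sql, params=()):
    """Run an agent-generated query with the end user's identity bound."""
    with conn.cursor() as cur:
        # Bind the session to this user; the policy then filters every
        # row the query can touch, whatever SQL the agent produced.
        cur.execute("SELECT set_config('app.customer_id', %s, false)",
                    (str(customer_id),))
        cur.execute(sql, params)
        return cur.fetchall()
```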

Google-Scale Lessons: Reliability, Latency, and Correctness

Operating AI systems at Google scale introduces non-negotiable constraints.

Reliability is job number one.

“If you’re running a five-nines service, internally we target five-and-a-half nines,” Krishnamurthy noted.
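Those targets translate directly into downtime budgets, as a quick back-of-the-envelope calculation shows:

```python
# Converting availability "nines" into an annual downtime budget.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960

def downtime_minutes(availability):
    """Allowed downtime per year, in minutes, at a given availability."""
    return (1 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes(0.99999), 1))   # five nines: ~5.3 min/year
print(round(downtime_minutes(0.999995), 1))  # five and a half: ~2.6 min/year
```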

But AI systems introduce additional pressure points:

  • Cost effectiveness across massive infrastructure
  • Latency budgets stretched by multi-system pipelines
  • Correctness challenges when data is copied across systems

Moving data between specialized systems introduces synchronization risks. Users expect updates to appear instantly everywhere.

Google increasingly favors unified storage layers supporting multiple data models over fragmented systems.

A single storage foundation simplifies reliability efforts and reduces developer complexity.

“Making one system really good is easier than making ten systems work together perfectly,” Krishnamurthy said.

The 2026 Priority: Data as AI Fuel

For CIOs and CTOs planning AI roadmaps in 2026, Krishnamurthy’s advice is direct:

Fix your data foundation first.

“It’s not possible to build modern AI infrastructure without organizing data as the fuel for that AI engine,” he said.

The starting point is a universal data catalog enriched with semantic information and governance controls. From there, organizations can move toward agentic workflows connecting multiple systems securely.

Google’s Data Cloud strategy centers on connecting transactional, analytical, and AI systems into a unified environment.

The future of enterprise AI lies less in isolated models and more in connected, governed, reliable data ecosystems.

The Infrastructure Reality Behind the AI Boom

For the data center industry, these insights carry an important implication.

The AI buildout isn’t just about GPUs and megawatts. It’s about building infrastructure capable of supporting continuous, data-driven AI systems operating at global scale.

Reliable storage layers, low-latency data paths, and unified architectures increasingly define competitive advantage.

As enterprises move beyond experimentation, the winners will be those who solve the operational realities of AI, not just the modeling ones.

Or, as Krishnamurthy’s comments make clear:

The future of AI success isn’t just about smarter models.

It’s about smarter data systems.

 

At Data Center Frontier, we talk the industry talk and walk the industry walk. In that spirit, DCF Staff members may occasionally use AI tools to assist with content. Elements of this article were created with help from OpenAI's GPT5.

 

About the Author

Matt Vincent

A B2B technology journalist and editor with more than two decades of experience, Matt Vincent is Editor in Chief of Data Center Frontier.
