Database Optimization: Emerging Technologies to Solve Performance Challenges

July 16, 2021

In this edition of Voices of the Industry, Martin Wielomski, Director of Product Management at phoenixNAP Global IT Services, discusses how emerging technology like Optane DCPMM can solve database management challenges.

The data-centric reality

We live and work in a data-centric world. Databases are at the core of almost every application we use. So, it is no surprise that the Database Management System (DBMS) market grew to 58.4 billion U.S. dollars in 2020.

With the increased focus on infrastructure modernization, databases have started migrating from on-prem to cloud, multi-cloud, and hybrid cloud environments. Gartner predicts that 75% of all databases will be deployed on or migrated to a cloud platform by 2022. The same report reveals that only 5% of respondents have ever considered repatriation to on-prem environments.

This expansion leads to a constant increase in data volume, accumulating zettabytes (a zettabyte is a trillion gigabytes) of data annually. Consequently, demand for technologies that can solve the performance challenges of these ever-growing datasets is higher than ever.

Data management goals

When choosing an infrastructure for database workloads, IT professionals generally focus on achieving the following:

  • Data integrity. Databases contain business-critical information that is vital to organizations and needs to stay protected against corruption and threats.
  • Database performance. With the amount of data constantly growing, demand for high-performance processing of datasets increases.
  • Cost-effectiveness. Organizations strive to lower infrastructure costs while maintaining data performance and integrity.

The status quo

To tackle the above-mentioned challenges, organizations tend to scale their resources up by upgrading memory and storage. While this does lead to database performance improvements, it comes with limitations and substantial expenses.

DRAM is costly, volatile, and limited in capacity. Traditional options such as SATA and even NVMe storage are cheaper, offer greater capacity, and provide persistence, but they are slow at moving data to the CPU. Data-hungry workloads need direct data access from the CPU for fast in-memory operations, and traditional storage caching cannot provide that form of acceleration. DRAM, by contrast, serves data to the CPU quickly enough that the processor does not waste cycles waiting for data to arrive, which improves overall efficiency.

Out with the old and in with the new

Several OEMs are addressing the database optimization issue, pushing the frontiers of CPU, memory, and storage. Intel is one of them.

The new 3rd Generation Intel Xeon Scalable processors are built on 10nm technology with up to 40 cores per socket. The enhanced throughput delivers up to 1.64x more database transactions per minute, shortening the time required to perform high-quality data extraction from a database.

What the new CPUs also bring to the table is confidential computing. The newest generation of Intel Xeon Scalable processors fully supports Intel Software Guard Extensions (SGX), meeting the market's need for improved security and bringing the concepts of confidential computing into production environments.

Paired with the Intel Optane Persistent Memory 200 series, the new CPUs deliver notable performance improvements for workloads requiring large in-memory operations. Coming from the same ecosystem, the two components bring large datasets closer to the CPU, significantly boosting database performance.

Where memory meets storage

Intel DCPMM (Intel Optane DC Persistent Memory Module) combines the performance of DRAM with the capacity and data persistence of storage, all packed in a DIMM form factor. It delivers up to 25% higher memory bandwidth than the previous generation, along with in-memory database support and enhanced database management speed and performance.

When it comes to data integrity, this fusion of storage and memory enables data retention over long periods, even without a constant power supply. With no reloading upon restart, the data stays in memory and is immediately available. This matters for mission-critical applications, where reloading from traditional storage can take anywhere from minutes to hours, depending on the database's volume and complexity. Shorter and less frequent downtime means smaller losses in case of outages and greater reliability overall.
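The practical benefit, data that survives a restart without a reload step, can be sketched with a memory-mapped file. On real hardware, persistent memory in App Direct mode is typically exposed through a DAX-capable filesystem and mapped directly by the application; the file path below is illustrative, and the sketch runs on any ordinary filesystem.

```python
import mmap

# Illustrative path: on a real system this would live on a DAX-mounted
# persistent-memory filesystem (e.g. /mnt/pmem), not an ordinary disk.
PATH = "/tmp/pmem_demo.bin"
SIZE = 4096  # size of the mapped region in bytes

def write_record(data: bytes) -> None:
    """Create and size the backing file, then map it and write in place."""
    with open(PATH, "wb") as f:
        f.truncate(SIZE)
    with open(PATH, "r+b") as f:
        with mmap.mmap(f.fileno(), SIZE) as mm:
            mm[:len(data)] = data
            mm.flush()  # on real pmem, flushing persists the written cache lines

def read_record(length: int) -> bytes:
    """After a restart, the mapping is simply reopened; there is no reload step."""
    with open(PATH, "r+b") as f:
        with mmap.mmap(f.fileno(), SIZE) as mm:
            return bytes(mm[:length])
```

A database built on persistent memory works on the same principle: its working data structures live in the mapped region and are immediately usable after a power cycle, instead of being rebuilt from storage.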

As for database management speed and performance, Optane DCPMM is faster and more durable than most NAND storage. This speeds up not only transactions but also real-time analytical workloads, with no performance drops even under heavy load.

Finally, Optane Persistent Memory reduces database management costs. Relational databases perform best when located on a single server, and consolidating the server footprint saves on licensing, power consumption, and infrastructure. With 3rd Gen Intel Xeon Scalable processors allowing more VMs per unit, virtualization delivers greater performance on a smaller footprint and up to 25% lower cost per VM. Adding Optane DCPMM lowers TCO further, with increased performance and consistent data integrity at a fraction of the cost of an all-DRAM system.

New technology, new possibilities

With a capacity of up to 512GB per module and up to 6TB of total system memory per socket, entire working sets can fit in Optane DCPMM, making it suitable for even the largest datasets.

Since Optane DCPMM can act as both storage and memory, different caching, tiering, and storage combinations can be leveraged for additional performance optimization. For example, only the hottest tables or sub-tables containing crucial data and indexes can be stored and cached in memory for the fastest access, leaving more memory free to improve overall system performance.

Another option for a highly effective, workload-optimized database management system involves a two-tier workload optimization for high disk I/O traffic. In these situations, Optane DCPMM can be used for hot data and SSDs for warm data.
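The hot/warm split described above can be illustrated with a minimal two-tier store. The class, tier names, and promotion policy below are hypothetical and exist only to show the idea; a real deployment would rely on the DBMS's own tiering or caching features rather than application code.

```python
class TwoTierStore:
    """Toy sketch of hot/warm tiering: a small fast tier in front of a
    larger slow tier. Sizes and names are illustrative only."""

    def __init__(self, hot_capacity: int):
        self.hot = {}                  # stands in for the Optane PMem tier
        self.warm = {}                 # stands in for the SSD tier
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        # New data lands in the warm tier first.
        self.warm[key] = value

    def get(self, key):
        # Fast path: the hot tier already holds the value.
        if key in self.hot:
            return self.hot[key]
        # Slow path: fetch from the warm tier, then promote it while
        # the hot tier still has room, so repeat reads become fast.
        value = self.warm[key]
        if len(self.hot) < self.hot_capacity:
            self.hot[key] = value
        return value
```

The point of the sketch is the access pattern: reads that hit the hot tier avoid the slower medium entirely, which is exactly what placing hot tables on DCPMM and warm data on SSDs achieves at the hardware level.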

The new 3rd Gen Intel Xeon Scalable processors are available with phoenixNAP's Bare Metal Cloud servers, which can be provisioned automatically through an API, a CLI, or Infrastructure as Code tools. Instances can be billed hourly and scaled or decommissioned with a few clicks or lines of code. Monthly reservation options are also available for more predictable workloads and allow for improved cost savings.
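As a sketch of what API-driven provisioning looks like, the snippet below assembles a server-creation request. The endpoint URL, field names, and example values are assumptions for illustration; the authoritative request schema is phoenixNAP's Bare Metal Cloud API documentation.

```python
# Assumed endpoint; verify against the official API reference.
BMC_API = "https://api.phoenixnap.com/bmc/v1/servers"

def build_server_request(hostname: str, server_type: str, location: str) -> dict:
    """Assemble the JSON body for a server-provisioning call.

    All field names and values are illustrative, not a confirmed schema.
    """
    return {
        "hostname": hostname,
        "type": server_type,     # e.g. an instance flavor with 3rd Gen Xeon CPUs
        "location": location,    # data-center location code
        "os": "ubuntu/bionic",   # illustrative OS image identifier
    }

# An actual provisioning call would then be something like:
#   requests.post(BMC_API, json=build_server_request("db-01", "s2.c1.medium", "PHX"),
#                 headers=auth_headers)
```

The same request body can equally be expressed as a CLI invocation or a Terraform resource, which is what makes scripted scale-up and decommissioning practical.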

The bottom line

When it comes to future-proofing our data-centric lives, Intel Optane Persistent Memory combined with Intel's 3rd Gen Xeon Scalable CPUs shows remarkable results. From aggregating datasets while ensuring privacy, data integrity, and high performance to easing migration to virtualized environments, it far surpasses its predecessors. As more organizations adopt this technology, we have yet to see the full range of innovative use cases it has in store.

About the Author

Martin Wielomski is Director of Product Management at phoenixNAP Global IT Services. He has years of experience in the Information Technology and Cloud Hosting industries and specializes in global business and product strategy development, international business, product management, and evangelism. Martin believes in lifelong learning and in leadership through engagement, while maintaining a realistic, down-to-earth approach to people.

Voices of the Industry

Our Voice of the Industry feature showcases guest articles on thought leadership from sponsors of Data Center Frontier. For more information, see our Voices of the Industry description and guidelines.
