
Using Data Value to Define Storage Requirements


By Mike Ferron-Jones, Technology Marketing Manager, Data Center Group at Intel

 

Over the past five years, the volume of data stored in homes, businesses, and data centers has grown exponentially. Gigabytes have grown into terabytes and petabytes, and the years ahead will move us into the era of exabytes and zettabytes. By itself, data volume is an artifact of human and machine activity.

 

With the rising tide of data stores have come new approaches to turning data into information. The most dramatic change in data management was pioneered in the last decade, when Hadoop and other tools enabled meaningful analysis of unstructured data. Released from the bounds of a structured database and opened to cloud-scale analysis tools, data that heretofore was unused or silent suddenly started speaking and giving direction. Since then, data’s value has increased by informing better decisions. Big data analysis has changed the way we live, travel, guide medical care, make friends, communicate, buy goods and services, and express ourselves to our communities.

 

Storage media devices have evolved to keep pace with exponential data growth. Given the current trend lines, however, incremental advances in legacy devices can no longer solve our data analysis challenges. We now need revolutionary changes in our approaches to data availability and storage, allowing larger datasets to guide better decisions in less time.

 

So how did we get to this point? Let’s take a step back and look at today’s storage media pyramid. This price performance pyramid reflects the fact that every technology has trade-offs in terms of data storage capacity, speed, and cost. While the technologies within the pyramid have changed over time, the fundamental approach has remained the same for many years.

 

This approach puts frequently used data in a fast, but relatively small and expensive, hot tier, and less-frequently used data in a large cold tier that uses slower, less-expensive technology. Data that falls somewhere between the hot and cold layers is stored in a warm tier. Currently, the hot and warm tiers leverage DDR4 DRAM and NAND solid-state drives (SSDs), respectively, and the cold tier leverages spinning hard disk drives (HDDs).

Today’s storage media pyramid
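
To make the tiering idea concrete, here is a minimal sketch of a placement policy that assigns data to the hot, warm, or cold tier based on how often it is accessed. The thresholds, class names, and tier labels are illustrative assumptions, not a description of any particular product’s policy; real systems also weigh cost, capacity, and service-level targets.

from dataclasses import dataclass

# Illustrative thresholds only; real tiering policies also weigh cost,
# capacity, and service-level targets, not just access frequency.
HOT_ACCESSES_PER_DAY = 100
WARM_ACCESSES_PER_DAY = 10

@dataclass
class DataObject:
    name: str
    accesses_per_day: float

def choose_tier(obj: DataObject) -> str:
    """Map an object to the DRAM / NAND SSD / HDD tier by access frequency."""
    if obj.accesses_per_day >= HOT_ACCESSES_PER_DAY:
        return "hot (DDR4 DRAM)"
    if obj.accesses_per_day >= WARM_ACCESSES_PER_DAY:
        return "warm (NAND SSD)"
    return "cold (HDD)"

for obj in (DataObject("session cache", 5000),
            DataObject("daily sales report", 20),
            DataObject("archived logs", 0.1)):
    print(f"{obj.name}: {choose_tier(obj)}")

Running this prints the tier each example object lands in; the point is simply that the pyramid is a policy decision about where data lives, driven by how hot the data is.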

 

While it has served us well, the current storage media pyramid is starting to crack under the weight of today’s data loads. Datasets are swelling in size, the number of transactions is growing exponentially, and expected response times are shrinking. All the while, more and more processor cores and application containers are packed into each server, each contending for that small and valuable hot data tier. We need hot tier performance at cold tier cost efficiency and scale.

 

As things now stand, the layers below the hot tier cannot rise to the performance challenges posed by tomorrow’s processors and software; they are simply too slow.

Hot-layer storage performance

In recognition of this reality, Intel is working with a broad ecosystem to deliver revolutionary new technologies that can better support the emerging requirements for data performance.

 

As we solve one performance bottleneck, the next appears at the storage interface, where legacy SAS and SATA connections hold performance back. Intel is working with an industry consortium of more than 90 members to develop a standard storage interconnect called NVM Express (NVMe) to serve as the standard software interface for PCI Express* (PCIe*) SSDs.

 

Legacy SATA and SAS interfaces were defined for mechanical hard drives. These legacy interfaces are slow in terms of throughput and latency. NVMe leaps over the limitations of these legacy technologies. It was designed from the ground up for non-volatile memory, low latency, and amazing storage media performance.
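
As a rough illustration of what low latency means at the software level, the sketch below times small random reads against a block device. The device path is an assumption for a Linux system with an NVMe drive, the script needs permission to read the raw device, and because it does not bypass the page cache (as a real benchmark tool such as fio would with direct I/O), the numbers it reports are only indicative.

import os
import random
import time

DEV = "/dev/nvme0n1"   # hypothetical device path; adjust for your system
BLOCK = 4096           # 4 KiB reads, a common benchmark size
READS = 1000

fd = os.open(DEV, os.O_RDONLY)        # typically requires root
size = os.lseek(fd, 0, os.SEEK_END)   # block device size in bytes

latencies = []
for _ in range(READS):
    offset = random.randrange(size // BLOCK) * BLOCK
    os.lseek(fd, offset, os.SEEK_SET)
    start = time.perf_counter()
    os.read(fd, BLOCK)                # one small random read
    latencies.append(time.perf_counter() - start)
os.close(fd)

latencies.sort()
print(f"median 4 KiB read latency: {latencies[len(latencies) // 2] * 1e6:.1f} microseconds")

The same script pointed at a SATA drive versus an NVMe SSD makes the interface and media gap tangible, even if a proper comparison calls for a dedicated benchmarking tool.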

 

For a closer look at this breakthrough technology, check out this video: Unlocking SSD Performance with NVM Express (NVMe) Technology

 

On another front, Intel is delivering innovations to evolve and revolutionize storage media and further disrupt today’s mainstream storage architectures. These innovations include 3D NAND, which dramatically increases transistor densities in storage media. Unveiled earlier this year, 3D NAND is the world’s highest-density flash memory. It promises to greatly increase the capacity of SSDs while reducing their cost per gigabyte.

 

For a deep dive into 3D NAND, check out this webinar: Intel, Micron Discuss 3D NAND Technology. 3D fabrication methods increase data density and innovative materials science increases the speed at which data can be accessed by the processor.

 

These new technologies will redraw the storage media pyramid, enabling faster access to larger amounts of data for more accurate and complete analysis. Medical research, climate modelling, and solution finding will all benefit from these innovations. Data has become much more valuable to us and will become increasingly valuable as Intel scales non-volatile memory and the processing power that unleashes data value.

 

As we look to the future, we need even more revolutionary advances in data access architectures. We will take up this topic (you might have heard about 3D XPoint™ technology) in a follow-on post.

 

Intel, the Intel logo, Xeon, Intel Atom, and Intel Core are trademarks of Intel Corporation in the United States and other countries.
* Other names and brands may be claimed as the property of others.
