Back in the late ’70s and ’80s, the original Star Wars movies featured amazing future technology and were all about “the power of the Force.” The latest movie has now broken all box office records, and it got me thinking about how far IT and computing technology has progressed over the years – and yet how much still remains untapped.
Yes, several of the envisioned gains have come true – many of them driven by Moore’s Law and the growing force of the microprocessor revolution. For example, server virtualization software such as VMware radically redefined consolidation savings and productivity, CPU clock speeds got faster, and microprocessors became commodities used everywhere – powering PCs, laptops, smartphones and intelligent devices of all types. But the full force and promise of using many processor cores in parallel – what we now call multi-core processing – remains largely untapped, and I/O continues to be the major bottleneck holding the IT industry back from the next revolution in consolidation, performance and productivity.
Virtual computing is still bottlenecked by I/O. Just as city drivers can only dream about flying vehicles while gridlock haunts their morning commute, IT is left wondering if it will ever see the day when application workloads reach light speed.
How can it be that with multi-core processing, virtualized apps, abundant RAM and large amounts of flash, you still have to deal with I/O-starved virtual machines (VMs) while many processor cores sit idle? Yes, you can run several independent workloads at once on the same server using separate CPU and memory resources, but that is where everything begins to break down. The many workloads in operation generate concurrent I/O requests, yet only one core is charged with I/O processing. This architectural limitation strangles application performance. Instead of one server doing vast quantities of work, IT is forced to add more servers and racks to work around the I/O bottleneck – and this sprawl undermines the consolidation and productivity savings that are the basic premise and driver of virtualization.
All it takes, then, is a few VMs running simultaneously on multi-core processors, churning out enormous volumes of work, to overwhelm the single core tasked with serial I/O. Instead of a flood of accomplished computing, a trickle of I/O emerges. IT is left feeling like the kids who grew up watching Star Wars, asking – where are our flying starships, and when can we travel at light speed?!
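To make the bottleneck concrete, here is a minimal, purely illustrative Python sketch – not DataCore’s code, and with arbitrary request counts and latencies – that funnels the I/O of several simulated VM workloads through one shared queue. With a single I/O worker, every request backs up behind one serial drain; with several workers, the same load completes in a fraction of the time.

```python
# Toy model of the serial I/O bottleneck (illustrative only, not DataCore code).
# Many "VM" threads generate I/O requests; a fixed number of I/O workers drain them.
import queue
import threading
import time

IO_LATENCY_S = 0.001      # pretend each I/O takes 1 ms of device time
REQUESTS_PER_VM = 200
NUM_VMS = 8

def run(num_io_workers: int) -> float:
    requests: queue.Queue = queue.Queue()

    def vm_workload(vm_id: int) -> None:
        # Each simulated VM issues its I/O requests as fast as it can.
        for i in range(REQUESTS_PER_VM):
            requests.put((vm_id, i))

    def io_worker() -> None:
        # Drains the shared queue; with one worker this is the serial chokepoint.
        while True:
            item = requests.get()
            if item is None:
                break
            time.sleep(IO_LATENCY_S)   # stand-in for the actual device I/O

    start = time.perf_counter()
    vms = [threading.Thread(target=vm_workload, args=(v,)) for v in range(NUM_VMS)]
    workers = [threading.Thread(target=io_worker) for _ in range(num_io_workers)]
    for t in vms + workers:
        t.start()
    for t in vms:
        t.join()
    for _ in workers:
        requests.put(None)             # poison pills to stop the I/O workers
    for t in workers:
        t.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"1 I/O worker : {run(1):.2f}s")
    print(f"8 I/O workers: {run(8):.2f}s")
```

The exact timings will vary by machine, but the shape of the result is the point: the work generated in parallel by many VMs collapses into a single serial drain, no matter how many cores sit idle.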
The good news is that all is not lost. DataCore has a number of bright minds hard at work bringing a revolutionary breakthrough for I/O to prime time: DataCore Parallel I/O technology lets virtualized traffic flow through without slowdown. Its software-defined parallel I/O architecture is built to capitalize on today’s powerful multi-core processing infrastructure. By enlisting software to drive I/O processing across many cores simultaneously, it eradicates I/O bottlenecks and drives a higher level of consolidation savings and productivity. The better news is that this technology is already on the market today.
Just as Star Wars shattered the box office record, check out how DataCore recently set a new world record for price-performance on a hyperconverged system (on the Storage Performance Council’s peer-reviewed SPC-1 benchmark). DataCore also reported the best performance per footprint and the fastest response times ever. While the numbers do not actually reach light speed, DataCore has lapped the field not once but multiple times. See the latest benchmark results for yourself in this article that appeared in Forbes: The Rebirth of Parallel I/O.
How? DataCore’s software actively senses the I/O load being generated by concurrent VMs. It adapts and responds dynamically, assigning the appropriate number of cores to process the input and output traffic. As a result, VMs no longer sit idle waiting for a serial I/O thread to become available. And should the I/O load lighten, CPU cores are freed to do more computational work.
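As a thought experiment only – the class name, thresholds and threading mechanics below are assumptions for illustration, not a description of DataCore’s implementation – an adaptive scheme like the one described above might look roughly like this: grow the pool of I/O workers when the request queue deepens, and let idle workers retire so their cores go back to computational work.

```python
# Hypothetical sketch of adaptive I/O parallelism (illustration only).
# More queued requests -> more I/O workers, capped at the core count;
# idle workers time out and return their core to compute work.
import os
import queue
import threading
import time

class AdaptiveIOPool:
    def __init__(self, max_workers: int = os.cpu_count() or 4):
        self.requests: queue.Queue = queue.Queue()
        self.max_workers = max_workers
        self.workers: list[threading.Thread] = []
        self._lock = threading.Lock()
        self._spawn_worker()  # always keep at least one I/O worker alive

    def submit(self, request) -> None:
        self.requests.put(request)
        self._rebalance()

    def _rebalance(self) -> None:
        # Simple stand-in policy: one extra worker per 32 queued requests,
        # capped at the number of available cores.
        with self._lock:
            wanted = min(self.max_workers, 1 + self.requests.qsize() // 32)
            while len(self.workers) < wanted:
                self._spawn_worker()

    def _spawn_worker(self) -> None:
        t = threading.Thread(target=self._drain, daemon=True)
        self.workers.append(t)
        t.start()

    def _drain(self) -> None:
        while True:
            try:
                request = self.requests.get(timeout=0.5)
            except queue.Empty:
                # Load has lightened: retire this worker (keep one alive)
                # so its core can go back to computational work.
                with self._lock:
                    if len(self.workers) > 1:
                        self.workers.remove(threading.current_thread())
                        return
                continue
            time.sleep(0.001)  # stand-in for servicing the I/O request

if __name__ == "__main__":
    pool = AdaptiveIOPool()
    for i in range(2000):
        pool.submit(f"request-{i}")    # bursty load: worker count ramps up
    time.sleep(3)                      # give the daemon workers time to drain
```

The policy here (one extra worker per 32 queued requests) is just a placeholder; the point is that the degree of I/O parallelism tracks the offered load rather than being fixed at a single serial thread.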
This not only solves the immediate performance problem facing multi-core virtualized environments, it also significantly increases the VM density possible per physical server. It allows IT to do far more with less: fewer servers or racks, and less space, power and cooling needed to get the work done. In effect, it achieves remarkable cost reductions through maximum utilization of CPU cores, memory and storage while fulfilling the productivity promise of virtualization.
You can read more about this in DataCore’s white paper, “Waiting on I/O: The Straw that Broke Virtualization’s Back.”