Virtualization and Data Center Fabrics are Among Gartner’s Top 10 IT Trends

Gartner managing VP and chief of research David Cappuccio kicked off the Gartner Data Center Conference in Las Vegas today with a look at the top 10 trends facing IT and how they’ll impact the data center.

While you can find the full list in this fine piece from my good friends at Network World (which covered much the same talk at a previous Gartner event), I wanted to focus on a couple of trends I found most interesting: virtualization and fabric data centers.

The tip of the virtualization iceberg

Most IT folks look at virtualization as something we do to servers, Cappuccio said, but that’s only the “tip of the iceberg.” The next move is to things like hardware virtualization and, especially, desktop virtualization, in which desktop images are hosted in the data center and displayed remotely on users’ machines. All data remains centralized, providing better security, and upgrades are far simpler, since you only need to deal with that single central image, not hundreds or thousands of individual desktops.

It all makes sense, but it doesn’t wind up saving money, as Cappuccio pointed out – which is likely why it hasn’t taken off. As Gartner CFO Chris LaFond noted in a separate session, you’ve got to show a quick ROI on IT projects or forget it. Given all the infrastructure that goes into desktop virtualization – mainly lots of high-powered servers, new (albeit less expensive) desktop machines, and a significant software investment – the desktop virtualization equation isn’t adding up for most customers.

In the meantime, though, customers should take another look at the servers they’ve already virtualized. If they’re running at 50% to 55% utilization, they’re under-virtualized, Cappuccio says. That means there’s room for further consolidation – and that’s an easy ROI case to make.
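That consolidation math is easy to sketch. The back-of-the-envelope calculation below is illustrative only – the host counts and the 75% utilization target are assumptions for the example, not figures from Cappuccio’s talk:

```python
import math

def hosts_after_consolidation(num_hosts, current_util, target_util):
    """Estimate how few hosts could carry the same aggregate load
    if each ran at a higher target utilization."""
    total_load = num_hosts * current_util       # aggregate work across the pool
    return math.ceil(total_load / target_util)  # hosts needed at the target level

# A hypothetical pool of 100 virtualized hosts idling at 50% utilization,
# consolidated to a 75% target:
print(hosts_after_consolidation(100, 0.50, 0.75))  # -> 67
```

In this sketch, a third of the hosts could be retired or repurposed – the kind of quick, visible ROI case LaFond says IT projects need.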

Fabrics come to the data center

All that virtualization leads to lots of smaller, higher-density servers stuffed into racks that now require far more bandwidth than they previously did to deal with demand. Within four years, Gartner thinks bandwidth (as measured in I/O) per rack will increase 25 times – before considering storage and multimedia effects.

Enter fabric-based computing, which Gartner defines as “a set of compute, storage, memory and I/O components joined through a fabric interconnect and the software to configure and manage them.”

What that essentially means is larger switches with far more ports than their predecessors, so you no longer need two or three tiers of switches, each aggregating traffic up to the next tier.

Once everything is tied to the data center fabric, it becomes much easier to pool resources – both locally and globally – and to move workloads around as needs require, Cappuccio says. That enables companies to make the best use of resources by time of day, such as shutting down some under-utilized servers at night and shifting the remaining burden to other servers, ideally those in a different time zone where the workday is still going strong.

Of course, high-density racks will also require significant and perhaps innovative cooling technology, which is a topic we’ve covered many times, such as in this post on room, row and rack cooling technologies.
