The market forces and series of cascading system failures of the 2008 financial crisis created a perfect storm in the financial services industry. Now, a decade after the storm, banks and other financial institutions are still trying to wade through the aftermath. New challenges that have arisen as a consequence call for new approaches to managing data centers.
New regulations, new competitors, new digital demands and even new technologies are forcing finance industry data center operators to navigate these previously uncharted waters. Additionally, the slow rate of recovery for most banks has resulted in restricted levels of capital available for infrastructure improvement and expansion.
As IT teams focused on projects for consolidation and virtualization to free up space and cut costs in existing facilities, many data center infrastructure teams were forced to “sweat the assets,” concentrating on maintenance and optimization of the existing power and cooling systems — sometimes well beyond the initial design lifecycle. Those teams have achieved some impressive gains in efficiency and cost reductions, but, over time, the risks of pushing infrastructure to the limits outweigh the incremental gains.
Many banks have already closed branches or are in the midst of doing so, as digital banking options become more popular and consumers choose mobile apps. With these new digital demands, data center loads have increased dramatically. Yet again, overloading capacity and pushing infrastructure beyond its initial design standards create risk, and there is certainly no room for threats to security and uptime in this industry.
At the same time, compute environments have become more varied and complex. Instead of supporting one monolithic infrastructure, banks now have to manage distributed platforms, which compounds the need for new investments and new ideas for evolving and expanding data center infrastructure.
New Approaches for Financial Services Data Centers
To help balance risk and performance for existing data centers, we offer a complete portfolio of life cycle services: from monitoring and analytics that help fill the gaps, through maintenance, optimization and Data Center Infrastructure Management (DCIM), to assessments that can lead to recommissioning or identify opportunities to free up power, space and cooling.
Certainly, identifying and eliminating waste must be done within the context of risk. If a change improves efficiency but increases risk, it is a move that can't be made, despite being good for the bottom line. At that point, new designs and technologies are needed to meet current demands and make space for the future.
Overall, the industry is moving towards a new model for managing shifting data center demands, which will likely be a combination of self-owned/self-managed, colocation and cloud — depending on the organization.
For many, colocation space has been an interim approach to mitigate risk and satisfy shorter-term or localized requirements. Moving some IT assets to colocation frees up capital and reallocates funds to an operational budget, simplifying the ability to meet the demand for compute power, but this option may not always be the best choice financially.
Whether your colocation strategy fills a short-term need or is part of a long-term, multi-site solution, enterprise-wide DCIM software systems can dramatically simplify and improve the management of IT infrastructure assets wherever they reside.
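As a purely illustrative sketch of what "managing assets wherever they reside" means in practice, the snippet below models a minimal multi-site inventory of the kind DCIM tools maintain. The class names, fields and site labels here are hypothetical, not any vendor's actual schema or API:

```python
# Hypothetical sketch of a multi-site, DCIM-style asset inventory.
# All names and fields are illustrative, not a real product's data model.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    site: str              # e.g. an owned data center or a colo facility
    power_w: int           # rated power draw in watts
    connections: list = field(default_factory=list)  # linked switches, routers, etc.

class Inventory:
    def __init__(self):
        self.assets = {}

    def add(self, asset):
        self.assets[asset.name] = asset

    def capacity_by_site(self):
        """Aggregate rated power per site -- the kind of view needed
        when planning a migration between owned space and colocation."""
        totals = {}
        for a in self.assets.values():
            totals[a.site] = totals.get(a.site, 0) + a.power_w
        return totals

inv = Inventory()
inv.add(Asset("db-01", "owned-dc", 4500))
inv.add(Asset("web-01", "colo-east", 1200, connections=["sw-01"]))
inv.add(Asset("sw-01", "colo-east", 300))
print(inv.capacity_by_site())  # {'owned-dc': 4500, 'colo-east': 1500}
```

Having the assets and their connectivity in one place is what makes cross-site capacity planning, and eventually migration planning, straightforward.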
To extend capacity in existing data centers, and even as a new approach to data center construction, prefabricated "all in one" power, cooling and IT modules work extremely well in the finance industry. Their consistency of build makes them easy to replicate across primary, backup and dark sites, and they let organizations invest and grow as digital demands and capital investment priorities permit.
Just as Open Compute technologies like Open Rack designs make far better use of space and more efficient and reliable use of energy and cooling capacity, prefabricated and modular data centers are fine-tuning infrastructure to gain much better cost performance per kW. Our own analysis found total cost of ownership savings of 30% compared with traditional built-out power and cooling infrastructure.
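The percentage savings figure depends entirely on the inputs, but the comparison itself is simple arithmetic. A minimal sketch, using entirely hypothetical cost figures (not data from the analysis cited above), of how such a total-cost-of-ownership comparison might be framed:

```python
# Hypothetical TCO comparison; every dollar figure below is an
# illustrative placeholder, not a number from any real analysis.
def total_cost_of_ownership(capex, annual_opex, years):
    """Simple undiscounted TCO: upfront capital plus operating cost over the life."""
    return capex + annual_opex * years

traditional = total_cost_of_ownership(capex=10_000_000, annual_opex=900_000, years=10)
prefab = total_cost_of_ownership(capex=7_500_000, annual_opex=580_000, years=10)

savings_pct = (traditional - prefab) / traditional * 100
print(f"Traditional build-out TCO: ${traditional:,}")
print(f"Prefabricated module TCO:  ${prefab:,}")
print(f"Savings: {savings_pct:.0f}%")
```

A more rigorous version would discount future operating costs to present value, but the shape of the comparison, capital plus lifetime operations for each option, stays the same.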
Where standardization, speed of deployment and overall IT cost reduction are priorities for smaller and edge compute needs, on trading floors, in private banking, and in bank and other branch offices, micro data center solutions and modular, scalable UPSs are worth investigating. Many financial services organizations have found that capital cost savings alone can be 50% or more compared to traditional technology rooms.
While these are just a few examples of the fresh approaches our banking and finance customers are taking to better manage their data centers, one thing is certain: more new models will be needed to address the still-turbulent waters after the storm.
Visit our website for more information on financial services data centers and managing your digital transformation.
George, this is valuable information on DCIM. The benefits of data collection, and of managing and planning the data center, are all made easy by its rich features (graphical 3D views, etc.). But the basics must be taken seriously: if you input garbage, expect garbage, and you need a dedicated team to run it under BAU once implemented. Even to move to colo or cloud providers, you must know what you have, how each source device connects to its target device via switches, routers and so on, and what cabling currently provides that connectivity. If you have all that information in one place, planning any migration becomes easy. That, I think, is the major benefit of a DCIM solution. Thank you for the great article.
Thanks for the comment and the insights, Raj. I think your thoughts are completely accurate.