The emergence of cloud computing and the technological and market developments that underlie this trend have prompted organizations to reevaluate their data center strategy. There is no magic formula that clearly points to modernization, redevelopment or outsourcing of data centers. The motivations for making these choices range from operational problems in legacy data centers to the promise of cloud computing lowering costs and providing greater flexibility and cost elasticity.
This article discusses the technical infrastructure in data centers and recent technological and market developments that have a significant impact on the strategic choices that our clients make about the future of their data centers. The central theme is “do more with less”. Nonetheless, the consolidation and migration of data centers come with significant costs and risks.
Introduction
Data centers are the nerve centers of our economy. In fact, almost all automated data processing systems are housed in data centers. Government and large enterprises alike are particularly dependent on these data-processing factories.
A data center comprises not only the building with the technical installations inside, but also the IT equipment within the building that is used for processing, storing and transporting data. Data centers have a useful life of ten to twenty years, while IT equipment must be replaced about every five years. The investment for a midsize data center[“Midsize data center” means a data center with a floor area of 5,000 square meters or more that is air conditioned. Large data centers, usually for IT service providers, may have tens of thousands of square meters of air-conditioned floor space.] is at least one hundred million euros. In contrast to the long lifetime of a data center, technological developments and business objectives evolve at a very rapid pace. A data center strategy must therefore focus on future requirements and on the organization’s capacity to change, so that it can adapt to these new technologies.
This article discusses recent technological and market developments that have a significant impact on the strategic choices that our clients make about the future of their data centers. We will also discuss the challenges that are encountered on the path to the consolidation and migration of existing data centers.
What is going on in the data center?
More than one quarter of the annual IT spending of large organizations is devoted to data centers. These costs break down into the data center building and the technical installations for power supply and cooling (together eight percent) and server and storage equipment (seventeen percent) ([Kapl08]). The economic crisis has put increasing pressure on IT budgets and investments, so the data center has risen higher up the CIO agenda ([Fria08]).
Figure 1. Simplified model of an IT infrastructure.
Figure 1 illustrates a greatly simplified layered model of a technical IT infrastructure. A distinction is made between the IT infrastructure that is physically concentrated in a data center (left), the decentralized IT infrastructure for commercial buildings such as workplace automation and process automation in industrial environments (right) and the network connections between the data center and distributed IT environments (middle).
The sequence of the layers indicates that each layer is necessary for the layer above and that, ideally, technology choices can be made for each layer independently of the other layers. Thanks to the free market and open standards, several technological solutions offering the same functionality are available for each infrastructure layer. Consider, for example, the industry standards that define form factors for IT equipment and equipment racks, standard data transport protocols such as Ethernet and TCP/IP on different platforms, storage protocols such as CIFS, NFS and iSCSI, and middleware solutions, databases and applications on the various vendor platforms.
A data center comprises one or more buildings with technical installations for the power supply and cooling of racks of network, storage and server equipment. This equipment runs hundreds to thousands of software components, such as operating systems, databases, and custom or packaged applications. The data center is connected via fast (fiber-optic) networks to other data centers, office locations or production facilities.
In decentralized IT environments, the IT equipment intended for end users or production sites must be close at hand. Given the small size and decentralized nature of these spaces, we do not refer to them as data centers but as Main and Satellite Equipment Rooms (MERs and SERs).
The technical installations and IT infrastructure in data centers depend primarily on a reliable supply of electricity, and also on the provision of water for cooling and fuel for the emergency power supply.
Technological developments
This section discusses some recent technological developments that have a significant impact on the strategic choices that our clients make about the future of their data centers.
Virtualization
The virtualization of server hardware and operating systems has a huge impact on how data centers are designed and managed. Using virtualization, multiple physical servers can be consolidated onto one powerful physical server that runs multiple operating systems, or instances of the same operating system, as logical servers in parallel. The motivation for virtualization comes from research showing that the average utilization of servers over time is about twenty percent, and of web servers about 7.4 percent ([Barr07], [Meis09]). The crux of virtualization is to greatly increase the utilization of IT equipment, and of servers in particular.
Figure 2 illustrates how two physical servers can be consolidated into one physical server using virtualization techniques.
Figure 2. Virtualization makes it possible to consolidate logical servers on one physical platform.
Virtualization greatly reduces the required number of physical servers. Depending on the nature of the applications, up to twenty-five logical servers can be virtualized on one physical server. Virtualization can also substantially lower data center operational costs, because managing five to twenty times fewer physical servers requires significantly less effort. However, it requires significant investment and migration effort. The data center strategy must therefore weigh the magnitude of the investment in virtualization technology and in the migration of existing servers to virtual servers.
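A back-of-the-envelope model illustrates the consolidation effect. The Python sketch below is purely illustrative: the assumption that a modern virtualization host is about five times as powerful as an average legacy server, and the example figure of 400 servers, are our own assumptions and are not drawn from the sources cited above.

import math

def hosts_after_virtualization(physical_servers, avg_utilization=0.20,
                               host_capacity=5.0, target_utilization=0.80,
                               max_vms_per_host=25):
    """Indicative number of virtualization hosts needed (illustrative model)."""
    # Total load, expressed in "old server equivalents" running at 100%.
    load = physical_servers * avg_utilization
    # Hosts needed to carry that load, assuming each host is 'host_capacity'
    # times as powerful as an average old server and is filled to the target level.
    by_load = load / (host_capacity * target_utilization)
    # Hosts needed if the number of VMs per host is capped.
    by_vm_cap = physical_servers / max_vms_per_host
    return max(1, math.ceil(max(by_load, by_vm_cap)))

before = 400
after = hosts_after_virtualization(before)
print(f"{before} dedicated servers -> about {after} virtualization hosts "
      f"({before // after}x fewer physical machines)")

With these assumed figures, 400 dedicated servers shrink to roughly 20 virtualization hosts, which is in line with the reduction factor of five to twenty mentioned above.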
Data storage systems and Storage Area Networks
In recent years, data storage has become fully decoupled from servers by centralizing storage and connecting it to servers via a Storage Area Network (SAN). The SAN is a dedicated network between servers and data storage. These data storage systems contain large numbers of hard disks and are equipped with specialized technologies for efficient redundant data storage.[RAID is an abbreviation of Redundant Array of Independent Disks and is the name given to the methodology for physically storing data on hard drives where the data is divided across disks, stored on more than one disk, or both, so as to protect against data loss and boost data retrieval speed. Source: http://nl.wikipedia.org/wiki/Redundant_Array_of_Independent_Disks.]
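As an illustration of the RAID principle mentioned in the footnote, the sketch below applies the textbook arithmetic for usable capacity at common RAID levels. The disk counts and sizes are arbitrary examples; real storage systems add spare disks and other overhead.

def usable_tb(disks, disk_tb, level):
    """Usable capacity in TB for a single disk group (textbook RAID arithmetic)."""
    if level == "RAID1":           # full mirroring: half of the raw capacity
        return disks * disk_tb / 2
    if level == "RAID5":           # the capacity of one disk is lost to parity
        return (disks - 1) * disk_tb
    if level == "RAID6":           # the capacity of two disks is lost to parity
        return (disks - 2) * disk_tb
    raise ValueError(f"unknown RAID level: {level}")

for level in ("RAID1", "RAID5", "RAID6"):
    print(f"{level}: {usable_tb(disks=12, disk_tb=2, level=level)} TB usable of 24 TB raw")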
This centralization of data storage is transparent to the IT infrastructure layers above it: the operating system or application is unaware that the data is stored centrally via the SAN (see also the notes to Figure 1). If the data storage systems in different data center locations are connected via a SAN, disk writes can be replicated in real time across multiple locations. Centralization of storage systems has considerably increased the utilization of the capacity of these systems.
Combined with server virtualization, SANs not only allow the quick replication of data to multiple locations, but also the simple replication of virtual servers from one location to another. The article “Business continuity using Storage Area Networks” in this Compact looks at SANs in depth as an alternative to tape-based data backup systems.
SANs and central storage equipment are among the most expensive components within the IT infrastructure. A data center strategy should therefore evaluate the investments in data storage systems and the associated qualitative and quantitative advantages.
Cloud computing
Cloud computing refers to a delivery model in which IT infrastructure and application management services are provided via the Internet. Cloud computing is not so much a technological development in itself; it is made possible by a combination of technological developments, including the flexible availability of network bandwidth, virtualization and SANs.
The main advantages of cloud computing are the shift from infrastructure investments to operational costs for renting cloud services (“from capex to opex”), cost transparency (“pay per use”), the consumption of IT infrastructure services according to actual need (“elasticity”) and the high efficiency and speed with which infrastructure services are delivered (“rapid deployment” through fully automated management processes and self-service portals).
Cloud computing differs from traditional IT with respect to the following characteristics ([Herm10]):
- “multi-tenancy” (IT infrastructure is shared across multiple customers)
- rental services (the use of IT resources is separated from the ownership of IT assets)
- elasticity (capacity can be immediately scaled up and down as needed)
- external storage (data is usually stored off-site, at the supplier)
A cloud computing provider must have sufficient processing, storage and transport capacity available to handle increasing customer demand as it occurs. In practice, the maximum upscaling is limited to a percentage of the total capacity of the “cloud”, which places an upper limit on elasticity.
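This ceiling can be expressed in a few lines of Python. The sketch below is a simplified model with invented numbers; the assumption that a single scale-up request is capped at ten percent of total cloud capacity is ours and is not a figure from any provider.

def can_scale_up(requested_units, total_capacity, used_capacity, cap_share=0.10):
    """True if the request fits within both free capacity and the elasticity cap."""
    free = total_capacity - used_capacity
    elasticity_cap = cap_share * total_capacity   # assumed single-request ceiling
    return requested_units <= min(free, elasticity_cap)

print(can_scale_up(requested_units=120, total_capacity=1000, used_capacity=800))  # False
print(can_scale_up(requested_units=50,  total_capacity=1000, used_capacity=800))  # True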
Figure 3 illustrates the variety of forms of cloud services.
Figure 3. Overview of different types of cloud services ([Herm10]).
The main difference between the traditional model of in-house data centers and a private cloud is the flexibility that the private cloud allows. The private cloud makes use of standardized hardware platforms with high availability and capacity, virtualization, and flexible software licensing, where operational costs depend in part on the actual use of the IT infrastructure. The private cloud is not shared with other customers and the data is located “on site”. In addition, access to the private cloud need not be via the Internet; the organization’s own network infrastructure can be used. According to cloud purists, one cannot speak of cloud computing in this case.
The internal private cloud uses the same technologies and delivery models as the external private and public cloud, but without the risk of primary data storage being accessed by a third party. The cost of an internal private cloud may be higher than that of the other types. Nonetheless, for many organizations, the need to meet privacy and data protection directives outweighs the potential cost savings of using the external private or public cloud.
The data center strategy should provide direction on when, and for which IT applications, cloud services will be used. Capacity then no longer needs to be reserved for these applications in the organization’s own data centers.
New style of Disaster Recovery
Two-thirds of all organizations have a data center that serves as a backup site for the primary data center in the event of a serious IT disaster. This is called a “Disaster Recovery Site”. Half of these organizations own such a data center themselves ([Bala07]). This means that about one-third of all organizations have no alternate location, and that the others have either invested in their own facilities or rent them from an IT service provider.
The cost of these fall-back facilities is relatively high, primarily because of the extremely low utilization of their capacity. The technological developments described above offer cost-effective alternatives for a disaster recovery setup.
A high degree of virtualization and a fast fiber-optic network between two data center locations (twin data centers) are the main ingredients for guaranteeing a high level of availability and continuity. Virtualization allows an application to be kept ready in parallel at the backup site without allocating the processing capacity that would normally be needed to run it. In a twin data center, synchronization occurs 24/7 for the data and several times a day for the applications. In the event of a disaster, processing capacity must be rapidly ramped up and allocated to the affected application(s) at the backup site, and the users “redirected” accordingly.
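The failover step can be sketched as a simple capacity check: how much extra processing capacity must be activated at site B to take over the load of site A? The sketch below uses arbitrary “server units” and invented numbers; it is a simplification of the mechanism described above, not an actual disaster recovery tool.

def capacity_to_ramp_up(site_a_load, site_b_capacity, site_b_load, reserve=0.10):
    """Extra capacity (in arbitrary units) to activate at the backup site."""
    usable_b = site_b_capacity * (1 - reserve)      # keep a small operational reserve
    shortfall = (site_b_load + site_a_load) - usable_b
    return max(0.0, shortfall)

extra = capacity_to_ramp_up(site_a_load=60, site_b_capacity=100, site_b_load=40)
print(f"Capacity to ramp up at the backup site: {extra:.0f} units")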
The twin data center concept is not new. The Parallel Sysplex technology from IBM has been available for decades. It allows a mainframe to be set up as a cluster of two or more mainframes at sites that are miles apart; the mainframes then operate as a single logical mainframe that synchronizes both data and processing between the locations. A twin data center also makes it possible to implement Unix and Windows platforms across both sites without incurring double costs.
Cloud computing providers also offer specific services for disaster recovery purposes. An example of a Disaster Recovery service in the cloud is remote backup: backups are no longer written to tape, but stored at an external location of a cloud provider. These backups can be restored at any location where there is an Internet connection.
Cost-effective Disaster Recovery is high on the CIO agenda and thus is a strong motivation to invest in data centers and cloud initiatives. Accordingly, a data center strategy should pay appropriate attention to how data center investments address the issue of Disaster Recovery.
High-density devices
Virtualization allows the consolidation of a large number of physical servers onto a single powerful physical server. The utilization of this powerful server is significantly higher than that of separate physical servers (on average eighty percent for a virtualized cluster versus twenty percent for a single server). This means that a highly virtualized data center has significantly higher processing capacity per square meter. In recent years, hardware vendors have introduced ever larger and more powerful servers, such as the IBM Power 795, Oracle Sun M8000/M9000 and HP 9000 Superdome. Over the last twenty years there was a shift from mainframe data processing to more compact servers; now the trend seems to be reversing, toward so-called “high-density devices”.
A direct consequence is a higher energy requirement per square meter, not just to power these servers but also to cool them. Existing data centers cannot always meet the higher power and cooling requirements, so the available space is not optimally utilized. In addition, such systems are so heavy that the bearing capacity of data center floors is not always sufficient, and it may be necessary to strengthen the raised computer floor.
This makes it a challenge for data center operators to balance the increasing physical concentration of IT equipment and virtualization against the available power, cooling and floor capacity. The paradox is that the use of cost-effective virtualization techniques quickly pushes existing data centers to their limits, which gives rise to additional costs ([Data]).
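The power density paradox can be made concrete with a simple calculation. The sketch below compares the power draw of a planned row of high-density racks with what an older computer floor can supply and cool per square meter; the 15 kW per rack and the 1.5 kW per square meter design limit are illustrative assumptions, not measurements from a specific data center.

def rack_row_fits(racks, kw_per_rack, m2_per_rack, hall_kw_per_m2):
    """Compare the power density of a planned rack row with the hall's design limit."""
    demand = (racks * kw_per_rack) / (racks * m2_per_rack)   # kW per square meter
    return demand <= hall_kw_per_m2, demand

ok, density = rack_row_fits(racks=10, kw_per_rack=15, m2_per_rack=2.5, hall_kw_per_m2=1.5)
print(f"Required density: {density:.1f} kW/m2 -> {'fits' if ok else 'exceeds hall design'}")

In such a case only part of the floor can actually be populated, which is precisely the paradox described above.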
A data center strategy must allow for the prospect of placing high-density devices in existing or new data centers.
Data center “in a box”
The concept of a data center “in a box” refers to the development in which processing, storage and network equipment is “clustered” into logical units. Such a cluster is created by linking together racks of equipment that have redundant provisions for power and cooling. A data center “in a box” can also be constructed in existing data centers: the equipment, power and cooling are matched such that high-density devices can be placed in “old-fashioned” data centers.
The advantage of this concept is that no physical changes are required between the one-off installation of the cluster technology and the point at which the maximum processing or storage capacity is reached. This allows most management activities to be carried out entirely remotely.
A fitting example of a data center “in a box” is “container-based computing” where just such a cluster is built into a 20 or 40 foot shipping container. Similar mini data centers have been used for many years by the military as temporary facilities for use at remote locations. A more recent development is the use of mini data centers in shipping containers as modules in a large scalable data center. A few years ago, Google even applied for a patent for this method ([Goog10]).
A data center strategy should indicate what contribution the data center “in a box” concept will make.
Automation of IT operations processes
A significant portion of the cost of operating a data center is for personnel. Moreover, the extensive automation of deployment processes reduces the delivery cycle of IT projects from months to weeks.
There is a noticeably strong trend to extensively automate IT operations processes in the data center. This also includes traditional management tools (workflow tooling for ITIL[Information Technology Infrastructure Library, usually abbreviated to ITIL, was developed as a reference framework for setting up management processes within an IT organization. http://nl.wikipedia.org/wiki/Information_Technology_Infrastructure_Library.] administration processes and the CMDB[CMDB: Configuration Management Database, a collection of data where information relating to the Configuration Items (CI’s) is recorded and administered. The CMDB is the fulcrum of the ITIL management processes.]) integrated with tools for the modeling of the relationship between business processes, applications and the underlying IT infrastructure (“Business/IT alignment”), performance monitoring, automated testing, IT costs and resource planning, IT project and program planning, security testing and much more. An example of such an IT operations tool suite is HP’s Business Technology Optimization (HP BTO) ([HPIT]).
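As a purely hypothetical illustration of this kind of process automation, the sketch below shows an automated impact check against a CMDB-like record before a change is approved. The data model, configuration items and function names are invented for this example and do not correspond to HP BTO or any other tool suite.

# Hypothetical sketch: an automated pre-change impact check against a CMDB-like
# record. The structure and checks are invented for illustration only.
CMDB = {
    "app-finance":  {"depends_on": ["db-cluster-1", "san-volume-7"], "owner": "Finance IT"},
    "db-cluster-1": {"depends_on": ["san-volume-7"], "owner": "DBA team"},
}

def impacted_items(ci, cmdb):
    """Return all configuration items that directly depend on `ci`."""
    return [name for name, item in cmdb.items() if ci in item.get("depends_on", [])]

def approve_change(ci, cmdb):
    impacted = impacted_items(ci, cmdb)
    # A real workflow would notify owners, schedule a window, run tests, and so on.
    print(f"Change on {ci}: notify owners of {impacted or 'no dependent items'}")

approve_change("san-volume-7", CMDB)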
The extensive automation of IT operations processes and the use of central storage and virtualization enables IT organizations to manage data centers with a minimum of personnel. Only the external hardware vendors still need physical access to the computer floors in the data center and only within tight maintenance windows. Otherwise, the data center floor is unmanned. This is called the lights-out principle because the absence of the personnel in the data center means that the lighting can be practically turned off permanently. Again, this is not a new concept. Nonetheless, the use of central storage and virtualization reduces the number of physical operations on the data center floor to a minimum, which brings us a great deal closer to the lights-out principle.
The automation of IT operations processes has far-reaching implications for the operational procedures, competencies and staffing of IT departments. This should receive sufficient attention in the data center strategy.
Market developments
This section discusses the future of data centers as seen by several trendsetting vendors of IT services and solutions.
IT service providers such as Atos Origin formulate a data center vision to better meet the needs of their customers. Atos Origin identifies the following initiatives in its data center vision ([Atos]):
- reduction in costs and faster return on investment
- quicker response to (changing) business requirements (agility)
- availability: the requirement has grown to “24/7 forever”
- security and continuity: increased awareness, partly due to terrorist threats
- compliance: satisfy industry and government mandated standards
- increase in density requirements: the ability to manage high-density systems with sharply increasing energy consumption and heat production
- increase in energy efficiency: utilization of more energy-efficient IT hardware and cooling techniques
Cisco’s data center vision ([Cisc]) emphasizes increased flexibility, operational efficiency and the breaking apart of traditional application silos. Cisco identifies a prerequisite, namely the improvement of risk management and compliance processes in data centers to guarantee the integrity and security of data in virtual environments. Cisco outlines a development path for data centers with a highly heterogeneous IT infrastructure, passing through several stages of consolidation, standardization, automation of administration and self-service, and leading to cloud computing.
IBM uses modularity to increase the stability and flexibility of data centers ([IBM10]) (“pay as you grow”). The aim is to keep both investment and operational costs to a minimum. Reducing energy consumption is also an important theme for IBM, because much of the investment in and operational cost of a data center is energy related. IBM estimates that approximately sixty percent of the investment in a data center (particularly the technical installations for cooling and redundant power supplies) and fifty to seventy-five percent of the non-personnel operating costs (power consumption by data center and IT equipment) are energy related. According to IBM, the increasing energy demands of IT equipment require data center designs that anticipate a doubling or tripling of energy needs over the lifetime of a data center.
Just like Cisco, Hewlett Packard (HP) has identified a development path for data centers ([HPDa]), in which there is a shift from application-specific IT hardware to shared services based on virtual platforms and automated management, and then on to service-oriented data centers and cloud computing. In this context, HP promotes its Data Center Transformation (DCT) concept as an integrated set of projects for consolidation, virtualization and process automation within data centers.
The common thread in these market developments is the reduction of operational costs and the increased flexibility and stability of data center services, achieved by reducing the complexity of the IT infrastructure and by a strong commitment to virtualization and energy-efficient technologies. Cloud computing is seen as the logical next step in the consolidation and virtualization of data centers.
Challenges in data center consolidation
Data center consolidation is all about bringing together a multitude of outdated and inefficient data centers and computer rooms into one or a limited number of modern, green data centers. At first glance, this seems like a technical problem involving not much more than an IT relocation. Nothing is further from the truth. Organizations struggle with questions such as: How do we involve the process owners in making informed decisions? Do we understand our IT infrastructure well enough to carry this out in a planned and controlled manner? How do we limit the risk of disruption during the migration? How large must the new data center be to be ready for the future? Or should we simply take the step to the cloud? What are the investment costs and the expected savings of a data center consolidation path?
In brief, it is not easy to prove that the benefits of data center consolidation outweigh the costs and risks. In the next section, we briefly discuss the challenges associated with data center consolidation and the migration of IT applications between data centers.
Data center consolidation risks
Data center consolidation requires a large number of well-managed migrations within a short period of time. Simultaneously, “the shop” must remain open. This makes these endeavors highly complex and inherently risky:
- The time available to complete a migration phase is limited and brief. High availability requirements force migrations to be carried out within a limited number of weekends per year.
- The migration or relocation of applications in a way that does not jeopardize data or production requires sophisticated fall-back scenarios. These fall-back scenarios add complexity to the migration plans and usually halve the time in which migrations can be carried out.
- The larger the scale of the migrations, the greater the complexity. The complexity of migration scenarios increases with the number of underlying technical components and the number of vendors of hardware, applications and management services. This increases the risk of losing oversight and of making outright mistakes.
In the following sections, we look at mitigation measures within the migration method and organization that reduce the risks of data center migrations to a manageable level.
Reducing project risks
The complexity of a data center migration makes it critical that the migration project be set up in a structured manner to reduce risk. The goal is to identify risks continuously, proactively and uniformly throughout the project, weigh them in a consistent manner, and manage them proactively through appropriate mitigation measures.
A typical data center migration project consists of a thorough analysis of the environment to be migrated, thorough preparation in which the IT infrastructure is broken into logical infrastructure components that will each be migrated as a whole, and subprojects for the migration of each of these components. Each migration subproject requires the development of migration plans and fall-back scenarios, the performance of automated tests, and the comprehensive testing of each scenario. Indeed, comprehensive testing and dry runs of the migration plans in advance significantly reduce the likelihood that a fall-back will be needed during the migration.
Minute-to-minute plans must be drawn up because it is essential to perform all actions in the correct sequence or simultaneously. Examples of such actions are the deactivation and reactivation of hardware and software components. The scale and complexity of these plans require that they be supported by automated tools, much like the management of real-time processes in a factory.
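Such a minute-to-minute plan is essentially a dependency-ordered task list with timings. The sketch below shows one possible way to represent and order it; the tasks and durations are invented for illustration, and the timeline assumes purely sequential execution.

# Minimal sketch: order migration actions so that every action runs after the
# actions it depends on, and print a cumulative timeline (sequential execution).
from graphlib import TopologicalSorter   # Python 3.9+

TASKS = {                                # task: (duration in minutes, dependencies)
    "stop application":      (10, []),
    "final data sync":       (45, ["stop application"]),
    "shut down source VMs":  (10, ["final data sync"]),
    "start VMs at new site": (20, ["shut down source VMs"]),
    "smoke tests":           (30, ["start VMs at new site"]),
    "redirect users (DNS)":  (15, ["smoke tests"]),
}

order = TopologicalSorter({t: set(d) for t, (_, d) in TASKS.items()}).static_order()
clock = 0
for task in order:
    duration = TASKS[task][0]
    print(f"T+{clock:3d} min  {task} ({duration} min)")
    clock += duration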
Reducing migration risks
There are different methods for migrating applications and technical infrastructure. These methods are illustrated in Figure 4, along with a brief list of their advantages and disadvantages.
Figure 4. Data center migration methods, advantages and disadvantages.
A physical move, the “lift and shift” method, carries the inherent risk that hardware failures arise during deactivation, transport and reactivation. If these failures cannot be resolved quickly, there is no fall-back scenario to rely on.
In a physical migration (P2P), an equivalent IT infrastructure is built at location B and the data and copies of the system configurations are transferred over the network. The advantage of this method is the relative ease of migration. The disadvantage is that there is no technological progress and thus no efficiency gains, such as higher utilization of servers and storage systems.
In the virtualization approach (P2V), a virtualization platform is built at the new location B and the applications are virtualized and tested there. The actual data is then migrated over the network. The disadvantage of this scenario is the uncertainty introduced by virtualizing all applications: changes to the production application at location A must also be applied to the virtualized environment at location B. The advantage is that a significant improvement in efficiency can be achieved, because the same applications need significantly less hardware after the migration.
The virtual migration (V2V) assumes a high degree of virtualization at location A, so that it is fairly simple to transfer data and applications to a similar virtualization platform at location B. This migration approach is similar to the way a twin data center replicates applications and data across sites. Its limitation is that, in practice, not all applications are virtualized.
In practice, a combination of these migration methods is used, depending on the nature of the platforms to be rehoused.
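The choice can be structured per platform with a few simple rules. The helper below is an illustrative simplification of the considerations above, not a prescriptive method; the three input attributes are our own shorthand for the trade-offs discussed in this section.

def migration_method(already_virtualized, can_be_virtualized, tolerates_downtime_risk):
    """Indicative migration method per platform, based on the trade-offs above."""
    if already_virtualized:
        return "V2V (virtual migration)"
    if can_be_virtualized:
        return "P2V (virtualize first, then migrate)"
    if tolerates_downtime_risk:
        return "lift and shift (physical move)"
    return "P2P (rebuild equivalent hardware at the new location)"

print(migration_method(already_virtualized=False,
                       can_be_virtualized=True,
                       tolerates_downtime_risk=False))
# -> P2V (virtualize first, then migrate)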
Cost-benefit assessments
Choosing the right mix of migration methods requires striking a balance between migration costs and risks. Heavily reducing the migration risks could lead to an outcome in which the same technical standards are used as before the migration, which limits the possibility of achieving cost and efficiency benefits from technological advances. Ideally, the technical architecture after the migration aligns well with the technical standards of the IT management organization. If data center management is outsourced, alignment should be sought with the “factory standards” of the IT service provider.
Managing too strictly on reducing migration risks will lead to disappointment in the operational cost savings after the migration, because there is insufficient alignment with the service provider’s standards. The migration scenario is thus a trade-off between an acceptable migration risk, the requirements an application dictates (for example its CIA classification: Confidentiality, Integrity, Availability), and the costs of the migration itself and of the operational phase afterwards.
What data center strategy is suitable for your organization?
The technological and market developments described in this article may lead to a reevaluation of the existing data center strategy. One can construct a new data center, redevelop the existing data center, host the IT infrastructure partially or entirely with a third party, or combine hosted infrastructure with cloud computing services. By way of illustration, we discuss three possible choices in a data center strategy.
1. Constructing your own data center
Constructing large new data centers is a trend that is particularly noticeable among contemporary Internet giants such as Apple, Google and Facebook. Even though renting space from IT providers is relatively simple, the trend of constructing one’s own data centers continues. Enterprises no longer want to be constrained by restrictions that may result from placing IT equipment with a third party. In addition, organizations no longer want to be dependent on service agreements, hidden limitations in the services provided, or the “everything at additional cost” formula.
Another consideration when building your own data center is that organizations still want to keep their own data close at hand. This is apparent not just from the popularity of private clouds, but also from the fact that many organizations are still struggling with concerns about security and control over the underlying infrastructure. This is why organizations that predominantly earn their revenue by providing web services or IT support services would rather remain the owner of the entire IT infrastructure, including the data center.
Cost considerations also play a significant role in the choice to construct a new data center. Although the use of third-party housing and hosting services may at first seem financially attractive, organizations would ultimately rather convert recurring monthly expenses into an investment of their own. This is especially true if the return on investment is greater when doing it yourself than when utilizing housing and hosting.
Green IT is a major development that affects the choice to construct a new data center, especially where data center facilities are used on a large scale. For many organizations, constructing and owning a data center is then more efficient and cost effective than using a provider.
2. Redevelopment of an existing data center
Although the redevelopment of an existing data center may at first appear to be the cheaper option, redevelopment can quickly turn into a hugely complex project and eventually cost millions more than new construction. The complexity arises mainly because the IT infrastructure must remain available while the data center space is being redeveloped. Work often takes place close to expensive hardware that is sensitive to vibration, dust and temperature fluctuations. In addition, staff of one or more contractors have access to the data center where the organization’s confidential information is stored, which gives rise to additional security risks.
Nevertheless, the redevelopment of an existing data center also has advantages. Redevelopment does not require a detailed migration plan for moving hardware from location A to location B. Sometimes decisions go beyond cost considerations and technology motivations: if an organization’s management believes it maintains a competitive advantage by keeping the data center at headquarters, it will be considerably less inclined to build a new data center at another location.
3. Outsourcing (parts of) the IT infrastructure or using cloud services
Outsourcing (parts of) the IT infrastructure can also be a way to avoid new construction or redevelopment costs. However, outsourcing IT can cost just as much, if not more. Many organizations consider cloud services from third parties because they expect significant cost savings. The time-to-market is indeed relatively short, because there is no need for hardware selection and installation projects. However, recent research shows that outsourcing based on new technology does not necessarily reduce costs or deliver more flexibility than constructing or redeveloping your own data center ([Koss10]).
Examples of data center strategies
Data center strategy within the National Government
In the letter Minister Donner sent to the House of Representatives on 14 February 2011 ([Rijk]), he announced that, within the scope of the Government Reduction Program, the number of data centers of the central government would be drastically reduced from more than sixty to four or five. A consolidation of data centers on this scale had not previously been carried out in the Netherlands. It involved many departments, benefits agencies and a large number of data centers working to European or international standards, making it a singular challenge. Edgar Heijmans, the program manager of Consolidation Datacenters, states ([Heijm]) that this is a necessary step toward the use of cloud services within the national government. In the long-term plan for the chosen approach he identifies the following steps: common data center housing, common data center hosting and finally the sharing of an application store in a government cloud. KPMG has been involved both in preparing the business case for data center consolidation for the government and in a comprehensive analysis of the opportunities and risks of cloud computing within the national government.
International bank and insurer
An international bank-insurer combination had a data center strategy in which about fifteen data centers in the Benelux would be consolidated into three modern, newly constructed data centers. Some years ago, when this strategy was formed, it was not yet known that the crisis in the financial sector would force growth projections to be revised downwards, or that, in 2010, the banking and insurance activities would be split into two separate companies. The crisis and the split had a significant impact on the business case for the planned data center consolidation. KPMG was involved, with an international team, in the reassessment of the data center strategy and the underlying business case.
European insurer
A few years back, when this major insurance company outsourced its IT infrastructure management activities to a number of providers, it was already known that its data centers were outdated. The insurer had experienced all sorts of technical problems, from leaky cooling systems to weekly power outages. The strategy of this insurer was to accommodate the entire IT infrastructure in the provider’s data centers in the Netherlands and Germany. The migration of such a complex IT infrastructure, however, required a detailed understanding of the relationships between the critical business chains, applications and underlying technical infrastructure. At the time this Compact went to press, the insurer was completing the project to empty its existing data centers and move their contents to the data centers of its provider. It has chosen to virtualize the existing systems and to carry out the “virtual relocation” of the systems and associated data during a limited number of weekends. KPMG was brought into this project to set up the risk management process.
Conclusions
Our experience shows that there is no magic formula that clearly points to modernization, redevelopment or outsourcing of data centers. The principles of a good data center strategy should be aligned with the business objectives, investment opportunities and “risk appetite” of the organization. The technological and market developments described in this article make long-term decisions necessary. The central theme is “do more with less”. “With less” in the sense of consolidating data centers and server farms through server virtualization, which also means that the same processing capacity requires less energy. “Do more” in the sense of more processing capacity for the same money and new opportunities to accommodate Disaster Recovery in existing data centers.
These innovations require large-scale migrations within and between data centers, and this is coupled with significant investments, costs and migration risks. To reduce these risks to an acceptable level, proper assessments must be made of the costs and risks incurred during the migration and during the operational phase afterwards. Drawing on our experience, this article has given a few examples of data center strategies, namely the construction of a new data center, the redevelopment of an existing data center, and the outsourcing of data center activities.
Literature
[Bala07] Balaouras and Schreck, Maximizing Data Center Investments for Disaster Recovery and Business Resiliency, Forrester Research, October 2007.
[Barr07] L.A. Barroso and U. Hölzle, The Case for Energy-Proportional Computing, Google / IEEE Computer Society, December 2007.
[Cisc] Cisco Cloud Computing – Data Center Strategy, Architecture and Solutions, http://www.cisco.com/web/strategy/docs/gov/CiscoCloudComputing_WP.pdf.
[Data] Data Center Optimization, Beware of the Power Density Paradox, http://www.transitionaldata.com/insights/TDS_DC_Optimization_Power_Density_Paradox_White_Paper.pdf.
[Fria08] Friar, Covello and Bingham, Goldman Sachs IT Spend Survey 2008, Goldman Sachs Global Investment Research.
[Goog10] Google Patents ‘Tower of Containers’, Data Center Knowledge, June 18th, 2010, http://www.datacenterknowledge.com/archives/2010/06/18/google-patents-tower-of-containers/.
[Herm10] J.A.M. Hermans, W.S. Chung and W.A. Guensberg, De overheid in de wolken? De plaats van cloud computing in de publieke sector (Government in the clouds? The place for cloud computing in the public sector), Compact 2010/4.
[HPDa] HP Data Center Transformation strategies and solutions, Go from managing unpredictability to making the most of it: http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-6781ENW.pdf.
[HPIT] http://en.wikipedia.org/wiki/HP_IT_Management_Software.
[IBM10] Modular data centers: providing operational dexterity for an increasingly complex world, IBM Global Technology Services, November 2010, ftp://public.dhe.ibm.com/common/ssi/ecm/en/gtw03022usen/GTW03022USEN.PDF.
[Kapl08] Kaplan, Forrest and Kindler, Revolutionizing Data Center Energy Efficiency, McKinsey & Company, July 2008: http://www.mckinsey.com/clientservice/bto/pointofview/pdf/Revolutionizing_Data_Center_Efficiency.pdf.
[Koss10] D. Kossmann, T. Kraska and S. Loesing, An evaluation of alternative architectures for transaction processing in the cloud, ETH Zurich, June 2010.
[Meis09] D. Meisner, B.T. Gold and T.F. Wenisch, PowerNap: Eliminating Server Idle Power, ASPLOS ’09, Washington DC, USA, March 2009.