
Friday, March 21, 2014


Questions to Ask Yourself When Selecting a Data Center: Is the infrastructure built to meet or exceed Tier III standards?

According to a study by the Ponemon Institute, the cost of data center downtime across industries is approximately $7,900 per minute, which is a 41% increase from the $5,600 cost in 2010. This same study also showed that 91% of data centers have experienced an unplanned outage in the past 24 months. (Read more about the average costs involved with outages for 2013 here). Facility outages are not only financially devastating, but seriously harmful to an organization’s reputation.
Thankfully, the data center industry has adopted a standardized methodology for rating availability, which can help you determine what is right for your business and make an informed decision. Developed by the Uptime Institute, this tiered system offers companies a way to measure both performance and return on investment (ROI).
To be considered a Tier III facility, a data center must meet or exceed the following standards:
  • Multiple independent distribution paths serving the IT equipment
  • Concurrently maintainable site infrastructure with expected availability of 99.982%
  • 72 hour power outage protection
  • All IT equipment must be dual-powered and fully compatible with the topology of a site’s architecture
Another important element in Tier III compliance is N+1 redundancy on every main component, which provides greater protection and peace of mind for crucial IT operations by ensuring a redundant system is always available in case a component fails or must be taken offline for maintenance.
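
To put that standard in concrete terms, here is a quick back-of-the-envelope conversion, added here for illustration (it is not part of the Uptime Institute specification), from the 99.982% expected availability above into allowable downtime per year:

```python
# Illustrative only: convert an availability percentage into allowed downtime per year.
HOURS_PER_YEAR = 24 * 365

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a facility may be down while still meeting the given availability."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

tier_iii = 99.982  # expected availability for a concurrently maintainable (Tier III) site
print(f"Tier III: {annual_downtime_hours(tier_iii):.1f} hours of downtime per year")        # ~1.6 hours
print(f"Tier III: {annual_downtime_hours(tier_iii) * 60:.0f} minutes of downtime per year")  # ~95 minutes
```

At roughly $7,900 per minute of downtime, even that small annual allowance is a reminder of why redundancy matters.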

Each LightEdge data center, including the new Kansas City facility currently being built at SubTropolis Technology Center, meets or exceeds the concurrent maintainability requirements of the Uptime Institute's Tier III standards. With our Tier III infrastructure, any one component can fail and the data center will remain operational.

LightEdge’s Kansas City data center is scheduled to open during the spring of this year. Check out our Facebook, Twitter, Google+, and LinkedIn pages for the most recent photos of our construction progress. Download the spec sheet to learn more about the facility here.


Some things to consider when running a data center

As businesses go increasingly digital, the need for data centers to secure company information is more important than ever before. It is not only tech giants like Google and Facebook that need a place to house their information - businesses across the healthcare, government and industrial sectors are looking to data centers as a solution to their storage needs. But running a data center is not something that can be done impulsively. Whether your company has the funds and scale of operations to occupy its own center or ends up looking into existing facilities, here are some important considerations to keep in mind to maximize an enterprise data center operation.

Consider renewable energy solutions
 
In Hollywood movies, data centers are generally represented as massive, noise-intensive operations that actively drain energy out of whatever area they are occupying. This public perception of such facilities is understandable given that data centers must rely on a constant supply of energy - after all, their functionality depends on remaining active at all times. But just because they harness energy sources does not mean data centers can't function in an environmentally-minded, sustainable way.
Just ask Google, a company that has been dealing with data storage needs ever since it rented its first data storage facility - a closet-sized, 7 foot by 4 foot operation with a mere 30 computers - in 1998, according to CNET. Google has come a long way since then, and so has its dedication to sustainable methods of data center operation. The tech giant now has a vast network of data centers spanning the globe.
What unites Google's facilities is a singular commitment to renewable energy. With renewable energy currently powering more than a third of Google's data storage facilities, the company is always looking for ways to expand its use of solar and wind power, according to its site. Because it is challenging to have a renewable power generator on location, the company did the next best thing: it reached out to renewable energy providers in the area - such as wind farms - and made deals to buy energy from them. Among Google's energy suppliers are wind farms in Sweden and Oklahoma. Through these sources, the company is not only able to maintain solid data room cooling practices, but also to benefit the local community.

Have good outside air cooling

When it comes to maintaining an optimal data room temperature, it's best to follow the lead of companies well-versed in data storage. Google and Microsoft are two such businesses, and they both share a commitment to harnessing natural resources to keep their data centers cool.
In Dublin, Microsoft has invested more than $800 million to date in order to build a data center that covers almost 600,000 square feet. The enormous size of the facility would seem to present a major cooling challenge, but the company has been able to surmount that by using fresh air cooling, Data Center Knowledge reported. By building the center in Ireland, where the temperature remains optimal for data room cooling, Microsoft is able to maximize the location as a natural cooling solution - a move that saves significant energy costs while keeping the company environmentally friendly as well. And its commitment to environmentally sound solutions does not end with cooling: the center also recycles 99 percent of the waste it produces.
Google has a similarly cooling-minded approach with its data facility in Finland, which it hopes will be almost completely powered by wind energy by 2015, according to Data Center Knowledge. The wind energy will come from a wind park located nearby. But the center is not waiting until then to implement good temperature practices. Instead of relying on chillers and other machine-based cooling techniques, Google uses seawater from the nearby Gulf of Finland to cool the facility. Its efforts in Finland are part of a broader effort to expand the Google sphere of influence.
"The Google data center in Hamina offers Eastern Finland a tremendous opportunity to jump from the industrial to digital age," said Will Cardwell, a professor at a nearby university.

But just as important as what goes on inside a center is the environment around it. That is because data centers are invariably affected by their physical surroundings. With that in mind, here are some more things to look into in order to maximize your data center potential.

Choose the location wisely

Considering that data centers are necessarily connected to the physical environment they inhabit, it is important to pinpoint the best location possible. Data centers are always going to require top-notch capabilities to maintain a good server room temperature, but the ease with which that happens can depend on the location of the center. As always, Google is at the top of the game with regard to location selection. Its Hamina, Finland center is strategically placed near the Gulf of Finland, enabling an easy and natural data room cooling solution.
But Google is not the only company maximizing natural environments for data center growth. Iron Mountain specializes in underground data center solutions, according to Data Center Knowledge. Formerly a storage company for physical records, Iron Mountain already had a 145-acre underground storage facility in a former limestone mine before it got into the data center business. This location turned out to be perfect for data center needs. Blocked from the sunlight and other external heat sources, the underground facility stays at about 52 degrees without any kind of additional cooling function. An underground lake provides further protection against ever needing to bring in a machine cooling system. The company's so-called "data bunker" gained so much popularity that Iron Mountain decided to expand its sphere of operations.

Give back to the community the center is in

Data centers often require a large staff to operate. Fortunately, they're usually built near communities from which workers can be hired. But as much as data centers plan to benefit from the community they inhabit, it is just as important to look for ways to give back. This kind of behavior encourages connectedness with the community and improves the reputation of the center - and therefore the company - in the public eye.
Google paid special attention to the local community as it developed its Hamina center. When it began mapping out the concept for the center, Google realized that construction would take about 18 months, so it turned to the locals for help. In the process, it provided steady employment for 800 workers in the engineering and construction sectors, according to Data Center Knowledge. Google's willingness to involve locals in the construction process helped forge a lasting bond between the tech giant and the city.
This bond did not go unnoticed.
"Google's investment decision is important for us and we welcome it warmly," Finnish president Jyrki Katainen said.
And for those who work at the center, life is good.
"No two days are the same as we change our roles around frequently to keep things fresh and new," said Julian Cooper, a hardware operations worker at the facility.

Be prepared to surmount environmental obstacles

In the event of a disaster like a hurricane or earthquake, it is vitally important for all enterprises - especially data centers - to make sure their stock is safe. Iron Mountain understands the principle of environmental preparedness quite well, which is why it offers underground data storage solutions. By storing data underground, Iron Mountain protects it against virtually any conceivable natural disaster. This nature-proof construction is especially important for companies like Marriott, which chose to house data at the Iron Mountain bunker because of the sense of complete security it afforded.
"We have always had a rigorous and constant focus on having disaster preparedness in place," said Marriott operational vice president Dan Blanchard. "Today we have a data center that provides Marriott with a tremendous capability for disaster recovery, and we have a great partner in Iron Mountain."
According to tech journalist David Geer, earthquakes pose a huge threat to data centers in many areas around the world, since they can be difficult to predict and potentially cause large-scale damage. If a company intends to build its facility in an area susceptible to earthquakes, it should apply the most stringent safeguards, including building a center that is capable of withstanding a quake one degree higher than the requirement for the zone it occupies.

CHARLES DOUGHTY
Iron Mountain
Charles Doughty is Vice President of Engineering, Iron Mountain, Inc.
We’re all familiar with Moore’s Law, which states that the number of transistors on integrated circuits doubles approximately every two years. Whether we measure transistor growth, magnetic disk capacity, the square law of price to speed versus computations per joule, or any other measurement, one fact persists: they’re all increasing, and doing so exponentially. This growth is the cause of the density issues plaguing today’s data centers. Simply put, more powerful computers generate more heat, which results in significant additional cooling costs each year.
Today, a 10,000 square-foot data center running about 150 watts per square foot costs roughly $10 million per megawatt to construct, depending on location, design and cost of energy. If the approximately 15 percent rate of data growth of the last decade continues over the next decade, that same data center would cost $37 million per megawatt. A full thirty percent of these costs are related to the mechanical challenges of cooling a data center. While the industry is experienced with the latest chilled water systems and high-density cooling, most organizations aren’t aware that Mother Nature can deliver the same results for a fraction of the cost.
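
As a back-of-the-envelope check (added here, not from the original article), compounding the per-megawatt build cost at roughly 14 to 15 percent a year for a decade reproduces the jump from $10 million to the high-$30-million range:

```python
# Rough illustration: compound the per-megawatt build cost at an assumed annual growth rate.
# The $10M starting point and ~15% growth figure come from the text above; the exact endpoint
# depends on the rate and horizon chosen, so treat this as a sanity check, not a forecast.
def projected_cost(cost_now: float, annual_growth: float, years: int) -> float:
    return cost_now * (1 + annual_growth) ** years

for rate in (0.14, 0.15):
    cost = projected_cost(10e6, rate, 10)
    print(f"{rate:.0%} annual growth: ${cost / 1e6:.1f}M per megawatt after 10 years")
# 14% annual growth: $37.1M per megawatt after 10 years
# 15% annual growth: $40.5M per megawatt after 10 years
```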

The Efficiency Question: Air vs. Water

Most traditional data centers rely on air to cool the facility directly. When we analyze heat transfer formulas, it turns out water is much more efficient at cooling a data center, and the difference is in the math, namely the denominator:
[Figures omitted: the sensible-heat formulas for air and water. Required airflow: CFM = BTU/hr / (1.08 x ΔT°F). Required water flow: GPM = BTU/hr / (500 x ΔT°F). The larger constant in the water formula's denominator is the difference referred to above.]
With the example above, the energy consumed by the 10,000 square-foot data center creates over 5 million BTUs of heat rejection per hour. Using the formulas in the figures above and assuming a standard delta T of 10 degrees, this data center would require more than 470,000 cubic feet per minute (CFM) of air to cool the facility, but only 1,000 gallons of water per minute. In order to cool this data center, the system would need between 150 and 200 horsepower to convey that many cubic feet of air per minute, but only 50 to 90 horsepower to convey 1,000 gallons per minute – roughly 462 times more efficient! If analyzed on a per-cubic-foot basis – one cubic foot of air to one cubic foot of water – water is actually about 3,400 times more efficient than air.
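
The arithmetic behind those figures can be reproduced with the standard sensible-heat constants (a sketch added here, using 1.08 for air in CFM and 500 for water in GPM, both in BTU/hr terms):

```python
# Reproduce the air vs. water flow requirements from the example above:
# ~5 million BTU/hr of heat rejection and a 10°F delta T.
heat_btu_per_hr = 5_000_000
delta_t_f = 10

required_cfm = heat_btu_per_hr / (1.08 * delta_t_f)   # cubic feet of air per minute
required_gpm = heat_btu_per_hr / (500 * delta_t_f)    # gallons of water per minute

print(f"Air:   {required_cfm:,.0f} CFM")    # ~463,000 CFM (the article rounds up to 470,000+)
print(f"Water: {required_gpm:,.0f} GPM")    # 1,000 GPM

# Compare the two flows on a volume basis (1 US gallon = 0.1337 cubic feet)
water_cfm_equivalent = required_gpm * 0.1337
print(f"Volume ratio: {required_cfm / water_cfm_equivalent:,.0f}x")   # ~3,460x, in line with the ~3,400x cited
```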

Physics 101: The Thermodynamics of the Underground

However, for an underground data center, there's more at work. In a subterranean environment, Mother Nature gives you a consistent ambient temperature of about 50 degrees, so the facility starts out cool and depends far less on mechanical cooling systems. Further efficiencies come from using an underground water source or aquifer.
The ideal environment for a subterranean data center is made of aquifers, or stone that has open porosity like basalt, limestone and sandstone; aquicludes, such as dense shales and clays, will not work as effectively. In a limestone subterranean environment, heat rejection can increase from 4 to 500 percent because of the natural heat-sink characteristics of the stone: the limestone absorbs heat, further reducing the need for mechanical cooling. The most appealing implication here is that the stone can manage the energy fluctuations and peaks inherent to any data center.
As the water system funnels 50-degree water from the aquifer to cool the data center, the heat is rejected into the water, which is then returned about 10 degrees warmer. Mother Nature deals with that heat by obeying the second law of thermodynamics, which governs equilibrium and the transfer of energy. For the subterranean data center operator, this means working within the conductivity of the surrounding rock, so it is important to be knowledgeable about the lithology and geology of the local strata, along with understanding the effects of a continuous natural water flow and the psychrometric properties of air.

The Cost of Efficiency

Of course, there are other data center cooling strategies being used aside from the subterranean lake designs including well systems, well point systems and buried pipe systems to name a few. Right now, well systems are being used in Eastern Pennsylvania to cool nuclear reactors producing hundreds of megawatts of energy with mine water. Well point systems are generally used in residential applications, but the concept doesn’t scale well without becoming prohibitively expensive. Buried pipe systems are used quite a bit and require digging a series of trenches backfilled with a relatively good conductive granular material, but beyond 20-30 kilowatts, this method does not scale well.
How much does each of these methods cost? An underground geothermal lake design will cost less than $500 per ton, while well-designed chilled water systems range from $2,000 to $4,000 a ton. The discrepancy in cost is created by the mechanics – in a geothermal lake, there are no mechanics: water is simply pumped at grade. Well and buried pipe systems can cost more than $5,000 a ton, and these systems do not scale very well.
By understanding Mother Nature and using her forces to our advantage, we can increase the capacity and further improve on the effectiveness of the geothermal lake design. By drilling a borehole from the surface into the cavern, air transfer mechanisms can easily be incorporated; anytime the air at the surface is at or below 50°, that cool air will drop into the mine. Even without motive force or air handling units, a four to five foot borehole can contribute about 30,000 cubic feet of air per minute! If an air handling unit is added, the 30,000 CFM of natural flow can easily become 100,000-200,000 CFM. What was a static geothermal system is now a dynamic geothermal cooling system with incredible capacity at minimal incurred cost.
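
For a sense of scale (a sanity check added here, not a figure from the article), 30,000 CFM through a roughly five-foot borehole implies a very ordinary air velocity:

```python
# Sanity check: what air velocity does 30,000 CFM through a ~5 ft diameter borehole imply?
import math

cfm = 30_000           # natural airflow cited above
diameter_ft = 5.0      # assumed borehole diameter (the text says four to five feet)
area_sq_ft = math.pi * (diameter_ft / 2) ** 2

velocity_fpm = cfm / area_sq_ft
print(f"Cross-section:    {area_sq_ft:.1f} sq ft")
print(f"Implied velocity: {velocity_fpm:,.0f} ft/min (~{velocity_fpm * 60 / 5280:.0f} mph)")
# ~1,530 ft/min, roughly 17 mph -- a stiff breeze that cool, dense surface air can
# plausibly sustain as it sinks toward the 50-degree mine below.
```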

Opportunities for the Future

When analyzing and predicting what data centers are going to look like in the future, a recurring theme starts to emerge: simplicity and lower cost. Because of the cost pressures facing IT departments and CFOs alike, underground data centers using hybrid water, air and rock cooling mechanisms are an increasingly attractive option.
There are even opportunities to turn these facilities into energy creators. For example, by adding power generating turbines atop boreholes, operators can harness the power of heat rising from the data centers below. Furthermore, by tapping into natural gas reserves, subterranean data centers could become a prime energy source, thus eliminating the need for generators and potentially achieving a power usage effectiveness measurement of less than one. The reality is that if you know Mother Nature well, you can work with her – she’s very consistent – and the more we learn, the more promising the future of data center design looks.
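
Power usage effectiveness (PUE) is simply total facility energy divided by IT equipment energy, so a value below 1.0 only becomes conceivable when on-site generation or heat reuse is credited against the facility total. A quick illustration with hypothetical numbers (added here, not measurements from any real facility):

```python
# Illustrative PUE arithmetic with made-up numbers.
it_load_kw = 1_000
cooling_and_overhead_kw = 150    # assumed low overhead for a passively cooled underground site
onsite_generation_kw = 300       # assumed contribution from turbines or gas generation on site

pue = (it_load_kw + cooling_and_overhead_kw) / it_load_kw
print(f"Conventional PUE: {pue:.2f}")                      # 1.15

net_pue = (it_load_kw + cooling_and_overhead_kw - onsite_generation_kw) / it_load_kw
print(f"Net of on-site generation: {net_pue:.2f}")         # 0.85, i.e. 'less than one'
```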
 http://www.datacenterknowledge.com/archives/2014/03/18/tomorrows-data-centers-mother-nature-cools-best/

Thursday, September 26, 2013

Iron Mountain: Underground Data Center Tour, by Colleen Miller

Nicholas Salimbene, Director, Business Development, for Iron Mountain Data Centers, gives a video tour of its underground data center, which is located 220 feet underground. The facility is constructed in a former limestone mine and is impervious to natural disasters, from storms to earthquakes. Iron Mountain uses the underground location to cool the data halls, with underground water and naturally low temperatures. The video shows an overview of the facility through stages of construction. Video runs 2:37 minutes.







Welcome to Iron Mountain's premier underground data center facility, located 200 feet beneath rolling countryside in a former limestone mine. The facility spans 145 acres and provides one of the most physically and environmentally secure colocation infrastructures available.

Tuesday, December 27, 2011

Green Mountain Data Center

Located inside the mountain in a former NATO ammunition depot (the largest in northern Europe)
Built for the highest military security level. Secured against electromagnetic pulses (EMP)
"Nuclear secure" facility, secured against sabotage and direct attack from the sea.



A Natural Cooling System for an Underground Norwegian Data Farm
Green Mountain Data Center is a prime piece of real estate tucked inside a scenic Norwegian mountain. Built next to a cool-water fjord and surrounded by evergreens and lush rock-clinging mosses, the space boasts bright, airy subterranean halls carved out of natural cave walls and almost transcendental settings above ground. This will be the comfortable new home for many of Norway’s data servers. The Green Mountain Data Center is one of the first pioneering data centers that will greatly reduce its costs by harnessing the cooling power of the environment, namely the steady flow of cool water from an adjacent fjord. Alas, the grass always seems to be greener in Scandinavia.
The Green Mountain Data Center contains nine ‘Mountain Halls’—each spanning well over 1,000 square meters of space to host rows and rows of servers—a workshop, and an administration building. Its servers will be hooked up to an uninterrupted supply of power from a total of eight independent generators as well as three supply lines connected to the central Norwegian network, and its carbon footprint has been thoroughly eliminated.


Of course its most compelling feature, aside from its generally pleasant, Hobbit-like atmosphere noted by Gizmodo, is the cooling system, which relies on the nearby Rennesøy fjord to provide an abundance of cold water year round to cool its resident motherboards. Facebook has gone a similar route by planting a server farm in the Arctic, but we wouldn’t be hard pressed to say that we like the hospitable environment of this data farm better, and it’s nice to see yet another Scandinavian mountain bunker to add to our favorites!





The Mountain Hall

    Approx. 21,500 m2 floor space in the mountain
    The areas consist of:
    - 6 mountain halls of 1,855 m2 each (11 x 164 m)
    - 2 mountain halls of 1,546 m2 each (19 x 82 m)
    - 1 mountain hall with internal structure, 1,370 m2
    - i.e. combined mountain halls of 15,692 m2
    - Warehouse/workshop: 520 m2
    - Administration building: 840 m2
    - Quay with "roll on-roll off" option


 Fire safety and fire protection

    Closed caverns enable the use of inert / hypoxic air ventilation
    Reduced oxygen level to prevent fire and smoke
    - O2 reduced to 15-16%
    - Fire cannot arise as the combustion process does not get enough oxygen
    - Corresponds to an altitude of approx. 3,000 m
    Hypoxic air ventilation / inert ventilation system
    - Reduces/limits smoke formation
    - Prevents combustion/fire
    - Ensures continuous operation
    - No fire damage
    - No secondary extinguishing damage (corrosion, harm to the environment, poisoning, etc.)
    - No problems with hard disks due to the triggering of fire extinguishing equipment
 Safe as a vault

    Located inside the mountain in a former NATO ammunition depot (the largest in northern Europe)
    Built for the highest military security level
    - Secured against electromagnetic pulses (EMP)
    - "Nuclear secure" facility
    - Secured against sabotage and direct attack from the sea
    "Best in class" data security


 Communication - redundancy

    High capacity and redundancy
    Local broad band operators
    Good connectivity to the world
    Multiple high capacity lines to Oslo
    Multiple high capacity lines directly to the UK
    Multiple high capacity lines to continental Europe
    Carrier neutral availability

Monday, November 1, 2010

John Clune, President, Cavern Technologies Data Center





John Clune, President of Cavern Technologies, Will Speak on President/CEO Panel at IMN's Data Center Forum

October 31, 2010 -- John Clune, President of Cavern Technologies--the Midwest's premier underground data center--will speak on the President/CEO Panel at IMN's upcoming Data Center Forum. The two-day event will be held at the Hyatt Regency Century Plaza, Los Angeles, CA, on November 8th and 9th.

The Forum on Financing, Investing and Real Estate Development for Data Centers features leading industry experts who will address the successes and challenges of the data center industry, including over 20 Presidents, CEOs and COOs of data center companies. Data centers, an emerging asset class that investors need to seriously consider for their portfolios, have continued growing throughout the recession. With demand doubling every two years, many believe this is one place where 20% returns are still possible. "The major highlight of the conference are the 20 data center Presidents and CEOs who will be speaking at the conference," says Steven Glener, Senior Vice President at Information Management Network, the forum host. They will be addressing key financing, expansion, corporate strategy, capital markets, power and technology issues from a C-Suite level -- a unique and critical perspective, exclusive to the IMN conference.

Cavern Technologies specializes in the development, leasing and operation of build-to-suit wholesale data centers, located 125 feet underground in a 3 million square foot facility designed for energy efficiency and housed in an environmentally regulated, secure infrastructure. Cavern Technologies' world-class data center and colocation facility is SAS 70 certified and designed to meet the specialized power, cooling and security requirements companies need to house IT systems that support their mission-critical business processes. Cavern provides tenants with unique business solutions and a value proposition focused on minimizing the total cost of ownership of data center and colocation infrastructure.

Tuesday, August 3, 2010

USSHC Ultimate Underground Data Bunker


USSHC was founded in 2002. However, the process of creating the ultimate underground data bunker actually started in 1999 with the needs of an Internet Service Provider. It was found that truly geographically diverse IP transit could not be obtained from the local telephone carriers due to telephone tariffs. The larger Tier 1 Network Service Providers could not provide higher reliability because they depend on the local carriers to deliver the local access to their network. No matter how services were delivered via these traditional means, there was always a single point of failure.

There are solutions that use fiber rings to deliver services, but even fiber rings have single points of failure: The telephone exchange central offices.

The only way to have true IP redundancy is to have connections from multiple IP transit providers, arriving from opposite directions, over different local fiber networks owned by entirely different companies. Most data centers try to meet these needs using fiber paths from carriers not bound by telephone tariffs. However, this “solution” was accompanied by a host of other problems.

In terms of physical security, every data center investigated shared a common building with other tenants. Either the data center was an afterthought added to an existing building (a server closet that grew into dedicated space), or it was purpose-built but with significant office space and other common space in the same facility. Both of these types of shared structures increase the risk of collateral damage due to fires in the same building, and of security risks due to the large numbers of people sharing the facility. The best data center fire suppression system in the world doesn’t do a bit of good if the office building above it burns down on top of the data center.

Major shortcomings in physical security were also a recurring theme during the search for a data center. Many facilities share common space with other businesses. Even when a building is partitioned off, common mechanical facilities such as the chiller plant and electrical service are typically shared with other tenants. Obtaining building-wide security is difficult not only because of the different tenants sharing common areas, but because of reception areas that are open to the public and entirely unsecured. Most “secure” server spaces were found to be separated from public areas by walls made of sheet rock! Some sheet rock walls contained windows! We desired something a little more secure than two layers of half-inch-thick sheet rock, or a single pane of glass.

Despite finding several facilities that all claimed to be “hardened” and able to withstand the force of a tornado, with walls made of reinforced concrete and at least one steel door with no windows to the outside world, further investigation revealed that at most they were only partially below ground (walk-out basements), and all lacked physical plant equipment designed to operate during major contingencies. They also shared office space in the same building. Time and time again, it was found that 100% of the data centers had their heat rejection and standby power systems above ground. And in no case were the generators or air conditioning systems “hardened” at all. While the servers might survive if the data center took a direct hit from even a small EF-1 tornado, they would not remain operational for any length of time once the uninterruptible power supply batteries were exhausted. Even if the connectivity and building itself survived, and a generator was tough enough to operate after a storm or tornado, the external cooling would not be. Even with power, the servers would quickly overheat, leading to downtime and possible data corruption or loss.

Some data centers that claimed to be “hardened” were found to require a constant feed of municipal water for on-site cooling. With all of the redundancy built into the site, the whole data center could fail due to a non-redundant source of cooling water that could be interrupted by a pipe break, power outage, earthquake, or even simple maintenance. Or the whole data center could fail because a water pipe break flooded the facility with a high-pressure torrent of municipal water.

Then there were the data centers located in flood plains. We were shocked at just how many data centers were located in flood plains. More alarming was the “head in the clouds” attitude that most had about the flood plain being entirely acceptable because the data center was on an upper floor.

The harder we looked, and the more we uncovered, the more discouraged we became. Eventually however, USSHC solved all of these problems, and then some.

The idea behind USSHC was to provide a safe, secure place to house an Internet Service Provider that would be immune from any form of disaster, “deep in an Iowa underground bunker” where the power would always stay on, and the servers would always stay connected, fully online, and fully operational, despite what was going on in the outside world.

Since it went live in 2002, the facility has been expanded to allow other companies to share the same level of redundancy, security, and performance.

In 2009, USSHC opened the GWAB (Geek with a box data suite) to offer an economical alternative to our premium data center colocation offerings.

Monday, August 2, 2010

Innovations for the Data Center

6 Cool Innovations for the Data Center


1. Fiber optics with a twist

The success of the HDMI cable in consumer electronics has proved that having a common cable that works with Blu-ray players, HDTV sets and just about any set-top box helps remove clutter and confusion. Intel has developed Light Peak following this same approach. It's fiber-optic cable that will first be used with laptop and desktop computers to reduce clutter and to speed transmission, but it could also make its way to the data center as a way to connect servers and switches.

The 3.2mm cable, which is about as thin as a USB cable, can be up to 100 feet long. Intel has designed a controller that will sit inside a computer, and cables are currently in production. Third parties, including Hewlett-Packard and Dell, will start making computers with Light Peak fiber-optic cables in them by 2011, according to Intel.

For data centers, Light Peak presents a few interesting possibilities. Fiber optics have been in the data center since the early 1990s, when IBM introduced its Escon (Enterprise Systems Connection) product line; it connects mainframes at 200Mbit/sec. Light Peak differs in that it runs at 10Gbit/sec., and Intel claims that the components will be less expensive and lighter-weight than existing fiber-optic products.

"Intel claims Light Peak will be less complex and easier to manage by eliminating unnecessary ports, and deliver the higher throughput required by high performance e-SATA and DisplayPort systems," says Charles King, an analyst at Pund-IT in Concord, Mass. "If the company delivers on these promises, Light Peak could simplify life for data center managers plagued by installing, managing and troubleshooting miles of unruly optical cables."

Success here will depend on "how willingly developers and vendors" embrace Light Peak and build products around it, King explains.

2. Submerged liquid cooling and horizontal racks

Liquid cooling for data centers is not a new concept, of course, but Green Revolution Cooling has added a new twist. For starters, the rack is turned on its side, which helps with cable management and makes it easier for administrators to access equipment, and the horizontal rack is surrounded by liquid. A new coolant, called GreenDEF, is made from mineral oil that is nontoxic, costs less than other liquid-cooling methods and is not electrically conductive like water, according to a GR Cooling spokesman.

"The liquid actually moves through the floor and circulates up through all of the computing nodes," says Tommy Minyard, director of advanced computing systems at the Texas Advanced Computing Center, part of the University of Texas at Austin. This means more-effective cooling because heat is moved away from the processors via cables on the sides and under the rack, he explains. Minyard is installing GR Cooling systems in his data center and expects a 30% to 40% savings compared to traditional air-cooled systems.


Green Revolution uses horizontal devices for racks, along with a new type of coolant, to reduce energy costs in a data center.

Minyard says liquid cooling has made a rebound lately, recalling the days when Cray offered submerged cooling systems, and he notes that even IBM is moving back into chilled-liquid cooling for some compute nodes.

Pund-IT's King says a major issue is that enterprises have fought the return of liquid cooling in the data center because of the high costs of implementing the technology and because it is unproven as a widespread option.

"Liquid cooling usually costs much more to install upfront than air cooling," says Mark Tlapak, GR Cooling's co-founder. "Compared to air, every liquid cooling system has some added nuance, such as electric conductivity with water-based cooling systems. " But, he says, "spring a leak in the water systems, and you lose electrical equipment." Still, for Minyard, GR Cooling is an ideal fit: His data center gravitates toward dense, powerful systems that pack intense power into small spaces, such as IBM blade servers and the latest Intel processors. The Ranger supercomputer, for example, uses 30kw of power per rack.

3. Several broadband lines combined into one

Enterprises can spend many thousands of dollars on fiber-optic lines and multiple T1 connections, but at least one emerging technology is aiming to provide a lower-cost alternative.

Mushroom Networks' Truffle Broadband Bonding Network Appliance creates one fast connection out of up to six separate lines, a technique known as bonding. The Truffle combines the bandwidth of all available broadband lines into one giant pipe, with download speeds of up to 50Mbit/sec., the company says. Internet access may be through a DSL modem, cable modem, T1 line or just about any broadband connection.

This helps increase overall throughput, and acts as a backup mechanism, too. If one of the "bonded" lines fails, the Truffle connection just keeps running with the other available lines.

Steve Finn, a television producer in Kenya, uses Mushroom Networks' appliance for a program called Africa Challenge that is broadcast to eight African countries. He relies heavily on broadband to produce the content and at one time paid as much as $4,000 per month for connectivity. Speeds vary depending on latency and traffic, but he says the bonded speed is generally about four times faster (four lines times the speed of each individual line), at about half the cost of one equivalent high-speed line.
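
As a rough illustration of that economics claim (hypothetical numbers added here, not Mushroom Networks' or the producer's actual pricing), bonding several inexpensive lines can approach the throughput of a single fast circuit at a fraction of the price:

```python
# Illustrative bonding arithmetic with hypothetical prices.
line_speed_mbps = 10        # assumed speed of one commodity broadband line
line_cost = 500             # assumed monthly cost of one such line
bonded_lines = 4            # the Truffle supports up to six

bonded_speed = line_speed_mbps * bonded_lines     # bonding aggregates the lines' bandwidth
bonded_cost = line_cost * bonded_lines

single_circuit_cost = 4_000                       # assumed cost of one equivalent high-speed circuit
print(f"Bonded:         {bonded_speed} Mbps for ${bonded_cost}/month")
print(f"Single circuit: {bonded_speed} Mbps for ${single_circuit_cost}/month")
print(f"Bonded cost as a share of the single circuit: {bonded_cost / single_circuit_cost:.0%}")  # ~50%
```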

Frank J. Bernhard, an analyst at Omni Consulting Group, says Mushroom Networks fills a niche for companies that do not want to pay the exorbitant fees for multiple T1 or T3 connections but still need reliable and fast Internet access. Other companies, including Cisco Systems, offer similar bonding technology, but at a greater cost and with more complexity at install, which means the technique has not yet been widely used.

4. Multiple data centers more easily connected

In a very large enterprise, the process of connecting multiple data centers can be a bit mind-boggling. There are security concerns, Ethernet transport issues, operational problems related to maintaining the fastest speed between switches at branch sites, and new disaster planning considerations due to IT operations running in multiple locations.

Cisco's new Overlay Transport Virtualization, or OTV, connects multiple data centers in a way that seems really easy compared with the roll-your-own process most shops have traditionally used. Essentially a transport technology for Layer 2 networking, the software updates network switches, including the Cisco Nexus 7000, to connect data centers in different geographic locations.

The OTV software costs about $25,000 per license and uses the maximum bandwidth and connections already established between data centers.

There are other approaches for linking multiple data centers, a Cisco technical spokesman acknowledges, including those involving Multiprotocol Label Switching (MPLS) or, before that, frame-relay and Asynchronous Transfer Mode protocols.

But unlike some of the older approaches, the spokesman explains, Cisco OTV does not require any network redesign or special services in the core, such as label switching. OTV is simply overlaid onto the existing network, inheriting all the benefits of a well-designed IP network while maintaining the independence of the Layer 2 data centers being interconnected.

Terremark, a cloud service provider based in Miami, uses Cisco OTV to link 13 data centers in the U.S., Europe and Latin America. The company says there is a significant savings compared with taking a "do-it-yourself" approach to linking data centers, due to reduced complexity and OTV's automated fail-over system that helps multiple data centers act as one if disaster strikes.

"Implementing the ability to balance loads and/or enact emergency fail-over operations between data centers traditionally involved a dedicated network and complex software," says Norm Laudermilch, Terremark's senior vice president of infrastructure. "With Cisco OTV, Ethernet traffic from one physical location is simply encapsulated and tunneled to another location to create one logical data center."

Virtual machines from one location can now use VMware's VMotion, for instance, to automatically move to another physical location in the event of a failure.

5. Priority-based e-mail storage

Communication is what drives a business, but too often the bits and bytes of an e-mail exchange are treated in the data center as just another data set that needs to be archived. Messagemind automatically determines which e-mails can be safely archived on lower-cost systems.

The tool analyzes all company communication -- tracking which messages end users read, delete or save -- and then groups them according to priority level.

Data center administrators can use that information to store e-mail based on priority level, which in turn can save money. For example, instead of storing all e-mails in one high-cost archive, messages marked as low priority -- based again on the end user's clicking behavior -- can be stored in lower-cost storage systems. High-priority e-mail can be stored on higher-performance, and higher-cost, media.
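
The underlying idea is ordinary tiered storage driven by behavioral signals. A minimal sketch of the concept (added here for illustration; this is not Messagemind's actual algorithm):

```python
# Minimal sketch of priority-based e-mail tiering (illustrative only).
# Messages a user saves or reads are treated as higher priority than mail deleted unread,
# and the priority decides which storage tier the archive uses.
from dataclasses import dataclass

@dataclass
class Message:
    msg_id: str
    was_saved: bool = False
    was_read: bool = False
    deleted_unread: bool = False

def storage_tier(msg: Message) -> str:
    if msg.was_saved:
        return "high-performance archive"   # costlier media for high-priority mail
    if msg.was_read:
        return "standard archive"
    if msg.deleted_unread:
        return "low-cost bulk storage"      # cheap tier for mail nobody valued
    return "standard archive"

for m in (Message("a1", was_saved=True), Message("a2", deleted_unread=True)):
    print(m.msg_id, "->", storage_tier(m))
```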

That same behind-the-scenes analysis can be used outside the data center, rolled up into a dashboard that managers and end users can view to help them on projects. For example, business units can view e-mail diagrams that show who is communicating effectively on a project and who seems to be lagging behind and rarely contributing.

Pund-IT's King says Messagemind is an intriguing prospect because e-mail has become such a wasteland of broken conversations and disconnected project discussions. Managing e-mail becomes even more painful if a company is subject to litigation, and e-mail becomes part of the legal discovery process.

"Even the best e-mail solutions require employees to manage their messages," says King. "If it works as advertised, I could see this catching hold in enterprises. By managing e-mail more effectively -- and automatically -- Messagemind's solution could take a great deal of weight off the shoulders of data center admins struggling under ever-increasing volumes of stored messages."

6. User accounts virtualized for easier restoration

Virtualization has become the buzzword of the past decade, but it usually involves abstracting an operating system from a server or data from your storage allocations. AppSense is virtualization software for user accounts. It extracts user profile settings from Windows applications and maintains them separately. That means that if an application is updated or changed, the user information is still available. If user settings are corrupted or lost, administrators can restore the settings with a minimum of bother.

Landon Winburn, a software systems specialist at the University of Texas Medical Branch in Galveston, Texas, uses AppSense to virtualize user-account profiles for his 3,000 students. Winburn says the university used to manage user settings manually, taking about 40 to 60 calls per week related to log-ins. The university also had five to 10 corruptions per day related to user settings.

"Before AppSense, the only solution for a corrupt profile was to delete the profile and have the user start again from scratch for all applications," says Winburn.

But now, with AppSense's ability to restore these settings, the university doesn't have to directly address the problems, since they are handled automatically. By virtualizing accounts, the university could also increase the number of XenApp Server accounts from 40 user profiles per server to about 80.

John Brandon is a veteran of the computing industry, having worked as an IT manager for 10 years and a tech journalist for another 10. He has written more than 2,500 feature articles and is a regular contributor to Computerworld.

Saturday, July 3, 2010

DataChambers Underground Data Center


Nicholas L. Kottyan

With more than 25 years of experience in the technology and telecommunications industries, Nicholas Kottyan is especially well-versed in the critical information technology challenges facing today’s businesses. He helps clients develop business continuity plans that ensure seamless customer service… manage growing electronic records…and find cost-effective ways to monitor, manage and maintain critical data networks.

Mr. Kottyan has led DataChambers through two significant expansions of its data center facilities to meet growing demand.

Before joining the company, he was president and CEO of Peak 10 Inc., a data center services company he co-founded. He also has served as senior vice president for CT Communications Inc., a publicly traded local telephone company in Concord, N.C., where he led an expansion into new long distance, wireless PCS and Internet services markets.

In 1991 Mr. Kottyan founded Teledial America of North Carolina, which he sold to LCI International (now part of Qwest Communications). He also has served as president and CEO of Phone America of Carolina.

Mr. Kottyan is currently chairman of the N.C. Technology Association, the primary voice of North Carolina’s technology industry.

Patrick Craig

Patrick Craig is chief technology guru for the DataChambers team – experienced in the hardware, software and industry protocols that underpin successful networks and data centers.


Mr. Craig was instrumental in the design of the high-availability infrastructure DataChambers uses to support mission-critical business functions for its clients. He also leads the company’s Network Operations Center (NOC). The NOC team monitors and manages client systems around-the-clock to detect and resolve potential issues before they impact performance.

Before joining DataChambers, Mr. Craig was managed services administrator for Divine Inc., a software and technology company. He also served as vice president of IT technologies and as lead systems administrator for NetUnlimited, a voice and data solutions provider. During his tenure with NetUnlimited, he managed three of the company’s divisions and designed the infrastructure needed to support thousands of users.


WINSTON-SALEM, N.C. – MAY 6, 2010 – DataChambers, a North Carolina-based technology firm, today announced it has secured financing from NewBridge Bank to support construction of a new data center on its 80-acre campus in Winston-Salem.

Work is well under way on the 20,000-square-foot, $9 million project, which was announced last spring. When completed, it will more than double the company’s capacity to house data networks for its clients, including more than 110 firms in 28 states.

“We’ve been pleased with the progress of the project, which positions us for significant growth,” said Nicholas Kottyan, CEO of DataChambers. “We expect to be up and running by early summer.”

Construction of the new facility involves the demolition and rebuilding of a section of the former office building where DataChambers is headquartered. The space is located 18 feet underground in a secure, blast-resistant bunker.

General contractor for the project is Landmark Builders.

“DataChambers is a great success story in our region, and we could not be more pleased that they have chosen NewBridge Bank as their financial partner,” said Terry Freeman, Senior Vice President and Commercial Relationship Manager for NewBridge Bank. “This locally owned and operated business is the ideal client for NewBridge Bank to help move forward.”

About DataChambers

DataChambers is a full-service information technology and managed services provider specializing in electronic data storage, 24×7 managed information technology solutions, secure co-location services for mission-critical information technology infrastructure, secure tape vaulting, and offsite records storage and management. The company is SAS 70 Type II audited and meets rigorous national standards for safeguarding client systems and data. DataChambers’ 140,000-plus-square-foot headquarters in Winston-Salem, N.C., is based on an 80-acre campus owned by the firm’s majority shareholders. For more information, visit www.datachambers.com.

About NewBridge Bank

NewBridge Bank is a full service, state chartered community bank headquartered in Greensboro, North Carolina. NewBridge Bank offers financial planning and investment alternatives such as mutual funds and annuities through Raymond James Financial Services, Inc., a registered broker dealer. NewBridge Bank is one of the largest community banks in North Carolina with assets of approximately $2 billion. The Bank has 33 banking offices in the Piedmont Triad of North Carolina, the Wilmington, N.C. area and Harrisonburg, Va. The stock of NewBridge Bancorp, the Bank’s parent company, trades on the NASDAQ Global Select Market under the symbol “NBBC.”



http://www.datachambers.com/2010/05/06/datachambers-secures-financing-for-major-data-center-expansion/#more-571

Wednesday, May 19, 2010

Underground data center to help heat Helsinki


From deep underground, data center will help heat Helsinki homes

By Andrew Nusca


The Finnish capital of Helsinki is preparing to house what may be the greenest data center on the planet.

Hidden deep within the bedrock of a massive cave underneath popular orthodox Christian landmark Uspenski Cathedral, the planned data center — which will be comprised of hundreds of computer servers — is expected to emit substantial amounts of heat.

That heat will then be captured and channeled into the city’s district heating network, a system of water-heated pipes that are used to warm homes in the city.

How’s that for renewable energy?

The new data center is due online in January and is intended for use by local IT services firm Academica. It’s a novel way of using the power consuming nature of data centers — known to be energy hogs — for good.

Data centers themselves have recently been under scrutiny for their expense, which can account for up to a third of a corporation’s total energy bill. Together, those data centers add up: the server farms run by Google alone use 1 percent of the world’s energy, and demand for more power only grows each year.

Temperature is part of the problem. Often, more power is used to cool large data centers than actually compute with them.

What’s more, all that power consumption leads to emissions: data center emissions of carbon dioxide total one-third the amount that airlines produce, according to a Reuters report, and are growing 10 percent each year.

That’s enough emissions to rival entire nations such as Argentina or the Netherlands.

The new Helsinki data center promises to use half as much energy as the average data center, and its capacity to heat homes will be the energy-producing equivalent of one large wind turbine – enough to warm about 500 large homes.

The data center is expected to shave approximately $561,000 per year from Academica’s annual power bill, said sales director Pietari Päivänen.

Oh, and the significance of the church above it? Security. The cave used to be a World War II-era bomb shelter for city officials to escape from Russian air raids.


Thursday, April 22, 2010

Data Center of the Future: 1 million server data center

What Will the Data Center of the Future Look Like?

BY: ERICO GUIZZO

Today's most advanced data centers house tens of thousands of servers. What would it take to house 1 million?

New York Times Magazine has an article on data centers -- the massive (though invisible to most users) computing infrastructure that runs our web searches, email, blogs and tweets. The article does a good job describing the architecture of current mega data centers and the challenges in building them. But what I missed in the story is: where do we go from here? What will the data center of the future look like?

Spectrum tried to offer an answer to this very question early this year. In the February '09 article "Tech Titans Building Boom," by UC Berkeley professor Randy H. Katz, we presented an illustration (below) of what a 1 million server data center might look like. That vision -- a roofless facility with hundreds of server-packed shipping containers -- was based in part on Microsoft's Generation 4 data center design. But I'm still wondering: Is that the future of the cloud? A parking lot crammed with steel boxes?

True, there's been some innovation, including an underground data center in Sweden and Google's patented servers-on-a-barge idea. But I guess I was hoping for some real breakthrough in data center design -- a real departure in how these facilities are built and operate. Just to throw out an idea, what about a data center based on AS/RS (Automated Storage and Retrieval Systems)? Picture servers piled vertically into high enclosures with robotic arms that attach power/cooling/network connections and replace defective parts.

Have a wild vision for the data center of the future? Let us know. If it's good we might even run it in the magazine.


The Million-Server Data Center. See a larger version here.

Illustration: Bryan Christie Design

Wednesday, March 17, 2010

Data Centers to Expand in 2010, 2011: Campos Research & Analysis says demand will be high for data centers

Data Centers to Expand in 2010, 2011

Rich Miller

More than a third of large corporate data center users in North America plan to expand their footprint in 2010, and many are expanding because they have run out of power, not space. Those were the key findings in survey data released Wednesday by Digital Realty Trust. The survey of senior decision makers with responsibility for their companies’ data center strategies was conducted by Campos Research & Analysis for Digital Realty. Among the key findings:
    • 83 percent of respondents are planning data center expansions in the next 12 to 24 months;
    • 36 percent of respondents have definite plans to make those expansions during 2010;
    • 73 percent of respondents plan to add two or more facilities as part of their data center expansions;

It’s not surprising that Digital Realty believes demand will be high, since the company is in the business of building and leasing data centers. But the customer survey’s major points were echoed by multiple panelists at Wednesday’s New York event.

Financing is a Factor
“Demand has been pretty steady,” said Dan Golding, Managing Director at DH Capital, an investment bank specializing in hosting and telecom deals. “The story has really been supply. It’s been very, very difficult for people to finance new data centers.”

At the national level, the pending demand for data center space may be three times greater than the available supply of quality space, according to Jim Kerrigan, the director of the data center practice at the real estate firm Grubb & Ellis. “All those deals that got shelved in 2009 because the CFO said no ... they’re going to happen,” said Kerrigan.

The end users at DataCenterDynamics New York included large firms in the financial sector, who concurred with the notion that cost-cutting has resulted in pent-up demand for data center space. “A year and a half ago we were talking about new data centers,” said Glenn Neville, Director of Engineering at Deutsche Bank. “Since then we’ve been talking about how long we can go with our current data centers. Our plans for growth are still there. Those plans are being postponed, but they’re not being cancelled.”

“It feels like someone closed a door, and things are backing up behind it,” said David Schirmacher, a vice president at Goldman Sachs.

Big Chunks of Space Grow Scarce
Kerrigan said the supply and demand challenges will be most acute for companies needing large footprints of contiguous space. That imbalance stands in stark relief to the requirements described in the Digital Realty survey, in which 70 percent of companies planning data center expansions say they envision large projects of at least 15,000 square feet in size or 2 megawatts or more of power.

“One of the most interesting pieces of data in this study is the lead role that power is now playing in these expansions,” said Chris Crosby, Senior Vice President of Corporate Development for Digital Realty Trust. “The need for additional power has become the main driver for data center expansion plans as companies seek facilities with adequate power and favorable utility rates to control operating costs.”

As a result, more companies are tracking their data center power usage and using the data in their capacity planning. The survey found that 76 percent of respondents now meter their power use, while the number of companies that meter power down to the PDU level increased by 29 percent over last year. “These are very positive signs that companies better understand their data centers’ energy use and can make informed decisions to reduce energy consumption,” said Crosby.

Digital Realty Trust: Data Centers to Expand in 2010, 2011

Written by Jeffrey Clark
Tuesday, 09 March 2010
A 2010 survey conducted by Campos Research & Analysis on behalf of Digital Realty Trust indicates that a significant portion of large North American companies are planning to expand their data center infrastructure in 2010 and 2011. This survey, conducted in mid-January, queried high-level company employees (executives or upper-level managers in information technology or finance) from 300 large companies. The participant companies were required to have a minimum of 5,000 employees or a minimum annual revenue of a billion dollars, and the individual respondents were required to be in charge of managing some aspect of the company’s data centers, whether operation, expansion, or implementation.

The companies represented in the survey consist of a fairly wide cross section of industry, with slightly over a quarter dedicated to IT, the Internet, or telecommunications. About 15% of the companies were in finance, with the remainder in the “other” category. According to the survey, the companies average about four data centers each, with nearly 20% operating six or more data centers.

Of the companies represented in the survey, 22% built or acquired a new data center in the 12 months preceding the survey, and 63% did so between one and three years prior. Demand for data services has kept climbing despite the recent economic downturn, and most companies last added capacity more than a year before the survey, so a significant increase in data center construction in 2010 and 2011 seems plausible, especially if the economic recovery continues.

For 2010, 36% of responding companies indicated that they were “definitely” planning data center expansions; 46% responded that they would “probably” expand their data centers, leaving only 18% that were unlikely to initiate an expansion this year. The numbers for planned data center expansions in 2011 were virtually identical, with a total of 84% either “probably” or “definitely” planning an expansion. Most of the companies (63%) with definite plans to expand have only one or two locations slated for expansion; 14% indicated plans to expand four or more data center locations.

The survey also indicated that data center budgets are on the rise, with 75% of respondents expecting some increase; 35% expect an increase of less than 10%, and 30% expect an increase of between 10% and 20%. According to the survey report, the average budget increase for 2010 is forecast at 8.3%, up 1.7 points from the 2009 budget increase. IT budgets followed a similar trend, with an average expected increase of 8.1% and about 73% expecting some level of budget growth. On average, the survey report indicated that 35% of represented companies’ IT budgets are dedicated to data center operations and development.

At the top of the list of preferred locations for new or expanded data centers are several American metro areas, led by New York City, which is closely followed by Chicago, Los Angeles, and Dallas. Foreign locations that ranked high on the survey’s list included London, Singapore, Paris, and Tokyo.

The survey also asked about respondents’ reasons for their expected data center expansion in 2010; those with definite plans rated the relative importance of various reasons for expanding. The leading reason was power capacity, with 74% of respondents calling it “extremely important.” Next came disaster recovery and Sarbanes-Oxley (compliance, presumably) at 72%, followed by security, which 69% rated “extremely important.” Other reasons cited in the survey include energy efficiency, consolidation, cooling, redundancy, potential regulations, environmental concerns, and the need for additional space.

A potential indicator of the scope of the expected 2010 data center expansions is the expected change in power usage. According to the survey report, the average expected power usage increase among companies that have definite plans for expansion is 12.8%, with 40% planning between a 10% and 20% increase. For all companies included in the survey, the average expected power usage increase is 8.3%.

Overall, the survey commissioned by Digital Realty Trust indicates no slackening in large companies’ growing need for data center capacity. Whether the expected expansions materialize over the next two years, however, likely depends on whether the economy continues to recover or whether the United States (and, to a lesser extent, the rest of the world) is in for a so-called double-dip recession.

Tuesday, February 2, 2010

Swiss Fort Knox

http://www.swissfortknox.ch/swissfortknox-english/
The two large, highly secure data centers (Swiss Fort Knox®) inside the Swiss Alps are operated by the company SIAG. Operation and surveillance of both the physical and the logical infrastructure are coordinated from two security operation centers (SOCs). SIAG is headquartered in Zug, Switzerland, and was founded in 1994.
SIAG is a globally recognized specialist in the management, secure safekeeping, and exchange of digital information throughout its entire life cycle. Experienced, professionally trained security experts advise customers from risk identification through effective and efficient risk minimization. Quality and cost-optimized, end-to-end solutions to IT security problems form the backbone of the company’s service offering. SIAG serves a demanding customer base around the globe; its solutions are internationally applicable and customized to each customer’s specific requirements. Its specialists monitor the predefined security objectives from the customer site all the way to the parallel data sets held inside the highly secure Swiss Fort Knox®, and only technologies that meet SIAG’s strict criteria are used for these security services.
Underground Secure Data Center Operations

Technology-based companies are building new data centers in old mines, caves, and bunkers to host computer equipment below the Earth's surface.

Underground secure data center operations are on an upward trend.

Operations have launched in inactive gypsum mines, caves, abandoned coal and solid limestone mines positioned deep below the bedrock, and decommissioned nuclear bunkers built deep underground and secure from disasters, both natural and man-made.

These facilities have advantages over traditional data centers, such as increased security, lower cost, scalability, and ideal environmental conditions. Their economic model works, despite the proliferation of data center providers, thanks largely to the natural qualities inherent in underground sites.

With anywhere from 10,000 to over 1,000,000 square feet available, there is plenty of space to subdivide to accommodate clients' growth. In addition, underground data centers have a virtually unlimited supply of naturally cool, 50-degree air, providing ideal temperature and humidity for computer equipment with minimal HVAC cost.
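To put that HVAC advantage in perspective, here is a back-of-the-envelope comparison, using assumed figures rather than any operator's published numbers, of annual facility energy for a conventional site versus an underground site that leans on naturally cool air, expressed via PUE (power usage effectiveness).

```python
# Back-of-the-envelope estimate with assumed numbers (not operator figures):
# annual facility energy for a conventional data center vs. an underground
# site that can rely largely on naturally cool ~50 F air, expressed via PUE.

HOURS_PER_YEAR = 8760


def annual_facility_kwh(it_load_kw: float, pue: float) -> float:
    """Total facility energy (kWh/year) for a given IT load and PUE."""
    return it_load_kw * pue * HOURS_PER_YEAR


it_load_kw = 500.0                                           # assumed IT load
conventional_kwh = annual_facility_kwh(it_load_kw, pue=1.8)  # assumed legacy-facility PUE
underground_kwh = annual_facility_kwh(it_load_kw, pue=1.3)   # assumed PUE with free cooling

print(f"Conventional: {conventional_kwh:,.0f} kWh/yr")
print(f"Underground:  {underground_kwh:,.0f} kWh/yr")
print(f"Estimated savings: {conventional_kwh - underground_kwh:,.0f} kWh/yr")
```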

They are marketed as the most secure data centers in the world and as unparalleled in terms of square footage, scalability, and environmental control.

Yet, while the physical and cost benefits of being underground make them attractive, operators have also had to invest heavily in high-speed connectivity and redundant power and fiber systems to ensure their operations are not just secure, but also state-of-the-art.

They initially focused on providing disaster recovery solutions and backup co-location services.

Some clients lease space for their own servers, while the operator provides the secure facility, power, and bandwidth. Operators offer redundant power sources and multiple high-speed Internet connections over OC-level circuits on a SONET ring, linked to outside connectivity providers through redundant fiber cables.

Underground data center companies augment their core services with disaster recovery solutions, call centers, NOC services, wireless connectivity, and more.

Strategic partnerships with international and national information technology companies enable them to offer technology solutions ranging from system design and implementation to the sale of software and equipment.

The natural qualities of underground data centers allow them to offer the best of both worlds: premier services and security at highly competitive rates.

Underground data centers were first established in the 1990s but really came into their own after the September 11 attacks in 2001, when their founders realized that former mines and bunkers offered optimal conditions for a data center: superior environmental conditions for electronic equipment, nearly invulnerable security, and proximity to power grids.

Adam Couture, a Massachusetts-based analyst for Gartner Inc., said underground data centers could find a niche serving businesses that want to reduce their vulnerability to future attacks. Some facilities' fact sheets claim that being underground would protect the data center from a cruise missile explosion or plane crash.

After the September 11 attacks in 2001, companies all went back and re-evaluated their business-continuity plans. That doesn't mean everybody changed them, but everybody revisited them in the wake of what happened, and for some, an underground data center may be exactly the answer.

Comparison chart: Underground data centers

Five facilities compared
InfoBunker, LLC
    • Location: Des Moines, Iowa*
    • In business since: 2006
    • Security/access control: Biometric; keypad; pan, tilt and zoom cameras; door event and camera logging
    • Distance underground: 50 feet
    • Ceiling height in data center space: 16 feet
    • Original use: Military communications bunker
    • Total data center space: 34,000 square feet
    • Total space in facility: 65,000 square feet
    • Data center clients include: Insurance company, telephone company, teaching hospital, financial services, e-commerce, security monitoring/surveillance, veterinary, county government
    • Number of hosted primary or backup data centers: 2
    • Services offered: Leased data center space, disaster recovery space, wholesale bandwidth
    • Distance from nearest large city: Des Moines, about 45 miles*
    • Location of cooling system, including cooling towers: Underground
    • Location of generators and fuel tanks: Underground

The Bunker
    • Location: Dover, UK
    • In business since: 1999
    • Security/access control: CCTV, dogs, guards, fence
    • Distance underground: 100 feet
    • Ceiling height in data center space: 12 to 50 feet
    • Original use: Royal Air Force military bunker
    • Total data center space: 50,000 square feet
    • Total space in facility: 60,000 square feet
    • Data center clients include: Banking, mission-critical Web applications, online trading
    • Number of hosted primary or backup data centers: 50+
    • Services offered: Fully managed platforms, partly managed platforms, co-location
    • Distance from nearest large city: Canterbury, 10 miles; London, 60 miles
    • Location of cooling system, including cooling towers: Underground
    • Location of generators and fuel tanks: Above ground and below ground

Montgomery Westland
    • Location: Montgomery, Tex.
    • In business since: 2007
    • Security/access control: Gated, with access control card, biometrics and a 24x7 security guard
    • Distance underground: 60 feet
    • Ceiling height in data center space: 10 feet
    • Original use: Private bunker designed to survive a nuclear attack. Complex built in 1982 by Louis Kung (nephew of Madame Chiang Kai-shek) as a residence and headquarters for his oil company, including a secret, 40,000-square-foot nuclear fallout shelter. The office building uses bulletproof glass on the first floor and reception area, and 3-inch concrete walls with fold-down steel gun ports protect the bunker 60 feet below.
    • Total data center space: 28,000 square feet, plus 90,000 square feet of office space in a hardened, above-ground building
    • Total space in facility: 28,000 square feet
    • Data center clients include: NASA/T-Systems, Aker Solutions, Continental Airlines, Houston Chronicle, Express Jet
    • Number of hosted primary or backup data centers: 13
    • Services offered: Disaster recovery/business continuity, co-location and managed services
    • Distance from nearest large city: Houston, 40 miles
    • Location of cooling system, including cooling towers: Above and below ground. All cooling towers above ground in secure facility.
    • Location of generators and fuel tanks: Two below ground, four above ground. All fuel tanks buried topside.

Cavern Technologies
    • Location: Lenexa, Kan.
    • In business since: 2007
    • Security/access control: Security guard, biometric scan, smart card access and motion detection alarms
    • Distance underground: 125 feet
    • Ceiling height in data center space: 16 to 18 feet
    • Original use: Limestone mine originally developed by an asphalt company that used the materials in road pavement
    • Total data center space: 40,000 square feet
    • Total space in facility: 3 million square feet
    • Data center clients include: Healthcare, insurance, universities, technology, manufacturing, professional services
    • Number of hosted primary or backup data centers: 26
    • Services offered: Data center space leasing, design, construction and management
    • Distance from nearest large city: Kansas City, 15 miles
    • Location of cooling system, including cooling towers: Air-cooled systems located underground. Cooling towers located outside.
    • Location of generators and fuel tanks: Underground

Iron Mountain The Underground
    • Location: Butler County, Penn.*
    • In business since: Opened by National Storage in 1954; acquired by Iron Mountain in 1998
    • Security/access control: 24-hour armed guards, visitor escorts, magnetometer, x-ray scanner, closed-circuit television, badge access and other physical and electronic measures for securing the mine's perimeter and vaults
    • Distance underground: 220 feet
    • Ceiling height in data center space: 15 feet (10 feet from raised floor to dropped ceiling)
    • Original use: Limestone mine
    • Total data center space: 60,000 square feet
    • Total space in facility: 145 acres developed; 1,000 acres total
    • Data center clients include: Marriott International Inc., Iron Mountain, three U.S. government agencies
    • Number of hosted primary or backup data centers: 5
    • Services offered: Data center leasing, design, construction and maintenance services
    • Distance from nearest large city: Pittsburgh, 55 miles
    • Location of cooling system, including cooling towers: Chillers located above ground to take advantage of "free cooling." Pumps located underground.
    • Location of generators and fuel tanks: Underground

*Declined to cite exact location/distance for security reasons.