
Thursday, September 26, 2013

Iron Mountain: Underground Data Center Tour By: Colleen Miller

Nicholas Salimbene, Director, Business Development, for Iron Mountain Data Centers, gives a video tour of the company's underground data center, which sits 220 feet below the surface. The facility is built in a former limestone mine and is impervious to natural disasters, from storms to earthquakes. Iron Mountain uses the underground location to cool the data halls, drawing on underground water and naturally low temperatures. The video shows an overview of the facility through its stages of construction. Video runs 2:37 minutes.







Welcome to Iron Mountain's premier underground data center facility, located 200 feet beneath rolling countryside in a former limestone mine. The facility spans 145 acres and provides one of the most physically and environmentally secure colocation infrastructures available.

Tuesday, October 5, 2010

Hunt Midwest seeks tenant for SubTech underground data center space

SubTech Data Center is a ground-level facility built inside solid limestone, offering security unmatched by any other data center facility. SubTech is located in Kansas City, which has some of the lowest utility costs in the country and is ranked #2 in the United States for enterprise data center operating affordability. SubTech's data center solutions are reliable and flexible, offering maximum power and connectivity for your robust data center needs.
Site plans by
http://www.totalsitesolutions.com/
Expanding or needing data center space? SubTech has millions of square feet available for IT and raised-floor area. The facility provides clients with data center space ranging from 5,000 to 100,000+ square feet, with 16' clear ceiling heights throughout. The initial 100,000 s.f. can be built out in 20,000 s.f. modules. Download the site plan: http://www.subtechkc.com/site_plan.pdf

Ora Reynolds (left), president of Hunt Midwest Real Estate Development, and Tammy Henderson, director of real estate marketing and governmental affairs, are preparing for when a portal (background) will be the front door to an underground data center.

Hunt Midwest Real Estate Development Inc. is getting into the data center business — or rather, under it.

The Kansas City-based company plans to build a 40,000-square-foot data center in Hunt Midwest SubTropolis. The massive underground business complex is roughly northeast of Interstate 435 and Missouri Highway 210 in Kansas City.
Construction on the estimated $30 million SubTech project will begin once Hunt Midwest signs a tenant or tenants for the first 20,000 square feet, company President Ora Reynolds said.
Reynolds said the company originally planned a 100,000-square-foot project but scaled back after an unsuccessful attempt in the summer to add state tax incentives for data centers to a bill aimed at retaining automotive jobs. Missouri is at a disadvantage, she said, because Kansas, Nebraska, Iowa and Oklahoma offer financial breaks for data centers.
“The investment in a data center is so much more expensive than a regular building,” Reynolds said. “The investment is so large you can’t do ‘If you build it, they will come’ if you don’t think you can compete.”
Reynolds said the company envisions a Tier 3 facility, meaning it has redundant power and cooling systems, with the ability to expand in 20,000-square-foot increments.
“The thing that we’re saying is the biggest advantage, and what makes us different, is we have 8 million square feet that has been mined out and has not been developed at the present time,” Reynolds said. “Somebody who’s out there and says, ‘I need 20,000 square feet now, but I know I’m going to grow and need 100,000 square feet in the next five years,’ we can accommodate them while somebody who has an office building wouldn’t let half the building stay empty.”
She said Hunt Midwest would be strictly a landlord, preferring to find a managed services/colocation firm to become the main tenant, subleasing rack and cabinet space to smaller companies or leasing entire data suites or powered shells — where the tenants install most of the technical infrastructure themselves.
The data center business has taken off in recent years as companies have looked for options to remotely operate or back up data networks.
New York-based Tier 1 Research said in a Sept. 23 report that demand has outstripped supply in many markets because the economy has slowed construction and financing of new data centers.
Underground data centers are relatively new to the industry, despite their obvious advantages in security and resistance to natural disasters.
Tier 1 analysts Jason Schafer and Michael Levy said in a separate report looking at the SubTech project that so-called data bunkers have had trouble attracting tenants in other markets because of the added complexity of supplying power and getting rid of excess heat and moisture.
They said SubTech has its size and its ability to grow in phases going for it, but that it will run into the same skepticism other operators encounter.
“This isn’t to say that there isn’t a market for a secure underground data center facility,” they wrote. “It just fits the needs of fewer types of tenants that are likely comparing all data center providers.”
Cavern Technologies operates a 40,000-square-foot data center in the underground Meritex Lenexa Executive Park. Cavern President John Clune said the company has grown from four customers three years ago to 35.
“Our market has really taken off,” he said, adding that not having to construct an actual building and underground’s cooler air temperatures let Cavern compete on cost. “The economics of the underground allow us to provide more space for the money.”
Clune said that data centers typically charge as much as $1,200 a rack but that he charges $2,900 for 250 square feet — enough room for four racks.
“It’s when people come down here that the light goes on,” he said.
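As a rough check on the pricing Clune cites, the sketch below assumes the $2,900 suite holds exactly four racks and that the $1,200 figure is a directly comparable monthly per-rack rate; it ignores power, bandwidth and density differences.

```python
# Rough per-rack cost comparison based on the figures quoted above.
# Assumes the $2,900 suite holds exactly four racks and that $1,200 is a
# directly comparable monthly per-rack rate (both are assumptions).
suite_price = 2_900          # dollars per month for 250 sq ft
racks_per_suite = 4
typical_rack_price = 1_200   # dollars per month, typical above-ground rate

per_rack = suite_price / racks_per_suite
savings = 1 - per_rack / typical_rack_price

print(f"Underground per-rack cost: ${per_rack:,.0f}/month")   # $725/month
print(f"Savings vs. typical rate:  {savings:.0%}")            # ~40%
```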
Numerous area companies operate their own data centers, including some in underground space.
Overland Park-based Sprint Nextel Corp. has three data centers supporting network operations, with two built into earthen embankments, spokeswoman Melinda Tiemeyer said.
Other companies use underground caves to store computer data tapes.


What is SubTropolis?

SubTropolis was created through the mining of a 270-million-year-old limestone deposit. In the mining process, limestone is removed by the room and pillar method, leaving 25-foot square pillars that are on 65-foot centers and 40 feet apart.
The pillars’ even spacing, concrete flooring and 16-foot-high, smooth ceilings make build-to-suit facilities time- and cost-efficient for tenants. A tenant requiring 10,000 to one million square feet can be in their space within 150 days. SubTropolis is completely dry and brightly lit, with miles of wide, paved streets accessed at street level.
Hunt Midwest SubTropolis sets the standard for subsurface business developments.

Read more: Hunt Midwest seeks tenant for SubTech underground data center space - Denver Business Journal

Tuesday, August 3, 2010

USSHC Ultimate Underground Data Bunker


USSHC was founded in 2002. However, the process of creating the ultimate underground data bunker actually started in 1999 with the needs of an Internet Service Provider. It was found that truly geographically diverse IP transit could not be obtained from the local telephone carriers because of telephone tariffs. The larger Tier 1 network service providers could not offer higher reliability either, because they depend on the local carriers to deliver the local access to their networks. No matter how services were delivered via these traditional means, there was always a single point of failure.

There are solutions that use fiber rings to deliver services, but even fiber rings have single points of failure: The telephone exchange central offices.

The only way to have true IP redundancy is to have connections from multiple IP transit providers, entering from opposite directions, that use different local fiber networks owned by entirely different companies. Most data centers try to meet these needs using fiber paths from carriers not bound by telephone tariffs. However, this “solution” was accompanied by a host of other problems.
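Put another way, two transit paths only count as redundant if they share no element at all: no common provider, fiber network, entry direction or central office. A minimal sketch of that check, using entirely hypothetical provider and facility names:

```python
# Minimal sketch: two transit paths are only truly redundant if they share
# no provider, fiber network, entry direction or central office.
# All names below are hypothetical examples, not real providers.

def is_redundant(path_a: dict, path_b: dict) -> bool:
    """Return True if the two paths share no single point of failure."""
    return all(path_a[key] != path_b[key] for key in path_a)

path_a = {"provider": "TransitCo-1", "fiber": "FiberNet-East",
          "entry": "north", "central_office": "CO-Alpha"}
path_b = {"provider": "TransitCo-2", "fiber": "FiberNet-West",
          "entry": "south", "central_office": "CO-Beta"}

print(is_redundant(path_a, path_b))  # True: no shared element
```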

In terms of physical security, every data center investigated shared a common building with other tenants. Either the data center was an afterthought added to an existing building (a server closet that grew into dedicated space), or it was purpose-built but with extensive office space and other common space in the same facility. Both types of shared structure increase the risk of collateral damage from fires elsewhere in the building, and the security risk posed by the large number of people sharing the facility. The best data center fire suppression system in the world doesn’t do a bit of good if the office building above it burns down on top of the data center.

Major shortcomings in physical security were also a recurring theme during the search for a data center. Many facilities share common space with other businesses. Even where a building is partitioned off, common mechanical plant such as chillers and electrical switchgear is typically shared with other tenants. Building-wide security is difficult to obtain, not only because different tenants share common areas, but because reception areas are often open to the public and entirely unsecured. Most “secure” server spaces were found to be separated from public areas by walls made of sheetrock, and some of those walls contained windows. We wanted something a little more secure than two layers of half-inch-thick sheetrock or a single pane of glass.

Several facilities claimed to be “hardened” and able to withstand the force of a tornado, with walls of reinforced concrete, at least one steel door and no windows to the outside world. Further investigation revealed that, at most, they were only partially below ground (walk-out basements), and all lacked physical plant equipment designed to operate through major contingencies. They also shared office space in the same building. Time and time again, we found that 100% of these data centers had their heat rejection and standby power systems above ground, and in no case were the generators or air conditioning systems “hardened” at all. The servers might survive a direct hit from even a small EF-1 tornado, but they would not remain operational for any length of time once the uninterruptible power supply batteries were exhausted. Even if the connectivity and the building itself survived, and a generator was tough enough to operate after a storm or tornado, the external cooling would not be. Even with power, the servers would quickly overheat, leading to downtime and possible data corruption or loss.

Some data centers that claimed to be “hardened” were found to require a constant feed of municipal water for on-site cooling. With all of the redundancy built into the site, the whole data center could still fail because of a non-redundant source of cooling water that could be interrupted by a pipe break, power outage, earthquake, or even simple maintenance. Worse, a broken pipe could flood the facility with a high-pressure torrent of municipal water.

Then there were the data centers located in flood plains. We were shocked at just how many data centers were located in flood plains. More alarming was the “head in the clouds” attitude that most had about the flood plain being entirely acceptable because the data center was on an upper floor.

The harder we looked, and the more we uncovered, the more discouraged we became. Eventually, however, USSHC solved all of these problems, and then some.

The idea behind USSHC was to provide a safe, secure place to house an Internet Service Provider that would be immune from any form of disaster, “deep in an Iowa underground bunker” where the power would always stay on, and the servers would always stay connected, fully online, and fully operational, despite what was going on in the outside world.

Since it went live in 2002, the facility has been expanded to allow other companies to share the same level of redundancy, security, and performance.

In 2009, USSHC opened the GWAB (Geek with a box data suite) to offer an economical alternative to our premium data center colocation offerings.

Monday, August 2, 2010

Innovations for the Data Center

6 Cool Innovations for the Data Center


1. Fiber optics with a twist

The success of the HDMI cable in consumer electronics has proved that having a common cable that works with Blu-ray players, HDTV sets and just about any set-top box helps remove clutter and confusion. Intel has developed Light Peak following this same approach. It's a fiber-optic cable that will first be used with laptop and desktop computers to reduce clutter and speed transmission, but it could also make its way into the data center as a way to connect servers and switches.

The 3.2mm cable, which is about as thin as a USB cable, can be up to 100 feet long. Intel has designed a controller that will sit inside a computer, and cables are currently in production. Third parties, including Hewlett-Packard and Dell, will start making computers with Light Peak fiber-optic cables in them by 2011, according to Intel.

For data centers, Light Peak presents a few interesting possibilities. Fiber optics have been in the data center since the early 1990s, when IBM introduced its Escon (Enterprise Systems Connection) product line; it connects mainframes at 200Mbit/sec. Light Peak differs in that it runs at 10Gbit/sec., and Intel claims that the components will be less expensive and lighter-weight than existing fiber-optic products.
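To put those two data rates side by side, here is a back-of-the-envelope comparison of transfer times for a hypothetical 1TB dataset, treating both figures as raw line rates and ignoring protocol overhead:

```python
# Back-of-the-envelope transfer-time comparison, ignoring protocol overhead.
# Escon: 200 Mbit/s; Light Peak: 10 Gbit/s (raw line rates as cited above).
escon_bps = 200e6
light_peak_bps = 10e9

dataset_bits = 1e12 * 8  # a hypothetical 1 TB dataset

print(f"Escon:      {dataset_bits / escon_bps / 3600:.1f} hours")   # ~11.1 hours
print(f"Light Peak: {dataset_bits / light_peak_bps / 60:.1f} minutes")  # ~13.3 minutes
print(f"Speedup:    {light_peak_bps / escon_bps:.0f}x")             # 50x
```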

"Intel claims Light Peak will be less complex and easier to manage by eliminating unnecessary ports, and deliver the higher throughput required by high performance e-SATA and DisplayPort systems," says Charles King, an analyst at Pund-IT in Concord, Mass. "If the company delivers on these promises, Light Peak could simplify life for data center managers plagued by installing, managing and troubleshooting miles of unruly optical cables."

Success here will depend on "how willingly developers and vendors" embrace Light Peak and build products around it, King explains.

2. Submerged liquid cooling and horizontal racks

Liquid cooling for data centers is not a new concept, of course, but Green Revolution Cooling has added a new twist. For starters, the rack is turned on its side, which helps with cable management and makes it easier for administrators to access equipment, and the horizontal rack is surrounded by liquid. A new coolant, called GreenDEF, is made from mineral oil that is nontoxic, costs less than other liquid-cooling methods and, unlike water, is not electrically conductive, according to a GR Cooling spokesman.

"The liquid actually moves through the floor and circulates up through all of the computing nodes," says Tommy Minyard, director of advanced computing systems at the Texas Advanced Computing Center, part of the University of Texas at Austin. This means more-effective cooling because heat is moved away from the processors via cables on the sides and under the rack, he explains. Minyard is installing GR Cooling systems in his data center and expects a 30% to 40% savings compared to traditional air-cooled systems.

Data center cooling device

Green Revolution uses horizontal devices for racks, along with a new type of coolant, to reduce energy costs in a data center.

Minyard says liquid cooling has made a rebound lately, recalling the days when Cray offered submerged cooling systems, and he notes that even IBM is moving back into chilled-liquid-cooling some compute nodes.

Pund-IT's King says a major issue is that enterprises have fought the return of liquid cooling in the data center because of the high costs of implementing the technology and because it is unproven as a widespread option.

"Liquid cooling usually costs much more to install upfront than air cooling," says Mark Tlapak, GR Cooling's co-founder. "Compared to air, every liquid cooling system has some added nuance, such as electric conductivity with water-based cooling systems. " But, he says, "spring a leak in the water systems, and you lose electrical equipment." Still, for Minyard, GR Cooling is an ideal fit: His data center gravitates toward dense, powerful systems that pack intense power into small spaces, such as IBM blade servers and the latest Intel processors. The Ranger supercomputer, for example, uses 30kw of power per rack.

3. Several broadband lines combined into one

Enterprises can spend many thousands of dollars on fiber-optic lines and multiple T1 connections, but at least one emerging technology is aiming to provide a lower-cost alternative.

Mushroom Networks' Truffle Broadband Bonding Network Appliance creates one fast connection out of up to six separate lines, a technique known as bonding. The Truffle combines the bandwidth of all available broadband lines into one giant pipe, with download speeds of up to 50Mbit/sec., the company says. Internet access may be through a DSL modem, cable modem, T1 line or just about any broadband connection.

This helps increase overall throughput, and acts as a backup mechanism, too. If one of the "bonded" lines fails, the Truffle connection just keeps running with the other available lines.

Steve Finn, a television producer in Kenya, uses Mushroom Networks' appliance for a program called Africa Challenge that is broadcast to eight African countries. He relies heavily on broadband to produce the content and at one time paid as much as $4,000 per month for connectivity. Speeds vary depending on latency and traffic, but he says the bonded speed is generally about four times faster (four lines times the speed of each individual line), at about half the cost of one equivalent high-speed line.
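The arithmetic behind bonding is simple: aggregate capacity is roughly the sum of the healthy lines, and a failed line just drops out of the pool. A minimal sketch with hypothetical line speeds:

```python
# Minimal sketch of broadband bonding: aggregate capacity is roughly the sum
# of the healthy lines, and a failed line simply drops out of the pool.
# Line names and speeds are hypothetical examples.
lines = {"dsl_1": 5.0, "dsl_2": 5.0, "cable": 20.0, "t1": 1.5}  # Mbit/s
failed = {"cable"}  # suppose the cable line goes down

bonded = sum(speed for name, speed in lines.items() if name not in failed)
print(f"Bonded capacity with {len(lines) - len(failed)} healthy lines: "
      f"{bonded:.1f} Mbit/s")
```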

Frank J. Bernhard, an analyst at Omni Consulting Group, says Mushroom Networks fills a niche for companies that do not want to pay the exorbitant fees for multiple T1 or T3 connections but still need reliable and fast Internet access. Other companies, including Cisco Systems, offer similar bonding technology, but at a greater cost and with more complexity at install, which means the technique has not yet been widely used.

4. Multiple data centers more easily connected

In a very large enterprise, the process of connecting multiple data centers can be a bit mind-boggling. There are security concerns, Ethernet transport issues, operational problems related to maintaining the fastest speed between switches at branch sites, and new disaster planning considerations due to IT operations running in multiple locations.

Cisco's new Overlay Transport Virtualization, or OTV, connects multiple data centers in a way that seems really easy compared with the roll-your-own process most shops have traditionally used. Essentially a transport technology for Layer 2 networking, the software updates network switches, including the Cisco Nexus 7000, to connect data centers in different geographic locations.

The OTV software costs about $25,000 per license and uses the maximum bandwidth and connections already established between data centers.

There are other approaches for linking multiple data centers, a Cisco technical spokesman acknowledges, including those involving Multiprotocol Label Switching (MPLS) or, before that, frame-relay and Asynchronous Transfer Mode protocols.

But unlike some of the older approaches, the spokesman explains, Cisco OTV does not require any network redesign or special services in the core, such as label switching. OTV is simply overlaid onto the existing network, inheriting all the benefits of a well-designed IP network while maintaining the independence of the Layer 2 data centers being interconnected.
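Conceptually, an overlay of this kind wraps Layer 2 frames in IP and hands them to the existing routed network between sites. The toy sketch below illustrates that general encapsulation idea only; it is not Cisco's OTV implementation or configuration:

```python
# Toy illustration of the general L2-over-IP overlay idea: wrap an Ethernet
# frame in an IP envelope addressed to the remote data center's edge device.
# This is a conceptual sketch, not Cisco's OTV implementation.
from dataclasses import dataclass

@dataclass
class EthernetFrame:
    dst_mac: str
    src_mac: str
    payload: bytes

@dataclass
class OverlayPacket:
    outer_src_ip: str   # local data center edge
    outer_dst_ip: str   # remote data center edge
    inner_frame: EthernetFrame

def encapsulate(frame: EthernetFrame, local_edge: str, remote_edge: str) -> OverlayPacket:
    """Tunnel a Layer 2 frame across the routed core between sites."""
    return OverlayPacket(local_edge, remote_edge, frame)

frame = EthernetFrame("aa:bb:cc:00:00:02", "aa:bb:cc:00:00:01", b"vm-traffic")
packet = encapsulate(frame, "192.0.2.1", "198.51.100.1")
print(packet.outer_dst_ip, packet.inner_frame.dst_mac)
```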

Terremark, a cloud service provider based in Miami, uses Cisco OTV to link 13 data centers in the U.S., Europe and Latin America. The company says there is a significant savings compared with taking a "do-it-yourself" approach to linking data centers, due to reduced complexity and OTV's automated fail-over system that helps multiple data centers act as one if disaster strikes.

"Implementing the ability to balance loads and/or enact emergency fail-over operations between data centers traditionally involved a dedicated network and complex software," says Norm Laudermilch, Terremark's senior vice president of infrastructure. "With Cisco OTV, Ethernet traffic from one physical location is simply encapsulated and tunneled to another location to create one logical data center."

Virtual machines from one location can now use VMware's VMotion, for instance, to automatically move to another physical location in the event of a failure.

5. Priority-based e-mail storage

Communication is what drives a business, but too often the bits and bytes of an e-mail exchange are treated in the data center as just another data set that needs to be archived. Messagemind automatically determines which e-mails can be safely archived on lower-cost systems.

The tool analyzes all company communication -- tracking which messages end users read, delete or save -- and then groups them according to priority level.

Data center administrators can use that information to store e-mail based on priority level, which in turn can save money. For example, instead of storing all e-mails in one high-cost archive, messages marked as low priority -- based again on the end user's clicking behavior -- can be stored in lower-cost storage systems. High-priority e-mail can be stored on higher-performance, and higher-cost, media.
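The tiering logic described above amounts to a mapping from observed user behavior to a storage class. A hypothetical sketch of such a mapping (not Messagemind's actual algorithm):

```python
# Hypothetical sketch of priority-based e-mail tiering, not Messagemind's
# actual algorithm: messages users save or read go to faster (more expensive)
# storage, messages they ignore or delete go to cheap archive storage.
def storage_tier(was_read: bool, was_saved: bool, was_deleted: bool) -> str:
    if was_saved:
        return "tier-1-fast"       # high priority: high-performance media
    if was_read and not was_deleted:
        return "tier-2-standard"
    return "tier-3-archive"        # low priority: low-cost archive

print(storage_tier(was_read=True, was_saved=True, was_deleted=False))   # tier-1-fast
print(storage_tier(was_read=False, was_saved=False, was_deleted=True))  # tier-3-archive
```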

That same behind-the-scenes analysis can be used outside the data center, rolled up into a dashboard that managers and end users can view to help them on projects. For example, business units can view e-mail diagrams that show who is communicating effectively on a project and who seems to be lagging behind and rarely contributing.

Pund-IT's King says Messagemind is an intriguing prospect because e-mail has become such a wasteland of broken conversations and disconnected project discussions. Managing e-mail becomes even more painful if a company is subject to litigation, and e-mail becomes part of the legal discovery process.

"Even the best e-mail solutions require employees to manage their messages," says King. "If it works as advertised, I could see this catching hold in enterprises. By managing e-mail more effectively -- and automatically -- Messagemind's solution could take a great deal of weight off the shoulders of data center admins struggling under ever-increasing volumes of stored messages."

6. User accounts virtualized for easier restoration

Virtualization has become the buzzword of the past decade, but it usually involves abstracting an operating system from a server or data from your storage allocations. AppSense is virtualization software for user accounts. It extracts user profile settings from Windows applications and maintains them separately. That means that if an application is updated or changed, the user information is still available. If user settings are corrupted or lost, administrators can restore the settings with a minimum of bother.
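The underlying pattern is to keep user settings in a store outside the application and the OS image, so they can be reapplied or rolled back independently. A hypothetical sketch of that save/restore pattern, not AppSense's actual API:

```python
# Hypothetical sketch of user-settings virtualization, not AppSense's API:
# profile settings live in an external store keyed by user and application,
# so a corrupted or upgraded application can have its settings restored
# without rebuilding the whole user profile.
import copy

class ProfileStore:
    def __init__(self):
        self._store = {}  # (user, app) -> settings dict

    def save(self, user: str, app: str, settings: dict) -> None:
        self._store[(user, app)] = copy.deepcopy(settings)

    def restore(self, user: str, app: str) -> dict:
        return copy.deepcopy(self._store.get((user, app), {}))

store = ProfileStore()
store.save("student42", "word_processor", {"theme": "dark", "autosave": True})
# ...application upgrade corrupts the local settings...
settings = store.restore("student42", "word_processor")
print(settings)  # {'theme': 'dark', 'autosave': True}
```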

Landon Winburn, a software systems specialist at the University of Texas Medical Branch in Galveston, Texas, uses AppSense to virtualize user-account profiles for his 3,000 students. Winburn says the university used to manage user settings manually, taking about 40 to 60 calls per week related to log-ins. The university also had five to 10 corruptions per day related to user settings.

"Before AppSense, the only solution for a corrupt profile was to delete the profile and have the user start again from scratch for all applications," says Winburn.

But now, with AppSense's ability to restore these settings, the university doesn't have to directly address the problems, since they are handled automatically. By virtualizing accounts, the university could also increase the number of XenApp Server accounts from 40 user profiles per server to about 80.

John Brandon is a veteran of the computing industry, having worked as an IT manager for 10 years and a tech journalist for another 10. He has written more than 2,500 feature articles and is a regular contributor to Computerworld.

Thursday, January 7, 2010

Data Center Site Selection

Mike Manos discusses data center site selection: you need to “kick the dirt” to find what is real.
(Photo: Mike Manos at the Quincy, Wash., site: http://loosebolts.files.wordpress.com/2009/12/mikeatquincy_thumb.jpg)
At Gartner’s Data Center Conference, Mike Manos made an excellent point that “75% of the data center costs are affected by site selection.” Great architecture is designed to a site’s characteristics, but the status quo is to design data centers based on past experience. Green data centers need to be designed to fit their site’s characteristics.

Kickin’ Dirt
by Mike Manos
I recently got an interesting note from Joel Stone, the Global Operations Chief at Global Switch. As some of you might know Joel used to run North American Operations for me at Microsoft. I guess he was digging through some old pictures and found this old photo of our initial site selection trip to Quincy, Washington.

As you can see, the open expanse of farmland behind me ultimately became Microsoft’s showcase facilities in the Northwest. In fact, you can even see some farm equipment just behind me. It got me reminiscing about that time and how exciting and horrifying that experience can be.

Kicking the Dirt.

Many people I speak to at conferences generally think that the site selection process is largely academic. Find the right intersection of a few key criteria and locate areas on a map that seem to fit those requirements. In fact, the site selection strategy that we employed took many different factors into consideration each with its own weight leading ultimately to a ‘heat map’ in which to investigate possible locations.

Even with some of the brightest minds, and substantial research being done, it’s interesting to me that ultimately the process breaks down into something I call ‘Kickin Dirt’. Those ivory-tower exercises ultimately help you narrow down your decisions to a few locations, but the true value of the process is when you get out to the location itself and ‘kick the dirt around’. You get a feel for the infrastructure, local culture, and those hard-to-quantify factors that no modeling software can tell you.

Mike makes an excellent point for the decision on site selection.

Once you have gone out and kicked the dirt, it’s decision time. The decision you make, backed by all the data and process in the world, backed by personal experience of the locations in question, ultimately nets out to someone making a decision. My experience is that this is something that rarely works well if left up to committee. At some point someone needs the courage and conviction, and in some cases outright insanity, to make the call.

Are you willing to take a risk in site selection? Most aren’t. But the leaders are, and they are the ones who go first where others haven’t and end up with lower costs. Mike has said the cost of the land was a great deal because no one else thought of it as a data center site. Google is among the others who have this down.


Google and Yahoo are the industry leaders, as they have both "kicked the dirt" in far better places. Google has sited in Oregon (excellent tax incentives + cheap energy + green power), North Carolina (best-of-breed incentive package + reasonable power) and Finland (great existing asset + tax incentives + green power). Yahoo has built in Quincy (yes, they started first), Omaha (special tax incentives + cheap energy), Switzerland (green energy + tax holidays) and Buffalo (same).


Underground Secure Data Center Operations

Technology based companies are building new data centers in old mines, caves, and bunkers to host computer equipment below the Earth's surface.

Underground secure data center operations are on an upward trend.

Operations have launched in inactive gypsum mines, caves, abandoned coal mines, solid limestone mines deep below the bedrock, and decommissioned nuclear bunkers deep underground, all secure from disasters both natural and man-made.

These facilities have advantages over traditional data centers, such as increased security, lower cost, scalability and ideal environmental conditions. Their economic model works, despite the proliferation of data center providers, thanks largely to the natural qualities inherent in underground data centers.

With 10,000 to over 1,000,000 square feet available, there is ample space to be subdivided to accommodate the growth needs of clients. In addition, underground data centers have an effectively unlimited supply of naturally cool, 50-degree air, providing ideal temperature and humidity for computer equipment with minimal HVAC cost.

They are among the most secure data centers in the world and nearly unparalleled in terms of square footage, scalability and environmental control.

Yet, while the physical and cost benefits of being underground make them attractive, operators have also had to invest heavily in high-speed connectivity and redundant power and fiber systems to ensure their operations are not just secure, but also state-of-the-art.

They initially focused on providing disaster recovery solutions and backup colocation services.

Some clients lease space for their own servers, while the operator provides secure facilities, power and bandwidth. Operators offer redundant power sources and multiple high-speed Internet connections over OC-level circuits connected to a SONET ring, which is linked to outside connectivity providers through redundant fiber cables.

Underground data center companies augment their core services to include disaster recovery solutions, call centers, NOC services, wireless connectivity and more.

Strategic partnerships with international and national information technology companies enable them to offer technology solutions ranging from system design and implementation to the sale of software and equipment.

The natural qualities of underground data centers allow them to offer the best of both worlds: premier services and security at highly competitive rates.

Underground data centers began to be established in the 1990s but really came into their own after the September 11 attacks in 2001, when their founders realized that former mines and bunkers offered optimal conditions for a data center: superior environmental conditions for electronic equipment, nearly invulnerable security and proximity to power grids.

Adam Couture, a Massachusetts-based analyst for Gartner Inc., said underground data centers could find a niche serving businesses that want to reduce their vulnerability to any future attacks. Some underground data center fact sheets claim the facility would protect hosted equipment from a cruise missile explosion or plane crash.

After the September 11 attacks in 2001, companies went back and re-evaluated their business-continuity plans. That doesn't mean everybody changed them, but everybody revisited them in the wake of what happened, and the underground data center may be part of the answer.

Comparison chart: Underground data centers

Five facilities compared
InfoBunker, LLC
Location: Des Moines, Iowa*
In business since: 2006
Security/access control: Biometric; keypad; pan, tilt and zoom cameras; door event and camera logging
Distance underground: 50 feet
Ceiling height in data center space: 16 feet
Original use: Military communications bunker
Total data center space: 34,000 sq. ft.
Total space in facility: 65,000 sq. ft.
Data center clients include: Insurance company, telephone company, teaching hospital, financial services, e-commerce, security monitoring/surveillance, veterinary, county government
Number of hosted primary or backup data centers: 2
Services offered: Leased data center space, disaster recovery space, wholesale bandwidth
Distance from nearest large city: Des Moines, about 45 miles*
Location of cooling system, including cooling towers: Underground
Location of generators and fuel tanks: Underground

The Bunker
Location: Dover, UK
In business since: 1999
Security/access control: CCTV, dogs, guards, fence
Distance underground: 100 feet
Ceiling height in data center space: 12 to 50 feet
Original use: Royal Air Force military bunker
Total data center space: 50,000 sq. ft.
Total space in facility: 60,000 sq. ft.
Data center clients include: Banking, mission-critical Web applications, online trading
Number of hosted primary or backup data centers: 50+
Services offered: Fully managed platforms, partly managed platforms, co-location
Distance from nearest large city: Canterbury, 10 miles; London, 60 miles
Location of cooling system, including cooling towers: Underground
Location of generators and fuel tanks: Above ground and below ground

Montgomery Westland
Location: Montgomery, Tex.
In business since: 2007
Security/access control: Gated, with access control card, biometrics and a 24x7 security guard
Distance underground: 60 feet
Ceiling height in data center space: 10 feet
Original use: Private bunker designed to survive a nuclear attack. Complex built in 1982 by Louis Kung (nephew of Madame Chiang Kai-shek) as a residence and headquarters for his oil company, including a secret, 40,000-square-foot nuclear fallout shelter. The office building uses bulletproof glass on the first floor and reception area and 3-inch concrete walls with fold-down steel gun ports to protect the bunker 60 feet below.
Total data center space: 28,000 sq. ft., plus 90,000 sq. ft. of office space in a hardened, above-ground building
Total space in facility: 28,000 sq. ft.
Data center clients include: NASA/T-Systems, Aker Solutions, Continental Airlines, Houston Chronicle, Express Jet
Number of hosted primary or backup data centers: 13
Services offered: Disaster recovery/business continuity, co-location and managed services
Distance from nearest large city: Houston, 40 miles
Location of cooling system, including cooling towers: Above and below ground. All cooling towers above ground in secure facility.
Location of generators and fuel tanks: Two below ground, four above ground. All fuel tanks buried topside.

Cavern Technologies
Location: Lenexa, Kan.
In business since: 2007
Security/access control: Security guard, biometric scan, smart card access and motion detection alarms
Distance underground: 125 feet
Ceiling height in data center space: 16 to 18 feet
Original use: Limestone mine originally developed by an asphalt company that used the materials in road pavement
Total data center space: 40,000 sq. ft.
Total space in facility: 3 million sq. ft.
Data center clients include: Healthcare, insurance, universities, technology, manufacturing, professional services
Number of hosted primary or backup data centers: 26
Services offered: Data center space leasing, design, construction and management
Distance from nearest large city: Kansas City, 15 miles
Location of cooling system, including cooling towers: Air-cooled systems located underground. Cooling towers located outside.
Location of generators and fuel tanks: Underground

Iron Mountain The Underground
Location: Butler County, Penn.*
In business since: Opened by National Storage in 1954. Acquired by Iron Mountain in 1998.
Security/access control: 24-hour armed guards, visitor escorts, magnetometer, x-ray scanner, closed-circuit television, badge access and other physical and electronic measures for securing the mine's perimeter and vaults
Distance underground: 220 feet
Ceiling height in data center space: 15 feet (10 feet from raised floor to dropped ceiling)
Original use: Limestone mine
Total data center space: 60,000 sq. ft.
Total space in facility: 145 acres developed; 1,000 acres total
Data center clients include: Marriott International Inc., Iron Mountain, three U.S. government agencies
Number of hosted primary or backup data centers: 5
Services offered: Data center leasing, design, construction and maintenance services
Distance from nearest large city: Pittsburgh, 55 miles
Location of cooling system, including cooling towers: Chillers located above ground to take advantage of "free cooling." Pumps located underground.
Location of generators and fuel tanks: Underground

*Declined to cite exact location/distance for security reasons.