There are solutions that use fiber rings to deliver services, but even fiber rings have single points of failure: the telephone exchange central offices.
The only way to achieve true IP redundancy is to take connections from multiple IP transit providers, entering from opposite directions, over different local fiber networks owned by entirely different companies. Most data centers try to meet these needs using fiber paths from carriers not bound by telephone tariffs. However, this “solution” was accompanied by a host of other problems.
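Even with two genuinely diverse paths, each one still has to be watched continuously. As a rough illustration only (not a description of USSHC's actual network), a minimal monitoring sketch might ping one gateway per transit provider; the provider names and addresses below are hypothetical placeholders drawn from the reserved documentation ranges:

```python
#!/usr/bin/env python3
# Minimal sketch: verify that two *independent* transit paths both answer.
# Gateway addresses and provider names are hypothetical placeholders.
import subprocess

# One gateway per transit provider, each reached over a different
# local fiber network entering the building from a different direction.
TRANSIT_GATEWAYS = {
    "provider-a": "192.0.2.1",     # TEST-NET-1 documentation address
    "provider-b": "198.51.100.1",  # TEST-NET-2 documentation address
}

def path_is_up(gateway: str) -> bool:
    """Return True if the gateway answers one ping within 2 seconds (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", gateway],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for provider, gateway in TRANSIT_GATEWAYS.items():
        status = "up" if path_is_up(gateway) else "DOWN"
        print(f"{provider} ({gateway}): {status}")
```

Of course, a check like this only proves that both paths answer today; it cannot prove that the two fibers avoid sharing a conduit or central office somewhere upstream, which is exactly why the physical investigation described below mattered.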
In terms of physical security, every data center investigated shared a common building with other tenants. Either the data center was an afterthought, added to an existing building (a server closet that grew into dedicated space), or it was purpose built but with a great deal of office and other common space in the same facility. Both types of shared structure increase the risk of collateral damage from fires elsewhere in the building, and the security risks that come with large numbers of people sharing the facility. The best data center fire suppression system in the world doesn’t do a bit of good if the office building above it burns down on top of the data center.
Major shortcomings in physical security were also a recurring theme during the search for a data center. Many facilities share common space with other businesses. Even when a data center is partitioned off from the rest of a building, common mechanical systems such as the chiller plant and electrical service are typically shared with other tenants. Building-wide security is difficult to obtain, not only because different tenants share common areas, but because reception areas are often open to the public and entirely unsecured. Most “secure” server spaces were found to be separated from public areas by walls made of sheet rock! Some of those sheet rock walls even contained windows! We wanted something a little more secure than two layers of half-inch-thick sheet rock, or a single pane of glass.
We found several facilities that claimed to be “hardened” and able to withstand the force of a tornado, with walls of reinforced concrete and at least one steel door with no windows to the outside world. Further investigation revealed that, at most, they were only partially below ground (walk-out basements), and all lacked physical plant equipment designed to operate during major contingencies. They also shared their buildings with office space. In every case, the heat rejection and standby power systems were above ground, and in no case were the generators or air conditioning systems “hardened” at all. The servers might survive a direct hit from even a small EF-1 tornado, but they would not remain operational for long once the uninterruptible power supply batteries were exhausted. Even if the connectivity and the building itself survived, and a generator was tough enough to run after a storm or tornado, the external cooling equipment would not be. Even with power, the servers would quickly overheat, leading to downtime and possible data corruption or loss.
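How quickly is “quickly”? A back-of-envelope estimate makes the point. The figures below are illustrative assumptions (a 50 kW server load in a 200 cubic meter sealed room), not measurements from any facility we visited:

```python
# Back-of-envelope sketch of how fast a sealed server room heats up once
# cooling is lost. All figures are illustrative assumptions.

heat_load_w = 50_000        # assumed IT load: 50 kW of servers
room_volume_m3 = 200        # assumed room size
air_density = 1.2           # kg/m^3 at room temperature
air_specific_heat = 1005    # J/(kg*K)

# Heat capacity of the air alone (ignores racks, walls, and floor,
# which buy some extra time but do not change the conclusion).
air_heat_capacity = room_volume_m3 * air_density * air_specific_heat  # J/K

rise_per_second = heat_load_w / air_heat_capacity  # K/s
print(f"~{rise_per_second * 60:.0f} degrees C rise per minute")  # roughly 12
```

Even allowing for the thermal mass of the racks and walls, which the sketch ignores, the margin is minutes, not hours.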
Some data centers that claimed to be “hardened” were found to require a constant feed of municipal water for on-site cooling. With all of the redundancy built into the site, the whole data center could still fail because of a single, non-redundant source of cooling water, one that could be interrupted by a pipe break, power outage, earthquake, or even simple maintenance. Worse, a water pipe break could flood the facility with a high-pressure torrent of municipal water.
Then there were the data centers located in flood plains. We were shocked at just how many there were. More alarming still was the “head in the clouds” attitude most of them had: that a flood plain location was entirely acceptable because the data center was on an upper floor.
The harder we looked, and the more we uncovered, the more discouraged we became. Eventually, however, USSHC solved all of these problems, and then some.
The idea behind USSHC was to provide a safe, secure place to house an Internet Service Provider that would be immune from any form of disaster, “deep in an Iowa underground bunker” where the power would always stay on, and the servers would always stay connected, fully online, and fully operational, despite what was going on in the outside world.
Since it went live in 2002, the facility has been expanded to allow other companies to share the same level of redundancy, security, and performance.
In 2009, USSHC opened the GWAB (Geek with a box data suite) to offer an economical alternative to our premium data center colocation offerings.