Monday, August 2, 2010

Innovations for the Data Center

6 Cool Innovations for the Data Center


1. Fiber optics with a twist

The success of the HDMI cable in consumer electronics has proved that having a common cable that works with Blu-ray players, HDTV sets and just about any set-top box helps remove clutter and confusion. Intel has developed Light Peak following the same approach. It's a fiber-optic cable technology that will first be used with laptop and desktop computers to reduce clutter and to speed transmission, but it could also make its way to the data center as a way to connect servers and switches.

The 3.2mm cable, which is about as thin as a USB cable, can be up to 100 feet long. Intel has designed a controller that will sit inside a computer, and cables are currently in production. Third parties, including Hewlett-Packard and Dell, will start making computers with Light Peak fiber-optic cables in them by 2011, according to Intel.

For data centers, Light Peak presents a few interesting possibilities. Fiber optics have been in the data center since the early 1990s, when IBM introduced its Escon (Enterprise Systems Connection) product line, which connects mainframes at 200Mbit/sec. Light Peak differs in that it runs at 10Gbit/sec., and Intel claims that the components will be less expensive and lighter-weight than existing fiber-optic products.
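To put those numbers in perspective, here is a rough back-of-the-envelope comparison; the 100GB transfer size is an arbitrary example for illustration, not a figure from Intel or IBM:

```python
# Quick throughput comparison; the 100GB transfer size is an arbitrary example.
escon_mbit_per_sec = 200          # Escon link speed cited above
light_peak_mbit_per_sec = 10_000  # Light Peak's 10Gbit/sec.

payload_gb = 100                       # assumed transfer size
payload_mbit = payload_gb * 8 * 1000   # decimal gigabytes to megabits

for name, rate in (("Escon", escon_mbit_per_sec), ("Light Peak", light_peak_mbit_per_sec)):
    minutes = payload_mbit / rate / 60
    print(f"{name}: about {minutes:.1f} minutes to move {payload_gb}GB")
```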

"Intel claims Light Peak will be less complex and easier to manage by eliminating unnecessary ports, and deliver the higher throughput required by high performance e-SATA and DisplayPort systems," says Charles King, an analyst at Pund-IT in Concord, Mass. "If the company delivers on these promises, Light Peak could simplify life for data center managers plagued by installing, managing and troubleshooting miles of unruly optical cables."

Success here will depend on "how willingly developers and vendors" embrace Light Peak and build products around it, King explains.

2. Submerged liquid cooling and horizontal racks

Liquid cooling for data centers is not a new concept, of course, but Green Revolution Cooling has added a new twist. For starters, the rack is turned on its side, which helps with cable management and makes it easier for administrators to access equipment, and the horizontal rack is submerged in liquid coolant. The new coolant, called GreenDEF, is made from mineral oil that is nontoxic, costs less than other liquid-cooling methods and, unlike water, is not electrically conductive, according to a GR Cooling spokesman.

"The liquid actually moves through the floor and circulates up through all of the computing nodes," says Tommy Minyard, director of advanced computing systems at the Texas Advanced Computing Center, part of the University of Texas at Austin. This means more-effective cooling because heat is moved away from the processors via cables on the sides and under the rack, he explains. Minyard is installing GR Cooling systems in his data center and expects a 30% to 40% savings compared to traditional air-cooled systems.


Green Revolution uses horizontal devices for racks, along with a new type of coolant, to reduce energy costs in a data center.

Minyard says liquid cooling has made a rebound lately, recalling the days when Cray offered submerged cooling systems, and he notes that even IBM is moving back to chilled-liquid cooling for some compute nodes.

Pund-IT's King says a major issue is that enterprises have fought the return of liquid cooling in the data center because of the high costs of implementing the technology and because it is unproven as a widespread option.

"Liquid cooling usually costs much more to install upfront than air cooling," says Mark Tlapak, GR Cooling's co-founder. "Compared to air, every liquid cooling system has some added nuance, such as electric conductivity with water-based cooling systems. " But, he says, "spring a leak in the water systems, and you lose electrical equipment." Still, for Minyard, GR Cooling is an ideal fit: His data center gravitates toward dense, powerful systems that pack intense power into small spaces, such as IBM blade servers and the latest Intel processors. The Ranger supercomputer, for example, uses 30kw of power per rack.

3. Several broadband lines combined into one

Enterprises can spend many thousands of dollars on fiber-optic lines and multiple T1 connections, but at least one emerging technology is aiming to provide a lower-cost alternative.

Mushroom Networks' Truffle Broadband Bonding Network Appliance creates one fast connection out of up to six separate lines, a technique known as bonding. The Truffle combines the bandwidth of all available broadband lines into one giant pipe, with download speeds of up to 50Mbit/sec., the company says. Internet access may be through a DSL modem, cable modem, T1 line or just about any broadband connection.

This helps increase overall throughput, and acts as a backup mechanism, too. If one of the "bonded" lines fails, the Truffle connection just keeps running with the other available lines.

Steve Finn, a television producer in Kenya, uses Mushroom Networks' appliance for a program called Africa Challenge that is broadcast to eight African countries. He relies heavily on broadband to produce the content and at one time paid as much as $4,000 per month for connectivity. Speeds vary depending on latency and traffic, but he says the bonded speed is generally about four times faster (four lines times the speed of each individual line), at about half the cost of one equivalent high-speed line.
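The arithmetic behind bonding is straightforward, as the sketch below illustrates; the line speeds are assumptions for the example, not Mushroom Networks specifications:

```python
# Illustrative only: line speeds are assumptions, not Mushroom Networks data.
def bonded_capacity(line_speeds_mbit):
    """Aggregate capacity is roughly the sum of the healthy lines."""
    return sum(line_speeds_mbit)

# Four hypothetical 12Mbit/sec. broadband lines bonded together.
lines = [12.0, 12.0, 12.0, 12.0]
print(bonded_capacity(lines))        # ~48Mbit/sec., about 4x a single line

# If one bonded line fails, traffic simply continues over the rest.
print(bonded_capacity(lines[:-1]))   # ~36Mbit/sec., degraded but still up
```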

Frank J. Bernhard, an analyst at Omni Consulting Group, says Mushroom Networks fills a niche for companies that do not want to pay the exorbitant fees for multiple T1 or T3 connections but still need reliable and fast Internet access. Other companies, including Cisco Systems, offer similar bonding technology, but at greater cost and with more installation complexity, which is why the technique has not yet been widely adopted.

4. Multiple data centers more easily connected

In a very large enterprise, the process of connecting multiple data centers can be a bit mind-boggling. There are security concerns, Ethernet transport issues, operational problems related to maintaining the fastest speed between switches at branch sites, and new disaster planning considerations due to IT operations running in multiple locations.

Cisco's new Overlay Transport Virtualization, or OTV, connects multiple data centers in a way that seems really easy compared with the roll-your-own process most shops have traditionally used. Essentially a transport technology for Layer 2 networking, the software updates network switches, including the Cisco Nexus 7000, to connect data centers in different geographic locations.

The OTV software costs about $25,000 per license and takes advantage of the bandwidth and connections already established between data centers.

There are other approaches for linking multiple data centers, a Cisco technical spokesman acknowledges, including those involving Multiprotocol Label Switching (MPLS) or, before that, frame-relay and Asynchronous Transfer Mode protocols.

But unlike some of the older approaches, the spokesman explains, Cisco OTV does not require any network redesign or special services in the core, such as label switching. OTV is simply overlaid onto the existing network, inheriting all the benefits of a well-designed IP network while maintaining the independence of the Layer 2 data centers being interconnected.

Terremark, a cloud service provider based in Miami, uses Cisco OTV to link 13 data centers in the U.S., Europe and Latin America. The company says there is a significant savings compared with taking a "do-it-yourself" approach to linking data centers, due to reduced complexity and OTV's automated fail-over system that helps multiple data centers act as one if disaster strikes.

"Implementing the ability to balance loads and/or enact emergency fail-over operations between data centers traditionally involved a dedicated network and complex software," says Norm Laudermilch, Terremark's senior vice president of infrastructure. "With Cisco OTV, Ethernet traffic from one physical location is simply encapsulated and tunneled to another location to create one logical data center."

Virtual machines from one location can now use VMware's VMotion, for instance, to automatically move to another physical location in the event of a failure.
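Conceptually, the encapsulation Laudermilch describes works like the sketch below, which wraps an Ethernet frame inside an IP packet using the Scapy library. This is only an illustration of MAC-in-IP tunneling in general, not Cisco's actual OTV wire format, and the addresses and port number are made up:

```python
# Conceptual sketch of MAC-in-IP tunneling, in the spirit of what OTV does.
# NOT Cisco's OTV encapsulation format; addresses and port are made up.
from scapy.all import Ether, IP, UDP, Raw

# An ordinary Layer 2 frame from a server in data center A...
inner_frame = Ether(src="00:11:22:33:44:55", dst="66:77:88:99:aa:bb") / Raw(b"app data")

# ...is wrapped in an IP/UDP packet addressed to the edge device in data center B.
overlay_packet = (
    IP(src="198.51.100.1", dst="203.0.113.1")   # tunnel endpoints at each site
    / UDP(sport=8472, dport=8472)               # arbitrary port for illustration
    / Raw(bytes(inner_frame))                   # original Ethernet frame as payload
)

# At the remote site, the payload is unwrapped back into the original frame.
recovered = Ether(bytes(overlay_packet[UDP].payload))
assert recovered.dst == "66:77:88:99:aa:bb"
```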

5. Priority-based e-mail storage

Communication is what drives a business, but too often the bits and bytes of an e-mail exchange are treated in the data center as just another data set that needs to be archived. Messagemind automatically determines which e-mails can be safely archived on lower-cost systems.

The tool analyzes all company communication -- tracking which messages end users read, delete or save -- and then groups them according to priority level.

Data center administrators can use that information to store e-mail based on priority level, which in turn can save money. For example, instead of storing all e-mails in one high-cost archive, messages marked as low priority -- based again on the end user's clicking behavior -- can be stored in lower-cost storage systems. High-priority e-mail can be stored on higher-performance, and higher-cost, media.
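The sketch below illustrates the general idea of mapping user behavior to storage tiers; it is not Messagemind's actual scoring logic, and the signals and tier names are assumptions for the example:

```python
# Illustrative sketch of priority-based tiering; not Messagemind's actual
# scoring. The signals and tier names are assumptions for the example.
def storage_tier(message_events):
    """Map a message's observed user actions to a storage tier."""
    if "saved" in message_events or "replied" in message_events:
        return "high-performance archive"   # high priority, higher-cost media
    if "read" in message_events:
        return "standard archive"
    return "low-cost archive"               # unread, or deleted without reading

print(storage_tier({"read", "saved"}))   # high-performance archive
print(storage_tier({"deleted"}))         # low-cost archive
```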

That same behind-the-scenes analysis can be used outside the data center, rolled up into a dashboard that managers and end users can view to help them on projects. For example, business units can view e-mail diagrams that show who is communicating effectively on a project and who seems to be lagging behind and rarely contributing.

Pund-IT's King says Messagemind is an intriguing prospect because e-mail has become such a wasteland of broken conversations and disconnected project discussions. Managing e-mail becomes even more painful if a company is subject to litigation, and e-mail becomes part of the legal discovery process.

"Even the best e-mail solutions require employees to manage their messages," says King. "If it works as advertised, I could see this catching hold in enterprises. By managing e-mail more effectively -- and automatically -- Messagemind's solution could take a great deal of weight off the shoulders of data center admins struggling under ever-increasing volumes of stored messages."

6. User accounts virtualized for easier restoration

Virtualization has become the buzzword of the past decade, but it usually involves abstracting an operating system from a server or data from your storage allocations. AppSense is virtualization software for user accounts. It extracts user profile settings from Windows applications and maintains them separately. That means that if an application is updated or changed, the user information is still available. If user settings are corrupted or lost, administrators can restore the settings with a minimum of bother.
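The general idea, sketched below, is to keep each user's settings in a store that outlives any one application install; this is not AppSense's actual mechanism, and the in-memory store and names are purely illustrative:

```python
# Minimal sketch of keeping user settings separate from the application.
# Not AppSense's implementation; the store is an in-memory dict for illustration.
profile_store = {}  # (user, application) -> last known-good settings

def save_settings(user, app, settings):
    profile_store[(user, app)] = dict(settings)   # copy, decoupled from the app

def restore_settings(user, app):
    """Return the saved settings, e.g. after a profile is corrupted."""
    return dict(profile_store.get((user, app), {}))

save_settings("student42", "clinical_app", {"theme": "dark", "recent_files": []})
# The application is later upgraded or its local profile is corrupted...
print(restore_settings("student42", "clinical_app"))  # settings survive intact
```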

Landon Winburn, a software systems specialist at the University of Texas Medical Branch in Galveston, Texas, uses AppSense to virtualize user-account profiles for his 3,000 students. Winburn says the university used to manage user settings manually, taking about 40 to 60 calls per week related to log-ins. The university also had five to 10 corruptions per day related to user settings.

"Before AppSense, the only solution for a corrupt profile was to delete the profile and have the user start again from scratch for all applications," says Winburn.

But now, with AppSense's ability to restore these settings, the university doesn't have to directly address the problems, since they are handled automatically. By virtualizing accounts, the university was also able to increase the number of user profiles per XenApp server from 40 to about 80.

John Brandon is a veteran of the computing industry, having worked as an IT manager for 10 years and a tech journalist for another 10. He has written more than 2,500 feature articles and is a regular contributor to Computerworld.


Underground Secure Data Center Operations

Technology companies are building new data centers in old mines, caves and bunkers to host computer equipment below the Earth's surface.

Underground secure data center operations are on an upward trend.

Operations have launched in inactive gypsum mines, caves, abandoned coal and limestone mines positioned deep below the bedrock, and decommissioned nuclear bunkers far underground, secure from disasters both natural and man-made.

These facilities have advantages over traditional data centers, such as increased security, lower cost, scalability and ideal environmental conditions. Their economic model works, despite the proliferation of data center providers, thanks largely to the natural qualities inherent in underground sites.

With anywhere from 10,000 to over 1,000,000 square feet available, there is ample space to subdivide to accommodate the growth needs of clients. In addition, underground data centers have an unlimited supply of naturally cool, 50-degree air, providing the ideal temperature and humidity for computer equipment with minimal HVAC cost.

They are the most secure data centers in the world and unparalleled in terms of square footage, scalability and environmental control.

Yet, while the physical and cost benefits of being underground make them attractive, operators have also had to invest heavily in high-speed connectivity and redundant power and fiber systems to ensure their operations are not just secure, but also state-of-the-art.

They initially focused on providing disaster recovery solutions and backup co-location services.

Clients lease space for their own servers, while the operator provides secure facilities, power and bandwidth. Operators offer redundant power sources and multiple high-speed Internet connections over OC-level circuits connected to a SONET ring that links to outside connectivity providers through redundant fiber cables.

Underground data center companies augment their core services to include disaster recovery solutions, call centers, NOC services, wireless connectivity and more.

Strategic partnerships with national and international information technology companies enable them to offer technology solutions ranging from system design and implementation to the sale of software and equipment.

The natural qualities of underground data centers allow them to offer the best of both worlds: premier services and security at highly competitive rates.

Underground data centers were first established in the 1990s but really came into their own after the September 11 attacks in 2001, when their founders realized that former mines and bunkers offered optimal conditions for a data center: superior environmental conditions for electronic equipment, nearly invulnerable security and proximity to power grids.

Adam Couture, a Massachusetts-based analyst for Gartner Inc., said underground data centers could find a niche serving businesses that want to reduce their vulnerability to any future attacks. Some operators' fact sheets claim the underground setting would protect the data center from a cruise missile explosion or plane crash.

After the September 11 attacks in 2001, companies went back and re-evaluated their business-continuity plans. That doesn't mean everybody changed them, but everybody revisited them in the wake of what happened, and the underground data center may be just the option they are looking for.

Comparison chart: Underground data centers

Five facilities compared
InfoBunker, LLC
Location: Des Moines, Iowa*
In business since: 2006
Security/access control: Biometric; keypad; pan, tilt and zoom cameras; door event and camera logging
Distance underground: 50 feet
Ceiling height in data center space: 16 feet
Original use: Military communications bunker
Total data center space: 34,000 square feet
Total space in facility: 65,000 square feet
Data center clients include: Insurance company, telephone company, teaching hospital, financial services, e-commerce, security monitoring/surveillance, veterinary, county government
Number of hosted primary or backup data centers: 2
Services offered: Leased data center space, disaster recovery space, wholesale bandwidth
Distance from nearest large city: Des Moines, about 45 miles*
Location of cooling system, including cooling towers: Underground
Location of generators and fuel tanks: Underground

The Bunker
Location: Dover, UK
In business since: 1999
Security/access control: CCTV, dogs, guards, fence
Distance underground: 100 feet
Ceiling height in data center space: 12 to 50 feet
Original use: Royal Air Force military bunker
Total data center space: 50,000 square feet
Total space in facility: 60,000 square feet
Data center clients include: Banking, mission-critical Web applications, online trading
Number of hosted primary or backup data centers: 50+
Services offered: Fully managed platforms, partly managed platforms, co-location
Distance from nearest large city: Canterbury, 10 miles; London, 60 miles
Location of cooling system, including cooling towers: Underground
Location of generators and fuel tanks: Above ground and below ground

Montgomery Westland
Location: Montgomery, Tex.
In business since: 2007
Security/access control: Gated, with access control card, biometrics and a 24x7 security guard
Distance underground: 60 feet
Ceiling height in data center space: 10 feet
Original use: Private bunker designed to survive a nuclear attack. Complex built in 1982 by Louis Kung (nephew of Madame Chiang Kai-shek) as a residence and headquarters for his oil company, including a secret, 40,000-square-foot nuclear fallout shelter. The office building uses bulletproof glass on the first floor and reception area and 3-inch concrete walls with fold-down steel gun ports to protect the bunker 60 feet below.
Total data center space: 28,000 square feet, plus 90,000 square feet of office space in a hardened, above-ground building
Total space in facility: 28,000 square feet
Data center clients include: NASA/T-Systems, Aker Solutions, Continental Airlines, Houston Chronicle, ExpressJet
Number of hosted primary or backup data centers: 13
Services offered: Disaster recovery/business continuity, co-location and managed services
Distance from nearest large city: Houston, 40 miles
Location of cooling system, including cooling towers: Above and below ground. All cooling towers above ground in secure facility.
Location of generators and fuel tanks: Two below ground, four above ground. All fuel tanks buried topside.

Cavern Technologies
Location: Lenexa, Kan.
In business since: 2007
Security/access control: Security guard, biometric scan, smart card access and motion detection alarms
Distance underground: 125 feet
Ceiling height in data center space: 16 to 18 feet
Original use: Limestone mine originally developed by an asphalt company that used the materials in road pavement
Total data center space: 40,000 square feet
Total space in facility: 3 million square feet
Data center clients include: Healthcare, insurance, universities, technology, manufacturing, professional services
Number of hosted primary or backup data centers: 26
Services offered: Data center space leasing, design, construction and management
Distance from nearest large city: Kansas City, 15 miles
Location of cooling system, including cooling towers: Air-cooled systems located underground. Cooling towers located outside.
Location of generators and fuel tanks: Underground

Iron Mountain The Underground
Location: Butler County, Penn.*
In business since: Opened by National Storage in 1954; acquired by Iron Mountain in 1998
Security/access control: 24-hour armed guards, visitor escorts, magnetometer, x-ray scanner, closed-circuit television, badge access and other physical and electronic measures for securing the mine's perimeter and vaults
Distance underground: 220 feet
Ceiling height in data center space: 15 feet (10 feet from raised floor to dropped ceiling)
Original use: Limestone mine
Total data center space: 60,000 square feet
Total space in facility: 145 acres developed; 1,000 acres total
Data center clients include: Marriott International Inc., Iron Mountain, three U.S. government agencies
Number of hosted primary or backup data centers: 5
Services offered: Data center leasing, design, construction and maintenance services
Distance from nearest large city: Pittsburgh, 55 miles
Location of cooling system, including cooling towers: Chillers located above ground to take advantage of "free cooling." Pumps located underground.
Location of generators and fuel tanks: Underground

*Declined to cite exact location/distance for security reasons.