
Friday, March 21, 2014



Some things to consider when running a data center

As businesses go increasingly digital, the need for data centers to secure company information is more important than ever before. It is not only tech giants like Google and Facebook that need a place to house their information - businesses across the healthcare, government and industrial sectors are looking to data centers as a solution to their storage needs. But running a data center is not something that can be done impulsively. Whether your company has the funds and scale of operations to occupy its own center or ends up looking into existing facilities, here are some important considerations to keep in mind to maximize an enterprise data center operation.

Consider renewable energy solutions
 
In Hollywood movies, data centers are generally represented as massive, noise-intensive operations that actively drain energy out of whatever area they are occupying. This public perception of such facilities is understandable given that data centers must rely on a constant supply of energy - after all, their functionality depends on remaining active at all times. But just because they harness energy sources does not mean data centers can't function in an environmentally-minded, sustainable way.
Just ask Google, a company that has been dealing with data storage needs ever since it rented its first data storage facility - a closet-sized, 7 foot by 4 foot operation with a mere 30 computers - in 1998, according to CNET. Google has come a long way since then, and so has its dedication to sustainable methods of data center operation. The tech giant now has a vast network of data centers spanning the globe.
What unites Google's facilities is a singular commitment to renewable energy. With renewable energy currently powering more than a third of Google's data storage facilities, the company is always looking for ways to expand the use of solar and wind power, according to its site. Because it is challenging to have a renewable power generator on location, the company did the next best thing: It reached out to renewable energy providers in the area - such as wind farms - and made deals to buy energy from them. Among Google's energy suppliers are wind farms in Sweden and Oklahoma. Through these sources, the company is not only able to maintain solid data room cooling practices but also to benefit the local communities around its facilities.

Have good outside air cooling

When it comes to maintaining an optimal data room temperature, it's best to follow the lead of companies well-versed in data storage. Google and Microsoft are two such businesses, and they both share a commitment to harnessing natural resources to keep their data centers cool.
In Dublin, Microsoft has invested more than $800 million to date in order to build a data center that covers almost 600,000 square feet. The enormous size of the facility would seem to present a major cooling challenge, but the company has been able to surmount that by using fresh air cooling, Data Center Knowledge reported. By building the center in Ireland, where the temperature remains optimal for data room cooling, Microsoft is able to maximize the location as a natural cooling solution - a move that saves significant energy costs while keeping the company environmentally friendly as well. And its commitment to environmentally sound solutions does not end with cooling: the center also recycles 99 percent of the waste it produces.
Google has a similarly cooling-minded approach with its data facility in Finland, which it hopes will be almost completely powered by wind energy by 2015, according to Data Center Knowledge. The wind energy will come from a wind park located nearby. But the center is not waiting until then to implement good temperature practice. Instead of relying on chillers and other machine-based cooling techniques, Google relies on seawater from the nearby Gulf of Finland to cool the facility. Its efforts in Finland are part of a broader push to expand the company's global footprint.
"The Google data center in Hamina offers Eastern Finland a tremendous opportunity to jump from the industrial to digital age," said Will Cardwell, a professor at a nearby university.

But just as important as what goes on inside a center is the environment around it. That is because data centers are invariably shaped by the physical environment in which they are located. With that in mind, here are some more things to look into in order to get the most out of a data center.

Choose the location wisely

Considering that data centers are necessarily connected to the physical environment they inhabit, it is important to pinpoint the best location possible. Data centers are always going to require top-notch capabilities to maintain a good server room temperature, but the ease with which that happens can depend on the location of the center. As always, Google is at the top of the game with regard to location selection. Its Hamina, Finland center is strategically placed near the Gulf of Finland, enabling an easy and natural data room cooling solution.
But Google is not the only company maximizing natural environments for data center growth. Iron Mountain specializes in underground data center solutions, according to Data Center Knowledge. Formerly a storage company for physical records, Iron Mountain already had a 145-acre underground storage facility in a former limestone mine before it got into the data center business. This location turned out to be perfect for data center needs. Blocked from the sunlight and other external heat sources, the underground facility stays at about 52 degrees without any kind of additional cooling function. An underground lake provides further protection against ever needing to bring in a machine cooling system. The company's so-called "data bunker" gained so much popularity that Iron Mountain decided to expand its sphere of operations.

Give back to the community the center is in

Data centers often require a big fleet of staff to operate. Fortunately, they're usually built near communities from which workers can be hired. But as much as data centers plan to benefit from the community they inhabit, it is just as important to look for ways to give back. This kind of behavior encourages connectedness with the community and improves the reputation of the center - and therefore the company - in the public eye.
Google paid special attention to the local community as it developed its Hamina center. When it began mapping out the concept for the center, Google realized that construction would take about 18 months, so it turned to the locals for help. In the process, it provided steady employment for 800 workers in the engineering and construction sectors, according to Data Center Knowledge. Google's willingness to involve locals in the construction process helped forge a lasting bond between the tech giant and the city.
This bond did not go unnoticed.
"Google's investment decision is important for us and we welcome it warmly," Finnish president Jyrki Katainen said.
And for those who work at the center, life is good.
"No two days are the same as we change our roles around frequently to keep things fresh and new," said Julian Cooper, a hardware operations worker at the facility.

Be prepared to surmount environmental obstacles

In the event of a disaster like a hurricane or earthquake, it is vitally important for all enterprises - especially data centers - to make sure their data and equipment are safe. Iron Mountain understands the principle of environmental preparedness quite well, which is why it offers underground data storage solutions. By storing data underground, Iron Mountain protects it against most conceivable natural disasters. This nature-proof construction is especially important for companies like Marriott, which chose to house data at the Iron Mountain bunker because of the sense of complete security it afforded.
"We have always had a rigorous and constant focus on having disaster preparedness in place," said Marriott operational vice president Dan Blanchard. "Today we have a data center that provides Marriott with a tremendous capability for disaster recovery, and we have a great partner in Iron Mountain."
According to tech journalist David Geer, earthquakes pose a huge threat to data centers in many areas around the world, since they can be difficult to predict and potentially cause large-scale damage. If a company intends to build its facility in an area susceptible to earthquakes, it should apply the most stringent safeguards, including building a center capable of withstanding a quake one level above what is required for the seismic zone it occupies.

Monday, August 2, 2010

Innovations for the Data Center

6 Cool Innovations for the Data Center


1. Fiber optics with a twist

The success of the HDMI cable in consumer electronics has proved that having a common cable that works with Blu-ray players, HDTV sets and just about any set-top box helps remove clutter and confusion. Intel has developed Light Peak following this same approach. It's fiber-optic cable that will first be used with laptop and desktop computers to reduce clutter and to speed transmission, but it could also make its way to the data center as a way to connect servers and switches.

The 3.2mm cable, which is about as thin as a USB cable, can be up to 100 feet long. Intel has designed a controller that will sit inside a computer, and cables are currently in production. Third parties, including Hewlett-Packard and Dell, will start making computers with Light Peak fiber-optic cables in them by 2011, according to Intel.

For data centers, Light Peak presents a few interesting possibilities. Fiber optics have been in the data center since the early 1990s, when IBM introduced its Escon (Enterprise Systems Connection) product line; it connects mainframes at 200Mbit/sec. Light Peak differs in that it runs at 10Gbit/sec., and Intel claims that the components will be less expensive and lighter-weight than existing fiber-optic products.
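To put those two speeds in perspective, here is a quick back-of-the-envelope comparison in Python, using only the 200Mbit/sec. and 10Gbit/sec. figures quoted above; the 100GB payload is an arbitrary example, and protocol overhead is ignored.

# Rough transfer-time comparison using the link speeds quoted in the article.
# The 100 GB payload is an arbitrary example; real throughput would be lower
# once protocol overhead and encoding are accounted for.
ESCON_BPS = 200e6        # IBM Escon, roughly 200 Mbit/sec
LIGHT_PEAK_BPS = 10e9    # Intel Light Peak, 10 Gbit/sec

def transfer_minutes(payload_gigabytes, link_bps):
    payload_bits = payload_gigabytes * 8e9
    return payload_bits / link_bps / 60

print("Escon:      %.1f minutes" % transfer_minutes(100, ESCON_BPS))       # ~66.7
print("Light Peak: %.1f minutes" % transfer_minutes(100, LIGHT_PEAK_BPS))  # ~1.3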

"Intel claims Light Peak will be less complex and easier to manage by eliminating unnecessary ports, and deliver the higher throughput required by high performance e-SATA and DisplayPort systems," says Charles King, an analyst at Pund-IT in Concord, Mass. "If the company delivers on these promises, Light Peak could simplify life for data center managers plagued by installing, managing and troubleshooting miles of unruly optical cables."

Success here will depend on "how willingly developers and vendors" embrace Light Peak and build products around it, King explains.

2. Submerged liquid cooling and horizontal racks

Liquid cooling for data centers is not a new concept, of course, but Green Revolution Cooling has added a new twist. For starters, the rack is turned on its side, which helps with cable management and makes it easier for administrators to access equipment; the horizontal rack is then submerged in liquid. A new coolant, called GreenDEF, is made from mineral oil that is nontoxic, costs less than other liquid-cooling methods and, unlike water, is not electrically conductive, according to a GR Cooling spokesman.

"The liquid actually moves through the floor and circulates up through all of the computing nodes," says Tommy Minyard, director of advanced computing systems at the Texas Advanced Computing Center, part of the University of Texas at Austin. This means more-effective cooling because heat is moved away from the processors via cables on the sides and under the rack, he explains. Minyard is installing GR Cooling systems in his data center and expects a 30% to 40% savings compared to traditional air-cooled systems.

Green Revolution uses horizontal devices for racks, along with a new type of coolant, to reduce energy costs in a data center.

Minyard says liquid cooling has made a rebound lately, recalling the days when Cray offered submerged cooling systems, and he notes that even IBM is moving back to chilled-liquid cooling for some compute nodes.

Pund-IT's King says a major issue is that enterprises have fought the return of liquid cooling in the data center because of the high costs of implementing the technology and because it is unproven as a widespread option.

"Liquid cooling usually costs much more to install upfront than air cooling," says Mark Tlapak, GR Cooling's co-founder. "Compared to air, every liquid cooling system has some added nuance, such as electric conductivity with water-based cooling systems. " But, he says, "spring a leak in the water systems, and you lose electrical equipment." Still, for Minyard, GR Cooling is an ideal fit: His data center gravitates toward dense, powerful systems that pack intense power into small spaces, such as IBM blade servers and the latest Intel processors. The Ranger supercomputer, for example, uses 30kw of power per rack.

3. Several broadband lines combined into one

Enterprises can spend many thousands of dollars on fiber-optic lines and multiple T1 connections, but at least one emerging technology is aiming to provide a lower-cost alternative.

Mushroom Networks' Truffle Broadband Bonding Network Appliance creates one fast connection out of up to six separate lines, a technique known as bonding. The Truffle combines the bandwidth of all available broadband lines into one giant pipe, with download speeds of up to 50Mbit/sec., the company says. Internet access may be through a DSL modem, cable modem, T1 line or just about any broadband connection.

This helps increase overall throughput, and acts as a backup mechanism, too. If one of the "bonded" lines fails, the Truffle connection just keeps running with the other available lines.
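Conceptually, the bonding and failover behavior looks something like the sketch below; the BondedLink class, line names and capacities are hypothetical illustrations, not Mushroom Networks' actual software or API.

from itertools import cycle

class BondedLink:
    """Toy round-robin bonder: several broadband lines act as one pipe,
    and traffic keeps flowing over the survivors if a line fails."""

    def __init__(self, lines):
        self.lines = dict(lines)   # name -> capacity in Mbit/sec
        self.up = set(lines)

    def capacity_mbps(self):
        return sum(cap for name, cap in self.lines.items() if name in self.up)

    def mark_down(self, name):
        self.up.discard(name)      # a failed line is simply skipped

    def send(self, packets):
        """Spread packets across whichever lines are currently up."""
        active = [name for name in self.lines if name in self.up]
        if not active:
            raise RuntimeError("no broadband lines available")
        assignment = {}
        for packet, line in zip(packets, cycle(active)):
            assignment.setdefault(line, []).append(packet)
        return assignment

bond = BondedLink({"dsl1": 6, "dsl2": 6, "cable": 20, "t1": 1.5})
print(bond.capacity_mbps())   # 33.5 Mbit/sec aggregate
bond.mark_down("cable")       # one bonded line fails...
print(bond.capacity_mbps())   # ...and the remaining 13.5 Mbit/sec keeps running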

Steve Finn, a television producer in Kenya, uses Mushroom Networks' appliance for a program called Africa Challenge that is broadcast to eight African countries. He relies heavily on broadband to produce the content and at one time paid as much as $4,000 per month for connectivity. Speeds vary depending on latency and traffic, but he says the bonded speed is generally about four times faster (four lines times the speed of each individual line), at about half the cost of one equivalent high-speed line.

Frank J. Bernhard, an analyst at Omni Consulting Group, says Mushroom Networks fills a niche for companies that do not want to pay the exorbitant fees for multiple T1 or T3 connections but still need reliable and fast Internet access. Other companies, including Cisco Systems, offer similar bonding technology, but at a greater cost and with more complexity at install, which means the technique has not yet been widely used.

4. Multiple data centers more easily connected

In a very large enterprise, the process of connecting multiple data centers can be a bit mind-boggling. There are security concerns, Ethernet transport issues, operational problems related to maintaining the fastest speed between switches at branch sites, and new disaster planning considerations due to IT operations running in multiple locations.

Cisco's new Overlay Transport Virtualization, or OTV, connects multiple data centers in a way that seems really easy compared with the roll-your-own process most shops have traditionally used. Essentially a transport technology for Layer 2 networking, the software updates network switches, including the Cisco Nexus 7000, to connect data centers in different geographic locations.

The OTV software costs about $25,000 per license and uses the maximum bandwidth and connections already established between data centers.

There are other approaches for linking multiple data centers, a Cisco technical spokesman acknowledges, including those involving Multiprotocol Label Switching (MPLS) or, before that, frame-relay and Asynchronous Transfer Mode protocols.

But unlike some of the older approaches, the spokesman explains, Cisco OTV does not require any network redesign or special services in the core, such as label switching. OTV is simply overlaid onto the existing network, inheriting all the benefits of a well-designed IP network while maintaining the independence of the Layer 2 data centers being interconnected.
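The general idea of such an overlay, Ethernet frames carried inside IP packets between sites so that two locations behave like one Layer 2 domain, can be sketched as follows. This is a conceptual illustration only; the class names, MAC table and addresses are made up and do not represent Cisco's implementation or configuration.

from dataclasses import dataclass

@dataclass
class EthernetFrame:
    src_mac: str
    dst_mac: str
    vlan: int
    payload: bytes

@dataclass
class OverlayPacket:
    """A Layer 2 frame tunneled between data centers over the existing IP core."""
    src_site_ip: str
    dst_site_ip: str
    inner_frame: EthernetFrame

# Hypothetical table mapping MAC addresses to the site that currently hosts them;
# an overlay edge device learns this so it knows where to tunnel each frame.
mac_to_site = {
    "00:50:56:aa:bb:cc": "203.0.113.10",   # host currently in data center A
    "00:50:56:dd:ee:ff": "198.51.100.20",  # host currently in data center B
}

def encapsulate(frame, local_site_ip):
    """Wrap a LAN frame destined for a remote site inside an IP packet."""
    return OverlayPacket(local_site_ip, mac_to_site[frame.dst_mac], frame)

frame = EthernetFrame("00:50:56:dd:ee:ff", "00:50:56:aa:bb:cc", vlan=100, payload=b"hello")
print(encapsulate(frame, local_site_ip="198.51.100.20"))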

Terremark, a cloud service provider based in Miami, uses Cisco OTV to link 13 data centers in the U.S., Europe and Latin America. The company says there is a significant savings compared with taking a "do-it-yourself" approach to linking data centers, due to reduced complexity and OTV's automated fail-over system that helps multiple data centers act as one if disaster strikes.

"Implementing the ability to balance loads and/or enact emergency fail-over operations between data centers traditionally involved a dedicated network and complex software," says Norm Laudermilch, Terremark's senior vice president of infrastructure. "With Cisco OTV, Ethernet traffic from one physical location is simply encapsulated and tunneled to another location to create one logical data center."

Virtual machines from one location can now use VMware's VMotion, for instance, to automatically move to another physical location in the event of a failure.

5. Priority-based e-mail storage

Communication is what drives a business, but too often the bits and bytes of an e-mail exchange are treated in the data center as just another data set that needs to be archived. Messagemind automatically determines which e-mails can be safely archived on lower-cost systems.

The tool analyzes all company communication -- tracking which messages end users read, delete or save -- and then groups them according to priority level.

Data center administrators can use that information to store e-mail based on priority level, which in turn can save money. For example, instead of storing all e-mails in one high-cost archive, messages marked as low priority -- based again on the end user's clicking behavior -- can be stored in lower-cost storage systems. High-priority e-mail can be stored on higher-performance, and higher-cost, media.
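As a rough sketch of how behavior-based tiering could work, the following assigns each message to a storage tier from simple read/save/delete counters; the field names and thresholds are assumptions for illustration, not Messagemind's actual scoring.

def storage_tier(message):
    """Map simple user-behavior signals to a storage tier (illustrative thresholds)."""
    if message["saved"] or message["read_count"] >= 3:
        return "tier1-high-performance"   # mail the user keeps coming back to
    if message["read_count"] >= 1 and not message["deleted"]:
        return "tier2-standard"
    return "tier3-low-cost-archive"       # unread or deleted mail

mailbox = [
    {"id": 1, "read_count": 5, "saved": True,  "deleted": False},
    {"id": 2, "read_count": 1, "saved": False, "deleted": False},
    {"id": 3, "read_count": 0, "saved": False, "deleted": True},
]

for msg in mailbox:
    print(msg["id"], storage_tier(msg))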

That same behind-the-scenes analysis can be used outside the data center, rolled up into a dashboard that managers and end users can view to help them on projects. For example, business units can view e-mail diagrams that show who is communicating effectively on a project and who seems to be lagging behind and rarely contributing.

Pund-IT's King says Messagemind is an intriguing prospect because e-mail has become such a wasteland of broken conversations and disconnected project discussions. Managing e-mail becomes even more painful if a company is subject to litigation, and e-mail becomes part of the legal discovery process.

"Even the best e-mail solutions require employees to manage their messages," says King. "If it works as advertised, I could see this catching hold in enterprises. By managing e-mail more effectively -- and automatically -- Messagemind's solution could take a great deal of weight off the shoulders of data center admins struggling under ever-increasing volumes of stored messages."

6. User accounts virtualized for easier restoration

Virtualization has become the buzzword of the past decade, but it usually involves abstracting an operating system from a server or data from your storage allocations. AppSense is virtualization software for user accounts. It extracts user profile settings from Windows applications and maintains them separately. That means that if an application is updated or changed, the user information is still available. If user settings are corrupted or lost, administrators can restore the settings with a minimum of bother.
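The underlying pattern, keeping user settings in a store that lives apart from the application so they can be restored after an upgrade or a corruption, can be sketched like this; the snapshot and restore helpers are hypothetical and are not AppSense's API.

import copy

profile_store = {}   # user settings kept separately from the application image

def snapshot(user, settings):
    """Capture the user's current application settings."""
    profile_store[user] = copy.deepcopy(settings)

def restore(user):
    """Return the last known-good settings for the user."""
    return copy.deepcopy(profile_store[user])

settings = {"signature": "Landon", "toolbar": ["new", "reply"], "spellcheck": True}
snapshot("lwinburn", settings)

settings.clear()                 # simulate a corrupted local profile
settings = restore("lwinburn")   # recovered without starting from scratch
print(settings)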

Landon Winburn, a software systems specialist at the University of Texas Medical Branch in Galveston, Texas, uses AppSense to virtualize user-account profiles for his 3,000 students. Winburn says the university used to manage user settings manually, taking about 40 to 60 calls per week related to log-ins. The university also had five to 10 corruptions per day related to user settings.

"Before AppSense, the only solution for a corrupt profile was to delete the profile and have the user start again from scratch for all applications," says Winburn.

But now, with AppSense's ability to restore these settings, the university doesn't have to directly address the problems, since they are handled automatically. By virtualizing accounts, the university could also increase the number of XenApp Server accounts from 40 user profiles per server to about 80.

John Brandon is a veteran of the computing industry, having worked as an IT manager for 10 years and a tech journalist for another 10. He has written more than 2,500 feature articles and is a regular contributor to Computerworld.

Monday, April 5, 2010

Why data center temperatures have moderated

Computerworld – Industrial Light & Magic has been replacing its servers with the hottest new IBM BladeCenters — literally, the hottest.

For every new rack ILM brings in, it cuts overall power use in the data center by 140 kW, an 84% drop relative to the seven older racks that each new rack replaces.

But power density in the new racks is much higher: Each consumes 28 kW of electricity, versus 24 kW for the previous generation. Every watt of power consumed is transformed into heat that must be removed from each rack — and from the data center.

The new racks are equipped with 84 server blades, each with two quad-core processors and 32GB of RAM. They are powerful enough to displace seven racks of older BladeCenter servers that the special effects company purchased about three years ago for its image-processing farm.
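Those figures are internally consistent, as a quick check with the per-rack numbers quoted in this article shows.

# Consistency check using the per-rack figures quoted above.
old_rack_kw = 24        # previous-generation BladeCenter rack
new_rack_kw = 28        # new BladeCenter rack
racks_displaced = 7     # older racks retired per new rack

old_total_kw = old_rack_kw * racks_displaced    # 168 kW before consolidation
savings_kw = old_total_kw - new_rack_kw         # 140 kW, as stated above
savings_pct = 100 * savings_kw / old_total_kw   # ~83%, matching the quoted 84% drop

print("%d kW saved per new rack (%.0f%% drop)" % (savings_kw, savings_pct))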

To cool each 42U rack, ILM’s air conditioning system must remove more heat than would be produced by nine household ovens running at the highest temperature setting. This is the power density of the new infrastructure that ILM is slowly building out across its raised floor.

These days, most new data centers have been designed to support an average density of 100 to 200 watts per square foot, and the typical cabinet is about 4 kW, says Peter Gross, vice president and general manager of HP Critical Facilities Services. A data center designed for 200 W per square foot can support an average rack density of about 5 kW. With carefully engineered airflow optimizations, a room air conditioning system can support some racks at up to 25 kW, he says.
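How watts per square foot translates into average rack power depends on how much total floor area, aisles and support space included, each cabinet effectively occupies. A rough sketch assuming about 25 square feet per cabinet (an assumption for illustration, not a figure from the article) reproduces the numbers Gross cites.

def avg_rack_kw(watts_per_sqft, sqft_per_cabinet=25):
    """Average rack power supportable at a given design density.
    The 25 sq ft per cabinet is an illustrative assumption."""
    return watts_per_sqft * sqft_per_cabinet / 1000

for density in (100, 160, 200):
    print("%d W/sq ft -> about %.1f kW per rack" % (density, avg_rack_kw(density)))
# 200 W/sq ft works out to roughly 5 kW per rack, in line with the figure cited above.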

At 28 kW per rack, ILM is at the upper limit of what can be cooled with today’s computer room air conditioning systems, says Roger Schmidt, IBM fellow and chief engineer for data center efficiency. “You’re hitting the extreme at 30 kW. It would be a struggle to go a whole lot further,” he says.


The sustainability question

The question is, what happens next? “In the future are watts going up so high that clients can’t put that box anywhere in their data centers and cope with the power and cooling? We’re wrestling with that now,” Schmidt says. The future of high-density computing beyond 30 kW will have to rely on water-based cooling, he says. But data center economics may make it cheaper for many organizations to spread out servers rather than concentrate them in racks with ever-higher energy densities, other experts say.

Energy-efficiency tips

Refresh your servers. Each new generation of servers delivers more processing power per square foot — and per unit of power consumed. For every new BladeCenter rack Industrial Light & Magic is installing, it has been able to retire seven racks of older blade technology. Total power savings: 140 kW.

Charge users for power, not just space. “You can be more efficient if you’re getting a power consumption model along with square-footage cost,” says Ian Patterson, CIO at Scottrade.

Use hot aisle/cold aisle designs. Good designs, including careful placement of perforated tiles to focus airflows, can help data centers keep cabinets cooler and turn the thermostat up.

Kevin Clark, director of information technologies at ILM, likes the gains in processing power and energy efficiency he has achieved with the new BladeCenters, which have followed industry trends to deliver more bang for the buck. According to IDC, the average server price since 2004 has dropped 18%, while the cost per core has dropped by 70%, to $715. But Clark wonders whether doubling compute density again, as he has in the past, is sustainable. “If you double the density on our current infrastructure, from a cooling perspective, it’s going to be difficult to manage,” he says.

Underground Secure Data Center Operations

Technology-based companies are building new data centers in old mines, caves and bunkers to host computer equipment below the Earth's surface.

Underground secure data center operations are on an upward trend.

Operations have launched in inactive gypsum mines, caves, abandoned coal and solid limestone mines positioned deep below the bedrock, and decommissioned nuclear bunkers buried far underground, all of them secure from disasters both natural and man-made.

These facilities have advantages over traditional data centers, such as increased security, lower cost, scalability and ideal environmental conditions. Their economic model works, despite the proliferation of data center providers, thanks largely to the natural qualities inherent in underground sites.

With anywhere from 10,000 to over 1,000,000 square feet available, there is plenty of space to be subdivided to accommodate clients' growth. In addition, underground data centers have a virtually unlimited supply of naturally cool, 50-degree air, providing ideal temperature and humidity for computer equipment at minimal HVAC cost.

They are among the most secure data centers in the world and are unusual in terms of square footage, scalability and environmental control.

Yet while the physical and cost benefits of being underground make them attractive, operators have also had to invest heavily in high-speed connectivity and redundant power and fiber systems to ensure their operations are not just secure but also state-of-the-art.

They initially focused on providing disaster recovery solutions and backup co-location services.

Some clients lease space for their own servers, while the operator provides secure facilities, power and bandwidth. Operators offer redundant power sources and multiple high-speed Internet connections delivered over OC (optical carrier) circuits connected to a SONET ring, which is linked to outside connectivity providers through redundant fiber cables.

Underground data center companies augment their core services to include disaster recovery solutions, call centers, network operations center (NOC) services, wireless connectivity and more.

Strategic partnerships with international and national information technology companies enable them to offer technology solutions ranging from system design and implementation to the sale of software and equipment.

The natural qualities of underground data centers allow them to offer the best of both worlds: premier services and security at highly competitive rates.

Underground data centers began to be established in the 1990s, but they really came into their own after the September 11 attacks in 2001, when their founders realized that former mines and bunkers offered optimal conditions for a data center: superior environmental conditions for electronic equipment, nearly invulnerable security and locations close to power grids.

Adam Couture, a Massachusetts-based analyst for Gartner Inc., said underground data centers could find a niche serving businesses that want to reduce their vulnerability to any future attacks. Some underground data center fact sheets claim the facilities would protect equipment from a cruise missile explosion or plane crash.

After the September 11 attacks in 2001, companies went back and re-evaluated their business-continuity plans. That does not mean everybody changed them, but everybody revisited them in the wake of what happened, and for some the underground data center may be the answer.

Comparison chart: Underground data centers

Five facilities compared
InfoBunker, LLC
Location: Des Moines, Iowa*
In business since: 2006
Security/access control: Biometric; keypad; pan, tilt and zoom cameras; door event and camera logging
Distance underground: 50 feet
Ceiling height in data center space: 16 feet
Original use: Military communications bunker
Total data center space: 34,000 square feet
Total space in facility: 65,000 square feet
Data center clients include: Insurance company, telephone company, teaching hospital, financial services, e-commerce, security monitoring/surveillance, veterinary, county government
Number of hosted primary or backup data centers: 2
Services offered: Leased data center space, disaster recovery space, wholesale bandwidth
Distance from nearest large city: Des Moines, about 45 miles*
Location of cooling system, including cooling towers: Underground
Location of generators and fuel tanks: Underground

The Bunker
Location: Dover, UK
In business since: 1999
Security/access control: CCTV, dogs, guards, fence
Distance underground: 100 feet
Ceiling height in data center space: 12 to 50 feet
Original use: Royal Air Force military bunker
Total data center space: 50,000 square feet
Total space in facility: 60,000 square feet
Data center clients include: Banking, mission-critical Web applications, online trading
Number of hosted primary or backup data centers: 50+
Services offered: Fully managed platforms, partly managed platforms, co-location
Distance from nearest large city: Canterbury, 10 miles; London, 60 miles
Location of cooling system, including cooling towers: Underground
Location of generators and fuel tanks: Above ground and below ground

Montgomery Westland
Location: Montgomery, Tex.
In business since: 2007
Security/access control: Gated, with access control card, biometrics and a 24x7 security guard
Distance underground: 60 feet
Ceiling height in data center space: 10 feet
Original use: Private bunker designed to survive a nuclear attack. The complex was built in 1982 by Louis Kung (nephew of Madame Chiang Kai-shek) as a residence and headquarters for his oil company, including a secret, 40,000-square-foot nuclear fallout shelter. The office building uses bulletproof glass on the first floor and reception area, and 3-inch concrete walls with fold-down steel gun ports protect the bunker 60 feet below.
Total data center space: 28,000 square feet, plus 90,000 square feet of office space in a hardened, above-ground building
Total space in facility: 28,000 square feet
Data center clients include: NASA/T-Systems, Aker Solutions, Continental Airlines, Houston Chronicle, ExpressJet
Number of hosted primary or backup data centers: 13
Services offered: Disaster recovery/business continuity, co-location and managed services
Distance from nearest large city: Houston, 40 miles
Location of cooling system, including cooling towers: Above and below ground. All cooling towers above ground in a secure facility.
Location of generators and fuel tanks: Two generators below ground, four above ground. All fuel tanks buried topside.

Cavern Technologies
Location: Lenexa, Kan.
In business since: 2007
Security/access control: Security guard, biometric scan, smart card access and motion detection alarms
Distance underground: 125 feet
Ceiling height in data center space: 16 to 18 feet
Original use: Limestone mine originally developed by an asphalt company that used the materials in road pavement
Total data center space: 40,000 square feet
Total space in facility: 3 million square feet
Data center clients include: Healthcare, insurance, universities, technology, manufacturing, professional services
Number of hosted primary or backup data centers: 26
Services offered: Data center space leasing, design, construction and management
Distance from nearest large city: Kansas City, 15 miles
Location of cooling system, including cooling towers: Air-cooled systems located underground; cooling towers located outside
Location of generators and fuel tanks: Underground

Iron Mountain The Underground
Location: Butler County, Penn.*
In business since: Opened by National Storage in 1954; acquired by Iron Mountain in 1998
Security/access control: 24-hour armed guards, visitor escorts, magnetometer, x-ray scanner, closed-circuit television, badge access and other physical and electronic measures for securing the mine's perimeter and vaults
Distance underground: 220 feet
Ceiling height in data center space: 15 feet (10 feet from raised floor to dropped ceiling)
Original use: Limestone mine
Total data center space: 60,000 square feet
Total space in facility: 145 acres developed; 1,000 acres total
Data center clients include: Marriott International Inc., Iron Mountain, three U.S. government agencies
Number of hosted primary or backup data centers: 5
Services offered: Data center leasing, design, construction and maintenance services
Distance from nearest large city: Pittsburgh, 55 miles
Location of cooling system, including cooling towers: Chillers located above ground to take advantage of "free cooling"; pumps located underground
Location of generators and fuel tanks: Underground

*Declined to cite exact location/distance for security reasons.