By Harry Mok
Thousands of servers — the backbone that supports computer networks and computer-aided research — are hosted at University of California campuses and medical centers, but advances in technology have largely eliminated the need to physically locate these machines near their users.
To varying degrees, UC campuses and medical centers have centralized their servers to cut costs and energy use and make computing more efficient. Now, the 10-campus system is taking this strategy systemwide to conserve even more energy and help UC reach its ambitious goals for cutting carbon emissions.
The pilot project will move servers from campuses to the San Diego Supercomputer Center, where 225 racks of server space have been dedicated to testing the feasibility of a systemwide colocation center.
"It's the right thing to do from power usage perspective and the right thing to do from a sustainability perspective," said Paul Weiss, chief information officer at the UC Office of the President and a member of the Regional Data Center Management Work Group that is overseeing the San Diego colocation project.
The spinning disk drives and processors in servers use large amounts of electricity, and they create heat, which requires air conditioning. Colocation takes servers that otherwise would be running independently at different locations and puts them in a central data center that, in many cases, offers a more energy efficient cooling system, additional security, fire protection and emergency backup capabilities. With many servers in one place, economies of scale make their operation more efficient.
The efficiencies produced can reduce the use of air conditioning and energy while creating more computing capacity. Additionally, having many servers in one place helps enable large-scale virtualization, which uses software to increase the computing efficiency of servers, leading to a reduction in the number of machines needed.
With fewer servers, the workload for managing the systems goes down, and in UC's case, if it fully colocated, much of that work would shift from campuses to the central data center.
Fear of loss of control
While there's general agreement on the benefits of a regional approach to hosting, there's still trepidation by some over losing direct control of their machines. Many system administrators and researchers who manage servers have to overcome the mindset that they need to be able to touch their equipment, said Arlene Allen, director of information systems and computing at UC Santa Barbara.
"It's kind of like, ‘Gee, I bought this car, but I never get to drive it,' and that applies to computing equipment as well." Allen said.
Modern servers don't need someone in the driver's seat to make them go, and in most instances, they can be operated remotely, Allen said.
"I have no practical instance of needing to put my hands on a server now," she said, adding it's a mental block that even she's had to overcome.
A colocation and virtualization project at the UC Office of the President begun in 2005 reduced the number of physical machines from about 500 to 280. In many cases, users gained access to better technology by utilizing the central data center.
"Part of the way I think we got this project going is that there was no upfront cost (for users)," said Steve Cavalli, data center manager at the UC Office of the President. "We told people, ‘We'll help you move over. This is where you should be anyway, and you'll be better supported.'"
Dodging dripping pipes
Continuing Education of the Bar, a UC-affiliated professional development program for lawyers, colocated its servers to the UC Office of the President's data center to take advantage of better technology.
Prior to the move, the organization's servers were in a room that was not designed to be a data center. Condensation from air conditioning pipes was at risk of dripping onto the servers and building codes required fire sprinklers in the room.
"Just imagine if we would have had a fire and the sprinkler would start spraying and pouring water over our production machine," said Mei-lian Lin, director of information technology and services for Continuing Education of the Bar. "Fortunately, for the 10 years we were in that building, that did not happen."
Now, the servers are in a secured room at the UC Office of the President data center with a better cooling system, a fire protection system and no drippy pipes. Colocation also kept CEB's website online and its computers accessible remotely by employees when the organization moved into a different building.
"While moving the office, people were able to work from home," Lin said. "That was a big success for us. No down time to our customers at all."
In general, energy use will go down with colocation, but at UC campuses, the bill for electricity usually is not charged to departments, groups or individuals, so cost savings may not be an incentive to colocate. Moving servers to San Diego would mean they would start getting a power bill.
A subsidy from the UC Office of the President available to each UC campus will cover the cost of migrating 10 racks to San Diego. All campuses have submitted or are in the process of submitting plans to UCOP for migrating at least one rack of servers.
The savings associated with systemwide colocation are being evaluated, but a recent study indicates that electricity rates paid by campuses are 35 to 50 percent higher than those at the San Diego Supercomputer Center.
A 2008 study commissioned by the UC Office of the President found that UC's 15 data centers alone used about 29 million kilowatt hours of electricity per year at a cost of about $2.7 million. What's not known is the cost for running servers outside of data centers.
Short of doing a detailed inventory across UC and putting a power meter on all the servers in people's offices, closets and elsewhere, there's no count of how many there are, how much energy they're using and how much it costs to operate them, Weiss said.
Despite the unknowns, "There's no doubt that UC will benefit by having a data center strategy," Weiss said.