Wednesday, May 27, 2009

European Code of Conduct for Data Centres


Last week we signed up to the European Commission's Code of Conduct for Data Centres.

In 2007 we decided to transform our enterprise data centre into an energy-efficient data centre.

The first thing we did was to increase the room temperature from 18C to 24C. We did this gradually, one degree Celsius per month, to gauge the effect. Modern computer equipment does not need to run below 20C; it is an old mainframe legacy concept that some data centres still run at such cold temperatures.

Next we turned off the humidifiers; again, modern computing equipment does not need humidity of exactly 50%. We found that often one air conditioning unit was boiling water in its humidifier bottle using a 4kW element to humidify the room while the other unit was de-humidifying the room. Just plain CRAZY!

We enforced hot and cold aisles and created a more efficient air flow by using blanking plates, changing air tiles and diverting under-floor air flow.

We virtualised every server that could be virtualised, using 64-bit HP blade servers. You now need a business case to have a physical server.

Our HP blade server enclosures use power supplies with better than 90% efficiency, together with power management, making them very efficient.

We consolidated our storage using NetApp, reducing the number of spinning disks by over 500 while increasing actual capacity.

The data centre is now a lights-out data centre. A motion sensor detects when someone is inside the room and switches the lights on; they switch off automatically when the person leaves.

We monitored total power consumption at the electricity meter and IT equipment consumption at the UPS to work out the Green Grid PUE and DCiE metrics.
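
For reference, PUE is the total facility energy divided by the IT equipment energy over the same period, and DCiE is simply the reciprocal expressed as a percentage. A minimal Python sketch of the calculation, using made-up figures rather than our actual readings, looks like this:

# Green Grid PUE / DCiE from two meter readings (illustrative figures only)

def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy (same period)."""
    return total_facility_kwh / it_equipment_kwh

def dcie(total_facility_kwh, it_equipment_kwh):
    """DCiE = IT energy / total facility energy as a percentage (i.e. 100 / PUE)."""
    return 100.0 * it_equipment_kwh / total_facility_kwh

total_kwh = 45000.0   # electricity meter (whole data centre), made-up monthly figure
it_kwh = 33000.0      # UPS meter (IT equipment only), made-up monthly figure
print("PUE  = %.2f" % pue(total_kwh, it_kwh))     # ~1.36
print("DCiE = %.1f%%" % dcie(total_kwh, it_kwh))  # ~73.3%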

However, the main investment was to replace our Stulz computer room air conditioning units with a pair of Uniflair free-cooling units.

Our legacy air conditioning equipment was ‘direct expansion’ where a refrigerant gas is used to transfer heat from inside the building (via the AC unit) to outside air (via the condensing unit). In this type of machine the complete heat transfer process is carried out by condensing & evaporating the refrigerant gas.

The new Uniflair system still relies on the same principle; however, instead of relying on ‘pumping’ the refrigerant around the system, it transfers the heat to a water system which absorbs the heat generated and carries it to the roof. The key feature of this system is that for much of the year it can transfer enough heat from the room to the water without having to run the compressor at all – and this is where substantial cost and environmental savings can be achieved.

When outside air temperatures are low enough, the water can simply be passed through the roof-top cooler and returned to the AC unit, where it is diverted straight across the evaporator to cool the room air.
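
To make the changeover concrete, the control decision amounts to something like the sketch below. The 16C figure is the free-cooling threshold mentioned in the benefits list further down; the hysteresis band and the rest of the logic are illustrative assumptions, not the actual Uniflair controller settings.

# Simplified sketch of a free-cooling changeover decision.
# Thresholds are assumptions for illustration, not Uniflair's actual settings.

FREE_COOLING_MAX_OUTSIDE_C = 16.0   # below this, the roof-top cooler alone can do the job
HYSTERESIS_C = 1.0                  # avoid rapid switching around the threshold

def cooling_mode(outside_temp_c, current_mode):
    """Return 'free' (water loop only) or 'compressor' (refrigerant circuit running)."""
    if current_mode == "free":
        # Stay in free-cooling until it is clearly too warm outside
        return "free" if outside_temp_c < FREE_COOLING_MAX_OUTSIDE_C + HYSTERESIS_C else "compressor"
    # Switch to free-cooling only once it is clearly cold enough outside
    return "free" if outside_temp_c < FREE_COOLING_MAX_OUTSIDE_C - HYSTERESIS_C else "compressor"

print(cooling_mode(10.0, "compressor"))   # free
print(cooling_mode(20.0, "free"))         # compressor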

The new system also uses ‘EC’ (electronically commutated) fans, the latest low-energy fan drives, saving approximately 33% over standard motor drives.

Benefits

There are numerous benefits to changing to the free-cooling, water-cooled option:

• Free cooling available for much of the year (any time the outside temperature is below 16C).
• Lower energy consumption.
• Less refrigerant in the system.
• Improved low-energy fan drives.
• Greater capacity control – improved room conditions and lower energy use.


Electricity Meter Readings for the data centre

28/11/2008 to 12/01/2009: 91,420 kWh
13/01/2009 to 10/02/2009: 42,940 kWh
11/02/2009 to 25/02/2009: 22,520 kWh
26/02/2009 to 03/04/2009: 59,710 kWh

UPS meter readings for IT equipment load

28/11/2008 to 30/11/2008: 4,720 kWh
01/12/2008 to 31/12/2008: 36,582 kWh
01/01/2009 to 31/01/2009: 36,582 kWh
01/02/2009 to 28/02/2009: 33,042 kWh
01/03/2009 to 31/03/2009: 36,582 kWh
01/04/2009 to 30/04/2009: 35,402 kWh

Green Grid PUE Metrics

Although the free-cooling system was installed in November 2008, the free-cooling element was not fully active until early December 2008.

Prior to November 2008 we estimate our data centre PUE to have been close to 2.0, although the average legacy data centre PUE is 2.5 according to the Uptime Institute.

November to December 2008: PUE 1.68

January 2009: PUE 1.29

February 2009: PUE 1.36

March 2009: PUE 1.36
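
As a rough cross-check, these monthly figures can be reproduced from the two sets of meter readings above. Because the electricity-meter periods do not line up exactly with calendar months, the sketch below normalises each meter to an average daily figure first; treat this as an approximation rather than the exact method behind the published numbers.

# Approximate cross-check of the February 2009 PUE from the readings above.
facility_kwh = 22520.0    # electricity meter, 11/02/2009 to 25/02/2009 (14 days)
facility_days = 14
it_kwh = 33042.0          # UPS meter, February 2009 (28 days)
it_days = 28

facility_per_day = facility_kwh / facility_days   # ~1,609 kWh per day
it_per_day = it_kwh / it_days                     # ~1,180 kWh per day
print("Approximate February PUE: %.2f" % (facility_per_day / it_per_day))   # ~1.36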

Microsoft is one of the largest names to have signed the code; it has a PUE of 1.6 and is targeting 1.2.

Google led the charge, with one of its data centres running an average PUE of 1.11 and the rest averaging 1.19.

Another idea is to use a networked energy monitor such as the Ringdale Energy Monitor, which connects to the network via an Ethernet cable and contains a web server with a data logger.
We will be installing this inexpensive monitor in the next few weeks.


A link to the Ringdale monitor can be found here.
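
Once it is in, something along these lines could poll the monitor's built-in web server and log the readings for later PUE calculations. The address and response format below are purely hypothetical placeholders, not the actual Ringdale interface.

# Illustrative only: poll a networked energy monitor's web server and log readings.
# The URL and response format are hypothetical, not the actual Ringdale interface.

import csv
import time
import urllib.request

MONITOR_URL = "http://192.168.0.50/reading"   # hypothetical address of the monitor

def poll_once():
    """Fetch one reading (e.g. a kWh counter value) from the monitor's web server."""
    with urllib.request.urlopen(MONITOR_URL, timeout=5) as response:
        return response.read().decode("utf-8").strip()

with open("energy_log.csv", "a", newline="") as log:
    writer = csv.writer(log)
    while True:
        writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), poll_once()])
        log.flush()
        time.sleep(300)   # one sample every five minutes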


We would certainly encourage others to sign the EU code of conduct.

More information on the European Code of Conduct for Data Centres can be found here.
