When I started a web-hosting company back in 1995, we used bakery racks to hold Sun servers — mostly E450s, as I recall. The first real Internet data centers were based on the designs of the telco industry, starting at 100 watts/sq ft and working their way up to 300 watts/sq ft for fairly dense server farms. Air conditioning was usually the biggest problem. As server densities increased, many aging data centers could use only a portion of their floor space because they couldn’t supply enough cooling for the newer servers. As expensive as they were to build, it was very hard to design a data center that could handle the densities coming five years down the road. Looking back only 13 years later, we had no idea how steep the scale curve would become.
Now, as Ina Fried reports at CNET, Microsoft (and I assume others) are buying servers not by the box or even by the rack, but pre-assembled and fully networked in shipping containers with densities of thousands of watts/sq ft. They have to. They’re adding 10,000 servers a month. They don’t repair or replace individual servers when they fail. They just monitor the total number of working servers in the container. When some percentage of the servers has failed, they yank the entire container and send it back to the supplier for refurbishing. Or, if the technology has improved, a container can simply be replaced with one that has even more densely packed servers. (I assume that each container has its own air conditioning and just requires water in/out.)
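The container-replacement policy described above boils down to a simple threshold check. Here's a minimal sketch in Python; the 20% threshold and the server counts are hypothetical, not figures from Fried's reporting:

```python
# Hypothetical threshold: yank the container once 20% of its servers are dead.
FAILURE_THRESHOLD = 0.20

def should_replace(total_servers: int, working_servers: int,
                   threshold: float = FAILURE_THRESHOLD) -> bool:
    """Return True when the fraction of failed servers reaches the threshold."""
    failed_fraction = (total_servers - working_servers) / total_servers
    return failed_fraction >= threshold

# A container of 2,500 servers with 2,100 still working: 16% failed, keep it running.
print(should_replace(2500, 2100))  # False
# At 1,900 working, 24% have failed: time to send it back for refurbishing.
print(should_replace(2500, 1900))  # True
```

The point of the policy is that the container, not the server, is the unit of repair: no one opens the box until the aggregate failure rate makes the whole container worth swapping.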
With densities like this, the big guys (Google, Microsoft, Yahoo, etc.) are primarily in the electrical-power business. In some cases, they’re even building (or planning to build) their own generating facilities. It only took a decade to get to this point. Hard to imagine how we’ll be building data centers in another ten years.