“Cloud services reduce energy consumption by consolidating IT resources across multiple enterprises, resulting in better resource utilization. However, any potential energy reduction due to cloud services has been quickly erased by the rapid increase in Internet usage, which requires ever larger server farms and even more energy consumption,” says UMass Amherst computer scientist Ramesh Sitaraman, an expert in the foundational aspects of large Internet-scale distributed systems. According to Sitaraman, data centers that host IT infrastructure consume about 1.5% of all the power in the world today, and they already contribute greenhouse gas emissions comparable to those of a mid-sized country. And with Internet usage doubling about every two years, Sitaraman asks, “What will that 1.5% be in ten years? Twenty years? It will be a big consumer of power and one of the biggest sources of emissions. It’s a sleeping giant.”
The big challenge, he says, is to make cloud systems much more energy-efficient without sacrificing performance. “If you design an energy-efficient system that doesn’t perform well, no one will use it,” says Sitaraman.
Sitaraman’s research on the science of fault-tolerant networks and distributed algorithms has had a profound impact on the way the Internet functions today. In fact, almost every Internet user in the world directly benefits from Sitaraman’s innovations when they use the Internet to read news, watch videos, buy products, play games online, or use a social network. Building on his past decade of research on how to make the Internet more reliable and better performing, he’s now using his experience to design Internet systems that consume drastically less energy.
Sitaraman’s particular focus is a type of cloud system called the Content Delivery Network (or CDN for short). CDNs are among the major technology innovations of the past decade and are also one of the earliest examples of an Internet-scale cloud service. “CDNs today deliver most major media events (the Super Bowl), music downloads (Apple iTunes), video downloads (Netflix), games (Nintendo), social networks (Facebook), online retail, and news media,” says Sitaraman. “It is hard to imagine life today without CDNs.”
In the late 1990s, the Internet was becoming increasingly important for enterprises even though it remained painfully unreliable and slow. “Deep scientific ideas and sophisticated algorithms were needed to transform the vast unreliable Internet into a high-performance communication medium that an online retailer can use,” adds Sitaraman. Central to that transformation was the idea of a CDN. “Conceptually,” says Sitaraman, “a CDN is a virtual network that provides high performance to users and is built over the failure-prone Internet.”
Sitaraman’s interest in virtual networks started in the early 1990s with his Princeton PhD thesis, where he developed algorithms for building fully-reliable virtual networks on top of actual networks that were failure-prone. “Much of my early research dealt with networks inside large parallel computers that were quite different from the Internet,” says Sitaraman. “But when the Internet came into its own in the late 1990s, I was intrigued by the thought of building a virtual network over the Internet at a global scale never imagined before.”
Subsequently, he joined his research colleagues from MIT who had earlier founded Akamai Technologies. As a principal architect, Sitaraman helped pioneer CDNs and helped build the Akamai network, perhaps the world’s largest CDN today. Currently, the Akamai network consists of more than 100,000 servers deployed in over 1,900 networks across more than 75 countries. It serves 15-30% of all web traffic and hundreds of billions of web user requests per day.
“The next-generation CDNs and cloud systems must be energy-aware. They have to achieve the delicate balance between minimizing energy use and delivering high performance,” says Sitaraman. Energy and performance often pull in opposite directions, which requires the development of sophisticated algorithms to keep the goals in balance.
To take an example, suppose one were to turn off a CDN’s unused servers during off-peak hours to save energy. A major, unexpected surge in global Internet traffic during those off-peak hours, triggered by a breaking news story such as the capture of Osama Bin Laden, could leave the CDN with insufficient live server capacity to serve all the users who want to view the story, causing degraded performance or an outage.
“Rethinking the distributed algorithms that are at the heart of CDNs and cloud systems to balance both energy and performance goals is the major research challenge,” says Sitaraman. A significant step in this regard is his recent research done in collaboration with UMass graduate student Vimal Mathew and computer scientist Professor Prashant Shenoy on algorithms for turning off servers in a CDN to save energy. These algorithms hedge the risk of a major traffic surge by maintaining a live pool of spare servers while gradually turning off the remaining idle servers in an orchestrated fashion to save energy.
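The policy just described can be sketched in a few lines of code. This is only an illustrative sketch, not the actual algorithm from the paper: the function names, the fixed spare-pool fraction, and the per-interval shutdown cap are all assumptions introduced here to show the hedging idea (keep spare live capacity against surges, and power servers down gradually to limit wear-and-tear).

```python
# Illustrative sketch only -- not Sitaraman et al.'s actual algorithm.
# Idea: keep a spare pool of live servers as a hedge against traffic
# surges, and change the number of powered-on servers gradually each
# control interval to limit on/off churn (hardware wear-and-tear).
import math

def servers_to_keep_live(current_load, server_capacity, spare_fraction,
                         total_servers):
    """Return how many servers should stay powered on.

    current_load    -- aggregate request rate (e.g. requests/sec)
    server_capacity -- capacity of a single server, in the same units
    spare_fraction  -- extra live headroom kept as a hedge (0.2 = 20%)
    """
    needed = math.ceil(current_load / server_capacity)
    with_spares = math.ceil(needed * (1 + spare_fraction))
    return min(total_servers, max(1, with_spares))

def next_live_count(live_now, target, max_step):
    """Move toward the target gradually: turn off (or wake up) at most
    max_step servers per control interval."""
    if target < live_now:
        return max(target, live_now - max_step)
    return min(target, live_now + max_step)
```

For instance, with 900 requests/sec of load, 100 requests/sec per server, and a 20% spare pool, the target is 11 live servers; a cluster currently running 40 live servers with a shutdown cap of 5 per interval would step down to 35, then 30, and so on, rather than powering off 29 machines at once.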
An evaluation of these new algorithms shows that they can reduce the total energy usage of a CDN by almost 50% while maintaining high performance for users and minimizing wear-and-tear on the server hardware. This work is outlined in a recent paper, “Energy-Aware Load Balancing in Content Delivery Networks,” coauthored with Mathew and Shenoy and presented at the 31st Annual IEEE International Conference on Computer Communications (INFOCOM) in March 2012.
“Our preliminary research is promising,” says Sitaraman. “There is a vast space of green research just waiting to be explored that could well revolutionize the way the Internet works all over again.”
David Bartone '12G and Karen J. Hayes '85