In 2000, would anyone have accurately predicted where the world of data and data centres is today? I doubt it: so much has changed in the past 18 years. But let’s try to predict what will happen to data centres over the next 15-20 years.
Data centres store and process data, so to predict their future it may be wise to look at where we were in 2000 and where we are today. Of course, new innovations in technology and unpredictable changes in the wider world may dramatically alter the way we think and work, but let’s give it a go anyway.
In most aspects of life, a want or need drives innovation. However, in the data communications arena it can be argued that advances in technology have enabled and driven changes in all aspects of our social and working lives. For example, in 2000 broadband was first introduced in the UK with a download speed of 512kbit/s, mobile phones were used for making phone calls and sending text messages, and most documents were sent by “snail mail” or fax (remember them?). Then, in 2004, Facebook was created; in 2007 Apple introduced the first iPhone, and the world as we knew it changed.
Today, everyone is connected to everyone 100% of the time and everyone expects instant access to everything. “Paperless” business transactions are the new norm and everything we do now leaves a digital footprint. To put this into some sort of “data” perspective, we now create as much data in a single day as was created from the beginning of time until 2000, and the speed at which we create this data is increasing exponentially. Also consider this: the data processing power of a 2MW data centre in 2000 can now be found in a single 6kW data rack, and despite this huge increase in processing power, data centres are still struggling to keep up with demand.
It is technology that has enabled this data explosion, and it is data centres that have had to adapt to keep up with the demand for the storage and processing of data. The “oldies” amongst us will remember mainframes, then desktop PCs, then fileservers as leading-edge technology. Today we are witnessing the transition from server rooms, to Cloud computing, to Edge computing.
Not that many years ago, the more advanced organisations moved their in-house data processing to The Cloud. This was a good idea at the time; however, The Cloud became a victim of its own success as more and more organisations saw the advantages of Cloud computing and, in any case, ever more data was being created. As a result of its popularity, The Cloud became somewhat overcrowded and data centres struggled to keep up with the growth in demand. Something else was needed to solve this problem.
Today we talk about the Internet of Things (IoT), in which pretty much any electrical or electronic machine or device is capable of connecting to the internet and thereby of creating data (these intelligent machines and devices are sometimes referred to as edge devices). Most of the data edge devices create is disorganised and random, with no real use, but some of it may at some point be vitally important, so it must be collected, processed and stored. Where it is processed and stored is the issue, and this is where Edge computing comes in.
In order to make best use of the data they collect, and of the advantages The Cloud offers, organisations today are creating their own micro and small data centres and using them to store and process data close to the organisation. An organisation can then decide whether to keep its data locally, send it to The Cloud (also known as the core) or discard it. Edge devices plus these micro and small data centres are Edge computing in action.
How does the above help us predict the future of data centres? Well, we know that ever-increasing amounts of data are being created, at a faster rate every day, and that this data revolution has not yet really reached the developing world. With this in mind, the need for more and more large and mega data centres will continue despite the ongoing innovations in data processing and storage technologies, and despite organisations becoming more refined about what they send to the core. We also know that it makes logical and practical sense to filter and process data as close as possible to its source. With this in mind, there will be an ever-increasing number of organisation-specific micro and small data centres operating at The Edge.
So, what will the data centres of the future look like? Without doubt, 100% availability will remain their overriding objective, with Tier 4 levels of availability becoming the norm rather than the exception. The growth in renewable energy generation will ease some of the environmental pressures on data centres; however, the pursuit of a PUE as close to 1.0 as possible will continue to be a major objective because of the need to minimise operational running costs. Also, with data processing and storage technology changing so rapidly, and with data volumes increasing so quickly, it will remain practically impossible for organisations to accurately predict the data centre capacity they need (and therefore its availability and efficiency) unless the data centre infrastructure is highly flexible by design.
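For readers less familiar with the metric, PUE (Power Usage Effectiveness) is simply total facility power divided by IT equipment power, so a PUE of 1.0 means every watt drawn from the grid reaches the IT load. A minimal sketch (the figures below are hypothetical, not from this article):

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Hypothetical site: 1,000 kW of IT load plus 400 kW of cooling, UPS losses, lighting etc.
print(pue(1400, 1000))  # 1.4 -- every IT watt costs an extra 0.4 W of overhead
```

The closer the result is to 1.0, the less energy is spent on overhead such as cooling, which is why PUE remains the headline efficiency figure for a data centre.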
The physical infrastructure needed to deliver the highest levels of availability, efficiency and flexibility is easier to design in when planning and building a large or mega data centre than a micro or small one. I may be wrong, but I suspect it is easier for a large organisation to build a mega data centre inside the Arctic Circle, adjacent to hydroelectric dams, to make best use of free cooling and almost guaranteed power availability, than it is for a typical UK-based SME to relocate its entire workforce and business to the Arctic Circle.
If we now focus on the Edge-related micro and small data centres, from an air-cooling perspective the good news is that modern IT systems can run at higher temperatures, so the very closely controlled (and hence expensive) cooling of old is less necessary, though still needed. The bad news is that average temperatures in the UK will continue to rise, and air-cooling systems must be sized to manage the heat of summer, when the potential for free cooling has all but disappeared.
From a power protection perspective, no data centre can run without electrical power, so the guaranteed availability of clean power will remain critically important. And with the UK’s power quality not good enough to ensure 100% uptime, power protection systems will remain essential.
The latest generation of power protection (i.e. UPS) equipment is modular in design (for flexibility), offers nine-nines (99.9999999%) availability (thanks to high module reliability and “hot swap” capability) and is almost 98% efficient in true on-line mode. While technology improvements are always possible, when a UPS is almost 100% available and almost 100% efficient there is not much further for the technology to go. However, UPS energy storage in the form of lithium-ion (Li-ion) batteries will be a game changer.
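To put a nine-nines availability figure in perspective, a quick back-of-the-envelope calculation (a sketch, not vendor data) shows the expected downtime it implies per year:

```python
# Annual downtime implied by a given availability figure.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 s (ignoring leap years)

def annual_downtime_seconds(availability):
    """Expected unavailable time per year, in seconds."""
    return (1 - availability) * SECONDS_PER_YEAR

# Nine-nines (99.9999999%) availability:
print(round(annual_downtime_seconds(0.999999999), 3))  # ~0.032 seconds per year
```

In other words, nine-nines availability amounts to roughly three-hundredths of a second of downtime a year, which is why the author argues the technology has little further to go.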
Li-ion batteries are smaller and lighter and will happily operate at higher ambient temperatures. This means some of the environmental, floor-loading and structural challenges of introducing a micro data centre into an existing SME premises on, say, the top floor of a London building will simply disappear.
In summary, then, the exponential growth in society’s online connectivity and data creation continues unchecked and will undoubtedly drive further significant growth in large and mega data centres. At the same time, Edge computing will create the need for a rapid and significant increase in “local” micro and small data centres to support the data activities of organisations of all sizes. A well-designed micro or small data centre will last an organisation several generations of IT equipment, whereas a poorly designed one could cost an organisation a lot of money in terms of poor availability, wasted infrastructure and running costs.
CumulusPower, Centiel’s fourth-generation, three-phase, modular UPS, combines unique Intelligent Module Technology (IMT) with a fault-tolerant, parallel Distributed Active Redundant Architecture (DARA) to offer industry-leading availability of 99.9999999% with a low total cost of ownership. This excellence in system availability is achieved through fully independent, self-isolating intelligent modules – each with its own power unit, intelligence (CPU and communication logic), static bypass, control, display and battery. Its simple N+1 scalable configuration ensures optimum efficiency, and it is Li-ion ready.
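As a rough sketch of how N+1 sizing works in any modular UPS (the module rating and load figures below are illustrative only, not CumulusPower specifications):

```python
import math

def n_plus_1_modules(load_kw, module_kw):
    """Number of UPS modules for an N+1 configuration:
    enough modules (N) to carry the full load, plus one redundant spare."""
    n = math.ceil(load_kw / module_kw)
    return n + 1

# A 50 kW load served by 20 kW modules needs N = 3 to carry it, so 4 in total.
print(n_plus_1_modules(50, 20))  # 4
```

Because any one module can fail (or be hot-swapped out) without the remaining N being overloaded, the system stays available while being sized only one module above the load — the flexibility argument made throughout this article.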
Centiel supports the data centres of today and those 10-15 years into the future.
David Bond, Chairman Centiel UK, Board Member Centiel SA and AP
Originally featured in Mission Critical Power magazine October 2018