AI at the Edge: Building the Infrastructure for our Future Industries
In a previous article, Building AI-ready Infrastructure, we looked at the challenges facing the builders of digital infrastructure as they create the massive engines that will power the ‘AI Revolution’ – in particular, the mega-data centres that will host the training systems used in Generative AI platforms like ChatGPT.
Most of the attention in the data centre industry is on these monsters, but there is more to the picture. This article looks at the other uses, applications, and implications of AI, and the infrastructure required to support them.
The Growth of Industrial AI
There are many flavours of AI, and although much of the current focus is on Generative AI, commercial applications use all sorts of other techniques to get the benefits that AI can offer. Indeed, some AI experts think that too much emphasis is being placed on the prominent large language models, and that the market will need a more diverse infrastructure model to support real-world applications.
There are many examples of industrial and manufacturing applications already using AI to optimise, for example, production-line efficiency in factories. These systems take data from sensors and devices (e.g. cameras) and then control the manufacturing processes in real time to improve efficiency or to reduce the use of raw materials. A great example is the use of specialist glues in the automobile industry for sticking windscreens to car bodies: an AI platform has been used to reduce the amount of glue applied without compromising the efficacy of the bond. This may sound trivial, but the quantities used globally mean that even small proportional savings amount to huge monetary savings.
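To make the pattern concrete, here is a minimal, hypothetical Python sketch of such a closed loop. The camera, model, and dispenser classes are invented stand-ins for whatever proprietary systems a real production line would use, and the numbers are purely illustrative:

import random
import time

MIN_BOND_QUALITY = 0.95  # minimum acceptable predicted bond quality (normalised)

class FakeCamera:
    def capture(self):
        return random.random()  # stand-in for a real image frame

class FakeModel:
    def predict_bead_width(self, frame):
        return 2.0 + frame * 1.5  # mm: the smallest bead predicted to hold

    def predict_quality(self, frame, bead_mm):
        return min(1.0, bead_mm / 3.5)  # toy quality score

class FakeDispenser:
    def set_bead_width(self, mm):
        print(f"dispensing a {mm:.2f} mm glue bead")

def control_loop(camera, model, dispenser, cycles=5):
    for _ in range(cycles):
        frame = camera.capture()                   # read the sensor
        bead_mm = model.predict_bead_width(frame)  # infer minimum glue needed
        if model.predict_quality(frame, bead_mm) < MIN_BOND_QUALITY:
            bead_mm += 0.1                         # err towards a stronger bond
        dispenser.set_bead_width(bead_mm)          # actuate in real time
        time.sleep(0.01)                           # ~100 Hz control cycle

control_loop(FakeCamera(), FakeModel(), FakeDispenser())

The essential point is the tight cycle time: the sensing, inference, and actuation all have to complete within each pass of the loop, which is why the processing has to sit close to the line.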
This type of application, used across multiple industries, has enormous potential for saving precious resources (or money), and many industries have been using these techniques for years. However, it is mostly the large manufacturers and processing companies that have been able to exploit it. Deploying this type of system can be expensive, and it usually entails situating a lot of processing power close to the production line. That shuts out smaller enterprises: the barrier to entry is too high, and it means maintaining IT kit that is costly and difficult to look after.

Solutions at the Edge
The ideal solution for these smaller enterprises would be the ability to use something like existing cloud services for running AI-driven applications. However, for a lot of applications this is not viable with the existing cloud services offered by the likes of Amazon, Microsoft, or Google. For most, the reason is either the amount of data that needs to be processed (i.e. it costs a fortune to transport the humongous amounts of data generated on their production lines to where they could be processed), or the network latency (i.e. the time required to transmit data to and from their facility) is too high – real-time control of industrial processes requires extremely fast networks.
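To see why the data volumes bite, here is a rough back-of-envelope sketch in Python; the figures (20 cameras on one line, 25 Mbit/s per compressed video stream) are illustrative assumptions, not measurements:

# Rough back-of-envelope for the data-volume problem. All figures are
# illustrative assumptions, not measurements.
cameras = 20                 # cameras watching one production line
mbit_per_sec_each = 25       # one compressed 4K video stream, Mbit/s
seconds_per_day = 24 * 3600

total_gbit_per_day = cameras * mbit_per_sec_each * seconds_per_day / 1000
total_tb_per_day = total_gbit_per_day / 8 / 1000
print(f"roughly {total_tb_per_day:.1f} TB of video per day")  # ~5.4 TB/day

Shipping terabytes of footage a day to a distant cloud region quickly dwarfs the cost of processing it a few kilometres away.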
The answer here is to create a distributed, cloud-like infrastructure – putting processing power near the companies (or users) that are generating or consuming the data. This is what defines the edge. Ask four data centre people what the edge is and you will probably get at least five definitions. For me, it’s quite simple: the edge is where data processing needs to happen. As I said above, that need is defined either by the sheer quantity of data to be processed (it is prohibitively expensive to transport it a long distance) or by latency (processing must happen quickly enough to deliver real-time control).
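That latency constraint is easy to put numbers on. Light in optical fibre covers roughly 200 km per millisecond, so distance alone sets a hard floor on round-trip time, before any switching, queuing, or processing delay is added. A short Python sketch with illustrative distances:

# Propagation delay floor in optical fibre: light covers roughly 200 km
# per millisecond, and that is before any switching or processing delay.
FIBRE_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    return 2 * distance_km / FIBRE_KM_PER_MS

for km in (30, 300):
    print(f"{km:>3} km away: at least {round_trip_ms(km):.1f} ms round trip")

For a control loop that has to react within a millisecond or so, a data centre a few hundred kilometres away is simply out of reach, whatever the network.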
Building for the Edge of Tomorrow
The edge is going to become increasingly important as commercial AI applications are developed. It’s not just industrial and manufacturing companies that will benefit – we are already seeing multiple applications in healthcare, retail, transportation, and logistics, as well as consumer apps. Virtual and augmented reality, along with gaming, become a whole lot better on a distributed platform, and while that may not yet be the ‘killer app’ for the edge, it’s one that most people can understand.
So, what does AI infrastructure at the edge look like? Unlike the large language model training platforms, edge AI data centres can be much smaller and easier to build, since the AI systems used at the edge do not usually require the extreme power densities that the training systems need. The big difference is that there will be lots of them. Having access to an edge data centre within 20-40km will be sufficient for many applications, but it means we will need to build possibly hundreds of new (small) data centres to cover a country the size of the UK. At the moment, 80% of the UK’s data centres are in Docklands or Slough, and half of the remaining capacity is in Newport – leaving most of the country served by only 10% of the data centres. That is why edge or distributed infrastructure needs to be built from scratch, and it needs to go where economic activity dictates: in all population centres and anywhere that things are made, processed, shipped or used.
For the data centre industry, I think that is just as exciting as the need to build the behemoth data centres at the core – the truth is, we need both.
For further guidance on the topics discussed, reach out to Duncan Clubb, Senior Partner for Data Centres, Edge, and Cloud using the form below.
You can also read more about our Telecoms, Media, and Technology services at our TMT micro-site.