Achieving ultra-reliable networks in 2022: the challenge of delivering 5G and low latency

Jon Wilton

Capacity Europe 2021

This article was written to support a panel discussion at Capacity Europe 2021 on 21 October in London. The panel is titled ‘How Digitalisation is Driving the Interconnectivity Landscape’, and it is being moderated by Charles Orsel des Sagets, Managing Partner at Cambridge Management Consulting for Europe and LATAM. Contributions to the article were made by Ivo Ivanov, CEO of DE-CIX International; Tim Passingham, Chairman at Cambridge Management Consulting; and Eric Green, Senior Partner at Cambridge Management Consulting. Thank you to everyone involved.  

5G illustration with neon icons and a satellite tower cell

How the pandemic has shaped the roadmap for internet connectivity in Europe

The pandemic has brought into sharp focus the importance of reliable and flexible networks at home. The switch to remote working, and the speed with which meetings from our desks became the norm, brought surges in internet traffic and demand for reliable, stable connections.


Networks across Europe coped well, for the most part, but questions hang over the near horizon about how carriers will adapt and scale for both a growing remote workforce and the predicted rise of new technologies.


5G is part of the answer (but only a part) and its development and rollout will coincide with and drive innovations in IoT, autonomous vehicles, and AI. These emerging technologies also exist within larger tectonic shifts in society and culture, including increasing digitalisation, virtualisation and autonomy in services; and the beginnings of the decentralised application of blockchain technology.


As our society embraces digitalisation, a process accelerated by Covid-19, we ask what major challenges carriers face heading into 2022 as they deliver on the twin fronts of infrastructure demand and customer expectation.


We will discuss the issue of low latency, particularly in meeting customer expectations shaped by video-centric content, remote working, IoT and gaming. We will also explore metrics for customer experience, and conclude with the impact of 5G technology.

Measuring Quality of Experience for internet connectivity

With the surge in internet traffic during lockdowns potentially marking the start of a sustained rise in demand, driven by online gaming and the growing markets for game streaming and VR, there is a spotlight on latency as a key indicator of customer experience.


Let us first talk more broadly about indicators of network quality in 2021 and beyond.

Digital Equality - The widening speed gap in Europe

Europe’s internet speeds have increased by more than 50% in the last 18 months. However, this comes at the cost of widening gaps between urban and rural areas, and between Northern and South-Eastern Europe.


The UK, too, lags behind many of its Western European neighbours when it comes to average internet speed: it placed 47th in a study conducted in 2020. In fact, the average broadband speed in the UK was less than half the Western European average.


The EU has a stated goal to be the most connected continent by 2030. It has already taken action to do this by ending roaming charges and introducing a price-cap on inter-EU communications. The key goal is for every European household to have access to high-speed internet coverage by 2025 and gigabit connectivity by 2030.


The elevation of internet access to a necessary human right is of course encouraging, and so are the targets set by the EU. However, for these targets to be truly meaningful, there needs to be progress on a number of challenges to connectivity across Europe. Redefining the metrics we use to track this progress is also vital.

Measuring Quality of Experience (QoE)

There are a variety of problems with measuring internet speeds in a comparable way. Usually, ISPs present average speeds in Mbps over a period of time, or sometimes the percentage of the plan speed achieved over that period.


As bandwidth in many countries in Europe moves towards, and over, 100Mbps, this proxy is becoming a weaker indicator of user experience. 


There are also a number of key reasons why figures published by an ISP might be misleading compared to actual user experience. 


Some of these problems are as follows:


  • Lab-testing of internet speed does not replicate the real-world chain of devices/hosts involved in sending and receiving packets


  • Averages of Mbps ignore speeds at peak periods, when the network is congested and providers throttle bandwidth (see the sketch after this list)


  • The ‘plan speed’ does not reflect actual speeds experienced in a household, where packet queuing and WiFi congestion affect users differently on the customer LAN


  • This metric ignores latency, which is becoming a better signal of internet experience in an age of video streaming and online gaming (more on this below)
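
To see why a daily average can mask congestion, consider a minimal sketch comparing the average speed a provider might report against the speed felt during the evening peak (all sample values are invented):

```python
import statistics

# Hypothetical hourly download speeds (Mbps) for one household over a day.
samples = [92, 94, 95, 93, 90, 88, 85, 80, 72, 55, 48, 52,
           60, 75, 83, 88, 90, 86, 70, 50, 45, 47, 65, 85]

mean_speed = statistics.mean(samples)
peak_speed = statistics.mean(samples[19:22])  # evening peak, 19:00-21:00

print(f"Daily average: {mean_speed:.0f} Mbps")  # looks healthy
print(f"Evening peak:  {peak_speed:.0f} Mbps")  # what users actually feel
```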


Many voices in the industry are pushing for more holistic Quality of Experience (QoE) metrics to complement the current set of Quality of Service (QoS) measurements.


The difference between QoE and QoS is that the latter is comparable to measuring the success of a call centre by how many calls are concluded in a given day. This metric completely ignores whether a caller’s problem was resolved or how satisfied the caller felt about the interaction: the ‘experience’.


Research shows that users are happy when a website loads in under two seconds (QoE). If network management is calibrated with this information, bandwidth saved can be allocated elsewhere if necessary (QoS). 


Thus, one characteristic of QoE is the realisation that there are many examples where a better QoS (above a threshold) does not readily impact the user’s perception/experience of the service. 
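
One way to operationalise such a threshold is an Apdex-style score, which buckets each page load as satisfied, tolerating or frustrated relative to a target time (the two-second figure above, say). A minimal sketch with invented sample data:

```python
def apdex(load_times_s, target=2.0):
    """Apdex-style score: 1.0 = all users satisfied, 0.0 = all frustrated."""
    satisfied = sum(1 for t in load_times_s if t <= target)
    tolerating = sum(1 for t in load_times_s if target < t <= 4 * target)
    return (satisfied + tolerating / 2) / len(load_times_s)

# Invented page-load samples in seconds.
print(apdex([0.8, 1.5, 2.2, 3.9, 9.0, 1.1]))  # ~0.67
```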


This has some important ramifications in terms of design. For example, services such as online gaming rely on low latency far more than video streaming, where buffering protocols absorb lag. QoE can be used to design SLAs and network management that are specific to the needs of an individual service. 


If network providers can find reliable methods of gathering QoE data, it can be used to build Autonomic Network Management (ANM) capabilities: AI-driven systems that tune network performance in real time in response to user experience.

Low latency and packet loss

Bandwidth has generally been king in the history of communication networks. Low latency has lagged behind (pun intended) as a priority when networks are upgraded.

What is latency?

From a QoE perspective, latency can be roughly defined as ‘the delay between a user’s action and the response of a web application’ – in QoS terms, this is the time taken for a data packet to make a round trip to and from a server (round trip delay). 
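
For a rough, hands-on feel of round-trip delay, here is a minimal Python sketch that times TCP handshakes to a server (a crude proxy for ping; example.com and the sample count are placeholders):

```python
import socket
import time

def tcp_rtt_ms(host, port=443):
    """Approximate round-trip latency by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=3):
        pass
    return (time.perf_counter() - start) * 1000

samples = [tcp_rtt_ms("example.com") for _ in range(5)]
print(f"min {min(samples):.1f} ms, mean {sum(samples)/len(samples):.1f} ms")
```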


Latency is affected by many variables, but the main four are:


  • Transmission medium: The physical path between the start and end points; a copper-based network, for example, is much slower than fibre optic.


  • Network management: The efficiency of routers and other devices or software that manage incoming traffic


  • Propagation: The distance between two nodes in the network affects latency; every 100 miles of fibre-optic cable is estimated to add around 1ms of latency


  • Storage delays: Accessing stored data will generally increase latency

Jitter

There are two types of latency issue.


One is the ‘lag’ (delay) we defined above, and the other is ‘jitter’, the variations in latency that can make connections unreliable. Jitter is usually caused by network traffic jams, bad packet queuing and setup errors.
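
Jitter can be estimated from consecutive RTT samples, for instance as the mean absolute difference between successive measurements (RFC 3550 uses a smoothed variant of this for RTP). A minimal sketch with invented samples:

```python
def jitter_ms(rtts):
    """Mean absolute difference between consecutive RTT samples."""
    diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    return sum(diffs) / len(diffs)

print(jitter_ms([21.0, 23.5, 20.9, 35.2, 22.1]))  # ~8.1 ms
```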

Packet loss

Packet loss also impacts the QoE ‘perception’ of latency. Packet loss occurs when packets of data do not reach their intended destination. It is commonly caused by congestion and hardware issues, and can be more frequent over WiFi, where environmental interference and weak signal play a part. The effect of packet loss is worse for real-time services such as video, voice and gaming. It is also worse for traffic that does not use TCP, which detects and re-sends packets that have been dropped.

Why is low latency so important now?

“All areas of business and private life rely more heavily today than ever before on digital applications. The latency-sensitivity of these applications is not only a hallmark of quality and guarantee of commercial productivity, but also – in critical use cases – a lifeline”
—Ivo Ivanov, CEO of DE-CIX International

Recent technological innovations all tend to require lower latency. Cloud applications, mobile gaming, virtual/augmented reality, and the smart home rely on real-time monitoring and fast signal-to-action responsiveness. The growth of IoT and a world of interconnected sensors demand networks with a consistently low latency, below human reaction speeds.


  • Human beings: 250 milliseconds responding to a visual stimulus 
  • 4G latency: 200 milliseconds
  • 5G latency: 1 millisecond


Consider the safety implications when your car can react 250 times faster than you. At 100km/h (roughly 28 metres per second), a 250 millisecond human reaction creates a reaction distance of around 7m; allowing for perception and decision time, the commonly quoted figure is closer to 30m. With a 1 millisecond (1ms) reaction time, your autonomous car can brake with a reaction distance of about 3cm.
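
The arithmetic behind these figures is simply distance = speed × reaction time; a quick sketch:

```python
SPEED_KMH = 100
speed_ms = SPEED_KMH / 3.6  # ~27.8 metres per second

for label, reaction_s in [("human (250 ms)", 0.250),
                          ("4G (200 ms)", 0.200),
                          ("5G (1 ms)", 0.001)]:
    distance_m = speed_ms * reaction_s
    print(f"{label}: {distance_m:.2f} m travelled before reacting")
```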


End-to-end latency infographic for different use cases

The relationship of latency and user experience to geography

The maximum tolerable latency for a decent end-user experience with today’s general-use applications is around 65 milliseconds. However, a latency of no more than 20 milliseconds is necessary to perform all these daily activities with the level of performance that everybody deserves. Translating this into distance means that content and applications need to be as close to the users as possible. Geographically speaking, applications like interactive online gaming and live streaming in HD/4K need to be less than 1,200 km from the user. But the applications that our digital future will be based on will demand much lower latency – in the range of 1-3 milliseconds. Smart IoT applications, and critical applications requiring real-time responses, like autonomous driving, need to be performed within a range of 50-80 km from the user.
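
These distances follow from propagation physics: light in fibre travels at roughly 200,000 km/s (about two-thirds of its speed in a vacuum), so a round trip consumes about 1 ms per 100 km of path. A sketch converting a latency budget into a maximum server distance, ignoring routing and processing overheads:

```python
FIBRE_KM_PER_S = 200_000  # approximate speed of light in optical fibre

def max_distance_km(latency_budget_ms):
    """One-way distance whose round trip fits within the latency budget."""
    return FIBRE_KM_PER_S * (latency_budget_ms / 1000) / 2

for budget_ms in (65, 20, 3, 1):
    print(f"{budget_ms:>2} ms budget -> server within ~{max_distance_km(budget_ms):,.0f} km")
```

Real-world limits are tighter, since queuing, routing and server processing eat into the same budget, which is why the practical figures quoted above are well below these ceilings.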

How networks can reduce latency

There are a variety of ways of lowering latency. Businesses can pay for dedicated private networks and links that deliver extremely reliable and stable connections. This is also one of the few solutions that tackles performance gaps in the ‘middle mile’ of the internet: the network infrastructure that connects last-mile (local) networks to high-speed network service providers.


Any service which uses the backbone of the internet will run into problems of inefficient routing due to:


  • Border Gateway Protocol (BGP) for routing (because it has no congestion avoidance)


  • Least-cost routing policies


  • Transmission Control Protocol (TCP): It is a blunt-tool protocol that reacts strongly to congestion and throttles throughput

SD-WAN, latency and efficient network management

Photo of neon glow of exposed cluster of optical fibres

One other solution is offered by the latest breed of SD-WAN software. SD-WAN operates as a virtual overlay of the internet, testing and identifying the best routes via a feedback loop of metrics. Potentially, SD-WAN can limit packet loss and decrease latency by sending data through pre-approved optimal routes; a sketch of this metric-driven path selection follows below. MPLS does something similar, labelling traffic to ensure it is dealt with on a priority basis; but this service is more expensive than SD-WAN and its architecture is not suited to cloud connectivity.
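
Conceptually, the overlay’s path selection boils down to scoring each underlay on measured metrics and steering each traffic class to the best path. A toy sketch, in which the paths, metrics and weights are all invented:

```python
# Hypothetical per-path metrics an SD-WAN overlay might collect via probes.
paths = {
    "mpls":     {"latency_ms": 18, "jitter_ms": 1.0,  "loss_pct": 0.0},
    "internet": {"latency_ms": 35, "jitter_ms": 6.0,  "loss_pct": 0.8},
    "lte":      {"latency_ms": 55, "jitter_ms": 12.0, "loss_pct": 1.5},
}

def score(metrics):
    # Lower is better; the weights are arbitrary illustrative choices.
    return (metrics["latency_ms"]
            + 2 * metrics["jitter_ms"]
            + 50 * metrics["loss_pct"])

best = min(paths, key=lambda name: score(paths[name]))
print(f"Route latency-sensitive traffic over: {best}")  # mpls, on these numbers
```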


SD-WAN is a hybrid solution, meaning that the software overlay can route traffic over a host of networks, including MPLS, a dedicated line and the internet. WAN management also includes a range of virtualised network tools that optimise network efficiency, including eliminating redundant data (known as deduplication), compression, and caching (where frequently accessed data is stored closer to the end user).


To find out more about the range of network infrastructure and SD-WAN services offered by Cambridge Management Consulting visit our website.

5G promises ultra-low latency

Infographic of transmission distance 5G vs 4G

5G promises to lead us into a world of ultra-low latency, paving the way for robotics, IoT, autonomous cars, VR and cloud gaming. For this to become a reality, new infrastructure must be installed; this requires significant investment from governments and telecoms companies. Most countries need to install much more fibre to deal with the backhaul of data. 


During the transition, the current 4G network will need to support 5G, and there will be a combination of new and old tech, patches and upgrades to masts. Edge computing will eventually move data centres closer to users, also contributing to lower latency. It could be many years before we see the kinds of low-latency connections that have been promised.

How 5G and 'network slicing' will end high latency

With the fifth generation of cellular data, gigabit bandwidth should become the norm, and the frame length (the time spent waiting to put bits into the channel) will be drastically reduced. 5G moves up the electromagnetic spectrum to make use of millimetre waves (mmWave), which have much greater capacity but poorer propagation characteristics: they can easily be blocked by a wall, or even a person or a tree. Operators will therefore use a combination of low-, mid- and high-band spectrum to support different use cases.


The mid- to long-term solution to propagation restrictions is that 5G will require a network of small cells, as well as the cell towers to support them (NG-RAN architecture). Small cells can be located on lampposts, the sides of buildings, and also within businesses and public buildings. They will enable the ‘densification’ of networks, broadcasting high-capacity millimetre waves primarily in urban areas. Because optical fibre may not be available at all sites, wireless backhaul will be a common option for small cells.


Edge computing will further support this near-user vision. Using off-the-shelf servers, and smaller data centres closer to the cell towers, edge computing can ensure low latency and high bandwidth. 


Infrastructure requirements of 5G (infographic)

“As latency requirements get lower and lower, it becomes more and more important to bring interconnection services as close to people and businesses as possible, everywhere. Latency truly is the new currency for the exciting next generation of applications and services” 

—Ivo Ivanov, CEO of DE-CIX International

What is network slicing?

The key innovation enabling the full potential of 5G architecture to be realised is network slicing. This technology adds an extra dimension by allowing multiple logical networks to simultaneously overlay a shared physical network infrastructure. This creates end-to-end virtual networks that include both networking and storage functions. 


Operators can effectively manage diverse 5G use cases with differing throughput, latency and availability demands by ‘slicing’ network resources and tailoring them to multiple users.
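
To make the idea concrete, here is a toy sketch of slice profiles, loosely modelled on the common eMBB/URLLC/mMTC categories (all numbers are illustrative), with a trivial selector matching a service’s latency need to a slice:

```python
from dataclasses import dataclass

@dataclass
class NetworkSlice:
    name: str
    max_latency_ms: float
    min_throughput_mbps: float
    availability_pct: float

slices = [
    NetworkSlice("enhanced mobile broadband", 20.0, 100.0, 99.9),
    NetworkSlice("ultra-reliable low latency", 1.0, 10.0, 99.999),
    NetworkSlice("massive IoT sensors", 100.0, 0.1, 99.0),
]

def pick_slice(latency_need_ms):
    """Choose the least demanding slice that still meets the latency need."""
    candidates = [s for s in slices if s.max_latency_ms <= latency_need_ms]
    return max(candidates, key=lambda s: s.max_latency_ms) if candidates else None

print(pick_slice(5.0).name)  # ultra-reliable low latency
```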

What is realistic progress for 5G in 2022?

Neon 5G text on black background

According to the California-based company Grand View Research, the global 5G infrastructure market size, valued at $1.9bn in 2019, is projected to reach $496.6bn by 2027.


There are, however, significant costs associated with 5G roll-out, as well as complications arising from planning regulations (in the UK alone, a separate planning application has to be filed for each small cell) and the need to allay public health fears about the technology.


There is also the ongoing issue of digital equality (conquering the digital divide). There is a risk the divide could widen further if 5G services are concentrated only in cities, as economics will almost certainly dictate.


The EU recently announced its Path to the Digital Decade, a concrete plan to achieve the digital transformation of society and the economy by 2030.


Read more about the Path to the Digital Decade.

“The European vision for a digital future is one where technology empowers people. So today we propose a concrete plan to achieve the digital transformation. For a future where innovation works for businesses and for our societies. We aim to set up a governance framework based on an annual cooperation mechanism to reach targets in the areas of digital skills, digital infrastructures, digitalisation of businesses and public services.” 

—Margrethe Vestager, Executive Vice President for ‘A Europe Fit for the Digital Age’

5G has been dubbed by some as the next industrial revolution. If all the technologies that it intends to drive are realised within the next decade, that could certainly be the case. What is achievable in the short-term, however, is less clear and progress could be slowed by infrastructural barriers and rising costs.


As we head into 2022, there needs to be significant work to upgrade legacy systems to integrate with the rollout of 5G, and an acceleration in laying fibre-optic cables to deal with the backhaul of data from the proliferation of 5G cells.


While 5G leads the technological improvement of the network, lowering latency at the network edge also needs to be a primary goal. Operators must treat latency as one element (albeit a key element) of a holistic strategy to improve the mobile internet experience, and measure progress against a robust QoE framework.

Contributors

Thanks to Ivo Ivanov, CEO of DE-CIX International; Charles Orsel des Sagets, Managing Partner, Cambridge MC; Eric Green, Senior Partner, Cambridge MC; and Tim Passingham, Chairman, Cambridge MC, who all made contributions to this article. Special thanks to Ivo Ivanov for his quotes.


Thanks to Karl Salter, web designer and graphic designer, for infographics.


You can find out more about Ivo Ivanov on LinkedIn and DE-CIX via their website.


Read bios for Charles Orsel des Sagets, Tim Passingham and Eric Green.

About Us

Cambridge Management Consulting is a specialist consultancy drawing on an extensive network of global talent. We are your growth catalyst, assembling a team of experts to focus on the specific challenges of your market.

 

With an emphasis on digital transformation, we add value to any business attempting to scale by combining capabilities such as marketing acceleration, digital innovation, talent acquisition and procurement. 

 

Founded in Cambridge, UK, we created a consultancy to cope specifically with the demands of a fast-changing digital world. Since then, we’ve gone international, with offices in Cambridge, London, Paris and Tel Aviv, 100 consultants in 17 countries, and clients all over the world.


Find out more about our SD-WAN and network architecture consultancy services.


Find out more about our digital transformation services and full list of capabilities.

