Cyber Security

Security Audit & Maturity Assessment


Expert Assessments for Enhanced Maturity

Know thy enemy

Robust cyber security starts with knowledge of your vulnerabilities


It is vital to transform vulnerability awareness into a strategic advantage, so that your security posture is dynamic & responsive to the evolving cyber landscape.


Understanding your own vulnerabilities is akin to knowing your enemy and stepping into their shoes.


We can help by conducting a high-level audit of your business, striking a balance between risk, compliance, and the requirements of your business environment.

Cost-Effective Compliance & Enhanced Risk Management

When conducting security audits and assessments, there is an intrinsic link between understanding risk appetite, compliance needs, and business drivers. An objective, well-supported strategic cyber risk assessment can identify low-hanging fruit while also aligning your unique business goals with cyber security risks.


This step naturally progresses to a focused, in-depth discovery process, in which we meticulously identify and assess vulnerabilities through comprehensive testing.


By integrating a best-practice approach, organisations can pre-emptively address multiple compliance standards at once, reducing the overhead and complexity associated with meeting each regulation individually.
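To illustrate what addressing multiple standards at once can look like in practice, the minimal sketch below maps a small set of internal controls onto several frameworks, so each control is implemented once but evidenced against many regimes. This is an illustrative assumption only: the framework names and clause references are placeholders, not an authoritative mapping.

```python
# Illustrative control-to-framework mapping; clause references are assumptions for
# demonstration, not legal or compliance advice.
CONTROL_MAP = {
    "MFA enforced for all remote access": {
        "ISO 27001": "A.5.17",
        "NIS2": "Art. 21(2)(j)",
        "Cyber Essentials": "User access control",
    },
    "Tested backup and recovery plan": {
        "ISO 27001": "A.8.13",
        "NIS2": "Art. 21(2)(c)",
        "DORA": "Art. 12",
    },
}

def frameworks_covered(control: str) -> list[str]:
    """List the standards a single implemented control contributes evidence towards."""
    return sorted(CONTROL_MAP.get(control, {}))

for control in CONTROL_MAP:
    print(f"{control} -> {', '.join(frameworks_covered(control))}")
```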

Benefits of our model


Driven by expertise

Our team has the expertise, technical background, and real-world experience to deal with complex cyber security challenges.

Secure by design

We build security into your IT infrastructure and business processes from the ground up, concentrating on our six key areas.

Bespoke roadmaps

We develop tailored cyber security roadmaps that consider your specific business models, industry challenges, and risk profiles.

Client-centric model

We ensure close collaboration with our clients. This includes regular updates, clear communication, and flexible engagement models.

Thought leadership

We leverage advanced technologies like AI and automation to improve the efficiency and effectiveness of our solutions.

Training programmes

We offer innovative training and up-skilling programmes to foster a culture of cyber security awareness within organisations.

Ready to talk?

Our Cyber Security experts have decades of experience and champion a 'secure-by-design' approach to significantly improve your cyber resilience.


Get in touch today to discuss how we can help.

GET IN TOUCH

Digital Security in Numbers

$9.5tr


Cybercrime costs predicted for 2024.

$4.45m


The global average cost per data breach as of 2023; as so many incidents go unreported, this is just the tip of the iceberg.

$215bn


Projected global spending on cyber security by the end of the year.

"You might find that when someone installed the log-in software, they stored usernames and passwords in plain text in a log file on the server. Bingo! You pull the log file, and now you have the full kill chain."

Perlroth, Nicole. This Is How They Tell Me the World Ends. Bloomsbury Publishing.

Cyber Security insights


Silhouette of 737 plane in a neon sky
by Tom Burton 9 April 2025
What Problem do Too Many SaaS Providers Have in Common?

Many SaaS providers have a history of treating important safety and security features as something to upsell. This raises the important question of whether a software vendor has a moral responsibility for the secure operation of their solution. In this article, we explore the implications of treating important security and safety features as an upsell, using Boeing as a test case of where this can go wrong.

The Case of Boeing and the Aviation Industry

The case against Boeing is emblematic of a more systemic issue across the aviation industry, and many other industries. The public became aware of this issue under tragic circumstances when the Lion Air and Ethiopian Air Boeing 737 Max airliners crashed in 2018 and 2019 respectively. According to the widely quoted New York Times article, the crashes could have been avoided if the pilots had had access to two safety features that were sold by Boeing as optional extras.

According to the incident reports, at the root of the incidents were the angle-of-attack sensors. These mechanical sensors operate in a similar fashion to a weathervane, measuring whether the aircraft's nose is pointing above or below the direction of airflow. Being mechanical, they may be prone to malfunction, perhaps jamming after having been installed incorrectly, as was believed to be the case for the Lion Air aircraft. The system that led to the aircraft's demise, which identifies the risk of the aircraft stalling, only listened to one of the sensors. A difference in the signal being sent by the two sensors was not recognised by the anti-stall system, and the instruments that would have alerted the pilots to the conflicting signals were upsell items.

This wasn't a fancy, nice-to-have bell or whistle that makes the flight more comfortable, efficient, or profitable. It was an underlying safety feature of the aircraft. If there was no safety requirement for the redundancy of two sensors, it is difficult to see why there would ever be more than one. Boeing has now addressed the issue, and the anti-stall system listens to both sensors, responding safely in the event of conflicting signals. It should also be noted that the investigation identified pilot error and deficiencies in training that contributed to the disasters (and this will be relevant to our points regarding many SaaS product decisions as well).

The SaaS Parallels

Cloud-delivered Software as a Service (SaaS) has revolutionised the tech industry and catalysed a phenomenal level of innovation and growth. It has enabled new software capabilities to be brought to market faster than ever before, and facilitated the ability to reach a scale, with costs defrayed across multiple customers, that would have been unimaginable 30 years ago.

However, the benefits of being able to access a service from anywhere, at any time, by anyone also present significant risks. The 'anyone' can be a malicious party operating outside the reach of law enforcement or extradition. As a result, there are clear commercial responsibilities placed on SaaS providers to secure their infrastructure from attack, and those that do not are unlikely to last long in the marketplace. But just like the aviation industry, there are different flavours of security, and different perceptions of what is considered essential.

Taking due care and applying due diligence to ensure that the platform itself is adequately secured from a direct attack is clearly the vendor's responsibility – but what about those elements of security that relate to risk owned by their customers?

One key element of customer risk relates to the security of a user's password. It is their responsibility to make sure they choose a long and random string drawn from upper case, lower case, numerical, and special characters (if allowed). It is also their responsibility to ensure that they never use the same password for multiple applications or services. But we know that compromised credentials are a common failure mode. Just because it is the user's responsibility to mitigate this risk does not mean that system developers do not also share some responsibility for making it easier for the user to exercise that responsibility; controls have been developed specifically for that purpose.

The most obvious ones are Multi-Factor Authentication (MFA, or 2FA) and Single Sign On (SSO). With MFA, we improve the security of the credentials by also verifying that the user is in possession of their trusted device before we trust them at sign in. With SSO, we minimise the number of credentials and accounts to manage by federating with a single corporate account; we can then concentrate our effort on securing that corporate account rather than spreading our resources thinly. Both are relatively easily implemented these days, particularly in the case of SSO, where the OAuth protocols are widely offered by Identity Providers. Once implemented, both are essentially free to operate, particularly if MFA uses an Authenticator app rather than SMS text messages.

SaaS providers recognise that this security is important, and they will frequently implement MFA and SSO controls in their applications to meet that customer demand. But, too frequently, we see them offered only as part of the more expensive subscription options. This element of security is not enhancing the vendor's core proposition; it is not making their offering more functional, better looking, or more efficient for their users. It is just making it more secure, and therefore treating it as an item to upsell comes across as price-gouging rather than the responsible application of good security practice. It is almost as though these vendors have run out of innovative bells and whistles that their clients would value in their core product, so they have had to resort to undermining the security of their cheaper options in order to encourage their customers to pay for the more expensive ones. It is equivalent to a bank only using the CSC code on a card to secure transactions for customers who pay for their premium banking services, because, after all, it is the customer's responsibility to protect their card details.

Conclusion

What we have described here is not universal, and probably is not even representative of the majority of SaaS providers. But when you are reviewing a new service, we urge you to take a closer look at what security your provider is charging extra for. If low-cost, high-value security controls are being upsold, then you may want to consider what other security good practices are not being considered essential.

For more information about our cyber security consulting services and Secure by Design principles in action, please contact Tom Burton, Partner for Cyber Security, using the form below.
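As an aside on how lightweight the MFA control discussed above can be, the sketch below shows a minimal RFC 6238 time-based one-time password (TOTP) check using only the Python standard library. It is an illustrative outline under simplifying assumptions (SHA-1, 30-second steps, six digits), not production guidance; real deployments should also tolerate clock drift, rate-limit attempts, and protect the shared secret.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp_now(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # number of time steps since the epoch
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32: str, submitted: str) -> bool:
    """Constant-time comparison of a submitted code against the expected one."""
    return hmac.compare_digest(totp_now(secret_b32), submitted)

if __name__ == "__main__":
    demo_secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, for illustration only
    code = totp_now(demo_secret)
    print(f"Current code: {code}, verifies: {verify_totp(demo_secret, code)}")
```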
Binary code art installations - hundreds of numbers hanging from the ceiling
by Tom Burton 25 October 2024
Would you feel comfortable flying in an aeroplane designed by engineers who only considered what might go wrong after they had built it? 'Secure by Design' (SbD) is not a technology; it is a set of principles to be adopted to improve business risk and resilience. It has strong similarities to conventional engineering practices, and it will save money by reducing wasteful rework.

The critical first step is to understand the risks that the solution will be exposed to. Like Failure Mode Analysis in conventional engineering, these inherent risks form an essential part of the solution requirements. The design can then be a collaborative and iterative exercise of review and enhancement to meet the security requirements.

Effort spent defining requirements before design and implementation is widely recognised to save time and money. The situation is no different with security requirements, but there are wider benefits as well, compared to addressing security late in the lifecycle:

- Security controls applied after design and implementation are more likely to restrict functionality, undermining overall user satisfaction and the return on investment
- Early engagement reduces the risk of budget overruns, or of having to accept inadequate security if you cannot secure the budget
- A well-documented set of risks, security controls, and design decisions can follow the solution through implementation and into operations, enabling those making future changes to understand past rationale
- Above all else, late identification of risk and security requirements causes wasteful rework of the solution, which costs time and money

The key to success is defining the system scope correctly. If the scope is too great and encompasses a number of separate systems, then the benefits are eroded and the exercise becomes more akin to a homogeneous enterprise risk assessment. If the scope is too small, the number of systems becomes unwieldy and unsustainable to assess and manage.

It is not a Technology, and it is not New

Despite what you might believe from some of the cyber tech product sheets, SbD is not a technology (for that matter, Zero Trust, which we see as a valuable component of SbD practice, is not a technology either). It is a philosophy or strategy, a set of principles that bring efficiency, consistency, and discipline to cyber risk management. You may find tools that help you to adopt these principles, and the practice requires a sound understanding of technology, but above all SbD is a human endeavour.

Like many other buzzwords in the security community, SbD is frequently presented as something rather mystical, requiring specialist knowledge and attracting a new set of standards and vocabulary. We don't hold with this concept; in our view, it 'does exactly what it says on the tin'. It is about ensuring the system's very design enforces security and mitigates risk rather than relying on sticking plasters applied after implementation. Whether those design features are preventative controls, controls to detect and respond to issues, or any other category, they will have been defined and tuned to the specific risks and characteristics of the solution in advance (and managed through life).

The concept is not new. The benefits of early security engagement have been known for some time, but sadly they have been frequently ignored. As the cyber security industry matures, and the frequency and impact of cyber attacks on businesses increase, the call for this discipline has been growing. Governments are starting to mandate it in the standards and security governance of technology programmes.

The Similarities between Digital and Conventional Engineering

Most engineering lifecycles, not just those related to digital solutions, recognise the importance of spending adequate time defining the requirements. At the start of the programme, the level of uncertainty will be at its greatest. The purpose of Requirements Engineering is to reduce that uncertainty so that design and implementation can proceed with direction, and to minimise the number of 'wrong turns' that have to be unwound. If you do not reduce uncertainty as early as possible, the problems grow as they move downstream, and solving them becomes a disheartening exercise in 'pushing water uphill'.

Let us imagine that we want someone to build us a house. We go to our local house-building company and commission the job; if they start immediately, the chances of the end result being anything like what we originally wanted are almost zero. Where do we want our home located? How many bedrooms, bathrooms, and living rooms? What architectural style? What about the fixtures and fittings? We will identify everything that is wrong once the sub-optimal, ill-thought-out building is completed for our inspection. Putting those things right at this stage will cost orders of magnitude more than they would have with an effective design phase. Worse, there will be many issues that we cannot put right without starting again, and, therefore, we will be left living with a flawed and compromised solution.

Where do we Start?

So, how do we identify the security requirements for the design? What is Requirements Engineering in a security context? The security requirements are defined by the risks that the solution will be exposed to. One of the most important SbD principles emphasises this by stating that you must 'adopt a risk-driven approach'. These risks, and your organisation's appetite to accept risk, determine the requirements for controls; or, to put it another way, the controls are required to mitigate the risk to a level that is within your organisation's appetite.

Again, there are similarities with conventional engineering. Understanding the risks that the design must treat is similar to identifying the Failure Modes of an aircraft or other system. The risks need to be articulated so that all stakeholders can understand them, including the non-technical and non-security communities. Getting all stakeholders to sign off on these inherent risks is crucial to ensure that everyone recognises the constraints the solution will be confined by. If you do not have a sound understanding of the risks before work starts on the design, let alone the implementation, then you are lacking an essential part of the solution requirements.

Review, Collaborate, and Iterate

Once you have the security requirements, you can feed them into the design process just as you would functional requirements. Selecting appropriate controls to meet the requirements will undoubtedly require some specialist expertise. However, this is similar to the requirement for technical architects to be familiar with the technologies employed in the solution stack.

This design process should be iterative. Requirements change, frequently due to learning in one iteration providing feedback into the next. The security requirements may influence the architectural approach to fulfil the functional requirements. Occasionally, a complete rethink may be required to adjust the functional requirements to meet the security constraints while also meeting the business needs. However, as with the house-building analogy above, the time spent optimising the design will be significantly less than the time, cost, and disruption caused if security is addressed later in the lifecycle.

Each iteration takes the proposed design, reviews the inherent risks to identify any that can be retired or whether new ones have been created, assesses the residual risk given the existing security controls, and identifies additional security controls to reduce the residual risk to an acceptable level. Done collaboratively, this introduces fast feedback into the design process, and, over time, the technical architects will become more familiar with security issues and their resolutions.

Zero Trust's Role in the Exercise, and Scope Definition

Zero Trust is another trending buzzword frequently camouflaged with mystique, or hijacked as a 'feature' on product sheets. My view on Zero Trust is similar to my view on SbD: it should be easy to understand, and it 'does exactly what it says on the tin'. In design and in operations, we start from the baseline that nothing is trusted. Whether it is digital identities, devices, applications, or services, we can only trust them once we have an objective and explicit reason to trust them.

We use the principle of Zero Trust extensively when applying SbD. By having no implicit trust in any identity, device, or service, we can decide on the minimum level of trust we need to enforce and the maximum level of trust that the entity can offer. If the maximum trust on offer is less than the minimum trust we need, then there is a design decision to be made about how we close the gap. It may be necessary to reduce functionality in order to reduce the required minimum, or we may need to put in place other compensatory controls to reduce the risk in other ways.

Defining an appropriate scope for the system is key to success. If you set the scope too large, then everything is inside the 'circle of trust', and SbD becomes a homogeneous exercise in enterprise security. If you set the scope too small, then you will drown under the sheer quantity of projects to manage.

The World is not a Greenfield Site, and Security is not a Fire-and-Forget Weapon

The world is not a greenfield site, and there will be challenges retrofitting an SbD approach to the broad portfolio of legacy solutions. There is no simple or quick solution to this; it will be a case of progressively revisiting each project's architecture and identifying the changes that will make it secure by design. But risk can help us here too. Some projects or services will be sufficiently low-risk that they can be tolerated until they are retired (so long as they are not trusted by any other, more important system). The SbD approach lends itself well to a progressive rollout. SbD will limit the negative impact that a legacy system can have on a target system, because nothing outside of a project's scope is implicitly trusted. You can only aim for a perfect world by progressively taking steps to make it a better world.

In this article, we explain why risk management needs to be addressed at the design phase of projects. This does not mean that we believe this is the end of the journey. Security and risk still need to be managed in operations as new threats change the risk profile, or as change is applied to a system. But with the foundations laid early in the lifecycle, the task of management through life becomes easier. The documentation generated by SbD should provide clear traceability between risks and controls. When a project is reviewed in life, the rationale behind previous decisions can be clearly understood, enabling change to be an informed process.

Summary

This article outlines why I believe applying the principles of Secure by Design prevents issues from reaching operations, and saves time and money. If what I have described already seems obvious, then that is positive. However, from my experience, too many projects do not consider security to be an essential component of design. I believe that this is a missed opportunity: when applied correctly, Secure by Design delivers solutions that are more secure and easier to manage.
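To make the 'minimum trust required versus maximum trust offered' comparison described above concrete, here is a minimal sketch in Python. The trust levels, entity names, and thresholds are illustrative assumptions rather than a prescribed model; the point is only that the gap can be evaluated explicitly and turned into a recorded design decision.

```python
from dataclasses import dataclass

# Illustrative trust levels, ordered from least to most trusted (an assumption, not a standard).
TRUST_LEVELS = ["untrusted", "authenticated", "managed_device", "hardware_attested"]

@dataclass
class Entity:
    name: str
    max_trust_offered: str  # the strongest assurance the entity can demonstrate

def trust_gap(entity: Entity, min_trust_required: str) -> int:
    """Positive result = a gap to close (reduce functionality or add compensating controls)."""
    required = TRUST_LEVELS.index(min_trust_required)
    offered = TRUST_LEVELS.index(entity.max_trust_offered)
    return required - offered

if __name__ == "__main__":
    legacy_service = Entity("legacy-reporting-api", max_trust_offered="authenticated")
    gap = trust_gap(legacy_service, min_trust_required="managed_device")
    if gap > 0:
        print(f"{legacy_service.name}: trust gap of {gap} level(s) -> design decision needed")
    else:
        print(f"{legacy_service.name}: offered trust meets or exceeds the requirement")
```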
A military helmet with a cyber overlay
by Tom Burton 4 October 2024
Our demographics and the moral value that we place on life as a society mean that our military must rely on technology to an ever increasing degree in order to exploit its advantages. However, the increased dependence on support from suppliers transforms the supply chain into an extended part of the networked battlespace, and thus its security and resilience have become a critical concern.

Capabilities with a Competitive Advantage also Bring New Vulnerabilities

In general, any new capability that has given our military a competitive edge has also brought with it new vulnerabilities. A recent example is the introduction of GPS and other navigation systems. When these became widespread in the 90s, the risk of getting 'geographically embarrassed' was reduced, and members of the army bought consumer GPS receivers for personal use on exercise and operations. This capability represented a significant advance, and the joke that 'the most dangerous thing in the combat zone is an officer with a map' became less relevant. However, skills like map reading need to be learned and practised, and the more we rely on technological aids, the more that muscle memory atrophies. How many of us follow phone directions, only to realise we haven't learned the route and have no feel for the environment we have just travelled through?

This effect is organisational as well as individual. The drive for efficiency through digital transformation has led to fragility, with the loss of capacity and capability when digital services are disrupted. The global IT failure caused by CrowdStrike overnight on 18-19 July demonstrates this clearly; in just a few hours, a software update crashed 8.5m computers globally, severely disrupting banks, airlines, rail services, healthcare, and other critical services. Maintaining full capacity in a reversionary mode is not economically viable once core business processes have been digitally optimised. However, reducing the likelihood and impact of a systemic incident like this requires systems to be designed with resilience from the outset.

Good Cyber and Data Security is about Much More than Preventing Data Leaks

It is natural to assume that maintaining data security is primarily about preventing someone from stealing confidential information. Granted, this has been an important consideration since spies first operated; it is why we classify and compartmentalise information. However, confidentiality is only a part of the problem. If we look back at the trends over the last decade, many of the most damaging attacks have been ransomware. In these incidents, the attackers deny their victims the ability to access their own information until they pay a fee.

It is also vital to ensure that information is not modified covertly. It is an intriguing aspect of human nature that people frequently assume the information presented on a computer is completely accurate, when they would not have the same trust in information provided by a human. When serving, I saw staff officers assume that a unit's location displayed on a digital map was accurate to within metres and always up to date. They knew, though, that the underlying information had been reported by a human to another human, over the radio, sporadically, and as an approximate six-figure grid reference. That instinctive belief in digital accuracy contrasts with the physical map table, where the information was recognised as inherently vague and out of date. Protecting the availability of information and preventing its modification is just as important as preventing it from falling into the wrong hands.

Why do we need to care? What is the threat?

What must we protect to preserve our fighting power and freedom of manoeuvre on military operations? How could malicious actors undermine military capability? We first need to step above the world of 'bits and bytes' and consider what maligned intents might target us. The following are just a few examples, but they illustrate that the systemic nature of our digital landscape makes the risks far more complex and nuanced than they first appear.

Espionage

Espionage is as old as human conflict. Two and a half thousand years ago, Sun Tzu wrote a whole chapter on the importance of espionage and the use of spies. It is practised across all contexts, from the grand strategic and political levels down to the compromise of tactical communications and devices. Espionage is also rife across the defence industrial base, used to gain insight and intellectual property about future weapon systems so that they can be countered and copied.

Capability Denial

Even with Mission Command to empower and delegate, any operation relies on the efficient flow of information and commands to exploit opportunities and achieve the desired effects. This makes Command and Control capabilities a ripe target. One hour before Russia launched its full-scale invasion, it attempted to disrupt Ukraine's C2 capabilities by executing a cyber-attack on the communications company Viasat. Disruption of communications bearers is an obvious approach, but a widespread attack on networked computers would be more complicated to recover from. And, as we realise the vision of an 'Internet of Military Things', described recently by the UK Chief of the General Staff, by networking all elements of battlefield equipment, digital denial could extend across those platforms, disrupting intelligence, logistics, mobility, and fires.

Subversion & Deception

Subversion and deception are already directed at our personal lives; phishing attacks, spoofed websites, fake news, trolls, and bots all attempt to manipulate the way we think and act. A notable case involved an AI-generated deep-fake of a company CFO on a video conference call, leading to criminals defrauding Arup, a UK engineering firm, of HK$200m (US$25m). It may be a while before we see Microsoft Teams in the trenches, but reachback from formation headquarters to the home base is nothing new. Are we prepared for remote support into theatre, provided by partners and suppliers, being used as a vector to conduct highly realistic live deception and socially engineered attacks like the one Arup experienced?

Degradation of the Moral Component

The moral component – the ability to get people to fight – is pre-eminent among the three essential elements that make up fighting power according to UK defence doctrine. Many things influence it, but a sense of confidence in the security and wellbeing of a soldier's family at home is a key one. What if the family at home couldn't access money because the military payroll system had been attacked? How quickly would force motivation and cohesion on operations deteriorate?

What is Being Done, and What More Should We Do?

The UK government has recognised the threats and risks for some time, and it has done a lot to reduce them. Cyber security has been recognised as a fundamental part of national security for over a decade, with the Defence Industrial Sector identified as critical national infrastructure. The Ministry of Defence's (MOD) recent shift in governance policy to demand that systems are Secure by Design, and that a programme's Senior Responsible Officer takes ownership and responsibility for risk, is significant progress.

However, threats and risks are not static. Foreign state hacks, both covert and overt, have risen with geopolitical instability. In its most recent annual review, the National Cyber Security Centre specifically described the intensity and pervasive nature of the cyber threat from Russia. Cyber-attacks against our information, digital services, and infrastructure will be a core component of any hybrid war, not least because of their deniability. We can see this today with attacks that closely correlate with the Kremlin's interests and motivations, such as the recent attack by Russian hackers on NHS partners in London.

Networks are only as strong as their weakest link. For some time, the defence 'network' has spanned the wider defence enterprise, which extends deep into the supply chain. Our need to maintain technological advantage and agility means we will need to source innovation far beyond the traditional Defence OEMs, and we will need to get updates into theatre quickly and frequently. This makes the supplier of a digital 'widget' part of the operational network, even if they are not connected to it. So, the extended network is expanding and becoming increasingly operationally critical, and the capabilities and motivations of the geopolitical threats we face are evolving. What was adequate five years ago is unlikely to be sufficient for the next five. There are many steps that can be taken to respond to this change, and the following three focus on resilience in the extended defence network.

Threat Escalation Contingency Planning

All networks have non-critical capabilities that deliver softer benefits and efficiency. However, every piece of software, network segment, or service presents a part of the surface that can be attacked. When the threat escalates, we can reduce our attack surface by pre-emptively switching off non-core services and further segmenting critical capabilities, all at the expense of efficiency. There is evidence that Ukraine's resilience in the face of Russian cyber-attacks in 2022 benefitted from this preparation. Preparing and testing these measures takes time, and imposing them on suppliers will also have commercial consequences.

Enhanced Continuous Supplier Assurance

Supplier assurance for cyber risk has been an element of MOD risk management for some time, albeit the tools to facilitate it have been limited since the Octavian Supplier Cyber Protection Service was retired without replacement in 2021. However, when the scope of the networks at risk increases and the threats evolve, we need to change our posture. This will affect which suppliers we focus on, the questions we ask, and the standards we expect. Assurance needs to be flexible and dynamic; threat changes may require targeted or widespread reviews at short notice, with commercial as well as practical implications.

Cyber Stress Testing

The Bank of England introduced its Critical National Infrastructure Banking Supervision and Evaluation Testing (CBEST) in 2014 to assure operational resilience in the UK financial sector. Implementing the Defence equivalent of CBEST would take significant time and effort to deliver results. However, without this type of activity, there is insufficient objective evidence that risk and resilience are tolerable.

Conclusions

Our demographics and the moral value we place on life as a society mean our military's ability to deter and, if necessary, defeat a belligerent nation-state will rely on it exploiting technological advantage. The evolution of conflict in Ukraine also demonstrates that industry will need to be able to deliver digital enhancements to that technology rapidly into theatre to maintain an advantage. But this introduces vulnerabilities well beyond the boundaries of Government departments and their Tier 1 suppliers. If the enemy can exploit these vulnerabilities, the impact would be significantly greater than the equivalent several decades ago. The increased dependence on agile reachback support from suppliers makes the supply chain an extended part of the networked battlespace, and its security and resilience are critical components of the risk calculus.

A lot of progress has been made over the last ten years. But this period has also demonstrated what we should expect a cyber-capable adversarial state to do against us. To prevent and, if necessary, prosecute a war in the future, we need not just to maintain but to significantly enhance our management of risk in the defence supply chain.

To find out more about our Cyber Security services and security philosophy, check out our service page. To contact Tom Burton and arrange a free consultation, use the form below or email Tom at tburton@cambridgemc.com.
Aerial shot of city with a triangle shaped roof terrace in the centre
by John Madelin 17 June 2024
What are NIS2 & DORA?

The Network and Information Security (NIS) Directive is an EU directive which sets out a baseline of cyber security measures required of all Member States and organisations within them, as well as those with or seeking to establish a footprint in Europe. In 2022, the Official Journal of the European Union published updates to this Directive as NIS2, which made the requirements more stringent while broadening the scope of who they apply to. One of these amendments differentiated between entities deemed 'important' and 'essential', whereby the latter, which includes Banking and Finance, will be subject to closer scrutiny and greater penalties regarding their compliance with NIS2 – or lack thereof.

This level of regulated scrutiny will also be heightened by a further EU regulation, the Digital Operational Resilience Act (DORA). Similar to NIS2, DORA is described as establishing a 'comprehensive framework for harmonising digital resilience processes and standards'. However, where NIS2 applies across a broad range of sectors within the EU, DORA is specifically designed to 'strengthen the resilience of digital operations in the financial sector'. Thus, though they account for similar processes and practices, as we shall outline, the emergence of NIS2 and DORA means there are at least two sets of cyber criteria which financial entities must comply with, not only to avoid legal penalty, but to remain robust in an increasingly dangerous digital environment.

NIS2 and DORA are scheduled to take effect on 17 October 2024 and 17 January 2025 respectively, and it is important to understand both in order to ensure that your business is compliant with their requirements.

NIS2 Requirements

Chapter 4 of NIS2 requires that all Member States of the EU ensure that all of their essential and important entities 'take appropriate and proportionate technical, operational, and organisational measures to manage the risks posed to the security of network and information systems'. By 'appropriate and proportionate', NIS2 directs all such entities to adopt an 'all-hazards approach', by which it refers to a baseline set of requirements including:

a. Internal Security Policies: Develop and enforce essential policies that ensure robust internal security practices.
b. Incident Handling: Establish tested protocols to effectively respond to and manage security incidents.
c. Backup Management & Disaster Recovery: Ensure reliable backup solutions and disaster recovery plans to safeguard data integrity and ensure continuity.
d. Supply Chain Security: Define mutual responsibilities with partners and map connections and dependencies to avoid the cascade effect of major incidents.
e. Information Security Maintenance: Ensure the security of your network, including vulnerability handling and disclosure.
f. Ongoing Assessment: Continuously assess, update, and monitor information security measures to keep pace with the current threat landscape and evolving threat actors.
g. Cyber Security Hygiene & Training: Practise basic cyber hygiene and provide continuous cyber security training that promotes best practice among employees; many successful attacks remain basic and repeated.
h. Cryptography & Encryption: Establish policies on the use of cryptography and encryption, including quantum-ready cryptography, a subject of other evolving regulations.
i. Human Resources Security: Implement thorough background checks and enforce security protocols for all personnel.
j. Multi-Factor Authentication: Enhance access control through the use of multi-factor authentication; weak or absent authentication is a recurring feature of successful cyber incidents.

DORA Requirements

DORA is considered a lex specialis for financial sector entities, meaning that, where it possesses overlapping or shared regulations and principles with NIS2, DORA takes precedence. Thus, though it is still important to remain aware and informed regarding NIS2 and its requirements, it is more important to be equipped with an acute understanding of DORA.

DORA requires that all financial entities be equipped with an 'internal governance and control framework' designed to strengthen their cyber defences, particularly in regard to the transfer of data, the risk of corruption, confidentiality and loss of data, and protection from human error. In order to ensure this, DORA insists upon the implementation of the following processes:

a. An information security policy with clearly defined rules to protect the availability, authenticity, integrity, and confidentiality of data.
b. A sound infrastructure management structure which makes use of appropriate techniques and mechanisms, such as those which isolate affected assets in the event of a cyber attack.
c. Policies which limit physical access to information assets and ICT assets to what is legitimate and approved.
d. Protocols for strong authentication mechanisms based on relevant standards and systems, including the use of encryption.
e. Controls for ICT change management to ensure that any changes are recorded, tested, assessed, approved, and verified.
f. Appropriate policies for patches and updates.

Implications for the Finance Sector

Both NIS2 and DORA may appear to establish relatively basic levels of cyber security awareness and defence; however, it is important that they are properly implemented and strengthened within your operations. This is partly due to the financial and reputational losses that can and will impact your organisation in the event of a cyber security breach. In considering financial entities to be essential, NIS2 makes them liable to a fine of up to €10m or 2% of their annual turnover, whichever is higher. Similarly, DORA penalises non-compliance with a daily fine of up to 1% of the average daily worldwide turnover of the financial entity until compliance is restored.

Furthermore, the reporting obligations of both Acts pose significant and specific considerations for financial entities, based on how and when an organisation should bring awareness to a potential or recent cyber security breach. DORA's Article 10: Detection requires that financial entities 'have in place mechanisms to promptly detect anomalous activities', and expands the reporting process in Article 17: ICT-related incident management process to ensure that 'major' cyber security incidents are reported to the appropriate management bodies in order to enact mitigation and prevention procedures. Similarly, NIS2's Article 23: Reporting Obligations requires that all essential and important entities promptly identify and report any 'significant' cyber security breach or incident to their representative computer security incident response teams (CSIRTs). There are two main indicators which make an incident 'significant' under NIS2: one is that it has affected or caused damage to other entities or persons; the second is that 'it has caused or is capable of causing severe […] financial loss for the entity concerned'.

This is particularly pertinent for organisations which, by nature and definition, handle and advertise the possession of large amounts of money, a consideration which DORA, as an Act specific to the financial sector, highlights. In its classifications of ICT-related incidents which financial entities should use to determine their impact, DORA specifies 'the criticality of the services affected, including the financial entity's transactions' as well as 'the economic impact, in particular direct and indirect costs and losses'. Thus, it is crucial for financial organisations to ensure that their operations are properly protected against cyber threats, and that they have airtight contingencies and reporting protocols in place in case they are breached.

Finally, it is important to establish clear accountability within your organisation. NIS2 makes it clear that the responsibility for the approval, delivery, and maintenance of an essential entity's cyber security risk-management measures rests with the management bodies of the entity. This includes coordinating cyber security training and the provision of 'sufficient knowledge and skills to enable them to identify risks and assess cybersecurity risk-management practices'. DORA is even clearer in this regard, specifying that the management body of the financial entity shall 'bear the ultimate responsibility for managing the financial entity's ICT risk'. Thus, the stakes are higher for executives and C-suite professionals to ensure compliance, as they will be the ones held accountable for breaches and attacks.

How Cambridge MC can Help

Whether your company is based primarily inside or outside the EU, it is crucial that your organisation complies with NIS2 and DORA by the end of the year if you have any entities or subsidiaries in, or currently conduct or plan to conduct work in, any EU Member State. In any case, NIS2 and DORA represent sets of guidelines on cyber hygiene that your organisation would only be strengthened by internalising. This is particularly salient in a regulatory culture which is increasingly prioritising and scrutinising cyber security. As of April this year, the UK Government has implemented minimum security standards to protect consumers and businesses from cyber attacks, including a ban on easily guessable default passwords; regulations which, like NIS2 and DORA, are seemingly basic yet carry significant penalties for non-compliance.

At Cambridge Management Consulting, we have a team of experienced Cyber Security professionals with decades of combined practical experience in the field, as well as detailed and up-to-date knowledge of all relevant regulations and principles. To prevent your organisation from being left behind or penalised for a lack of cyber maturity, contact our cyber team to understand your pain points and vulnerabilities; we will work with you to construct, assess, and deliver a comprehensive strategy to resolve them.

Contact John Madelin, our Managing Partner for Cyber Security, or learn more about our Cyber Security capability here.
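As a rough illustration of the penalty ceilings cited above (the higher of €10m or 2% of annual turnover under NIS2, and a daily fine of up to 1% of average daily worldwide turnover under DORA), the sketch below runs the arithmetic for a hypothetical turnover figure. The turnover value and the 365-day averaging are assumptions for illustration only, not regulatory guidance.

```python
def nis2_max_fine(annual_turnover_eur: float) -> float:
    """NIS2 cap for essential entities: the higher of EUR 10m or 2% of annual turnover."""
    return max(10_000_000.0, 0.02 * annual_turnover_eur)

def dora_max_daily_penalty(annual_worldwide_turnover_eur: float) -> float:
    """DORA periodic penalty: up to 1% of average daily worldwide turnover, per day."""
    average_daily_turnover = annual_worldwide_turnover_eur / 365  # simple averaging assumption
    return 0.01 * average_daily_turnover

if __name__ == "__main__":
    turnover = 2_000_000_000  # EUR 2bn annual turnover, purely illustrative
    print(f"NIS2 maximum fine:       EUR {nis2_max_fine(turnover):,.0f}")
    print(f"DORA maximum daily fine: EUR {dora_max_daily_penalty(turnover):,.0f}")
```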
SEE MORE INSIGHTS

Our Cyber Security practice is led by Tom Burton

Partner - Cyber Security

Tom Burton is a cyber professional with over 20 years of experience in business, IT, and security leadership roles. His expertise lies in simplifying complex security problems and enhancing cyber security and efficiency across various industries such as Defence, Aerospace, and Pharmaceuticals. His approach is based on applying engineering principles to deliver sustainable business change.


Tom's career highlights include serving as a Commissioned Officer in the British Army, where he was promoted to CIO. He later joined Detica (now BAE Systems Applied Intelligence) as the Strategic Advisor to the Ministry of Defence CIO, overseeing a multi-billion portfolio of IT-enabled, benefits-driven change programmes. He also held the position of Global Head of Managed Security Services, growing the business from sub-£1m to more than £15m in orders.


In 2014, Tom moved to KPMG UK as Director for Cyber Security, responsible for selling and delivering business across various sectors. He co-founded Cyhesion in 2017, developing a SaaS platform to disrupt the Third-Party Risk Management market. Most recently, he founded Digility in 2022 to deliver security and digital transformation consultancy and interim management, serving as Interim CISO at a Tier 1 Outsourced Service Provider.

Our team can be your team


Our team of experts has decades of experience across many different business environments and geographies.


We can build you a specialised team with the skillset and expertise required to meet the demands of your industry.


Our combination of expertise and an intelligent methodology is what realises tangible financial benefits for clients.

SPEAK TO THE TEAM

Our Cyber Security Experts

Get in touch with our Consultants today


We are a highly collaborative team of senior-level executive professionals able to adapt to any challenge, however niche or complex.

+44 (0)1223 750335

info@cambridgemc.com


Case Studies


Our team has had the privilege of partnering with a diverse array of clients, from burgeoning startups to FTSE 100 companies. Each case study reflects our commitment to delivering tailored solutions that drive real business results.

CASE STUDIES

A little bit about Cambridge MC


Cambridge Management Consulting is a specialist consultancy drawing on an extensive global network of talent. We are your growth catalyst.


Our purpose is to help our clients make a better impact on the world.

ABOUT CAMBRIDGE MC