This dichotomy prompted me to consider where the leadership and demarcation of responsibility in this area should reside. In some respects, the U.K. strategy clarifies several pre-existing initiatives, such as the establishment of the NCSC. It suggests that whilst there is a differentiation between cyber protection for government and wider society, there are clear areas where the government should be involved, either to protect key assets such as energy and communications or to contain threat contagion.
Strategies of this scale are inherently complex and therefore easy to criticise. To its credit, the U.K. strategy begins by acknowledging the breadth of challenges and attempting to address elements within its control. It must, however, be kept updated with the fast-changing technology landscape. The globalised nature of that landscape makes the challenge all the starker – the people, devices, data, companies, traffic, and cyber threats it advises on are increasingly global, yet governments remain limited to their own spheres of influence.
Malicious actors have a broad range of motivations, from criminal gangs seeking financial rewards to states bent on using cyberwarfare to cripple enemy infrastructure. Whilst declarations of actual war garner the world's attention, cyberwarfare is ongoing, borderless, invisible, and causes damage in ways kinetic weapons cannot. With that in mind, do our reference points for traditional warfare map well onto cyber weaponry? There have been attempts to define the debate (if not the rules) for cyberwarfare, such as the Tallinn Manual and research from the United Nations Institute for Disarmament Research. But, as with many things in technology, the landscape is rarely static and frameworks must evolve quickly.
The other main difference is that the battlefield for cyberwarfare can extend to anywhere the internet touches, rather than some far-flung land seen only through the lens of news editors. It is shared by civilian and state actors, and indeed civilian computers may unwittingly contribute to an attack. The nature of this distributed threat means democratic governments do not control traffic, and therefore the onus for cybersecurity falls to many. It is sensible that governments play a role in defining minimum standards and policy frameworks, and should perhaps lead with best practices to protect public assets. National security and intelligence agencies also have a role in thwarting domestic cyber threats. Ultimately, the global nature of the threat requires intelligence sharing and the development of best practices among allies.
With incursions increasing in type and scale – from DDoS and voter interference to attacks on critical infrastructure and data exfiltration – continued adoption of these techniques by state-backed actors is nearly guaranteed. As attacks grow in sophistication and size, there is evidence that machine learning and AI will play an increasing role in both offensive and defensive operations. Companies can try to insure themselves against threats, but realistically this is fallible, reactive, and short-sighted. The arms race between adversaries has reached a point where it is prudent to collectively consider a different architecture to meet the threat. That architecture is known as zero trust.
Zero trust on the world stage
Zero trust is not a silver bullet. Even if it were, it would take years for all companies, governments, users, and OT/IoT technology to overhaul their networks. While adopting a new architecture is a good step forward, it must be combined with the removal of legacy technology. Otherwise, it merely adds complexity rather than improving security.
The point is that zero trust – with its granular, identity-based brokered access – is a realistic aspiration, and the tools exist today. It can be adopted for users, devices, and workloads in whatever environment they reside. It is no doubt a journey, but one that improves security posture and reduces the scope of attacks as it is implemented. Building perfect protection around users whilst neglecting OT, for example, leaves attack vectors open. But so long as your ultimate strategy is holistic, each progression is an improvement. Rather than succumb to inertia, organisations should take the first step.
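To make "granular, identity-based brokered access" concrete, here is a minimal conceptual sketch in Python. It is illustrative only, not any vendor's implementation: the policy table, user roles, and function names are hypothetical. The key idea is that every request is brokered individually and denied by default unless identity, device posture, and entitlement all check out.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    user_verified: bool      # identity confirmed, e.g. via MFA
    device_compliant: bool   # device posture: managed, patched, etc.
    resource: str

# Hypothetical policy table: which role may reach which resource.
POLICY = {
    "payroll-app": {"finance"},
    "build-server": {"engineering"},
}

# Hypothetical identity store mapping users to roles.
USER_ROLES = {"alice": "finance", "bob": "engineering"}

def broker_access(req: AccessRequest) -> bool:
    """Default-deny: every request must pass every check, every time."""
    if not (req.user_verified and req.device_compliant):
        return False
    allowed_roles = POLICY.get(req.resource, set())
    return USER_ROLES.get(req.user_id) in allowed_roles

# A verified user on a compliant device reaches only entitled resources;
# the same user on a non-compliant device is denied.
print(broker_access(AccessRequest("alice", True, True, "payroll-app")))   # True
print(broker_access(AccessRequest("alice", True, False, "payroll-app")))  # False
```

Note the design choice: there is no notion of a trusted network location. Trust is never inherited from a previous request, which is what shrinks the blast radius of a compromised credential or device.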
This is perhaps where governments can help. They can legislate baseline requirements for themselves, companies, and service providers in their sphere of control. There are also enough supra-national organisations to allow standards and co-operation on these frameworks. What we don't need is more government intrusion or backdoors, as these are inevitably used for perceived or actual nefarious ends.
Governments also have the ability, remit, and funding to go on the offensive. They are as much a target for other states or criminals as anyone – if not more so – but they can also respond to threats proactively or reactively. Deception technologies are an interesting option here. Whilst they don't overtly go on the offensive, they can entice and identify threat actors, then propagate the resulting blocks to themselves (and others), mitigating that threat vector.
There’s a role for government agencies on national and international levels to protect the integrity of society and to provide the frameworks in which businesses and individuals can have confidence. Cybersecurity is, without doubt, a shared responsibility, and therefore it's logical to pursue foundational architectures like zero trust. The U.S. acted boldly on this and, in my opinion, the U.K. missed an opportunity. Zero trust architecture offers a proven framework for scalable, always-on security regardless of where the user, workload, or device is located.
After all, you can’t attack what you can’t see.
This blog was originally published by CXO REvolutionaries here.
Written by Howard Sherrington, Director of Transformation Strategy, Zscaler.