There is quite a buzz about Zero Trust (ZT) and Zero Trust Architecture (ZTA) in IT security today. You may be thinking that as a developer or architect, ZT isn’t something that concerns you directly. You’ll get directives from your boss (or your boss’s boss, or even worse, their boss) to build on some new platform or integrate with some new tool, and you’ll be able to check the boxes you need to check, hopefully without affecting your development schedules and workflows too much.
I’m here to tell you that there may be something for you in this ZT hype. If you can read the water, you may be able to ride this wave to accelerate the pace at which you deliver high-value software. With a less informed approach you may find yourself paddling against those same waves.
Why you should care about Zero Trust Architecture
Zero Trust doesn’t mean nothing is trusted, but it does mean all trust must be earned, or in a more technical sense, authenticated and authorized. Zero trust is most directly a reaction to problems associated with privileged locations on networks. For example, many internal services at both small and large companies are available when you are on a corporate network, and no additional authentication is needed to access these resources once you join the network. IT and application architectures of these sorts make lateral movement attacks especially dangerous.
The interesting bit for architects and developers is that ZT concepts intersect with two very important ideas that we have to contend with every time we design software systems: security and adaptability. Crucially, if a service has to prove its identity no matter where it runs, it should be able to run anywhere. This shift in mindset can enable architects to take advantage of the latest runtimes, services, and platforms to quickly develop high-value software and services. You shouldn’t be doomed to develop your next app behind the company firewall on outdated and locked-down VMs just because that’s where the data is. In a ZT framework, since every app takes on the responsibility of proving itself worthy of access, that app gains the freedom to run anywhere. In a sense, the apps are all grown up now: You can let them leave the nest of your corporate network.
How Ionic enables ZTA
Much of the early work in the ZT space has focused on network layer controls. Focusing on the network allows solutions to quickly throw a safety blanket across a broad swath of sensitive company resources. These are fantastic services and a great idea!
Ionic takes a different approach. Instead of focusing on the network, we provide developers and architects with tools and solutions to work with sensitive data and externalize authorization logic. These are some unconventional choices, so I’ll take a minute to explain the approach and why I think this approach may make sense as a ZT starting point for developers and architects.
First: Why focus on the data? In many cases, the data is what actually needs to be secured. Sometimes you need to secure an action against a service (e.g., an API call), but often you need to figure out how to securely interact with the sensitive data behind those APIs. The data could be sensitive because it contains financial details (PCI), medical details (HIPAA), education details (FERPA), proprietary company details, or for any number of other reasons. The data could be files on disk, records in a database, or messages on a queue. The rules and regulations are different in each case but the goal is the same: Make sure that only authorized users and applications can interact with the data. A data-first approach allows us to treat the data similarly regardless of where it exists: The security controls travel with the data instead of being dependent upon how that data is stored.
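To make “the controls travel with the data” concrete, here is a minimal sketch of the pattern. Everything in it is illustrative, not the Ionic API: the record carries its own classification metadata, and the key needed to read it is released only when policy allows. The XOR “cipher” is a stand-in for real authenticated encryption, and the in-memory key store is a stand-in for a key-management service.

```python
import secrets
from dataclasses import dataclass

# Hypothetical policy: classification -> roles allowed to read it.
POLICY = {"pci": {"payments-service"}, "public": {"payments-service", "analytics"}}
KEY_STORE: dict[str, bytes] = {}  # stand-in for a key-management service

@dataclass
class ProtectedRecord:
    key_id: str
    classification: str  # travels with the data, not with the network
    ciphertext: bytes

def _xor(data: bytes, key: bytes) -> bytes:
    # Toy cipher for the sketch; real systems use authenticated encryption.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def seal(plaintext: bytes, classification: str) -> ProtectedRecord:
    key_id = secrets.token_hex(8)
    KEY_STORE[key_id] = secrets.token_bytes(32)
    return ProtectedRecord(key_id, classification, _xor(plaintext, KEY_STORE[key_id]))

def unseal(record: ProtectedRecord, caller_role: str) -> bytes:
    # The decision uses metadata on the record itself, so the same check
    # applies in a data lake, on a queue, or behind an API.
    if caller_role not in POLICY.get(record.classification, set()):
        raise PermissionError(f"{caller_role} may not read {record.classification} data")
    return _xor(record.ciphertext, KEY_STORE[record.key_id])

card = seal(b"4111-1111-1111-1111", "pci")
print(unseal(card, "payments-service"))  # authorized role recovers the plaintext
```

Because the classification rides along with the ciphertext, moving the record to a new store or platform doesn’t change how it is protected.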
Focusing on data also permits more granular control. Individual data records returned from the same API endpoint (or from the same database table, or the same message queue) can vary greatly in sensitivity. Securing the API with a network overlay or proxy can’t account for this complexity since overlays don’t often have deep insight into the properties and context of the data they are securing. Look to the data catalog space for an extreme version of this data context problem: When organizations dump large amounts of data in a data lake, that data is often unusable until it is scanned and indexed so appropriate controls can be applied.
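A short sketch of that per-record granularity, under assumed labels and clearances of my own invention: two rows come back from the same hypothetical endpoint, but they carry different sensitivity labels, so each caller sees a different view. A network overlay in front of the endpoint could only allow or deny the whole response.

```python
# Hypothetical clearance model: caller -> sensitivity labels they may see.
CLEARANCE = {"support": {"internal"}, "auditor": {"internal", "restricted"}}

# Rows from one endpoint, each carrying its own sensitivity label.
rows = [
    {"id": 1, "label": "internal",   "note": "shipping delayed"},
    {"id": 2, "label": "restricted", "note": "fraud investigation open"},
]

def redact(rows, caller):
    # Per-record decision: keep the row, but mask fields the caller
    # isn't cleared for, instead of gating the endpoint as a whole.
    allowed = CLEARANCE.get(caller, set())
    return [r if r["label"] in allowed else {**r, "note": "[REDACTED]"} for r in rows]

print(redact(rows, "support"))  # same endpoint, different view per caller
```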
Second: Why externalize authorization? The idea of moving authorization control outside of the application may seem a bit strange until you think about a related problem. New applications rarely perform authentication from scratch — they tend to lean on authentication frameworks and tools like SAML, OAuth, Active Directory, and IdPs. These tools externalize identity from applications, minimizing user toil through features like SSO and providing the raw materials for resolving the credentials presented by a user or workload into an identity. Externalizing authorization confers similar benefits, which are magnified in highly regulated environments or in systems that work with very sensitive data.
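In code, externalized authorization usually looks like the policy decision point / enforcement point split familiar from systems such as XACML and OPA. The sketch below is a toy version of that pattern (class and rule names are my own, not Ionic’s): the application only asks for a decision, while the rules live outside the code where a security team could change them without a redeploy.

```python
class PolicyDecisionPoint:
    """Toy PDP: holds allow rules as (role, action, resource) triples —
    the kind of table a risk team could edit at runtime."""

    def __init__(self, rules):
        self.rules = set(rules)

    def is_allowed(self, role, action, resource):
        return (role, action, resource) in self.rules

# Rules are data, not application code.
pdp = PolicyDecisionPoint([("analyst", "read", "claims"),
                           ("adjuster", "write", "claims")])

def get_claims(role):
    # Enforcement point: no inline authorization logic, just a query.
    if not pdp.is_allowed(role, "read", "claims"):
        raise PermissionError("denied by policy")
    return ["claim-001", "claim-002"]

print(get_claims("analyst"))
```

Swapping the in-memory rule set for a remote policy service changes nothing in `get_claims`, which is the point: the decision logic can evolve independently of the application.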
As an architect and developer who has spent much of my career working with large sensitive datasets, what is most interesting to me about externalizing authorization is the level of control that I can push outside of the application to other decision makers in the organization. Giving security and risk teams the ability to monitor authorization workflows, and to change them in real time via policies that they design, means a lot of conversations about using an interesting data source can shift from “no” to “yes”.
What this means to you
Zero Trust has received a great deal of hype recently, and some of that is certainly merited. The shift from traditional perimeter-based security models to a “prove it” access model really does help reduce risk by hardening soft targets and mitigating common attack patterns. The space is still young, and the changes needed are comprehensive: I agree with others in the industry that there is still a long way to go to build up the tools, systems and best practices to get the most out of Zero Trust.
However, I believe that the rapid rate of change in high-value development platforms (cloud, containers, serverless), mixed with increasing focus on privacy and security (GDPR, CCPA), presents an opportunity that can be seized today. By adding controls in the right places, a Zero Trust architecture can provide the ability to move fast, securely — without over-coupling security to system runtimes and platforms. This is what I’m most excited about for Zero Trust architectures in the next few years. If this opportunity sounds interesting to you, check out our developer resources to learn more about how Ionic Machina addresses data security and externalized authorization and empowers you on a zero trust journey.
Timothy Van Heest, chief architect at Ionic Security, coordinates and prioritizes the work of the engineering organization. He is drawn toward difficult problems that stand in the way of people understanding and working with complex systems. Whatever the origin of these problems, his goal is to use processes and systems to allow capable people to interact confidently with these systems and do excellent work.
Part 2/3 Blog Series: Zero Trust