
Zero Trust for Developers: What Is It and Why Should You Care?

While traditional models for network security assigned trust based on location — anyone in the office was often trusted by default — Zero Trust models focus on users and context. Forrester now describes five pillars of the Zero Trust eXtended (ZTX) ecosystem: data, people, workloads, networks, and devices. The strategy places data at the center. As Ionic Security’s CTO Bill LeBlanc tells it, data “is the main driver for any security strategy. If in order to remain agile you must build security into your SDLC from the start, then you also have to make data – and data handling – a first-class citizen in that conversation.”

What does this mean for developers?

As a security strategy, adoption of Zero Trust has typically come from the top down. So, other than answers like “my boss told me to do it,” why should developers care? Quite simply, Zero Trust moves the responsibility for security from the network perimeter toward the data itself. Developers can no longer rely on a single API token that acts as both authentication and authorization. You now need to understand how to secure each and every stage of an interaction within the context of the request: the identity of the user, the state of the device making the request, the app being used, and the sensitivity of the data the request is trying to access.

I’ve seen it written that developers must then ensure their code implements the approved security policies that would allow, block, or restrict access to the data. The caveat is that policy in a real-world setting can be extremely complex. Policies are likely to change over time, and some of these changes can be significant as privacy initiatives like GDPR and CCPA have shown us. How will policy administrators work with developers to get policy changes approved and implemented? And how many times will you need to recode policy logic to address the next regulation that comes out? Clearly, hard coding policies directly into the application can lead to technical debt down the road, and it may not even be feasible in the first place. 

A better approach is to abstract policy from application code and centralize it at the policy decision or enforcement point. That is the role of authorization engines. Developers should be able to describe attributes that provide that full context around the data (identity, location, sensitivity, conditions, etc.), and allow the authorization engine, using rules managed and maintained according to the policies of the organization, to interpret and enforce access controls based on that context.
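To make the idea concrete, here is a minimal Python sketch under a few assumptions: the PolicyEngine class, its evaluate method, and the sample rule are hypothetical stand-ins for whatever authorization engine or policy decision point your organization runs, not any particular product’s API. The point is that the application only gathers context and asks for a decision; the rules themselves live with the engine.

    # A hypothetical policy decision point; in practice this is whatever
    # authorization engine your organization already runs.
    class PolicyEngine:
        def __init__(self, rules):
            self.rules = rules  # rules are managed centrally, not in application code

        def evaluate(self, context):
            # Placeholder logic: a real engine interprets centrally managed rules.
            return all(rule(context) for rule in self.rules)

    # The application only gathers context and asks for a decision; it never
    # encodes the policy itself.
    def can_read(engine, user, device, record):
        context = {
            "user": {"id": user["id"], "department": user["department"]},
            "device": {"managed": device["managed"], "ip": device["ip"]},
            "data": {"classification": record["classification"]},
            "action": "read",
        }
        return engine.evaluate(context)

    # Example: a centrally managed rule that blocks restricted data on unmanaged devices.
    engine = PolicyEngine(rules=[
        lambda ctx: ctx["device"]["managed"] or ctx["data"]["classification"] != "restricted",
    ])
    allowed = can_read(engine,
                       user={"id": "u1", "department": "finance"},
                       device={"managed": True, "ip": "10.0.0.5"},
                       record={"classification": "restricted"})

When the policy changes, only the rules handed to the engine change; the application code that gathers context stays the same.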

DevSecOps and Shift Left

“Shifting left” urges developers to build security into their design and code base from the start. In terms of Zero Trust for developers, this enables the transition from implicit trust to explicit verification, backed by strong identity and access management.

While there’s truth in that statement, it still doesn’t mean developers should hard code policy-based controls into their applications. As Bill further writes, “DevSecOps… should include removing the need for programmers to understand, code, test, and maintain data protection in silos. Don’t make your software engineers reinvent policy and data handling rules every single time, for every single application. Free them to focus on the features and functionality that are core to your business.”

Open Source

Zero Trust for developers also means not implicitly trusting the open-source solutions or third-party plugins that are so often needed to create modern applications. This shouldn’t mean avoiding open source completely, but it is important that developers know which components are being used and how they map to the five pillars. As always, developers should be familiar with secure coding best practices, and be willing to subject reused code to static analysis, pen tests, and the like.

Addressing Full Context

Full context allows you to limit access and apply compliance controls. In her informative blog series covering attribute-based access control (ABAC), Christy Smith writes that “ABAC goes beyond just users and roles to perform a comprehensive evaluation across multiple vectors: What type of data is it? Where is it located? Where is the subject requesting access located? How is access being requested? Over what network?”

Vectors to consider when getting context include:

  • Identity of the user
  • Identity of the device making the request
  • Data classification (e.g., sensitivity level, contains PII or PHI, etc.)
  • State of the client application (client type, app name and version, OS, etc.)
  • Environmental conditions such as the location of the requestor, the state of the device, and the time of the request

The ABAC standard categorizes these as follows (a short sketch after the list shows one way to represent them in code):

  • Subject: describes the user attempting the access (clearance level, department, role, job title, etc.)
  • Action: describes the action being attempted (e.g., read, delete, view, approve)
  • Object: describes the object or resource being accessed (e.g., the object type such as a medical record or bank account, the department, the classification or sensitivity, the location)
  • Environment: provides context such as time, location, or other dynamic aspects of access control
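Taken together, the four categories can be captured in a simple data structure. The AccessRequest dataclass below is purely illustrative, an assumed in-house representation rather than anything defined by the ABAC standard or a particular product:

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        subject: dict      # who is asking: clearance, department, role, job title, ...
        action: str        # what they want to do: read, delete, view, approve, ...
        object: dict       # what is being accessed: object type, classification, location, ...
        environment: dict  # conditions of the request: time, network, requestor location, ...

    request = AccessRequest(
        subject={"role": "nurse", "department": "oncology"},
        action="read",
        object={"type": "medical_record", "classification": "confidential"},
        environment={"time": "09:30", "network": "hospital-vpn", "location": "on-premises"},
    )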

Machina Tools

The data-security model implemented by Ionic Machina encourages developers to think about security in terms of identity and context rather than key management or cryptography. For the developer, this means working with attributes as metadata. Let’s look at how that might be accomplished using one of the SDKs.

Machina Tools — SDKs

Built on top of Machina REST APIs, Machina Software Development Kits allow you to build custom applications that can control access to sensitive data with a relatively small amount of code. With no background in cryptography, you can quickly add high-value security to existing applications without having to change existing program logic. Machina lets you control access to data using cryptographic keys and our just-in-time policy decision engine. 

Machina SDKs are available for Linux, macOS, and Windows and support Java, C++, C#, Python, and JavaScript. Each SDK exposes functions that enable secure communication between your application and Machina.

[Diagram: Machina delivers keys to the Machina SDK. Within the SDK, a Profile Persistor supplies the device profile to the Agent, the Agent passes keys to the Crypto component, and the Crypto component converts between plaintext and ciphertext.]
Zero Trust for Developers with Machina

At the center of the SDK is an Agent class that manages communication between your device or endpoint and the services provided by Machina. At the core is the notion of accessibility through user- and device-centric authentication and authorization workflows. To initialize the Agent, you will typically load a device profile that uniquely identifies your device. The profile contains the credentials your device needs to securely access your Machina tenant. Once the Agent is initialized, you can create and fetch keys and update key attributes. You can then use those keys to encrypt data if you choose.
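Conceptually, the flow looks something like the Python sketch below. To avoid misrepresenting the SDK, the MockAgent class is a toy stand-in written for this post; the real class, method, and parameter names come from the Machina SDK documentation.

    import secrets

    class MockAgent:
        """A toy stand-in for the SDK's Agent class, used only to illustrate the
        workflow. This is NOT the Machina SDK API."""

        def __init__(self, profile):
            self.profile = profile   # device profile with credentials for your tenant
            self._keys = {}          # the real service stores and governs keys centrally

        def create_key(self, attributes):
            key_id = secrets.token_hex(8)
            key = {"id": key_id, "bytes": secrets.token_bytes(32), "attributes": attributes}
            self._keys[key_id] = key
            return key

        def fetch_key(self, key_id):
            # In Machina, this is the step where policy is evaluated against the
            # full context of the request before the key is released.
            return self._keys[key_id]

    agent = MockAgent(profile={"device_id": "device-123"})
    key = agent.create_key(attributes={"classification": ["restricted"], "data-type": ["pii"]})
    ciphertext_key = agent.fetch_key(key["id"])   # released only if policy allows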

Gathering Full Context Through Metadata

Once your application is authorized, Machina captures data about the device: browser, operating system, IP address, location, and so on. In fact, Machina captures several metadata types that map to NIST and ABAC standards. These include:

  • Key attributes: information associated with the data (e.g., PII). ZTX pillar: Data
  • Data markings: a subset of key attributes. ZTX pillar: Data
  • Subject attributes: information about the user. ZTX pillars: People, Devices
  • Request metadata: associated with the key or device request. ZTX pillars: Data, People, Workloads, Networks, Devices
  • App metadata: associated with the application making the request. ZTX pillar: Workloads

Using this metadata, policy administrators can write policies in the Console that control who, when, where, and under what conditions an application (workload) has access to data. Alternatively, developers can author these policy rules using Machina APIs. When there’s an attempt to access data, Machina can factor in this metadata to make policy decisions that allow or deny access to the protected data. 

Key Attributes and Data Markings

Key attributes are defined by the developer and contain metadata about the use of a key. When you associate a key with data (e.g., a field, record, file, or blob), you are effectively using the key’s attributes as descriptors to “tag” that data.

Data markings are a subset of key attributes and are given a higher level of precedence in Machina. Data markings can be used in policy rules and provide detailed analytics within Machina Console. Once an attribute has been declared a data marking, Machina Console will present additional information about it, including the number of policies associated with the data marking and the values associated with the attribute’s name. Machina data markings are maintained in a centralized store of keys rather than in the many copies of protected data, an approach that offers substantial advantages over template-based data protection and key management architectures. The power of key attributes emerges when you specify access control policies that apply only to data with a particular set of markings. For example, you could create a single policy that covers both data classified as “restricted” and data classified as “confidential.”
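As a rough illustration (generic pseudodata, not Machina’s actual policy syntax), such a rule could be expressed like this:

    # A generic, illustrative rule, NOT Machina's policy syntax: one policy that
    # covers every key whose classification marking is "restricted" or "confidential".
    rule = {
        "description": "Sensitive classifications require an approved department",
        "applies_to": {"data_markings": {"classification": ["restricted", "confidential"]}},
        "allow_if": {"subject": {"department": ["security", "compliance"]}},
        "otherwise": "deny",
    }

Because the rule targets the marking rather than any specific file or record, it automatically applies to every piece of data tagged with those classifications.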

Subject Attributes

Subject attributes allow you to apply arbitrary name/value pairs to users and/or devices, so that these can be used in data policy. For example, a user could be assigned a risk_score: “high risk”.

Request and App Metadata

Request metadata provides context about the request: for example, source IP, client app, time of request, geolocation, and so on. Request metadata is not returned to the client when the key is fetched, and multiple keys requested in a single batch will carry identical request metadata. The policy engine can make allow/deny decisions based on request metadata attributes and values, and this metadata also appears in analytics related to the request.

Application metadata is generally programmer-defined and contains information about the client application making the request. Machina provides built-in attribute names for application-name and application-version. This information will appear in analytics when viewing the logs.

Machina Tools — APIs

Machina offers a collection of REST APIs that are secure, simple, and easy to use. They are the building blocks for all things Machina, including our SDKs, and they are first-class citizens within Machina: most tasks you can do in Machina Console can also be done using the APIs. That includes managing users, groups, and roles using familiar SCIM interfaces; creating policy rules; managing key attributes and data markings; downloading tenant metrics, event, and log files; and managing device requests. The APIs are fully documented in the Machina Tools — API reference.
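For instance, because the user-management APIs follow the SCIM standard (RFC 7644), looking up a user can be done with an ordinary HTTP call. In the Python sketch below, the base URL, path, and bearer token are placeholders rather than documented Machina endpoints; only the /Users resource and the filter syntax come from SCIM itself.

    import requests

    # Placeholders: substitute your tenant's actual API base URL and credentials.
    BASE_URL = "https://your-tenant.example.com/api"
    HEADERS = {
        "Authorization": "Bearer <api-token>",
        "Accept": "application/scim+json",
    }

    # Look up a user by email using the SCIM filter syntax.
    resp = requests.get(
        f"{BASE_URL}/scim/Users",
        params={"filter": 'emails eq "dev@example.com"'},
        headers=HEADERS,
    )
    resp.raise_for_status()
    print(resp.json())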

Implementing Zero Trust for Developers

Summary

Applications built within a Zero Trust framework can protect access to sensitive data even when perimeter controls like firewalls, intrusion prevention systems, and DLP tools fail to provide sufficient security due to misconfiguration or environmental drift. Capturing full context is critical to building policies that meet Zero Trust requirements. Developers should be able to describe attributes that provide that full context around the data, and allow the authorization engine to interpret and enforce access controls based on that context. Machina provides a rich set of tools that enable Zero Trust for developers: you can build applications that capture full context without having to understand, code, test, and maintain data security in silos, and without reinventing policy and data-handling rules for each and every application. That frees developers to focus on the features and functionality that are core to your business.