Cybersecurity Assessment and the Zero Trust Model


GBNEWS24 DESK//

Over the past few years, the concept of “zero trust” architecture has gone through several evolutionary phases. It went from being the hot new trend, to being trite (in large part due to a deluge of marketing from vendors looking to cash in), to passé, and has now settled into what it probably should have been all along: a solid, workmanlike security option with discrete, observable advantages and disadvantages that can be folded into an organization’s security approach.

Zero trust, as the name implies, is a security model in which all assets — even managed endpoints that you provision and on-premises networks that you configure — are considered hostile, untrustworthy, and potentially already compromised by attackers. Where legacy security models differentiate a “trusted” interior from an untrusted exterior, zero trust assumes that all networks and hosts are equally untrustworthy.

Once you make this fundamental shift in assumptions, you start to make different decisions about what, whom, and when to trust, and about which validation methods are acceptable to confirm that a request or transaction is allowed.
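As a minimal sketch of that shift (all names and policy checks here are illustrative, not drawn from the article), a zero-trust gate evaluates every request on identity, device posture, and transport security rather than on network origin:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool   # identity verified, e.g. via MFA
    device_compliant: bool     # endpoint posture check passed
    channel_encrypted: bool    # transport is encrypted end to end
    source_network: str        # informational only; never grants trust

def is_allowed(req: Request) -> bool:
    """Zero-trust check: every request must prove itself.

    Note what is absent: no branch grants access merely because
    source_network is "corporate-lan" -- location confers no trust.
    """
    return (req.user_authenticated
            and req.device_compliant
            and req.channel_encrypted)

# A request from the office LAN with no authentication is denied,
# exactly as one from the open internet would be.
print(is_allowed(Request(False, True, True, "corporate-lan")))   # False
print(is_allowed(Request(True, True, True, "public-internet")))  # True
```

The design choice worth noticing is that the network field exists only for logging: removing the implicit trust of "inside the perimeter" is the whole point of the model.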

As a security mindset, this has advantages and disadvantages.

One advantage is that it lets you apply security resources strategically where they are needed most, and it increases resistance to attacker lateral movement, since each resource must be compromised anew even after an attacker establishes a beachhead.

There are disadvantages, too. For example, policy enforcement is required on every system and application, and legacy components built around different security assumptions (for instance, that the internal network is trustworthy) may not fit in well.

One of the most potentially problematic downsides has to do with validation of the security posture: situations where the model must be reviewed by older, more legacy-focused organizations. The dynamic is unfortunate: the organizations likely to find the model most compelling are the same ones that, in adopting it, are likely to set themselves up for vetting challenges.

 

Validation and Minimizing Exposure

To understand the dynamic we mean here, it’s useful to consider the next logical step once zero trust has been embraced. Specifically, if you assume that all endpoints are potentially compromised and all networks are likely hostile, a natural consequence of that assumption is to minimize where sensitive data can go.

You might, for example, decide that certain environments aren’t sufficiently protected to store, process, or transmit sensitive data other than through very narrowly defined channels, such as authenticated HTTPS access to a web application.
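A narrowly defined channel of that kind might be enforced at an application gateway. In this hypothetical sketch (the path prefix, function name, and status choices are all assumptions for illustration), sensitive records are served only over authenticated HTTPS, and every other route to them is refused:

```python
SENSITIVE_PREFIX = "/records/"  # hypothetical path holding sensitive data

def gate(path: str, scheme: str, token_valid: bool) -> int:
    """Return an HTTP status code for a request arriving at the gateway.

    Sensitive data is reachable through exactly one narrow channel:
    authenticated HTTPS. Everything else is rejected outright.
    """
    if not path.startswith(SENSITIVE_PREFIX):
        return 200   # non-sensitive content: handle normally
    if scheme != "https":
        return 403   # plaintext transport: refuse rather than redirect
    if not token_valid:
        return 401   # unauthenticated: challenge the caller
    return 200       # authenticated HTTPS: the one permitted channel

print(gate("/records/42", "https", True))   # 200
print(gate("/records/42", "http", True))    # 403
print(gate("/records/42", "https", False))  # 401
```

Refusing (rather than redirecting) plaintext requests fits the zero-trust assumption: a request that arrived over an untrusted channel is treated as already suspect.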

Where heavy use is made of cloud services, it is quite logical to decide that sensitive data can be stored in the cloud, subject of course to access control mechanisms built explicitly for that purpose, and backed by security measures and operational staff that you could not afford to deploy or maintain just for your own use.

As an example, consider a hypothetical younger organization in the mid-market. By “younger,” we mean that only a few years have passed since the organization was established. Say this organization is “cloud native,” that is, 100% externalized for all business applications and architected entirely around the use of cloud.

