‘Zero Trust’ is quite the buzzword in the IT world. The model was first introduced – or rather, defined – by John Kindervag, then a Principal Analyst at Forrester Research, back in 2010.
It gained wider recognition after the 2015 Office of Personnel Management data breach, one of the worst data breach incidents in American history, when the subsequent investigation report endorsed the model.
Zero Trust is, at its core, a security model built on the principle that nothing inside or outside the system is inherently safe, and that no one should be trusted without a proper authentication process.
It may look like common sense gaining too much spotlight, but its enlightening effect on the industry is something we should all keep an eye on.
What distinguishes the Zero Trust methodology is the perspective and the policies it brings to security.
Rather than treating the entire system as one trusted whole, the model divides it into micro-segments and applies granular perimeter enforcement to each of them.
This directly criticizes the existing policy – and the lackadaisical attitude behind it – of trusting whatever sits inside the boundary, regardless of countless proven breaches and incidents.
The core message of the model is to face the harsh reality and prepare to prevent attacks from both inside and outside the system. The message is understandable; the methods, however, are a bit vague.
Real applications of Zero Trust security are commonly listed as orchestration, analytics, scoring, and so on. All of these focus on security management itself. They are essential, certainly, but hard to consider concrete methods of applying the model, in which micro-segmentation and granular perimeter rules are actually realized.
Until now, most enterprises have ‘built’ their own network perimeter much like a castle wall, in order to protect against external threats. The main reason behind this is convenience: the internal system’s security is assumed to be unquestionable, so insiders within the castle can act freely without security policies blocking their way of doing things. Once the castle is breached, however, it is impossible to stop the attack.
Zero Trust security, by contrast, is about identifying all ‘relevant’ data, drawing a micro-perimeter around it, and verifying anything and everything before a connection to its systems is granted access.
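As a rough sketch of that verify-everything principle, consider a per-request access check under a hypothetical micro-perimeter policy. The names, fields, and rules below are illustrative only, not any real product’s API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_trusted: bool
    mfa_passed: bool
    resource: str

# Hypothetical micro-perimeter policy: each segment lists the identities allowed in.
SEGMENT_POLICY = {
    "hr-database": {"alice"},
    "payments-api": {"alice", "bob"},
}

def grant_access(req: Request) -> bool:
    # No implicit trust: every request is verified, even from 'internal' callers.
    if not (req.device_trusted and req.mfa_passed):
        return False
    return req.user in SEGMENT_POLICY.get(req.resource, set())

# An insider on the corporate network still has to pass every check.
print(grant_access(Request("alice", True, True, "hr-database")))   # True
print(grant_access(Request("alice", True, False, "hr-database")))  # False
```

The point of the sketch is that network location never appears in the decision: identity, device state, and the resource’s own micro-perimeter do.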
Let’s find out which concrete methods we can use for real-life application.
1) Intelligent WAF (Web Application Firewall): Security on the basis of data units
The web is where the most security incidents occur, so it deserves top priority, and web security should be applied on the basis of data-unit analysis.
The existing way of deploying a WAF and IPS does not allow data-unit analysis: packets must first be reassembled into complete data, which enables relevant analysis, and malicious access is then blocked by analyzing that data.
‘Relevancy’ here is defined by data, not by packets. It is therefore recommended to use an intelligent WAF equipped with a logical engine capable of syntactic analysis.
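A toy sketch of the difference between packet-level and data-unit analysis, assuming a deliberately simplified injection rule (a real WAF engine is far more sophisticated than this single regular expression):

```python
import re
from urllib.parse import parse_qs

def looks_like_sqli(value: str) -> bool:
    # Toy syntactic rule: a quote-delimited OR-tautology such as ' OR '1'='1.
    # Applied to the decoded parameter value, i.e. the data unit, not raw bytes.
    return re.search(r"'\s*or\s*'[^']*'\s*=\s*'", value, re.IGNORECASE) is not None

# The attack payload arrives split across two TCP packets, so a per-packet
# signature scan can miss a pattern that straddles the packet boundary.
packets = ["user=admin&note=1%27%20OR%20%271", "%27%3D%271"]
body = "".join(packets)        # step 1: reassemble packets into one data unit
params = parse_qs(body)        # step 2: decode it into structured parameters

blocked = [name for name, values in params.items()
           if any(looks_like_sqli(v) for v in values)]
print(blocked)  # ['note']
```

Neither packet on its own contains the full decoded pattern; only after reassembly and decoding does the syntactic analysis see the injection.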
2) Security on the basis of columns: Minimizing the scope of disclosure
All data must be encrypted; the question is always how. The high road of encryption is to disclose the minimum amount of data to the minimum number of people for the shortest time possible.
The more data you expose, and the longer you expose it, the more dangerous it gets. Imagine you have activated full encryption for all your databases: during working hours, the data must sit decrypted and exposed to every insider.
Full-database encryption is therefore still vulnerable. To be fair, only a small portion of the information in a database is classified enough to actually need encryption, and it is normally clustered in particular columns.
What you can do, therefore, is encrypt only the columns containing classified information and use the rest of the database as it is.
This also complies with the Zero Trust model’s micro-segmentation principle.
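The shape of column-level encryption can be sketched as follows. The cipher below is a trivial XOR placeholder used purely to show *where* encryption is applied; a real deployment would use AES or similar through a vetted cryptographic library:

```python
import base64

SECRET = b"column-key"  # placeholder key; real systems use proper key management

def toy_encrypt(plaintext: str) -> str:
    # Trivial XOR stand-in for real encryption -- illustration only, not secure.
    data = plaintext.encode()
    xored = bytes(b ^ SECRET[i % len(SECRET)] for i, b in enumerate(data))
    return base64.b64encode(xored).decode()

def toy_decrypt(token: str) -> str:
    xored = base64.b64decode(token)
    return bytes(b ^ SECRET[i % len(SECRET)] for i, b in enumerate(xored)).decode()

rows = [
    {"name": "Alice", "dept": "HR", "ssn": "123-45-6789"},
    {"name": "Bob",   "dept": "IT", "ssn": "987-65-4321"},
]

# Encrypt only the classified column; the rest of the table stays queryable as-is.
for row in rows:
    row["ssn"] = toy_encrypt(row["ssn"])

it_staff = [r["name"] for r in rows if r["dept"] == "IT"]  # never touches ciphertext
ssn = toy_decrypt(rows[0]["ssn"])                          # decrypted only on demand
```

Ordinary queries run against plaintext columns at full speed, while the classified column is exposed only at the moment someone with the key explicitly decrypts it.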
3) Cloud Environment, the favorable option
Numerous security-related human errors occur because of the burden of managing in-house hardware. As long as managing hardware is part of an employee’s job responsibilities, the risk of human error is always there: the person in charge being absent, a lack of security consciousness, or others being given access to the server simply out of convenience. It is therefore easy to see why a cloud environment makes much more sense, and it is easy to find companies that support web security and data encryption in the cloud.
On the other hand, the cloud environment carries security risks of its own; in that case, we recommend consulting a cloud security expert.
4) Preventing Usability Reduction
No matter how strictly access to data is controlled, usability must not be reduced. If the process becomes too complicated, people will start employing expedients: humans inevitably end up looking for a better, more convenient way. We recommend ‘Secure SSO’, which adds an extra layer of security on top of SSO (Single Sign-On) while keeping its convenience and usability.
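‘Secure SSO’ as described here is a product-level concept; as a minimal sketch of the underlying idea, one sign-on can issue a short-lived signed token whose extra security layer is an MFA requirement and an expiry, which every connected service then verifies. The key name and claim fields below are assumptions for illustration:

```python
import base64, hashlib, hmac, json, time
from typing import Optional

SIGNING_KEY = b"sso-signing-key"  # hypothetical shared key; real systems manage this securely

def issue_token(user: str, mfa_passed: bool) -> str:
    # One sign-on issues a short-lived signed token; the 'extra layer' is the
    # MFA flag plus an expiry baked into the signed claims.
    payload = json.dumps({"user": user, "mfa": mfa_passed, "exp": time.time() + 600})
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(payload.encode()).decode() + "." + sig

def verify_token(token: str) -> Optional[str]:
    body, sig = token.rsplit(".", 1)
    payload = base64.b64decode(body).decode()
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                  # tampered token
    claims = json.loads(payload)
    if not claims["mfa"] or claims["exp"] < time.time():
        return None                  # missing second factor, or expired
    return claims["user"]            # one token accepted by every connected service
```

The user authenticates strongly once; every subsequent access is still verified, but the verification is a cheap token check rather than a fresh login, preserving usability.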
Only after adding the above-mentioned options can we say that we have faithfully applied the Zero Trust security theory. Even then, the model is only a direction; there is no perfect completion to begin with. We must keep compensating for its shortcomings and keep looking into the fundamentals of security.