Reading back through the roots of software security, it struck me that it is always good to revisit the principles that serve as foundations for the field. For this reflection I have summarized the guiding principles for software security from the book “Building Secure Software” by John Viega & Gary McGraw. This may not be an exhaustive list of principles, and of course addressing all of them does not guarantee that a system will never be successfully attacked. But it is a good starting point for thinking about security.
In Chapter 5, Viega and McGraw list the following ten guiding principles on software security. The goal of these principles is to highlight, from Viega and McGraw’s perspective, the most important objectives developers should keep in mind when designing and building a secure system.
- Secure the weakest link.
- Practice defense in depth.
- Fail securely.
- Follow the principle of least privilege.
- Compartmentalize.
- Keep it simple.
- Promote Privacy.
- Remember that hiding secrets is hard.
- Be reluctant to trust.
- Use your community resources.
1. Secure the weakest link.
A software system is only as secure as its weakest component. Attackers will go after the weakest parts of your system because those are the parts most likely to be easily broken. Identifying the weakest component of a system falls directly out of a good risk analysis. Given good risk analysis data, address the most serious risk first, not the risk that happens to be easiest to mitigate. Deal with one or two major problems, then move on to the remaining ones in order of severity. Of course 100% security is never attainable, and it is acceptable to stop addressing risks once all components appear to fall within the threshold of acceptable risk for the business.
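The severity-ordered triage described above can be sketched as a few lines of code. Everything here (the risk list, the scores, the acceptable-risk threshold) is illustrative, not from the book:

```python
# Sketch of risk-driven ordering: given risk-analysis output, work on the
# most severe items first and stop once everything left falls under the
# acceptable-risk threshold. Component names and scores are made up.

def prioritize(risks, acceptable=3):
    # risks: list of (component, severity) pairs; higher severity = worse.
    ranked = sorted(risks, key=lambda r: r[1], reverse=True)
    # Only items above the threshold need active work, worst first.
    return [component for component, severity in ranked
            if severity > acceptable]

worklist = prioritize([("login", 9), ("logging", 2), ("upload", 7)])
```

The point is only the ordering: "login" is handled before "upload", and "logging" is accepted as residual risk.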
2. Practice Defense in Depth.
The idea behind defense in depth is to manage risk with diverse defensive strategies, so that if one layer of defense turns out to be inadequate, another layer hopefully prevents a full breach. The difference from the previous principle is that this one treats defense as a whole, thinking in terms of redundant security measures: the sum of the layers offers far greater protection than any single component can.
3. Fail Securely.
Failure is unavoidable and should be planned for. What is avoidable are the security problems related to failure. The problem is that when many systems fail, in whatever way, they exhibit insecure behavior. In such systems, attackers only need to cause the right kind of failure, or wait for it to happen. The main lesson: if your software is going to fail, make sure it does so securely!
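One common way to apply this is to fail closed: when the security check itself breaks, deny rather than allow. A minimal sketch, where `check_acl` is a hypothetical lookup that may raise if the ACL store is corrupt:

```python
# "Fail securely" sketch: an error in the permission check results in a
# denial, never in an accidental allow.

def check_acl(user, resource, acl):
    # Hypothetical ACL lookup; raises KeyError for unknown users.
    return resource in acl[user]

def can_access(user, resource, acl):
    try:
        return check_acl(user, resource, acl)
    except Exception:
        # Failure path: deny by default instead of leaking access.
        return False
```

The insecure variant would be a handler that falls through to `return True` (or leaves a session open) when the lookup throws.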
4. Follow the Principle of Least Privilege.
This principle states that only the minimum access necessary to perform an operation should be granted, and only for the minimum amount of time necessary. Whenever you give out access to parts of a system, there is some risk that the privileges associated with that access will be abused. Sometimes more privileges than necessary are requested during coding to make development easier; for example, one might say: "Someday I might need to write this object, and it would suck to have to go back and change this request". Insecure defaults may lead to a violation here too. This problem is more common nowadays because many legacy systems were originally intended to run in a restricted environment.
5. Compartmentalize.
The basic idea behind compartmentalization is to minimize the amount of damage that can be done to a system by breaking the system up into as few units as possible while still isolating code that has security privileges. The classic example of poor compartmentalization is the standard UNIX privilege model, in which interesting operations work on an all-or-nothing basis: if you have root privileges, you can do anything you want anywhere on the system. Given a device driver, if an attacker exploits a buffer overflow in its code, the attacker can make raw writes to disk and mess with any data in the kernel's memory.
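In application code, the same idea shows up as keeping the code that touches untrusted input separate from the code that holds privileges, so a bug in the former cannot directly corrupt the latter. A small sketch with made-up names (`parse_request`, `PrivilegedStore`):

```python
# Compartmentalization sketch: the parser handles untrusted input but has
# no access to the data store; the store is the only code that mutates
# state, and it re-checks every request instead of trusting the parser.

def parse_request(raw):
    # Unprivileged compartment: touches only its own input.
    cmd, _, arg = raw.partition(":")
    if cmd not in {"get", "put"}:
        raise ValueError("unknown command")
    return cmd, arg

class PrivilegedStore:
    # Privileged compartment, kept deliberately small.
    def __init__(self):
        self._data = {}

    def execute(self, cmd, arg):
        if cmd == "put":
            key, _, value = arg.partition("=")
            if not key:
                raise ValueError("empty key")
            self._data[key] = value
            return "ok"
        return self._data.get(arg, "")
```

A buffer-overflow-style bug in `parse_request` can at worst mangle a request; it cannot reach `_data` directly, which is the opposite of the all-root UNIX situation described above.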
6. Keep it Simple.
Complexity increases the risk of problems.
For example, the compartmentalization principle must be used in moderation: if you segregate each little bit of functionality, the system becomes completely unmanageable. A complex design is never easy to understand, is more likely to include subtle problems that are missed during analysis, and lets more bugs slip through the testing phase. Consider reusing components whenever possible, as long as the components to be reused are believed to be of good quality. Of course, there is always the possibility of problems even in widely used components, but it is reasonable to expect less risk in a known quantity.
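The reuse advice applies even to tiny pieces of code. For instance (my example, not the book's, which predates this module), generating session tokens with Python's standard `secrets` module is both simpler and safer than any hand-rolled generator built on `random`:

```python
# Reuse-over-reinvent sketch: `secrets` is a widely scrutinized standard
# component for cryptographically strong tokens.

import secrets

def new_session_token():
    # 32 random bytes, hex-encoded: 64 characters of output.
    return secrets.token_hex(32)
```

One line of vetted library code replaces a custom generator that would add complexity and, with `random`, predictability.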
7. Promote Privacy.
There is often no quicker way to lose customer respect than to abuse user privacy, so developers should be as diligent as possible in protecting any personal user information. One of the things privacy most often trades off against is usability. For example, it is better that a web page forget credit card numbers as soon as they are used than to risk that information being compromised. Leaving any sort of information around can help potential attackers. The main advice here is to promote privacy for your users, your systems, and your code.
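The credit card example can be sketched concretely: keep the full number only for the single charge call, and redact it from anything that is logged or stored. `pay` and its stubbed charge result are hypothetical:

```python
# Privacy sketch: sensitive data is redacted before it reaches any log.

import re

# Matches a bare run of 13-16 digits (a rough card-number pattern).
CARD_RE = re.compile(r"\b\d{13,16}\b")

def redact(text):
    return CARD_RE.sub("[REDACTED]", text)

def pay(card_number, amount):
    result = f"charged {amount}"   # stand-in for a real charge call
    # Only the redacted form is retained; the raw number is not logged.
    log_line = redact(f"payment ok card={card_number} amount={amount}")
    return result, log_line
```

The log line is still useful for debugging, but leaves nothing behind that helps an attacker who later reads the logs.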
8. Remember That Hiding Secrets is Hard.
This principle assumes that even the most secure systems are vulnerable to inside attacks. Several studies show that the most common threat to companies is the insider attack, so the potential for serious misuse and abuse is a very real risk. And since insiders can read code and attackers can reverse engineer binaries, hiding secrets in either is much harder than it looks.
9. Be reluctant to Trust.
There are hundreds of products from security vendors with gaping security holes. Therefore, instead of making assumptions that need to hold true, developers should be reluctant to extend trust. Trust is, for example, often extended far too easily in the area of customer support, just because "it makes their jobs easier".
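A classic instance of extending trust too easily is accepting client-supplied data that the server could verify itself. A sketch (the catalog and prices are made up): the server re-derives the price rather than trusting the one the client sent, since client-side checks can always be bypassed:

```python
# Reluctant-to-trust sketch: never use a security-relevant value supplied
# by the client when the server can look it up authoritatively.

CATALOG = {"book": 20, "pen": 2}

def checkout(item, claimed_price):
    # Ignore claimed_price entirely; trust only the server-side catalog.
    real_price = CATALOG.get(item)
    if real_price is None:
        raise ValueError("unknown item")
    return real_price
```

A client claiming the book costs 1 still pays 20; the assumption "the UI already validated this" never has to hold.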
10. Use Community Resources.
There are many websites and sources of information about known threats and vulnerabilities in software components and systems. It is far better to trust security libraries that have been widely scrutinized and used. Still, there are many examples of security bugs in widely used pieces of software that went unnoticed for years, so community resources are not always the safest choice; but at least they let you leverage the experience of others.