Security Testing for Denial of Service Attacks

Yvonne Enselman

Security testing is a component of reliability verification, and nothing is a more visible failure than a system that can't be accessed. One interesting and frustrating fact about this threat is that it has been used for everything from pranks to the opening stage of major attacks on government agencies. The intent of a denial of service (DOS) attack is to deplete a website's resources until it fails. Unfortunately, an attack doesn't need to be complicated to succeed, and it can bring a business down quickly without a specific intrusion that can be monitored for. Much like malicious data encryption, the hacker doesn't need to do anything with your system: if they can prevent you from running your business, you must deal with them and the threat immediately.

While there is no definitive answer to DOS attack testing, we can organize our efforts around principles that apply to all well-built and well-maintained systems. Validating input fields can prevent bots from entering thousands of slashes. Once again, this is an area where problems are more likely to occur if there are communication lags between departments (web application and IBM i, for instance), especially since the most effective way to guard against these threats is efficiency and performance monitoring of the IBM i. That monitoring is going to be our best line of defense.
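To make the field validation idea concrete, here is a minimal Python sketch; the field name, length limit, and pattern are hypothetical illustrations, not anything prescribed by this article or by the IBM i itself:

    import re

    # Hypothetical limit and pattern; match them to the real field definitions.
    MAX_FIELD_LENGTH = 100
    ACCOUNT_PATTERN = re.compile(r"[A-Za-z0-9-]{1,20}")

    def validate_account_field(value: str) -> bool:
        """Reject junk input before it can burn resources downstream."""
        if len(value) > MAX_FIELD_LENGTH:
            return False  # thousands of slashes are refused here, cheaply
        return ACCOUNT_PATTERN.fullmatch(value) is not None

Rejecting bad input at the edge means the expensive back-end work, the database lookups and program calls on the IBM i, is never triggered by a bot's garbage.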

Efficiency testing is the place to start. Efficiency is defined as the capability of the system to provide appropriate performance relative to the resources used under stated conditions. Obviously, this includes hardware, software, and all the integration points between them. As systems, teams, and development become more distributed, those integration points become easier for hackers to exploit. Because most efficiency failures trace back to a core design flaw, we need to be aware that they can be very expensive to fix late in the lifecycle; understanding this impact at the design and requirements phase, through review and static analysis techniques, is critical. Contrary to the idea that we test performance by trying to overload a system until it fails (although that is the exact attack vector I am describing), the correct approach is to measure the working system without causing it to fail. Again, this needs to be addressed at all levels and should be pervasive, not left to the end of system testing, which is a common misconception.
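As a sketch of what "measure, don't overload" can look like in practice (the endpoint, baseline, and threshold below are placeholders, not values from this article), a scheduled check can sample response times at a gentle pace and flag drift from a known baseline:

    import statistics
    import time
    import urllib.request

    # Placeholder endpoint and baseline; substitute your own measured values.
    URL = "https://example.com/health"
    BASELINE_MS = 150
    TOLERANCE = 1.5  # alert when the median drifts past 1.5x baseline

    def sample_latency(samples: int = 20) -> float:
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            urllib.request.urlopen(URL, timeout=10).read()
            timings.append((time.perf_counter() - start) * 1000)
            time.sleep(1)  # steady pacing: observe the system, don't flood it
        return statistics.median(timings)

    median_ms = sample_latency()
    if median_ms > BASELINE_MS * TOLERANCE:
        print(f"Median latency {median_ms:.0f} ms exceeds baseline; investigate")

The point of the pacing and the modest sample count is exactly the one above: the test characterizes normal behavior rather than reproducing the attack.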

Lastly, there is the use of tools. No tool will ensure proper testing coverage by itself, although tools can be very helpful. I am going to talk about a couple that are common on the IBM i.

The native performance analysis tools can give a lot of information about where the system is inefficient. Index Advisor will tell you where access paths are needed over your database when the same requests are generated often. DASD and CPU increases, even relatively small ones, happening quickly are another indication that a DOS attack could be underway. Reorganizing your files to reclaim deleted records ensures the application isn't working against white-space clutter. All of the above is included in the operating system and available without additional cost to teams needing to ensure the proper performance of the system.
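As one way those checks might be scripted (a sketch under assumptions: the DSN and credentials are placeholders, and the QSYS2 view and column names, while documented IBM i SQL services, should be verified against your release), both Index Advisor findings and CPU/DASD figures can be pulled through SQL:

    import pyodbc

    # Placeholder DSN; assumes an IBM i ODBC driver is configured.
    conn = pyodbc.connect("DSN=MYIBMI;UID=monitor;PWD=change-me")
    cur = conn.cursor()

    # Tables Index Advisor keeps recommending indexes for.
    cur.execute(
        "SELECT TABLE_SCHEMA, TABLE_NAME, KEY_COLUMNS_ADVISED, TIMES_ADVISED "
        "FROM QSYS2.SYSIXADV WHERE TIMES_ADVISED > 100 "
        "ORDER BY TIMES_ADVISED DESC"
    )
    for row in cur.fetchall():
        print(row)

    # Spot-check CPU and system ASP (DASD) use; fast jumps merit a closer look.
    cur.execute(
        "SELECT AVERAGE_CPU_UTILIZATION, SYSTEM_ASP_USED "
        "FROM QSYS2.SYSTEM_STATUS_INFO"
    )
    print(cur.fetchone())

Reclaiming deleted-record space is then a matter of running the RGZPFM CL command against the affected file during a quiet window.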
