Continuing our discussion of security testing infrastructure, we have arrived at Denial of Service (DoS) attacks. Since DoS focuses on resource depletion, load and stress testing are the primary proactive techniques to counter it.
Interestingly, it is easy to confuse these two, or at least it was for me.
Load testing evaluates the behavior of a system under increasing load, looking at parallel users and transactions. Two points are worth understanding about this type of testing. First, we steadily increase the simulated load to mimic peak usage. Second, load testing can be very difficult given the differences between test and production environments and the limitations of testing, especially without automation tools.
Think realistically about the ebb and flow of users and processes on your systems. There are excellent performance tools on IBM i that can show this for you. From there, scale your tests to the size of your environments, keeping the test-to-production ratio consistent. Again, for load testing, we want to simulate both the user/transaction load and the increases you see in production in relevant percentages. Stress testing, on the other hand, evaluates a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced resources.
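The load-ramp idea above can be sketched in a few lines of Python. Everything here is illustrative: `simulate_transaction` is a hypothetical placeholder for a real transaction driver against your test system, and the peak-user count and test-to-production ratio are assumed values you would replace with numbers from your own performance data.

```python
import threading
import time

def simulate_transaction():
    """Placeholder for a real transaction against the test system
    (hypothetical -- substitute an actual API call or job submission)."""
    time.sleep(0.01)  # stand-in for real work

def run_load_stage(concurrent_users, transactions_per_user):
    """Run one load stage: N parallel users, each issuing M transactions.
    Returns the elapsed wall-clock time for the stage."""
    def user_session():
        for _ in range(transactions_per_user):
            simulate_transaction()
    threads = [threading.Thread(target=user_session)
               for _ in range(concurrent_users)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# Scale the production peak down by the test:production ratio
# (both numbers assumed here; take yours from real performance data).
PROD_PEAK_USERS = 200
TEST_TO_PROD_RATIO = 0.25

# Ramp in stages toward the scaled peak, mimicking the ebb and flow
# of real usage rather than hitting the system with one flat burst.
for fraction in (0.25, 0.5, 0.75, 1.0):
    users = int(PROD_PEAK_USERS * TEST_TO_PROD_RATIO * fraction)
    elapsed = run_load_stage(users, transactions_per_user=5)
    print(f"{users:3d} users -> {elapsed:.2f}s")
```

In a real harness, the stage timings would be compared against your production baselines at the same relative load.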
Now that LPARs are common and separate-but-equal environments are readily available, this type of testing has become far easier over the last 15 years. However, the tests need to be designed to push the infrastructure to the limits of both its intended use and what could happen if your business goes crazy. I doubt companies making PPE thought they would need to ramp up the way they did in 2020. Another way to differentiate between these two test types, both of which are important: load testing looks at what the system does from an application development perspective, while stress testing looks at how it is done from a configuration/administration point of view.
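A stress test, by contrast, keeps pushing past the anticipated workload until something breaks. The sketch below is illustrative only: `measure_stage` is a fake stand-in for a real driver against a test LPAR, and the SLA target and anticipated peak are assumed numbers. The shape is what matters: ramp past the specified workload and record the actual ceiling of the configuration.

```python
def measure_stage(concurrent_users):
    """Hypothetical stand-in: returns average response time (seconds)
    for one stage. Replace with a real driver against the test LPAR.
    This fake degrades sharply past 80 'users' to illustrate a ceiling."""
    base = 0.05
    return base if concurrent_users <= 80 else base * (concurrent_users - 79)

SLA_SECONDS = 0.5       # assumed service-level target
ANTICIPATED_PEAK = 60   # assumed specified workload

# Keep pushing past the anticipated peak until the SLA breaks,
# recording where the configuration actually gives out.
users = ANTICIPATED_PEAK
while measure_stage(users) <= SLA_SECONDS:
    users += 10
print(f"SLA of {SLA_SECONDS}s broken at {users} concurrent users")
```

The same ramp can be repeated with resources deliberately reduced (fewer processors, less memory on the LPAR) to cover the "reduced resources" half of the stress-testing definition.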
Of course, DoS attacks range from pranks to governmental or business espionage. It is easy to see how disruption can hurt a business or its reputation. We also know, however, that while responding to chaos, organizations are less likely to catch malevolent behavior: back-door insertion, unauthorized access, modifications that haven't gone through proper change management, and so on.
This can be frustrating for quality assurance professionals. Having an exact replica of the production system and the capability to mimic true-to-life inputs can be difficult. The answers to these attacks are also incomplete because the attacks can be capricious in motive. Some of the easiest ways to start testing against this threat are validation and efficiency. Rigorous validation of any potential input fields is reasonable for most organizations, and it prevents, for example, thousands of slashes from being accepted by your system.
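As a sketch of that kind of validation (the length limit, allowed character set, and run-length cutoff below are all assumptions to tune per field), a simple filter can reject oversized or junk-flooded inputs before they reach the system:

```python
import re

MAX_FIELD_LENGTH = 256  # assumed limit; tune per field

def validate_field(value: str) -> bool:
    """Reject inputs that are oversized or consist of long runs of
    repeated junk characters (e.g. thousands of slashes)."""
    if len(value) > MAX_FIELD_LENGTH:
        return False
    # Reject 32+ consecutive repeats of the same slash or dot.
    if re.search(r"([/\\.])\1{31,}", value):
        return False
    # Allow only the characters this (hypothetical) field expects.
    return bool(re.fullmatch(r"[A-Za-z0-9 ._\-/]*", value))

print(validate_field("ORDERS/2021/Q2"))  # normal path-like input
print(validate_field("/" * 5000))        # flood of slashes, rejected
```

Rejecting such input at the edge is cheap; the efficiency half of the equation is making sure the rejection itself costs almost nothing per request.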
Knowledge and monitoring of your environments are also important. Companies need to know that the system is performing within tolerance at all times and that processor spikes aren't happening. Going back to the load testing discussed above, this is why increased usage over time is a test component: the attacks can start small and spawn further damage.
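One lightweight way to watch for those processor spikes is a rolling-baseline check. This sketch is illustrative only: on IBM i the real samples would come from your performance tooling (for example, Collection Services data), and the window size and threshold here are assumed values.

```python
from collections import deque

def make_spike_detector(window=12, threshold=1.5):
    """Return a checker that flags a CPU sample exceeding `threshold`
    times the rolling average of the last `window` samples.
    (Window and threshold are assumed starting points; tune to taste.)"""
    history = deque(maxlen=window)
    def check(cpu_pct):
        baseline = sum(history) / len(history) if history else cpu_pct
        history.append(cpu_pct)
        return cpu_pct > baseline * threshold
    return check

check = make_spike_detector()
samples = [22, 25, 24, 23, 26, 71, 24]  # hypothetical CPU % readings
flags = [check(s) for s in samples]
print(flags)  # only the 71% reading is flagged
```

A check like this catches the early, small-looking bumps that a fixed "alert at 90% CPU" rule would miss.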
Knowing how to approach the testing needed, and implementing multiple procedures, can keep your sites from crashing via this attack vector.
More from this month:
- System Environment Variables for Controlling QNTC Behavior
- Getting to Know COBIT and IBM i System Administration
- Changing Your IBM Business Partner
- IBM i Security Resource Page
- iTech iTip Videos
- Sips & Tricks: Coffee with iTech
- iBasics: IBM i Education for the Beginner System Administrator
- Upcoming Events
- IBM i, FSP, and HMC release levels and PTFs (April 2021)