As I have mentioned many times, there is no such thing as exhaustive testing, and one of the main principles of testing is that QA shows defects are absent from the cases run, not that no bugs exist. However, without proper testing and analysis, we can be certain we have no protection from attacks. What is imperative is that we perform as much testing and validation as possible within the constraints of our business processes and activities. Importantly, the protocols discussed in this series of articles demonstrate due diligence and attention to the issues involved; by prioritizing them, we help ensure security is in place, as verified by the testing.
Security is considered a specialist role, and while it is becoming more visible on the IBM i side, it still isn't the first area infosec teams concern themselves with.
Part of this is due to the securable nature of the system. However, as we know, the box doesn't ship secure. We need to have the correct system values and authority limitations in place. Further, like every interface, the IFS is a vulnerability. So we first have to analyze the context of our systems. Ironically, PC systems with less critical data are frequently tested more because their security risks are well known, while the IBM i holding the mission-critical data might be overlooked. The testing effort needs to be based on the importance of the components. Lastly, security needs to be baked into our systems; patching after the fact will never be as reliable as the innate settings.
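To make "correct system values in place" concrete, here is a minimal sketch of auditing current values against a hardened baseline. On a live system the current values could be pulled via the QSYS2.SYSTEM_VALUE_INFO SQL service; here they are a plain dict, and the baseline entries are illustrative, not an official hardening standard.

```python
# Illustrative baseline, NOT an official standard: pick values your own
# policy mandates.
BASELINE = {
    "QSECURITY": "40",      # security level 40 or above
    "QPWDLVL": "3",         # stronger password rules
    "QCRTAUT": "*EXCLUDE",  # restrictive default create authority
}

def audit_system_values(current):
    """Return (name, expected, actual) for each value off baseline."""
    findings = []
    for name, expected in BASELINE.items():
        actual = current.get(name)
        if actual != expected:
            findings.append((name, expected, actual))
    return findings

# Example: a system still close to shipped defaults
current = {"QSECURITY": "30", "QPWDLVL": "3", "QCRTAUT": "*CHANGE"}
for name, expected, actual in audit_system_values(current):
    print(f"{name}: expected {expected}, found {actual}")
```

The point is less the script than the habit: the baseline is the specification, the comparison is the test, and the findings are the results.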
Once we have aligned effort with the context of the system importance we look at our actual objectives.
This is an area where the difference between information assurance and actual testing is important to understand. To be clear, information assurance is "measures that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and non-repudiation. These measures include providing for restoration of information systems by incorporating protection, detection, and reaction capabilities." [NISTIR 7298]. Security testing, on the other hand, is "a process used to determine that the security features of a system are implemented as designed and that they are adequate for a proposed application environment." [MDA1]. What does this mean in the real world? IA (Information Assurance) is the set of rules that applies to everything in our environment: the specifications and requirements we work from, and what we base our designs and decisions on. Security testing is how we prove that what we think we are doing is actually being done, and, when issues are found that haven't been prevented, how we resolve them moving forward. Often I talk about the difference between specifications, tests, and results: specifications are what we base our configurations and applications upon and write test cases to; tests are the actions performed linking assumptions to reality; results show where we were wrong, where we nailed it, and everything in between.
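That specification-to-test-to-result chain can be sketched in a few lines of code. The record shapes and field names below are illustrative, purely to show the traceability idea: which specs were never exercised, and which ones reality contradicted.

```python
from dataclasses import dataclass, field

@dataclass
class Specification:
    spec_id: str
    statement: str
    test_ids: list = field(default_factory=list)  # tests written to this spec

@dataclass
class TestResult:
    test_id: str
    passed: bool

def untested_specs(specs, results):
    """Specs with no executed test: assumptions never linked to reality."""
    executed = {r.test_id for r in results}
    return [s.spec_id for s in specs if not set(s.test_ids) & executed]

def failed_specs(specs, results):
    """Specs where a linked test failed: where we were wrong."""
    failed = {r.test_id for r in results if not r.passed}
    return [s.spec_id for s in specs if set(s.test_ids) & failed]

specs = [
    Specification("S1", "Only the payroll group reads PAYFILE", ["T1"]),
    Specification("S2", "IFS root is not wide open to *PUBLIC", ["T2"]),
    Specification("S3", "Profiles expire after inactivity", []),
]
results = [TestResult("T1", True), TestResult("T2", False)]
print(untested_specs(specs, results))  # ['S3']
print(failed_specs(specs, results))    # ['S2']
```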
Scoping and focus are the next areas to address.
Obviously, IT security ranges from minutiae to the enterprise. If we write tests too narrowly, we won't have confidence in the system as a whole; too broadly, and we can't narrow down the actual failure for resolution. Authority groups are an excellent example of this. One of the things that makes the IBM i a tester's dream is the number of settings that give us tremendous confidence. It is relatively simple to prove an authority group is in place, along with the profiles, jobs, and other objects using it. More difficult is the next component of the testing suite. Part one: can every procedure needed be run successfully under this constraint? Part two: are there any overrides or exceptions that supersede it? At every point we want to think of the following security test objectives: verify and validate the protections needed, specifically as they relate to assets, protective measures, risk, and identification of vulnerabilities.
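Those two parts can be sketched as code. The ordered authority levels below are a deliberate simplification of the real IBM i object-authority model, and the object and profile names are made up for illustration; live privilege data could come from a service such as QSYS2.OBJECT_PRIVILEGES.

```python
# Simplified authority ladder, weakest to strongest (illustrative only;
# the real IBM i model has finer-grained data and object authorities).
LEVELS = ["*EXCLUDE", "*USE", "*CHANGE", "*ALL"]

def covers(granted, needed):
    """True if the granted authority is at least as strong as needed."""
    return LEVELS.index(granted) >= LEVELS.index(needed)

def part_one_gaps(required, group_auth):
    """Procedures the group cannot run under this constraint."""
    return [(desc, obj) for desc, obj, needed in required
            if not covers(group_auth.get(obj, "*EXCLUDE"), needed)]

def part_two_overrides(private_auth, group_auth):
    """Private authorities that differ from, and so supersede, the group's."""
    return [(user, obj, auth) for user, obj, auth in private_auth
            if auth != group_auth.get(obj, "*EXCLUDE")]

group_auth = {"PAYLIB/PAYFILE": "*USE"}
required = [("read payroll", "PAYLIB/PAYFILE", "*USE"),
            ("update payroll", "PAYLIB/PAYFILE", "*CHANGE")]
private = [("JSMITH", "PAYLIB/PAYFILE", "*ALL")]

print(part_one_gaps(required, group_auth))      # update cannot run
print(part_two_overrides(private, group_auth))  # JSMITH supersedes the group
```

Part one catches the constraint being too tight for the business; part two catches it being quietly bypassed, which is the more dangerous failure.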
After all this, how do we actually test?
We start by looking at the source or configuration of the system or application, previous tests, policy, existing assessments, environments, skill sets, known risks, structure, the project team, experience, tools, and limitations. While each area (client/server, web, back end) has the same needs, the ways of testing and the risks are vastly different. Further, some testing must be against live data and some must be against anonymized data. Then there is the ever-important question of access vector: an unauthorized user inside the organization versus entry from outside it. We move on to the degrees of failure. This is the concept that if a test fails, the approach wasn't necessarily to blame; there are so many new attack vectors that not everything can be prevented before the exploit is created. However, tests may also not have been designed with enough foresight to realistically identify risks. To determine the cause of an issue, the root of the failure needs to be analyzed. One of the most valuable ways to ensure the proper tests are designed is to have the needed stakeholders involved, which requires individuals from every level of the organization.
Lastly, all of this must lead to improving the practice of testing.
We are always going to find areas that could have been done differently, or risks not accounted for. How we practice continuous improvement is what matters. Evaluations should consider short- versus long-term perspectives; process and organizational composition; and the tools, skills, and people involved. Then we look at the metrics and data produced from all this effort: ratios of risks to test coverage, policies and practices by test, and requirements to tests. We also examine the effectiveness of past testing efforts: where issues were found, how severe they were, and pre- versus post-release concerns.
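Two of those metrics are simple enough to sketch directly: a coverage ratio (risks or requirements touched by at least one test) and the pre- versus post-release defect split. The input data below is invented for illustration.

```python
def coverage_ratio(covered, universe):
    """Fraction of items in the universe touched by at least one test."""
    return len(set(covered) & set(universe)) / len(universe) if universe else 1.0

def escape_rate(defects):
    """Share of defects found post-release; lower is better."""
    post = sum(1 for d in defects if d["found"] == "post-release")
    return post / len(defects) if defects else 0.0

risks = ["R1", "R2", "R3", "R4"]
tested = ["R1", "R3"]
defects = [{"id": 1, "found": "pre-release"},
           {"id": 2, "found": "post-release"},
           {"id": 3, "found": "pre-release"}]

print(coverage_ratio(tested, risks))   # 0.5
print(round(escape_rate(defects), 2))  # 0.33
```

Tracked over releases, these two numbers answer the improvement question concretely: is coverage rising, and are fewer issues escaping to production?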
Next month, the security testing process.