Date: January 27, 2022

January 2022 Newsletter

This newsletter includes:

  • BRMS – Omitting Constantly Locked Files
  • Shut down those /QIBM shares
  • What is Zero Trust?
  • Understanding Storage Options for IBM i

Well, I thought that by 2022 we would be back to normal life, but unfortunately the start of 2022 doesn’t seem any different from the start of 2021. By now it is clear that both 2020 and 2021 were unprecedented years, and 2022 has already been eventful. Let’s hope that with people getting vaccines and boosters, plus with everyone catching Omicron, getting back to normal is right around the corner. I said this last year: “It is hard to say what will happen a month from now, much less a whole year!” Boy, was that ever true.

BRMS – Omitting Constantly Locked Files

BRMS (Backup, Recovery, and Media Services) is an application for IBM i used to make backups and restores easier. You can configure backups to happen just as you want them and, when you need to restore, find exactly what you want and where to restore it from. Very often users will configure BRMS to do save-while-active saves so that applications don’t have to be ended during backups – keeping their system up and running and making them look like a superhero.

However, any objects that have exclusive locks on them cannot be saved during a save-while-active operation. Also, by default, objects under commitment control cannot be saved while in the middle of a commit cycle. When even a single object in a save cannot be saved, the result is the message: Control group XYZ type *BKU completed with errors. Some administrators come to accept this as a good completion message and never check to see what didn’t get saved. That can lead to a disastrous situation if you need to restore and it turns out that less of the system was being backed up than you thought.

There are two ways to rectify this situation:

1) End the application causing the lock before the save and then start it after the save finishes.

2) Omit the objects that cannot be saved. Ending the application may not be possible, in which case you have to look at what’s not being saved. If the objects that cannot be saved are trivial things like log files or easily re-created data areas, you can simply omit them from the backup, since they are not critical to restoring the system.

The easiest way to add omits to your BRMS control groups is by using the graphical interface for BRMS inside of Navigator for i. There is a section for Backup, Recovery, and Media Services in the panel on the left side of the page.

 

Inside the section for Backup Control Groups, you will find all your configured control groups. Right-click the one you want to work with and click Properties. Click on the What tab to see what you are backing up. Click on the arrows next to the Item and then click Change Omits… This will let you add either single objects or many objects by specifying a string and then ending …

Shut down those /QIBM shares

Last week I posed a question on Twitter and LinkedIn about what would actually be deleted if your IBM i /QIBM share were fully compromised by malware.

What’s the /QIBM share?

Well, it’s something that was released years ago but has since been deemed a security risk. In fact, it was iTech Solutions that reported it and requested that it be shut down. Since then, four PTFs have been released to do just that, one for each recent OS version:

    • 7.1 — SI76071
    • 7.2 — SI76072
    • 7.3 — SI76073
    • 7.4 — SI76074

Unfortunately, many IBM i customers may not be able to apply PTFs very often, so that share may still exist on your system. Turning it off is a very simple process: go into IBM Navigator for i, find the file share for /QIBM, and click “Stop Sharing.” That’s all the PTF does as well.

Now, leaving that share out there would not be an ideal thing to do. If someone with *ALLOBJ special authority were to map that drive to their PC or laptop and then inadvertently kicked off some ransomware, the /QIBM directory would be in big trouble. How much trouble?

Well, I happen to have an IBM i partition we use for such tests. I decided to redeploy some custom malware as I did in the article called The Real Effects of Malware on IBM i against the /QIBM directory on that same (albeit rebuilt) WLECYOTE server. I do this so you don’t have to wonder…you’ll know.

Malware directed at WLECYOTE ran for about 5 minutes, destroying much of /QIBM. What was the result?

Well, licensed programs in *ERROR status were:

  • 5770SS1 options *base, 3, 30, 31, 39, 43
  • 5770DG1
  • 5770JV1
  • 5770WDS options 33, 34, 35, 41, 43, 44, 45

 

  • Host servers mostly destroyed except Telnet
    • Good luck getting Access for Windows or ACS Telnet without Signon
    • DOS Telnet will work! Enjoy!
    • But to be fair, I think I had a Telnet server in use…so your mileage may vary depending on the time of day.
  • Objects missing for licensed programs 5770TC1, 5770TS1, 5770XW1, 5770NAE, 5770PT1, 5733SC1
    • That’s TCP/IP, Transform Services, the IBM Access Family, Network Authentication Enablement, Performance Tools, and SSH
  • License information
  • Digital Certificate Manager…goodbye encryption, even if Telnet works
  • Navigator for i

That’s a good chunk of the system destroyed. It’s functional…sort of.

That’s why you need to update PTFs (to …

What is Zero Trust?

Security is top of mind for CIOs, CISOs, and even CEOs today. Ransomware attacks hit companies every day. Even those who think they are prepared are surprised when hackers find a gap somewhere in their security strategy. Despite implementing all kinds of monitoring and anti-virus protection at the network layer, the hackers can still wreak havoc. So, what’s the solution?

We need to turn our security methodology on its head. The current approach to securing the network is to implement VPNs and anti-virus software. The thought is that if we keep the hackers out of the network, then our data is safe. The problem is that hackers can find a way through the perimeter – or worse, they are already inside the network. If you assume that your data is safe once somebody is inside the perimeter, you are at risk. This isn’t the best way to keep your data from falling into the wrong hands.

Zero Trust

Zero Trust is a methodology based on the premise that all access is a potential threat. All users are verified and authenticated before gaining access to the network, an application, data, or any workload. All user access is segmented, and if you need elevated authority to do a task, that authority should only be granted when you need it and only after you are verified and authenticated as authorized. All data is encrypted from end to end. Monitoring the network and access to sensitive data is critical to mitigating risk and stopping the hackers before they access your data or worse.
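To make the principle concrete, here is a minimal, purely illustrative sketch of verify-on-every-request with least privilege. Every name here (`Request`, `authorize`, the role table) is hypothetical – a real deployment would delegate these checks to an identity provider and a policy engine – but the shape of the logic is the point: no request is trusted by default, and permissions are the minimum needed.

```python
# Illustrative Zero Trust check: every request is authenticated and
# authorized on its own merits, with least-privilege role checks.
# All names here are hypothetical, not from any specific product.
from dataclasses import dataclass

# Hypothetical mapping of roles to the actions they may perform.
ROLE_PERMISSIONS = {
    "operator": {"read"},
    "admin": {"read", "write"},
}

@dataclass
class Request:
    user: str
    role: str
    token_valid: bool   # stands in for real token/MFA verification
    action: str

def authorize(req: Request) -> bool:
    """Never trust by network location: verify identity first, then
    check least-privilege permission for every single request."""
    if not req.token_valid:           # authentication comes first
        return False
    allowed = ROLE_PERMISSIONS.get(req.role, set())
    return req.action in allowed      # then least privilege

print(authorize(Request("amy", "operator", True, "read")))    # True
print(authorize(Request("amy", "operator", True, "write")))   # False
print(authorize(Request("bob", "admin", False, "write")))     # False
```

Note that being "inside" (having a valid role) is never enough on its own: a stale or missing token fails the check even for an admin.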

There isn’t a step-by-step manual that you can follow to implement Zero Trust. It’s a framework. Your environment and business are unique, and your security is no different. No provider alone can help you achieve Zero Trust. It will take a team approach to get your environment genuinely secure.

Zero Trust focuses on seven key pillars. These pillars offer a comprehensive approach to layered security, which will ultimately leave you with the lowest risk. By reviewing each of these areas and implementing a strategy to ensure a good solution is in place, you can better protect your data. Let’s take a closer look at each one of them.

1. User access

It’s critical to ensure that the users accessing your data are who they say they are. It’s also essential that they have the least amount of authority …

Understanding Storage Options for IBM i

In the past, when it came to data center infrastructure and specifically servers, most of the hyped-up innovation focused on the CPU, peripherals, and higher memory ceilings.  For many years disk wasn’t really that sexy.  For most IBM i shops all that mattered was having enough total storage, enough arms to support I/O needs, and a solid RAID configuration for resiliency.  Most customers had spinning 15K hard drives until SSDs (solid-state drives) became more readily available and financially palatable.

SSDs – and more specifically the underlying flash technology they are built on – paved the way for big innovation and massive performance gains at the storage level.  These innovations were so impactful for overall compute performance because storage had become a major bottleneck: CPU and interconnect speeds kept increasing over the years while storage tech stayed fairly stagnant.

Today, flash technology is driving big I/O and drastically increasing the density of storage in the data center.  In this article we are going to look at these drive options: what makes them unique, how they perform, and where they are available.

Below is a diagram we will reference throughout the rest of the article.

SAS HDDs

These drives have much higher latency and longer read/write times and, as a result, are much slower than the other drive options we will talk about here.  These drives use the SCSI protocol to communicate, which we will touch on later. The biggest reason for the slower speed is that these drives have a motor-driven spindle that holds flat circular disks (called platters) coated with a thin layer of magnetic material. Read/write heads positioned over each platter move back and forth as they read and write to disk.  While the platters spin extremely fast – 15,000 RPM (revolutions per minute) – and the heads move rapidly across them, the drive is still subject to physical movement and its limitations.
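The mechanics translate directly into latency. As a rough back-of-the-envelope calculation (ignoring seek and transfer time), the average rotational latency of a 15,000 RPM drive is the time for half a revolution – on average, the head has to wait that long for the right sector to spin underneath it:

```python
# Average rotational latency of a 15,000 RPM drive: on average the
# head waits half a revolution for the target sector to arrive.
rpm = 15_000
seconds_per_revolution = 60 / rpm                      # 0.004 s = 4 ms per turn
avg_rotational_latency_ms = (seconds_per_revolution / 2) * 1000

print(avg_rotational_latency_ms)  # 2.0 ms, before any seek time is added
```

That built-in mechanical wait is exactly what the flash-based options discussed next do not have.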

Suggestion: If you get a chance, Google “slow motion video of hard drive seeking.” It’s incredible to watch them in action.

SAS SSDs

This is our entry point into flash storage.  As you can see in the red-outlined gray box in the above image, all the remaining drive options are built on flash storage. SAS-based SSDs have no moving parts and are essentially memory chips – interconnected integrated circuits.  As a result of not having …