Marc Vadeboncoeur

How IBM i Invokes Your System Startup Program

While working with several customers lately, I have noticed some confusion about exactly how and when IBM i invokes the system startup program that you have defined for your system.

As many of you know, there is a system startup program defined in system value QSTRUPPGM, which you can display using the command DSPSYSVAL SYSVAL(QSTRUPPGM).

The shipped default is the native (base) IBM i operating system startup program QSTRUP in library QSYS, but you can of course use any custom program that you have in any library.  Most IBM i installations have custom Control Language programs that start their environments, and you most likely do as well.  But “how” exactly does this system startup program get called, and specifically “when”?

In the broadest terms, the system startup program gets called whenever the QCTL controlling subsystem is started, which happens in one of the following two scenarios:

  • When the system is IPL’d
  • When the system is brought out of a restricted state by issuing the STRSBS SBSD(QCTL) command

Now, from a system work management perspective, “how” does the startup program get called?  This is accomplished by an autostart job entry in the QCTL subsystem description.  If you display the QCTL subsystem description on your system using the command DSPSBSD SBSD(QCTL) and take option #3 (“Autostart job entries”), you will see the entry that drives this process.

Because of this job entry, which is the shipped default configuration, whenever the QCTL controlling subsystem is started, a job named QSTRUPJD will be started in the subsystem using job description QSTRUPJD in library QSYS.  If you display the QSTRUPJD job description using the command DSPJOBD JOBD(QSTRUPJD) and page down to the “Request data” entry, you will notice that system program QSYS/QWDAJPGM is being called.

What QWDAJPGM does is critical: it directly calls the system startup program that you have defined in system value QSTRUPPGM.  This job description is why you see a batch job named QSTRUPJD running in QCTL calling program QWDAJPGM after every IPL, or after you bring your system out of a restricted state.  Also worthy of note, the QSTRUPJD job description specifies what user profile to use when running the system startup program, the shipped …
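To make this concrete, a custom startup program is just an ordinary CL program whose qualified name you place in system value QSTRUPPGM.  The following is a minimal sketch only; the subsystem names are common examples and MYLIB/APPSTART is a purely hypothetical application program, so adjust everything to your own environment:

```cl
             PGM
             /* Program-level monitor: ignore failures so one bad   */
             /* step does not stop the rest of the startup          */
             MONMSG     MSGID(CPF0000)

             /* Start the subsystems this environment needs         */
             STRSBS     SBSD(QSERVER)
             STRSBS     SBSD(QUSRWRK)
             STRSBS     SBSD(QINTER)

             /* Start TCP/IP and its servers                        */
             STRTCP

             /* Submit any environment-specific startup work        */
             SBMJOB     CMD(CALL PGM(MYLIB/APPSTART)) JOB(APPSTART)
             ENDPGM
```

After compiling a program like this, pointing QSTRUPPGM at it (CHGSYSVAL SYSVAL(QSTRUPPGM) VALUE('pgm library')) is all that is needed for the autostart job mechanism described above to run it.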


Getting Lots of Information Quickly – The Handy QLZARCAPI Tool

There seems to be an endless supply of “Easter Eggs” in IBM i that you can uncover and say to yourself “now that’s a handy thing to have”, and here is one of them…

Many situations can present themselves where you need detailed specifications for an IBM i LPAR running in your environment.  For example, to get pricing and/or licensing for a new 3rd-party software package, the vendor may need to know critical things like “what processor group is the system in”, “what is the LPAR ID”, or “what is the system’s processor feature code”.  Well, there is one very handy IBM i API that you can call from any command line that will give you all that information and more: it’s called QLZARCAPI.

The API is very simple to invoke.  First, call the QCMD command line interface to show the full command line environment:

Next, simply call the API:

The API will produce comprehensive, formatted output right in the command line interface that gives you a boatload of critical system information.
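For reference, the whole procedure boils down to these two commands (the output itself will of course vary by system):

```cl
CALL QCMD            /* bring up the full command entry display       */
CALL QLZARCAPI       /* print system/partition/processor pool details */
```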

The output is logically broken out into three discrete informational groupings:

  1. System Information
  2. Partition Information
  3. Processor Pool Information

Under System Information you can determine:

  • Serial number of the system
  • System model & type
  • Processor feature code
  • Processor group
  • Maximum physical processors in the system
  • Number of configurable processors in the system

Under Partition Information you can determine:

  • Network name of the system
  • Partition name
  • Partition ID number
  • Processor sharing type and sharing mode
  • Minimum processors required
  • Desired processors
  • Maximum processing capacity
  • Entitled (licensed) processing capacity
  • Minimum, desired, maximum, & online virtual processors

Under Processor Pool Information you can determine:

  • Number of virtual processor pools configured
  • Pool ID of current processor pool in use
  • Maximum processing units for processor pool in use

Invoke the QLZARCAPI tool on your system and see for yourself the wealth of information that it provides.  It’s another simple yet powerful tool deep inside of IBM i that is super easy to use and can make your life just a little bit easier.



Permanently Applying PTFs & Why Doing So is Important

Many of you (well, hopefully, all of you!) regularly apply PTFs to your systems to keep them current with fixes from IBM.  Doing so ensures that you will always have the latest code updates from Big Blue to keep your IBM i environments running as problem-free and as securely as possible, and it is simply the tried & true best practice for maintaining fixes for the platform.

What we frequently find with customer systems is that many shops do attempt to keep their PTFs as current as possible (e.g. applying cumulative PTF packages every 12-18 months on average), but the majority of shops are woefully negligent about “permanently” applying their PTFs on a regular basis, simply because they are not aware of the importance of doing so.

From a very fundamental standpoint, there are three distinct code levels on IBM i that you regularly apply PTF fixes to:

  1. The Licensed Internal Code (or “LIC”): all of the MFnnnnn-numbered fixes under product ID 5770999.  The LIC is the code level that interacts with the hardware and provides low-level system functions to the operating system.
  2. The base IBM i operating system: all of the fixes under product ID 5770SS1.  This code level constitutes the base operating system functionality.
  3. The Licensed Program Products (or “LPPs”): all of the fixes for the optionally installed licensed programs on your system.

When you apply new PTFs, you will typically apply them temporarily so that if any of the new fixes introduce operational issues, they can be removed with a RMVPTF command.  The recommended flow is to apply, say, a cumulative PTF package on an IPL with all PTFs set to be applied temporarily, then let the system run for a few weeks to ensure the new fixes did not introduce any problems.  If no problems are found, you can then run the following command to set all PTFs to permanently apply on the next IPL for all code levels (LIC, base operating system, and all LPPs):
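A typical form of that APYPTF invocation is shown below; verify the parameters against the APYPTF command help on your release before running it:

```cl
/* Permanently apply, at the next IPL, all PTFs that are currently  */
/* temporarily applied, across the LIC, base OS, and all LPPs       */
APYPTF     LICPGM(*ALL) SELECT(*ALL) APY(*PERM) DELAYED(*YES)
```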


Running the above APYPTF command after newly applied PTFs have been deemed stable is the step that most shops are not doing on a regular basis, and they should be.

So specifically “why” is permanently applying PTFs …

Is Your System Using Outdated and Insecure SSL/TLS Security Protocol Versions?

Safely sending data over the Internet is critical in this brave new world of widespread cybersecurity vulnerabilities.  When it comes to securely passing data from one system to another, a key requirement is to use encryption standards that are current and do not have widespread known flaws that can be exploited.

On IBM i versions V7R3 and V7R4, the following encryption protocol versions are supported (the actual versions allowed on your specific system depend upon system settings permitting their use):

  • TLS 1.3
  • TLS 1.2
  • TLS 1.1
  • TLS 1.0
  • SSL V3
  • SSL V2

When looking at the above list of currently supported protocols, what’s important to note is that “supported” does not implicitly mean “secure”.  SSL V2, SSL V3, TLS 1.0, and TLS 1.1 all have known vulnerabilities and are therefore now considered insecure.  TLS versions 1.0 and 1.1 (also referred to as “early TLS”) were formally deprecated by the Internet Engineering Task Force (IETF) in early 2021.  Those older versions of the protocol rely on cryptographic algorithms that were compromised by multiple attacks over the past several years, including BEAST, LUCKY 13, POODLE, and ROBOT, and both lack support for current and recommended cryptographic algorithms and mechanisms.  If your shop is supporting/handling credit card transactions, then chances are you already know that the PCI Council announced way back in 2016 that SSL and TLS 1.0 could no longer be used for transmitting credit card data because they are no longer considered secure.

So, is there an “easy” way to determine if your IBM i environment is using any of the older protocols above that are no longer considered safe to use?  Well, as a matter of fact, there is!

IBM has embedded into the Licensed Internal Code (LIC) a very cool automated tool (a LIC macro of sorts) that can be turned on and used to track all SSL/TLS connections that your system is involved with.  Turning this facility on is very easy, takes only a few minutes, and does not require bringing your system down or halting production activity.

To turn on the LIC macro and have it start keeping track of all SSL/TLS protocols being used, simply follow these steps from inside the SST (System Service Tools) menus:

  1. Sign on to SST using the STRSST command
  2. Take option #1 Start a

Need Another Tape Drive? Go Virtual!

The save/restore functionality that comes fully integrated with the IBM i operating system out-of-the-box is legendary; with a wide spectrum of functionality and device support, there really is nothing that you cannot do with the native save/restore architecture on the platform.  With respect to supported devices, the choices are many: you can save to physical tape, to a save file, to a physical RDX drive, etc.  But did you know that you can also save/restore to/from a “virtual” tape drive?

The virtual tape drive functionality in IBM i is built upon the system’s rock-solid image catalog architecture, where “volumes” are built in the IFS to emulate various types of storage (for example, DVD storage for installation disk images).  For tape virtualization, each volume in an image catalog is analogous to a discrete physical tape loaded into a physical tape drive.

Virtual tape drives have some nice advantages that may earn them a place in your environment; here are the main features you may find interesting:

  • They are extremely fast, there is no physical tape that needs to be written to or read from, all virtual tape I/O is to/from disk so overall performance is incredible, and if you have solid-state disks the performance can go from incredible to spectacular
  • There are no physical tape cartridges that need to be handled (“handled” as in inserted, removed, or, dropped on the floor!)
  • Supports all IBM i save commands (except SAVSTG)
  • Can be used as the “source” tape device in a DUPTAP command to a “target” physical tape drive
  • Enables some creative ways to move data from one system to another by FTP’ing the image catalogs used to other systems so they can be mounted on virtual tape drives on those systems and read/duplicated

Creating a virtual tape drive is easy, and there is nothing to purchase or install, everything you need is fully baked into IBM i, just follow these easy steps…

  1. Create the virtual tape drive itself: CRTDEVTAP DEVD(TAPVRT01) RSRCNAME(*VRT)
  2. Vary-on the new virtual device: VRYCFG CFGOBJ(TAPVRT01) CFGTYPE(*DEV) STATUS(*ON)
  3. Create an image catalog that will hold the “virtual tapes” for the virtual tape drive, where each catalog entry is a virtual tape: CRTIMGCLG IMGCLG(TAPVRT01) DIR('/tapvrt01') TYPE(*TAP) CRTDIR(*YES)
  4. Add an image catalog entry for each virtual tape volume that you wish to have (e.g. VOL001, VOL002, etc.): ADDIMGCLGE IMGCLG(TAPVRT01)
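Putting the steps together as one sketch of the sequence (the device name, directory, volume names, and catalog entry parameters here are illustrative; check the ADDIMGCLGE help text on your release for the full parameter list):

```cl
CRTDEVTAP  DEVD(TAPVRT01) RSRCNAME(*VRT)
VRYCFG     CFGOBJ(TAPVRT01) CFGTYPE(*DEV) STATUS(*ON)
CRTIMGCLG  IMGCLG(TAPVRT01) DIR('/tapvrt01') TYPE(*TAP) CRTDIR(*YES)
/* One entry per virtual tape volume you want in the "magazine"    */
ADDIMGCLGE IMGCLG(TAPVRT01) FROMFILE(*NEW) TOFILE(VOL001) VOLNAM(VOL001)
ADDIMGCLGE IMGCLG(TAPVRT01) FROMFILE(*NEW) TOFILE(VOL002) VOLNAM(VOL002)
/* Attach the catalog to the virtual drive so the volumes are usable */
LODIMGCLG  IMGCLG(TAPVRT01) DEV(TAPVRT01) OPTION(*LOAD)
```

Once the catalog is loaded, the virtual drive can be named on save commands just like a physical tape device (e.g. SAVLIB ... DEV(TAPVRT01)).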

Need a New HMC? Here’s Why a “Virtual” HMC May Be a Very Good Choice

Many (indeed the majority) of our customers have an HMC (Hardware Management Console) appliance to manage their IBM Power systems.  Whether it is used to manage multiple partitions (LPARs) or even a single-partition machine, the HMC is critical to the IBM i technology ecosystem.

HMC technology has very much evolved through the years, running on different processors and on a code base that has undergone constant change as IBM Power hardware and virtualization technologies have become more and more advanced, and HMC technology needed to keep pace to support those advancements.

It wasn’t too long ago that an order from IBM for a new HMC meant getting an IBM xSeries (Intel processor based) server “appliance” where the HMC software was pre-loaded at the factory; it simply needed to be racked, cabled up, and configured for use.  Times have changed a bit: ordering a new HMC now gets you an IBM Power-based server appliance, not an Intel-based one, and you now have the ability to go “virtual”.  The virtualized option is what we’ll be exploring a bit in this article.

First, a very quick general summary of what an HMC is: it is a Linux-based appliance that physically connects to the service processor (a.k.a. “FSP” or “server firmware”) of your IBM Power system, allows you to manage that physical system and its logical partitions, and provides console access to all partitions running on that system.  The chart below from IBM’s documentation shows an example HMC managing AIX, IBM i, and Linux partitions running on an IBM Power system:

The HMC is a classic “one-trick pony”: it is a closed, locked-down appliance with only one (but very important) role, which is to manage one or multiple IBM Power systems, and the virtualization/configuration environments of all partitions running on those systems, from a single pane of glass.

For many years, getting an HMC meant getting a dedicated physical box from IBM that ran the Linux operating system and the HMC software.  But with the widespread adoption of Intel-based virtualization technologies (like VMware) in just about every organization that has an IBM Power system, you now have a great alternative to installing the physical appliance: the Virtual HMC.

You can purchase the IBM Virtual HMC software to run on multiple supported virtual machine hypervisors, and currently, the supported x86 hypervisors (for IBM product code 5765-HMW) are KVM 2.5.0 on …

PWRDWNSYS Taking Forever After a Full System Save?

Quite frequently we encounter customers who explain to us that when they execute a PWRDWNSYS (Power Down System) command immediately following a full system save the system appears to “hang” and take forever to power down.

The scenario is always the same: they did a full system save (for example, a GO SAVE option #21), and as soon as the full save completed they invoked the PWRDWNSYS command (with either RESTART(*YES) or RESTART(*NO) specified; it doesn’t matter).  The system then appears to “hang”, displaying SRC code “D6000298” for the partition for an extended period of time.  Once the SRC code disappears, the system powers down shortly thereafter.

We see this condition primarily on systems that have a large number of objects in the IFS (often, in the millions!) and large amounts of main memory allocated to the system/LPAR.

So, why does this happen?  This happens by design, and here’s why…


Save operations on IBM i create changed pages in main system memory, and once a page in main memory is changed as a result of the save operation, it needs to be written back out to disk so the change isn’t lost if the system is shut down.  If you are backing up an IFS that has millions of files in it, that could very well mean that when the save of the IFS is finished, you have millions of pages sitting in main system memory that need to be written back out to disk before the system can be safely shut down; this is integral to the storage management architecture of IBM i.  The system SRC code “D6000298” comes from the storage management function, and it signifies that the system is currently doing its job, moving pages in main storage (main memory) back out to disk.

Now, how can you speed up your system’s power-downs after you do a full save?


Well, the first way is obvious: simply try to avoid invoking the PWRDWNSYS command immediately after you save your system, especially when your system has a large number of files in the IFS.  But that isn’t always a practical (or realistic) resolution.  The most effective approach is to have storage management use a faster method to move all those changed pages in main memory back out to disk before the PWRDWNSYS command executes, and that …

Why Sharing the /QDLS Directory in NetServer Should Be Avoided

Many IBM i installations use the NetServer facility of the operating system to present directories in their system’s IFS as standard SMB file shares on their network, shares that can be accessed by Windows client PCs and any other SMB clients that may need to access IFS directories.  It is very common in many shops for customers to have a share directly on the /QDLS folder, but what many don’t know is that directly accessing files in /QDLS via a standard NetServer file share can be problematic.

Why is it not a good idea to share /QDLS as a NetServer file share?

It is because of multithreading.  NetServer by default is shipped as a multithreaded facility for optimal performance, and because /QDLS is a very old directory technology that cannot support multithreading, any file share connections to /QDLS must be single-threaded.  If an SMB client (e.g. a Windows PC) makes its initial SMB connection to the system via a share on /QDLS then the NetServer server-side connection will be a single-threaded job, and any additional accesses to other IFS directories (e.g. outside of /QDLS) will also be funneled through that single-threaded connection.  If an SMB client makes its initial SMB connection to the system via a share on a normal IFS share (“normal” = a share on a directory outside of /QDLS) then the NetServer server-side connection will be a multithreaded job and any requests to access /QDLS after that initial connection will fail, and therein lies the fundamental problem with /QDLS NetServer shares.

So, how do you avoid encountering these connectivity issues with shares on /QDLS?

The best-practice approach is to not have a share on /QDLS in the first place.  If you have code running on your system now that requires the /QDLS folder (e.g. a population of old applications that execute CPYTOPCD commands to create ASCII copies of database files in /QDLS), then a good workaround is to keep using the /QDLS folder, but after your application places files there, have it take the additional step of moving those files from /QDLS to a normal IFS directory (outside of /QDLS).  That directory can then be accessed via its own SMB share, which will be serviced by a multithreaded connection on the server side.
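As a sketch of that workaround, with entirely hypothetical library, file, folder, and directory names (and assuming '/shares/exports' is an IFS directory you have already created and shared):

```cl
/* Legacy step: create the ASCII copy in a /QDLS folder as before   */
CPYTOPCD   FROMFILE(MYLIB/ORDERS) TOFLR(EXPORTS) FROMMBR(ORDERS) +
             REPLACE(*YES)
/* New step: move the document out of /QDLS to a normal IFS         */
/* directory that is served by a multithreaded NetServer share      */
MOV        OBJ('/QDLS/EXPORTS/ORDERS') +
             TOOBJ('/shares/exports/orders.csv')
```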

If you must continue to use a share on /QDLS for whatever technical/application reasons and …

System Environment Variables for Controlling QNTC Behavior

Many of you, most likely the majority of you, are using IBM i’s NetServer facility “server side” to present directories in your system’s Integrated File System (IFS) to clients on your network as normal network file shares using the SMB protocol standard.  If you are using NetServer in this manner, you are also probably using the “client side” of NetServer, called “QNTC”, where your IBM i acts as an SMB “client” and reads/updates file shares on your network that are hosted by other servers (other IBM i systems, Windows servers, etc.).

Many shops use the QNTC functionality of NetServer as a nice easy way to get files to or get files from Windows servers in their environment, and the purpose of this article is to make you aware of some little-known system environment variables that control some basic fundamental behavior of QNTC on your system.

To determine what system-level environment variables are defined in your IBM i environment, simply enter the command WRKENVVAR *SYS on any command line, and the screen below will appear showing all of your system-level variables (if any).  System environment variables (WRKENVVAR) are very much like system values (WRKSYSVAL) in that they provide a “switch” of sorts that lets you control all kinds of system behavior; here we will focus on the variables related to QNTC.
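As an aside, system-level environment variables are created with the ADDENVVAR command (and changed with CHGENVVAR); the variable name below is purely illustrative:

```cl
ADDENVVAR  ENVVAR(MY_SETTING) VALUE('1') LEVEL(*SYS)
WRKENVVAR  LEVEL(*SYS)   /* confirm the new variable appears */
```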


Prior to version V5R4 of IBM i, the default QNTC behavior was to search for all SMB file shares on your network and only present a file server to your system if it was accessible via TCP/IP.  This ensured that QNTC presented only shares that were accessible, but this accessibility “pre-check” exacted a performance penalty whenever you accessed the QNTC file system from IBM i (e.g. with a WRKLNK '/QNTC' command).

To speed things up, IBM changed the default QNTC behavior as of V5R4 to skip this accessibility pre-check.  As a result, QNTC may present shares that cannot be accessed, which can be confusing in network environments with many file shares, where the pre-check was helpful in automatically weeding out the SMB shares that are not accessible from your system.  If you want to re-enable this accessibility pre-check, simply add the environment-level variable QZLC_SERVERLIST and set it to a value of

Tailoring the Operational Assistant Backup Exit

I was recently at a customer site in Texas installing a brand new POWER9 system, and along with their beautiful new P9 box, I also installed a new IBM tape library device to replace an older standalone single cartridge tape unit that they had been using to back up their old POWER6 system.

The customer utilizes the IBM i native (built-in) RUNBCKUP command provided by IBM’s longtime Operational Assistant facility (GO ASSIST) to perform their daily and weekly backups. They wanted to take advantage of the multi-cartridge magazine capacity and autoloader feature of their new IBM tape library to be able to load a bunch of cartridges at the beginning of every week and simply have the tape library load a new tape out of its magazine inventory after the completion of each backup. That way there would always be a new tape in the drive ready to go with no daily manual loading required. Pretty standard stuff…

Controlling the Outbound IP Address Used with a Virtual IP Address

We recently installed a new POWER9 system at a customer site and migrated their old system to it, and the customer wanted to take advantage of multiple network switches in their data center to provide some level of network connectivity redundancy for their new system to protect against a network switch failure.

The simple solution was to use a virtual IP address implementation, where we created a virtual IP TCP/IP interface that sits on top of two physical interfaces that handle the network traffic (in a recent newsletter I described the exact steps on how to do this; very easy to do!).  What we did was create a virtual IP interface with the same address as the IP address of the old system, plus two real (physical) TCP/IP interfaces that were defined to the virtual IP interface as “preferred interfaces”, with each physical interface plugged into a different network switch; this is the norm for a typical virtual IP address configuration.…
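A sketch of that configuration follows; the line description names and IP addresses are entirely made up for illustration, and you should confirm the PREFIFC parameter details in the ADDTCPIFC help on your release:

```cl
/* Two real interfaces, one per physical line (each line cabled    */
/* to a different network switch)                                  */
ADDTCPIFC  INTNETADR('192.168.10.11') LIND(ETHLIN01) +
             SUBNETMASK('255.255.255.0')
ADDTCPIFC  INTNETADR('192.168.10.12') LIND(ETHLIN02) +
             SUBNETMASK('255.255.255.0')
/* The virtual IP that clients actually target, with the two real  */
/* interfaces listed as its preferred interfaces                   */
ADDTCPIFC  INTNETADR('192.168.10.10') LIND(*VIRTUALIP) +
             SUBNETMASK('255.255.255.255') +
             PREFIFC('192.168.10.11' '192.168.10.12')
```

With this in place, if the switch serving one physical interface fails, traffic to the virtual address can continue over the surviving interface.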