Below is a table of the major group PTFs for the last few releases. This is what we are installing for our customers on iTech Solutions Quarterly Maintenance program.
This newsletter includes:
We received some great Valentine’s Day gifts from our IBM i machines all over the world. I think it was how we treated them, but perhaps it is because at iTech Solutions we know what to do, when to do it, and how to do it when it comes to IBM i? That is a good reason. Just so you know, the feeling is mutual, as we love our IBM i machines as well. Maybe it was our red shirts for Valentine’s Day? In either case, you know that when dealing with iTech Solutions your IBM i will be in good hands.…
Journal management provides a means by which you can record the activity of objects on your system. When you use journal management, you create an object called a journal. The journal records the activities of the objects you specify in the form of journal entries.
The primary benefit of journal management is that it enables you to recover the changes to an object that have occurred since the object was last saved. This ability is especially useful if you have an unscheduled outage such as a power failure.
You can journal the objects that are listed below:
Libraries, Database physical files, Access paths, Data areas, Data queues, and Integrated file system objects (stream files, directories, and symbolic links).
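To illustrate that recovery flow, the sketch below restores a journaled physical file from its last save and then reapplies the changes recorded in the journal since that save. The names MYLIB, MYFILE, MYJRN, and the tape device TAP01 are hypothetical; substitute your own. Note that the journal receivers holding the entries since the save must still be on the system (or be restored first) for the apply to succeed.

RSTOBJ OBJ(MYFILE) SAVLIB(MYLIB) DEV(TAP01) OBJTYPE(*FILE)
APYJRNCHG JRN(MYLIB/MYJRN) FILE((MYLIB/MYFILE)) FROMENT(*LASTSAVE) TOENT(*LAST)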
The CRTJRN command creates a journal as a local journal with the specified attributes and attaches the specified journal receiver to the journal. Once a journal is created, object changes can be journaled to it or user entries can be sent to it.
The WRKJRNA command displays or prints the creation and operational attributes of a journal, including the name of the journal receiver currently attached to it. From the primary display, you can select options to display the names of all objects currently journaled to the journal, the names of all remote journals associated with it, detailed information about a remote journal, the receiver directory, or detailed information about a journal receiver, or to delete receivers from the receiver directory.
The CHGJRN command changes the journal receiver, the journal message queue, the manage receiver attribute, the delete receiver attribute, the receiver size options, the journal state, minimized entry-specific data, journal caching, the journal receiver’s threshold, the journal object limit, the journal recovery count, or the text associated with the specified journal. The command attaches one journal receiver to the specified journal, replacing the previously attached journal receiver. The newly attached journal receiver begins receiving journal entries for the journal immediately.
The CRTJRNRCV command creates a journal receiver. Once a journal receiver is attached to a journal (with the Create Journal (CRTJRN) or Change Journal (CHGJRN) command), journal entries can be placed in it.
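Putting the four commands above together, a minimal journaling setup might look like the following sketch. MYLIB, MYJRN, RCV0001, and MYFILE are hypothetical names, and the threshold value is just an example:

CRTJRNRCV JRNRCV(MYLIB/RCV0001) THRESHOLD(100000)           /* Create the receiver first */
CRTJRN JRN(MYLIB/MYJRN) JRNRCV(MYLIB/RCV0001)               /* Create the journal, attach the receiver */
STRJRNPF FILE(MYLIB/MYFILE) JRN(MYLIB/MYJRN) IMAGES(*BOTH)  /* Start journaling a physical file */
WRKJRNA JRN(MYLIB/MYJRN)                                    /* Review the journal's attributes */
CHGJRN JRN(MYLIB/MYJRN) JRNRCV(*GEN)                        /* Swap to a new, system-generated receiver */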
More from this month:
Over the past few years, information technology has embraced cloud environments more and more. While many will say that cloud adoption is steadily increasing among both large and small companies, there are still some end users we run into who either don’t understand the benefits that cloud adoption can bring them or simply don’t understand what cloud is… like my mother.
A joke I once heard from an IBMer who had bought his mother a Power system was that she didn’t know what to do with it… His answer was backup, backup, backup. Now the joke needs to be turned around: when you tell your mother that you bought her a new Power system, she is going to say… “Well, where is it and what do I do with it?”… and you will say… “It’s in the cloud, but you still need to backup, backup, backup.” Explaining what a cloud environment is to my mother was like trying to explain to my father how they got all those TV channels into that thin black cable wire.
Let’s tackle some of the basics of cloud terminology and relate them to something that your mother might be able to understand.
Let’s start with the high-level terminology:
Simply strip out the “as a” and consider that the term “Service” simply means that you are paying someone else to supply that function and you are left with…
Let’s replace this term with hardware, because that is what Infrastructure is. Your mother might have a Windows or Apple desktop, laptop, or tablet; that is her infrastructure. In this case, these devices provide storage (data, files, documents), processing capabilities from the chip on the “motherboard,” and physical connectivity to the …
Many of you (well, hopefully, all of you!) regularly apply PTFs to your systems to keep them current with fixes from IBM. Doing so ensures that you will always have the latest code updates from Big Blue to keep your IBM i environments running as problem-free and as securely as possible, and it is simply the tried-and-true best practice for maintaining fixes on the platform.
What we frequently find with customer systems is that many shops do attempt to keep their PTFs as current as possible (e.g., applying cumulative PTF packages every 12-18 months on average). What we have also found, however, is that the majority of shops are woefully negligent about “permanently” applying their PTFs on a regular basis, simply because they are not aware of the importance of doing so.
Fundamentally, there are three distinct code levels on IBM i that you regularly apply PTF fixes to: the Licensed Internal Code (LIC), the base operating system, and the licensed program products (LPPs).
When you apply new PTFs, you will typically apply them temporarily, so that if any of the new fixes introduce operational issues they can be removed with the RMVPTF command. The recommended flow is to apply, say, a cumulative PTF package on an IPL with all PTFs set to be applied temporarily, and then let the system run for a few weeks with the temporarily applied PTFs to ensure the new fixes did not introduce any problems. If no problems are found, you can run the following command to set all PTFs to permanently apply on the next IPL for all code levels (LIC, base operating system, and all LPPs):
APYPTF LICPGM(*ALL) APY(*PERM) DELAYED(*YES) IPLAPY(*YES)
Running the APYPTF command above, once newly applied PTFs have been deemed stable, is the step that most shops are not taking on a regular basis, and they should be.
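If you want to verify where you stand before (or after) taking that step, the DSPPTF command lists each PTF along with its status, such as Temporarily applied or Permanently applied. As a sketch, checking the base operating system (5770SS1 on IBM i 7.x) looks like this:

DSPPTF LICPGM(5770SS1)

You can also run DSPPTF with no parameters to work through all installed licensed programs.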
So specifically “why” is permanently applying PTFs …
A few months ago we talked about two very simple core components of our IBM i penetration test: exploiting firewall NAT rules and default passwords.
I wanted to build upon that by showing you other ways to properly lock down your system, and the best way to do that is to show you how easily it can be exploited.
Now, most people look at a user profile and think about some of the attributes defined within, such as special authorities, initial menus, password expiry intervals, etc. Those are certainly important, but I want to focus on the object itself.
By default, IBM i user profile objects are given the *public authority of *exclude. That means that users without *allobj special authority or private authority to the user profile objects have no access to view a user profile or determine anything about the aforementioned attributes. Unfortunately, there are always a few profile objects on a system that have *public authority of *use, *change, or *all. And guess what? Those profile objects usually have some sort of special authority. I’m not sure just why that is, but my guess is that generally they’re vendor-created profiles. A vendor gets on the system and has the ability to create user profiles and set them up for adopted authority for their software… but they just didn’t really know what they were doing. They end up giving the new user profiles special authorities and then setting up the profile object as something other than *public *exclude.
So, when we come into the picture on a white box penetration test to check if we can elevate our authority, the first thing I attempt to do is see what user profiles are not set up as *public *exclude. Why would I do that? It’s a simple vulnerability I want to exploit. It’s an old and simple hack, but a very effective one if you’re not buttoned up. And so you’re aware, there’s a very similar one regarding Job Descriptions when you’re at QSECURITY level 30…but that’s another topic for another day. Quick point…if you’re at QSECURITY 30, you need to be at 40.
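Checking your current security level takes one command, and raising it is one more. As a sketch (the change to QSECURITY takes effect at the next IPL, and you should test your applications, especially anything using adopted authority, before raising it on a production system):

DSPSYSVAL SYSVAL(QSECURITY)                /* Display the current security level */
CHGSYSVAL SYSVAL(QSECURITY) VALUE('40')    /* Raise it to 40; takes effect at the next IPL */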
Back to the original point. If I have *use, *change or *all private authority to a user profile with some special authorities, then I can simply submit jobs and have them run as that particular user.
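To make that concrete, here is a sketch of the exploit using a hypothetical poorly secured profile named VENDORPRF and a hypothetical program MYLIB/MYPGM. First confirm the profile object’s authorities, then submit a job that runs under it:

DSPOBJAUT OBJ(QSYS/VENDORPRF) OBJTYPE(*USRPRF)     /* Confirm *PUBLIC is not *EXCLUDE */
SBMJOB CMD(CALL PGM(MYLIB/MYPGM)) USER(VENDORPRF)  /* The submitted job runs as VENDORPRF */

Because the submitted job runs with VENDORPRF’s special authorities, whatever that profile can do, the attacker can now do.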
I can do a wrkusrprf *all …
Besides overseeing the core information technology (IT) infrastructure, the CIO is also a visionary: one who needs to catapult the company to the forefront of the competition by deploying the latest technologies and best-of-breed solutions, making sure that technology aligns with the company’s vision and business objectives. The CIO is the vital link in bridging the gap between IT and the rest of the senior executives in the enterprise.
The Chief Financial Officer (CFO), Chief Operating Officer (COO), Chief Executive Officer (CEO), and other senior executives in the company may not fully comprehend the core technologies that drive the company forward. Especially when discussing the IBM Power systems solution, many have the perception that the Power System is a dated system that trails behind other advanced technologies.
The following is an approach to present a high availability (HA) hybrid cloud Power solution to senior executives, highlighting the financial and technology engines of the Power system in propelling the company ahead of its competitors. Strategic requirements of the company include the need to modernize the legacy applications and integrate Linux applications with AI analytics on Big Data.
The current hybrid cloud solution consists of an on-premises Power System, an IBM FlashSystem, a virtual tape library (VTL) for backups, and an HA system hosted in iTech’s i-In-The-Cloud (IITC), running HA replication software on both the on-premises and hosted Power systems in the IBM i environment.
Separating the capital expense (CAPEX) and the operating expense (OPEX) is paramount in working with the CFO to meet the corporate financial objectives. CAPEX includes the cost of all hardware and software; OPEX is the cost of hardware and software maintenance plus services to install and implement the solution. CAPEX and OPEX generally come in two different budgets and the CFO will need to know the CAPEX to determine the asset depreciation schedule. The solution is financed with a 5-year IBM Global Financing Full Payout Option (FPO).
The following table highlights the layout of the solution components in CAPEX and OPEX format.
The total cost of ownership (TCO) remains the best and most proven way to justify the solution. Total cost of acquisition (TCA) accounts only for the initial cost of a system. TCO reveals the true cost of the solution, looking beyond just the initial hardware and …