Question: Kernel Tuning for AIX

Rob Fitzpatrick Sponsor
Hi all,

A client of mine is experiencing some read performance issues and so is revisiting everything about their system configuration.

Production DBs are 180 GB and 90 GB. -B is 50 GB and 19 GB respectively. They are on AIX 7.1, OE 11.6.1. Structure is good, Type II areas. In short, startup parameters are all reasonable. Storage is on IBM SAN. Tuning for incremental gains is possible but I have dealt with the low-hanging fruit. ABP is in use but is not configured properly; I will be dealing with that separately. One table in particular is 52 GB and I have reason to believe it has a high degree of logical scatter and needs a D&L; that is in the planning stages.

They are asking me specifically about tuning OS kernel parameters. As my experience is mostly on Linux and Windows I don't have any insights to offer. Are there any suggestions you can offer on settings you typically apply for a database workload? Thanks in advance.


Sorry to hear about the SAN storage.

AIX is tricky. In some ways it is very simple and almost self-tuning. The things to pay attention to in one release are often made irrelevant in later releases.

You need some data about how the thing is configured -- it is probably a shared "LPAR" (virtual partition) and the configuration of that LPAR can matter a lot. If you download the most recent ProTop I suspect that you will find bin/getaix.shx to be useful. Some of the commands might require root privilege (it depends on how paranoid the sysadmin is) so you might need to get someone to run it for you.

Make sure that they have NMON. It's like ProTop -- but for the OS ;) It is *very* good on AIX.

Fire up NMON and watch the fiber adapters for a little while ("a" or "^"). Most AIX servers have at least 2 and the workload should be more or less balanced between them. But sometimes someone messes up and all the traffic is going over one while the others twiddle their thumbs. It's silly but it is sometimes a quick win.

nmon will also give you some useful insight into memory use and CPU utilization -- if you have bunches of CPU, relatively low usage, but higher %sys than %usr, then you might have an LPAR that has "too many cores". You might also have an LPAR whose "entitlement" is much smaller than your actual demand (if I recall, the default is something silly like 0.25 of a vCPU). You really want the LPAR to be "right sized". If you're running 70% to 80% CPU busy and not much %sys, that's a good place to be.


Active Member
AIX has filemon, which is very handy when trying to track down I/O issues. Windows has something similar, so filemon may feel familiar if you have used that.


Active Member
Apologies for the stream of consciousness post.. doing this in between meetings/tasks

There are some vmo and ioo parameters that should be set for AIX 7.1. Take a look at the documentation and the IO/memory usage on the box before changing these though..

Run a quick iostat sample first to get some gory details about service/wait times on the disks.. IBM Knowledge Center

iostat -D 30 1

If you have queue waits then either your SAN is too slow or the queue depths on the drives need to be adjusted (maybe both). The service times are something you will get to argue with the SAN guys about.. because of course their SAN has no latency, no matter what you see.
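As a rough sketch of what to look for in the iostat -D output -- the "queue:" column names below are the real ones, but the numbers are invented for illustration:

```shell
# Invented "queue:" section from iostat -D for one disk (real column names,
# made-up numbers):
queue='queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz  sqfull
         12.3      0.0    100.2      2.0      1.0   350.6'

# avgtime is time spent waiting in the host-side queue; sqfull counts how
# often the service queue filled up. Nonzero sqfull points at queue_depth
# being too small (or the SAN genuinely being too slow).
printf '%s\n' "$queue" | awk 'NR==2 && $6+0 > 0 {printf "queue wait %sms, sqfull %s -> look at queue_depth\n", $1, $6}'
```

On the real box you would feed it the iostat output for each hdisk rather than a canned sample.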

Suggested safe settings for vmo:

For the ioo ones.. the first step would be to run vmstat -v and look for entries with "blocked" in the description. Each one of these relates to an AIX parameter that can get you some big wins.
If you are in an LPAR, run vmstat -vh instead to see some LPAR-specific metrics about hypervisor overhead.
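To illustrate the filtering step, here is a minimal sketch -- the counter names are real AIX vmstat -v counters, but the numbers are made up:

```shell
# Made-up sample of "vmstat -v" output (real AIX counter names, fake numbers):
sample=' 123456 pending disk I/Os blocked with no pbuf
      0 paging space I/Os blocked with no psbuf
   8755 filesystem I/Os blocked with no fsbuf
      0 client filesystem I/Os blocked with no fsbuf
    542 external pager filesystem I/Os blocked with no fsbuf'

# Keep only the nonzero "blocked" counters -- those are the ones worth
# chasing down to a tunable (pbuf and fsbuf shortages each map to their
# own vmo/ioo/LVM parameter).
printf '%s\n' "$sample" | awk '/blocked/ && $1+0 > 0 {print}'
```

On the real box you would pipe vmstat -v itself through the awk instead of the sample.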

There are also some ioo options for jfs and jfs2 read-ahead.. depending on the IO from the app these might need to go up or down. Numbers in the 8-128 range are somewhat sane for most apps, usually smaller if you are really doing random-style IO.

For the logical disks.. make sure the queue_depth is set to something reasonable for what is behind the disks.

To see the setting...
lsattr -E -l <diskid> -a queue_depth

To change the setting...
chdev -l <diskid> -a queue_depth=256 (256 used to be the max.. correct value depends on how many physical disks are under the covers)
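A small sketch of checking the current value before changing anything -- the lsattr output line below is the real format, but the value and the target are made up:

```shell
# Made-up lsattr -E -l hdisk0 -a queue_depth output (real format, fake value):
attr='queue_depth 20 Queue DEPTH True'

current=$(printf '%s\n' "$attr" | awk '{print $2}')
target=64   # pick this based on how many physical disks sit behind the LUN

if [ "$current" -lt "$target" ]; then
  # on the real box this would be:  chdev -l hdisk0 -a queue_depth=$target
  echo "hdisk0: queue_depth $current -> would raise to $target"
fi
```

Iterating that over every hdisk catches the one disk somebody forgot when the SAN was migrated.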

You can also adjust num_cmd_elems on most fibre channel adapters using chdev, but a reboot might be required depending on the exact model.

Rob Fitzpatrick Sponsor
Thanks for the input guys, much appreciated.


New Member
What are the best practices for creating filesystems for a Progress database?

We are facing performance issues on AIX 7.1 with IBM DS8800 storage.

Cringer Moderator
Staff member
I've created a new thread for you Renato. Please stick to that one.