This is the kind of thing you can estimate with a little trial and error. The following was done on 11.6.3 64-bit, on Linux.
Code:
proserve demo -omsize 1:
04/09/18 Status: Shared Resources
18:13:30
Primary object cache (-omsize): Total: 13 Used: 13 Pinned: 0
Secondary object cache (-omsize): Total: 13 Used: 13 Pinned: 0
Shared memory allocated: 17476K
proserve demo -omsize 1013:
Primary object cache (-omsize): Total: 1013 Used: 233 Pinned: 0
Secondary object cache (-omsize): Total: 1013 Used: 0 Pinned: 0
Shared memory allocated: 17684K (17476 + 208)
proserve demo -omsize 2013:
Primary object cache (-omsize): Total: 2013 Used: 233 Pinned: 0
Secondary object cache (-omsize): Total: 2013 Used: 0 Pinned: 0
Shared memory allocated: 17892K (17684 + 208)
proserve demo -omsize 3013:
Primary object cache (-omsize): Total: 3013 Used: 233 Pinned: 0
Secondary object cache (-omsize): Total: 3013 Used: 0 Pinned: 0
Shared memory allocated: 18100K (17892 + 208)
It turns out the smallest possible value you can have for -omsize is 13, even if you specify less; if you specify 0 the DB won't start. Every increase of 1,000 entries increases shared memory size by 208 KB (212,992 bytes), so each entry takes about 213 bytes of memory. Pretty inexpensive.
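The per-entry arithmetic can be checked directly from the promon figures above:

```python
# Per-entry memory cost of -omsize, using the "Shared memory allocated"
# deltas from the promon output above: each +1000 entries grew shared
# memory by the same amount.
delta_kb = 17684 - 17476           # growth per +1000 entries: 208 KB
delta_bytes = delta_kb * 1024      # 212,992 bytes
print(delta_bytes / 1000)          # 212.992 -> about 213 bytes per entry
```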
A few years ago at a conference, I was in a DB-tuning workshop. One of the exercises demonstrated a measurable overhead on read performance from having -omsize tuned too low. Was it a contrived test? Maybe. Does every environment suffer from significant latch contention? Definitely not. Is this the "lowest-hanging fruit" for optimization in any given environment? Probably not.
But I don't see that as a reason not to tune it anyway. I have seen the effect of setting it in my clients' databases: OM latch locks go from many millions to n, where n is the number of storage objects. As I see it, latch locks are work and work isn't free, so why do it if you don't have to?
The nice thing about -omsize is that it's easy to tune. There is no art to it, as there is with -spin or some other parameters. There is no downside to tuning it appropriately, apart from a small shared-memory cost. So I don't worry about how much it helps me. I just set it and spend my time thinking about things that merit more thought. Maybe in a future release, Progress will automagically set the right value and this discussion will be moot. But until then:
- Count the storage objects (e.g.: select count(*) from _storageobject; )
- If count <= 1024 (the default value), you don't need to set -omsize.
- If count > 1024, set -omsize <count>.
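That recipe reduces to a trivial check. A sketch (omsize_needed is my own name, not Progress tooling; the count would come from running the _storageobject query above through a SQL client):

```python
# Sketch of the recipe above. 1024 is the default -omsize per the text;
# the storage-object count comes from querying _storageobject.
def omsize_needed(storage_object_count, default=1024):
    """Return the -omsize value to set, or None if the default suffices."""
    if storage_object_count <= default:
        return None                      # default -omsize is large enough
    return storage_object_count          # proserve db -omsize <count>

print(omsize_needed(233))    # None: the demo DB fits in the default
print(omsize_needed(2013))   # 2013
```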
It is worth noting that -omsize sets the size (number of entries) of two caches, as shown in the promon output above: the primary object cache and the secondary object cache. If your primary cache is large enough to hold all the storage objects, they are accessed without latching. If it isn't large enough, it is filled and the remainder are cached in the secondary cache. In that case an LRU list is maintained and processes must lock the OM latch to update it. If objects are added to the database online, they are always added to the secondary cache, even if there are empty slots in the primary. Access to the secondary cache always involves LRU/latch access.
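A toy model of that two-cache behavior, strictly as I understand it from the above (not actual engine code; every name here is made up for illustration):

```python
# Toy model of the described behavior: primary-cache hits are latch-free;
# secondary-cache access updates an LRU list under the OM latch; objects
# added online always land in the secondary cache.
from collections import OrderedDict

class ObjectCacheModel:
    def __init__(self, omsize):
        self.primary = {}               # filled at startup, latch-free access
        self.secondary = OrderedDict()  # LRU list, guarded by the OM latch
        self.omsize = omsize
        self.om_latch_locks = 0

    def load_at_startup(self, objects):
        for obj in objects[:self.omsize]:       # fill the primary cache
            self.primary[obj] = True
        for obj in objects[self.omsize:]:       # overflow goes to secondary
            self._touch_secondary(obj)

    def add_online(self, obj):
        # Added online -> secondary cache, even if primary slots are free.
        self._touch_secondary(obj)

    def access(self, obj):
        if obj in self.primary:
            return                              # latch-free hit
        self._touch_secondary(obj)

    def _touch_secondary(self, obj):
        self.om_latch_locks += 1                # LRU update takes the OM latch
        self.secondary[obj] = True
        self.secondary.move_to_end(obj)
        while len(self.secondary) > self.omsize:
            self.secondary.popitem(last=False)  # evict least recently used
```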
This is my recollection from past conferences, but IANADBEP (I am not a DB engine programmer).