OpenEdge Management - Database - Performance Summary view

Aleksey Krivenya

New Member
Hello,

Could you please explain the information shown in the OEM tool under:
Resources - Database - Operational Views - User Activity - User IO Activity - User Details

For example, I see this for user #19:

I/O Activity:
DB Reads 1 530 154
Logical Reads 146 115 415

But the Table Activity section below contains different info:
Reads (summary for all tables): 40 559 401
Indexes (summary): 59 012 578

Shouldn't those numbers be the same?

PS: Maybe there is more detailed documentation about this somewhere, because docs.progress.com contains only a basic description.
 

TomBascom

Curmudgeon
There are many numbers in your post. Which ones do you think ought to be the same?

Just guessing but a common misperception is that "db reads" are record reads. They are not. "db reads" is better read as "disk reads". A "db read" occurs when the database reads a block from disk. Which it will do if a "db access" (or "logical read") tries to access a block that is not in memory.

A "db access" is also not the same as a record read. "DB access" occurs when a block is referenced as a result of a CRUD request. The vast majority of these requests are record reads but a simple request at the 4gl level may result in multiple "db access" events. For instance, most FIND statements are going to result in at least one access to an index block and another access to a data block. So you will usually see the overall ratio of record reads to db access be about 1:2.x
 

Aleksey Krivenya

New Member
Thank you very much for this clarification.

I assumed that DB Reads (or Logical Reads) at the top of the User Details page was a summary of the total reads of all tables and indexes...
I'm trying to guess the meaning of these figures, because I haven't found any documentation yet.

And could you please briefly explain the meaning of Hit%?
 

TomBascom

Curmudgeon
The hit% is the percentage of "db access" operations that do NOT result in a "db read".

IOW, what percentage of logical activity is performed in memory?

If you read 100 records, you have probably done about 210 "db access" operations (100 record block reads and a few more than 100 index entry reads). If the records were densely packed, say 64 per block, then you probably referenced 4 or 5 distinct data and index blocks. If none of that data had been previously read by anyone else, then none of those blocks would be in memory when you started, so your hit% would probably be about: ( ( 210 - 5 ) / 210 ) * 100 = 97.6% (assuming that -B is big enough to prevent your blocks from being kicked out while you are still reading the data). If some of those blocks were already in memory then your percentage would be higher.
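
If you want to compute the overall number yourself rather than reading it off the OEM screen, the classic calculation against the _ActBuffer VST looks something like this (a sketch, assuming the standard VST field names; note that the counters accumulate from database startup, or from when they were last cleared):

Code:
/* sketch: overall buffer hit% from the _ActBuffer activity VST */
DEFINE VARIABLE hitPct AS DECIMAL NO-UNDO.

FIND FIRST DICTDB._ActBuffer NO-LOCK.

IF _ActBuffer._Buffer-LogicRds > 0 THEN
    hitPct = ( ( _ActBuffer._Buffer-LogicRds - _ActBuffer._Buffer-OSRds )
               / _ActBuffer._Buffer-LogicRds ) * 100.

DISPLAY
    _ActBuffer._Buffer-LogicRds LABEL "Logical Reads" FORMAT ">,>>>,>>>,>>9"
    _ActBuffer._Buffer-OSRds    LABEL "OS Reads"      FORMAT ">,>>>,>>>,>>9"
    hitPct                      LABEL "Hit%"          FORMAT ">>9.99"
    WITH SIDE-LABELS.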

Generally speaking you strive for that percentage to be very close to 100% and you achieve that by allocating large amounts of memory to the -B startup parameter.

This is important because disk IO can be literally millions of times slower than memory operations. And, no, using an "all flash" SAN doesn't change that.

Very small improvements in hit% require very large increases in -B. (The relationship between hit% and -B is an inverse square relationship.)
 

TomBascom

Curmudgeon
BTW, 97.6% is not "very close to 100%". 97.6% is actually pretty crappy for a hit%.

Using the example above, if we assume a blazingly fast SAN with a 2ms read response time, reading 100 records will take roughly 10.0001ms. Of that, 10ms is waiting on disk; the 0.0001ms portion is the operations in memory.

If the database is on an internal SSD with a 0.1ms response time then you could expect the 100 records to take 0.5001ms. Better, but still dominated by the disk reads.

Now extend that analysis to a large database reading millions of records per second. Minimizing disk IO has a very significant impact on throughput.
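
To make that concrete, here is the same back-of-the-envelope arithmetic as a small ABL illustration (the 1,000,000 logical reads and the 2ms latency are just example numbers, not measurements):

Code:
/* illustration: time spent waiting on disk for 1,000,000 logical reads
   at various hit%, assuming a 2ms per-block read response time         */
DEFINE VARIABLE pcts     AS DECIMAL NO-UNDO EXTENT 4
    INITIAL [ 97.6, 99.0, 99.9, 99.99 ].
DEFINE VARIABLE logicRds AS INT64   NO-UNDO INITIAL 1000000.
DEFINE VARIABLE readMs   AS DECIMAL NO-UNDO INITIAL 2.0.
DEFINE VARIABLE diskRds  AS DECIMAL NO-UNDO.
DEFINE VARIABLE i        AS INTEGER NO-UNDO.

DO i = 1 TO 4 WITH FRAME f DOWN:
    diskRds = logicRds * ( 100 - pcts[i] ) / 100.
    DISPLAY
        pcts[i]                 COLUMN-LABEL "Hit%"        FORMAT ">>9.99"
        diskRds                 COLUMN-LABEL "Disk Reads"  FORMAT ">>>,>>9"
        diskRds * readMs / 1000 COLUMN-LABEL "Sec on disk" FORMAT ">>,>>9.99".
END.

At 97.6% that is 24,000 disk reads and roughly 48 seconds of disk wait; at 99.9% it drops to 1,000 reads and about 2 seconds.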

FWIW, I am not generally happy until hit% is >= 99.9%. But don't get fooled into thinking that hit% is the only thing that matters or that things are always bad if you have a "poor" value. It is only "a rule of thumb" and there are exceptions and even good reasons why it might be different for your case.
 

Aleksey Krivenya

New Member
And lastly...

Is it possible to get similar (or any other) info about temp-table usage (reads, writes, etc.)? Or are memory operations too fast to monitor?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Is it possible to get similar (or any other) info about temp-table usage (reads, writes, etc.)? Or are memory operations too fast to monitor?

In OE 11.0+ there are VSTs for temp-tables.
https://docs.progress.com/bundle/op...ed-virtual-system-tables-for-temp-tables.html

You'll have to write your own code to collect and parse this data.

<shameless plug>
The free download of ProTop (protop.com/protop-free-download) contains some sample code you can look at to see how to implement TT VST data collection. Look at the files in the lib subdirectory, including ttdebug.i, tttest.p, ttinfo.p.

Within ProTop, once it is configured appropriately (as per the instructions in lib/ttdebug.i), you can hit ctrl+t to see info about ProTop's own TT use, as an example of what this OpenEdge feature could do for your application. Sample output:

[screenshot: sample ProTop temp-table statistics display]
 

TomBascom

Curmudgeon
ProTop also contains lots of code that will give you better insight into your database performance plus a ton of documentation explaining what those metrics mean.

And you can easily entice us to talk about it by asking questions ;)
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Or are memory operations too fast to monitor?
I'll add that temp-table operations aren't necessarily memory operations, when the TT data grows large enough that it is no longer entirely memory-resident. This transition can lead to sudden and severe application performance problems. Doubly so when the session temporary directory (-T) is non-local (which is a worst practice).
 