Reads vs #records

jdpjamesp

Is there a simple way of seeing the number of reads on a table vs the actual number of records in the table? I know I can use the VSTs to see the number of reads, but I want to be able to see which of those tables have a small number of records. I know one table, for example, has about 500 records in it but has had 8.5bn reads :mad: I'm working on tracking down where this diabolical thing is happening, but would love to identify any other small tables that are being hammered.

OE10.2A by the way.
 
AFAIK the total number of records in a table is not stored anywhere. If you need it you must roll your own logic and count the records yourself. But when you run a table analysis (proutil <db> -C tabanalys) the record count is part of the report. We run a table analysis on our production database every night and store the figures we're interested in back in the database.
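If you do roll your own, a minimal sketch along these lines would do it - counting each user table via a dynamic query; in practice you'd write the counts to your own table rather than DISPLAY them:

Code:
/* Rough sketch: count the records in every user table.
   Run nightly and store the figures somewhere useful. */
DEFINE VARIABLE hBuffer AS HANDLE NO-UNDO.
DEFINE VARIABLE hQuery  AS HANDLE NO-UNDO.
DEFINE VARIABLE iCount  AS INT64  NO-UNDO.

FOR EACH _File NO-LOCK
    WHERE _File._File-Number > 0
      AND _File._File-Number < 32768:  /* user tables only */
    CREATE BUFFER hBuffer FOR TABLE _File._File-Name.
    CREATE QUERY hQuery.
    hQuery:SET-BUFFERS(hBuffer).
    hQuery:QUERY-PREPARE("FOR EACH " + _File._File-Name + " NO-LOCK").
    hQuery:QUERY-OPEN().
    iCount = 0.
    REPEAT:
        hQuery:GET-NEXT().
        IF hQuery:QUERY-OFF-END THEN LEAVE.
        iCount = iCount + 1.
    END.
    hQuery:QUERY-CLOSE().
    DISPLAY _File._File-Name FORMAT "x(20)" iCount.
    DELETE OBJECT hQuery.
    DELETE OBJECT hBuffer.
END.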

Heavy Regards, RealHeavyDude.
 
Cheers RHD - Will have to get a tabanalys running. That's what I thought would be the case.
 
For your version and what you've asked for, it sounds like what we use is pretty close to what RHD mentioned. We have a db that holds a record count for each table - we run the table stats every week.

This may work - may not though...
If you upgraded to 10.2B, you would see a new field in the _TableStat VST, _TableStat-OSRead. If you have very high record reads and very low OS reads, it can mean the code is scanning a small table numerous times and a better index could be used in the query. Below is a very small table being scanned far too often:

Table Name            Table Reads  OS Reads  Table Creates  Table Updates  Table Deletes
────────────────────  ───────────  ────────  ─────────────  ─────────────  ─────────────
busProcRes             2245585249      8264           2123
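If you want to pull those numbers yourself rather than read them off a report, something like this should work - a rough sketch only, assuming your -tablerangesize startup parameter covers all your table numbers (and _TableStat-OSRead only exists from 10.2B on):

Code:
/* Sketch: tables sorted by logical reads, busiest first. */
FOR EACH _TableStat NO-LOCK,
    FIRST _File NO-LOCK
    WHERE _File._File-Number = _TableStat._TableStat-Id
    BY _TableStat._TableStat-Read DESCENDING:
    DISPLAY _File._File-Name FORMAT "x(20)"
            _TableStat._TableStat-Read
            _TableStat._TableStat-OSRead.
END.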
 
Thanks - it's not so much that it's using bad indexes, actually. The cases I've found so far look like this:
Code:
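/* the small table is re-read once per LargeTable record,
   so its read count explodes even though index usage is fine */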
FOR EACH LargeTable NO-LOCK,
  EACH SmallTable OF LargeTable NO-LOCK:
END.
The index usage is fine, but if you swap the query round to look at the small table first you will greatly reduce the reads.
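i.e. something like this instead (same placeholder names; it assumes LargeTable has an index on the join fields so the inverted OF join can still bracket properly):

Code:
/* drive from the small table: each of its records is read once,
   and the index bracket on LargeTable does the rest */
FOR EACH SmallTable NO-LOCK,
  EACH LargeTable OF SmallTable NO-LOCK:
END.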
 

Wouldn't a high ratio of logical reads to physical reads just indicate successful caching, as opposed to a problem with reading too much? I think whether or not you have too many reads is a function of how many records your code is reading (logically), versus the number you think it should be reading based on your desired business logic and your index bracketing.
 
Re the record count, I agree with the others: just do a tabanalys. Maybe do a couple of them, separated by a week or whatever interval you think appropriate, and diff the counts. Tables that are small and relatively static will remain small from one run to the next. Then get your _TableStat stats, sort by reads, and cross-reference against your previously-compiled list of small static tables.

If you don't have anything keeping you from moving to 10.2B, I would suggest it. Small, static, frequently-read tables (and their indexes) are very good candidates for the Alternate Buffer Pool. (But it's not a substitute for tweaking the code. :))
 