12 hours between backups.

Not having after-imaging enabled is suicidal. Can you afford to throw away a full day of data? (How long between backups?)
What does mfgsys.st look like? ("prostrct list mfgsys" to get a fresh copy...)
Let me guess... all your data is in the schema area?
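For illustration only (paths and sizes are made up), a structure file where everything has landed in the Schema Area looks something like this:

```
b /db/mfgsys.b1
d "Schema Area":6,32 /db/mfgsys.d1 f 1024000
d "Schema Area":6,32 /db/mfgsys.d2
```

A healthier layout would show tables and indexes in dedicated Type II storage areas (area lines with a blocks-per-cluster value, e.g. `d "Data":8,64;512 ...`) rather than piled into area 6.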
Sorry, we are running OpenEdge 64 bit.

You have a 64 bit server. You should get 64 bit OpenEdge to go with it.
Yes, we do now. We originally had a 48-core box running our database and our software vendor told us that the server wasn't powerful enough.

I thought you said that you have 80 cores?
I'm not familiar with this.

Has anyone ever run "update statistics"?
How do I do this?

You should /start/ by changing the BI cluster size to 16MB. Then see how far apart your checkpoints are. If they are under 1 minute apart during busy periods, make it larger.
Thanks. I will make this change.

You change the BI cluster size and/or BI block size with proutil <dbname> -C truncate bi -biblocksize <BI block size in KB> -bi <BI cluster size in KB>.
Try 16 MB for BI cluster size (16384 KB) and 16 KB for BI block size.
This is done with the DB offline.
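Assuming the database is named mfgsys and has been cleanly shut down, the sequence for the values suggested above might look like the following (take a backup first; truncating the BI file discards its recovery notes, so you want a fallback):

```
# Offline backup first, just in case
probkup mfgsys /backup/mfgsys.bak

# 16 KB BI block size, 16 MB (16384 KB) BI cluster size
proutil mfgsys -C truncate bi -biblocksize 16 -bi 16384
```

The paths here are placeholders for your environment.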
Thanks. I will make this change.
I'm thinking of doubling the buffer pool to 1,000,000. Is that significant enough?
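For scale: the memory the buffer pool consumes is the -B value times the database block size. A quick back-of-envelope check, assuming an 8 KB database block size (verify yours before relying on this):

```shell
buffers=1000000   # proposed -B value
block_kb=8        # assumed 8 KB database block size
mem_mb=$(( buffers * block_kb / 1024 ))
echo "-B ${buffers} with ${block_kb} KB blocks needs about ${mem_mb} MB of RAM"
```

That is roughly 7.6 GB of shared memory, so make sure the box can hold it alongside everything else that is running.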
We run solely on SSDs.
I'm pretty sure that they are directly attached.

Is this a RAID configuration? If so, which RAID level? Are they direct-attached or on a SAN?
If you have successfully avoided disk IO you might then have a CPU bottleneck. Having lots and lots of cores is not helpful to databases. In fact it is harmful once you eliminate the other bottlenecks. This is known as the "too many cores problem".
To update SQL statistics, use the 4GL to create a SQL script that updates the metadata for each application table, then use your SQL client of choice, e.g. sqlexp, to execute the script. This KB article has info on how to do it.
I would suggest slightly altering the code in the KB article, so the COMMIT statement is within the FOR block. Like this:
Code:
output to "updatestatistics.sql".
for each _file no-lock where _tbl-type = "T":
    put unformatted "update table statistics and index statistics and all column statistics for PUB." _file-name ";" skip.
    put unformatted "commit work;" skip.
end.
output close.
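Once the script has been generated, it can be executed with sqlexp. The host, port, and credentials below are placeholders for your environment:

```
sqlexp -db mfgsys -H localhost -S 9999 -user sysprogress -password <pwd> \
       -infile updatestatistics.sql -outfile updatestatistics.log
```

Review updatestatistics.log afterwards to confirm every statement committed without errors.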
How could I tell if this is an issue?
Would we be smarter to go with 40 cores instead of hyper-threading to 80?
Okay, I've created this .sql file. It looks like it needs to run while the database is up. Is that true? It doesn't change the indices, just optimizes them, correct?