Question Take a step backwards? 8k block back to 4k - Crazy or not?

Chris Hughes

ProgressTalk.com Sponsor
Hi progressors

We are currently experiencing a strange problem with one of our client-server apps; since upgrading, it has been running awfully slow.

We upgraded our 10.1C 32 bit DB with 4k block size running on Windows 2003 to
10.2B07 64 bit DB with 8k block size running on Windows 2012.

There are loads more resources on the new machine and the -B on the new DB is set high enough for the whole DB to fit into memory. All other parameters are the same. Typically buffer hits are 98 to 100%, no locks, no waits, etc. Memory usage is less than 80% at OS level, and there is no network or CPU bottleneck.

The data was moved using a full manual dump and reload. The only thing we didn't do, which I'm kicking myself for now, is move to Type 2 storage. So our new 8k DB has a .st file that still looks like this:

# generated by PROREST on Tue Apr 09 10:57:37 2013
b .
d "Schema Area":6,32;1 .
So my questions are these:
Is there any chance whatsoever that going backwards to a 4k block size DB would improve the performance?
Would you be able to offer me some good reasoning, for a high-level IT manager, on why keeping the block size at 8k and going Type 2 is far more sensible than going back to 4k?
Sorry - just to add: we also run another Progress DB client-server app through the same upgrade process, and that one seems fine - faster, even!
Thanks
Chris.
 

TomBascom

Curmudgeon
Performance is complicated but choosing storage area types is not -- going to type 2 *is* far more sensible.

http://dbappraise.com/ppt/sos.pptx

Your problem isn't that you have the wrong block size -- it is that you did not take the steps necessary to take advantage of changing the block size.

You should run proutil -C dbanalys to verify the rows per block (8k blocks are supposed to default to 64 -- but the schema area may be an exception; I don't recall because I *never* allow tables, indexes or LOBs in the schema area) and the average record sizes.
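Capturing it is trivial; something along these lines (the db and file names are placeholders):

Code:
proutil mydb -C dbanalys > mydb.dba

If memory serves, the mean record size per table shows up in the record block summary sections of that report -- that is the figure you feed into the rows-per-block arithmetic.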

If your .st file is accurate, 32 rows per block means that you basically added a lot of air to each block.

So plan out a proper type 2 design and do it right.
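For the shape of it only -- the area names, numbers, RPB and cluster sizes below are made up and need to be sized from your own dbanalys figures -- a Type 2 .st looks something like this:

# illustrative only
b .
d "Schema Area":6,64;1 .
d "Data":7,64;8 . f 512000
d "Data":7,64;8 .
d "Index":8,64;8 . f 256000
d "Index":8,64;8 .

The ";8" after the RPB (8, 64 or 512 blocks per cluster) is what makes an area Type 2; ";1" leaves it Type 1.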

Another thing that people mess up when they dump & load is that they forget to tune the bi cluster size. The default is really bad. If this is an Enterprise license you want it to be 16,384 or higher. If workgroup (say it ain't so!) then you want something really small.

BTW, a hit ratio of 98% isn't actually very good. It's really kind of bad.
 

Chris Hughes

ProgressTalk.com Sponsor
Hi Tom

Thanks for the reply.

I ran the db analysis - I got lots of figures about blocks but not much about rows per block?

I'd love to plan a proper type 2 - I need to research more, clearly, but I don't have control of the applications that run on the DBs so I need to be very careful.

We have an enterprise license.

Regarding the BI, we have this:

Before image block size (bytes): 8192
Before image cluster size (kb): 512

Hits are 99% today. Don't we aim high on this, as in trying to get to 100%? How come 98% would be so bad?

Thanks again

Chris.
 

Chris Hughes

ProgressTalk.com Sponsor
A promon of our DB in case it helps

Code:
Activity  - Sampled at 05/31/13 14:56 for 114:37:15.
 
Event                Total  Per Sec  Event                Total  Per Sec
      Commits        14290      0.0            Undos          207      0.0
Record Updates        78612      0.2    Record Reads    35366564      85.7
Record Creates        57850      0.1  Record Deletes        31676      0.1
    DB Writes        26268      0.1        DB Reads      1200959      2.9
    BI Writes        13913      0.0        BI Reads        6044      0.0
    AI Writes            0      0.0
  Record Locks      839066      2.0    Record Waits          28      0.0
  Checkpoints          179      0.0    Buffs Flushed          46      0.0
 
Rec Lock Waits    0 %    BI Buf Waits      0 %    AI Buf Waits      0 %
Writes by APW    99 %    Writes by BIW    81 %    Writes by AIW    0 %
Buffer Hits      99 %    Primary Hits    99 %    Alternate Hits    0 %
DB Size        2502 MB      BI Size      10 MB      AI Size      0 K
FR chain                    0 blocks  RM chain                  34 blocks
Shared Memory  4069M        Segments      1
 
41 Servers, 68 Users (0 Local, 68 Remote, 0 Batch),2 Apws
 

TomBascom

Curmudgeon
Application code is completely unaware of what you do in terms of storage areas. The code has no idea how you have arranged them. You can modify them at will without any code impact.

Read the PPT that I posted -- you will find information in there on how to get from "average record size" (from DBANALYS) to "rows per block".
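As a purely illustrative bit of arithmetic (the 90 byte mean record size here is invented -- plug in your own dbanalys figure):

8192 bytes per block / (90 bytes + roughly 20 bytes of per-record overhead) = about 74 records that could physically fit
RPB has to be a power of two, so 64 would be the sensible setting.

With the 32 RPB you have now, a block of such records maxes out at about 32 x 90 = 2,880 bytes of data, and the other ~5k of each block is air.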

Your bi cluster size is the default (512). You need to run:

proutil dbname -C truncate bi -bi 16384 -biblocksize 16

16,384 is my suggested minimum bi cluster size. It's a good place to start.

PROMON data averaged over 114 hours isn't terribly useful. It is more helpful to see it for a snapshot during a period when you think things are performing poorly. (The "s" command.) Or, better yet, download ProTop from http://dbappraise.com/protop.html

Yes, you do aim to get it as high as possible -- I'm just saying that 98% isn't very high.

Well, ok, actually the goal is not to increase "hit ratio" -- the real goal is to decrease or eliminate disk IO. This is often (but not always) the same as increasing hit ratio (which, if you look at it, is really a hit percentage -- but I digress in my digression...) So we often lazily use "hit ratio" as a stand-in for "reduce disk IO".

98% means that 1 in 50 "logical IO" ops go to disk. Most Progress record operations require at least 2 logical IO ops -- one for the index block, another for the data block (on average it is usually a bit over 2 but less than 3 on many systems). Which means that one in every 25 record operations is going to disk. Disk is very nearly 1,000,000 times slower than RAM. So every time you go to disk you /could/ have performed a gajillion or so logical IO operations instead. Improving from 98 to 99 cuts disk access in half. Going from 99 to 99.5 cuts it in half again. The ProTop "Big B Guesstimator" can help you see how much of an improvement you might expect from increasing -B by various amounts.

You do not appear to have after-imaging enabled. You should rectify that deficiency immediately. Your data (and possibly your job) is at risk. Should your system crash and be unrecoverable you will lose all data back to your last (good) backup. In today's business world it is extremely unlikely that you can actually recover a lost day (or week) of business activity by "re-keying it". After-imaging is an absolutely essential component of a competently run OpenEdge database. More information: http://dbappraise.com/ppt/ai.pptx
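Switching it on offline takes only a few commands; a minimal sketch, assuming a maintenance window, with the db and file names as placeholders:

Code:
prostrct add mydb add_ai.st      # add the AI extents defined in add_ai.st
probkup mydb mydb.pbk            # AI must begin from a known-good backup
rfutil mydb -C aimage begin      # switch after-imaging on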
 

Chris Hughes

ProgressTalk.com Sponsor
Am aware of the AI issue - passed this to my manager.
We utilise a disk block level replication service which covers us - we hope!

Are there any docs knocking around regarding truncating the BI? I guess the database needs to be shut down first?

Really appreciate the powerpoint links - they're just blowing my mind a little ;)

Thanks
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
So our new 8k DB has a .st file that still looks like this:

# generated by PROREST on Tue Apr 09 10:57:37 2013
b .
d "Schema Area":6,32;1 .

If that is actually your new structure file then your DB still has 4KB blocks. 8KB databases have 64 RPB in the schema area.

Prove it to yourself with proutil dbname -C describe.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Am aware of the AI issue - passed this to my manager.
We utilise a disk block level replication service which covers us - we hope!

Bad assumption. Thus Tom's comment about job risk. An online database comprises (a) files on disk and (b) data in shared memory. If you replicate only the disk files of an online database, you're copying garbage. You can OS-copy an offline database, but you have to be sure you get every single file. And the number and location of DB files can change over time. Probkup is a safer approach as it always knows where all the parts of the DB reside.

Once AI is enabled you should start with a Progress backup of the DB, restore it to your DR box, and then roll forward your filled AI extents from production that are copied over to the DR box. This can all be scripted, and the prod side can be automated with the use of the Progress AI File Management Daemon. It can be configured to switch extents automatically at a set interval and copy them to a local archive directory. Then you just script the process of copying them from there to DR and applying them to the DR database.
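The DR side is equally scriptable; very roughly (the db names and paths here are just examples):

Code:
prorest drdb /dr/prod.pbk                        # seed the DR copy from a production probkup
rfutil drdb -C roll forward -a /dr/ai/prod.a1    # apply a filled AI extent

You repeat the roll forward for every archived extent, in the order they were filled.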

Are there any docs knocking around regarding truncating the BI? I guess the database needs to be shut down first?

Yes, your DB has to be offline to truncate the BI file. You should get to know the Database Essentials and Database Administration manuals. Download the whole 10.2B documentation set here. Look up the proutil command, truncate bi qualifier, in the Reference section in the back of the DB Admin manual for more details.
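The whole job only takes a minute or two once the users are out; as a sketch (use your own db name and startup parameters):

Code:
proshut mydb -by                                        # shut the database down
proutil mydb -C truncate bi -bi 16384 -biblocksize 16   # set BI cluster and block size
proserve mydb -pf mydb.pf                               # restart with your usual parameters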

Really appreciate the powerpoint links - they're just blowing my mind a little ;)

Good. When you're done reading them, read them again. It's a lot to absorb when you're new to the material. Walk through the exercise of rebuilding a DB as you view the material. It will help it "stick".

You may get some additional benefit from watching the video versions, complete with Q&A:
Storage Optimization Strategies
After Imaging, The DBA's Best Friend
 

TomBascom

Curmudgeon
In addition to Rob's comments about after-imaging -- mirroring and block level copies, even when done right, do not protect you from:

1) someone deleting the db
2) operational errors -- like someone accidentally deleting all of the customers in California
3) corruption

After-imaging provides protection against all of these things and more.
 

Pablo

New Member
Before image block size (bytes): 8192
Before image cluster size (kb): 512

Your bi cluster size is the default (512). You need to run:

proutil dbname -C truncate bi -bi 16384 -biblocksize 16

16,384 is my suggested minimum bi cluster size. It's a good place to start.

This command also increases the BI block size to 16k. Is this OK?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
And when you enable after imaging, make sure you change your AI block size to 16 KB to match the BI block size. And start an AIW.

This stuff is covered in Tom's AI presentation.
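As a sketch (the db name is a placeholder; the first command needs the database offline and after-imaging not yet enabled):

Code:
rfutil mydb -C aimage truncate -aiblocksize 16   # set the AI block size to 16 KB
proaiw mydb                                      # start the after-image writer once the db is up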
 

Chris Hughes

ProgressTalk.com Sponsor
If that is actually your new structure file then your DB still has 4KB blocks. 8KB databases have 64 RPB in the schema area.

Prove it to yourself with proutil dbname -C describe.


Hi - it is 8192 using this command.

I know the block size is 8k as I created the new databases from empty8.

I think the big mistake was in copying the old .st files << I can feel the virtual slaps coming now ;|
 

Chris Hughes

ProgressTalk.com Sponsor
If your DB is 8 KB then what we saw isn't your current structure file.


This is my reason for being a bit twitchy now: from what you are hinting at, our set-up makes no sense, but I can assure you that in the same directory I have db1.db and db1.st, and the .st file reads

# generated by PROREST on Tue Apr 09 10:57:37 2013
b .
d "Schema Area":6,32;1 .
I copied the .st files from before our upgrade when it was a 4k database.
So I guess, other than being a technically poor design through lack of knowledge, would this be causing the performance problem directly?
Thanks
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
As Tom said earlier, your performance problem is due to your structure. In fact if you did succeed in building an 8 KB database where everything is in the schema area, with 32 RPB, you probably made your caching performance worse. You're doing 8 KB of I/O to load a block into the buffer pool, containing at most 32 records. That same I/O in your old DB cost you a 4 KB transfer.

The point of moving to 8 KB blocks is that it gives you more room to store records, so assuming full blocks, two 4 KB blocks contain more block overhead (headers, etc.) than one 8 KB block. But you need to adjust your structure, i.e. increasing RPB, to take advantage of the extra space. All you have to show for your transition to 8 KB is 4 KB more "digital air" in each block.

I would advise that you take a DB analysis of your current DB, plan out a new Type II structure, and then dump and load again. Then you should see performance benefits.
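The per-table mechanics aren't complicated. A minimal binary dump-and-load sketch, assuming the new database is already built from your new .st and its schema assigns each table and index to the right area ("customer" and the paths are just examples):

Code:
proutil olddb -C dump customer /dumpdir      # writes /dumpdir/customer.bd
proutil newdb -C load /dumpdir/customer.bd   # loads into the table's new Type 2 area
proutil newdb -C idxbuild all                # rebuild all indexes after loading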
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
If your location, time, and budget permit it, you may want to consider attending a PUG Challenge Progress conference. There's a lot of great information in the sessions, as well as networking opportunities to get information from your peers. Lots of stuff for developers, DBAs, and architects.

pugchallenge.org June 9-12, Westford, MA
pugchallenge.eu November 7-8, Brussels, Belgium
 

Cringer

ProgressTalk.com Moderator
Staff member
I also know a very good consultant who could come and help you out. His name starts with a T and finishes with an m, and has omBasco in the middle ;)
 