How to calculate Records Per Block and Blocks per Cluster in OpenEdge 10.1B?

rrojo7229

Member
Dear,

How do I calculate "Records Per Block" and "Blocks per Cluster" in OpenEdge 10.1B?

I know I can arrive at a good number for "Records per Block" if I take the real block size per table, but how do I do that?



Thanks,
Kind Regards.
Ricardo Olguin
 

TomBascom

Curmudgeon
Rows per block can be a bit complicated. There is a whitepaper on PSDN.

If an area only contains indexes, use an RPB setting of 1. For type 2 data areas, and assuming that you haven't modified the create & toss limits:

ProVersion: 10.00
BlkSz: 8,192
AreaType: 2
SAFR: 7,958

 From     To   iRPB
-----  -----  -----
    1     28    256
   29     59    128
   60    121     64
  122    245     32
  246    495     16
  496    992      8
  993  1,987      4
1,988  3,976      2
3,977             1

ProVersion: 10.00
BlkSz: 4,096
AreaType: 2
SAFR: 3,862

 From     To   iRPB
-----  -----  -----
    1     12    256
   13     27    128
   28     57     64
   58    117     32
  118    238     16
  239    480      8
  481    963      4
  964  1,928      2
1,929             1


Blocks per cluster, OTOH, is largely a matter of taste. My taste is to use 512 for all non-trivial type 2 areas and 8 for type 2 areas that have very small amounts of data.
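The lookup tables above translate directly into code. A minimal Python sketch (the helper name and structure are mine; the bounds are copied verbatim from the tables, and assume 10.x type 2 areas with unmodified create & toss limits):

```python
# Recommended RPB per the tables above.
# Each entry: (max mean record size in bytes, recommended iRPB).
RPB_TABLES = {
    8192: [(28, 256), (59, 128), (121, 64), (245, 32),
           (495, 16), (992, 8), (1987, 4), (3976, 2)],
    4096: [(12, 256), (27, 128), (57, 64), (117, 32),
           (238, 16), (480, 8), (963, 4), (1928, 2)],
}

def recommended_rpb(block_size, mean_record_size):
    """Return the iRPB for a given DB block size and the mean record
    size in bytes (as reported by proutil dbanalys)."""
    for upper, rpb in RPB_TABLES[block_size]:
        if mean_record_size <= upper:
            return rpb
    return 1  # anything beyond the last bound gets RPB 1

print(recommended_rpb(8192, 90))   # falls in the 60-121 band -> 64
print(recommended_rpb(4096, 250))  # falls in the 239-480 band -> 8
```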
 

rrojo7229

Member
Dear Tom,

Yes, I found the file you mentioned; I will attach it.

But I still don't understand: how can I find the values for "Create Limit", "Toss Limit", and "Record block header overhead (RBO)"? Where can I see them, in tabanalys or in promon?

Also, I am not sure: to find a suitable RPB value, can I just use the "Average Record Size" (from tabanalys), or do I need to calculate SAFR (Space Available for Records), ASOR (Average Size of Record), and AUD (Available User Data)?

Thanks one more time.
Kind Regards,
Ricardo Olguin

 

Attachments

  • records-per-block_best_practices_white_paper.pdf
    54.8 KB

TomBascom

Curmudgeon
If you haven't changed the create and toss limits (you need to use proutil to do so) then the tables that I posted can be used in conjunction with the average record size as shown by proutil dbanalys.

There are, of course, exceptions to every rule -- there are cases where this approach is not the best way to do things. Using average record size and these tables is only a "rule of thumb" not a hard and fast formula that works for everyone always. It does, IMHO, work quite well for most people most of the time though.
 

rrojo7229

Member
Dear Tom,

This database will be created this week, but the schema has been the same since version 9.1C, and we would like to organize each group of tables into a specific storage area.
What do you mean by "if you had changed the create and toss limits"? Where can I see these parameters (create and toss limit)? In tabanalys or promon?

Thanks,
Kind Regards,
Ricardo Olguin


 

rrojo7229

Member
Dear Tom,

Ok.

I found documentation on how to display the create limit and toss limit parameters:
# proutil upd_allin1 -C dispTossCreateLimits 7

And I saw:

[root@clones2 DB_update]# proutil upd_allin1 -C dispTossCreateLimits 7 | more
OpenEdge Release 10.1B01 as of Tue Apr 3 20:41:05 EDT 2007

Table object number: 1, Toss Limit: 300, Create limit: 150. (12852)
Table object number: 2, Toss Limit: 300, Create limit: 150. (12852)
Table object number: 3, Toss Limit: 300, Create limit: 150. (12852)
Table object number: 4, Toss Limit: 300, Create limit: 150. (12852)
Table object number: 5, Toss Limit: 300, Create limit: 150. (12852)
Table object number: 6, Toss Limit: 300, Create limit: 150. (12852)
Table object number: 7, Toss Limit: 300, Create limit: 150. (12852)
Table object number: 8, Toss Limit: 300, Create limit: 150. (12852)
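If there are many tables or areas to check, that output can also be scanned programmatically. A quick sketch (the line format is copied from the output above; treating 300/150, the values shown there, as the expected defaults is my assumption):

```python
import re

# Sample lines copied from the dispTossCreateLimits output above.
lines = [
    "Table object number: 1, Toss Limit: 300, Create limit: 150. (12852)",
    "Table object number: 2, Toss Limit: 300, Create limit: 150. (12852)",
]

pattern = re.compile(
    r"Table object number: (\d+), Toss Limit: (\d+), Create limit: (\d+)")

for line in lines:
    m = pattern.search(line)
    if m:
        obj, toss, create = (int(g) for g in m.groups())
        # Flag anything that differs from the values shown above,
        # since the RPB tables assume unmodified limits.
        if (toss, create) != (300, 150):
            print(f"table {obj}: non-default limits {toss}/{create}")
```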

So, considering that the database we have here uses a 4,096-byte block size, as far as I understood I will take the average record size to find the records per block:

ProVersion: 10.00
BlkSz: 4,096
AreaType: 2
SAFR: 3,862

 From     To   iRPB
-----  -----  -----
    1     12    256
   13     27    128
   28     57     64
   58    117     32
  118    238     16
  239    480      8
  481    963      4
  964  1,928      2
1,929             1
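For example, since every table assigned to an area shares that area's RPB, one simple heuristic (my own suggestion, not from the whitepaper) is to take a record-weighted mean of the tabanalys average record sizes for the tables planned for the area and look it up against the 4,096-byte bounds above. Table names and figures below are made up:

```python
# Hypothetical dbanalys figures for tables planned for one 4K-block area:
# (table name, record count, mean record size in bytes) -- made-up numbers.
tables = [("customer",   100_000,   120),
          ("order",      500_000,    90),
          ("order-line", 2_000_000,  45)]

# Record-weighted mean record size across the area.
total_records = sum(n for _, n, _ in tables)
weighted_mean = sum(n * s for _, n, s in tables) / total_records

# 4,096-byte block bounds from the table above: (max mean size, iRPB).
bounds_4k = [(12, 256), (27, 128), (57, 64), (117, 32),
             (238, 16), (480, 8), (963, 4), (1928, 2)]
rpb = next((r for upper, r in bounds_4k if weighted_mean <= upper), 1)

print(round(weighted_mean, 1), rpb)  # ~56.5 bytes -> iRPB 64
```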

Thanks.
 

rrojo7229

Member
Tom,

What do you mean when you say "non-trivial"?
"My taste is to use 512 for all non-trivial type 2 areas and 8 for type 2 areas that have very small amounts of data."

Does it mean large amounts of data?

Thanks.
Regards
Ricardo Olguin
 

TomBascom

Curmudgeon
If the data is going to occupy at least 256 blocks then I use a cluster size of 512. Otherwise I use 8.

I'd like it if there were a cluster size of 1 -- the world has an amazing number of tables with a handful of control records that are read billions of times per day. (Just to be clear -- you can use 1 for the cluster size but it means the same thing as 0 and it creates a type 1 area.)
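That rule of thumb is easy to write down. A sketch (the helper name is mine, and estimating blocks needed as records divided by RPB is a simplification that ignores fill factors and fragmentation):

```python
def cluster_size(record_count, rpb):
    """Pick blocks-per-cluster per the rule above: if the data will
    occupy at least 256 blocks, use 512; otherwise use 8."""
    blocks_needed = record_count / rpb  # rough estimate
    return 512 if blocks_needed >= 256 else 8

print(cluster_size(1_000_000, 64))  # ~15,625 blocks -> 512
print(cluster_size(500, 64))        # ~8 blocks -> 8
```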
 