Question B2 Puzzle

RealHeavyDude

Well-Known Member
OpenEdge 11.3.1 Solaris SPARC 64 Bit

I am having trouble understanding this. Maybe I am completely wrong - but here goes:

I have a database with a block size of 8192, for which I moved the Schema Area to the B2. The Schema Area contains only the meta schema of the database and consists of one variable extent with a size of 18 MB on disk.

Code:
proenv>proutil d_onedb -C viewB2 | grep "Buffer Pool"
Area "Schema Area":6 - Alternate Buffer Pool
Area "LOB 01":34 - Primary Buffer Pool
Area "Index_01":100 - Primary Buffer Pool
Area "Index_02":110 - Primary Buffer Pool
Area "Index_03":120 - Primary Buffer Pool
Area "Index_04":130 - Primary Buffer Pool
Area "Index_05":140 - Primary Buffer Pool
Area "Index_06":150 - Primary Buffer Pool
Area "Index_07":160 - Primary Buffer Pool
Area "Index_08":170 - Primary Buffer Pool
Area "Data_01":200 - Primary Buffer Pool
Area "Data_02":210 - Primary Buffer Pool
Area "Data_03":220 - Primary Buffer Pool
Area "Data_04":230 - Primary Buffer Pool
Area "Data_05":240 - Primary Buffer Pool
Area "Data_06":250 - Primary Buffer Pool
Area "Data_07":260 - Primary Buffer Pool
Area "Data_08":270 - Primary Buffer Pool

svkeady:t302218$ ls -lh
total 301552574
-rw-r--r--  1 uug001  gug001  18M Nov 17 16:00 d_onedb.d1
-rw-r--r--  1 uug001  gug001  2.0G Nov 17 12:11 d_onedb_100.d1
-rw-r--r--  1 uug001  gug001  2.0G Nov 17 12:11 d_onedb_100.d2
-rw-r--r--  1 uug001  gug001  2.0G Nov 17 12:11 d_onedb_100.d3

Therefore I thought that some 10'000 blocks should be way more than necessary. But here I am wrong:
Code:
Alternate Buffer Pool            Total   Per Min   Per Sec      Per Tx
Logical reads                  669121K   2566539  42775.65  1930085.52
Logical writes                       5         0      0.00        0.01
O/S reads                      3406599     12760    212.67     9596.05
O/S writes                           7         0      0.00        0.02
Marked to checkpoint                 0         0      0.00        0.00
Flushed at checkpoint                0         0      0.00        0.00
Writes deferred                      2         0      0.00        0.01
LRU2 skips                           0         0      0.00        0.00
LRU2 writes                          0         0      0.00        0.00
APW enqueues                         0         0      0.00        0.00

Alternate buffer pool hit ratio:  99 %
LRU2 replacement policy enabled.

If I have 3.4 million O/S reads - would I really need 3.4 million buffers?! That would be 27 GB for the schema area?

Now I am really puzzled. Okay - I ran a dbanalys - but ...

Can anybody point me in a direction to understand this?

Thanks in Advance,
RealHeavyDude.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
One of the reasons I don't like the viewb2 report is that it shows both area-level and object-level assignments in a way that isn't clear. You can have an area labelled "Primary", but within that have objects that are labelled either "Default" or "Alternate". (That's from memory, but I think it's correct.) So grepping for "Buffer Pool" may not tell you the whole story. I prefer to roll my own report by querying _Area and _Storageobject.

Given an 18 MB schema area you should need about 2300 blocks to cache it.
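As a back-of-the-envelope check (using the 18 MB extent size and 8192-byte block size from the posts above):

```shell
# Number of 8 KB database blocks needed to cache an 18 MB schema area.
echo $(( 18 * 1024 * 1024 / 8192 ))
# prints 2304 - i.e. roughly the 2300 blocks mentioned
```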

Is the schema area your only ABP assignment? Or do you have any object-level assignments?
 

RealHeavyDude

Sorry for not responding earlier - I was sidelined with the flu.

I need to check which storage objects are actually in the B2. All I did to the database was proutil enableB2 "Schema Area". From my understanding this should move all storage objects residing in that area into the B2. I double-checked that only the meta schema is in the Schema Area.

The reason I put the schema area into the B2 is that it's a great feature and I thought it would be an easy way to start gaining experience with the B2.

Heavy Regards, RealHeavyDude.
 

Rob Fitzpatrick

Since you're starting a foray into using -B2, be sure to add ABP-related activity to your DB maintenance checklists, e.g. for dump & load, for restore from backup, and for annual maintenance.

For D&L, be sure to compare object- and area-level assignments on source and target to ensure they're the same. For restore from backup, you lose area-level assignments so be sure to re-add them, which must be done offline, and confirm with a report that everything is assigned as you expect. And for annual (or whatever periodic) maintenance, run a report and check it.
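Concretely, the post-restore step might look like this (a sketch using this thread's database and area names; enableB2 must be run against an offline database):

```shell
# After a restore, area-level B2 assignments are lost.
# Re-add them with the database offline ...
proutil d_onedb -C enableB2 "Schema Area"
# ... then confirm with a report that everything is assigned as expected.
proutil d_onedb -C viewB2 | grep "Buffer Pool"
```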

A little while back a client of mine thought they had about half their objects assigned to B2 in one prod DB; it was all done at the area level. It turns out they had restored the DB from backup at some point and lost all their B2 assignments without realizing it. That's why I suggest the periodic check.
 

Rob Fitzpatrick

A simple ABP report:

Code:
/*  abpreport.p  --  report area- and object-level ABP assignments  */
define variable v-abp as character no-undo.

for each dictdb._area no-lock where _area._area-type = 6,
    each dictdb._storageobject no-lock of _area
    break by _area._area-number:

  /* clear buffers left over from the previous iteration */
  release _file.
  release _index.
  release _field.

  v-abp = "".

  if get-bits( _storageobject._object-attrib, 7, 1 ) = 1 and
     get-bits( _area._area-attrib, 7, 1 )            = 1 then
    v-abp = "both".

  else if get-bits( _storageobject._object-attrib, 7, 1 ) = 1 then
    v-abp = "object".

  else if get-bits( _area._area-attrib, 7, 1 ) = 1 then
    v-abp = "area".

  if v-abp > "" then
    do:
      case _storageobject._object-type:

        when 1 then do:           /* table */
          find dictdb._file no-lock
            where _file._file-number = _storageobject._object-number no-error.
        end.

        when 2 then do:           /* index */
          find dictdb._index no-lock
            where _index._idx-num = _storageobject._object-number no-error.
          if available( _index ) then
            find dictdb._file no-lock of _index no-error.
        end.

        when 3 then do:           /* LOB   */
          find dictdb._field no-lock
            where _field._fld-stlen = _storageobject._object-number no-error.
          if available( _field ) then
            find dictdb._file no-lock of _field no-error.
        end.

      end case.

      display
        _area._area-number
        _area._area-name
        _file._file-name   when available( _file )
        _index._index-name when available( _index )
        _field._field-name when available( _field )
        _field._data-type  when available( _field )
        v-abp
      .
    end.  /* if v-abp > "" */

end.  /* for each */
 

RealHeavyDude

Unfortunately I cannot roll out SP3 to 11.3 just like that - it would take me at least a two-month head start to get it across our test systems into production. But we do have an upgrade to 11.5 (or 11.6) scheduled for sometime in Q2 next year.

Plus, in the meantime I found out that -Bp 100 on probkup online and proutil -C dbanalys helps a long way. Since introducing it I have not been able to exhaust the B2 anymore. Now I really feel more comfortable with the B2.
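For reference, those utility invocations might look something like this (a sketch; the backup target path and output file are placeholders, and -Bp requests private read-only buffers so the utility scans don't churn the primary or alternate buffer pools):

```shell
# Sketch: run the utilities with 100 private read-only buffers (-Bp 100)
# so their sequential scans don't evict useful blocks from B1/B2.
# /backups/d_onedb.bck and the report file are placeholder names.
probkup online d_onedb /backups/d_onedb.bck -Bp 100
proutil d_onedb -C dbanalys -Bp 100 > d_onedb.dbanalys.txt
```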

Thanks, RealHeavyDude.
 