Setup Level II Storage Guide

JLovegren

New Member
I know this is an elementary task for most of you.
I am migrating to 12.8 on a new server.
I currently have Level I storage.

Thank you in advance for your help!

Is there a comprehensive guide? I couldn't find anything comprehensive from Progress. I have watched many videos, so I can understand, a little, the anatomy of Level II, but I struggle with the mechanics of setting it up.

I am thinking about specifying every table in the structure file, but, is it possible to just specify the tables that are higher in transaction?
Whatever is not included would just go into its default settings?
For a high transaction table with 5 indexes, do I specify each index individually?

I saw in the Progress documentation that I could define something like named groups, but it would have less performance.
I could not find a good example.

I have about 170 tables, 27 of which are higher in transactions. Most others are read often but get added to or changed infrequently (like master, codes and address info).

A sample of the analysis report is below. With Level II, will I reduce the fragment count?

[Attached screenshot: sample of the table analysis report]
 
It would be helpful to know your current OpenEdge release, your database OS platform, and something about your storage hardware. It would also be helpful to know something about your database size, sizes of the largest tables, and your available window of downtime to complete your upgrade and migration.

I'm glad that you are making this change and looking for help, rather than continuing to use an outdated database structure. Part of learning is using the correct terminology. There is no "Level I" or "Level II" storage. From context, you are talking about Type 1 and Type 2 data storage areas.

By "data storage areas", I mean areas that contain or could contain tables, indexes, or LOB columns. In other words, all areas that are not before image, after image, or transaction log areas.

Type 1 data areas are the original data storage area architecture. They were the only data area type available in Progress v9 and earlier. Type 2 data areas were introduced in an early form in OpenEdge v10.0A and fully implemented by v10.1B. All application data, in any v10.x or later OpenEdge database, should be stored in Type 2 areas. The Schema Area, area #6, is required to be Type 1. It is also severely constrained in maximum size. Therefore it should never be used to store application data.

If you want to migrate your data to Type 2 areas, especially from a database with data in the Schema Area, you should plan a full database dump and load. Dump and load of tables can be done as ASCII dump and load, via Data Dictionary routines, or binary dump and load, via proutil commands. Binary dump and load is preferable as it is more efficient. It can and should be scripted and logged to ensure that all tables are migrated in their entirety and that any exceptions are caught.

Automation, practice, and testing are essential for success. Note also that a database dump and load typically involves dump and loading more than just application schema and data. Depending on the database features, options, and add-on products you use, there could be many other types of metadata in your source database that should be preserved and migrated to the target database. Planning and executing a database dump and load is an entire topic unto itself and I can't do it justice here.

Note that some changes can only be made via dump and load, such as changing database platform (e.g. Windows to Linux), and changing the database block size (e.g. 4 KB to 8 KB). I recommend using 8 KB for the database block size.
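To give a rough idea of the mechanics (a sketch only - "sourcedb", "ariesdb", and "D:\dump" are placeholder names, so check the exact syntax for your release), a scripted binary dump and load per table looks something like this:
Code:
# binary dump of one table from the source database into a dump directory
proutil sourcedb -C dump invtran D:\dump

# binary load of the resulting .bd file into the new target database
proutil ariesdb -C load D:\dump\invtran.bd

# rebuild all indexes in the target after loading
proutil ariesdb -C idxbuild all
In practice you would generate one dump and one load command per table (e.g. from the _File table) and log the output of each step, so that nothing gets missed.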

I am thinking about specifying every table in the structure file
Strictly speaking, tables are not specified in the structure file. Storage areas and their extents (files) are. The assignment of storage objects (tables, indexes, LOB columns) to storage areas is done in the schema.
Do not create a separate data storage area for each table. That is extreme overkill, and makes for a database that is more difficult to manage.
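To illustrate the schema side of it (a hedged sketch, not your actual definitions - the index and field names here are made up): when you dump your data definitions to a .df file, each table and index carries an AREA entry, and you edit those entries to point at your new Type 2 areas before loading the definitions into the new database:
Code:
ADD TABLE "invtran"
  AREA "invtran_data"
  DUMP-NAME "invtran"

ADD INDEX "tran-date" ON "invtran"
  AREA "invtran_index"
  INDEX-FIELD "tran-date" ASCENDING
For a database that is already loaded, proutil tablemove and idxmove can reassign objects to different areas instead.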

is it possible to just specify the tables that are higher in transaction?
You can create as many or as few data storage areas as you like (within reason). You don't necessarily need separate areas for tables with high activity, though there are some specific use cases where that can be helpful. But generally, you should have a separate area for each table that is fast-growing, i.e. higher in creates. You can get this information by looking at your CRUD (create/read/update/delete) activity.

ProTop is a very good tool for doing this. ;) It is a free download, at the link in my signature. Let me know if you need help with that.

For a high transaction table with 5 indexes, do I specify each index individually?
This is probably not necessary. For a large table, I typically create an area for the table and a separate area for all of its indexes. I might decide to break down the indexes further, if one of them was exceptionally large/fast-growing.

I saw in the Progress documentation that I could define something like named groups, but it would have less performance.
I can't figure out what this means. Can you provide a relevant link?

With Level II, will I reduce the fragment count?
Given the mean record sizes we see for the subset of tables you have shown, yes, a dump and load will eliminate fragmentation in the short term. Some fragmentation is unavoidable, e.g. records that are larger than the database block size. But there is no evidence of that, yet, in your database. Fragmentation can be caused over time, by record updates that increase record size. This is dependent on application and user behaviour.

There are different schools of thought on how to structure your database, in terms of the assignment of storage objects to storage areas. I have discussed this in the past. Here are some other threads to read:

Note that these threads are old. Some of my opinions have changed over time. For example, I used to recommend RPB 128 for multi-table areas and for index areas. I don't have a good reason to limit it to 128, rather than 256, so I would now recommend RPB 256 for those areas.

I purposely don't have a precise definition for very large tables, i.e. tables that warrant their own areas. This is a trade-off between manageability and complexity, and different DBAs may decide this differently based on their needs. A very large table, in the mind of a DBA with a 50 GB database and minimal disk space, might be considered small by a DBA with a 20 TB database and PB of fast storage.

It would help to know about your entire database. If you only have 170 tables, some of which are empty, you could post information about the subset that have a meaningful amount of data in them, in descending size order.
 
Thank you so much for your explanations.

I have migrated a 9.1e database to 12.8 utilizing dump and reload of data, definitions, indexes and sequences. This has been running on a virtual machine for testing. All of the data is still located in the “Schema Area”. The block size is 8192.

The application is a custom ERP system developed in v9. I have successfully recompiled the code for 12.8. I have no add-on products. We are running the Workgroup database.

Now, in preparation to go live, I have a new Dell Windows server running SSDs on a RAID 10 controller. There are two containers, 4 x 480 GB SSDs in each. I have allocated 300 GB for my less-than-3 GB database.

I have had Progress help me implement after imaging with archiving. Likely, I will get some assistance with Type 2 storage implementation as well, but I feel I need to be able to do it myself so I understand the process, calculations, and considerations. Ultimately, I need to be able to do this solo and from scratch on an empty machine, from a disaster recovery standpoint.

The sample of Table Analysis was from a production database from a few months ago, restored onto my workstation using prorest – the structure file is not the same as the production server as I do not have the same disk layout, etc. I use this environment for testing things like this and “what if” scenarios. I guess, the point being that it was not a dump and reload, it was a restore into a different quantity of extents, although I think it is not important for this discussion.

I have attached a spreadsheet with all of the tables, sorted by quantity of records. My highest use is the “invtran” table which holds inventory transactions.

The structure file for my production database {9.1e} consists of 16 fixed-length extents of size 240,000, plus 1 variable extent [d d:\AriesDB\Aries.d1 f 240000]. I now understand that I can use a larger extent size, such as 1,048,576, and fewer extents.

For background, I did help design the ERP system many years ago and worked with a Progress developer for a year to develop the system. We implemented it a bunch of years ago; my developer has since moved out of state and retired, leaving me with the task of maintenance and updating, and learning along the way. I have other duties, but keeping this system going is vital. I finally got approval to upgrade to a current version of Progress, so getting this done is a priority for me - once everything is running well, we will actively look for an alternative ERP which will be supported by someone else.
 

Prior to OE 12.5, the maximum extent size was 2 GB, unless the Large File Support feature was enabled, which made the maximum size 1 TB. This feature was exclusive to the Enterprise RDBMS license. In OE 12.6+, Large File Support is enabled in all databases, regardless of license type. So you don't need to have lots of tiny extents in your larger storage areas.

Your database is fairly small, in absolute terms, so you could probably adopt a pretty simple Type 2 structure.

Half of your table data is in the top four tables by size: invtran, wrkfile, lottran, coitemd. These are the tables whose logical size (size reported by dbanalys) is above 100 MB.

So you could make a pretty simple structure like the following:
Code:
b .
d "Schema Area":6,64 .
#
# Misc Tables
d "data":7,256;8 .
# 
# Misc Indexes
d "index":8,256;8 .
#
# invtran
d "invtran_data":9,64;8 .
d "invtran_index":10,256;8 .
#
# wrkfile
d "wrkfile_data":11,32;8 .
d "wrkfile_index":12,256;8 .
#
# lottran
d "lottran_data":13,128;8 .
d "lottran_index":14,256;8 .
#
# coitemd
d "coitemd_data":15,32;8 .
d "coitemd_index":16,256;8 .
#
# After Image areas
a .
a .
a .
a .
a .
a .
a .
a .

Given the size of this database and its presumed growth rate, I think you would be fine with one variable-length extent per area. This design makes smaller databases easier to manage as your data grows.

If your schema has LOBs or word indexes then some further structure changes would be recommended.
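For completeness (assuming the structure file above is saved as, say, aries.st and the database is to be called ariesdb - both names made up), the empty database skeleton would be created and checked with something like:
Code:
prostrct create ariesdb aries.st -blocksize 8192
prostrct list ariesdb
prostrct list writes the actual area/extent layout back out, so it's a quick way to confirm that the areas, RPB, and cluster sizes came out as intended.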
 
Since 11.4 you can use the -csoutput -verbose parameters with proutil -C dbanalys. No need to parse the dbanalys output anymore.
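For anyone looking for the exact incantation (the database name is a placeholder), it's along these lines:
Code:
proutil mydb -C dbanalys -csoutput -verbose
which produces machine-readable, comma-separated output rather than the traditional report format.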

Unfortunately dbanalys was and still is a poor tool for analyzing data storage. You can improve it a bit by adding data from other sources. I'm combining data from: dbanalys (5 files), object info (an ABL program creates 5 files), _Table/Index/LobStat dumps (3 files), the structure file (.st), viewB2 output, and prostrct statistics. Make it "shaken not stirred". The combined report allows you to better understand the character of your data, though it is less useful for a database with Type 1 storage areas.

The typical topics for Holy Wars:
How should database objects be laid out across areas?
What RPB/CLS values should be used for the areas?
When do you need to change the toss/create limits for tables/LOBs, and how?

It would be interesting to see the experts' current understanding. What old myths can be declared dead? Are there any topics for debate? :-)
 
@JLovegren I can probably get away with saying this here... Hopefully... but in all honesty, if you want to learn these things then you'd be better off employing the services of an independent contractor than Progress support. Their job is often to solve problems as quickly as possible, whereas a contractor will be much more likely to provide background and depth. One such contractor has already replied to this message and it's not me or George.
 
That's in no way a diss of Progress support by the way. They do an excellent job, I'm just questioning if that's the right approach! :)
 
> I purposely don't have a precise definition for very large tables, i.e. tables that warrant their own areas. This is a trade-off between manageability and complexity, and different DBAs may decide this differently based on their needs. A very large table, in the mind of a DBA with a 50 GB database and minimal disk space, might be considered small by a DBA with a 20 TB database and PB of fast storage.

I'm trying to find (mostly unsuccessfully) universal formulas. In this case, by comparing each table's size as a percentage of the total size with the "rest" percentage - the total size of all tables except the largest ones (the "Misc Tables" area).

For the given case:
Code:
Table      Size in Bytes Size% Rest% 
PUB.invtran  374,700,000 20.81 79.19
PUB.wrkfile  229,800,000 12.76 66.43
PUB.lottran  160,000,000  8.89 57.54
PUB.coitemd  100,700,000  5.59 51.95
PUB.jobtran   80,000,000  4.44 47.51
PUB.coitem    77,400,000  4.30 43.21
PUB.shipitem  65,600,000  3.64 39.57
PUB.notes     65,500,000  3.64 35.93
PUB.journal   54,800,000  3.04 32.89
...
Total:     1,800,700,418   100  0
The "Misc Tables" area stays the largest one no matter how many the largest tables will get their own areas.

So the second rule: create just a few areas for the largest tables - the number of areas should be comfortable for monitoring.
 
For record sizes I used to check the "deviation ratio":
Dev = (Max - Mean) / (Mean - Min)

Code:
Table       Records  Min   Max Mean   Dev
PUB.notes    276431   43 13795  248 66.08
PUB._File       361  284  3201  416 21.10
PUB.eco          10  215   234  216 18.00
PUB._Field     7097  250  1782  338 16.41
PUB._Sysviews    52   71  1206  142 14.99
PUB.code       2339   72   635  111 13.44
PUB.itemelig     49   56    66   57  9.00
PUB.ExpList      29   56    65   57  8.00
PUB.rpt-def    3217  314   815  373  7.49
PUB.coitemdm 360008   83   163   97  4.71
Character fields are the main contributors to record sizes. A table may have a huge number of character fields, and each field has its own size distribution. If we put each field in its own table, we would get tables with completely different mean sizes and deviations. But in fact they belong to one table. That is why the concept of the mean size is meaningless. At least we don't talk about the mean record size per database. The "mean" property works more or less only when the deviation ratio is close to 1. In other cases it would be worth knowing the distribution of the character field sizes.

In the table above:
notes - obviously contains text data. It's a candidate for a CLOB;
eco, itemelig, ExpList, rpt-def - their mean sizes are rather close to their min sizes, but there is at least one record with an untypically large size. Nothing we can do here.
 
I wouldn't go so far as to say that mean size is "meaningless".

Sure, it is not perfect and there are certainly cases where it doesn't work as well as something else.

It is, however, readily available and it works reasonably well in many cases.

That's worth quite a lot.
 
A table called "wrkfile" strikes me as something that might be worth discussing. Is this a table that contains temporary data? Do you write a bunch of things there, do something, and then eventually delete all of that data? If you do, then that would be something that could benefit quite a lot from being in a dedicated storage area. Depending on how any eventual purges get done and how much data there is you might even want to consider using "proutil -C truncate area" rather than having 4gl code delete data.

Assuming, of course, that "wrkfile" implies what I suggest that it might imply.
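A hedged sketch of what that could look like, assuming wrkfile and its indexes live alone in the "wrkfile_data" and "wrkfile_index" areas from the structure example above (typically done with the database offline - confirm the requirements for truncate area in the documentation for your release first):
Code:
proutil ariesdb -C truncate area "wrkfile_data"
proutil ariesdb -C truncate area "wrkfile_index"
Truncating both areas keeps the table and its indexes consistent, and emptying the areas this way is far cheaper than deleting millions of records row by row in 4GL.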
 
Hi Tom,

> it works reasonably well in many cases.

Sure, the mean size works, for example, for the “eco” table:
Code:
Table       Records  Min   Max Mean
PUB.eco          10  215   234  216
But the Min/Max record sizes would work equally well in this case.

Mean size does not work for the LOB fields or for LOB-like tables, e.g. for the “notes” table:
Code:
Table       Records  Min   Max Mean
PUB.notes    276431   43 13795  248
The recent presentation by Dmitri Levin ("Dbanalys - theory and practice") shows some examples from my investigation, based on data from real databases.

Well, the mean size is used in a well-known estimation of an optimal RPB value:
RPB = BlockSize / MeanRecSize
Where does it come from?

There are 2 real limits for RPB:
HiRPB = BlockSpace / (MinRecSize + 2)
LoRPB = BlockSpace / (MaxRecSize + 2)
Where BlockSpace = BlockSize – [16 or 64] - CreateLimit

Area RPB higher than HiRPB will not improve disk space utilization.
Area RPB lower than LoRPB will not improve address space utilization.

In the era of 32-bit recids we were forced to find a balance between disk space and address space utilization. The 'RPB = BlockSize / MeanRecSize' formula seemed to provide such a balance. At least it works when Min = Max = Mean, and we did not have any better formula for the other cases.

Nowadays with 64-bit recids we can totally forget about address space utilization.
Now it’s a matter of taste - to use RPB = BlockSpace / (MinRecSize + 2) or to set RPB = 256.
By the way, the maximal number of records per block is 255.
By the way #2: an empirical observation - often MinRecSize is about double the size of a template record. So even if a table does not yet have any records, you can already choose an RPB for it.

So the mean record size should no longer be used to estimate the optimal RPB, and I don't see where else it can work.
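To make that concrete with numbers already posted in this thread (the notes table, an 8 KB block, 65 bytes of average block overhead, and an assumed create limit of 150):
Code:
BlockSpace = 8192 - 65 - 150    = 7977
HiRPB      = 7977 / (43 + 2)    = ~177   (Min  =    43)
LoRPB      = 7977 / (13795 + 2) = ~0.6   (Max  = 13795)
Old rule   = 8192 / 248         = ~33    (Mean =   248)
So for this table anything above roughly 177 records per block gains nothing in disk space utilization, the lower bound is irrelevant with 64-bit recids, and the old mean-based rule would have suggested ~32.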
 
Granted there isn't much of an argument to be had.

Personally, I like to try to get it close to a useful value primarily because I have a utility that does random sampling of tables by randomly generating RECIDs and I can avoid a lot of wasted effort by having a RPB that is a better fit.

But that isn't very persuasive to people who are never going to do anything along those lines.
 
There are 2 real limits for RPB:
HiRPB = BlockSpace / (MinRecSize + 2)
LoRPB = BlockSpace / (MaxRecSize + 2)
Where BlockSpace = BlockSize – [16 or 64] - CreateLimit
These considerations don't matter for existing areas, as RPB cannot change. It only matters when thinking about new areas: added areas, or areas recreated via dump and load. So I don't bother with the "16 or" as no new Type 1 areas should be created.

I use 65 in this calculation rather than 64. Block header overhead, when viewed in aggregate, should include the extra fields in cluster boundary blocks: TrId and SerialNo in the start block, and NextCluster and PrevCluster in the end block, for a total of 32 extra bytes per cluster. Depending on the cluster size (8, 64, 512), this averages to an extra 4, 0.5, or 0.0625 bytes per block, for a total average block header size of 68, 65, or 65 bytes respectively (rounding up to be conservative). If we aren't dealing with a lot of data in the area then this calculation doesn't matter too much. And if we are dealing with a lot of data in the area, then it is fast-growing and we shouldn't be using cluster size 8. So that leaves us with cluster size 64 or 512, both of which have an average block header cost of 65 bytes.

Also, shouldn't we use Toss Limit in this calculation of fragment density, instead of Create Limit?
 
@Rob Fitzpatrick

Hi Rob,

I agree that 65 is a more accurate estimation than 64.

Cluster size:
Richard Banville suggested using a cluster size of 64 for index areas. However, I don't understand the explanation.

Should we always set a cluster size to 512 for all table areas? I'm not sure, but maybe.

JLovegren's case:
116 of 254 tables have no records and only 73 tables have a size larger than 64K. If we set a cluster size of 512 for all tables, we will waste about 800 MB (~200 * 512 * 8K) of disk space, while the total size of all tables is 1717 MB. So we will waste 32% of the disk space.

Another example - a customer of ours has a 15.67 TB database. Tables waste 0.56 TB of disk space. Indexes waste 2.86 TB. Not due to wrong RPB/CLS settings but due to the space management algorithms. That is 22% wasted space. These are losses that we have to accept.

Back to JLovegren's case - I don't see a reason to choose between 64 and 512 for the cluster size, except for this point: if/when the database becomes much larger in the distant future, then cluster size 512 will be the better choice.

But I would still suggest an unpopular idea: put all empty tables, together with their indexes, into a separate Type I "quarantine" area, provided you use a tool to monitor the sizes of the areas. If one of the empty tables starts growing, it means the application has started using new functionality. As a DBA I would like to be notified about such an event, and not only because I should move the "awakened" tables to new areas.
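For reference, a Type I area in the .st file is simply one defined without a blocks-per-cluster value, so such a quarantine area could look like this (the area name, number, and RPB are only examples):
Code:
#
# Type I quarantine area for empty tables and their indexes
d "quarantine":20,32 .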

> Also, shouldn't we use Toss Limit in this calculation of fragment density, instead of Create Limit?

The Toss Limit just divides the data blocks into two categories - the blocks on/off the RM chain. These categories are used by the space management algorithms, but the Toss Limit does not define how much space should be left free in data blocks. Moreover, the space management algorithms divide the data blocks into three (!) categories - the extra category is the blocks where 70% of the block size is free. It's a non-tunable parameter, but it's kindred to the Toss Limit.

I said that dbanalys is a poor tool for analyzing data storage. For example, I would like to see separate reports for the separate categories of data blocks: namely, how many records are stored in the blocks on the RM chains, how much space is free in those blocks, and how many blocks on the RM chains don't store any records. We could use chanalys, which provides an over-detailed report about the blocks on the chains, but I would not recommend running chanalys in a production environment.

My recommendation is to set the Toss Limit a bit higher (let's say by 10%) than the mean record size. The Toss Limit for LOB areas is a totally different story.

Grumbling about Create Limit: it's great that Progress provides customizable parameters for the space management algorithms, but Progress forgot to provide any statistics that would help choose the optimal value for the Create Limit. Do we have any statistics, in bytes, related to changes in record size? _ActRecord._Record-BytesUpd? That is the total size of the records that have been updated. If a record's size changes from 100 to 101 bytes, _Record-BytesUpd increments by 101.

The only way to find the proper Create Limit is to guess, then change it, then wait years to see how the new setting affects record fragmentation.
 
Grumbling about Create Limit is continued...

I know for sure that Progress technical support has an internal utility that scans AI files and reports the sizes of the recovery notes. The variable part of the sizes of RL_RMCHG notes is directly related to the changes in record size. Progress could add a similar option to 'rfutil -C aimage scan verbose' to allow us to collect the statistics needed to set the Create Limit. Would anyone like to submit an enhancement request?
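For context, the existing scan that such an option would extend is run along these lines (the database and AI extent names are placeholders):
Code:
rfutil mydb -C aimage scan verbose -a mydb.a1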

We can /try/ to estimate which fields are changing in the tables. The _IndexStat VST has the _IndexStat-create and _IndexStat-delete fields but not _IndexStat-update. In fact, these two fields should not be used "as is", because the only useful information they provide is two estimations of the updates of the indexed fields:
IndexStat-update1 = _IndexStat-create - _TableStat-create - _IndexStat-split
IndexStat-update2 = _IndexStat-delete - _TableStat-delete - _IndexStat-blockdelete

These estimations may differ because:
1. We can't get the _IndexStat/_TableStat snapshots at exactly the same time;
2. _TableStat-create and _TableStat-delete are themselves inaccurate estimations when changes are undone.

In most cases the difference between update1 and update2 is negligible.

BTW, _LobStat-update is NOT what you might think!

Let's assume an AB index (with two components - the A and B fields) has non-zero updates while the updates of a BC index (with components B and C) are zero. This means the A field is updated while the B and C fields are not. Let's also assume the number of updates of the AB index matches the number of record updates; then we know the A field was the only field updated in the table. If the record updates are higher, then unindexed fields are being updated too. Of course, we can't always get statistics at the field level, but we can try.

Updates of character fields contribute the most to changes in record size. Knowing which character fields are updated in a table with high record fragmentation may help to set the optimal Create Limit.

BTW, an online dbanalys could dump _TableStat/_IndexStat/_LobStat to report statistics since the last db startup. Would that be hard for Progress development to do? Would it help us analyze data usage? What a grumpy old man! :)
 
I agree that 65 is a more accurate estimation than 64.
For record block overhead, we should also add four bytes for the RM header.

Should we always set a cluster size to 512 for all table areas? I'm not sure, but maybe.
This brings up the larger question of rules of thumb. What's the right way to tune X in all cases? In a few lucky cases, we have an answer, e.g. -omsize should be at least the record count of _StorageObject. In most cases, there is no definitive answer, or even a straightforward method for finding the answer.

I always use 8 KB block size when creating a database. So each storage object in an area with the largest cluster size uses a minimum of 4 MB of disk space. Many applications have optional features or modules, so some deployments of those applications may have hundreds or even thousands of completely empty tables and indexes. If you have a thousand such objects, you are spending 4 GB of space for nothing. The cost is extra disk space used by every copy of the database, larger backups, extra time to back up, extra time to restore.

But I would still suggest an unpopular idea: put all empty tables, together with their indexes, into a separate Type I "quarantine" area, provided you use a tool to monitor the sizes of the areas. If one of the empty tables starts growing, it means the application has started using new functionality. As a DBA I would like to be notified about such an event, and not only because I should move the "awakened" tables to new areas.
Perhaps literally unpopular, meaning that it is not done often, as opposed to being a good idea. I actually like this idea and to me it is the best approach you can take when we don't have deferred storage allocation for empty objects. It minimizes the costs of empty objects, and it's probably the one good use case for Type 1 architecture.

But this approach has its own costs, and it isn't for all databases. Not all databases are equally loved. They don't all have dedicated DBAs and they aren't all monitored. But for a database that is actively monitored by a knowledgeable DBA, this is a great strategy.

The Toss Limit just divides the data blocks into two categories - the blocks on/off the RM chain. These categories are used by the space management algorithms, but the Toss Limit does not define how much space should be left free in data blocks.
I get the definition of toss limit. But I don't think of it as defining how much space is left in a block. Setting aside record updates, how much space is left in a block is the block size minus the block overhead minus the size of the fragments that are created while the block remains on the RM chain. Once the block is tossed off the RM chain (or, probably, when it is relegated to the back of a growing RM chain), it won't be eligible for further creates, so that effectively determines how much empty space is left. Is my thinking incorrect?
 
For record block overhead, we should also add four bytes for the RM header.

You are correct: the free space in a data block is decreased by RECORD-LENGTH(new record fragment) + 4 bytes: 2 bytes in the record offset directory and another 2 bytes that store the size of the record fragment. I'm tempted to call these 2 bytes of fragment size plus the fragment body itself a "record". It looks like I need a tattoo: "battery not included".

Once the block is tossed off the RM chain (or, probably, when it is relegated to the back of a growing RM chain), it won't be eligible for further creates, so that effectively determines how much empty space is left. Is my thinking incorrect?

Just a note: the first block of the RM chain can have free space less than the toss limit - if a record in the block is expanded while the block is at the head of the chain. Then the block will be removed from the chain by the next create operation. Though I might be misinterpreting the results of my tests - sometimes I saw different behavior.

Progress can create a new record (fragment) only in the first block of the RM chain, and only if, after the create operation, the block will still have at least the create limit of free space. So the toss limit must always be larger than the create limit. But Progress does not check this - it's a bug.

There is an undocumented limit for large records. Large records are the ones with a size greater than or equal to dbblocksize - 120. The "limit" is (dbblocksize - 118) * 0.7. Progress will not use blocks with free space less than this limit, but will not remove those blocks from the chain either. For large records Progress ignores the create limit as well. Paradox: if a record is a bit less than dbblocksize - 120 and the first block on the chain has enough free space for the record (but not for the record + create limit), then Progress will skip the block. But if the record were a few bytes larger, then Progress would change its mind and use the block to allocate the record.
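Plugging an 8 KB block size into those formulas makes the numbers concrete:
Code:
Large record threshold:  8192 - 120         = 8072 bytes
Free-space "limit":     (8192 - 118) * 0.7  = ~5652 bytes
So, per the paradox above, a chain block with 8100 bytes free would be skipped for an 8070-byte record (8070 + create limit > 8100) but accepted for an 8075-byte record, because the create limit is ignored for large records.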

These undocumented limits are used with LOB fields as well. The space management algorithms used for large records and LOBs are very inefficient, but they are undocumented and invisible to users - that's why they remain beyond criticism.
 
I'm tempted to call these 2 bytes of fragment size plus the fragment body itself a "record".
I do consider those two data structures as part of the fragment, as they are not pre-allocated. There is one each per fragment. I was referring to the RM header, the four-byte structure between the block header and the row directory, which is a fixed part of the block overhead.
 
the four-byte structure between the block header and the row directory, which is a fixed part of the block overhead.
Code:
RMBLK:
0040 numdir:       0x02               2
0041 freedir:      0xfd               253
0042 free:         0x1f8c             8076
0044 entry #  0:   0x1fef             8175
0046 entry #  1:   0x1fd4             8148

"numdir" and "freedir" are 1 byte fixed fields. "free" is 2 bytes.
Offset directory is a variable structure. Each entry uses 2 bytes.

BTW, dbrpr seems to have been updated in V12.8: "entry #".
 