Dump and Reload - Can I Reduce the Fixed Database Size?


My client uses Progress 10.2B on Linux. The database currently uses 45+ GB on disk. It hasn't been dumped and reloaded in years, so fixed extents were just added every time the database needed to grow.

When I run a database analysis it says the actual data size is 17.4 GB: tables 14.4 GB and indexes 3 GB.
Looking at the .st file, the database has many extents that are larger than 512,000 KB.

I am thinking of doing a dump and reload.
I was also thinking of:
a) Reducing the fixed size of the database to 22 GB, allowing fixed sizes of 17.5 GB for tables and 4.5 GB for index space. Obviously I would also leave an extra variable extent as well.
b) Making all the database extents the same size: 512,000 KB.
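
For what it's worth, option (b) might look something like this in a .st structure file. This is just a sketch: the area name, area number, and paths are made up for illustration, and extent sizes in a .st file are specified in KB:

```
# hypothetical fragment of mydb.st: uniform 512,000 KB (500 MB) fixed
# extents, plus one variable-length overflow extent at the end
d "Data_Area":7,64 /db/mydb_7.d1 f 512000
d "Data_Area":7,64 /db/mydb_7.d2 f 512000
d "Data_Area":7,64 /db/mydb_7.d3
```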

I am wondering if anyone has any information on the impact of doing this. In particular, is it okay to make the fixed database size smaller? Any advice would be appreciated.
Control Area – 1



                                  -Record Size (B)-      ---Fragments---  Scatter
Table                 Records   Size   Min   Max  Mean       Count  Factor  Factor
Totals:              85519674  14.4G     6 30650   180    86261330     1.0     1.7

5330772 RM block(s) found in the database.
37.11% of the RM block space is used.

                    Records              Indexes             Combined
NAME                Size   Tot %         Size   Tot %        Size   Tot %
Total               14.4G   82.6         3.0G   17.4         17.4G  100.0

Rob Fitzpatrick Sponsor
This is a pretty small database. I wouldn't worry about managing lots of little fixed-size extents at all; you have better things to do with your time. I would make a simple structure with one variable-length extent per storage area. Ensure large file support is enabled.
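
For example, a minimal structure along those lines might look like the following .st sketch. The area names, numbers, and paths are assumptions for illustration; an extent line with no `f <size>` clause defines a variable-length extent:

```
# hypothetical simple.st: one variable-length extent per storage area
b /db/mydb.b1
d "Schema Area":6,64 /db/mydb.d1
d "Tables_data":7,128;8 /db/mydb_7.d1
d "Indexes_index":8,128;64 /db/mydb_8.d1
```

Large file support can then be enabled, with the database offline, via `proutil mydb -C EnableLargeFiles`.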

OpenEdge 10.2B is nearing its end of life. Even the OpenEdge 11.x family of releases is nearing its end. 11.5.1 will be retired in two months. 11.6 will be retired in about a year and a half. All other 11.x releases are already retired. You should think seriously about upgrading your client to OpenEdge 11.7.

Here are my general rules for structure design:
  • It is a good idea to have an area naming convention, e.g. table areas have a name ending in “_data” and index areas have a name ending in “_index”.
  • No application objects in the schema area.
  • All application objects in Type II storage areas.
  • Each application storage area contains only one object type: tables or indexes or LOB columns.
  • Every very large table is in its own storage area.
  • The indexes of each such table are in their own area.
  • One data area for small-to-medium-size tables.
  • One index area for the indexes of those tables.
  • Each table that has one or more word indexes is in a wordidx_data area (unless it is a very large table and already has its own area).
    • This is to address the restriction on the use of the -datascanthreads option of proutil idxbuild.
    • The word indexes themselves do not have to be segregated, just the table. If the indexes are small, they can go in a multi-index area.
  • Area blocks per cluster
    • Tables:
      • 512 for fast-growing tables
      • 8 for the rest
    • Indexes:
      • 512 for indexes of fast-growing tables
      • 64 for the rest (not 8, due to the potential worst-case cost of block splits)
  • Area records per block
    • Option 1:
      • For large, single-table areas, tune it for mean record size
      • 128 for the rest
    • Option 2:
      • 128 for everything
  • After imaging:
    • Create 8 variable-length AI areas
    • Use the AI archiving daemon to do time-based switching, based on the client's RPO
If it were a very large database you might have to do more analysis and spend more time crafting the structure. For a DB of this size, the above should do fine.
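
As a concrete sketch, those rules might translate into a structure file like this. Every name, area number, and path here is hypothetical; in each area line the value after the comma is records per block and the value after the semicolon is blocks per cluster:

```
# hypothetical mydb.st following the conventions above
b /db/mydb.b1
# schema area: no application objects
d "Schema Area":6,64 /db/mydb.d1
# a very large, fast-growing table and its indexes, each in its own area
d "order_data":10,64;512 /db/mydb_10.d1
d "order_index":11,64;512 /db/mydb_11.d1
# small-to-medium tables and their indexes
d "misc_data":20,128;8 /db/mydb_20.d1
d "misc_index":21,128;64 /db/mydb_21.d1
# tables that carry word indexes
d "wordidx_data":30,128;8 /db/mydb_30.d1
# after imaging: variable-length AI extents (a3 through a8 alike)
a /db/ai/mydb.a1
a /db/ai/mydb.a2
```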

Cringer Moderator
Dump and reload is a good idea! I second Rob's points above. From experience, you need to test the process before doing it in live. That will give you the opportunity to load everything into single variable extents and get an idea of how much space each extent takes up.
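
One way to see how much space each extent ends up using after a trial load is simply to measure the extent files on disk. A small sketch (the `*.d*` path pattern is an assumption about your extent naming; adjust to suit):

```shell
#!/bin/sh
# Report disk usage of each database extent file, largest first,
# so fixed extents can be sized sensibly for the real reload.
report_extent_sizes() {
  dir=$1
  # du -k prints size in KB followed by the file path;
  # sort -rn puts the largest extent at the top.
  du -k "$dir"/*.d* 2>/dev/null | sort -rn
}
```

For example, `report_extent_sizes /db/testcopy` prints one line per extent: its size in KB, then its path, largest first.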
A lot of people preach just using variable extents these days because disk is so quick. Personally, I still try to avoid variable extents because we have customers where there is a noticeable slowdown.


You've got to be adding a lot of data very fast (or perhaps running in a virtualized Windows environment with a "filer" for storage) for variable vs fixed to make a noticeable difference. Obviously one of the most noticeable ways to measure this is during the "load" portion of "dump & load" ;) Another is when restoring to an uninitialized target.

Judging from the state of affairs at many customer sites over the years, most people do not pay much attention to managing their extents. They often start out with fixed extents and a variable "overflow", but most of them just let the overflow grow. Quite often there is nobody who even knows that this is happening. A very small number of people obsess over managing extents. They are often fixated on very small extent sizes for reasons that last made sense circa 1990.

If you really care about such things ProTop has an excellent metric to help you out -- it keeps track of the %full of the LAST fixed extent and if you have the commercial version of ProTop you can set an alert when a threshold is reached. If you want to go crazy you can even have that be a "scripted" alert that will automatically add a new extent online -- this is one of ProTop's "cognitive" features ;)
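
A scripted alert along those lines might look like the sketch below. The threshold logic is plain shell; in a real deployment the percent-full figure would come from ProTop (or the _AreaStatus VST), and the commented-out `prostrct addonline` call is the actual Progress command for adding extents to a running database. The database name and add.st path are assumptions:

```shell
#!/bin/sh
# Sketch: react when the last fixed extent of an area crosses a
# fullness threshold. The caller supplies the current percent-full.
THRESHOLD=90

check_last_extent() {
  pct_full=$1
  if [ "$pct_full" -ge "$THRESHOLD" ]; then
    echo "ALERT: last fixed extent ${pct_full}% full - adding extent"
    # prostrct addonline mydb add.st    # uncomment in a real deployment
  else
    echo "OK: last fixed extent ${pct_full}% full"
  fi
}
```

For example, `check_last_extent 95` raises the alert, while `check_last_extent 50` reports OK.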

I am often an advocate of pure variable extents in many situations -- if your db is small (perhaps less than 100GB) and you have multiple storage areas then variable extents are much simpler to work with and manage than individually monitoring a horde of extents is. Having said that -- I do prefer to keep the extent sizes "reasonable". If an extent is getting to be more than, oh say 32GB, I probably want to make that a fixed extent and add a variable.

Even though I tend to use variable extents a lot, when I get the chance (perhaps because I am dumping & loading) I often take the opportunity to initialize fixed extents to be big enough to hold the known data and some reasonable amount of growth.

FYI -- you do NOT want to completely ignore extent size and let a variable extent grow forever. There is a "lightly documented" extent size limit of 1TB and very, very, very bad (like *unrecoverable* bad) things will happen if you have the misfortune of hitting that limit. You better make sure that you have after-imaging enabled and that your backups are in tip-top shape.
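
If you do run pure variable extents, it is worth a simple guard against that limit, e.g. a cron job that flags any extent file approaching 1 TB. In this sketch the ~900 GB warning threshold and the `*.d*` naming pattern are assumptions:

```shell
#!/bin/sh
# Flag any database extent file whose size is approaching the ~1 TB
# extent limit, so a fixed extent can be added before disaster strikes.
WARN_KB=$((900 * 1024 * 1024))   # warn at ~900 GB

find_oversize_extents() {
  dir=$1
  # -size +Nk matches files larger than N kibibytes
  find "$dir" -name '*.d[0-9]*' -size +"${WARN_KB}k" -print
}
```

Any path printed by `find_oversize_extents /db` is an extent that needs attention now, not later.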

PS -- a pure "all variable" approach often makes a lot of sense in DEV & TEST environments.


Thanks for everyone's assistance on this. Just to provide some further info on how I got on: I was able to make the database smaller, and I was also able to make the extents all the same size. The database ended up being larger than I anticipated: 27 GB in total (tables 22.5 GB and indexes 4.5 GB) instead of the 22 GB I'd hoped for. But it was still a good saving on space, and there weren't any ill effects.

The one thing I would say is, given that my actual database size was larger than what my dbanalysis said, I would endorse Cringer's point that testing this before doing it in live is a must.