Error DB extent has reached the 80% threshold for block usage

NewDBA

New Member
Hi,

Today I was adding extents to one of the DBs when I hit this error:

Code:
Prod01:PROGRESS:/live/db/billingDB:> prostrct add billingDB add_06102014.st

Formatting extents:
    size       area name  path name
  524288       AlphaData  /live/db/billingDB/data01/billingDB_7.d3 00:00:10
      16       AlphaData  /live/db/billingDB/data01/billingDB_7.d4 00:00:00
12800000        billData  /live/db/billingDB/data01/billingDB_9.d5 00:04:08
      16        billData  /live/db/billingDB/data01/billingDB_9.d6 00:00:00
 3780000       billIndex  /live/db/billingDB/index01/billingDB_10.d5 00:01:16
      16       billIndex  /live/db/billingDB/index01/billingDB_10.d6 00:00:00
 3780000   NewChargeData  /live/db/billingDB/data01/billingDB_13.d5 SYSTEM ERROR: Attempt to read block 256 which does not exist. (210)
** Save file named core for analysis by Progress Software Corporation. (439)
Quit (core dumped)

I saw that billingDB_13.d5 had been created on disk but was not added to the database .st file. I deleted billingDB_13.d5 and tried again, but no luck. We have approx 100 GB of space left for variable extents of this DB, so I started the DB and services. Later I saw the following error in the DB log file:

Code:
05:04:27 Usr    48: bkxtn: WARNING: Area: 13 extent /live/db/billingDB/data01/billingDB_13.d4 has reached the 80% threshold for block usage - current block hiwater 14640364

billingDB_13.d4 is a variable extent. Please see the relevant part of the DB .st file:

Code:
 #
d "NewChargeData":13,128 /live/db/billingDB/data01/billingDB_13.d1 f 57671680
d "NewChargeData":13,128 /live/db/billingDB/data01/billingDB_13.d2 f 29014656
d "NewChargeData":13,128 /live/db/billingDB/data01/billingDB_13.d3 f 10240000
d "NewChargeData":13,128 /live/db/billingDB/data01/billingDB_13.d4
#

I searched for how block usage is calculated and found that this extent can take up to 137 GB of data, whereas currently the data written to the .d4 variable extent is approx 19 GB. I couldn't add a fixed extent in time due to some other activities in line and the absence of a DR/failover setup.

The block size details are:

Code:
Database Status:
         Database version number: 8283
                  Database state: Open (1)
          Database damaged flags: None (0)
                 Integrity flags: None (0)
     Database block size (bytes): 8192
Total number of database blocks: 104678799
Database blocks high water mark: 81463973
Free blocks below highwater mark: 296
   Record blocks with free space: 264
Before image block size (bytes): 8192
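As a rough cross-check (my own arithmetic, not output from any Progress utility: I am assuming the area is capped by 32-bit RECIDs, i.e. 2^31, combined with the 128 records per block shown in the .st file), the hiwater figure from the warning can be interpreted like this:

```python
# Interpreting the bkxtn warning, assuming a 32-bit RECID cap (2**31)
# and the 128 records-per-block setting from the .st file.

BLOCK_SIZE = 8192            # "Database block size (bytes)" from the status output
RPB = 128                    # from the .st line: d "NewChargeData":13,128 ...
RECID_LIMIT = 2 ** 31        # addressable RECIDs per area with 32-bit RECIDs

max_blocks = RECID_LIMIT // RPB              # max blocks addressable in the area
print(max_blocks)                            # 16777216 blocks
print(max_blocks * BLOCK_SIZE)               # 137438953472 bytes, ~137 GB ceiling

hiwater = 14_640_364                         # block hiwater from the warning
print(round(100 * hiwater / max_blocks))     # 87 -> ~87% of the ceiling used
```

So on these assumptions the "80% threshold" warning actually fired with the area already past 87% of its theoretical ceiling.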


Progress DB details:

Version: 9.1E
Storage: NetApp
OS: Solaris 10

QUESTION: This DB is written to at approx 2 GB per day. How many days do I have before the block usage reaches 100%?

What impact will I see if it reaches 100%?

We can't upgrade the Progress version. With no DR/failover setup and downtime very tough to get, is there any other way to evade this than doing a dump & load or index rebuild (DB size is approx 600 GB)?
 

RealHeavyDude

Well-Known Member
Just for clarification: the .st file is an ASCII representation of a physical database structure that can be used to create a database and add extents. The physical structure of the database is stored in the .db file. The .st file is not necessary to run a database, and therefore its contents are not relevant for that matter. But you should keep your .st file in sync with the real database structure because you might/will need it in other scenarios, for example when you need to restore the database from your last good backup. If you don't have a valid .st file then, in the worst case, the restore will fail or will create a file structure that differs from the last structure your database utilized.

prostrct add will not update the .st file - you must do this manually by executing prostrct list.

I suppose that your problem is that you've hit the maximum number of records ( or rather 32-bit RECIDs ) that can be addressed in the storage area. I am not very familiar with what the limit in 9.1E was ( which is conceptually at least 15-year-old software ), but I think it is 2 billion RECIDs per area in that version.

You might want to have a look here http://knowledgebase.progress.com/articles/Article/P118396

As far as I know the only way to recover is to re-structure the database ( split the offending areas ) and the only way to achieve this is dump & load.

And yes, there will be a downtime, and since this is 9.1E ( type I storage areas, and many other performance enhancements have been introduced since then ) and you mention NetApp, it will probably not be just a few hours. And you need to fix this ASAP, because whenever the database hits the limit ( which might well be tomorrow ) it will go down immediately. And it will stay down then.

A 9.1E database that large, with that many records in a single storage area, should be pro-actively monitored for that limit in particular. Even if you had a DR/failover setup, the problem would just happily have been replicated to your DR/failover, which would have rendered it useless too.

We hit a problem with that limit in OE10.1C. Not because of the database, which happily utilized 64-bit RECIDs - but because of the 32-bit GUI client, which got into trouble so that queries under certain conditions did not retrieve all records from the database.

I know that does not help you in your situation, but whenever I hear that an upgrade is not possible - given that you are probably running a business-critical system on 15+ year old software - that is a recipe for disaster.

Heavy Regards, RealHeavyDude.
 

NewDBA

New Member
Thanks for the reply, RHD. I read the KB article you suggested and it was really helpful.

Today I ran dbanalys on the DB and compared it with a month-old analysis. There is only one table in that area, and approx 10 GB was written in the last 30 days.

If it continues to be written at the same speed, I think it will take approx 50 days to reach 137 GB. Currently it is at 113 GB. I have calculated this on the basis that max rows per area is 2 billion, my DB block size is 8192, and records per block is 128. Please advise if I am calculating correctly and correct me if I am wrong.

Tomorrow we will decide whether to make a new DB and make this one read-only (which is a standard practice here every year) or move data from this DB to the old read-only DB to buy more time.
 

RealHeavyDude

Well-Known Member
What factors in here is the records-per-block setting, which is 128, and the mean record size of the table, which you find in the dbanalys report. If the block size of the database divided by the mean record size exactly matches the records-per-block setting of the storage area then, in theory, you are not wasting any RECIDs and you might be able to reach the 2 billion RECID limit per storage area. Most likely it won't be an exact match. If it is less, then you are wasting RECIDs, which means you won't reach the 2 billion records per storage area ( per table, since it is the only table in the area ) limit.

The largest size to which your area can grow is the block size of the database divided by the records-per-block setting, multiplied by 2 billion ( 8,192 / 128 * 2,000,000,000 ), which is roughly 128 GB. So you will hit the limit of the area before you hit the limit in the file system - your storage area is already 88% utilized ( 113 GB ).
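Spelled out as arithmetic (the only assumption beyond the figures above is also showing the exact 32-bit value, 2^31, next to the round "2 billion"):

```python
# Each RECID maps to a fixed share of a block: block size / records per block.
bytes_per_recid = 8192 / 128                 # 64 bytes of area space per RECID

# Rounded limit ("2 billion") vs the exact 32-bit limit (2**31):
print(bytes_per_recid * 2_000_000_000)       # 128000000000.0 -> roughly 128 GB
print(bytes_per_recid * 2 ** 31)             # 137438953472.0 -> ~137 GB exact
```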

Heavy Regards, RealHeavyDude.
 

NewDBA

New Member
Thanks for the reply, RHD.

The mean record size is 83 in the dbanalys. With that, the value I get is 98.
In the last month the number of records written is 124,522,631, and the size increase is 10 GB.

For the largest possible size of my DB area, I used values from the following link:
http://knowledgebase.progress.com/articles/Article/P106246

This suggests maximum rows as: 2,147,483,648

If this is accurate then (8,192 / 128) * 2,147,483,648 = 137,438,953,472 bytes.

So it comes to be 137 GB.

So it looks like I am at about 82%, as I only got the alert on Monday morning.

So 137 - 113 = 24 GB left.

At 10 GB a month, I am expecting some 60 days of time left. Hope I am correct on that.
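A straight linear projection from these numbers (a sketch only; growth could accelerate, and the effective ceiling may be lower because of wasted RECIDs per block) looks like this:

```python
ceiling_gb = 137.0       # exact 32-bit ceiling, rounded down
used_gb = 113.0          # current size of the area
growth_gb_month = 10.0   # from comparing the two dbanalys reports

days_left = (ceiling_gb - used_gb) / growth_gb_month * 30
print(round(days_left))  # 72 -- so budgeting ~60 days leaves a safety margin
```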
 

RealHeavyDude

Well-Known Member
You are correct - I just used an approximation for the 2 billion RECID limit per storage area.

You need to be aware that this is the number of RECIDs that can be addressed in the storage area with the 32-bit RECIDs used in 9.1E. Since you are "wasting" roughly 30 RECIDs per database block ( 128 addressable slots versus the ~98 records that actually fit ), you won't be able to store that many records in the storage area. The valid values for records per block of a storage area are powers of 2 from 1 to 256. Usually you take the next bigger value than the optimum ( database block size divided by mean record size ) - which in your case is 128.
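The records-per-block selection rule described above can be sketched as follows (my own illustration, not a Progress tool):

```python
def suggested_rpb(block_size, mean_record_size):
    """Next power of 2 (1..256) at or above block_size / mean_record_size."""
    fit = block_size // mean_record_size   # records that physically fit
    rpb = 1
    while rpb < fit and rpb < 256:
        rpb *= 2
    return rpb

print(suggested_rpb(8192, 83))   # 128 -- the setting this area already uses
```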

A RECID is a physical address in a database block that can hold a record.

Since there is only one table in the storage area, and tables cannot be split across storage areas in 9.1E, you will hit the limit rather soon, and then you are pretty much hosed. Plus, you can't solve it with a better storage area design in 9.1E, as this version was never designed to store more than 2 billion records per storage area.

You won't like it, but, I see only two options for you:
  1. Implement a process that purges records that are no longer needed from that table on a regular basis.
  2. Upgrade to a reasonably recent version of OpenEdge 11, which does not have this limitation anymore as the database uses 64-bit RECIDs - which raises this limit considerably.

Heavy Regards, RealHeavyDude.
 

NewDBA

New Member
Hi,

Urgent help needed.

In addition to the above issue, the block usage reached 90% some time back. We did create a new DB but, due to an application design issue, it will take some 15 days to completely stop writing into the current DB. Just now this DB went down with the following error:

SYSTEM ERROR: bkxtn: Area: 13 extent: /live/db/billingDB/data01/billingDB_13.d4 Attempt to exceed the maximum number of user-available blocks per area - Max: 16121855 Current: 16121756 Extend: 128

I started the DB back up after understanding from the following link that the DB has started writing into the 5 GB reserved space and that no more warnings will be issued:

http://knowledgebase.progress.com/articles/Article/P130582


But the DB went down again after some time, throwing the following error:

SYSTEM ERROR: bkxtn: Area: 13 extent: /live/db/billingDB/data01/billingDB_13.d4 Attempt to exceed the maximum number of user-available blocks per area - Max: 16121855 Current: 16121852 Extend: 64

As per the above article, I understand that my DB should run and the extent should be allowed to grow until 128 GB, as currently I am at 123.8 GB (1.4 billion records in the only table present in this data area).

I have started the DB up again, but I am not sure when it is going to go down again. I am already working on the possibility of moving data to the new DB so that there is no need to write anything into this DB anymore, but I still need some advice on my options here.

A fellow DBA advised me to make the variable extent fixed and add a new, smaller fixed extent, which I did successfully. Will this help? What else can I do?

Need a quick response please.

Regards.
 

Cringer

ProgressTalk.com Moderator
Staff member
I think that making the variable extent fixed and adding an extent should certainly help. Whatever the case, you should never actually use the variable extent. You should add neat fixed extents with a variable at the end in case it's necessary, but you should monitor your DB regularly and add fixed extents as the others get full. There are a number of reasons for this, not least that allocating more space to a variable extent takes time, meaning that writes are slower than necessary.
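To illustrate that layout, a hypothetical .st fragment might look like this (the area name, area number, paths and sizes here are made up; .st sizes are in 1 KB units, so 1048576 = 1 GB):

```
#
d "SomeNewArea":20,128 /live/db/billingDB/data01/billingDB_20.d1 f 1048576
d "SomeNewArea":20,128 /live/db/billingDB/data01/billingDB_20.d2 f 1048576
d "SomeNewArea":20,128 /live/db/billingDB/data01/billingDB_20.d3
#
```

Fixed extents are pre-formatted, so writes never pause to grow the file; the trailing variable extent is only there as a safety net.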
Are you still on v9? You should be getting to 10.2B or, better, 11 ASAP. Type II areas will offer you all sorts of benefits that you are missing out on.
 

NewDBA

New Member
Yes, I am on version 9 as of now, but right now I need a quick fix to be able to take a decision on the upgrade.

Suddenly I realized that my records per block is 128 and block size 8K, so my reserved space is not 5 GB but 1 GB.
As of now nothing is being written to the DB, but soon it will be.
Now the burning question I have is:

If I move some data from this table to some other DB, will my total block usage reduce? Please mind that the data will be moved from fixed extents, as the variable extent which was being written to before the DB shutdown is now marked as fixed, and I have now added a new fixed extent of 1 GB plus a variable extent.
 

Cringer

ProgressTalk.com Moderator
Staff member
I've never done any v9 DBA work, but I'd have thought that deleting data (moving it) should solve your problem in the short term. I've reread the details of your error though, and the limit you're hitting seems to be an area limit, not an extent limit, so adding extents won't help. Could you move a few tables to a new storage area?
 

NewDBA

New Member
Thanks for the reply.

Yes, it's an area limit issue. There is only 1 table and no index in this area, therefore moving data is the only option we have, I guess. But the question I have is: will block usage go down if I only move/delete data from this table, or do I also have to do a dump and load? And will a dump and load help at all after moving/deleting the data?
 

Cringer

ProgressTalk.com Moderator
Staff member
It's a good question. I'm not sure if it will reduce block usage, but it will stop it growing any further as there will be free blocks for the DB to write into.
 

TomBascom

Curmudgeon
This DB is written to at approx 2 GB per day. How many days do I have before the block usage reaches 100%?

What impact will I see if it reaches 100%?

We can't upgrade the Progress version. With no DR/failover setup and downtime very tough to get, is there any other way to evade this than doing a dump & load or index rebuild (DB size is approx 600 GB)?

It sounds to me like nobody is taking this database seriously. It must not be very important.
 

TomBascom

Curmudgeon
When you run out of RECIDs and cannot extend the area, your only option is to purge some data.

How you do that depends entirely on the application.

One technique that some people use is to rename the table and start a new area. You can rename the old data or rename the new -- whichever is easier. Accessing both will obviously require some coding changes. Possibly *lots* of coding changes. Which can impact the approach that you take.

Or you can do the sensible thing and upgrade to a modern release.
 

NewDBA

New Member
Thanks for the reply Tom.

The application team is moving data from this DB to the new one and has made sure no data is added to this DB anymore. This is being done online, so there is no business impact.
When this is over I am going to go full force for the upgrade, as that is the best solution to many problems we face here.

Just one question here: as the data is being loaded into the new DB, the BI file size is increasing. I have increased its limit twice (first to 1.5 GB and then to 1.9 GB) using the proquiet command. I have two extents for the BI file, one fixed of 2 GB and the second variable. Can I increase the BI file size limit further, to 2.5 GB? Is there any concern/suggestion/advice here from Progress or from a DBA practices perspective?

I have a planned downtime 2 weeks down the line. I was planning to truncate the BI for this DB then, before restart. Please suggest.
 

Cringer

ProgressTalk.com Moderator
Staff member
How are they migrating the data? The BI grows because you're running transactions. If the transaction scope is large then it will grow like you're seeing. If the data were loaded with small, tight transactions then it would be much less likely to grow the BI.
 

NewDBA

New Member
They are using some old scripts/programs to do it. I haven't seen those. Moreover, I hadn't enabled large files on this DB at the time of creation. Without enablelargefiles, will it allow an extent to grow beyond the 2 GB file size?
 

Cringer

ProgressTalk.com Moderator
Staff member
If large files aren't enabled then the DB will crash as soon as any extent tries to grow beyond 2 GB.
 

RealHeavyDude

Well-Known Member
In theory - with the Enterprise database license - a single variable extent ( file ) can grow until the file system fills up. The largest extent size I use is 16 GB. With the Workgroup database license you have the 2 GB limit per file.

You can increase the limit of the BI to much more than 2.5 GB. But, as Cringer points out, this looks suspicious. A standard load where one record is loaded per transaction should not cause unusual BI file growth. This looks like your load process packs more than one record into a single transaction, or has other issues. Theoretically this could mean that you might blow the BI even if you set the limit to something like 20 GB.

You could run the database with the -i ( no integrity ) parameter which will reduce the load on the BI. But that means that, if anything bad happens, you will need to start over.


Heavy Regards, RealHeavyDude.
 

TomBascom

Curmudgeon
The BI is growing because there is a long-lasting transaction somewhere. That may be because your transfer is deliberately one monster transaction. Or it might be a mistake. Either way it is a bad state of affairs.

If large files are not enabled then no extent can exceed 2GB -- but you can add as many < 2GB extents as you need. The total size of bi notes is only limited by available disk space. (Keep in mind that crash recovery may also grow the bi file, sometimes as much as 3x the size of the pre-crash bi log -- a good rule of thumb is to keep bi space to less than 25% of available disk space on the filesystem holding bi logs...)
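For example, adding two more sub-2 GB BI extents could look like this hypothetical .st fragment (paths and sizes are made up; .st sizes are in 1 KB units, so 1966080 ≈ 1.9 GB; check the v9 prostrct documentation before running anything):

```
#
b /live/db/billingDB/bi01/billingDB.b3 f 1966080
b /live/db/billingDB/bi01/billingDB.b4 f 1966080
#
```

followed by an offline `prostrct add billingDB add_bi.st`, and a fresh `prostrct list` afterwards to keep the .st file in sync.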

Modern versions of Progress allow extents to be added online. Ancient, obsolete and unsupported releases might -- but it has been a very long time since I would have wanted to do such a thing and I do not recall if v9 has that feature. It would be described in the "prostrct" documentation if it exists. The modern syntax is:

prostrct addonline db-name [ structure-description-file ] [ -validate ]
 