Large B1 file size

itobinh

New Member
One of my Progress databases has a B1 file of almost 2 GB.

This does not make sense to me.

The same database on my notebook has a B1 file of no more than 100 MB.

Can anyone help?
 

Casper

ProgressTalk.com Moderator
Staff member
You probably had a big transaction going on.
If you truncate the BI and then bigrow it back to its normal size, everything should be fine again.
You will probably want to find out what caused this big BI growth. Any new program or unusually long processing?

By the way, if you have two physically separate databases then they are never 'the same', so comparing the BI file of one database with that of another tells you nothing.
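
Something along these lines, with the database shut down first; "mydb" is just a stand-in for your own database name and 16 is only an example cluster count:

Code:
proshut mydb -by              # make sure the database is down
proutil mydb -C truncate bi   # truncate the before-image file
proutil mydb -C bigrow 16     # pre-grow the BI by 16 extra clusters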

Regards,

Casper.
 

TomBascom

Curmudgeon
The BI file records transaction notes. The size of the file depends on how much activity there is on your system and on the way that your users commit transactions. One easy way to cause out of control BI growth is to open a transaction and then prompt for user input. If the user leaves for lunch (or goes home for the weekend) leaving the input and the transaction open then the "bi cluster" that records that note cannot be reused. Since the BI file is organized as a ring of clusters this open transaction causes the database manager to add clusters if other users are actively committing transactions. Lots of clusters. Until the original user comes back and finishes the input...
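
A stripped-down 4GL illustration of that pattern -- the customer table here is just the sports demo standing in for whatever your application really does:

Code:
/* Anti-pattern: UPDATE waits for keyboard input while the transaction
   (and its BI cluster) stays open.  Clusters filled by other users
   after this point cannot be reused until this user finally presses GO. */
DO TRANSACTION:
    FIND FIRST customer EXCLUSIVE-LOCK.
    UPDATE customer.name.    /* ...and then the user goes to lunch */
END.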

Another common way to get a really big BI file is to create a really big transaction. Updating a few million records within a single enclosing transaction for example.
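
The usual cure for that one is to commit in batches instead of one enclosing transaction. A rough sketch, again using the demo customer table as a stand-in and 1,000 as an arbitrary batch size:

Code:
/* Update in batches of 1,000 so no single transaction pins the BI. */
DEFINE VARIABLE i    AS INTEGER NO-UNDO.
DEFINE VARIABLE done AS LOGICAL NO-UNDO.

DO WHILE NOT done:
    i = 0.
    DO TRANSACTION:
        FOR EACH customer EXCLUSIVE-LOCK WHERE customer.discount = 0:
            customer.discount = 5.
            i = i + 1.
            IF i >= 1000 THEN LEAVE.   /* close out this batch */
        END.
    END.                               /* the batch commits here */
    IF i < 1000 THEN done = TRUE.      /* last pass had fewer than 1,000 */
END.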

On older versions of Progress (pre v9) there is a hard limit of 2GB of bi space. If you hit it your database is unrecoverably hosed. You don't want that to happen.
 

itobinh

New Member
It is not only the B1 file that gets bigger.

The D1 file is growing like crazy. Normally it is only around 200 MB, but within the last month it has grown to 450 MB.

FYI, we just upgraded the database (dump and load) to the new database structure.

I did try the dump and load again, and that time the resulting database size was OK.
 

Kopperton

Member
I think we have this problem too, but with .d3 files: three .d3 files have been growing a lot since we upgraded SX.enterprise.

We are running on UnixWare and its file size limit is 2 GB. What happens if a file reaches 2 GB? Currently the three are 740 MB, 300 MB and 300 MB;
the rest of the files are around 100 MB each.
 

Casper

ProgressTalk.com Moderator
Staff member
Hi Kopperton,

.d3 files are data extents from different areas. You can add an extra extent any time you like, so that isn't a problem.
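
For example (the database name, area details, path and size below are placeholders -- check the real area number and records-per-block with prostrct list first, and in version 9 the database has to be down for prostrct add):

Code:
--- add.st ---------------------------------------------
d "trlines":12,64 /db/mydb_14.d3 f 500000
--------------------------------------------------------

prostrct add mydb add.st
prostrct list mydb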

Do you know if there was a change in blocksize? That could cause initial growth after conversion.

regards,

casper.
 

Kopperton

Member
Hi Casper, no, nothing has changed. Looking at old backups, these files were also large, but not as large as they are now.

What will happen to them once they reach the 2 GB file limit UnixWare has?
 

TomBascom

Curmudgeon
There has been at least one change -- you upgraded your application. There are lots of ways that that could result in faster growth of any sort of extent. It could be as simple as an application behavior change that stores more data than previously -- after all, you probably upgraded for a reason and most reasons involve data ;-)

Another common source of post-upgrade growth is dumping & loading. Was there a dump & load involved? Even if you didn't change block sizes it is very common for there to be a growth spurt right after a d&l. (The index trees often need to rebalance...)

BTW -- SX.e performance can be greatly improved from its default state. There are a lot of administrative opportunities to make it better within the bounds of their standard configuration.
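
If it turns out to be mostly index bloat from the d&l you can often tighten that back up online with idxcompact -- the database, table and index names below are only placeholders and the trailing number is the target percent utilization. It won't shrink extents that have already grown, but it repacks the index blocks so freed space gets reused instead of the area growing further:

Code:
proutil mydb -C idxcompact trlines.trlines-key 80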
 

TomBascom

Curmudgeon
Kopperton said:
What will happen to them once they reach the 2 GB file limit UnixWare has?
Your database will crash. You don't want that to happen. Depending on the version of Progress and exactly how and why you ran out of space it might be very painful (or impossible) to recover. In the short term keep an eye on it and make sure that you always have plenty of space. In the longer term you need to make sure that after-imaging is enabled, running properly and that you know how to use it to recover.
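
Roughly, turning after-imaging on looks like this -- names, paths and sizes are placeholders, the database is offline, and aimage begin will refuse to run until the database has been backed up:

Code:
# ai.st describes the new after-image extents, e.g.:
#   a /db/mydb.a1 f 500000
prostrct add mydb ai.st

probkup mydb /backup/mydb.bak     # full backup first

rfutil mydb -C aimage begin       # switch after-imaging on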
 

Kopperton

Member
TomBascom said:
Your database will crash. You don't want that to happen.

In the short term keep an eye on it and make sure that you always have plenty of space.

Hi TomBascom. Progress is version 9.1E and I have 10 times more free space than the whole database takes up.

The problem I am raising is: what happens when these .d3 files reach 2 GB, which eventually they will? UnixWare has a 2 GB file size limit. What happens next?
 

Casper

ProgressTalk.com Moderator
Staff member
Hi Kopperton,

Looks like you have some reading up to do :)

Knowledgebase articles:
Add extent: P7697: http://tinyurl.com/n3jfe

Database growth:
20012: http://tinyurl.com/ee96v

And maybe even better:
the database administration guide:
http://www.progress.com/progress_software/products/documentation/openedge10_1a/docs/dmadm/dmadm.pdf

and some more (about Fathom, but it also contains valuable information on Progress database administration):
http://www.progress.com/progress_software/products/documentation/fathom_management/docs/mpf/mpf.pdf

HTH,

Regards,

Casper.
 

TomBascom

Curmudgeon
Kopperton said:
Hi TomBascom. Progress is version 9.1E and I have 10 times more free space than the whole database takes up.

The problem I am raising is: what happens when these .d3 files reach 2 GB, which eventually they will? UnixWare has a 2 GB file size limit. What happens next?

Like I said... you'll crash.

By "make sure you always have plenty of space" I meant both on the filesystem and within the storage area. To monitor free space inside of storage areas you need to use a tool such as ProTop:

http://www.greenfieldtech.com/articles/protop.shtml
 

Kopperton

Member
TomBascom said:
By "make sure you always have plenty of space" I meant both on the filesystem and within the storage area.

For some reason I cannot get ProTop to work -- something about vt102-t and protermcap. I did not dig into that very much, so I ran the code below instead.

On the filesystem it looks like I am OK, but for the storage areas I have no idea whether I have to worry or not.

================
FOR EACH _AreaStatus NO-LOCK:
    DISPLAY _AreaStatus-AreaName LABEL "Area Name"
            _AreaStatus-TotBlocks LABEL "Total Blocks"
            _AreaStatus-Hiwater LABEL "High Watermark"
            _AreaStatus-TotBlocks - _AreaStatus-Hiwater LABEL "Free" (TOTAL).
END.
========================

Code:
Area Name                   Total Blocks  High Watermark      Free
------------------------------------------------------------------
Control Area                          31               5        26
Primary Recovery Area              11998            8640     3,358
Schema Area                         1471            1451        20
trhead                             35309           35299        10
trhead_idx                         24001           10729    13,272
trlines                            61133           61067        66
trlines_idx                        24001           15558     8,443
trother                            39085           38984       101
trother_idx                        24001           11984    12,017
trsm                               24001           10883    13,118
trsm_idx                           24001            3540    20,461
trstat                             24001           12608    11,393
trstat_idx                         24001            2828    21,173
trtrans                            88813           88737        76
trtrans_idx                        30829           30801        28
trcust                             24001               2    23,999
trcust_idx                         24001               2    23,999
                                                          --------
                                                   151,560 TOTAL
===============================

df.
 
filesystem          kbytes   used     avail    mounted on
/dev/root           4096575  2496874  1599701  /
/dev/stand          32130    5563     26566    /stand
/proc               0        0        0        /proc
/dev/fd             0        0        0        /dev/fd
/dev/_tcp           0        0        0        /dev/_tcp
/dev/dsk/c0b0t0d0sc 2465977  1479757  986220   /rd
/processorfs        0        0        0        /system/processor
/dev/dsk/c0b0t0d1s1 39929552 5557176  34372376 /db
/dev/dsk/c0b0t0d1s2 13309848 4631600  8678248  /usr
 

TomBascom

Curmudgeon
That looks like an out-of-the-box Trend installation, AKA "a target rich environment". I'd be happy to help with that on a professional basis ;-)

Some of those storage areas are full and likely using variable extents. (The ones with very small values in the "free" column). This means that your cost of an IO to those areas is much higher than it should be. (Which may or may not be a problem depending on your workload.)

But none of them appear to be in any imminent danger of hitting 2GB.

I wouldn't panic but I'd start planning to re-arrange the storage area configuration to:

1) get objects out of the schema area
2) eliminate the growth of variable extents
3) better manage SX.e through dedicated storage areas
4) implement after imaging -- without ai you are exposed to losing data

I'd also get ProTop working. You probably just need to set TERM=vt100 before launching it.
 

bulklodd

Member
Tom,

you wrote:

If the user leaves for lunch (or goes home for the weekend) leaving the input and the transaction open then the "bi cluster" that records that note cannot be reused. Since the BI file is organized as a ring of clusters this open transaction causes the database manager to add clusters if other users are actively committing transactions. Lots of clusters. Until the original user comes back and finishes the input...

Could you remind me why Progress can't skip a busy cluster and use a free one? For instance, I've got four BI clusters: the first is busy with an incomplete transaction, while the other three are filled up but don't contain any notes for incomplete transactions, IOW they're free. Why will Progress create a new cluster instead of reusing a free one?
 

TomBascom

Curmudgeon
You would need to either track all of the notes related to every open transaction (which would take a lot of memory -- especially for large systems) or you would need to scan the cluster (which would take a lot of IO). Both of those approaches would hurt performance and add complexity.

I think that it's a reasonable trade-off.
 

bulklodd

Member
Thank you, it refreshed my memory :)

You would need to either track all of the notes related to every open transaction (which would take a lot of memory -- especially for large systems)
By the way, I daresay that's not quite right, because all you need is to keep the BI cluster number in the internal transaction structure -- I mean the _Trans VST. A couple of hundred bytes is hardly a problem, whereas the size of a new cluster is far more significant.
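
Incidentally, _Trans is already handy for spotting the culprit before the BI blows up -- something like this lists the open transactions (the exact field names may differ between releases, so check them against your own VST schema):

Code:
/* List active transactions; the long-running ones are the BI growth
   suspects.  Verify field names against your release's VST schema. */
FOR EACH _Trans NO-LOCK WHERE _Trans._Trans-Usrnum <> ?:
    DISPLAY _Trans-Usrnum   LABEL "Usr"
            _Trans-Num      LABEL "Trans #"
            _Trans-State    LABEL "State"
            _Trans-Duration LABEL "Secs Open".
END.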
 

TomBascom

Curmudgeon
I don't think it's quite that simple... you'd need to (at least) keep track of each cluster that each open trx has at least one open note in (transactions consist of multiple notes which can be in multiple clusters).

There may also be other complexities that I'm not thinking of.
 

bulklodd

Member
I don't think it's quite that simple... you'd need to (at least) keep track of each cluster that each open trx has at least one open note in

Yes, I would, but I'd create just another VST to keep that stuff :)
I suppose there are serious reasons for operating the BI file that way; nevertheless, its overflow can cause serious damage as well, because someone might forget to commit a transaction.
 