Re: UNIX Maximum file size exceeded

ggovotsis

New Member
Hi all,

I am trying to dump .d files from an 85GB DB. After hours of working, there is one huge table that exceeds the 2GB limit, and as a result I am getting "UNIX maximum file size exceeded. (303)". How can I bypass this file size limit?

Thanks,

George Govotsis
 
A dictionary dump and load cannot bypass that limit.

A binary dump will either create multiple .bd files (.bd1, .bd2...) which can be binary loaded individually or, in versions of Progress that support large files, will just create one big .bd file.
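For example, a rough sketch with placeholder names (mydb, newdb, customer and /dumps are stand-ins for your own source database, target database, table and directory):

proutil mydb -C dump customer /dumps
proutil newdb -C load /dumps/customer.bd
proutil newdb -C idxbuild all

Each .bd piece gets its own proutil load, and the indexes are typically rebuilt afterwards with idxbuild.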

Or you can use something like Highly Parallel Dump and Load and bypass the temp file creation, and all of that I/O, entirely.
 
Dear Tom,

The DB is our back-office system and contains multiple companies.
I am using a script that scans the DB and extracts the data linked to the company I have specified in the script. In total there are 900 tables, and a couple of them will be more than 2GB. If I use the dictionary I will extract all of the data in a table, so that is no good.

Is there another way, maybe a UNIX O/S setting or a Progress setting, that will allow .d files to grow to more than 2GB?

Thanks,

George
 
That depends on what specific OS, filesystem and version of Progress you happen to be running.

And your ulimit (for UNIX).

All of those elements may have "large files" settings of one sort or another that influence how large a file Progress can write.

Progress has had an "enable large files" option since 9.1D (I think it was "d"...) but it applies to the database files, not to files created by or used by 4GL sessions.
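For reference, that option is turned on with proutil against the database while it is offline (mydb is a placeholder name):

proutil mydb -C EnableLargeFiles

It lets the database's extents grow past 2GB, but again, only for the database's own files.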

When writing .d files with 4GL code you can sometimes succeed by simply appending to the file -- but, on older releases anyway, you won't be able to SEEK past 2GB. You also probably cannot read such a file.

But since this is custom code, why not just break the file into pieces? Every X thousand records, use FILE-INFO to check the file size. If it's getting close (or maybe just when it reaches 1GB), start another one. Name them .d001, .d002 and so forth and it'll be easy to load them back in.
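A minimal sketch of that idea in 4GL, assuming a table called Customer, a 10,000-record check interval and a 1GB cutoff (all placeholders for whatever your real script uses):

/* split a filtered dump into roughly 1GB pieces */
DEFINE VARIABLE iPart AS INTEGER   NO-UNDO INITIAL 1.
DEFINE VARIABLE iRecs AS INTEGER   NO-UNDO.
DEFINE VARIABLE cFile AS CHARACTER NO-UNDO.

cFile = "customer.d" + STRING(iPart, "999").   /* customer.d001 */
OUTPUT TO VALUE(cFile).

FOR EACH Customer NO-LOCK: /* add your company WHERE clause here */
    EXPORT Customer.
    iRecs = iRecs + 1.
    IF iRecs MODULO 10000 = 0 THEN DO:
        /* buffered output can lag the on-disk size slightly,
           which the 1GB safety margin absorbs */
        FILE-INFO:FILE-NAME = cFile.
        IF FILE-INFO:FILE-SIZE > 1000000000 THEN DO:
            OUTPUT CLOSE.
            iPart = iPart + 1.
            cFile = "customer.d" + STRING(iPart, "999").
            OUTPUT TO VALUE(cFile).
        END.
    END.
END.
OUTPUT CLOSE.

Loading them back is the same loop in reverse: INPUT FROM each piece and IMPORT into a newly created record.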
 
# oslevel
5.3.0.0
#

-bash-3.00$ ulimit -a
core file size (blocks, -c) 1048575
data seg size (kbytes, -d) 131072
file size (blocks, -f) unlimited
max memory size (kbytes, -m) 32768
open files (-n) unlimited
pipe size (512 bytes, -p) 64
stack size (kbytes, -s) 32768
cpu time (seconds, -t) unlimited
max user processes (-u) 128
virtual memory (kbytes, -v) unlimited
-bash-3.00$

/ahe:
dev = /dev/extractgmlv
vfs = jfs2
log = /dev/loglv02
mount = true
check = false
options = rw
account = false
#

Progress version 10.0B
 