[Progress Communities] [Progress OpenEdge ABL] Forum Post: RE: Why binary dump tried to write to BI file?

George Potemkin

Guest
I did some research on binary dump and its relationship with ulimit. Binary dump, like the other Progress executables, raises ulimit on startup (see the setrlimit() sketch further down). It does not matter whether the executables have the userid bit or whether they are owned by root. Note that a non-root user cannot increase ulimit in the shell:

    ulimit: file size: cannot modify limit: Operation not permitted

The description of the old message # 4162 says:

    ** UNIX maximum file size exceeded. (4162)
    If PROGRESS runs as root, the standard UNIX file size limit of 1 MB is increased to 1 GB.
    You have exceeded the limit. Check to make sure that _progres and _mprosrv are owned by
    root and that each has the userid bit (s) set in its permissions.

But in fact the Progress executables seem to raise ulimit to the "unlimited" value. After login to a database, Progress sessions downgrade their suid privileges to the user's ones but not ulimit. Binary dump is an exception: it downgrades ulimit back to its initial value. For example, if ulimit is smaller than the size of the database log, binary dump is still able to write the login messages to the log:

    BINDUMP 7: (452) Login by george on /dev/pts/18.
    BINDUMP 7: (7129) Usr 7 set name to Binary dump.
    BINDUMP 7: (7129) Usr 7 set name to george.

But it will be unable to write the rest of the messages:

    BINDUMP 7: (17813) Using index CustNum (12) for dump of table customer.
    BINDUMP 7: (453) Logout by george on /dev/pts/18.

Instead, binary dump will write each message 10 times to the standard output stream:

    ** UNIX maximum file size exceeded. bkWriteMessage (4162)

If ulimit is too small to write to the bi or db files, binary dump will crash a database when it evicts blocks modified by other users. Dbanalys or ABL clients will evict modified blocks as well, but they will NOT crash a database because they keep using the unlimited ulimit. Be warned: malicious minds may use binary dump to easily crash your database:

    # ulimit 30
    # ./_proutil sports2000 -C dump OrderLine .
    SYSTEM ERROR: error writing, file = sports2000.b1, ret = -1 (6072)
    User 7 died with 1 buffers locked. (5027)

    # ulimit 100
    # ./_proutil sports2000 -C dump OrderLine .
    SYSTEM ERROR: error writing, file = sports2000_9.d1, ret = -1 (6072)
    User 7 died with 1 buffers locked. (5027)

I believe it's not a bug. Ulimit sets the size of the volumes (sections) for binary dump when _proutil does not have the userid bit, so we can create multi-volume binary dumps. It's a useful feature.

By the way, there is a minimal value of ulimit for binary dump: 28 or 29 (kilobytes). The value slightly depends on the table being dumped. Binary dump will fail if ulimit is set below this value:

    Internal error in upRecordDumpCombined, return -26631, inst 8. (14624)
    Binary Dump failed. (6253)

In fact binary dump successfully creates all volumes except the last (the smallest) one, no matter how large the last volume should be. The minimal size of a binary dump is the header size plus the record size, where the header size is 1K. Binary dump can create dump files just a bit larger than 1K, but it needs a ulimit 30 times larger. The minimal ulimit is close to, but a bit less than, the max record size: 31992 bytes. I don't have a guess why such a limit exists.
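Coming back to the "raises ulimit on startup" part above: at the system call level it presumably boils down to something like the sketch below. This is only my own illustration of the mechanism, built on the standard POSIX calls getrlimit()/setrlimit()/seteuid(), not actual Progress code:

    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        struct rlimit rl;

        /* Read the current file-size limit; the kernel keeps it in bytes,
           the shell builtin prints it in blocks or KB depending on the shell. */
        if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("RLIMIT_FSIZE soft limit before: %llu\n",
               (unsigned long long)rl.rlim_cur);

        /* Raise the limit to "unlimited". Any process may raise its soft
           limit up to the hard limit; raising the hard limit itself
           requires root privileges. */
        rl.rlim_cur = RLIM_INFINITY;
        rl.rlim_max = RLIM_INFINITY;
        if (setrlimit(RLIMIT_FSIZE, &rl) != 0)
            perror("setrlimit");

        /* Drop the effective uid back to the real user (as the sessions do
           with their suid privileges after login); the raised limit stays. */
        if (seteuid(getuid()) != 0)
            perror("seteuid");

        if (getrlimit(RLIMIT_FSIZE, &rl) == 0)
            printf("RLIMIT_FSIZE soft limit after: %s\n",
                   rl.rlim_cur == RLIM_INFINITY ? "unlimited" : "still limited");
        return 0;
    }

If the hard limit is already "unlimited", setrlimit() lets even a non-root process raise its soft limit that far, while the old Bourne-shell ulimit builtin (built on the ulimit(2) interface) refuses to raise it at all; that would explain why the userid bit does not seem to matter here. Binary dump apparently does the same at startup and then puts the original value back with one more setrlimit() call, which would account for everything described above.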
I hope the topic starter will not blame me for going off-topic. ;-) Does anybody know why a shell script with a header doubles the value reported by ulimit?

    ulimit 10
    utest.sh

where utest.sh is:

    #!/bin/sh
    ulimit
    TmpFile=out.$$.tmp
    dd if=/dev/zero of=$TmpFile bs=1M count=1024
    ls -l $TmpFile
    rm $TmpFile

ulimit will report 20, while the size of the temp file will be 10K. In other words, the effective value of ulimit is correct; only the ulimit command reports a wrong value. But if we remove the header of the script ("#!/bin/sh"), then ulimit reports the correct value. Is it a bug, or does it have some sacred meaning?
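A quick way to see which value is really in effect is to ask the kernel directly with getrlimit() instead of trusting the builtin. A small test helper (my own, nothing Progress-specific):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_FSIZE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }

        /* The kernel stores the limit in bytes; the shell builtin divides it
           by its own unit (512-byte blocks or KB, shell-dependent) before printing. */
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("RLIMIT_FSIZE soft limit: unlimited\n");
        else
            printf("RLIMIT_FSIZE soft limit: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
        return 0;
    }

Calling it from utest.sh right next to the ulimit command would show whether the #!/bin/sh header really doubles the kernel limit or only confuses the builtin's report; the 10K temp file suggests the latter.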
