Error on appliance storage

jcorona1962

New Member
I get this error when I tried to start the database server on a Sun ZFS Storage 7120 appliance:

stat() failed on both /export4/pro91/dbname .lk and /export4/pro91, errno = 79. (2607)



The environment is:
Sun M5000, Solaris 10
Progress 9.1D

Thanks for the support.
 
Errno = 79 = Value too large for defined data type.

9.1D is, of course, ancient, obsolete and unsupported. It pre-dates ZFS by about a decade. I'll hazard a guess that ZFS is returning something to stat() that is too big.

FWIW -- "appliances" and "filers" are, generally speaking, not exactly the sort of thing you want to store an important database on.
 
-------------------------------------
Thanks a lot, Tom.
We'll upgrade our version. In the end I found this document: Progress support for NFS
 
Out of experience, when dealing with ZFS, there are some issues to deal with:

  • ZFS needs more empty space because of the way it allocates space during writes - it never overwrites an existing block when updating; it always writes a new block and marks the old one as free. You should never fill a ZFS file system more than 80%, otherwise performance will degrade dramatically.
  • You need to be aware that there is no such thing as a block size in ZFS. The ZFS counterpart is the record size, and it can be changed online. But changing it does not change the record size of existing files - you must re-create them for the changed record size to take effect. You should set the ZFS record size to match the database block size.
  • You should limit the ZFS cache (the ARC). When it is not limited it will allocate nearly all memory on the system. It should give memory back to applications (such as your database) when they request it, but experience shows that it does not give it back fast enough. You might end up not being able to start the database because the broker cannot allocate enough memory for the buffer pool. Another side effect of not limiting the ZFS cache is that the memory on the machine always appears 100% utilized.
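The points above translate into a handful of admin commands. The pool/file-system name and the 8 KB database block size below are assumptions for illustration, not the poster's actual setup.

```shell
# Match the ZFS record size to the database block size (8 KB assumed).
# This only affects files created afterwards, so set it before loading
# the database, or dump and reload existing extents.
zfs set recordsize=8k dbpool/pro91

# Check utilization - keep it under ~80% to avoid the copy-on-write
# allocation slowdown described above.
zfs list -o name,used,avail dbpool/pro91

# Cap the ARC (ZFS cache) on Solaris 10 by adding a tunable to
# /etc/system and rebooting.  2 GB is an example value, not a
# recommendation.
echo 'set zfs:zfs_arc_max = 2147483648' >> /etc/system
```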
Other than that it works for us - but it turns out that there are not many people out there who understand what it takes to run a database on ZFS, as Oracle of course provides a dedicated storage manager for their databases with their OS - IIRC it is called ASM or something like that ...

Heavy Regards, RealHeavyDude.
 