Optimal NTFS block size for Progress 10.1C database?

sherbang

New Member
My application vendor is recommending an NTFS block (cluster) size of 64k for best performance. I'm skeptical; my understanding was that it's best to match the filesystem block size to the database block size. Is this not correct?

The database that they provide has a 4k block size, and my disk is formatted with NTFS's default 4k cluster size. Is there any possible benefit to increasing the filesystem block size? Does OpenEdge itself have any recommendations on this?
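If anyone else wants to verify their cluster size, fsutil reports it as "Bytes Per Cluster"; the drive letter below is just an example, use whichever drive the database lives on:

    fsutil fsinfo ntfsinfo D: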

Thanks in advance for any advice.
 

RealHeavyDude

Well-Known Member
AFAIK you can set the blocksize when you format a drive with NTFS.
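For example, something along these lines when creating the volume - the drive letter is just a placeholder, and formatting of course wipes the drive, so only do this on a fresh volume:

    format D: /FS:NTFS /A:4096

The /A option is what sets the allocation unit (cluster) size.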

The best: Database blocksize = Filesystem blocksize.

Never: Database blocksize < Filesystem blocksize !!!! (performance hit)

Might be dangerous: Database blocksize = n * Filesystem blocksize. (Gus - wizard of wizards - once said that this might cause silent database corruption: Progress hands one block to the filesystem to write, but the filesystem has to write it as several smaller blocks - if something happens in between writing those blocks, Progress might not be aware of it.)

AFAIK the best database blocksize for Windoze and Linux is 4K, whereas for Unix it is 8K.

Therefore 4K is the winner on Windoze.

HTH, RealHeavyDude.
 

TomBascom

Curmudgeon
As RHD says never go with a db block size smaller than the fs block -- that's just silly.

Having the db block larger than the fs block can, in theory and on old releases, possibly result in a "torn page". That is a situation where one part of a block is written to disk and the rest is not, due to a hardware crash at exactly the wrong moment. It is exceedingly unlikely to actually happen but, in theory, it could. 10.1C and later have some error checking that will catch this, though.

For read-intensive workloads 8k blocks are "a good thing". More data is fetched with each IO and lots of stuff is more efficient. I'd be looking at converting to 8k db blocks (and probably throwing away the vendor's delivered storage area design anyway, because those are usually pretty crappy).
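If you want to double check what block size the database currently uses before deciding, prostrct should tell you - a sketch, with "mydb" as a placeholder for your database name:

    prostrct statistics mydb

Look for the database block size in the output.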

Some people have tried NTFS block sizes of 8k. I've not heard of any really good, definitive test results from that, though. But I'd think it worth a try.

Are you sure that your vendor isn't talking about the stripe size on a striped filesystem?
 

sherbang

New Member
It's possible that they got confused between filesystem blocks and stripe segments. That at least would make more sense.

Thanks for the info, you both confirmed what I thought.

Is it possible to change the database block size on a database with data in it? Would I have to do a dump and load, or is it possible to add new storage areas and then remove the old ones? I don't mind heading to the OpenEdge docs for details; I was skimming the docs on adding data files earlier, but it wasn't clear how you could migrate your data to new files so the old ones can be removed.

Also, I assume the same block size rules apply to the BI and AI files? For some reason they have those set to 8k blocks while the database block size is 4k.

Thanks again for the detailed info.
 

TomBascom

Curmudgeon
You have to dump and load to change the db block size. And if you do that you should review the rows per block settings too. And, as I said, probably the storage areas aren't very well designed in general. (Especially if it is a "functional" layout.)
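In outline the process looks something like this - a sketch only, where the database names, area names, table name and extent sizes are all made up, so work from the Database Administration docs and practice on a copy first. First a structure file with the 8k areas and explicit rows per block:

    # newdb.st - data area lines are "name":area-number,rows-per-block;blocks-per-cluster
    b .
    d "Schema Area":6,32;8 .
    d "Data":7,128;8 . f 1024000
    d "Data":7,128;8 .
    d "Index":8,8;8 .

Then create the new db at 8k, copy in an empty 8k database, load your schema (.df), and binary dump / load / idxbuild per table:

    prostrct create newdb newdb.st -blocksize 8192
    procopy %DLC%\empty8 newdb
    proutil olddb -C dump customer c:\dump
    proutil newdb -C load c:\dump\customer.bd
    proutil newdb -C idxbuild all

The rows per block value (128 in the example) is the bit to revisit per area, based on your mean record size and the 8k block.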

AI and BI block size is usually 8K or 16K. Same arguments - bigger is usually better.
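Changing those doesn't need a dump and load. Roughly (again a sketch - the db name is a placeholder, the database has to be down, and for the AI one after-imaging has to be disabled first):

    proutil mydb -C truncate bi -biblocksize 16
    rfutil mydb -C aimage truncate -aiblocksize 16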
 