gus bjorklund
Guest
adding a few more things to the excellent points made by others:

I often hear the claim that copying a large file is fast, with the implication that

0) the high performance san is fine
1) anything that is not fast is implemented wrongly
2) the OpenEdge database is the cause of whatever the current performance problem happens to be

copying a large file over a network or from one device to another is one of the easiest of all storage system use cases. all the i/o is sequential, block size does not matter, latency does not matter, the input can be buffered, the output can be buffered, and a return code of ok can be given without anything actually being written to the output device.

a well implemented file copy operation will do read-ahead of the input and write-behind of the output, and will overlap disk and network operations for both. since all the i/o is buffered and non-blocking, once the output file is closed the operation can be considered complete even though all the data may still be in memory. none of that is recoverable in the event of a crash. all that is required is lack of filesystem corruption if a crash occurs - data corruption and lost files are allowed.

this simple use case is very different from the kind of i/o required for transaction logging, which has to enable the database to recover complete and incomplete transactions after a crash.

and one more thing: right or wrong, long experience by many people over many years with many different storage systems and OpenEdge releases has taught us that if the bi-grow test takes longer than 10 seconds or so, that correlates strongly with bad database and application performance caused by improper storage implementation and/or configuration. it is not the be-all end-all diagnostic but it is a very reliable indicator.
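to make the contrast concrete, here is a minimal sketch (plain python, not OpenEdge code; the function names and record format are made up for illustration). the copy path lets the OS page cache absorb everything and can "finish" before any data is on disk; the log path calls fsync after every record, which is roughly what durable transaction logging demands of the storage system:

```python
import os

def buffered_copy(src, dst, bufsize=1 << 20):
    # reads and writes go through the OS page cache; the copy can
    # "complete" while data is still in memory -- fine for a file copy,
    # where a crash is only required to avoid filesystem corruption
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(bufsize):
            fout.write(chunk)

def logged_write(path, records):
    # transaction-log style: each record must be on stable storage
    # before it is acknowledged, so fsync follows every write and
    # blocks until the device reports the data durable
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600)
    try:
        for rec in records:
            os.write(fd, rec)
            os.fsync(fd)
    finally:
        os.close(fd)
```

the synchronous path is typically orders of magnitude slower per byte than the buffered one on the same hardware, which is why a fast file copy tells you very little about whether a san can sustain bi writes.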