Forum Post: Re: Proutil Bigrow Performance...

  • Thread starter: gus bjorklund
Status
Not open for further replies.

gus bjorklund (Guest)
> On May 18, 2017, at 9:04 AM, George Potemkin wrote:
>
> Can we expect that the results of the bigrow and dd commands will always match one another?

No, that cannot be guaranteed.

0) The dd program has different options on different systems; some have no option for synchronous writes.

1) To further complicate things, the behaviour of the O_SYNC and O_DSYNC flags to the open() system call varies by filesystem type. ext4 appears to ignore them, at least according to my experiments on CentOS 7.

2) If you use the dd program with /dev/zero as input, you may get a sparse file as output, depending on the operating system and filesystem type. If so, nothing is written at all except metadata, because the blocks of all zeros are optimised away.

The original purpose of the bigrow test was to check for potential disk write throughput shortcomings. For that purpose it has worked rather well (though many arguments have ensued with storage IT people who think they know better). From long experience with hundreds of systems, we know there is a strong correlation between long bigrow times and poor application/database performance. Unfortunately, with the changes in operating systems and filesystems, this has now become a bit harder to determine. I've been looking for another solution; so far I have nothing better.
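For what it's worth, one way to check whether a dd run actually wrote anything (point 2 above) is to compare the file's apparent size with the blocks allocated on disk. A minimal sketch, assuming GNU dd/coreutils on Linux (the file name and sizes are made up for illustration, and `oflag=dsync` does not exist in every dd implementation):

```shell
#!/bin/sh
# Sketch: write a file of zeros with dd, then check it for sparseness.
# Assumes GNU dd/coreutils; behaviour differs on other systems.
set -e
TMP=$(mktemp -d)
F="$TMP/grow.tmp"

# Buffered write: 8 MB of zeros.
dd if=/dev/zero of="$F" bs=1M count=8 2>/dev/null

# Synchronous variant (O_DSYNC per write); uncomment to compare timings.
# dd if=/dev/zero of="$F" bs=1M count=8 oflag=dsync 2>/dev/null

# Apparent size versus blocks actually allocated on disk. A sparse
# file shows a large apparent size but few allocated blocks.
apparent=$(wc -c < "$F")
allocated_kb=$(du -k "$F" | cut -f1)
echo "apparent: $apparent bytes, allocated: $allocated_kb KB"

rm -rf "$TMP"
```

If the allocated size comes out far below the apparent size, the zero blocks were never written, and the timing of that dd run tells you nothing about disk write throughput.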
