I am dumping and loading a database from 9.1D09 32-bit to 10.2B05 64-bit on Windows 2008 R2 64-bit. On brand new hardware under 10.2B05 I am getting a binary dump rate of over 300k records a second (which I am very pleased with) using a multi-threaded dump against a brokered database. However, I am only getting a binary load rate of about 100k records a second with the same number of threads against a brokered database. When performing the same operation in 9.1D09 on similar hardware the performance metrics are typically the other way around, i.e. the binary load is 3-4 times faster than the binary dump. I can also load the same binary dump files into a 9.1D09 database at nearly 200k records a second.
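For context, the dump and load commands look something like this (the table name, directories and thread count are illustrative placeholders, and the -thread/-threadnum options are the 10.x multi-threaded dump syntax as I understand it):

    proutil srcdb -C dump customer d:\dump -thread 1 -threadnum 8
    proutil loaddb -C load d:\dump\customer.bd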
There are 1.8 billion records to load, so increasing the load rate could make a few hours' difference. The only significant difference between the 9.1D and 10.2B databases is that the records are being loaded into Type II storage areas. I am not building the indexes during the load.
Is there any way of speeding up the binary load into 10.2B? Is it the Type II storage causing the difference in speed, or perhaps something else?
A few other details regarding the empty database used for the load:
BI truncated with a 65536 cluster size and 40 clusters pre-grown.
Broker running with -i.
15000000 blocks in the DB buffer pool (-B)
9 APWs
25 BI buffers (-bibufs)
100000 spin lock retries (-spin)
The server has 128 physical disks arranged into 6 logical drives in RAID 1+0; the load database is spread across three of these (indexes, data, BI) and the .bd dump files are held on a fourth.
The server has 128 GB of physical memory and 48 CPU cores.
I have experimented with different settings for the load database above, and -i is the only thing that had any significant performance impact.
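For completeness, the BI setup and broker startup were done roughly along these lines (the database name loaddb is a placeholder and the syntax may not be word-for-word what I ran, but the values are the ones listed above):

    proutil loaddb -C truncate bi -bi 65536
    proutil loaddb -C bigrow 40
    proserve loaddb -B 15000000 -bibufs 25 -spin 100000 -i
    proapw loaddb   (run 9 times, once per APW)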
Thanks in advance.
Andrew Bremner