Hi All
I was wondering if anyone has experience of dumping and loading a single table with over 500 million records. I am working on a 9.1D08 database running on Windows 2003 Standard R2 which has severe database corruption. A proutil tabanalys throws up repeated 5433 errors because a table was dropped before its data was deleted; in fact the file holding these errors grew to 7GB before I killed the proutil session. I therefore feel the only course of action is an ASCII dump and load through Data Administration, since a binary dump and load would only carry the corruption over to the new database.
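For reference, the tabanalys was run along these lines, with the output redirected to a file (mydb stands in for the real database name):

    proutil mydb -C tabanalys > mydb.tab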
I have dumped the table containing the 500-million-plus records into six separate files, one for each year by timestamp. The first two files, containing 30 million records between them, loaded relatively fast, averaging 2,000 records per second, but the third, containing 50 million records, is loading very slowly at under 200 records a second. I have also tried loading the data as one file, and it too slowed down when it got to about 30 million records. On top of this, I have tried loading single and multiple files with the database in both single-user and multi-user mode; again, the load slowed down at about 30 million records. The only thing left to try is enabling direct I/O (-directio).
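For what it's worth, if I end up hand-rolling the load in 4GL instead of using Data Administration, the plan is a batched IMPORT loop along these lines. This is only a minimal sketch: the table name history and the file name history_2003.d are placeholders for the real ones, and the batch size of 100 records per transaction is arbitrary.

    DEFINE VARIABLE i     AS INTEGER NO-UNDO.
    DEFINE VARIABLE lDone AS LOGICAL NO-UNDO INITIAL FALSE.

    INPUT FROM VALUE("history_2003.d").

    REPEAT WHILE NOT lDone TRANSACTION:
        /* group 100 creates per transaction to limit BI growth */
        DO i = 1 TO 100 ON ENDKEY UNDO, LEAVE:
            CREATE history.
            IMPORT history.  /* raises ENDKEY at end of file */
        END.
        /* if the DO loop was left early, the file is exhausted */
        IF i <= 100 THEN lDone = TRUE.
    END.

    INPUT CLOSE.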
Does anyone out there have a pearl of wisdom I could try?
The table has its own data and index areas in the database.
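In the structure file for the new database that means something like the following two lines; the area names, area numbers, records-per-block values, and paths here are only examples, not the real ones:

    d "HistData":10,64 e:\db\areas
    d "HistIdx":11,8 e:\db\areas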