[Progress Communities] [Progress OpenEdge ABL] Forum Post: RE: Dump, Load and Index Rebuild on Type II Database. - Performance

Status
Not open for further replies.

PatrickOReilly

Guest
Hi all, and thanks for your valuable input.

In the instance we are preparing for, we are doing a fair amount of re-arranging of the DBs in the process. Historically the "DB" was actually 3 separate DBs, and we are 'merging' these into one dictionary, so we can quite easily run multiple parallel dump streams. I'm not sure if we can run parallel bulk loads into the single target DB, but we'll look at that. Any advice around that?

For the index build I find no reference to being able to run multiple parallel 'proutil -C idxbuild' processes - except for the internal multi-threading which has been discussed. The thought that went through my mind was "surely if idxbuild can be run per area, then one could run the separate areas in parallel ...". But I have been known to be a wishful thinker. :)

Re areas: we have put the 2 largest transaction tables into their own areas, and the index areas are separate too. Not sure about LOBs - Ahmed?

Our client is - surprisingly ;) - very demanding, but also very tight-fisted with server provisioning. They gave us the objective of completing the process (detail below) within 24 hours, but we've "talked them down" to 39 hours now. It's still very tight. This is scheduled to start in the evening of 26 Jan. The process includes moving onto new hardware, though the new spec is not much improved on the old (did I mention they are tight-fisted?). The server config is a VMware VM: 8 vCPUs, 32 GB RAM, disks sized to be at 85% capacity once the DBs are built, and a backup area only large enough for one backup. We have saved significant time by negotiating with them to provide additional partitions temporarily during this process, which allows us to move data from server to server by remounting the partitions at the VMDK level (1/2 hour) instead of doing network copies of the DBs (nearly 5 hours).
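Since the three source DBs can be dumped independently, the parallel-stream idea can be sketched in shell. This is only a sketch under assumptions: the table names and the run_stream wrapper are hypothetical placeholders, not from our actual schema; in the real run each stream's body would be a binary dump along the lines of `proutil $DB -C dump $TABLE $DUMPDIR`.

```shell
#!/bin/sh
# Minimal sketch: launch several dump streams in parallel and wait for all.
# run_stream and the table names are placeholders; a real stream would
# invoke something like:  proutil "$DB" -C dump "$tbl" "$DUMPDIR"
run_stream() {
  echo "dumping $1"          # stand-in for the actual proutil dump
}

for tbl in customer order order-line; do
  run_stream "$tbl" &        # each stream runs in the background
done
wait                         # block until every stream has finished
echo "all dump streams complete"
```

The same fan-out/wait pattern is what we would want for per-area index builds too, if parallel idxbuild were supported.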
Here are the highlights of our current schedule, based on the dry-runs we have already performed:
--- 5:30 hours: dump all data, ~1.2 TB, from 2 servers of 750 GB and 450 GB
--- 0:30 hours: transfer data from the old servers to the new (partition unmount & remount at the storage layer)
--- 7:00 hours: load all data onto the 2 new servers
--- 10:00 hours: index build on both servers
--- 1:00 hour: back up the DBs post-build
--- 6:00 hours: application upgrade, rebuild and "sanity testing" on the new platforms
--- 1:00 hour: network reconfig so the new servers replace the old in place (customer's IT team)
--- 4:00 hours: enable replication, back up, transfer the backup to the TEST server, restore, start the test server for UAT
--- 3:00 hours: transfer the same backup to the DR server, restore, start replication
--- Hand-over to customer for full testing and sign-off.
The original target was 24 hours, extended to 39. The total above is 38 hours, and we have a 39-hour window, which means no slack to speak of. If we add the backup pre-idxbuild as Rob suggested, the slack is literally 0. So this is why we are trying to save time on the big chunks (idxbuild, dump, load). Life is fun in IT! ;)
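For what it's worth, the 38-hour total can be re-checked by summing the timed line items from the schedule (the hand-over step has no duration attached):

```shell
#!/bin/sh
# Sum the scheduled hh:mm durations and print the total against the window.
total=0
for t in 5:30 0:30 7:00 10:00 1:00 6:00 1:00 4:00 3:00; do
  h=${t%%:*}; m=${t##*:}                 # split "hh:mm" into hours/minutes
  total=$(( total + h * 60 + m ))        # accumulate in minutes
done
printf 'total: %d:%02d of a 39:00 window\n' $((total / 60)) $((total % 60))
# prints "total: 38:00 of a 39:00 window"
```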
