Is it possible to mix dump and load methods?

dopena

New Member
Could someone who has already done this confirm whether I can mix dump and load methods?
I would like to dump and load some tables (the smaller ones) with the binary method and other tables (the bigger ones) with bulk dump and load. Would that be possible? I have a performance issue when binary dumping the large tables; they run like a turtle. Please advise. Thanks.
 
I've not done binary dumps and loads, but I believe that the newer versions of Progress allow multiple binary D&Ls to run concurrently.

There should be no reason why you cannot do this - just remember to index-rebuild the tables you binary D&L (if this doesn't happen automatically now!)
 

Peter de Jong

New Member
Yes, you can mix them, but why would you? A binary dump is safer (no ASCII conversion) and is always faster than a regular (ASCII) dump.
A dump with several dump processes running at the same time to multiple disks is very fast.

I can dig up a 4GL program that generates the multi-threaded dump scripts if you want.

Greets,
Peter
email:peter@peterdejong.com
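A minimal sketch of what such a generated multi-stream dump script might look like: tables are assigned round-robin across dump disks so the streams don't fight over one spindle. All database, table, and disk names below are hypothetical; substitute your own. The script only prints the commands it would run, so you can review the plan first.

```shell
#!/bin/sh
# Sketch: generate a multi-stream binary dump plan (hypothetical names).
DB=/db/sports                              # database to dump (example path)
TABLES="customer order order-line item"    # tables to dump (example list)
DISKS="/dump1 /dump2 /dump3"               # one dump stream per disk

CMDS=""
set -- $DISKS                              # positional params = disk list
for t in $TABLES; do
    # assign this table's dump to the next disk, backgrounded
    CMDS="${CMDS}proutil $DB -C dump $t $1 &
"
    shift
    [ $# -eq 0 ] && set -- $DISKS          # wrap around the disk list
done
CMDS="${CMDS}wait"                         # let all background dumps finish
printf '%s\n' "$CMDS"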


Originally posted by dopena
Could someone who has already done this confirm whether I can mix dump and load methods?
I would like to dump and load some tables (the smaller ones) with the binary method and other tables (the bigger ones) with bulk dump and load. Would that be possible? I have a performance issue when binary dumping the large tables; they run like a turtle. Please advise. Thanks.
 
Originally posted by Peter de Jong
Yes, you can mix them, but why would you? A binary dump is safer (no ASCII conversion) and is always faster than a regular (ASCII) dump.
A dump with several dump processes running at the same time to multiple disks is very fast.

I can dig up a 4GL program that generates the multi-threaded dump scripts if you want.

Greets,
Peter
email:peter@peterdejong.com

How do you make a 'binary dump'?
 

Keith Owens

New Member
Binary Dump & Load

HP UX 10.20
V8.3E02

I see lots of references to creating a RAM disk to speed up the binary load - anybody know how I'd go about this, or where I can find the necessary info?
 

CtL

New Member
Interesting questions on this topic... Techniques and links are at the bottom of this post.

a) Binary is the fastest possible way to DnL your db. If you see slowdowns in this process (re: the first post from dopena), then look at system resource usage. Are you directing the dump to a different disk? Are you running anything else that is competing for CPU? Is (shudder) the disk you are writing to an appliance, perhaps being used by other people?

b) You still need to do an index rebuild after a binary load.

c) You can do multiple binary DnL operations at the same time, but you need to consider carefully whether you want to do it, or need to do it. The trick is to know when you are going to saturate your critical resource, usually disk speed, and then step back just a bit. If you spread each dump operation to a different disk and have a manager for the whole operation kicking them off as needed, you can perform some really fast database rebuilds. You can also reload to a new db (if doing all tables) on a different disk, dumping from one db while loading to another (different tables, of course!) at the same time.

d) If you are doing the DnL for performance, you can usually get most of the benefits by just re-indexing!

e) RAM disk is cool, but you can run into some really monster tables sometimes, and that is a lot of RAM.

f) As Peter de Jong pointed out, there are scripts out there that demonstrate a multi-threaded DnL approach. They may also incorporate the v9 index rebuild operations. If you are using multiple threads, be aware that the v9 index rebuild is single-threaded and smacks the BI each time, so you will be doing your index rebuilds at the end! For a single stream you could still do the index rebuild after each load.
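Putting points c) and f) together, the overall order of operations might be sketched like this: concurrent per-table dump streams, per-table loads into the new db, and one deferred index rebuild at the very end. All db and dump paths here are made up; the script only collects and prints the plan (replace the body of run() with "$@" to actually execute the commands).

```shell
#!/bin/sh
# Sketch: dump/load plan with the index rebuild deferred to the end
# (hypothetical paths and table names; commands are printed, not run).
PLAN=""
run() { PLAN="${PLAN}$*
"; }                                           # swap for "$@" to execute for real

OLD=/db/old/sports
NEW=/db/new/sports
run proutil $OLD -C dump customer /dump1    # dump streams can run concurrently
run proutil $OLD -C dump order /dump2
run proutil $NEW -C load /dump1/customer.bd # loads of different tables likewise
run proutil $NEW -C load /dump2/order.bd
run proutil $NEW -C idxbuild all            # one single-threaded rebuild, last
printf '%s' "$PLAN"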

Techniques:
In general I'm just tossing out loose information here; there are a lot of things to consider when doing binary DnLs, for both safety and optimization. Although you can write scripts and perform this as a largely automated process (some companies schedule these on a regular basis), the first couple of times will be rough - there is a lot to think about. Investigate lots of sources before trying this (links at the bottom).

In vN to v7:
Use the proutil <dbname> -C dbrpr option: option 1 (Scan menu), suboption 6 (Dump Records) to dump the db. The size of this dump will be between 60 and 70% of the base db size. Suboption 7 does the load. After you are done you need to re-index ALL your active indexes.

In v8:
Use the proutil <dbname> -C dump <tablename> <dumplocation>.

In v9:
Like v8, but v9 also adds the ability to rebuild the indexes on the fly with the proutil <dbname> -C load <tablename> operation.
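As a rough illustration of the v8/v9 syntax above, a script along these lines could emit a matching dump/load command pair per table. The db paths and table names are examples only, and the "build indexes" load qualifier is the 9.1B-era option referred to in KB 20206 - on v8 you would drop it and re-index afterwards instead.

```shell
#!/bin/sh
# Sketch: emit paired v9 binary dump and load commands per table.
# All paths and table names are examples only.
SRC=/db/old/sports      # database being dumped
DST=/db/new/sports      # freshly created target database
DIR=/dumps              # where the .bd dump files land
SCRIPT=""
for t in customer order item; do
    SCRIPT="${SCRIPT}proutil $SRC -C dump $t $DIR
proutil $DST -C load $DIR/$t.bd build indexes
"
done
printf '%s' "$SCRIPT"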

Links: In the Progress knowledgebase there are some really good articles -

12994 Binary dump and load (dbrpr RM Dump and Load) instructions (Must read!)
20206 Building Indexes with Binary Load in Progress Version 9.1B
17528 Binary Dump With Tables Larger Than 2 GB
43744 Binary load did not load past 2 gigabytes
18008 4GL to Create Binary Dump/Load Scripts From Metaschema

Others: Yikes! I couldn't find any nice links for more details!
 