dumping tables

make

Member
Hi,
how can I dump and reload my database in a quick and safe way? I think it is necessary to dump the database because the fragmentation is over 6%.
I have tried to do it with a copy of the database, but I killed a table, so I think I made a mistake. I opened the Data Administration tool and dumped the .df and the .d file of a table, then I deleted the table with the Data Dictionary and committed the changes. Then I tried to load the .df file back, which worked, but when I try to load the .d file I get an error message and the table comes up empty.

Can anyone tell me the right way to dump and reload a table, and a whole database?

Greets
Make
 
The easiest way is to use a utility, conversion.p, which I can attach. Reloading the data will not defragment it though; you can do a selective dump and load. (If conversion.p didn't attach correctly, just send me a private message.)

if you want to dump a single table:

proutil dbname -C dump table-name target-dir -index num

(use the full path to the database; the -index option picks the index used to read the table's contents)
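For example, a hedged sketch (the database path, table name, target directory, and index number are all assumptions):

proutil /db/mydb -C dump customer /db/dumpdir -index 1

This writes a binary dump file, customer.bd, into /db/dumpdir.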

before loading, truncate the BI file:

proutil dbname -C truncate bi -biblocksize size -bi size (block size and cluster size, both in KB)
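For example (the sizes here are only illustrative; pick values appropriate to your system):

proutil /db/mydb -C truncate bi -biblocksize 16 -bi 16384

That sets a 16 KB BI block size and a 16384 KB (16 MB) BI cluster size.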

start up the database along with a before-image writer (probiw) and a watchdog (prowdog)
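Roughly, for the same database (the service port is an assumption):

proserve /db/mydb -S 2501
probiw /db/mydb
prowdog /db/mydb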

proutil dbname -C load ./table.bd (if the dump file is in the current directory)
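For example, with the dump file from above:

proutil /db/mydb -C load ./customer.bd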

---------

afterwards, rebuild the indexes (with the db down)

proutil dbname -C idxbuild (this presents a menu with build options)
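For example, to rebuild all indexes non-interactively (the sort parameters are just illustrative):

proutil /db/mydb -C idxbuild all -TB 24 -TM 32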

or load and rebuild the indexes in one pass:

proutil dbname -C load bd-file build indexes (the startup parameters should go in a .pf file)
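A minimal sketch of such a .pf file (the parameter values are assumptions):

# load.pf
-B 5000   # database buffers
-T /tmp   # directory for temporary sort files

which would then be used with the usual -pf startup option, e.g. proutil /db/mydb -C load ./customer.bd build indexes -pf load.pf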

--------

to do a whole database

run conversion.p; the program will generate a few scripts (a hedged sketch of the full sequence follows the list):

dump.sh
shut down the db (not scripted)
prodel the old db
create a void db with the db-new.st
procopy an empty db into your new void db
start the db
load the definitions
run the load.sh script
rebuild all indexes
backup
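Something like this, assuming a database at /db/mydb and the scripts generated by conversion.p (all names and paths are assumptions):

./dump.sh                            # dump the data while the db is still up
proshut /db/mydb -by                 # shut down the db
prodel /db/mydb                      # delete the old db
prostrct create /db/mydb db-new.st   # create a void db from the new structure file
procopy $DLC/empty /db/mydb          # copy an empty db into the void db
proserve /db/mydb                    # start the db
# load the .df definitions through the Data Dictionary
./load.sh                            # load the data
proshut /db/mydb -by                 # idxbuild needs the db offline
proutil /db/mydb -C idxbuild all     # rebuild all indexes
probkup /db/mydb /backups/mydb.bck   # back it up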
----------
 


A reload will defrag data. It's one of the reasons to go through that trouble. It will also reduce scatter.

The benefits also last a lot longer if you go to the trouble of properly re-designing the storage architecture.
 
whoops my bad, I misread the question.

and a scatter of 6 is high, very high, depending on the table size

--- and yes, I removed the % (I did know that; I need to remember to think some of these replies through thoroughly).
 
The original post, which BTW is from 2003, specifies fragmentation of 6%.

Fragmentation is records which are split into 2 or more pieces. IMHO 6% isn't really all that high. And if the records are large it may, in any event, be unavoidable (records which are larger than the db block size must be stored in multiple pieces).

Scatter is a measure of how close records are to their "ideal" ordering. Progress provides a "scatter factor" in the output of "proutil dbname -C dbanalys" but it is not a percentage -- it's a "factor". 6 would indeed be high but it might not be important if, for instance, the table is small or if the primary access path is via an index other than the primary index.
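For example (the output file name is just a convention):

proutil /db/mydb -C dbanalys > mydb.dbanalys.out

and then look up the table's scatter factor in that output.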

Someday, if we're lucky, Progress may start to report scatter on an index by index basis.

BTW, I suspect that our original poster's real problem was probably that he forgot to rebuild indexes. That's the usual thing that causes a newly reloaded database to appear to be empty. But it was 5 years ago so we probably won't get an update on that ;)
 
I agree, but I believe that he mentioned a scatter factor of 6 PRIOR to the dump.

In either case, when I have had tables get this high (which would be sans the extra 15 years of Progress experience Tom has) it was usually time to evaluate the rpb and cluster size.

- I could be wrong though.

If I get to the point where I am changing rpb/cluster size it generally isn't with one table; so the only recommendation I would have for - make - would be to look into tables where the scatter is above 2.5 on 200 MB-plus tables or high-transaction tables (which is where I begin to plan my dump/loads).

With a scatter factor of 6 (if indeed it is a large table) there are coinciding issues with the number. It may be time to evaluate the table's rpb/cluster size as well.

Since there was no mention of the Progress version: if we are talking 10.1B03 he could always do a multi-threaded d/l, which runs pretty damn quick (well, with my 90 gig db it did).
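For reference, a hedged sketch of a multi-threaded binary dump as introduced around 10.1B (the paths, table name, and thread count are assumptions, and multi-threading generally needs the table in a Type II storage area):

proutil /db/mydb -C dump customer /db/dumpdir -index 0 -thread 1 -threadnum 4 -dumplist /db/dumpdir/customer.lst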
 
The original post is quite clearly referring to 6% fragmentation. No mention is made of scatter but your recommendations are good if the conditions that you outline are the case.

In January of 2003 10.0 was still a year away so "make" was, at best, running v9.
 