Binary load appears to work... but records aren't in the table?

Vienneau

New Member
I haven't done a dump/load on my DB in a decade, so I thought it might be time (the scatter factor is over 8 on some tables!). I always used to do it with an ASCII dump, but thought I'd try binary this time, using a small table as a test.

Binary dump is successful. I delete the table from the DB (I read that if you just delete the records then the records are loaded back into the same memory locations). I re-create the table. I do a binary load, and it appears to work:

pro11> proutil mlsd -C load /uc/div/judge.bd
OpenEdge Release 11.7.2 as of Tue Oct 24 19:00:29 EDT 2017

Binary Dump created on Fri Oct 25 10:54:37 2019
from database /db/mlsd/mlsd. (6203)
Loading table judge, Table number 44
starting with record 1, section 1. (6204)
Loaded 269 records. (15167)

YOU SHOULD REBUILD THE FOLLOWING INDEXES IN TABLE PUB.judge:
judge-no
name
Binary Load complete. (6256)


But when I run a simple "for each judge: disp judge" it returns no records. I did the whole process twice just to see if I missed anything. I even deleted and re-created the indexes on the off chance that inactive indexes were somehow making the records invisible (very unlikely, but I was out of ideas).

A subsequent ASCII dump/load worked just fine, so it's not that the records are there but not visible to me for some reason.

If I had an error message, I'd have a place to start searching for answers. But this is the first time that I've had a message say everything worked, and the records aren't there.

What am I missing? I'm assuming it's something simple/obvious. :)

Oh yes - v11.7.2, CHUI, AIX Unix
 

Vienneau

New Member
...a plot twist!

I did a proutil tabanalys and it shows three times as many records now (805): 269 each from my two binary loads, plus 267 from my ASCII load (which came from a different instance).

(and my scatter factor only shrank from 8.4 to 6.8, but that's a separate issue)

I feel like I've accidentally created a multi-tenant DB with 3 copies of the same table. That would be weird.
 

TomBascom

Curmudgeon
Have you rebuilt your indexes like the message instructs you to?

Failure to rebuild indexes results in exactly this behavior.
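
With the database offline, a minimal rebuild for your case might look like this (table name taken from your load output; idxbuild needs exclusive access to the db and may prompt about sort space):

proutil mlsd -C idxbuild table PUB.judge

Or use "proutil mlsd -C idxbuild all" to rebuild every index in the database.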
 

TomBascom

Curmudgeon
Aside from that, your whole approach of reloading back into the same db that you dumped from is kind of bizarre. Usually you would want to be building a new *replacement* db or storage area.
 

TomBascom

Curmudgeon
Where did you read about reloading back into the same memory locations? That is either a misunderstanding or just wrong.
 

Vienneau

New Member
Have you rebuilt your indexes like the message instructs you to?

Failure to rebuild indexes results in exactly this behavior.

That is certainly simple and obvious. When the message said "SHOULD" instead of "MUST", I figured it wasn't required and wanted to save time. Lesson learned! And thank you very much for the answer.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
It seems some education is in order here. To assess what is happening, it is useful to know, aside from your OE version, the structure file contents and where the objects (tables, indexes) in question reside within the structure. Given 11.7.x, all of your application storage areas should be Type II (i.e. they should have 8, 64, or 512 blocks per cluster).

Dumping a table (whether via the Data Dictionary (ASCII) or a binary dump) is a read-only process. It does not delete the data. So it should be self-evident that dumping and reloading the data in place is not something you want to do without some sort of intervening step to remove the existing data. Examples of an intervening step would be deleting the data (via ABL or SQL), dropping the table and then re-adding its schema, renaming the existing table and then adding its schema as a new table, etc.
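
One more option, sketched here with a made-up area name: if the table lives in its own Type II storage area, truncating that area between the dump and the load clears it quickly. It is offline-only and deletes all data in the area, so it only fits when the area holds nothing you want to keep:

proutil mlsd -C truncate area "Data_Area"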

I read that if you just delete the records then the records are loaded back into the same memory locations
Loading data (ASCII or binary) writes records to disk; it has nothing to do with "memory locations". If that was a direct quote from your source, try to find different sources for OE DB maintenance advice.

I did a proutil tabanalys and it's showing that there are three times as many records now
You need to be careful when loading data into a table that has deactivated indexes or no unique index. Nothing will prevent you from loading the data multiple times. But having multiple copies will prevent you from building the indexes, if any of them are unique.
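
A cheap sanity check is to compare record counts after each step; for example (the output file name is just an illustration):

proutil mlsd -C tabanalys > tabanalys.out
grep -i judge tabanalys.out

If judge shows 538 records after a second load instead of 269, it has been loaded twice.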

I feel like I've accidentally created a multi-tenant DB with 3 copies of the same table.
That isn't something that happens accidentally.

As Tom said, what you are doing is not typical. Dump/load/index rebuild is typically done either in a new storage area in an existing database or in an entirely new database.

I even delete and re-created the indexes on the off chance that inactive indexes were somehow making the records invisible
Rebuilding indexes requires the database to be shut down but does not require the indexes to be deleted and re-added first. You seem to be doing some trial and error rather than following a proven procedure.

I recommend the following:
  • Start at the beginning: read the Database Essentials manual; it won't take long.
  • Read the sections of the Database Administration manual relevant to dump and load, particularly the sections on the proutil dump, load, and idxbuild qualifiers in the Reference section.
  • Look for presentations on database dump and load and index rebuild, or database maintenance, in the download archives of the PUG Challenge Americas conference and read them; you should find several.
  • Create a sports database on a test server; call it "source".
  • Create the structure for a new database in a different location on that server; call it "target".
  • Practice what you have learned from the docs and the conference presentations: dump the schema and data from source, then load the schema and data into target, then back up target, then rebuild target's indexes, then back it up again. Run dbanalys on both and compare the results to prove to yourself that the record counts are the same in both.
  • Document the procedure and practice until you can do it in your sleep. The sports database is very small, so you can repeat this procedure quite often in a short time.
  • Bonus marks: automate the procedure with shell scripts, along the lines of the sketch below. Be sure to capture the output to log files and verify the results.
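To make that bonus step concrete, here is a rough sketch of such a script. Everything in it (directory names, the choice of the customer table) is illustrative, it assumes the databases are offline, and the interactive schema dump/load step is only noted in a comment:

#!/bin/sh
# Sketch: practice dump/load/index rebuild with a throwaway sports DB.
# All paths and names below are examples only.
set -e

WRK=/tmp/dltest
mkdir -p $WRK/dump
cd $WRK

# 1. Source DB: a copy of the sports demo database.
prodb source sports

# 2. Target DB: a copy of the empty template database.
prodb target empty

# 3. Dump the schema (.df) from source and load it into target.
#    That step is done in the Data Dictionary and is interactive,
#    so it is not scripted here.

# 4. Binary dump a table from source (repeat per table).
proutil source -C dump customer $WRK/dump

# 5. Binary load it into target.
proutil target -C load $WRK/dump/customer.bd

# 6. Back up target before the index rebuild...
probkup target target_preidx.bck

# 7. ...rebuild all indexes (may prompt about sort space), then back up again.
proutil target -C idxbuild all
probkup target target_postidx.bck

# 8. Compare record counts between the two databases.
proutil source -C dbanalys > source_dbanalys.txt
proutil target -C dbanalys > target_dbanalys.txt

Diff the two dbanalys outputs, or just compare the per-table record counts by eye, to prove to yourself that nothing was lost or doubled.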
Once you think you have a good procedure, post it here and ask questions. Be aware that a simple procedure that seems to work in a sample database like sports will not necessarily do everything that needs to be done in a production environment that has other important data to be preserved, e.g. user records, current sequence values, SQL permissions, domains, authentication systems, and policies for optional features like multi-tenancy, auditing, table partitioning, CDC, TDE, etc.

There is no one procedure for dumping and loading that is "correct" and that everyone should follow. What is appropriate for a given situation depends on several factors, including DB features used, DB size, available hardware resources, allowable downtime for a dump & load, etc.
 