How to copy only the table structures from a database

jsveiga

New Member
Hello,

I need to create a copy of an existing OpenEdge 10.1A database, but without its data - replicating only the table structures, while keeping .r compatibility. Or (the final result would be the same) a way to purge all records from all tables of the database.

I saw a suggestion on:
http://www.progresstalk.com/showthr...he-quickest-way-to-blow-up-64-million-records
(make a .df for the table, rename original table, load the .df)
But doing that table by table would be very time-consuming.

I tried dumping the .df for all the tables in the database (including the hidden ones), then created a new empty database (prodb sameoldname empty) and loaded the .df into it. Everything seemed to work, but it turns out .r compatibility was not kept (and I cannot recompile).
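
For reference, this is roughly what I ran, using the dictionary's dump/load procedures (I'm quoting the parameter lists from memory, so double-check them against the 10.1A docs):

    /* connected to the original database: dump all table definitions */
    RUN prodict/dump_df.p ("ALL", "alldefs.df", "").

    /* then, after "prodb sameoldname empty", connected to the new db: */
    RUN prodict/load_df.p ("alldefs.df").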

Is there another way?

Thank you!
 

TomBascom

Curmudgeon
R-code compatibility depends, in part, on the order in which elements of the schema were created. From the r-code's perspective there is a difference between a table whose fields A, B and C were added in that order and a table whose fields were added in the order B, C, A. (This is tracked by the "rpos" field in the meta-schema.)

When you dump the .df file you get an idealized schema that does not account for those subtle differences. If you then build a new db from it, the old r-code may no longer work.
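
You can see the order the r-code actually cares about by reading it straight out of the meta-schema - a quick sketch (run it against each db and compare the output):

    /* list every user table's fields in physical (rpos) order */
    FOR EACH _File NO-LOCK WHERE NOT _File._File-Name BEGINS "_":
      FOR EACH _Field OF _File NO-LOCK BY _Field._Field-rpos:
        DISPLAY _File._File-Name   FORMAT "x(20)"
                _Field._Field-Name FORMAT "x(20)"
                _Field._Field-rpos.
      END.
    END.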

The traditional way around this is to always maintain an empty skeleton db and to always use scripts to apply incremental .df files.
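
Something along these lines, run while connected to the skeleton db (the file names are made up, and I'm assuming load_df.p's usual one-parameter form):

    /* apply each vendor delta, oldest first */
    RUN prodict/load_df.p ("patch-101.df").
    RUN prodict/load_df.p ("patch-102.df").
    RUN prodict/load_df.p ("patch-103.df").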

Or you could fix up the rpos fields in your new db.

The best solution though is to simply recompile your code. Is there some reason why you cannot do that?
 

jsveiga

New Member
Thank you Tom,

I did check the "POSITION for .r code compatibility" option when creating the .df, but I still get "CRC mismatch, try recompiling" when running the app.

I cannot recompile because I don't have access to the sources. It is an ERP system with a huge number of .r programs. I am trying to set up a test environment with an empty database.

The original installation is pretty old, and dozens of patches (which include incremental .df definitions) have been installed on top of it. So one other (bad) option for getting the empty database would be to reinstall on another system, apply all the patches, and then copy the result to the test system - but I hoped there was a better way.

Best regards,

Joao
 

TomBascom

Curmudgeon
You've learned a very valuable lesson. ;)

I'd make the test system as you have described and then keep it up to date.
 

jsveiga

New Member
Thank you again :)

I've found that the original software installation has an "\empty" directory with the original db files (structures without data). I could then find out what the patch level was for the original install and apply all the delta .df's to get to the current version (so there would be no need to reinstall everything).

On the other hand, I was surprised that the empty db files work with the application (I did not test everything, but it does not complain about a CRC mismatch on startup, as my attempt with a completely blank db did). If that version is old, and many columns were added/changed since, shouldn't .r compatibility have been lost?

I'm going to dump the .df for this skeleton and compare it with the .df for the current database. Maybe I can create a "delta" .df to bring it directly from point A to point B.

I tried to use the Data Administration tool to create the delta .df directly, but it will not connect the "old" and "new" dbs at the same time for the comparison, because they have the same logical name.

Best regards,

Joao
 

TomBascom

Curmudgeon
You can use the -ld startup parameter to change the logical name when you connect. That should allow you to create the incremental .df.
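
Roughly like this (db names made up; prodict/dump_inc.p is the dictionary routine behind the Data Administration incremental dump - depending on the release it either prompts for the output file or reads it from environment variables such as DUMP_INC_DFFILE when run in batch):

    /* connect both schemas read-only, under distinct logical names */
    CONNECT currentdb  -ld currdb -RO.
    CONNECT skeletondb -ld skeldb -RO.

    /* dump_inc.p compares DICTDB (the new schema) with DICTDB2 (the old) */
    CREATE ALIAS dictdb  FOR DATABASE currdb.
    CREATE ALIAS dictdb2 FOR DATABASE skeldb.
    RUN prodict/dump_inc.p.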
 

jsveiga

New Member
Hi - they were the ones who first suggested a clean install, but I managed to talk with another person there (more technical this time), and he suggested using the "\empty" db files I mentioned before - actually a "\presetup" directory with some extra initialization already done (plus finding out the original patch level and applying all the .df patches).

What I ended up doing was to dump the .df of the whole database (all tables, with "include POSITION"), delete the sequence entries (I don't mind - it's even a plus - that the sequences are not reset), and then add (with a perl script, since there are tons of tables) a "DROP TABLE xxxx" line before each "ADD TABLE xxxx" in the .df file. Then I loaded it back on top of the populated database. It sat in a "Not Responding" state for a fair while, but finished with no errors.
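
For each table the edited .df ends up looking like this (an illustrative fragment - the table and field names are made up, and the remaining field and index definitions plus the trailer stay exactly as dumped):

    DROP TABLE "customer"

    ADD TABLE "customer"
      AREA "Schema Area"
      DUMP-NAME "customer"

    ADD FIELD "custnum" OF "customer" AS integer
      FORMAT ">>>>9"
      POSITION 2
      ORDER 10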

Then I fired up the application and got no CRC errors. A quick try showed that the data is really gone (although the .d1 file still has the same old size - would a dump/reload "defrag" it?).

Besides being easier to do, this also kept the user/security definitions (I suppose using the "\empty" db wouldn't have, because when I loaded it in Data Administration it let me do anything - which it doesn't with the "right" db connected). So this is at least worth a try for people landing in this thread with the same problem. I will sell the solution back to the vendor (if they don't find it here first)...

Thanks again! Tom, your posts here (not only to my questions) have helped me many times!

(and if it all blows up in my face in the next few days, I'll be sure to post a warning in this thread)
 

tamhas

ProgressTalk.com Sponsor
From what I can tell, it seems unlikely that this will actually work in the end. Starting with the empty db and applying all the changes that have been applied to the production database, while tedious, should get you an empty database with the same characteristics as production - and of minimal size.

No, there is no defrag. And, of course, a dump and load - unless it was into that empty, brought-up-to-date database - would change the field order, just as you have already experienced.
 

jsveiga

New Member
Thank you Tamhas,

Do you have any specific concerns I should watch out for with the DROP TABLE / ADD TABLE method, apart from the size not shrinking (150 MB - not a big deal)? I mean, the .df generated from the production database contains the entire db structure, and should have reproduced it exactly as it was.

Also, dropping and re-adding on the production db (as opposed to on the "skeleton" db) kept the security/user definitions (which the skeleton lacks, and I don't have permission to dump those definitions from the db). I only dropped and recreated the same tables, without touching anything else - I believe this works fine in other database systems.

Going from the "skeleton" to the current version, I may miss something (security configs; sequences being reset, which would mean duplicated values between the old and the new environments; the odd modification the vendor may have applied to our specific installation but that is not in the patch repository; etc.). Replicating the production structure and just deleting the data ensures that I deleted only the data, nothing else.

I believe it is the same as what was suggested in the post I linked (create a .df, rename the table, load the .df to create an empty table), the only differences being that I dropped the tables instead of renaming them, and that I did it for every table in the database.

Or not?
 

tamhas

ProgressTalk.com Sponsor
I don't believe DROP TABLE / ADD TABLE is going to produce any different result than loading the .df into an empty database, except that you will end up with a much bigger database.
 