Production Mode Changes...?

dkellgren

Member
After reading:

Is online db change possible?

Has anyone come up with an ingenious way (in Progress - not SQL or ODBC etc.) to get around this?

Our IT Director has proposed the following:

Create a copy of the LIVE db - in both schema and data. We'll call the new copy SchemaChanger.db. Have the developers develop against SchemaChanger.db and make their necessary db modifications. Then have a nightly process kick off (when all the users are snug in their beds) that automatically updates the schema of LIVE with the schema of SchemaChanger - not data, just the schema.

You could also use SchemaChanger as a TEST db and only update data FROM LIVE to TEST as needed.
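
For illustration, a minimal sketch of the "update LIVE" half of such a nightly job, assuming the standard dictionary load utility (prodict/load_df.p) and hypothetical db and file names; producing the delta itself in batch comes up further down the thread:

/* apply-delta.p -- sketch only.  Loads a delta .df (produced by
   comparing SchemaChanger against LIVE) into LIVE.  Run in batch
   against LIVE while the users are snug in their beds, e.g.:
     _progres live -1 -b -p apply-delta.p >> nightly.log 2>&1     */
CREATE ALIAS dictdb FOR DATABASE live.   /* the dictionary programs
                                            operate on the DICTDB alias */
RUN prodict/load_df.p ("delta.df").      /* hypothetical delta file     */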

Has anyone tried this approach or has a better one in place?

-dk-
 

MurrayH

Member
Yes, that is the way we do it. I believe there is a hidden "-z" startup parameter that lets you change the schema ("_") tables online, but I wouldn't use it. You can also add indexes online in V9, I believe.

Murray
 

m1_ru

New Member
One possible way to handle upgrades and development:

1. All database structure changes are made only through small
incremental *.df files, which are kept permanently.

2. Some changes also require utility programs to populate the
new fields. Those programs are kept as well.

When you need to make a fresh test db, you can take the live
db and apply all the *.df files and *.p programs in the same order.

When you need to update the live db:
1. make a backup,
2. apply all the new *.df files and *.p programs to the live db (a driver sketch follows below),
3. update the sources,
4. recompile to *.r code.
If something goes wrong, restore the backup.
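
A sketch of such a replay driver, assuming the deltas and fix-up programs are numbered 001.df/001.p, 002.df/002.p, and so on (the naming is hypothetical):

/* upgrade.p -- replays schema deltas and data fix-ups in order.
   Try it against a restored copy before touching the live db.   */
DEFINE VARIABLE i     AS INTEGER   NO-UNDO.
DEFINE VARIABLE cBase AS CHARACTER NO-UNDO.

DO i = 1 TO 3:                              /* 3 = number of deltas   */
    cBase = STRING(i, "999").               /* "001", "002", ...      */
    RUN prodict/load_df.p (cBase + ".df").  /* apply the schema delta */
    IF SEARCH(cBase + ".p") <> ? THEN       /* optional data fix-up   */
        RUN VALUE(cBase + ".p").
END.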

We also "stabilise" db structure - make it so, that if one make dump of database structure and than load it to empty database CRC of all tables would be the same. So it is possible to prepare *.r code in advance.
 

dkellgren

Member
Murray -

When you update the schema, are you using the incremental dump to create a delta.df? If so, are you running it in batch - and if so, have you modified the Progress source program _dmpincr.p to do it?

-dk-
 

MurrayH

Member
We tend to create one incremental DF per release and ship that. Since we run on UNIX, everything is batched (nice and easy) and well proven, so it's pretty straightforward. The code is the nasty bit: ensuring you update the code at the right time so the CRCs all stay in line. If you ever need to do a multi-df upgrade, you will understand the problem of having to ship multiple versions of your code compiled against different CRCs.
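
For what it's worth, a sketch of how the batch delta dump might look, assuming hypothetical db names and a release that ships prodict/dump_inc.p (older releases needed a modified _dmpincr.p, which is where dk's question comes from):

/* mkdelta.p -- run in batch against the NEW schema, e.g.:
     _progres newdb -1 -b -p mkdelta.p > mkdelta.log 2>&1
   The dictionary compares DICTDB (new schema) with DICTDB2
   (old schema) and writes the differences to a delta .df.   */
CREATE ALIAS dictdb FOR DATABASE newdb.
CONNECT olddb -1.
CREATE ALIAS dictdb2 FOR DATABASE olddb.
RUN prodict/dump_inc.p.   /* in later releases the output file name is
                             taken from the DUMP_INC_DFFILE environment
                             variable -- check your version's notes    */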

Murray
 

DavidP

New Member
Progress CRCs could have been great, but there was always the problem that the same database schema could end up with any number of different CRCs depending on just how the schema evolved.

When a .df is applied to a database you get a new CRC, but if you then dump and reload you get a different (though stable) CRC. In my opinion this made the whole thing a bit of a joke and greatly reduced the usefulness of CRCs over schema timestamps.

Can anyone tell us if they have fixed this in V9? Murray?

David
 