Answered Schema Changes & Recompilation

JoseKreif

Member
Sorry if this is a newbie question.

Our team is in the process of deciding whether or not to add a new table to one of our databases. IIRC, my Progress mentor told me when I started learning that any changes to the database schema would require all programs to be recompiled.

Does this only apply to adding fields to an existing table? Or will adding an additional table require all programs to be recompiled? Is this even an issue with Progress these days?

We are using OpenEdge 10.2B on a Linux distro.
 
I have 10.1C and we don't need to recompile if a table, field or index is added.
To be clear, if we add an index we do need to recompile before the new index can be chosen, but if we don't recompile the existing programs keep working.
 

TomBascom

Curmudgeon
Unless you are on an incredibly ancient, obsolete and quite unsupported release adding tables, fields or indexes does NOT require a recompile.

If you are on a reasonably up to date release you can do such things online -- no downtime required.

Of course static queries in pre-existing r-code cannot take advantage of these new objects until that code is recompiled, since the previously compiled code has no way to know that they exist. If you think about it, that makes perfect sense... and if you think about it some more, most actual use cases would also require you to modify at least some code in some way before recompiling it ;) but you shouldn't need to recompile code that doesn't care about the new stuff.
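To make that concrete, here's a made-up sketch -- suppose a new field "discount" is added to a hypothetical Customer table. Old r-code that never mentions the field keeps running untouched; only something like the following, which references it, has to wait for the schema change before it can be compiled:

/* uses-discount.p -- made-up example: it references the newly    */
/* added "discount" field, so it can only be compiled after the   */
/* schema change is loaded.  R-code that never touches the field  */
/* keeps running as-is.                                           */
FOR EACH Customer NO-LOCK:
    DISPLAY Customer.CustNum Customer.discount.
END.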
 

JoseKreif

Member
TomBascom said:
Unless you are on an incredibly ancient, obsolete and quite unsupported release adding tables, fields or indexes does NOT require a recompile.

If you are on a reasonably up to date release you can do such things online -- no downtime required.

Of course static queries in pre-existing r-code cannot take advantage of these new objects until that code is recompiled since the previously compiled code has no way to know that they exist. If you think about it that makes perfect sense... If you think about it most use cases would also require you to modify the code in some way before recompiling it ;)

Sounds good. I'm going to take a copy of our database and build some programs to test before and after the changes; that should prove out how schema changes behave in my case.

My mentor started learning Progress at the very beginning, so he may have gone on through his career holding onto the things Progress used to do and require as still being relevant.
 

TomBascom

Curmudgeon
While you shouldn't *need* to recompile for these sorts of changes it also really shouldn't ever be a big deal to recompile. It really ought to just be a simple "push a button" thing to rebuild all of your r-code. I have a hard time understanding why this seems to be so hard in some places.

The number of times the real answer to "why can't you easily recompile" turns out to be "we lost our copy of the source for procedure X" blows me away. Now when I hear people making excuses for not wanting to recompile I pretty much assume that they are trying to hide the fact that they have lost some source code.
 

JoseKreif

Member
TomBascom said:
While you shouldn't *need* to recompile for these sorts of changes it also really shouldn't ever be a big deal to recompile. It really ought to just be a simple "push a button" thing to rebuild all of your r-code. I have a hard time understanding why this seems to be so hard in some places.

The number of times the real answer to "why can't you easily recompile" turns out to be "we lost our copy of the source for procedure X" blows me away. Now when I hear people making excuses for not wanting to recompile I pretty much assume that they are trying to hide the fact that they have lost some source code.

The time-consuming part for us is that we have about 27 plants, each with its own database (same schema and r-code). It would be a nightmare, since, as I was thinking, we would have to go and update their schemas one by one and push out the thousands of recompiled programs. I was also thinking this would have to be done on a Sunday when no one is working, due to the downtime.

But it sounds like it won't be that big of a nightmare if a recompile isn't necessary.

EDIT:

It makes a lot more sense the way it is, to me. So the guy I knew was either overly protective of the database, or was holding onto some ancient Progress DBA knowledge.

In the event of adding something such as a new field or table, you will only need to recompile programs on a need-to basis. Pre-existing r-code will not know the new field or table exists, and this will not be a problem unless you want to use the new field in said r-code -- which would only require you to recompile the programs in question.
 

TomBascom

Curmudgeon
All of that stuff /should/ be automated. It isn't that hard to do. It's way better than the nightmare of having to manually go to 27 systems and get it done.
 

JoseKreif

Member
TomBascom said:
All of that stuff /should/ be automated. It isn't that hard to do. It's way better than the nightmare of having to manually go to 27 systems and get it done.

I like automating things on Linux systems. I'll see if I can come up with a way to handle this.
 

TheMadDBA

Active Member
The easy way to get started...

1) Figure out which changes can be done "online" vs offline.

2) Hopefully you already have scripts to shut down/start up your DB.

3) Write a simple bash script to call mbpro/bpro <your connection info> -p load_my_df.p
- Where load_my_df.p contains:
- RUN prodict/load_df.p (INPUT "<directory>/your.df").
- (a slightly fuller sketch of load_my_df.p follows this list)

4) Learn to love rsync and/or tar to deploy your code.
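For step 3, a minimal load_my_df.p could look something like the sketch below. It's only a sketch: the .df path is still a placeholder, it assumes the mbpro/bpro wrapper has already connected the session to the target database, and the error handling is illustrative (prodict/load_df.p's own error reporting may differ):

/* load_my_df.p -- minimal sketch; the session is assumed to be   */
/* connected to the target database by the mbpro/bpro wrapper.    */
/* Replace <directory>/your.df with the real path to your .df.    */
RUN prodict/load_df.p (INPUT "<directory>/your.df") NO-ERROR.

IF ERROR-STATUS:ERROR THEN
    MESSAGE "Schema load failed:" ERROR-STATUS:GET-MESSAGE(1).
ELSE
    MESSAGE "Schema load completed.".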

You can of course make it more and more complicated as needed. In a previous life I had cron jobs that looked for certain files to decide when it was time to deploy a release and whether a shutdown was needed or not.

The day of the release I just ran a script to copy the tar file, .st file and .df file to each of the hosts. Then I just waited for the email confirmation from each host. Having a test environment is crucial, of course.
 

JoseKreif

Member
TheMadDBA said:
The easy way to get started...

3) Write a simple bash script to call mbpro/bpro <your connection info> -p load_my_df.p
- Where load_my_df.p contains:
- RUN prodict/load_df.p (INPUT "<directory>/your.df").

I should have everything I need. prodict/load_df.p was the final piece to making this an easy transition. Thank you very much :D

I'll probably have this idea completed without too much of a headache.
 

Cecil

19+ years progress programming and still learning.
I liked deploying the application as a procedure library -- all the .r code bundled into a single file.
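For anyone who hasn't used them: the library is built outside the session with the prolib utility (something along the lines of prolib myapp.pl -create, then prolib myapp.pl -add with your .r files; myapp.pl is a made-up name), and deployment becomes copying a single .pl file and making sure it's on the PROPATH. A minimal sketch:

/* Make the r-code inside myapp.pl (made-up library name) visible */
/* to the session, then RUN programs from it as usual.            */
PROPATH = PROPATH + ",myapp.pl".

RUN startapp.p.  /* hypothetical program, resolved from inside the library */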
 

andre42

Member
For adding fields, I prefer a complete recompile, even if it is not immediately necessary.
One example of what might go wrong:
  • define a temp-table LIKE some database table
  • pass this temp-table between two programs
  • change one of these programs, requiring you to recompile it
  • result: a run-time error, because the temp-table definition no longer matches between those programs (sketched in the code below)
  • It should be sufficient to recompile all programs which reference the changed table, though.
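Roughly, the scenario looks like this (a made-up sketch -- "Customer" stands in for whichever table gained the new field):

/* caller.p -- compiled BEFORE the field was added to Customer,   */
/* so the old schema is baked into its temp-table definition.     */
DEFINE TEMP-TABLE ttCust NO-UNDO LIKE Customer.

RUN callee.p (INPUT TABLE ttCust).

/* callee.p -- recompiled AFTER the field was added, so its       */
/* ttCust now contains the new field.  The two temp-table         */
/* definitions no longer match, and the RUN above fails with a    */
/* run-time parameter mismatch.                                   */
DEFINE TEMP-TABLE ttCust NO-UNDO LIKE Customer.
DEFINE INPUT PARAMETER TABLE FOR ttCust.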
Back when adding a field still required a recompile (probably on 9.1D or 9.1E at the time; only 10.1C and up can do this without recompiling), I even wrote a script which scanned all .r files, using the RCODE-INFO system handle, for mismatched CRCs to check which programs needed to be recompiled. But this doesn't help for the example I described.
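The core of that kind of scan looks roughly like the sketch below. I'm going from memory on the attribute names (TABLE-LIST and TABLE-CRC-LIST on RCODE-INFO, and _File._CRC in the metaschema), so treat it as an illustration rather than the original script. "myprog.r" is a placeholder -- in practice you would loop over every .r file found on disk -- and it assumes a connected database:

/* crc-check.p -- does one .r file need a recompile?              */
DEFINE VARIABLE cTable AS CHARACTER NO-UNDO.
DEFINE VARIABLE i      AS INTEGER   NO-UNDO.

RCODE-INFO:FILE-NAME = "myprog.r".  /* placeholder file name */

DO i = 1 TO NUM-ENTRIES(RCODE-INFO:TABLE-LIST):
    cTable = ENTRY(i, RCODE-INFO:TABLE-LIST).
    /* entries may come back db-qualified, e.g. "sports.Customer" */
    IF NUM-ENTRIES(cTable, ".") > 1 THEN cTable = ENTRY(2, cTable, ".").

    FIND _File NO-LOCK WHERE _File._File-Name = cTable NO-ERROR.
    /* compare the CRC stored in the r-code (TABLE-CRC-LIST is    */
    /* assumed to parallel TABLE-LIST) with the current table CRC */
    IF AVAILABLE _File
       AND _File._CRC <> INTEGER(ENTRY(i, RCODE-INFO:TABLE-CRC-LIST)) THEN
        MESSAGE RCODE-INFO:FILE-NAME "needs a recompile for table" cTable.
END.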
 