Database Change Request Form - Template

JeFernando

New Member
Hello All,

I want to implement a good database change policy and maintain integrity when making any database changes in our application. For this, I am looking for a template that captures all the important information/steps for any DB change requested by my fellow teammates.
Can somebody point me to a template covering the necessary steps one should follow? It would be highly appreciated.

Regards.
 

TomBascom

Curmudgeon
Rule #1 -- all proposed db schema changes must be submitted as a .df file. No exceptions. Ever. This enables testability, repeatability, and scriptability. Without it any process and any accompanying forms are nearly useless.
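
If you haven't seen one, a small incremental .df is just plain text. A rough sketch (table, field, and index names invented for illustration; a real file comes out of the Data Dictionary's incremental dump and ends with a trailer the tool generates) looks like:

  ADD FIELD "EmailAddr" OF "Customer" AS character
    DESCRIPTION "Customer's primary email address"
    FORMAT "x(60)"
    LABEL "Email Address"
    ORDER 110

  ADD INDEX "cust-email" ON "Customer"
    AREA "Index Area"
    INDEX-FIELD "EmailAddr" ASCENDING

Because it is plain text, you can diff it, review it, version it, and load it from a script.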
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Rule #2: your application is more than just its source code and configuration files. Treat database schema like code: put your .df files in version control, either in their own repo or as part of your application. Each version should contain the full .df and a delta that upgrades the previous version to that version.
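
Purely as an illustration (names invented), that can be as simple as a directory per schema version:

  schema/
    1.0/  mydb_1.0_full.df
    1.1/  mydb_1.1_full.df   mydb_delta_1.0_to_1.1.df
    1.2/  mydb_1.2_full.df   mydb_delta_1.1_to_1.2.df

The full .df lets you build a new empty database at any version; the delta lets you upgrade an existing one.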

I prefer to keep the area assignments, and other per-site configuration elements that can appear in a .df, external to the versioning. For example, remove the AREA qualifiers before committing the .df. This gives you the flexibility to have different area assignments for objects that are used in different ways (less/more/not at all) at different sites. This means your SCM will highlight meaningful schema changes to objects between schema versions, as opposed to configuration changes that may reasonably happen from one deployment to the next (e.g. TDE policies, ABP assignments, etc.). And it helps you to get cleaner output (fewer errors/warnings) from the Data Dictionary incremental generator, yielding output you can use at any site even if you make changes over time to your structure-management strategy, e.g. changes to standard area names.
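
For example (invented names), where the dictionary dumps

  ADD INDEX "cust-email" ON "Customer"
    AREA "Index Area"
    INDEX-FIELD "EmailAddr" ASCENDING

the committed version would simply omit the AREA line:

  ADD INDEX "cust-email" ON "Customer"
    INDEX-FIELD "EmailAddr" ASCENDING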

Obviously this approach means you also have to create a process for performing area assignments after building a new application deployment or applying schema changes to an existing one. This will prevent you from unintentionally leaving new objects in the Schema Area; at least until you get to OE 12.4+ and you can designate default Type II areas for new tables, indexes, and LOB columns. :)
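
The mechanics of that can be scripted too. One option, with invented names (check the exact syntax for your release), is to move objects after the load with proutil:

  proutil mydb -C tablemove Customer "Customer Data" "Customer Indexes"

tablemove takes the table, the target data area, and optionally an index area to move the table's indexes along with it; there is a corresponding idxmove qualifier for individual indexes.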
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
This is by no means a comprehensive list. There is a lot that one could say about codifying good schema-management practices.
 

TomBascom

Curmudgeon
Your level of success may vary but you might like to ask some questions about proposed schema changes:

1) What is the purpose of the change? (Capture business data with new tables and fields? Improve performance with new indexes? Remove unused or redundant objects?)

2) What is the expected or predicted impact? What is that expectation based on? (You are looking for testable assumptions that you can *verify* and then reliably use for capacity planning purposes.) Is there any data to back up those expectations and predictions? Are there future business volume metrics and predictions that will have an effect?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Do some research on database normalization and on naming conventions for tables/fields/indexes/sequences. There are varying schools of thought on both so I can't point you to one correct answer to rule them all. But having just about any set of rules and adhering to them strictly is better than having no rules at all. Doing so just prevents a class of problems you don't want to deal with.

Write down your rules and disseminate them to the development staff who will be submitting schema change requests. Think of them as commandments, like "you must always <do this thing>" and "you must never <do that thing>". It may take some time and some iterations to get to where you want to be but the process of creating the rules and the process of enforcing them is important.

Maybe the developers will help you refine what you want. They might come to you with a change that doesn't satisfy a rule but present a compelling case for changing the rule. But if they don't comply for no good reason, like they're running late or they're feeling lazy or they copied some code from the internet, don't make exceptions. The rules are the rules, until they change.

A nice benefit of having rules that you always adhere to is that you can eventually create tooling to help with scanning the quality of the proposed changes. For example if you know that every table must have a description, a program could scan a proposed .df (or a database created/updated from that .df) and test whether every new table satisfies that rule. You could even make that scanning program available to submitters so they can pre-scan their changes and don't waste your time with change requests that you will reject.
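
A bare-bones sketch of that idea, with the file name and the rule hard-coded purely for illustration:

  /* dfscan.p - toy example: flag ADD TABLE entries that have no table-level DESCRIPTION */
  DEFINE VARIABLE cLine    AS CHARACTER NO-UNDO.
  DEFINE VARIABLE cTable   AS CHARACTER NO-UNDO.
  DEFINE VARIABLE lInHdr   AS LOGICAL   NO-UNDO.
  DEFINE VARIABLE lHasDesc AS LOGICAL   NO-UNDO.

  INPUT FROM VALUE("proposed.df").          /* the submitted delta */
  REPEAT:
      IMPORT UNFORMATTED cLine.
      cLine = TRIM(cLine).
      IF cLine BEGINS "ADD TABLE " THEN DO:
          RUN checkTable.                   /* report on the previous table, if any */
          ASSIGN
              cTable   = TRIM(ENTRY(3, cLine, " "), '"')
              lInHdr   = TRUE
              lHasDesc = FALSE.
      END.
      ELSE IF cLine BEGINS "ADD FIELD " OR cLine BEGINS "ADD INDEX " THEN
          lInHdr = FALSE.                   /* past the table-level attributes */
      ELSE IF lInHdr AND cLine BEGINS "DESCRIPTION " THEN
          lHasDesc = TRUE.
  END.
  INPUT CLOSE.
  RUN checkTable.                           /* don't forget the last table in the file */

  PROCEDURE checkTable:
      IF cTable > "" AND NOT lHasDesc THEN
          MESSAGE "Reject: new table" cTable "has no DESCRIPTION.".
      cTable = "".
  END PROCEDURE.

The same skeleton extends to whatever other rules you adopt (labels, dump names, index presence, naming conventions, and so on).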

Take a look at some .df files and note the possible attributes for each type of object: tables, fields, indexes, sequences, LOB columns. Which attributes do you consider essential? Make rules that they must always be included. In my past life, because we didn't yet have such a system in place, I wasted time going back to developers to get information that they neglected to provide in a .df or I rejected their change because it was obviously incomplete. No descriptions or labels? Rejected. A table with no indexes? Rejected. Array fields? Rejected. Fields that are named differently across tables? Rejected (though maybe hard to detect). But that back and forth is a waste of everyone's time.
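
For instance, at the table level alone there are several attributes your rules might require on every submission (names invented):

  ADD TABLE "Invoice"
    DESCRIPTION "One record per customer invoice; created by order billing"
    DUMP-NAME "invoice"
    LABEL "Invoice"

and similarly for fields and indexes.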

Also, give some thought to modern best practices. There are a lot of things you can do in OpenEdge, but that doesn't mean you should do them in 2021 and beyond. Examples might be array fields (extent > 0), file and field triggers, tables without indexes, etc. And there might be other features or design decisions where you won't necessarily reject the change outright but you will flag it for further discussion because you consider it "use with caution". Example questions for developers:
  • Do you really want to create a CLOB column to store that big SOAP request after you process it?
    • Couldn't it just be written to a log file?
    • If not, why not?
    • If we approve this, what happens when it becomes huge and is a maintenance headache? Have you considered the life cycle of this data?
  • Do you really want to create an entirely new database to store that one new table and its indexes?
    • Do you understand all of the implications of that?
    • Can you list them?
  • Do you really need 250 fields in one table?
    • Could this better be split into multiple tables?
    • Are all of these fields actually attributes of the entity being modeled, or are they attributes of some other entity related to it?
  • Do you really want to create an abbreviated index?
  • Do you really understand the implications for the DBA of adding a word index?
Another problem I've seen is "digging the hole deeper". A development team works for years on a product without such rules, or with bad ones, and rejects adopting new rules because "it would be too time-consuming to change everything to the new paradigm" or "but we've always done it this way and it works". If you have an existing application and are trying to deal with your technical debt, it is never too late to do better. Consistency isn't a virtue when you are consistently following a worst practice without even knowing why, just because it's always been done that way. Having a 500-table application where 450 tables have confusing and inconsistent names and 50 tables are well designed and documented is better than having 500 tables where they are all similarly bad.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
In general, I like to define new indexes with the INACTIVE keyword and then build/activate the indexes after applying a schema change. This prevents the scenario where you define a new index and deploy it to customer X where they have 100 million rows in that table and the Data Dictionary churns for a looong time while it builds for all of them.
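
In the .df that is just one extra keyword in the index definition (names invented):

  ADD INDEX "cust-email" ON "Customer"
    INACTIVE
    INDEX-FIELD "EmailAddr" ASCENDING

and then you build and activate it on your own schedule afterwards, e.g. with proutil idxbuild during a maintenance window (or online with idxactivate on releases that have it).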
 

Cringer

ProgressTalk.com Moderator
Staff member
I've worked under numerous folks over the years, all with different approaches, but the one that gave me the most food for thought was a policy that all schema change requests also had to come with sample data in a .d file. This had a number of purposes.
1) The developer has actually tested that their changes work
2) The DBA is able to understand the intended use of the schema changes, and therefore assess if it is a sensible change or not
3) It enables the DBA to assess if index changes may or may not make sense
4) It enables the DBA to assess what configuration may be needed for new tables/indexes etc.

I think a lot of us thought it was an overly anal process, but in retrospect I think there was a lot of merit in it.
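
To be fair to the developers, producing the .d is not much of a burden; something along these lines (table name invented) dumps sample data in the standard EXPORT format:

  OUTPUT TO VALUE("customer-sample.d").
  FOR EACH Customer NO-LOCK:
      EXPORT Customer.
  END.
  OUTPUT CLOSE.

or they can just use the dump-data function in the Data Administration tool.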
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Another consideration: additive schema changes (e.g. new table, new index, new field in existing table, etc.) are straightforward. But not all schema changes are additive; some are destructive. Destructive changes may not be possible to make in a single .df. They might require multiple .df files together with update utility code to manipulate data. So give some thought to naming conventions for these .df and .p files so you can create a generic and automated update process.

Why destructive changes? You might want to refactor a table with many fields into a better design with two or more tables. So you would want to:
  • add the new tables and their indexes (.df);
  • read the old table and write the appropriate records in the new tables (.p);
  • remove the "moved" fields from the old table (.df);
  • possibly delete and recreate the indexes on the old table if the moved fields were index components (.df).
So, for example, you might want to put files into an update assets directory with names like:
  • 01_mydb_add.df
  • 02_mydb_datafix.p
  • 03_mydb_delete.df
  • etc.
Then your generic version-update program could connect to the database, iterate over those files in the correct order, and do the appropriate thing for each: load a .df, check for success; run a .p, check its log for success; etc.
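
A bare-bones sketch of such a runner, where the directory name, file-name conventions, and (absent) error handling are all placeholders:

  /* apply-schema-updates.p - assumes the target database is already connected */
  DEFINE TEMP-TABLE ttAsset NO-UNDO
      FIELD assetName AS CHARACTER
      INDEX ixName IS PRIMARY assetName.

  DEFINE VARIABLE cFile AS CHARACTER NO-UNDO.
  DEFINE VARIABLE cPath AS CHARACTER NO-UNDO.
  DEFINE VARIABLE cAttr AS CHARACTER NO-UNDO.
  DEFINE VARIABLE cExt  AS CHARACTER NO-UNDO.

  /* collect the update assets; the 01_/02_/... prefixes give the load order */
  INPUT FROM OS-DIR("updates").
  REPEAT:
      IMPORT cFile cPath cAttr.
      IF INDEX(cAttr, "F") = 0 THEN NEXT.       /* plain files only */
      cExt = ENTRY(NUM-ENTRIES(cFile, "."), cFile, ".").
      IF cExt = "df" OR cExt = "p" THEN DO:
          CREATE ttAsset.
          ttAsset.assetName = cFile.
      END.
  END.
  INPUT CLOSE.

  /* apply in name order: load deltas via the dictionary API, run data-fix programs */
  FOR EACH ttAsset BY ttAsset.assetName:
      IF ttAsset.assetName MATCHES "*df" THEN
          RUN prodict/load_df.p ("updates/" + ttAsset.assetName).   /* loads into DICTDB */
      ELSE
          RUN VALUE("updates/" + ttAsset.assetName).
      /* a real runner checks the outcome of each step (load errors, program status) before continuing */
  END.

Naturally, a production version also records what has already been applied so the process can be rerun safely.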

Another destructive-change scenario is a field data-type change. Apart from int-to-int64, you can't do in-place data type changes. Instead, you have to rename the field, add a new field with the old name and the new data type, programmatically transform and load the data from old to new, and then delete the old field, and maybe also delete and re-add some indexes.
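
Sketched with invented names, changing OrderLine.Qty from integer to decimal would go roughly like this. First a delta that renames the old field and adds the new one:

  RENAME FIELD "Qty" OF "OrderLine" TO "Qty-old"

  ADD FIELD "Qty" OF "OrderLine" AS decimal
    DESCRIPTION "Quantity ordered"
    FORMAT "->>,>>9.99"
    DECIMALS 2
    LABEL "Qty"

then a data-fix program:

  FOR EACH OrderLine EXCLUSIVE-LOCK:
      OrderLine.Qty = OrderLine.Qty-old.
  END.

and finally a delta that drops the old field (plus any index drop/re-add if Qty was an index component):

  DROP FIELD "Qty-old" OF "OrderLine"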
 

JeFernando

New Member
Thank you everyone for your input. This is good stuff; I will be using it to compile a change form. Much appreciated, all.
 

Cringer

ProgressTalk.com Moderator
Staff member
Another one to watch out for is changes to the length of an extent.
 