
Question Software Upgrades / Db Changes To Large Tables

Discussion in 'Database Admin' started by Chris Hughes, Feb 15, 2017 at 12:59 PM.

  1. Chris Hughes

    Chris Hughes ProgressTalk.com Sponsor

    Hi

    First time in 3 years I've hit this particular problem with our app....

    Say you have a really large database 100GB+, and an audit table of 30GB+.
    The software upgrade requires a DF to be loaded against that DB (mostly new tables, but some additional fields and indexes).
    This takes 7 hours+ and generates a BI file in excess of 70GB.

    I've tried this multiple times

    1) Ran out of disk space on first couple of attempts
    2) Tried starting the DB with -i and -r (still writes to BI files though)

    I can see there is a new field being added to the audit table in the DF.

    So I'm just wondering if there is a proserve parameter I've overlooked that could help, or perhaps a better approach?

    Just glad I'm not dealing with TB dbs yet :)

    Thanks

    Chris
     
  3. Rob Fitzpatrick

    Rob Fitzpatrick ProgressTalk.com Sponsor

    OpenEdge version?

    It would be helpful to know exactly what statements are in your .df. Adding new fields shouldn't be a problem, now that there is schema versioning. I'd guess that you're adding one or more active indexes to large tables, so the index keys are being created on the fly, causing the update to take a long time and write lots of BI notes.

    If you add indexes to large tables, be sure to add them inactive. Then you can run idxactivate or idxbuild after the fact to build those indexes without the effects you're seeing.
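    For example, an inactive index definition in the .df would look something like this (the table, field, and index names here are just illustrative, not from your schema):

    ```
    ADD INDEX "idx-audit-date" ON "audit-trail"
      AREA "Index Area"
      INACTIVE
      INDEX-FIELD "audit-date" ASCENDING
    ```

    Without the INACTIVE keyword, the load builds the index keys for every existing row inside the load transaction, which is where the hours of runtime and the BI growth come from.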
     
  4. TheMadDBA

    TheMadDBA Active Member

    Like Rob said.. indexes are your most likely culprit.

    Schema versioning should take care of the new columns almost instantly in modern versions. In older versions I seem to recall some issues with default values causing actual writes though.
     
  5. Chris Hughes

    Chris Hughes ProgressTalk.com Sponsor

    Sorry - originally a 10.1C database; I've restored it to a new Win 2012 server running 11.4 64-bit.
    Truncated the BI, ran conv1011, then under 11.4 truncated the BI again with a 16K block size and a 262144 cluster size.

    There are indeed 2 indexes added to my audit table. There is nothing in the DF explicitly stating Active or Inactive - but I assume I'm right that the default is Active.

    Your insight will save me so much time - Thanks Rob :)
     
  6. Rob Fitzpatrick

    Rob Fitzpatrick ProgressTalk.com Sponsor

    No problem. Yes, the default is active. Probably your best bet is to add them inactive and run idxbuild. If you use idxactivate, be sure to use a unique index as the reference; otherwise it locks the table, which is probably not feasible for you.
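    Roughly, the two options look like this (database, table, and index names are placeholders - adjust for your schema; idxbuild requires the database to be offline, idxactivate can run against an online database):

    ```
    # offline: build all indexes that are marked inactive
    proutil mydb -C idxbuild inactiveindexes

    # online: activate one index, scanning the table via an existing unique index
    proutil mydb -C idxactivate PUB.audit-trail.idx-audit-date useindex audit-id
    ```

    The useindex qualifier names the unique index used to walk the table while the new keys are built, which is what avoids the table lock.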
     
    Last edited: Feb 15, 2017 at 3:37 PM
  7. Rob Fitzpatrick

    Rob Fitzpatrick ProgressTalk.com Sponsor

    Just saw a typo above, which I have changed. I originally said "Yes, default is inactive" which was wrong. Indexes are added as ACTIVE in a .df unless specified as inactive.
     
  8. cj_brandt

    cj_brandt Member

    If you choose to use idxactivate, in my experience this utility runs faster with the database online vs. running it in single-user mode.

    As a rule, we add all indexes as inactive and then activate with idxactivate, because we ran into the situation you described.
     
