AI Best Practice

Graeme

New Member
Hi all, wonder if someone could offer some advice on best practice when it comes to the use of AI?


We have a number of OpenEdge 10.1C installations around the country running a business system.


Each server has an identical "DR" machine with exactly the same hardware spec on the same site.


We have previously used online and incremental backups to copy BAK files over to the DR server so that it can be restored in the event of a problem with the LIVE machine.
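For reference, that cycle uses the standard backup utilities. A sketch with illustrative paths (verify the exact `probkup`/`prorest` syntax against your release):

```
# On LIVE: online full backup, then copy the BAK file across
probkup online /db/live/sports /backup/sports.bak
scp /backup/sports.bak drserver:/backup/

# On DR: restore the copy into the standby database
prorest /db/dr/sports /backup/sports.bak
```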


However we are now looking at the use of AI to achieve this.


I've been through the documentation and have been able to set up a test environment including a number of successful restorations etc.


...but since we have the 2nd server in place on site, we've been wondering if there is a better way of making use of the AI functionality???


Wondered what other people were doing?


Optimal scenario would be to have a copy of the LIVE databases on the 2nd server that is always as up to date as possible but without hindering the ability to use AI to restore the primary server.


Original thoughts were to have an online backup first thing in the morning, at which point the two AI extents for each DB would swap roles (FULL/BUSY).


This backup job would then copy the BAK files over to the 2nd server, and at regular intervals during the day the busy AI file would be copied over so that it could be applied if necessary.


...but since I'm new to AI this may not be the most efficient way of doing it. Although BUSY AI files seem to copy quite happily, I'm not sure if this is a supported action and whether I should be copying only those AI files that are not marked as busy?


Or would it be better to use more AI files??


Any advice on best practice would be very much appreciated.
Regards
Graeme
 
Hi all, wonder if someone could offer some advice on best practice when it comes to the use of AI?

Best Practice #0 -- implement after-imaging. Failing to have after-imaging implemented is the #1 DBA Worst Practice.

We have a number of OpenEdge 10.1C installations around the country running a business system.

Each server has an identical "DR" machine with exactly the same hardware spec on the same site.

We have previously used online and incremental backups to copy BAK files over to the DR server so that it can be restored in the event of a problem with the LIVE machine.

However we are now looking at the use of AI to achieve this.

Excellent! You are on your way to best practice!

I've been through the documentation and have been able to set up a test environment including a number of successful restorations etc.

You might find this PPT helpful as well.

...but since we have the 2nd server in place on site, we've been wondering if there is a better way of making use of the AI functionality???

Wondered what other people were doing?

Optimal scenario would be to have a copy of the LIVE databases on the 2nd server that is always as up to date as possible but without hindering the ability to use AI to restore the primary server.

That would be a "warm spare" or a "verified backup". Basically you continuously apply after-image logs as they fill. It's fairly easy to do, although there are potential licensing issues depending on how you go about it.
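On the spare, the roll-forward side can be sketched like this (paths and the helper name are illustrative; `roll forward -a` is the real `rfutil` qualifier, but check the details for your release):

```shell
#!/bin/sh
# Sketch of the warm-spare side: after restoring a backup of LIVE once,
# apply archived AI files to the DR database as they arrive, in the
# order their sequence-numbered names give us.

apply_ai_archives() {
    db=$1 incoming=$2
    for f in "$incoming"/*.ai; do
        [ -e "$f" ] || break                  # glob matched nothing
        rfutil "$db" -C roll forward -a "$f" || return 1
        mv "$f" "$incoming/applied/"          # keep it in case of a reroll
    done
}

# Typical cron usage on the DR machine:
#   apply_ai_archives /db/dr/sports /archive/ai/incoming
```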

You might also want to consider OpenEdge Replication.

Original thoughts were to have an online backup first thing in the morning, at which point the two AI extents for each DB would swap roles (FULL/BUSY).

This backup job would then copy the BAK files over to the 2nd server, and at regular intervals during the day the busy AI file would be copied over so that it could be applied if necessary.

...but since I'm new to AI this may not be the most efficient way of doing it. Although BUSY AI files seem to copy quite happily, I'm not sure if this is a supported action and whether I should be copying only those AI files that are not marked as busy?

Or would it be better to use more AI files??

Any advice on best practice would be very much appreciated.
Regards
Graeme

This last part sounds quite confused.

You should only be copying FULL extents, not BUSY. When an extent is marked FULL (either because it is fixed size and filled up or because you marked it full with rfutil on a schedule) you first copy to an archive location. As part of the copy you should rename it using some useful standard such as the sequence number. After the initial copy and archive you then copy it to a remote server so that it is safe in the event of a disaster. At that point you can decide to use it in a roll forward process.
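A minimal shell sketch of that archive cycle (paths, the DR host name, and the naming helper are all illustrative; `rfutil`'s exact output should be verified against your release):

```shell
#!/bin/sh
# Sketch of the FULL-extent archive cycle: find a FULL extent, copy it
# to the archive under a sequence-numbered name, mark the extent empty
# for reuse, then ship the archived copy off-site.

# Sequence-numbered names sort into roll-forward order,
# e.g. "sports.0000123.ai".
archive_name() {
    printf '%s.%07d.ai' "$1" "$2"
}

archive_full_extent() {
    db=$1 archive=$2 drhost=$3 seq=$4

    # Ask the database for the oldest FULL extent (empty = nothing to do).
    extent=$(rfutil "$db" -C aimage extent full)
    [ -n "$extent" ] || return 0

    name=$(archive_name "$(basename "$db")" "$seq")
    cp "$extent" "$archive/$name"                  # archive locally first
    rfutil "$db" -C aimage extent empty "$extent"  # free the extent for reuse
    scp "$archive/$name" "$drhost:/archive/ai/"    # then copy off-site
}

# Typical cron usage, every 15-30 minutes:
#   archive_full_extent /db/live/sports /archive/ai drserver "$next_seq"
```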

I generally suggest 8 to 16 variable extents that are marked full and archived on a schedule (every half hour or 15 minutes depending on how active the system is and how valuable the data...)
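As a sketch, eight variable-length AI extents in a structure file would look like this (in a .st file an `a` line with no size token defines a variable extent; paths are illustrative, and the extents would be added with `prostrct add`):

```
a /ai/sports.a1
a /ai/sports.a2
a /ai/sports.a3
a /ai/sports.a4
a /ai/sports.a5
a /ai/sports.a6
a /ai/sports.a7
a /ai/sports.a8
```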
 
Many thanks for taking the time to respond...but I give in...how do I view the xml document that you linked to? Just keeps coming up as source code on my screen??? :confused: :blush:


If I should only copy a FULL AI file and not one that is still busy, do people usually do this simply by keeping track of what the latest AI file is, or is there a nice little command for this?

Regards

Graeme
 
They are PowerPoint 2007 slides.

You parse the output from "rfutil dbname -C aimage extent full" to know which extents are full. More details on slide 56 ;)

Or you set up the AI archiver daemon to do it for you.
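If the archiver daemon is available on your license, the setup looks roughly like this (parameter names as I recall them from the 10.x docs; verify against your release):

```
# Enable the archiver once:
rfutil /db/live/sports -C aiarchiver enable

# Start the broker, telling it where to archive and how often (seconds):
proserve /db/live/sports -aiarcdir /archive/ai -aiarcinterval 1800
```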
 
Note that the slides are .pptx, so require a recent version of the viewer or PowerPoint to read.
 
Excellent and very useful presentation. Many thanks

Some of our sites are only using Workgroup due to the small size, is the Daemon available in Enterprise only?

Again, thanks for taking the time to assist.

Graeme
 
It should be. But I, personally, have not tested to be sure. The differences between WG and Enterprise are:

1) Connection limit of 65
2) No APWs
3) No -spin
 
I came here to ask a few questions about AI management, but your presentation answered all of them (and a few I didn't know I had yet). Thanks, Tom!
 
Thanks! It's always nice to know when something works :)
I never even realized you could do variable length AI extents...and multiple ones at that. Right now we play the game of tuning the job that checks for full extents and archives them against the sporadic needs of the database. Variable extents with a regular archive schedule make a lot more sense.

Two questions (OK, so the presentation didn't answer every question I didn't know I had): Is there any limit to the sequence number? Right now we may fill 2 128MB fixed extents throughout the day, but when we do our overnight processing, I may fill one every 2-3 minutes. We check for full extents every 5 minutes, archiving only if one is actually full. We wind up with about 250 archived extents every week. If I'm archiving a variable extent every 15 minutes, I'd have (let me do the math...) 672 archived extents. Doesn't seem like a problem, but I'd hate to hit some ceiling.

Second, you mention some scripts (ai.new, ai.sweep, etc)--are those available anywhere?
 
Up until recently there was a limit on the sequence number of 65k (it might be slightly less).

This was (finally) fixed with 10.1C (I think, it is somewhere in that neighborhood).

Switching (variable) extents every 15 minutes is 96 switches per day. It takes a little less than 2 years to hit the limit at that rate. Once you have things running smoothly it is surprising how soon that 2 years rolls around ;)
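The arithmetic behind that estimate, as a quick sanity check (the ~65,536 ceiling is the 65k limit mentioned above):

```shell
#!/bin/sh
# Four switches per hour, around the clock, against a ~65,536 sequence limit.
switches_per_day=$((4 * 24))
days_to_limit=$((65536 / switches_per_day))
echo "$switches_per_day switches/day -> ceiling in ~$days_to_limit days"
```

682 days is a little under two years, matching the estimate above.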

The scripts are UNIX specific and generally take a bit of tweaking for your specific systems and setup so think of them more as examples than production scripts but, sure, they are (now) at: http://dbappraise.com/ppt/ai.tar
 