
Database Hot backup


Well-Known Member
What do you mean by "hot" backup?

What are your requirements regarding the following:
  • How fast must the system be available again, how much downtime can you afford?
  • How much data can you afford to lose when the machine which hosts your server backend is gone?
Answering these questions is essential to coming up with a disaster recovery strategy; the answers determine what you need to do and use to achieve those goals.

Please don't get mad at me, but reading the Database Administration guide that comes with the OpenEdge documentation would be a good start to understand what's available to you with the database product.

Regards, RealHeavyDude.


Indeed, if you are a 24x7 shop, then merely going back to a backup is going to be disastrous. If you do backups every 24h, count on it being 23h59m since the last backup when the system crashes. After-imaging is essential.

BTW, the idea of being a 24x7 shop already and *not* already having a backup strategy is *really* scary. I'd consider a quick consultation with someone like White Star to see what else might be wrong. Much better to apply a bit of preventative medicine than to have to consult them in an emergency.
You haven't revealed your OS but:

probkup online dbname dbname.pbk -com

should do what you ask so long as you are reasonably up to date and your db is either less than 2GB or you have enabled large files.
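If after-imaging is not yet enabled, here is a minimal sketch of turning it on. The structure file name `ai.st`, the database name, and the extent layout are all placeholders; check the Database Administration guide for your release, since details vary.

```shell
# Add after-image extents described in a structure file (placeholder name).
prostrct add dbname ai.st

# After-imaging can only be enabled against a freshly backed-up database.
probkup dbname dbname.pbk

# Enable after-imaging.
rfutil dbname -C aimage begin
```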

For more info on after-imaging check out After Imaging PPTX or tune in to my presentation at Virtual Interchange next week.
The DB is more than 50GB, but the individual files are about 2GB each. Right now we do an offline backup: we stop the database and then copy the file systems. The copy takes about 30 minutes and some other tasks take 1 hour, so the database is unavailable for 2 hours.

With an online backup I would like to start after 00:00. I think it will take more than 30 minutes.

I have a primary DB server and a recovery DB server using matrix.


Well-Known Member
From your posts I assume that you don't use after-imaging with your database. I cannot stress enough that you really should use it.

Using just backups once a day, regardless whether they are online or offline, leaves you with the chance of losing almost 24 hours of changes that happened to the database. I have experienced more than once how much fun database administrators had explaining to the users that they have to re-do everything they did yesterday because the backup from the day before had to be restored after the server crashed just before midnight.

With after-imaging you can reduce the data loss to just the transactions that were uncommitted at the time the disaster occurred.

The after-image files are the log of all committed transactions since the last backup. If you have the last backup and the after-image files, you can roll them forward and you'll be fine. Otherwise ...
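The recovery sequence just described can be sketched like this. Database and file names are placeholders, and the exact roll-forward syntax varies by OpenEdge release:

```shell
# Restore the most recent full backup.
prorest dbname dbname.pbk

# Roll forward each archived after-image file, oldest first.
rfutil dbname -C roll forward -a dbname.a1
rfutil dbname -C roll forward -a dbname.a2
```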

Again, the question IS NOT whether the system is available 24x7, BUT how much downtime and data loss you can afford when your system is gone.

Regards, RealHeavyDude.
Right now, if we restore the system from backup, we lose all of today's data.
I should explain that I am not an administrator but a system engineer, so administration is fairly unknown territory for me.

The requirement is that the system must work online all the time. The recovery database lets us switch over if something goes wrong with the primary server.
But if both servers are gone, at least one of them has to be restored from backup. I see that I can do an online backup, which gives me access to the database all day.
And what I did not know: the after-image process lets me restore the day's data that would otherwise be lost. If the servers are gone, it doesn't matter how long they are unavailable; no transactions happen during that time.

I hope you understand me now.
I do not think that you understand.

Properly configured and managed after-imaging will also copy the ai logs to an external system. Just as you should be sending your backup tapes off-site for proper safety, the after-image logs should also be continuously archived off-site throughout the day (a simple example that works for a low-volume system would be to e-mail them to a gmail account; larger and more active systems obviously need a better method). If you lose your server (or both servers) you obtain a new server, recall your off-site backup tape, download your externally stored after-image logs, and roll forward. That way you do not lose a day's worth of transactions.
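A low-volume sketch of that continuous off-site archiving. The extent name, remote host, and copy method are all assumptions; busier sites would use the AI file management utility or a real backup tool instead of a cron'd script like this:

```shell
# Switch to the next AI extent so the current one can be archived.
rfutil dbname -C aimage new

# Copy the now-full extent off the machine, then mark it empty for reuse.
scp dbname.a1 backupuser@offsite-host:/ai-archive/dbname.a1.$(date +%Y%m%d%H%M)
rfutil dbname -C aimage extent empty dbname.a1
```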

After-imaging does not just protect you against servers being destroyed -- it also protects you against human error and malfeasance. You can, for instance, use after-imaging to recover from someone deleting all of your customer data (either accidentally or maliciously). These sorts of things are actually more likely than the easier to imagine "fire, flood, earthquake" scenarios that most disaster planning considers.

I'm obviously biased, but I think you would be well advised to engage a good consultant to help you. You are, as you say, not a Progress DBA. You can really mess things up if you don't know what you are doing.


Expect your on-line backup to take considerably longer than the offline version, especially if there is concurrent activity against the DB.
Data Protector for Progress OpenEdge 4GL using IBM Tivoli Storage Manager, can help to protect the databases from logical or physical errors.
This in combination with Progress protections (replication) makes the solution even better.

Rob Fitzpatrick

Data Protector for Progress OpenEdge 4GL using IBM Tivoli Storage Manager, can help to protect the databases from logical or physical errors.
This in combination with Progress protections (replication) makes the solution even better.
This sounds suspiciously like a sales pitch, and is not terribly relevant to the subject of this thread. I see no indication that the original poster has, wants, or can use the aforementioned product.
Your web page seems to indicate that your product is a wrapper of some kind for the standard after-imaging process. So I don't see how you're making anything "better".
Hi Tom,
Thanks for replying.
The product uses the natively supported Progress way to back up and restore the databases.
But instead of storing the data to a local disk, all the data is sent directly to the backup system.

An example of how this can improve the backup and, especially, the restore/recovery:
1. Progress backup to local disk takes 1 hour
2. TSM File backup of local disk (progress backup) to the TSM Server takes 1 hour

The above example will take 1h + 1h = 2h to protect the data.

With Data Protector for Progress, the steps below describe the same process:
1. Backup of Progress directly to TSM takes 1 hour
The complete protection window will be shorter, with no risk of losing data.

What happens in the first example if the TSM file backup picks up the Progress backup file while the Progress backup to local disk is still running?
- The result would be a corrupt backup in TSM.

Here is an example of how the restore will be faster compared to the other method:
1. restore full backup from TSM to local disk, takes 1 hour
2. execute progress restore of the full backup, takes 1 hour
3. restore incremental backup from TSM to local disk, takes 20 minutes
4. restore incremental Progress data to the database, takes 20 minutes
5. restore after images from TSM to local disk, takes 10 minutes
6. restore after images from local disk to the Progress databases, takes 10 minutes

The total restoration time might be as long as 3 hours.
With the Data Protector solution, the restore is done directly to the Progress database:
1. full restore from TSM to progress, takes 1 hour
2. inc restore from TSM to progress, takes 20 minutes
3. after image restore from TSM to progress, takes 10 minutes.

The total restore is now down to 1 hour and 30 minutes.
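To make the comparison concrete, here is the arithmetic behind the two restore paths, using the illustrative timings from the post (not measurements):

```shell
# Staged restore path (minutes): TSM->disk full, Progress full restore,
# TSM->disk incremental, Progress incremental restore, TSM->disk AI, AI roll forward.
staged=$((60 + 60 + 20 + 20 + 10 + 10))

# Direct restore path: full, incremental, and AI streamed straight from TSM.
direct=$((60 + 20 + 10))

echo "staged: ${staged} min, direct: ${direct} min, saved: $((staged - direct)) min"
```

The saving comes entirely from eliminating the intermediate local-disk copy, so it scales with backup size.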

Another benefit is that you do not need the more expensive local disk space, which lowers the TCO of the solution.
You also get the benefits of deduplication and compression in TSM, compared to not using the solution.
How much data would differ between each full backup in Progress?

For a normal Oracle database, for example, the reduction ratio with a 30-day retention time, 2 full backups per week, incrementals in between, and archive-log backups would be ~80%.
So I would guess that you would see something similar here too.
And with this solution you can also protect the data over low-bandwidth links, because of the client-side deduplication options.
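A back-of-the-envelope sketch of what that ~80% reduction could mean, assuming this thread's 50 GB database produces 50 GB full backups with the retention scheme quoted above (every figure here is an assumption, not a measurement):

```shell
# 50 GB fulls, 2 per week, 30-day (30/7 week) retention; ~80% dedup reduction.
raw=$(awk 'BEGIN { printf "%.0f", 50 * 2 * 30 / 7 }')
dedup=$(awk 'BEGIN { printf "%.0f", 50 * 2 * 30 / 7 * 0.2 }')
echo "raw: ${raw} GB, deduplicated: ${dedup} GB"
```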

Please let me know if this explains the benefits of the solution.
Regards Tomas