Question: Why prorest into / over an existing or void database?

Chris Hughes

ProgressTalk.com Sponsor
Hi

As per the title really - why would you restore a database over an existing older copy, or even create a void database first and restore into that?

I've always taken the approach of never supplying anything more than a .st file and letting the restore do the rest.

I've come across an interesting one: a production DB has large files enabled, but the restore is running on a server where this DB didn't have large files enabled, and prorest won't grow the BI file beyond 2 GB. The .st is just a single variable extent for the BI file.
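For reference, the BI definition in the .st I supply is just a single variable extent, something like this (name and path illustrative):

    # mydb.st - BI area as a single variable extent
    b ./mydb.b1

so on the non-large-files target the restore fails once that extent needs to grow past 2 GB.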

Thanks

Chris.
 

TheMadDBA

Active Member
When I was restoring 1-2 TB databases on a regular basis... restoring into an existing, pre-grown database saved a ton of time on the restore, even with really fast drives.

But I also knew exactly how the Prod/UT/Test/DR databases were set up and made sure to synchronize structure changes.
 

Chris Hughes

ProgressTalk.com Sponsor
Cheers Mad

That's interesting. I guess I need to understand more about the order of what prorest actually does.

I could sort of see where you are coming from with a void DB - a bit like a thick-provisioned disk in a VM rather than a thin-provisioned one!?

But surely an existing DB would have to be the equivalent of zeroed before being repopulated, so it would save no time?
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Prorest (with a provided structure file) first formats the extents to their prescribed sizes, if any, then restores the data from the backup file into those files. Obviously, if the files don't exist yet they have to be created; if they do exist and have the proper characteristics, prorest can proceed directly to writing the data into the extents without having to grow them first. Note that prostrct create with an all-variable structure just gives you a tiny database. To use this technique you would specify a structure with one or more fixed extents per area (plus a variable extent), so the time to grow the files is spent in prostrct create, not in prorest.
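As a sketch of that approach (area names, paths, and sizes below are illustrative only; fixed-extent sizes in a .st file are in KB):

    # mydb.st - one fixed extent plus a trailing variable extent per area
    b ./mydb.b1 f 524288
    b ./mydb.b2
    d "Schema Area":6,64;1 ./mydb.d1
    d "Data":7,64;8 ./mydb_7.d1 f 2048000
    d "Data":7,64;8 ./mydb_7.d2

    # Format the fixed extents up front, then restore into the pre-grown files
    prostrct create mydb mydb.st -blocksize 8192
    prorest mydb prod_backup.pbk

The cost of formatting the extents is paid in prostrct create; prorest then just writes the data.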

I often use a similar technique for creating the target DB in a dump and load. I take a target DB previously created in a D&L test and procopy $DLC/empty8 overtop of it. This wipes out all the application data and schema, leaving an empty pre-grown DB ready for a schema/data load. Then during the load the I/O cycles are spent purely on writing data, rather than on a mix of extending extents and writing data. In short, it goes faster.
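A minimal sketch of that sequence, assuming an 8K-block target DB named mydb left over from a previous D&L test:

    # Overwrite mydb with the empty 8K-block template database; application
    # data and schema are wiped, but the pre-grown extents keep their sizes
    procopy $DLC/empty8 mydb

After that, the schema and data loads write into already-sized files.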

Regarding the large files issue: are you prevented from enabling large files, e.g. because of a Workgroup or Personal license on the target side? If so, you can restore the DB with the -keeptargetlfe option (added in 10.2B04), which allows you to restore a large-files backup overtop of a non-large-files DB, provided of course that areas in the target which need to hold more than 2 GB contain enough sub-2 GB extents to permit the restore. But if you do have Enterprise on the target side, you can enable large files even in a void DB.
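Roughly, the two paths look like this (DB and backup file names are illustrative):

    # Enterprise target: enable large files on the target DB, then restore
    proutil mydb -C enablelargefiles
    prorest mydb prod_backup.pbk

    # Workgroup/Personal target (10.2B04+): keep the target's large-file
    # setting and restore the large-files backup over the non-large-files DB
    prorest mydb prod_backup.pbk -keeptargetlfe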

Typically, as my DBs are small (<200 GB), I create all-variable structures. One change I have made away from that, however, is that I create the BI area with one fixed extent and one variable extent. The fixed extent is sized to 4x the largest BI cluster size I plan to use. This makes opening the DB after a truncate bi much quicker, as I don't have to re-grow the variable extent.
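For example (names illustrative; .st sizes in KB), with a 16384 KB BI cluster size the fixed extent would be 4 x 16384 = 65536 KB:

    # BI area: one fixed extent at 4x the BI cluster size, plus a variable
    b ./mydb.b1 f 65536
    b ./mydb.b2

    # Set the 16 MB BI cluster size after the DB is created/restored
    proutil mydb -C truncate bi -bi 16384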
 