Progress DB from Win to Lin

algernonz

New Member
A time-zone independent g'day to everybody,

I have to move a Progress database (9.1E) from Windows to Linux for the first time.
I want to do this without a dump & load; instead I'd like to restore a backup of the Windows database on Linux, since I've understood that a backup file also contains the meta-structures of the database.

So when in Linux I do a
# prostrct create mydb mydb.st -blocksize 4096
(the .st file is copied from Windows and edited; 4096 is the db blocksize on Windows)

Then can I just do a

# prorest mydb /var/backup/windows-backup.du

... and end up with a working database on Linux, or do I really need to walk the path of

# prostrct create mydb mydb.st -blocksize 4096
# procopy $DLC/empty4 mydb

And then load the dump made on Progress/Windows?

I am curious as to whether the first option is possible.
If so, what are the advantages of each option?
Thanks & regards.
 
From the Progress KB
"Although it is argued that it can work across platforms with the same endian, these scenarios are not tested and certified so they remain unsupported from a Progress perspective".

When restoring a backup, if the target database doesn't exist, it will be created as the first step of the restoration.
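Concretely, the restore-based path the OP describes would look something like this, using the database and file names from the question (and keeping in mind this path remains unsupported across platforms):

```shell
# Create the empty structure from the edited .st file
# (as noted above, prorest will create the database itself
#  if it doesn't already exist, so this step is optional)
prostrct create mydb mydb.st -blocksize 4096

# Restore the backup taken on Windows
prorest mydb /var/backup/windows-backup.du
```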
 
What is the issue with doing the dump and load? Can you not afford the downtime?

1. My impression is that a restore is quite a lot faster than a DB Dump & Load.
2. I am not familiar enough with Progress to determine which is better under the circumstances.
3. Hey, this is a learning process......
4. Thanks
 
All things being equal, for a decent-sized database a restore from a backup would be faster than a dump/load/index rebuild. For a small database, they're both quick enough that it doesn't much matter. Obviously, hardware (cores/core speed/# of disks/disk type/disk speed/etc.) and network speed factor in here as well.

However, as CJ pointed out, in attempting a backup and restore from Windows to Linux you would be relying on an unsupported method of migrating your database. It's not something I would do with a production database. I've seen very experienced DBAs try it and run into issues. You might find yourself backing out of the attempt and reverting to plan B, i.e. a dump and load, and taking more total time than if you had started down that road in the first place. If you're new to Progress, stay on the beaten path: learn how to do a dump/load/index rebuild (and document what you learn, and test it).

Doing a dump and load also allows you the opportunity to address areas of database configuration that are only possible (or feasible) via D&L, for example adjusting storage area records-per-block settings, moving unusual or high-use tables and indexes to their own storage areas, adjusting database block size, etc. Depending on the current state of your database you may want to do one or more of those things. Also, while I'm on the subject, you're using an old, obsolete, buggy, essentially unsupported version of Progress. If you upgraded to the current release you would be able to take advantage of the many performance and reliability enhancements added since 9.1E, and you would also be able to take advantage of the performance, robustness, and maintenance advantages of Type II storage areas. You have to dump and load your data to move from Type I to Type II storage, so this would be an ideal time. (Technically you don't have to dump and load to move to Type II, but you really should.)
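To illustrate the difference: in an OpenEdge 10+ structure file, a Type II area is declared by adding a blocks-per-cluster value (8, 64, or 512) after the records-per-block setting. The area names, numbers, and paths below are hypothetical examples only:

```
# Hypothetical Type II area definitions (OpenEdge 10+)
# Format: d "AreaName":areanum,recsPerBlock;blocksPerCluster path
d "Data":8,64;512 /var/db/mydb
d "Index":9,1;8 /var/db/mydb
```

An area with no cluster-size value (as in a version 9 .st file) is Type I.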

Even if you don't reconfigure or restructure your database, you may still derive benefits from a D&L, as it will correct issues like record fragmentation and scatter and low index utilization. Look into the "proutil" command with the "dbanalys", "tabanalys", and "idxanalys" qualifiers for a way to generate reports that will give you data on these metrics.
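For example, run against the database name from the question (output file names are arbitrary):

```shell
# dbanalys combines table and index analysis in one report
proutil mydb -C dbanalys > mydb.dbanalys.txt

# Or generate the table and index reports separately:
proutil mydb -C tabanalys > mydb.tabanalys.txt
proutil mydb -C idxanalys > mydb.idxanalys.txt
```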

Also, read the Database Administration manual and educate yourself on the various methods of dumping and loading a database, e.g. Data Dictionary dump & load, bulk load, and binary dump & load. There are pros and cons to each, so get to know them and figure out which is appropriate for your needs. That may even mean a mix of methods, e.g. D&L a few very large tables with a binary D&L, and the rest with a dictionary D&L.
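As a sketch, a mixed approach might look like this (the table name "customer" and the directory paths are hypothetical):

```shell
# On the source: binary dump of a large table
proutil mydb -C dump customer /backup/bindump

# On the target: binary load of that table, then rebuild all indexes
proutil newdb -C load /backup/bindump/customer.bd
proutil newdb -C idxbuild all
```

Smaller tables could then be dumped and loaded through the Data Dictionary.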

Whichever approach you end up using, test it and time it, document it, test it again from your documentation to ensure it's correct, and repeat until you're confident. It will be a good learning experience.

And finally, read Tom Bascom's guide to Storage Optimization Strategies for a good overview of the subject.
 
Think of it as a specific minimum version.

If you can change platforms then you can also presumably compile the application. Therefore there is nothing stopping you from upgrading to a current release such as 10.2B or 11.0.

Version 9 is a nice solid product. However, version 9 was designed in the late 90s. 9.1e service pack 4 was released 6 years ago. There will never be another version 9 service pack. "Support" basically consists of a shoulder to cry on if something goes wrong. If you are going to make a change (and you should) you should not be targeting version 9. It is like upgrading from a 486 to a Pentium 3.
 
Worth noting that the binary dump and load *is* supported cross-platform and is faster than ASCII.
 