ron
Hi everyone. I have an annoying problem that I've wrestled with for some time. Maybe one of you knowledgeable gurus can help me!
On our production server (Sun V480 with 16GB memory - Solaris 8) we have a 260GB Progress database (Progress 9.1D09). Each night at about 7pm we start off an online backup to disc using probkup. This always succeeds without a hitch. (We have "large files" enabled both for Solaris and for Progress.)
Sometimes we want a recent copy of the production database on the development server (Sun E450 with 2GB memory). We could do this with tapes -- but we still use DLT8000 drives and it takes 4 tapes and about 20 hours!
We used to do this over the network, using a script like the following. It pipes the very large backup file produced by probkup through a named pipe and restores it on the E450 server:
#!/usr/bin/ksh
DATETIME=060213190800
PROGRESS=/progress/Ver91.D09/bin

# Recreate the named pipe.
/usr/bin/rm -f /tmp/nod
/usr/sbin/mknod /tmp/nod p
chmod 777 /tmp/nod

# Stream the backup file from the production server into the pipe...
nohup rsh -n tpjbil1 "cat /DBbck/BU.c.$DATETIME" > /tmp/nod &

# ...and restore from the pipe on this (development) server.
$PROGRESS/prorest /Custtest/Database/CUSTIMA/custima /tmp/nod
The script is run on the development (E450) server.
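To illustrate the mechanism for anyone unfamiliar with it, here is a minimal, self-contained sketch of the same named-pipe technique run entirely on one machine (no rsh, and hypothetical /tmp file names): a background reader counts the bytes it receives through the fifo while a writer feeds data in. In our real script the writer is the remote cat and the reader is prorest.

```shell
#!/bin/sh
# Local demo of transferring data through a named pipe (fifo).
PIPE=/tmp/demo_fifo.$$
rm -f "$PIPE"
mkfifo "$PIPE"

# Background reader: counts the bytes that arrive through the fifo
# (stands in for prorest in the real script).
wc -c < "$PIPE" > /tmp/demo_count.$$ &
READER=$!

# Writer: sends 4 blocks of 1024 bytes into the fifo
# (stands in for the remote "cat" over rsh).
dd if=/dev/zero bs=1024 count=4 > "$PIPE" 2>/dev/null

wait $READER
COUNT=$(tr -d ' ' < /tmp/demo_count.$$)
echo "bytes through fifo: $COUNT"

rm -f "$PIPE" /tmp/demo_count.$$
```

The reader blocks on opening the fifo until the writer connects, so the two sides synchronise automatically; a "broken pipe" occurs when one side goes away while the other is still writing.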
We have done this several times in the past (when the database was smaller) without a problem, but these days it crashes with a "broken pipe" message.
I estimate that the full task should take about 14 hours -- but it crashes at about the 9-hour mark (i.e., after transferring about 160GB).
The fact that it always used to work when the DB was smaller -- but now fails every time -- seems to suggest that we've hit some kind of time limit, or that we've exhausted some resource. Both the source and target servers have adequate disc space, and the source (at least) has abundant memory.
I have searched many forums -- but have not found anything that helps me. Any suggestions?
Ron.