Currently our century parameter is set to -yy 1920.
This is starting to create some issues in our software (QAD).
What do I need to do to get to the default of -yy 1950?
Is it just a matter of changing the client scripts, or do I have to recompile all the code?
:confused:
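For what it's worth, since -yy is a client session startup parameter, a quick way to see what a given client actually started with is to check it at runtime. A minimal 4GL sketch (SESSION:YEAR-OFFSET simply reports the -yy value in effect for the current session):

```
/* Displays the century break (-yy) this session started with.
   Handy for auditing which client scripts still pass the old 1920 value. */
DISPLAY SESSION:YEAR-OFFSET LABEL "-yy in effect".
```

Running this from each of your startup scripts before and after the change is an easy way to confirm every entry point picked up the new setting.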
Thanks for the responses, everyone.
Casper, I will give your code a try and see.
I should have said in my original post "besides db analysis".
The reason for this is as follows.
During a dump and load, the dump goes great: 700+ tables in around an hour.
Great so far.
Now it's time to bulk load.
Say we...
Hello,
I am looking for a quick way to go through every table in a database and determine whether there are any records in each table, for dump and load purposes.
(Not necessarily a count, just whether there are one or more records.)
I have read quite a bit about people attempting to count how many records are...
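One approach that avoids counting entirely is to generate a small checker program from the _file metaschema and run it, doing a cheap existence test per table. A sketch, with no dynamic buffers needed (so it should work on older versions too); chk.p is an arbitrary scratch file name:

```
/* Generate one CAN-FIND test per application table, then run the result.
   CAN-FIND(FIRST ...) stops at the first record, so empty vs. non-empty
   is answered without scanning the table. */
OUTPUT TO chk.p.
FOR EACH _file NO-LOCK WHERE NOT _file._hidden:
    PUT UNFORMATTED
        "IF NOT CAN-FIND(FIRST " + _file._file-name + ") THEN MESSAGE ~""
        + _file._file-name + " is empty~"." SKIP.
END.
OUTPUT CLOSE.
RUN chk.p.
```

This lists the empty tables; invert the test if you would rather list the tables that do have data.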
I am trying to compare records in 2 databases.
The table names and field names are the same in both.
For example, let's say db1 and db2.
I want to go through db1.table1 and find the same record in db2.table1.
If a record is not found in db2, I want the record used to find it displayed.
I...
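Assuming both databases are connected with logical names db1 and db2, and each table has a unique key, a sketch for one table might look like the following. The customer table and cust-num field are hypothetical stand-ins; a real version would repeat this per table (or generate the code from _file, as in the record-count thread):

```
/* Sketch: report db1.customer records that have no match in db2.customer,
   keyed on a hypothetical unique field cust-num. */
FOR EACH db1.customer NO-LOCK:
    FIND FIRST db2.customer
        WHERE db2.customer.cust-num = db1.customer.cust-num
        NO-LOCK NO-ERROR.
    IF NOT AVAILABLE db2.customer THEN
        DISPLAY db1.customer.cust-num.
END.
```

Note this only finds records missing from db2; run it the other direction as well if you also need records that exist only in db2.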
Why the big difference building identically sized extents (15 sec to 3 min)?
Sample output below.
I realize that system load has some impact, but not to this degree.
It does the same on a dedicated system.
Here is another strange thing that I noticed ...... if I let the system build 6 extents, and...
Tried a test.
OK, I tried a test using the plan of dumping the data I want to keep, then deleting the table, then re-creating the table and pulling the data back in.
I had to use the drop-table method to delete the table because it would not delete through the editor.
It went away for about...
Thanks for the responses, everyone.
This is kind of what I had been thinking, but I wanted to get some second opinions.
Does anyone know if it will take a lot of time when I actually delete the table, or should it be pretty much instant?
(my concern - the space in the DB still has to be marked as...
Here is the scenario.
I have a table with a large number of records in it (45 million).
There are several indexes on this table.
I want to delete, say, 44 million of the records.
A FOR EACH ... WHERE ... DELETE takes a looonnnnggggg time.
I was wondering if I could take an approach like this...
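One common pattern is to batch the deletes so that each transaction stays small, which keeps BI growth, undo time, and lock-table pressure down. A sketch, with hypothetical table and field names (order-hist, ord-date) and an arbitrary batch size of 1000:

```
/* Sketch: delete in batches of 1000 so each transaction stays small.
   NEXT OUTER ends the FOR EACH early, committing the current batch
   and starting a fresh transaction for the next one. */
DEFINE VARIABLE n AS INTEGER NO-UNDO.

OUTER:
REPEAT TRANSACTION:
    n = 0.
    FOR EACH order-hist EXCLUSIVE-LOCK
            WHERE order-hist.ord-date < 01/01/1999:
        DELETE order-hist.
        n = n + 1.
        IF n >= 1000 THEN NEXT OUTER.  /* commit this batch */
    END.
    LEAVE OUTER.  /* FOR EACH found nothing left to delete */
END.
```

That said, if you are deleting nearly everything, dumping the 1 million rows you want to keep, re-creating the table, and reloading is often faster overall than any row-by-row delete.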
Let me clarify a bit.
/data/live.db ...... live DB ..... running 24/7 except for a 2-minute snap image
/test/test.db ....... test DB
/backup/backup.db .... backup (unix copy)
The live DB is taken down just prior to the unix copy (actually a snap copy),
then the DB servers are restarted, then a tape backup of...
HPUX 11
Progress 8.3b
I would like to figure out how to do a probkup of a multi-volume DB (15 GB) from disk to disk (we have lots of fast disk).
(We currently do 2 probkups to tape before doing dump and loads, but I think we could save time by doing one of them to disk.)
I obviously need to be...
Hello,
First, the specs that I am working with:
HPUX 11 ..... lots of spare disk
Progress 8.3b
15 GB multi-volume structure DB, no AI, just BI
24/7 operation.
Here is what I want to achieve.
We are developing several new things against a test copy of the live DB.
Problem is that frequently...