The SQL engine will take a share lock on several of the underscore tables, depending on the version of OE. That session needs to issue a COMMIT or a ROLLBACK.. regardless of the transaction isolation level. If it is truly just sitting around, that session should be disconnected.
On the plus...
Look into the iostat command on AIX. It has options to track disk I/O and total throughput on the host.
Run a sample during the fast backup and another during the slow backup; the difference should be pretty clear.
There are also a host of vmo and ioo options that control how memory is used...
One thing that will matter if the order is changed: if you do simple exports of data (export Monitor) from the table and then try to import it again with a different field order, you are likely to get errors, or logical data corruption at the least.
But essentially it should be something you ignore...
No offense... but the first basic mistake was trying to use V8.3. It was released in 2001 and is very much a dead product; several full releases have come and gone since then. V11.7 is the newest version, with V12 around the corner.
It is a miracle that it even works on any version of...
The short version is... the CGI interface is pretty simple and rock solid. The WSISA interface is a bit buggier and had (maybe still has) memory leaks.
I would spend more time looking into the startup/code/network side of things than CGI vs WSISA. I would bet there is something there that...
I prefer the stacktrace method that Tom suggested.. it works every time without a ton of logs to parse. It will also show you any orphaned persistent procedures.
yeah... exactly where is all of this data to concatenate coming from?
If you are really doing the loop as shown above.. the string gets longer and longer on each iteration as you pass it to the SUBSTITUTE function, whereas the concat is not doing that.
Exactly how much data (length of the...
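To make the point concrete, here is a minimal ABL sketch of the two approaches described in the thread. The variable names, the appended string, and the iteration count are all made up for illustration:

```abl
/* Minimal sketch, assuming a loop similar to the one in the thread.
   cResult, i, and the counts are illustrative only. */
DEFINE VARIABLE cResult AS CHARACTER NO-UNDO.
DEFINE VARIABLE i       AS INTEGER   NO-UNDO.

/* Slow (per the thread): every pass feeds the entire accumulated
   string back through SUBSTITUTE. */
DO i = 1 TO 1000:
    cResult = SUBSTITUTE("&1&2", cResult, "some-data,").
END.

/* Faster: plain concatenation just appends to the end. */
cResult = "".
DO i = 1 TO 1000:
    cResult = cResult + "some-data,".
END.
```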
Crazy thought here.. you seem to be able to run the profiler.. so why not profile the entire code and see where you are actually spending your time?
Is your where clause literally WHERE 04/23/17 <= tranDate AND tranDate <= 04/25/17?
If so... don't do that. Use tranDate BETWEEN startdate AND enddate, or (tranDate >= startdate AND tranDate <= enddate). An unquoted 04/23/17 is evaluated as arithmetic division, not as a date literal.
Also see this KB to see how to enable/disable SQL query plan logging ---> Progress KB -...
There isn't a prolog style option because that would make too much sense :D
The rather complicated workaround is available here: Progress KB - How to archive the AIMGT after image archival log file online?
On the plus side my largest archival log is 11MB.. about 4 months of data with AI pulls...
Like Rob said.. indexes are your most likely culprit.
Schema versioning should take care of the new columns almost instantly in modern versions. In older versions I seem to recall some issues with default values causing actual writes though.
If you know for a fact that existing records will not be updated you can (very carefully) allow access to the database while you load in the historical records. Testing the process and validating the process is key. You have to also know that the application will behave properly without those...
This KB is related to WebSpeed but you can use the same basic logic/code to start a profiling session.. Progress KB - How to start the PROFILER from a WebSpeed application?
This KB tells you how to download the Profiler GUI (Windows) or if you are running on a version that has it included...
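For reference, here is a hedged sketch of starting and stopping a session with the ABL PROFILER system handle; the output file name, description, and procedure name are illustrative, not taken from the KB:

```abl
/* Sketch of a profiling session via the PROFILER system handle.
   File name, description, and procedure are placeholders. */
ASSIGN
    PROFILER:ENABLED     = TRUE
    PROFILER:DESCRIPTION = "sample run"        /* assumed label    */
    PROFILER:FILE-NAME   = "/tmp/sample.prof"  /* assumed location */
    PROFILER:PROFILING   = TRUE.

RUN someProcedure.p.  /* the code you want to measure (placeholder) */

ASSIGN
    PROFILER:PROFILING = FALSE
    PROFILER:ENABLED   = FALSE.
PROFILER:WRITE-DATA().
```

The resulting .prof file is what the Profiler GUI mentioned above reads.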
Apologies for the stream of consciousness post.. doing this in between meetings/tasks
There are some vmo and ioo parameters that should be set for AIX 7.1. Take a look at the documentation and the IO/memory usage on the box before changing these though..
Run a quick iostat sample first to get...
That means it is the actual total shared memory for the OE/Progress service.. most of that would be for buffers.
There are also screens in promon that will show you the status/size of the buffer pools and other shared memory structures, along with a ton of other things. From what you have...
Step 1.. don't split the databases. Unless they are terabytes in size there is little value to it and just more complications.
You are seriously much better off making sure your data and index areas (Type II, right?) are properly set up and that you are taking advantage of -B and -B2 properly.
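As a rough illustration only (sizes depend entirely on your RAM and working set, and these numbers are made up), a server startup parameter file might carve out the two buffer pools like this:

```
# Hypothetical fragment of a server .pf file -- sizes are made up
-B  500000    # primary buffer pool, in database blocks
-B2 50000     # alternate buffer pool for hot, stable tables/indexes
```

Keep in mind that -B2 only helps for objects that have been explicitly assigned to the alternate buffer pool.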