There are many reasons for locks hanging around a long time - and IMHO they are almost always due to poor programming in one way or another. I fairly often find that it's because of a missing RELEASE. Some programmers think that at the end of a transaction all locks "go away" - but if the record buffer is still in scope, the lock is only downgraded, not dropped.
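To illustrate what I mean, here is a minimal sketch (the table and field names are just stand-ins from a sports-style schema, not anyone's real code):

    /* sketch only: Customer is a stand-in for any table */
    DO TRANSACTION:
        FIND FIRST Customer EXCLUSIVE-LOCK NO-ERROR.
        IF AVAILABLE Customer THEN
            Customer.Comments = "updated".
    END.
    /* the transaction has ended, but because the Customer buffer is
       still scoped to the procedure the EXCLUSIVE-LOCK is downgraded
       to a SHARE-LOCK rather than dropped */

    /* ... long-running work while that share-lock is still held ... */

    RELEASE Customer.  /* without this, the lock lives on until the
                          buffer goes out of scope */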
Yes Rob, I am aware of that. I actually take a bit more just because of that. I am only interested in troublesome locks, in particular those that have been around "a bit too long" so I can identify programs that are holding locks too long - most likely due to the omission of a RELEASE command...
Tom - you are quite correct (of course!) - promon will continue providing as many pages as you ask for. Therefore, one can use _DbStatus._DbStatus-NumLocks to determine how many pages to request.
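Something roughly like this is what I have in mind (a sketch only - the entries-per-page figure is an assumption, so adjust it to whatever your promon display actually shows):

    DEFINE VARIABLE iLocks AS INT64   NO-UNDO.
    DEFINE VARIABLE iPages AS INTEGER NO-UNDO.

    /* _DbStatus has a single record */
    FIND FIRST DICTDB._DbStatus NO-LOCK.
    iLocks = DICTDB._DbStatus._DbStatus-NumLocks.

    /* assumed: 14 lock entries per promon page */
    iPages = TRUNCATE(iLocks / 14, 0) + 1.

    DISPLAY iLocks iPages.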
Thanks everyone - I believe my problem is now solved.
Does anyone know what promon does if there are more than 9999 locks? I expect that it will just give you the first 9999 - and that's it.
Also - is anyone aware of any plans from Progress to increase the number of locks promon can report?
That's my experience too. On a "big" system under heavy load - ONE scan of the lock table with _Lock can take more than one hour (!!) - and at the same time it has a terrible impact on system performance.
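For reference, the scan I mean is nothing more exotic than this kind of thing (just a sketch - the point is that every _Lock read has a cost, and on a big lock table the full pass adds up):

    DEFINE VARIABLE iHeld AS INTEGER NO-UNDO.

    /* walk the whole lock table, counting entries that are in use */
    FOR EACH DICTDB._Lock NO-LOCK WHERE DICTDB._Lock._Lock-Usr <> ?:
        iHeld = iHeld + 1.
    END.

    MESSAGE "Locks in use:" iHeld.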
That's exactly why I've moved to using promon. It provides a full copy of everything...
Hi,
OE 10.2B - RH Linux.
I have a process that gets database lock details using the output from promon. I used to do it with the _Lock VST, but found that too slow.
Recently an associate said there was a risk of "freezing" a database by using promon to do this on "large" databases. That...
Rob - I agree with you entirely. What you have said is almost exactly what I said myself about the matter! But ..... deaf ears, I guess.
Tom - thank you very much. I believe you have "quantified" OEE and OEM for me. I think I'll let them gather a bit more dust.
Thanks for the comments, Rob.
Your point about rolling-out 10.2B is well-understood. I have pushed that barrow - but the decision is not mine to make. There are many balls in the air here right now - and the priority is to get off SCO/9.1D ASAP. The belief is that once all sites are RH/10.2B...
We have several 10.2B databases at a central office and about 250 others connected remotely. Most hosts are Linux (RH) - and some are SCO. Half the remote DBs are 9.1D - but are being migrated to 10.2B. All applications use character clients (we don't have an Appserver).
I know very little...
I have a system that controls some overnight activity. I created a directory /u0/sr - and under it several other directories for scripts, logs, data files and a small Progress DB (10.2B under Red Hat Linux). The DB is (naturally enough) in /u0/sr/db.
I ran into file space problems. It was...
This is not a DB question - but I thought the wise people in this forum may be able to help me.
We have several Progress DBs serving users 24x7 in different countries. Admins are not always on-site and if a problem happens we need to alert people. This is done now with email - but that's not...
Yes - there is quite a lot of temp file activity.
The "issue" about changing to 64-bit (which we physically have) is that our management will not allow the change unless the vendor has given an assurance that they have fully tested their application under 64-bit. Why doesn't the vendor...
I wasn't aware of the -Bt startup parameter; thank you for that. But I don't think that helps us. We recently upgraded our server - and we have "lots" of memory. If we "could" - we'd increase -B considerably - but we can't because we're stuck with a 32-bit version of OpenEdge (10.2B). We can't...
At a previous company where I used Solaris I put the client temp files in the /tmp directory (effectively RAM) and that gave a positive performance boost.
I'm now on an AIX system (OE 10.2B on AIX 7.1) and there is a "lot" of spare memory (over 20GB). I have tested creating a RAM disk and...
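As a quick sanity check after pointing -T at the RAM disk, something like this (just a sketch) confirms where a client session is actually putting its temp files:

    /* SESSION:TEMP-DIRECTORY reports the directory set by -T */
    MESSAGE "Client temp files are in:" SESSION:TEMP-DIRECTORY.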
I may not know the particular meaning you have in mind, Tom! But if it is oriented towards something like "awkward" ... then I sure wouldn't mount an argument against it! :p
Nevertheless - there are a few things it is useful for.
I have solved the problem. It turns out an if statement is only valid inside an action block (or a function) - at the top level an awk program is just a series of pattern { action } rules. I enclosed everything inside {} braces - and now awk is happy.
Ron.