Question: Which buffer size takes precedence?

ron

Member
In a "managed" DB:

If this is set: blocksindatabasebuffers=8888 ... in conmgr.properties.

And then I set: -B 9999 ... in the pf file (pointed to with otherargs=).
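
To make it concrete, this is roughly what the two files contain (the database name, paths, and the conmgr.properties section name are just illustrative - I'm writing them from memory):

    # conmgr.properties (maintained by the AdminServer)
    [configuration.mydb.defaultconfiguration]
        blocksindatabasebuffers=8888
        otherargs=-pf /app/db/mydb.pf

    # /app/db/mydb.pf
    -B 9999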

Which one takes precedence?

(I'd try it except that setting up a managed test DB is so much hassle.)

Ron.

RealHeavyDude

Well-Known Member
The last one wins. I have never used managed databases, but I would guess that the parameters in the parameter file are picked up after the name/value pairs defined in conmgr.properties. In the end a simple test should clarify it - you can see the parameters used to start the database broker in the database log file immediately after the broker has started.
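
Something along these lines should show it (the log path and database name are hypothetical):

    # after the broker starts, look for the -B entry near the end of the log
    tail -n 100 /app/db/mydb.lg | grep -e '-B'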

TomBascom

Curmudgeon
Just guessing, but... he's probably using a .pf in otherargs to avoid having to worry about whatever OEE/OEM decided to munge the last time someone mistakenly touched a field.

ron

Member
Thanks guys!

I got the answer - I found a test server that was set up as "managed" and did a test. You are right - the -B value in the pf file wins.

My problem was this ... I have a DR server that is configured to be a mirror-image of a prod server. Although we're licensed to run programs on the DR server, this one has not until now been used to do that. But now I have to run some very heavy jobs on that server and they're taking an uncomfortably long time to run. So I wanted to assign all available memory to DB buffers. But I wanted to do it in a way that made it super-simple to revert to the prior configuration in the (unlikely) event that the DR server has to take over as production. Therefore I didn't want to change any of the "managed" stuff; all I have to do now is disable the -B parameters in the pf files.
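
So the pf file now contains something like this (the value and path here are just for illustration - # starts a comment in a pf file, as far as I know):

    # /app/db/mydb.pf
    # Temporary: use the spare DR memory for buffers.
    # To revert for a production takeover, just comment the next line out.
    -B 9999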

I was also "heartened" by your comments about "managed" databases. I thought I was alone in finding them a pain in the butt! Looks like I'm not as lonely as I thought I was. :)

RealHeavyDude

Well-Known Member
Why in the world would somebody have the Admin Server manage a database on a sane operating system with a bash shell?

TomBascom

Curmudgeon
Why are you using the admin server to mismanage databases on a Linux server? It is sort of necessary for Windows, although I would still NEVER use any of the mouse-enabled aspects of it. I don't like it, but I will grudgingly accept that the admin server, dbman, and the properties files are the only reasonable way to manage a Windows database. But on Linux it's just poking yourself in the eye with a sharp stick.
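
For what it's worth, the dbman side of it is at least scriptable - something like this, where "mydb" is whatever name the database is registered under in conmgr.properties:

    dbman -database mydb -query     # report the status of the managed database
    dbman -database mydb -start     # ask the admin server to start it
    dbman -database mydb -stop      # ask the admin server to stop it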

Rob Fitzpatrick

ProgressTalk.com Sponsor
For my clients on Windows, we use OEE. As Tom says, you need a way of running a DB as a service so it won't go away when the user who started it logs off. Better to do that with Progress' supported method than with some third-party solution that knows nothing about Progress. On non-Windows platforms, we never use Admin Server/OEE for database management.

I can't see where Ron actually said this was Linux...

My problem was this ... I have a DR server that is configured to be a mirror-image of a prod server. Although we're licensed to run programs on the DR server, this one has not until now been used to do that. But now I have to run some very heavy jobs on that server and they're taking an uncomfortably long time to run. So I wanted to assign all available memory to DB buffers. But I wanted to do it in a way that made it super-simple to revert to the prior configuration in the (unlikely) event that the DR server has to take over as production.
This confuses me. You have some amount of load on your prod server, and you have such heavy load on DR (reporting, I assume) that you want more -B buffers than on prod. But you want to be able to revert to the lower prod -B in the event of a disaster. Why?

In a disaster, your only available system is DR, so presumably it runs its existing reporting workload plus whatever currently happens in prod. In that case I'd want -B to be as large as it needs to be to do that extra work, without causing paging/swapping. That also makes me wonder: if the prod and DR servers have the same amount of memory, why would you want prod -B to be smaller? What's the downside of having the same large -B value on both servers if the available resources allow it?
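
As a sanity check on sizing, the back-of-the-envelope arithmetic is just -B times the database block size (8 KB assumed here purely for illustration), plus some overhead:

    buffer pool memory  ~  -B x block size
    e.g.  -B 1048576  x  8 KB  =  8 GB of shared memory (approximately)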

TomBascom

Curmudgeon
I must be getting grouchy -- you're right, Ron never said "Linux".

Just guessing (again), but it sounds like the situation on the DR server might be that when there is no DR going on there is a lot of unused memory that could be useful for reporting purposes, etc. In the event of a failover, though, that memory would be wanted for other things - self-service user connections and their client-side parameters, maybe. It is also not uncommon for DR scenarios to accept that some parts of the workload might be turned off or degraded.

ron

Member
OK, I get it, the comments were poking fun at "managed" DBs.

Tom is right - when the DR server is "just" a DR server it has a large amount of unused memory. For the next few months I have some "heavy" and time-sensitive jobs to run, and I just wanted to temporarily use all that memory to help get them done. But I wanted to do it in a way that the configuration could be quickly reverted to what's correct for a production situation.

I have wondered why anyone would set up "managed" databases. They seem to me like cracking peanuts with a steam-roller -- an awful lot of effort for no obvious gain, resulting in a situation that was hard to manage! But then I thought that maybe I'd missed the point and perhaps I was just a Luddite. (But it seems there is no shortage of Luddites!)

I have two different OE systems - only the finance one is "managed". It was set-up by others some years ago and I've inherited it. The perpetrators are perpetrating elsewhere, now.

The server is Solaris.