Question: Too much RAM for buffers -B (is there such a thing?)

Chris Hughes

Hi All

A simple question which no doubt has complicated answers.....

Can you assign too much RAM to a database?

Scenario
Server is Win 2008 64-bit, 256GB RAM - not virtualised. Current spare/available RAM at peak: 100GB.
Progress 10.2B7 64-bit, running 2 databases. Sizes: DB1 is 20GB, DB2 is 150GB.
Current -B: DB1 = 10GB, DB2 = 75GB.

Now picture promon etc. telling us we're getting reads from disk - not lots, but you want to stop it if you can ;)

So is there a reason not to double the -B on both databases? It would cost me 85GB, but the server would still have 15GB spare even at peak.

Thanks in advance

Chris.
 

Rob Fitzpatrick

Can you assign too much RAM to a database?
Yes. Depending on the platform there are OS limits to be aware of, as well as hard limits within OpenEdge. But there are also practical limits, i.e. how much memory should you allocate, regardless of how much you can? Your DB server may do other things aside from running databases (e.g. running batch reports, serving as a file server for application code, etc.). If your RAM is tied up in buffer pools then it can't be used by the OS as file cache.

The answer will be dependent not only on database size but also on application needs. If, for example, you have a large amount of data in transaction history tables that you will read only once to run a month-end report and never read again, then sizing -B to the size of the database is probably a waste of memory. If you have way more memory than you need then fine, but most people aren't in that situation.

On the other hand, you might have a large database where you read all of the data repeatedly. In that case, more cache will help. If you are in that situation, you may want to look at more than your database. For all the logical I/O that is happening, should it happen? Do you have bracketing issues or bad logic in your code that causes you to read more than necessary? That's a question for the developers (and they should know how to answer it!).

Scenario:
Server is Win 2008 64-bit, 256GB RAM - not virtualised. Current spare/available RAM at peak: 100GB.
Progress 10.2B7 64-bit, running 2 databases. Sizes: DB1 is 20GB, DB2 is 150GB.
Current -B: DB1 = 10GB, DB2 = 75GB.

Now picture promon etc. telling us we're getting reads from disk - not lots, but you want to stop it if you can ;)
Yes, you want to eliminate excess disk I/O. But some amount of disk I/O is unavoidable, as that's how existing data gets into the buffer pool in the first place. Having a large value for -B doesn't mean you should see no disk reads.
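A rough way to quantify it (a rule of thumb, not an exact science): from promon's buffer cache activity figures, the hit percentage is roughly 100 * (logical reads - OS reads) / logical reads. As a purely illustrative example, 1,000,000 logical reads with 5,000 OS reads works out to 100 * (1,000,000 - 5,000) / 1,000,000 = 99.5%. What counts as "good" depends on your workload, but you should not expect it to reach 100%.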

So is there a reason not to double the -B on both databases? It would cost me 85GB, but the server would still have 15GB spare even at peak.

My question would be: after the databases have been up for "a while" (i.e. the application(s) have cycled through the code they typically run) do you still have empty buffers? If you do, increasing -B probably won't help you. It would just give you more of them. On the other hand, if you have no empty buffers then you may benefit from increasing -B.

However: used and empty buffer counts will be impacted by more than just your application code. If you run database utilities that read many or all of the blocks in the database, then the numbers in promon will no longer reflect just the effects of your application. In other words, if you are running things like proutil *analys or probkup online without using private read-only buffers (-Bp), then your used buffers will be the lesser of the size of -B or the DB size. I run these utilities with -Bp 10 so they don't "pollute" the buffer pool and LRU chain with infrequently-read data. If you do this then your promon numbers will be more of a reflection of your application behaviour.
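For example (a sketch only - check the exact syntax against your release, and note that "mydb" and the paths here are just placeholders):

proutil mydb -C dbanalys -Bp 10 > mydb.dbanalys.txt
probkup online mydb /backups/mydb.bck -Bp 10

With -Bp 10 the utility does its reads through 10 private read-only buffers instead of cycling every block it touches through the shared buffer pool and LRU chain.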

So, in answer to your question: first make sure your promon numbers reflect your application's needs, then increase -B if the numbers warrant it.
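One note on units, since the sizes above are quoted in GB: -B is specified as a number of database buffers (blocks), not bytes. Assuming an 8 KB database block size (adjust the arithmetic for your actual block size), the figures above translate roughly to:

10 GB = -B 1310720 (DB1 today), 20 GB = -B 2621440 (DB1 doubled)
75 GB = -B 9830400 (DB2 today), 150 GB = -B 19660800 (DB2 doubled)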
 

Chris Hughes

Thanks Rob, ever the informative answer.

I guess the key thing I wanted to make sure of is that too much RAM will not affect the database adversely. As I am in the enviable position of having RAM sat there available, I am going to use it; I'd rather wind it back if it's not used.

I use the private buffers on backups, but never thought about or used them on dbanalys etc. - that's me learning something new today!

Cheers
 

TomBascom

Curmudgeon
There is no direct negative consequence solely from the db POV.

But there may be indirect problems, as Rob points out. In addition, if you get too carried away you might cause the OS to page excessively (or, worse, start swapping), or you might trigger some unpleasant architectural issues. For instance, in a NUMA architecture not all memory can be reached at the same cost -- if you allocate memory that is "far away" it will be slower to access. Sometimes *much* slower.
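If you do try the larger -B, keep an eye on paging while the system is under load. On Windows, for example, the Memory\Pages/sec counter in Performance Monitor will show whether the OS has started paging heavily; a rough sketch via PowerShell (sampling every 5 seconds for a minute):

Get-Counter -Counter '\Memory\Pages/sec' -SampleInterval 5 -MaxSamples 12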
 