Windows Memory

GregTomkins

Active Member
I posted here a while back about memory use of the P4GL interpreter, and since then I've done some fiddling with pslist, a Microsoft Sysinternals utility that reports memory use per process. It is hard to say what is really going on here, but my basic observation is this:

1. Start a 'bare' procedure editor: 52 MB
2. Connect a couple of large databases: 71 MB
3. Start our app: 86 MB

So far, reasonable enough. What is really strange (to my perhaps hopelessly OS-knowledge-deprived brain), though, is that once you start using the app, the memory use never increases. Not even after you open 20+ windows, each containing 3-10 browses loaded with data, lots of GUI widgets, dozens of temp-tables, and so forth.

(The numbers above are what pslist reports as 'Virtual Memory'. The 'Working Set' values do increase somewhat, but erratically, and in a way that makes no sense to me.)

Just wondering if anyone had any comments about this. I am curious.
 
Most session memory is allocated at startup.

The -mmax buffer will grow as needed, as will -l. But if you don't do anything that requires them to expand, they will not. Opening windows, by itself, isn't going to do that. Running new r-code or loading lots of data into a work-file might, but only in smallish increments, and only if the existing space isn't enough.
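For reference, these limits are set on the client startup line. This is a hypothetical example (start.p and the sports database are placeholders, and the values are illustrative, not recommendations); both parameters are specified in KB:

Code:
prowin32.exe -db sports -p start.p -mmax 8192 -l 2000

-mmax governs the r-code execution buffer and -l the local buffer used for things like work-tables; because they are soft limits, the client will allocate beyond them when it must.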

So, basically, what you are seeing is "expected behavior".
 
It seems that when -l or -mmax increase, it doesn't actually increase memory from the perspective of Windows. It seems like those increases come out of the pool of memory Progress has already allocated. In fact, it seems like one could recklessly not worry about deleting persistent procedures or creating widget pools and so forth, and as far as Windows is concerned, it wouldn't matter. Not that I am advocating that, of course!
 
Failing to clean up garbage *will* get you in trouble. Don't even joke in that direction.

The AVM mostly allocates space on startup and swaps stuff in and out as needed. When you are moving from one .p to another, the amount of memory needed for each is often not very much, because ABL p-code is very compact, so you can do a lot in not very much room.

But swapping is never as efficient as having everything in memory, so choosing your parameters wisely to avoid unnecessary paging of temp-tables to disk and the like can make a big difference in your performance. Likewise, filling up the available memory with things that appear to still be in use, but aren't, will also impact performance ... and not for the better.
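To make the cleanup point concrete, here is a minimal sketch of the kind of housekeeping being discussed. The procedure name is a placeholder, and the buffer assumes a customer table exists; the point is simply that anything you RUN PERSISTENT or CREATE dynamically should be deleted when you are done with it, or it sits in session memory looking "in use" forever:

Code:
define variable hProc   as handle no-undo.
define variable hBuffer as handle no-undo.

/* a persistent procedure lives until explicitly deleted */
run some-proc.p persistent set hProc.  /* placeholder name */

/* ... use hProc ... */

if valid-handle( hProc ) then
    delete procedure hProc.

/* dynamic objects follow the same rule: delete what you create */
create buffer hBuffer for table "customer".  /* assumes a customer table */

/* ... use hBuffer ... */

if valid-handle( hBuffer ) then
    delete object hBuffer.

Scoped widget pools are the other common tool here: widgets created in a pool are cleaned up when the pool is deleted, which is why not bothering to create them is a classic source of leaks.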
 
-mmax and -l are "soft limits". New memory is allocated from the OS when needed. Simply opening windows and running a few programs may not result in a need for any new memory.

To test it you can waste lots of -l memory like this:

Code:
define variable i as integer no-undo.

define workfile waste-memory field dummy as character.

/* each record holds ~1 KB, so 100,000 of them consume roughly 100 MB of -l space */
do i = 1 to 100000:
  create waste-memory.
  waste-memory.dummy = fill( "x", 1000 ).
end.

Wasting -mmax memory is harder and I don't have a handy test case.
 
Wasting -mmax memory is harder and I don't have a handy test case.

I would have thought you would have encountered plenty of real world examples in your consulting! Hard to encapsulate and pass along perhaps.
 