StandBy memory

bvanmeer

New Member
Hi all,

We are running OE DB & AppServer 10.2B SP4 on Windows Server 2008 R2 64-bit with 16 GB of memory.

I assigned approx. 5 GB to the database via -B.

The strange thing we notice (with the system monitor and RamMap) is that the server has approx. 8 GB of memory in use, but the other 8 GB fills up as "Standby".

In RamMap we also see that the database .d, .lic and .lg files are loaded in Standby. Is this normal behaviour?

Sometimes a lot of output files (temporary XML files dropped by processes running on the AppServer) also fill up Standby, until there is no more "Free" memory.

When this happens we see a lot of I/O on the OS swap file, and the _mprosrv processes are constantly using it.

Does anyone have some ideas on how (and with whom) I can tackle this problem?

Greetings,

Benny
 
Standby memory is not a problem. It is similar in use to UNIX file system buffer cache -- the OS will put stuff in it when there is no other demand for it. As soon as something else wants that memory it can have it. If the OS did not do that it would just go to waste. (And if you are going to waste it please open the case, remove the chips and mail them to me...)
 
Hi Tom,

Thanks for the answer. When I'm monitoring the system, I still see a lot of I/O on the swap file by the _mprosrv processes. This seems bizarre, since there is still 8 GB of memory in Standby/Free.
As I mentioned, I'm on a 64-bit system. For two databases I raised -B to 500000; can I still raise it further?

Greetings,

Benny
 
It's not clear to me that you have a problem. First you need to understand what you're seeing in RamMap, and what this terminology means in that context. The Windows memory manager maintains a number of different page lists for different purposes.

"Active" memory is pages that are in the working set of a process, i.e. pages for which there would not be a hard or soft page fault if accessed by a process. If a page is removed from the working set of a process, it is not necessarily evicted from RAM immediately; that would be wasteful as you might need that page again. For example, if you boot your PC, run Notepad, then close it, the pages containing notepad.exe are still memory-resident. Then if you run Notepad again, notepad.exe doesn't have to be read from disk.

What happens to a page after it is removed from a working set depends on what it contains. If it contains unmodified data (i.e. its contents are the same as when it was read from disk) it is "moved" to the Standby page list. (The page doesn't actually "move"; a pointer is changed to indicate its list.) If it contains modified data, it is moved to the Modified page list. If a process accesses a page of virtual memory and the page is not in its working set, it incurs a page fault. If that page is in the Standby list (i.e. it is still in RAM), then it is put back in the process' working set. This is called a soft page fault. From this perspective, the Standby page list is just file cache. It is optimizing I/O. Soft page faults can also be serviced directly from a page in the Active list (a "global valid" fault), for example if a page accessed by process A is already in the working set of process B.

If the accessed page is not in the Standby list then it is paged into memory from disk. The page it is written to comes from the Free page list. This is a hard page fault.

After a page spends some period of time on the Modified page list, a kernel thread called the modified page writer flushes its contents to disk, at which point it is no longer "modified" and it is moved to the Standby page list, to act as file cache. You can think of this as being somewhat analogous to the action of a Progress APW, for the database cache; writing modified blocks to disk.

If a process has pages of private data, when the process exits its pages do not go to the Modified list or the Standby list. They are sent to the Free page list. This makes sense when you think about it. If you are logged in to a Terminal Server and editing "Resignation Letter.doc", you don't want other users' processes to have access to those memory pages after the fact.

Pages also come to the Free list from the Standby list, if its size is trimmed by the memory manager (or maybe also if the page is sufficiently least-recently-used; I'm not sure about the algorithm). Pages can leave the Free list in a couple of ways. They could be used for kernel allocations, or for page reads from disk; in either case their current contents are overwritten. Or they could be zeroed out by a kernel thread called the zero page thread and moved to the Zero page list. Once there, they can be used if needed for memory allocations by processes, in which case they are moved into a working set and are again on the Active list.
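The page-list transitions described above can be sketched as a toy model. Everything here (the `Page` class, the list names, the fault labels) is illustrative Python, not any Windows API; it just mirrors the Active → Standby/Modified → Standby flow:

```python
# Toy model of the Windows page-list transitions described above.
# All names here are illustrative, not real Windows structures.

class Page:
    def __init__(self, name):
        self.name = name
        self.modified = False

active, standby, modified, free = [], [], [], []

def trim_from_working_set(page):
    """A page leaving a working set goes to Modified or Standby."""
    active.remove(page)
    (modified if page.modified else standby).append(page)

def access(page):
    """Simulate a process touching a page not in its working set."""
    if page in active:
        return "no fault"
    if page in standby:              # still in RAM: soft page fault
        standby.remove(page)
        active.append(page)
        return "soft fault"
    if free:                          # not resident: hard page fault,
        free.pop()                    # a Free page receives the disk read
    active.append(page)
    return "hard fault"

def modified_page_writer():
    """Flush Modified pages to disk; they then act as file cache."""
    while modified:
        p = modified.pop()
        p.modified = False
        standby.append(p)

# Demo: a clean page is trimmed, then re-accessed cheaply.
p = Page("notepad.exe:page0")
active.append(p)
trim_from_working_set(p)
print(access(p))   # soft fault: the page was still on the Standby list
```

The `modified_page_writer` function plays the role of the kernel's modified page writer thread, which is the analogue of the Progress APW mentioned above.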

So what does all this mean?

Well, for one, "free" memory isn't a performance optimization. It doesn't actively help you. The OS needs some amount of it, to service new allocations, but not great amounts. If it needs great amounts, it can also evict pages from the Standby list, as they are unmodified. In the meantime, those pages in the Standby list *are* helping you, because they are file cache. They allow you to access pages from memory that would otherwise have incurred disk I/O.

In other words, in any operating system, I don't think you should gauge its health by how much free memory it has. Having a small amount isn't a danger sign. As I write this I have about 8 KB of free memory, and about 40 MB of zeroed memory. My system is fine.

Also, in memory management, don't think in terms of "files". We often speak informally of running a process and so "its executable is loaded into memory". Or you may open a text editor and "load the file into memory". That may happen, depending on various factors, but it doesn't have to. If you proserve a database, it isn't necessarily the case that all 2.5 MB of _mprosrv.exe is paged in. The memory manager may only page in the pages of the binary that contain the code that is currently being referenced (although with OS features like pre-fetch and SuperFetch this gets harder to predict...).
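One way to see this demand-paging behaviour from user code is a memory-mapped file: the mapping makes the whole file addressable, but the OS only faults in the regions you actually touch. A minimal stdlib-only sketch (the file name and size are arbitrary for the demo):

```python
# Demand paging with a memory-mapped file: mapping a file does not read
# it all into RAM up front; pages are faulted in on access.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "big.bin")
with open(path, "wb") as f:
    f.write(b"\0" * (1024 * 1024))   # a 1 MiB file of zeros

with open(path, "rb") as f, \
        mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as m:
    # Touching these bytes faults in only the page(s) around each offset;
    # the rest of the file need never become memory-resident.
    first = m[0]
    last = m[len(m) - 1]

print(first, last)   # 0 0
```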

So when you say that dbname.lic and dbname.lg are "loaded in standby", it isn't necessarily the case that the entire file is memory-resident. Some portion of them is; you will see the details in the File Summary and File Details tabs in RamMap. The fact that they have pages in the Standby page list means that at one time they were in a process working set. The broker writes to the license file at the top of the hour, and it and other shared-memory database processes frequently write to the log file, so this certainly seems normal to me. When you say ".d", are you referring to data extents, i.e. .d1, .d2, etc.? If so, the same would apply to them.

You mentioned the "swap" file; in Windows it's called the paging file. It contains process private modified data. Unmodified data is paged in from disk, but it is not paged out to the paging file. It doesn't need to be, as its contents are already on the disk in the file from which it was originally paged in.

When you look at the paging file, it isn't enough to know that there is "I/O". It could matter a lot whether the I/O is read or write, which processes are doing it, and even which parts of the file are being written or read. I don't know the internals of _mprosrv, or whether the one you mention is a 4GL server or a broker. Maybe it writes data about a client when they log in, then doesn't access it again until they log out hours or days later. If that private modified data written by _mprosrv was not accessed for some period of time, perhaps those pages would be paged out by the memory manager and then fetched again later due to some change in application state, causing the I/O you see. Knowing what little I do I can only speculate.

Tools like RamMap and perfmon and procmon show you a lot of stuff. It can look like horrible chaos. I remember seeing Wireshark (then Ethereal) for the first time and thinking my network must be horrible because of all the strange things I was seeing for the first time. I'm not implying you're new at this, I'm saying that info shown by tools or synthetic benchmarks isn't necessarily a problem. If your users say "My data entry screen is frozen"; "I can't ship orders"; "This 10-minute report now runs for 3 hours"; those are problems. What you see is interesting, but not necessarily a problem.

On a database server you care primarily about how the databases are doing, so if I were you I'd put aside RamMap for now and look at promon. Is that 5 GB of buffer pool enough for your database? If your DB is larger than 5 GB and it's the only thing on this server, maybe it could be larger. Also, you could try -pinshm to ensure that the OS doesn't page any of the buffer pool to the paging file and hurt your performance.

There are lots of things you can tweak and tune, if your circumstance calls for it. But I think application performance and database performance should determine where you go from here.
 
Wow, nice post Rob

@Benny, yes you can increase -B. Remember, to cut IO in half you must increase -B by approx 4x (the impact of increasing -B follows an inverse square rule...)
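A quick back-of-the-envelope for that rule of thumb, assuming buffer-pool miss rate falls roughly with the inverse square root of -B (this is an approximation only; real hit rates depend heavily on the workload and data access pattern):

```python
# Rough arithmetic behind the "4x -B to halve I/O" rule of thumb:
# if miss rate ~ 1/sqrt(-B), quadrupling -B halves the misses.
import math

def relative_miss_rate(b_old, b_new):
    """Miss rate at the new -B relative to the old, under this model."""
    return math.sqrt(b_old / b_new)

print(relative_miss_rate(500_000, 2_000_000))   # 0.5 -> half the misses
```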
 
Yes, indeed... thanks Rob! It's really nice to see explanations of how all this stuff works internally (or in Windows' case, infernally?)
 