Yes, I am mixing metaphors. I loosely refer to -spin as a spin lock timeout. My bad. Here is what I understand.
A missed latch attempt will hammer away at the resource, retrying up to the value of -spin. If it still fails to acquire the resource after that many tries, "latch timeout" is incremented by 1. The latch...
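Roughly how I picture that loop, as an illustrative Python sketch only (not actual Progress internals; the function, parameter names, and nap behaviour are all assumptions on my part):

    import threading
    import time

    def acquire_latch(latch: threading.Lock, spin_limit: int, nap_ms: float, stats: dict) -> None:
        # Spin: up to spin_limit (-spin) non-blocking attempts in a tight loop.
        while True:
            for _ in range(spin_limit):
                if latch.acquire(blocking=False):
                    return                      # got the latch
            # All -spin attempts failed: count one "latch timeout", back off (nap), spin again.
            stats["latch_timeouts"] = stats.get("latch_timeouts", 0) + 1
            time.sleep(nap_ms / 1000.0)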
Yes, we are in the planning stages of adding more disks to this system, and have already talked to the vendor about splitting their files in an appropriate fashion.
How can you tell, by looking at the structure file, which is the BI file? I had an indication early on that the BI was on C: with the AI file. If it's not there now, I can only assume I misread something or it has been moved.
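For what it's worth, my understanding is that in the .st structure file each extent line starts with a type code, so the BI extents are simply the lines that begin with "b" (the paths and sizes below are made up for illustration):

    b /u2/bi/mydb.b1 f 1048576
    b /u2/bi/mydb.b2
    a /u3/ai/mydb.a1 f 524288
    d "Schema Area":6,32;1 /u1/db/mydb.d1 f 1024000

If that's right, then if no "b" line points at a path on C:, the BI is not on C: any more.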
Primarily, what we are seeing on the client side is very many -l (little L) messages, which I believe are grabs for more temp space/file at the client. This in and of itself is not the real issue, I don't believe. So what we see is this: ClientOne will begin to grab more and more temp in...
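For context, the clients are started with something along these lines, where -l is the client's local buffer size startup parameter (the database name, host, service, and -l value below are made up for illustration):

    mpro mydb -H dbhost -S 20000 -l 2000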
2.3 million latch timeouts in a 10 day period
A 750M buffer cache whose blocks never reach the -lruskips value, at a time when -spin was set to 800,000. Correct me if I'm wrong, but that would indicate that the buffer is flushing faster than blocks can be moved to the MRU with the given...
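Just to put the latch-timeout figure in perspective: 2.3 million timeouts over 10 days works out to roughly 2,300,000 / (10 x 86,400 s) ≈ 2.7 timeouts per second, sustained around the clock.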
Hi
Thanks. Some of those I have seen, but I will revisit and read in more depth.
One thing I just have trouble understanding about -lruskips: in theory, any "new" block placed in the buffer should immediately move to the top of the MRU list and, as such, should not be on the list of blocks...
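Roughly how I picture -lruskips behaving, as an illustrative Python sketch only (made-up names, not the actual engine logic): a block is only re-linked to the MRU end on every Nth access, so most accesses skip the LRU latch entirely.

    from collections import OrderedDict

    def note_buffer_access(block_id: int,
                           access_counts: dict,
                           lru_chain: OrderedDict,
                           lruskips: int) -> None:
        # Count this access; only every lruskips-th access actually re-MRUs the block.
        access_counts[block_id] = access_counts.get(block_id, 0) + 1
        if access_counts[block_id] >= lruskips:
            access_counts[block_id] = 0
            lru_chain.move_to_end(block_id)   # the step that would need the LRU latch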
Not sure which latches are most important. By far OM are the most numerous, followed by LKT. Right now, MTX seems to be the only one that is napping, 4 to 8 times in any 10-second period.
Also note we have zero BFP, yet we have very, very many BUF (listed 4 times??).
LRU: very many. LRU is listed...
The connection types are a handful of "SELF/APSV" and a majority of "REMC/ABL".
CPU: 16 cores; generally we are at about 25-30%.
We have the process _ProMonsrv crashing frequently, which is one issue
Users also often complain of frozen screens/pauses at the client
Okay, I thought so. Cluster size determines the frequency of checkpoints, I believe. So when I am seeing heavy checkpointing, the solution is to increase the cluster size; correct?
I assume block size on the BI file works the same way as it does for the buffer cache, etc.?
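In case it helps frame the question, this is the sort of offline command I have in mind for changing both settings, with the BI cluster size (-bi, in KB) and BI block size (-biblocksize, in KB) set via a BI truncate while the database is shut down (the database name and sizes below are just example values):

    proutil mydb -C truncate bi -bi 16384 -biblocksize 16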
I have tried debghb before in the R&D menu, but it doesn't work for me; it just returns me to the menu. I wish it would work, because I don't know exactly where the latch waits/misses are coming from. Is it possible latch misses from the BI file are counted in this counter?
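For reference, this is the exact sequence I am trying (database name made up), in case I am doing it wrong:

    promon mydb
      R&D          <- chosen at the promon main menu
      debghb       <- typed at the R&D main menu prompt

My understanding, which may be wrong, is that debghb quietly enables the extended/debug displays and simply drops you back at the R&D menu, so perhaps it is taking effect and I just don't know where to look for the extra latch detail.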
For a 400-hour period it looks...