Slow reading data from the database the first time...

rcgomez11

Member
I'm using Progress 9.1E and I'm having trouble figuring out the best way to solve my problem. Every time I start my workstation and database, reading data from the database is very slow, but once the first read finishes and I load the data again, it is much faster. If anyone has an idea, please help me with this. Thanks in advance.
 
Sounds like the data is being cached in memory on the first read, hence it being faster the second time around. I suspect we will need more info to work it out, though. Does the database live on your machine or on a server? Is the query sufficiently indexed? Probably more questions to come from others who have more of an idea about this sort of thing ;)
 
Hi, good morning Cringer. I'm sorry if I double-posted this thread. I'm currently using my local machine as a test server, and I'm seeing this even though my queries are indexed. I'm dealing with only about 200,000 records.
 
Either way: the first time you read the records, they come from disk into the -B buffer pool. The second time you read them, given that -B is big enough, there is no disk read; they come straight from the -B buffer. MUCH faster. That's the point of the -B cache.
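For what it's worth, -B is a broker startup parameter. A minimal sketch, assuming a database named sports (note that -B counts database blocks, not bytes):

    proserve sports -B 30000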
 
So there's nothing I can do to make the first read load as fast as the second read? My -B parameter is set to 30000. Is that enough, or should I add some more digits? Thanks, tamhas, for the reply, it is really appreciated...
 
If it is a test machine and you are not afraid of data loss, then you can copy the database to a RAM disk (if you have enough RAM). If you cannot afford data loss or don't have enough RAM, then you can improve the performance of the disk subsystem: use RAID 10, or even multiple arrays with multiple controllers. Or just start the database once and never stop it (especially since, in your case, the problem is the first read, disk -> memory).
As for -B: it helps when data are already in the database buffers, so the bigger -B is the better, except when too big a -B causes the OS to swap. -B * block size should be enough to keep all frequently used records in memory; once it is bigger than the database itself, there is no additional benefit.
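As a rough worked example, assuming the common 4 KB database block size (1 KB and 8 KB blocks also exist; prostrct statistics reports yours): -B 30000 means 30000 x 4096 bytes, about 117 MB of buffer cache. Whether 200,000 records fit depends on record size: at an average of 200 bytes per record they need on the order of 40 MB and fit comfortably; at 1 KB each they would not.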
 
There are really two separate questions here.
1. Why are the second and following passes faster than the first?
2. Is the first pass as fast as it can be?

Caching in -B is the answer to the first one. 30000 is not a large -B for a production system, but it might be reasonable for a development box. Use promon during the second pass to see what percentage of reads you are satisfying from the cache; that will tell you whether making it bigger will make any difference. Or just make it bigger and see what happens.
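A minimal sketch of the promon check, assuming a database named sports (menu numbers can vary between versions):

    promon sports
      -> 5 (Activity)

The Activity screen reports buffer hits as a percentage of reads; the closer that number sits to 100%, the less a bigger -B will buy you.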

As for the second, you haven't really given us any information except for the contrast with the second pass, which doesn't mean much since there is a good chance you aren't reading the disk much by then. If you are sure that you are bracketing the index properly (check with COMPILE LISTING), then you are reading only what you need, and that is the speed you can hope for. 200,000 isn't 20,000,000, but it isn't 200 either.
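A minimal sketch of that check, using a hypothetical program name loadcust.p. The XREF output (a close cousin of LISTING) is the easiest place to see index selection: its SEARCH lines name the index chosen, and WHOLE-INDEX flags a scan of the entire table.

    COMPILE loadcust.p XREF "loadcust.xrf".

Then search loadcust.xrf for the SEARCH entries on your table.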
 
1) Make sure that you have plenty of RAM.

2) Make -B big enough to hold your entire db as well as any anticipated growth.

3) Run "proutil dbname -C dbanalys" immediately after db startup.
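The point of 3) is that dbanalys touches every block in the database, so with a -B large enough (per 2) it pre-warms the buffer pool before users connect. A minimal sketch, assuming a database named sports and an illustrative -B value:

    proserve sports -B 50000
    proutil sports -C dbanalys > sports.dba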

Or you could spend some time analyzing your queries and making sure that they are well written and efficient. Explaining how to do that might take a while, though; it would probably be better to start with a training class.
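To give one small illustration of what "efficient" means here, a minimal sketch against the sports demo database: an equality or range match on the leading field of an index (cust-num below) lets the 4GL engine bracket that index and read only the matching rows.

    FOR EACH customer NO-LOCK
        WHERE customer.cust-num >= 1000
          AND customer.cust-num <= 2000:
        /* only the bracketed rows are read */
        DISPLAY customer.cust-num customer.name.
    END.

A WHERE clause on an unindexed field, by contrast, forces a whole-index or table scan, which is exactly the first-pass disk cost discussed above.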
 