Temp Table Performance

klodoma

New Member
I noticed that temp tables are heavily used... but when they are filled with lots of data they get really slow.

Now I have a few performance questions:
1. Are they created only in memory?
2. Are they running on the server or on the local machine (where the software is running)? I think it's the former, because I noticed no increase in memory usage when filling the temp table with a lot of data.
3. How can temp table performance be increased, for both reads and writes?

... are there alternatives to temp tables?
 
Progress stores temp tables in the same directory as its other temp files: by default the current working directory, or the directory specified by the -T parameter.

The -Bt startup parameter allows you to set the size of the buffer that Progress uses for temp-tables. However, you are not limited to what fits in -Bt: if the buffer overflows, the data is written to disk (DBI files).
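
For example, a client session might be started with something like this (the executable, procedure name, path, and buffer value are only illustrative assumptions, not recommendations):

    prowin32 -p start.p -T C:\temp -Bt 5000

Here -T points the temporary files (including the DBI file) at a fast local disk, and -Bt sets the number of buffer blocks reserved for temp-tables.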

You can define indexes on temp tables. The right indexes will speed up the retrieval process.

Define them as NO-UNDO.
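
A rough sketch of both points together (the table and field names are made up):

    DEFINE TEMP-TABLE ttOrder NO-UNDO
        FIELD orderNum  AS INTEGER
        FIELD custNum   AS INTEGER
        FIELD orderDate AS DATE
        INDEX idxCust IS PRIMARY custNum  /* index on the field you search by */
        INDEX idxOrder orderNum.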

You can increase -Bt, but it has a very limited effect because most operating systems cache the DBI files anyway.

But mostly it is indexes that should improve the performance of the queries.

-Parul.




 
Just to make the implications clear ... temp-tables begin in memory and slop over to disk when they become too big. So, if you can give them enough memory to remain entirely in memory, they are going to perform better. And, hopefully your -T is pointing somewhere reasonable. If you point it at a drive share across the network, there are a bunch of things that are likely to be slow.

If that doesn't clarify things for you, then perhaps you should post some metrics about what you consider slow and show us the code so that we can see whether you are using them efficiently. Obviously, if you are doing full table scans, they aren't going to perform as well as if you have indexes to go directly to the needed records.
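
As a quick illustration (made-up names, assuming a temp-table ttOrder with an index on custNum but none on orderDate): the first query below can use the index to go straight to the matching records, while the second has to scan the whole temp-table:

    /* brackets on the custNum index - only reads matching records */
    FOR EACH ttOrder WHERE ttOrder.custNum = 42:
        /* ... process the record ... */
    END.

    /* no index on orderDate in this sketch, so this is a full table scan */
    FOR EACH ttOrder WHERE ttOrder.orderDate > TODAY - 30:
        /* ... process the record ... */
    END.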
 
I have to fill it with about 60,000 rows.
The program is already written, but the performance is really, really bad, so I have to rewrite it.

But I think the main problem is not only the temp table... in the old program, for every row written to the temp table, about 10 FIND FIRST queries are executed. :eek: I never thought code like this could be written, but anyway... NO COMMENT. The simple math at the end: 60,000 x 10 = 600,000 queries, no wonder it's slow.
 
I.e., it isn't the temp-table, although setting your startup parameters to put as much of that temp-table in memory might help.

You might also ask yourself whether it is possible to refactor the problem. I.e., could you use multiple temp-tables and save yourself a lot of lookups?
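
A minimal sketch of that idea, assuming a customer lookup along the lines of the sports2000 demo schema (all names here are illustrative, not your actual code): fill an indexed lookup temp-table once, then do the per-row FIND against it in memory instead of hitting the database ten times per row.

    DEFINE VARIABLE cName AS CHARACTER NO-UNDO.

    DEFINE TEMP-TABLE ttCust NO-UNDO
        FIELD custNum  AS INTEGER
        FIELD custName AS CHARACTER
        INDEX idxCust IS UNIQUE PRIMARY custNum.

    /* one pass over the database table to fill the lookup temp-table */
    FOR EACH Customer NO-LOCK:
        CREATE ttCust.
        ASSIGN ttCust.custNum  = Customer.CustNum
               ttCust.custName = Customer.Name.
    END.

    /* later, inside the 60,000-row loop, the lookup hits memory, not the database */
    FIND ttCust WHERE ttCust.custNum = 42 NO-ERROR.
    IF AVAILABLE ttCust THEN
        cName = ttCust.custName.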
 