ODBC Performance

agriffin02

New Member
I have a C#.NET Windows application that I have developed that is accessing the Progress 9.1D database using the Merant ODBC driver and running SQL-92 statements. Performance seems pretty crummy. It appears that the performance on a particular query is faster after it runs the first time, but it is still too slow. Does it cache the query on the server? I have made sure that I am using all columns that belong to an index.

Are there other options that would be better for accessing the data?

If it helps anyone to understand what I am working with, I am working with Epicor's Vantage ERP system.
 
On the database server side of things there is a pool of memory (determined by the -B database startup parameter). When records are read for the first time, they are placed in this buffer pool. Subsequent reads then appear much faster because they are retrieved from the pool as opposed to being read from disk again. This is probably why the query is faster the 2nd time.

ODBC access shouldn't be much slower than 4GL (although my experience is with v10.1's ODBC broker and driver, so v9 may be slower). Do you see better performance if you run the query in a 4GL session? You need to eliminate ODBC as the problem. It could be related to the table and its indexes.

If the query executes much faster in 4GL than over ODBC, then is there something wrong with the ODBC server configuration? Do you have a dedicated ODBC broker running against the database? What startup parameters are you using?

If you have to scrap ODBC as your data access model, then you're limited by the version of Progress being used. If you were prepared to partially upgrade to v10, you could use a DLL or web services proxy approach, giving you the ability to call methods in your C# client which result in Progress code running on an AppServer. No ODBC in this approach, and you're back to running traditional ABL code. Also, you could argue that you're now service-oriented, so stick that on your CV.
 
The ODBC drivers and SQL-92 engine underwent a major upgrade late in the 9.1 time frame, and even more significant improvements came in OE10. You are almost certainly using the old drivers. That's part of your problem.

Obviously 9.1D is ancient, obsolete, unsupported and should be upgraded. 9.1D can be easily upgraded to 9.1E service pack 4 which has as many bug fixes and performance improvements as you can get without upgrading to OE10.

As Andrew explained, yes, the queries are cached.

When using SQL it is also important to regularly UPDATE STATISTICS. The SQL engine uses a cost-based optimizer. It cannot optimize your query if you never update the statistics.
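
For example (just a sketch using the basic table-level form, run from a SQL-92 client; PUB.table_name is a placeholder for one of your own tables):

UPDATE STATISTICS FOR PUB.table_name;
COMMIT;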
 
I have been searching the forums and UPDATE STATISTICS seems to be a common theme. I have never done this, nor has anyone else in this company; we were not aware of the command, probably because we just started using ODBC against Progress about a year ago. I need more info please...

1. Are there any performance concerns for Progress while running it? Basically, should I do this during off hours? Unfortunately my test database is on my production server, so I just want to be sure.

2. Is there anything else I can do to increase performance? I am using a four-part name with a linked server in SQL Server 2005 for the following query, which took over 5 minutes to return nothing:

SELECT *
FROM DBI..PUB.prh_hist
WHERE prh_part = 'repair misc.' and prh_rcp_date = GETDATE()

I then gave a specific date that would produce 12 rows, and it took over 4 minutes. The two fields I am using are part of an index; do I need to word the query differently to reflect that?

Any other advice would be appreciated.

 
Should you run UPDATE STATISTICS off hours?

Yes. UPDATE STATISTICS will scan all your data, which will generate a lot of IO. You would usually want that to occur during some off-hours period.

OTOH, since your queries are currently untuned they are probably doing quite a lot of nasty IO already, so it might not actually make much of a difference at the moment.
 
Furthermore, if you run UPDATE STATISTICS, also make sure the -L parameter is high enough, execute one statement at a time (table, index and column), and don't forget to commit at the end.
Values of -L around 100000 are not uncommon (well, that is for running UPDATE STATISTICS).
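
Something along these lines is what I mean (a sketch only; check the exact statement forms against the SQL-92 documentation for your Progress release, and substitute your own tables for PUB.prh_hist):

UPDATE TABLE STATISTICS FOR PUB.prh_hist;
UPDATE INDEX STATISTICS FOR PUB.prh_hist;
UPDATE ALL COLUMN STATISTICS FOR PUB.prh_hist;
COMMIT;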
If the query is still slow after this, check the query plan. Look in the KB for _sql_qplan.

Casper
 
I have found the following:
UPDATE STATISTICS [FOR table_name]

Earlier Casper said to "execute one statement at a time (table, index and column)", but the syntax I show is only by table. Does anyone have the syntax that includes index and column?

I am currently running with a -L of 24576. The server is almost 5 years old, and I am running at about 50% of memory and around 80 - 95% of CPU during production hours. What is a safe value to raise the -L to? I am afraid 100000 may cripple the server, since this is not the only DB on it. No comments are necessary about the server; I have been begging to upgrade it for 2 1/2 years.

I read that you need to use the Progress Explorer tool to run the command; is that true? Is it possible to schedule a job to run it through a linked server in SQL Server 2005? Or is there a way to schedule it to run through Progress? I am using RH Linux.

I really appreciate the help!
 
You need to use "SQL Explorer" (or any similar 3rd part ad-hoc SQL tool), which is different from "Progress Explorer" (Progress' Administrative GUI, fondly referred to as "Exploder" by Progress wonks).
 
Thanks, that wasn't the news I wanted; I didn't need any more manual processes.

I am going to try it this weekend; hopefully all goes well. I am still not sure if I should change the lock table size; hopefully someone has some advice. I apologize, but I do not have a lot of courage when my test box and production box are the same box. Maybe the wizard can help me find my courage. :)
 
Lock table entries use a few bytes of memory each -- it varies based on Progress release and I don't have my notes handy, but between 20 and 100 bytes per entry should be a reasonable estimate. So bumping -L up to, say, 100,000 isn't a huge problem (even at 100 bytes per entry, that's only about 10 MB). Making it 1,000,000 is probably ok too. 1,000,000,000 will, OTOH, probably blow up your system ;)

People avoid really large -L not so much because it is a resource problem in itself but because it facilitates really crappy coding.
 
That is great information; I will take the chance and raise the -L. Anyway, I am sure I have nothing to worry about with my programming; it has to be perfect. ;)

I will also look into scripting with the SQL Explorer Tool. I have not used it a lot since I only installed it to test the ODBC connection, so I am sure there is more I can learn to do with it.

Thanks!
 