Application Performance Tuneup

enoon

Member
Hi,

Suppose we take an existing application that may have performance issues in some areas, or that just needs to be fine-tuned. What would be the process for finding the problems and perhaps suggesting some hints to the developer for modification? The application resides on a Unix machine and is only the BL; the UIs are not Progress-based. So the main option would be to find the problematic queries, XREF all the sources, and search for WHOLE-INDEX entries. But beyond that, what can be suggested to the developer as a change? Is there any other suggestion for where to start this job?

Thank you.
 

tamhas

ProgressTalk.com Sponsor
One place to start would be http://www.wss.com :)

There really is no generic answer ... one has to start with the problem. If you have an identified problem, then we can start to have opinions about how to solve it.
 

enoon

Member
The problem would be simple: based on the query, how can you tell whether it can be optimized? Based on the XREF, how can you tell that a query is optimal, or that it is probably taking too long to execute, and then suggest a solution? And what kind of solution?
 

tamhas

ProgressTalk.com Sponsor
So, you have a specific query and you want to know if it is optimal or if it simply takes a long time by necessity?

XREF will tell you what indices it is using. Are there indices to support what you understand the query should be?
 

TomBascom

Curmudgeon
The problem would be simple: based on the query, how can you tell whether it can be optimized? Based on the XREF, how can you tell that a query is optimal, or that it is probably taking too long to execute, and then suggest a solution? And what kind of solution?

It is not as simple as it seems at first blush.

XREF will statically analyze the query, tell you which indexes are being used, and whether you have a bracket.

But XREF has no knowledge of the data distribution and thus cannot tell you if it is a good query. It can tell you that it might be a good query or that it is probably a bad query but ultimately you need to verify that by running it against real data and obtaining metrics about the actual IO operations that are performed.
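
To make the static side concrete: XREF SEARCH entries can be scanned mechanically. The sketch below fakes a two-line XREF file (the program, table, and index names are invented, and the exact column layout varies between Progress versions; real files come from COMPILE ... XREF) and greps it for the WHOLE-INDEX flag:

```shell
# Simulated XREF SEARCH entries; names and layout are illustrative only.
cat > /tmp/sample.xrf <<'EOF'
ordrpt.p ordrpt.p 12 SEARCH sports.customer cust-num
ordrpt.p ordrpt.p 27 SEARCH sports.order order-date WHOLE-INDEX
EOF

# A WHOLE-INDEX entry means the query has no usable bracket on that index:
grep "WHOLE-INDEX" /tmp/sample.xrf
# → ordrpt.p ordrpt.p 27 SEARCH sports.order order-date WHOLE-INDEX
```

The unflagged SEARCH line shows a bracketed read; only the flagged one deserves a closer look, and even then only against real data, for the reasons above.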
 

enoon

Member
It is not as simple as it seems at first blush.

XREF will statically analyze the query, tell you which indexes are being used, and whether you have a bracket.

But XREF has no knowledge of the data distribution and thus cannot tell you if it is a good query. It can tell you that it might be a good query or that it is probably a bad query but ultimately you need to verify that by running it against real data and obtaining metrics about the actual IO operations that are performed.

I know it's not simple; we've been discussing this problem in the office all week, without many results.
Yes, that is exactly what I want: to indicate to the developer, by static analysis, that a given query might be good or bad, and to point out such queries so that further dynamic analysis can be performed on them. Is there any static-analysis signal, other than WHOLE-INDEX, that tells whether a query is good or bad?
Would the -zqil parameter help me, and how? I see index and bracket levels, but how can they be used?
A simple XREF gives LEAF (1848)WHOLE-INDEX (1848)INDEX 8 0 0; how can the -zqil parameter be of use here? I know it's not documented, but maybe someone has worked with it.
 

TomBascom

Curmudgeon
-zqil gives you run-time data not static analysis.

This topic is complex and to really make useful suggestions you need a much more comprehensive approach and understanding of the db than a few "tricks".

I'd start by downloading ProTop and getting to know which tables are being used in ways that seem at odds with your understanding of the application's needs. For instance, if you see the "vacation" table getting 20,000 read operations per second all day long when it only contains 75 records while the "orders" table averages 300 reads per second you might think that there is a coding problem associated with the vacation module...
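
That sanity check can be mechanized with a little awk. In the sketch below the vacation figures are the ones from the example above; the orders record count (50,000) is an assumed figure added to complete the illustration. A reads-per-record rate far above its peers marks a table worth investigating:

```shell
# Columns: table, reads/sec, record count (orders count is assumed).
printf 'vacation 20000 75\norders 300 50000\n' |
awk '{ printf "%s: %.3f reads/sec per record\n", $1, $2 / $3 }'
# → vacation: 266.667 reads/sec per record
# → orders: 0.006 reads/sec per record
```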
 

enoon

Member
-zqil gives you run-time data not static analysis.

This topic is complex and to really make useful suggestions you need a much more comprehensive approach and understanding of the db than a few "tricks".

I'd start by downloading ProTop and getting to know which tables are being used in ways that seem at odds with your understanding of the application's needs. For instance, if you see the "vacation" table getting 20,000 read operations per second all day long when it only contains 75 records while the "orders" table averages 300 reads per second you might think that there is a coding problem associated with the vacation module...

Thank you for your reply. I will try to access production and collect some data if possible. But again, this exceeds the task's limits: the task is a static analysis of all the queries in the existing application, to detect possible bottlenecks and perhaps suggest to the developer what might cause a problem or what could be optimized. Any ideas on this subject are welcome. Can this be done statically at all, without a single program run or execution?
 

tamhas

ProgressTalk.com Sponsor
There are two very different levels of analysis here. Certainly, any developer while writing or reviewing code can look at the XREF, check index usage, explore whole table scans, consider alternate index usage, and apply any knowledge they might have about the data distribution, table size, etc. in order to create code which is likely to be OK. Much of the time, that is enough, but it is only following guidelines, not a guarantee of success.

When you identify a query which is an empirical problem, first you do these same things to see if it makes sense that it should be a problem, i.e., someplace along the line someone forgot to check that it was reasonable. If that doesn't produce any insights, that's when you have to start gathering data and doing testing to see what's up. Tools like ProTop can help gather information, e.g., to tell you whether the number of table reads is sensible for the problem being solved. Another technique is to take the query out of context, i.e., without any of the other processing logic, and see how fast it is on its own.

At each step there is no single deterministic next step because you need to react to what you find. E.g., when you do the first survey, if you find a whole table scan when it seems inappropriate, then it is back to the code, but if the index usage is reasonable, you might try the out-of-context test. If it is fast out of context, then you know that something besides the query itself is the problem, but if it is slow, then you need to look more closely at what it is doing, how many reads, etc.

It isn't a simple check list, but a hierarchy of diagnostic tests and examinations.
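
The out-of-context test can be as small as a throwaway .p that wraps the suspect query in an ETIME measurement. This is a sketch only; the table name, WHERE clause, and db path are placeholders, not code from this thread:

```shell
# Write a minimal ABL program that times just the query, nothing else.
# customer/state are placeholder names; substitute the real query.
cat > /tmp/query_test.p <<'EOF'
ETIME(TRUE).
FOR EACH customer NO-LOCK WHERE customer.state = "MA":
END.
DISPLAY ETIME LABEL "ms".
EOF

# Then run it in batch mode against the real data, e.g. (not executed here):
#   _progres /db/sports -b -p /tmp/query_test.p
```

If it is fast in isolation, something besides the query is the problem; if it is slow, start counting reads, as described above.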
 

TomBascom

Curmudgeon
Consider this -- if it were something simple and reliable that could be statically accomplished wouldn't the compiler automatically do it for you?

The truth is that performance analysis is hard. If you're having performance issues you really ought to engage an expert consultant (hint: see signature line). A good consultant will happily mentor you and get you positioned to support yourself but there is no magic wand (or tool) that will do all the work for you with no effort on your part.

It takes years to learn this stuff, and it will be very difficult to gain the knowledge that you need via postings to online forums. There are several people here who have been working on it for quite a while now, and they still have a long way to go.
 

enoon

Member
Thank you for the information. I think I will divide the process into two phases:
1. Static analysis with Prolint, to point out index-usage problems per source file and per query, and pass them to the second phase.
2. Dynamic, run-time analysis: the developer, with the problematic query and source in hand, analyzes it with Statistics and/or the Profiler and/or ProTop.
 
FWIW, on the specific point of monitoring dynamic (and static) queries: there is a util you can download from my home page that will report runtime queries, i.e., it reports index usage as the application runs.

However, it's a Windows front end for the zqil output; you will need to adapt it for a Unix environment if you do not have a development db on Windows. It also requires a local multi-user db connection.

It should in theory run on versions above 9.1d06, but it's unsupported, as I don't have time; documentation is included.

"RunTime Query Analyser" on the utilities page.
 

tamhas

ProgressTalk.com Sponsor
Sounds like it would be a good contribution to OE Hive, where it would be more visible.

BTW, I tried sending you a private message, but apparently you have used up your available space.
 
Sounds like it would be a good contribution to OE Hive, where it would be more visible.
Yes, it would be difficult for it to be less visible at the moment.

I'll upload it when I have some time. Initially I didn't want to release it on a wider basis as it's rather basic; it also requires a bit of experience to grok, and I didn't (and don't) have time to offer support.

BTW, I tried sending you a private message, but apparently you have used up your available space.
Fixed.
 

enoon

Member
Thank you for showing this tool. I should mention that I've tested it with Progress 10.1C. I don't know why, but with zqilv Progress seems to write many more indexes to the log file than it actually uses in a particular procedure. I mean that there is a WHOLE-INDEX (FE Customer) on customer, yet other indexes like EmpNo also show up in the db.lg file. Unfortunately, I don't see right now how this tool would help the analysis; inspecting the query only gives a vague idea of what bracket Progress should try to use. Would someone please enlighten me on how this tool can be useful?

Thank you.
 
Thank you for showing this tool. I should mention that I've tested it with Progress 10.1C. I don't know why, but with zqilv Progress seems to write many more indexes to the log file than it actually uses in a particular procedure. I mean that there is a WHOLE-INDEX (FE Customer) on customer, yet other indexes like EmpNo also show up in the db.lg file.
Yes, I noticed this halfway into developing the utility (assuming you are talking about the RQA I linked to), and had some hair-pulling moments finding workarounds, all of which are documented in the code. (I haven't used it above 9.1, so I can't say it works on 10.1C, though in theory it should.)

Since 9.1d06 (ish), zqilv puts out a load of undocumented rubbish, which makes the raw output next to useless without some filtering; the util does that filtering, in addition to translating the relevant raw data into a more readable form.

You shouldn't need to look at the log while you are using the util.

Unfortunately, I don't see right now how this tool would help the analysis; inspecting the query only gives a vague idea of what bracket Progress should try to use. Would someone please enlighten me on how this tool can be useful?
Its use is limited.

All RQA does is put a friendly face on the zqilv output and show you which indexes are being hit during a particular application run, nothing more. It doesn't advise you of a better query approach; as others have noted, this requires understanding of, amongst other things, the application and the data distribution.

It means you don't have to compile loads of code and scan through XREF logs. You just click Start and Stop when you get to the suspect functions in your app.

I have found it useful on a couple of occasions identifying slow queries, but don't use it often. But then I don't use XREF much either.
 

enoon

Member
Thank you.

As far as I can see (just to point this out for other readers), a good start for a static analysis of the queries of a given application would be: use Prolint as a base application, especially its "wholeindex" detection rule based on the XREFs, together with Proparse (which Prolint also uses), to create a repository of source files and possibly problematic queries. Afterwards, create a tool that reads this information and puts the source file and the problematic query on screen for the developer, after which a dynamic analysis can be performed with the purpose of tuning that particular query.
 
As far as I can see (just to point this out for other readers), a good start for a static analysis of the queries of a given application would be: use Prolint as a base application, especially its "wholeindex" detection rule based on the XREFs, together with Proparse (which Prolint also uses), to create a repository of source files and possibly problematic queries.

Well you could kind of do that a lot quicker by COMPILE XREF > grep "WHOLE-INDEX"; but a query using WHOLE-INDEX isn't necessarily wrong, and it isn't going to flag any queries using dubious bracketing.
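
That quicker sweep might look like the sketch below. The .xrf content is faked so the example is self-contained; in real use it would come from compiling each source with XREF output, and the file, table, and index names here are invented:

```shell
# Stand-in for a tree of compiler-generated .xrf files:
mkdir -p /tmp/xref
cat > /tmp/xref/slowrpt.p.xrf <<'EOF'
slowrpt.p slowrpt.p 8 SEARCH sports.order-line item-num WHOLE-INDEX
EOF

# -r walks the tree, -n reports the matching line number; remember the
# caveat above: a WHOLE-INDEX hit is a candidate, not a verdict.
grep -rn "WHOLE-INDEX" /tmp/xref
```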

It is also rather premature to apply this approach over the whole code base; the point of dynamic monitoring is to identify suspected problem areas first, before analysing the code within those small areas.

Afterwards, create a tool that reads this information and puts the source file and the problematic query on screen for the developer, after which a dynamic analysis can be performed with the purpose of tuning that particular query.

I think there's a tool floating around somewhere that stores XREF output in temp-tables for browsing. In Peg Utils heaven now, I think.

There's also the Profiler for diagnosing function duration, and client logging, but that's a little OT.
 

enoon

Member
Well you could kind of do that a lot quicker by COMPILE XREF > grep "WHOLE-INDEX"; but a query using WHOLE-INDEX isn't necessarily wrong, and it isn't going to flag any queries using dubious bracketing.

I know, but the requirement is to do a static analysis first and then move on to the dynamic analysis. And yes, WHOLE-INDEX is not necessarily bad.

It is also rather premature to apply this approach over the whole code base; the point of dynamic monitoring is to identify suspected problem areas first, before analysing the code within those small areas.

Static analysis required.

I think there's a tool floating around somewhere that stores XREF output in tts for browsing. In Peg Utils heaven now, I think.

There's also the profiler for diagnosing function duration, and clientlogging, but that's a little OT.

Thanks for the tip, I will look into it.
 