
Question Profiling an AppServer

Rob Fitzpatrick

ProgressTalk.com Sponsor
#1
Windows Server 2012
OE 11.6.3 32-bit

I have an appserver that publishes a web service. I'm connected to the WSA with SoapUI to fire through a message. I'm trying to profile the code running in the appserver but I'm not getting any output. I feel like I'm missing something obvious; I'd appreciate a hand.

The AS runs on a Windows box. I'm doing configuration/start/stop via OEE. To enable profiling I've edited <asname> | Agent | General | Server startup parameters and added "-profile c:\temp\profiling\profile.cfg" to the list of parameters. Profile.cfg contains:
Code:
-FILENAME     c:\temp\profiling\profile_test.prof
-DESCRIPTION  "This is the description of my profiling test."
-LISTINGS     c:\temp\profiling\
-COVERAGE
-TRACE-FILTER "*"
-STATISTICS
-RAWDATA
I've looked at a bunch of different sources of info on profiling, of various vintages, and they don't all agree on the list of keyword names. In particular, it seems the file name param might be "-FILENAME" or "-OUTFILE"; I assume both work? (And of course some of the keywords are different from the corresponding profiler attributes...) I used basically the same config file on a CHUI client on Linux and it did give me profiling data.

I start the agent, run the program, and stop the agent. I don't get a .prof file or any debug listings. I don't see any errors in the server.log that would indicate a misspelled parameter or keyword name.

Is there some special incantation I need for appserver? Should I use the profiler system handle instead, and start in the Activate procedure and stop/flush data in the Shutdown procedure?
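Something like this is what I have in mind for the Activate/Shutdown approach, based on the attribute names used in the zprof examples (untested sketch; paths match my config above):
Code:
/* Activate procedure: configure the PROFILER handle and start collection */
ASSIGN
    PROFILER:ENABLED     = TRUE
    PROFILER:DESCRIPTION = "This is the description of my profiling test."
    PROFILER:FILE-NAME   = "c:\temp\profiling\profile_test.prof"
    PROFILER:DIRECTORY   = "c:\temp\profiling\"   /* debug listings go here */
    PROFILER:LISTINGS    = TRUE
    PROFILER:COVERAGE    = TRUE
    PROFILER:PROFILING   = TRUE.

/* Shutdown procedure: stop collection and flush the data to disk */
ASSIGN PROFILER:PROFILING = FALSE.
PROFILER:WRITE-DATA().
ASSIGN PROFILER:ENABLED = FALSE.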
 

Rob Fitzpatrick

#3
Thanks, I'd appreciate that.

Waiting anxiously for the day when this is a supported, documented, bug-free process from end to end. Maybe in 12.0...
 
#6
The profiler is supported and mostly bug free. Except for the ridiculous GUI pieces that you don't actually need. "The Profiler" <> "the eye candy".

The really, really, really important part of it is the PROFILER handle. That has been supported, and documented, since v9. The fact that tech support says things like "the profiler isn't supported" when they really should say "the sample profiler GUI tool is not supported" is a case of tunnel vision and a management failure.

Having said that... it would be *very* helpful to have enhancements to make it easier to enable the data collection in running applications and for scenarios like app servers. For instance, I would *love* to be able to turn the profiler on via a VST and get results specific to some other connection via VSTs - client statement cache on steroids :) But first we have to get beyond the tunnel vision that "the profiler" is embodied by the GUI gunk.
 

Rob Fitzpatrick

#7
The profiler is supported and mostly bug free. Except for the ridiculous GUI pieces that you don't actually need. "The Profiler" <> "the eye candy".

The really, really, really important part of it is the PROFILER handle. That has been supported, and documented, since v9. The fact that tech support says things like "the profiler isn't supported" when they really should say "the sample profiler GUI tool is not supported" is a case of tunnel vision and a management failure.
I understand that "profiling" <> "the profiler UI", but data collection by itself is of little use. You need some kind of UX to make sense of the data you collect, and that's the part that wasn't supported until 11.6 when the profiler editor enhancement was added to PDSOE. I'm not interested in writing my own profiling data parser and UI; that's Progress' job.

Dropping a .prof file into 11.6 PDSOE and getting tables and graphs etc. almost instantly is so much nicer than the old workflow of importing profiling data into a local database and watching it grind away, sometimes for hours, before you can see the data in the GUI profiler viewer. That IDE enhancement is nice, for what it is. But it still needs work, and profiling on the whole does have bugs in my experience (both in UI and data collection). I was hoping 11.6 service packs and 11.7 would improve the experience but momentum seems to have been lost. Hopefully 12.0 will get some further improvements before it ships, and supported 11.x releases will get profiler bug fixes.

And while I'm on a rant... in my opinion, "documented" has a specific meaning. It means that anyone, whether a 30-plus-year veteran of the platform or a first-week newbie, can go to the appropriate place in the docs for their version and find the info they are looking for on a given feature. If they can't, then it's an undocumented feature. I don't care about the KB articles or the various versions of Tim's readme.doc; that isn't product documentation. If the PROFILER system handle truly is a supported feature then I should be able to read about it in the same place as all the other system handles. And I should be able to read about "-profile" in the parameter manual. But there is no mention of PROFILER anywhere in the 11.6 or 11.7 docs, apart from the fact that "PROFILER" is a reserved keyword. At best, that omission is a multi-year doc bug, known to PSC. But as you say, not everyone at PSC is on the same page on this and it's frustrating for users.

Ironically, the PDSOE UI is now the *only* part of profiling that is actually documented. But that's not much help to people who struggle to figure out how to properly collect the data in the first place, at least outside of the PDSOE launch configurations.
 
#8
Ok, I seem to have stepped on a banana peel -- I couldn't find the docs either.

I could have sworn it was right there in the "handle reference". Now I'm wondering where the heck I found it, because I cannot find it at all. I think you jinxed me ;)

Aside from that I realize I'm a walking fossil but...

I have had code that imports and parses the profiler data for more than 20 years and have freely shared it the whole time. There is no excuse for a tool that takes hours to digest that data. It's not that hard.

There are examples of using the profiler, and embedding it in an application in ProTop. Look at lib/zprof* for some examples.

BTW -- I did a talk at both the US and EMEA PUGs that covered embedding the Profiler into an application among other things. I know of at least one application whose programming team makes extensive use of profiling data obtained in this way.

The Peter Judge/Paul Koufalis talk also covered using the profiler. Both presentations should have plenty of material to help you out.

Sure, it would be great if Progress provided all of that sort of thing. I agree. All of that is a million times more important than continuing to fart around with GUI code that only runs on one platform and which changes with the breeze.

Now for my own rant :)

Tables and charts and graphs for profiling data? Please explain to me the usefulness of anything beyond the execution time of the top X lines of code. Every time someone gets all excited about clicking and sorting and waving around at all the GUI gunk I find myself wondering WTF they are on. Ok, I guess maybe it is sort of interesting in a navel gazing kind of a way to know that you called something a bunch of times - but if it didn't rise up into the Top X execution time does it really matter? I find it really annoying when people get all hyper focused on something like that as if it is "the" problem while they ignore the elephant in the room that is eating up monstrous amounts of time. It's kind of like when people start their debugging process with the last error message rather than the first error message.
 

Rob Fitzpatrick

#9
There are examples of using the profiler, and embedding it in an application in ProTop. Look at lib/zprof* for some examples.

BTW -- I did a talk at both the US and EMEA PUGs that covered embedding the Profiler into an application among other things. I know of at least one application whose programming team makes extensive use of profiling data obtained in this way.

The Peter Judge/Paul Koufalis talk also covered using the profiler. Both presentations should have plenty of material to help you out.
I have reviewed the material from you, Dan, Peter, Paul, and others over the years. It was more helpful to me, and more recent, than anything from PSC and I'm truly grateful. I'm familiar with enabling and disabling profiling programmatically but I haven't looked at zprof_topx.p yet; I'll do that, thanks. But again, to people who aren't already steeped in the platform (and who most need the help!), that stuff may as well not exist.

I said "tables and graphs", though I suppose "trendlines" would have been a better word than "graphs".

[attached screenshot: trendlines alongside the numeric columns in the PDSOE profiler view]

When used appropriately, they draw the eye to outlying data values. Are they a necessity? No. Only the numbers are a true necessity, though I'm not opposed to enhancements that make my brain see important data quickly. I'm a visual person and trendlines are a nice little help. I'm unapologetic about liking nice things. All I *really* need are food, water, and shelter, but I do like being able to drive to work in a car with air-conditioning and heated seats. :)

OpenEdge is itself a product and products need to compete on features and functionality. If it doesn't have a somewhat-competitive developer experience with good tooling, it will lose developer mind-share and market share. I started out on green-screen so I'm not tied to GUIs but I recognize that certain software markets have "table stakes" that vendors have to provide if they want to compete. For a development platform, that means a powerful, information-rich IDE and other tools.

The benefit of the profile editor UI goes beyond colour. The browses are interactive, allowing me to drill into data quickly. If I select an entry in "Execution time", the information in the other browses changes, so I can see callers and callees, line-specific stats, and debug-list info. That's very helpful, and allows me to look at the most time-consuming modules as well as the most expensive source lines, on average or in total, quickly and easily in a way that a flat, non-sortable list does not. And I can quickly flip over to the call-tree view for another way of looking at the data and following program flow. It isn't just eye candy, it's productivity.

You'll note that I didn't say "GUI". A productive, flexible UI doesn't have to be Windows-centric or Windows-specific. PSC chose to implement this feature as an Eclipse plug-in for their Windows-only IDE. They could have made profiling support a separate, free plug-in that could run in Eclipse on any platform. Or they probably could have made it a web app that would run in any browser. Heck, ProTop has linked, sortable browses; they even could have done this in CHUI (though I wouldn't recommend it).

Ok, I guess maybe it is sort of interesting in a navel gazing kind of a way to know that you called something a bunch of times - but if it didn't rise up into the Top X execution time does it really matter?
It can. I think it depends on the situation. In a one-developer application with maybe tens of thousands of lines of code, there shouldn't be anything in there that you don't want to be there, or that you don't know is there, so maybe all you care about when trying to reduce elapsed time is which procedures take the most time. But in an application developed over decades, with millions of lines of code, touched over time by many dozens of programmers, with shifting business requirements and many customer-specific modifications, it's a different challenge. No one person can realistically know the whole application in detail.

I've seen cases where a business process we were debugging ran procedures that it didn't need to run at all. If a procedure takes a couple of milliseconds to run, you could say who cares whether it runs or not? But if it runs in a loop, say a million times, you've just wasted over half an hour. That's meaningful. Removing something that isn't very time-consuming but doesn't need to run at all can be a quicker win than trying to squeeze a few more percent efficiency out of something that is much more time-consuming but has to run. Sure, that's an extreme example and it doesn't happen often. But it shows what can happen in a large, modular system with reusable components. If person A writes a business component that they run once and it takes 20 ms, they likely won't worry at all about its performance. But if person B later makes that component a part of their process that could potentially run it hundreds of thousands or millions of times, depending on how much data the client has, then being able to see which are the most expensive lines in that component, and maybe trimming it down to 18 or 19 ms per call could be a real win for the system overall, even if it's never in the top X most time-consuming programs.
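To put rough numbers on that, here's a toy sketch (cheap-helper.p is a hypothetical procedure, not from any real application):
Code:
DEFINE VARIABLE i       AS INTEGER NO-UNDO.
DEFINE VARIABLE ignored AS INTEGER NO-UNDO.

ignored = ETIME(TRUE).            /* reset the millisecond timer */
DO i = 1 TO 1000000:
    RUN cheap-helper.p.           /* ~2 ms per call: trivial on its own */
END.
/* 1,000,000 calls x 2 ms = 2,000 s, i.e. roughly 33 minutes overall */
MESSAGE "Elapsed:" ETIME / 1000 "seconds".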

Anyway, I don't think there's much danger of us changing each other's minds. ;) But we can agree that PSC needs to invest more in this area. And they need to document their freaking "documented" features!

</rant>
 
#10
I knew that I could count on you to provide a detailed set of reasons why I should care about that stuff :)

Now if I can just remember to refer back to it the next time I'm feeling like spouting off on the topic :rolleyes: