[Progress Communities] [Progress OpenEdge ABL] Forum Post: ABL Overhead Which is Due to R-Code Interpretation

  • Thread starter: dbeavon
The profiler in PDSOE is helpful for determining where CPU is being consumed while programs run, but it doesn't differentiate between the intended work of the program and the overhead due to interpreting the R-code. Are there any performance "counters" that expose the activities of the R-code interpreter itself?

I have some programs that take several seconds to execute, even after all the data has been pulled into temp-tables (TTs) in memory, on a 4 GHz machine with plenty of RAM. I have high -mmax, -Bt, and -tmpbsize settings, and I've already ruled out disk, database, and network access; the only resource that seems to be the bottleneck is CPU.

I can use the PDSOE profiler to isolate the programs consuming the most time and then try to find problems or optimize them individually. However, when I'm working with these programs there doesn't seem to be any way to distinguish the portion of the CPU that is being consumed by the interpreter. I've used -y and -yx, but these don't get to the level of detail I'm hoping for. Ideally there would be Windows-style performance counters that track the interpreter itself.

Another thing I'm considering is the Visual Studio profiler. I believe it can attach to native applications on Windows and do sampling. Has anyone ever tried attaching it to an AVM process?
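For anyone else digging into this: besides the PDSOE UI and the -profile startup parameter, profiling can be scoped from inside ABL via the PROFILER system handle, which at least narrows the capture to the suspect section. A minimal sketch (the output path and the procedure name are hypothetical; this still measures total CPU per line, not interpreter overhead separately):

```
/* Sketch: scope profiler output to one suspect section of code.
   Assumes a recent OpenEdge release with the PROFILER system handle. */
ASSIGN
    PROFILER:ENABLED     = TRUE
    PROFILER:FILE-NAME   = "C:\temp\suspect.prof"   /* hypothetical path */
    PROFILER:DESCRIPTION = "CPU-bound TT processing"
    PROFILER:PROFILING   = TRUE.

RUN suspect-program.p.   /* hypothetical program under investigation */

ASSIGN PROFILER:PROFILING = FALSE.
PROFILER:WRITE-DATA().   /* flush timing data to FILE-NAME */
ASSIGN PROFILER:ENABLED  = FALSE.
```

The resulting .prof file can be loaded into the same PDSOE profiler views; it won't separate interpreter overhead from program work, but a tight capture window makes the per-line numbers easier to reason about.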
