dbeavon
I'm fairly new to running ABL code over client/server, but this seems to be the direction Progress is taking, given some of the database enhancements they are focusing on in OE 12 (and given the way PASOE seems so much better suited to running in its own independent software tier). I was trying to figure out how to troubleshoot slow remote servers, and came across the _ActServer VST. I had a couple of quick questions about this documentation: OpenEdge 11.7 Documentation

First, that link has a really unusual footnote about temp-tables. It seems totally off-topic and is probably just a documentation bug, right?

Second, I noticed a field called _Server-TimeSlice:

_Server-TimeSlice  INT64  Number of query time slice switches

This is probably equivalent to the field in the OEE management console called "interrupts". Can someone please confirm?

It strikes me that these "remote servers" can place an additional and somewhat artificial bottleneck between a remote client application and its data. They force arbitrary remote clients to cohabit with each other permanently, insofar as the configuration (via -Ma) has specified. This living arrangement might work out very unfavorably for one of the clients (e.g. if an unresponsive and sluggish PASOE client, PASN, must share a server with a greedy batch processor that posts journal entries for the entire system).

I'm trying to understand how this artificial bottleneck could be quantified, and I thought that _Server-TimeSlice might be a good indicator. My thought is that, for any server hosting multiple remote clients, I would take the number of queries (_Server-QryRec) and divide by _Server-TimeSlice to see how frequently clients experience blocking because of the other cohabitants of the same server. Does that sound like a reasonable metric? Are there any factors that will affect the results, other than contention with the other cohabitants of the server?
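For anyone wanting to try this, here is a minimal ABL sketch of the ratio I have in mind, read straight from the _ActServer VST. The _Server-QryRec and _Server-TimeSlice fields are the ones from the 11.7 docs quoted above; I'm assuming _Server-Id is the identifying field, so please check that against your own schema before relying on it:

```abl
/* Sketch: queries received per time-slice switch, per remote server.
   Assumes _ActServer field names from the OE 11.7 VST documentation;
   _Server-Id as the server identifier is an assumption.              */
FOR EACH DICTDB._ActServer NO-LOCK
    WHERE _ActServer._Server-TimeSlice > 0:
    DISPLAY
        _ActServer._Server-Id
        _ActServer._Server-QryRec
        _ActServer._Server-TimeSlice
        (_ActServer._Server-QryRec / _ActServer._Server-TimeSlice)
            LABEL "Qry per slice" FORMAT ">>>9.99".
END.
```

A low "Qry per slice" value would suggest queries are being interrupted (sliced) many times before completing, which is roughly the symptom I'm trying to detect.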
What I was really looking for is some explicit "wait time" measurement. It seems to me that if the server is receiving query requests, it could keep track of how long those requests waited before being granted the "timeslices" needed to complete. If a "wait time" like this were exposed in a VST, it would be a much better indicator of how much inefficiency is being artificially imposed on our remote client queries than dividing the number of queries by the number of timeslices.

The hardware we use to host the OE database is quite beefy, with more than enough memory, CPU, and SSD disk capacity to service all the connected clients. So it really bothers me when remote clients are sluggish and the database server is making poor use of its resources, simply because of artificially imposed bottlenecks. I know that OE 12 will improve the situation a great deal, but until then I'd like to be able to monitor for these bottlenecks.

Any pointers would be greatly appreciated. I'm hoping for ideas that leverage OEE and VSTs, rather than purchasing third-party DBMS management tools. We are running OE 11.7.4.
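In the absence of a real wait-time counter, the closest I can think of is sampling the cumulative VST counters at an interval and working with the deltas, since the _ActServer values accumulate from server startup. A rough sketch of that monitoring loop (again assuming the same _ActServer field names, with _Server-Id as the key; the 60-second interval is arbitrary):

```abl
/* Sketch: snapshot _ActServer, wait, then report per-interval deltas.
   Field names assumed from the OE 11.7 VST docs; verify locally.     */
DEFINE TEMP-TABLE ttSnap NO-UNDO
    FIELD iServer AS INT64
    FIELD iQry    AS INT64
    FIELD iSlice  AS INT64
    INDEX ixServer iServer.

/* First snapshot of the cumulative counters. */
FOR EACH DICTDB._ActServer NO-LOCK:
    CREATE ttSnap.
    ASSIGN ttSnap.iServer = _ActServer._Server-Id
           ttSnap.iQry    = _ActServer._Server-QryRec
           ttSnap.iSlice  = _ActServer._Server-TimeSlice.
END.

PAUSE 60 NO-MESSAGE.  /* sampling interval (arbitrary) */

/* Second pass: subtract the snapshot to get activity per interval. */
FOR EACH DICTDB._ActServer NO-LOCK,
    FIRST ttSnap WHERE ttSnap.iServer = _ActServer._Server-Id:
    DISPLAY
        _ActServer._Server-Id
        (_ActServer._Server-QryRec   - ttSnap.iQry)   LABEL "Qry/interval"
        (_ActServer._Server-TimeSlice - ttSnap.iSlice) LABEL "Slices/interval".
END.
```

A spike in slices per interval without a corresponding rise in queries would be the kind of co-habitation pressure I'm trying to catch, though it's still only a proxy for actual wait time.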