Forum Post: RE: Paging in a resultset with many many records

  • Thread starter: Matt Baker
Status: Not open for further replies.
This isn’t a new problem. Microbenchmarks like this, in a limited-use scenario, are easily deceptive. On your machine you have only one user, one table, and no locks whatsoever. Scale this to 500 users browsing the same table and it won’t work so well unless you have lots and lots of CPUs. You’re also counting only raw single-table records without a join, which is really cheap. As soon as you have filter criteria, or touch something that isn’t perfectly indexed, that 4.5 seconds could easily become more than 30 seconds per user, and there goes your attention span.

Counting records like this is really inefficient. This is why the Progress grid has a “max data guess” property: it makes the scrollbar appear sane even though the total is just a guess. This is a general problem, not specific to Kendo, and there are lots of articles on the web. First off, with Kendo you have to do server-side paging, and you probably want to consider virtual paging:
http://docs.telerik.com/kendo-ui/api/javascript/ui/grid#configuration-scrollable.virtual
and the demo: http://demos.telerik.com/kendo-ui/grid/virtualization-remote-data

Sencha ExtJS 4+ supports this as well, for those not using Kendo: http://dev.sencha.com/deploy/ext-4.0.0/examples/grid/infinite-scroll.html

From: egarcia [mailto:bounce-egarcia@community.progress.com]
Sent: Friday, March 06, 2015 8:26 AM
To: TU.OE.General@community.progress.com
Subject: RE: [Technical Users - OE General] Paging in a resultset with many many records

Reply by egarcia:

Hello Stefan,

Thank you for bringing this topic up. Performance of paging depends on the implementation. (For example, the links that you provided show that an initial implementation, which copied all the records to a temporary table, was not efficient and could then be optimized further.) I think that optimizations at the database and language level to perform common operations, such as counting records and paging, are always welcome.
For example, the database engine knows (or should know) the total number of records in a table, since it performs all the CRUD operations. This could also be done at the application level, if all CRUD operations are performed via a service. Caching can also be used to improve paging, by caching the results of frequent queries.

I did a quick test with the sample implementation of the JSON Filter Pattern against 1,000,000 records in the Customer table:
community.progress.com/.../2279.business-entities-using-the-json-filter-pattern-code-samples.aspx

I did not use filtering or sorting; I only tried to improve performance by quickly increasing -B. A request with top: 10 (records) and skip: 999900 (records) took 10 seconds. This code uses REPOSITION-TO-ROW(). Ten seconds is about the limit for keeping a user's attention: www.nngroup.com/.../website-response-times. This could be optimized.

Regarding calculating the total number of records: a way to do it today is with NUM-RESULTS, which is populated when the query is opened with PRESELECT:

    DEFINE VARIABLE filter AS CHARACTER NO-UNDO.
    DEFINE VARIABLE qh     AS HANDLE    NO-UNDO.

    ETIME(YES).
    CREATE QUERY qh.
    qh:SET-BUFFERS(BUFFER Customer:HANDLE).
    /* PRESELECT builds the full result list up front, so NUM-RESULTS is the total. */
    qh:QUERY-PREPARE("PRESELECT EACH Customer WHERE " + filter).
    qh:QUERY-OPEN().
    MESSAGE qh:NUM-RESULTS ETIME.

This code took about 4.5 seconds on my machine. The total number of records should not be calculated every time; caching, or calculating the total in advance, could be used to optimize performance.

I hope this helps.
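[Editor's note] The Kendo UI setup Matt recommends, server-side paging combined with virtual scrolling, can be sketched as a configuration object. This is only an illustration: the transport URL and the schema field names are placeholders, not taken from the thread.

```javascript
// Sketch of a Kendo UI grid config: the server applies top/skip, the grid
// fetches pages on demand while the user scrolls. URL and field names are
// hypothetical.
const gridConfig = {
    dataSource: {
        transport: { read: { url: "/service/customers", dataType: "json" } },
        serverPaging: true,      // paging happens on the server, not in the browser
        serverFiltering: true,
        serverSorting: true,
        pageSize: 100,
        schema: { data: "data", total: "total" } // server must report a total count
    },
    height: 540,
    scrollable: { virtual: true } // virtual scrolling per the Telerik docs above
};
// In a page with jQuery and Kendo UI loaded: $("#grid").kendoGrid(gridConfig);
```

The `total` the server reports is what drives the scrollbar, which is exactly where the counting cost discussed in this thread comes from.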
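[Editor's note] The top/skip arithmetic behind egarcia's timed request is simple but worth writing down; `pageToSkipTop` is a hypothetical helper name, not part of any API mentioned in the thread.

```javascript
// Translate a 1-based page number into the top/skip pair used by the
// JSON Filter Pattern request quoted above. Illustrative helper only.
function pageToSkipTop(page, pageSize) {
    if (page < 1 || pageSize < 1) {
        throw new RangeError("page and pageSize must be >= 1");
    }
    return {
        top: pageSize,                 // how many records to return
        skip: (page - 1) * pageSize    // how many records to jump over first
    };
}

// egarcia's request (top: 10, skip: 999900) is page 99,991 of a
// 1,000,000-row table at 10 rows per page — the worst case for skipping.
```

The large `skip` value is the point: the deeper the page, the more work REPOSITION-TO-ROW() (or any skip-based mechanism) has to do, which is why deep pages took 10 seconds in the test.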
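[Editor's note] egarcia's closing suggestion, that the total should not be recalculated on every request, can be sketched as a small time-based cache. This is a minimal illustration, assuming a caller-supplied `countRows` function that performs the expensive count (for example, the PRESELECT/NUM-RESULTS query above behind a service call).

```javascript
// Minimal sketch of caching an expensive row count with a time-to-live.
// countRows and the injectable clock are assumptions for illustration.
function makeCachedCounter(countRows, ttlMs, now = Date.now) {
    let cached = null;          // last count, or null if never computed
    let stamp = -Infinity;      // when it was computed
    return function total() {
        const t = now();
        if (cached === null || t - stamp > ttlMs) {
            cached = countRows();   // only hit the database when the cache is stale
            stamp = t;
        }
        return cached;
    };
}
```

A stale-but-cheap total is usually acceptable for a scrollbar, which is the same trade-off as the Progress grid's "max data guess" property Matt mentions.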
