Server's slow disk speed.

Cecil

19+ years progress programming and still learning.
It could also be sorting that causes the extra reads... you can compile with an XREF and look for SORT-ACCESS.

Doh! I do have SORT-ACCESS coming up in my XREF compile. I've been ignoring this, thinking it was normal behaviour.

SORT-ACCESS:
Indicates that the query result is to be ordered by a particular column value and no suitable index exists. A sort of the query result on the noted column value is required.

Now I'm unsure what index is required. However, I do believe that the SORT-ACCESS is legitimate because I'm using BREAK BY for the ACCUMULATE statements.
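For reference, an XREF listing can be generated at compile time; the program and output file names below are just placeholders. Searching the resulting file for SORT-ACCESS shows which queries require a sort, and on which column.

```
/* Generate a cross-reference listing for a program
   ("myprog.p" / "myprog.xref" are illustrative names). */
COMPILE myprog.p XREF "myprog.xref".
```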
 

TheMadDBA

Active Member
SORT-ACCESS is only required if you are doing a BY/BREAK BY and the index you are using to query the records doesn't support the sorting.

Compare the time to process just the FOR EACH with and without the sort and see how much difference it makes. Sometimes it makes it slightly slower, sometimes it makes it take way longer.

If you posted the query we could probably help you decide whether a new index (or a change to your existing index) is going to help. Without a suitable index, here is what happens:

- Progress reads all of the records your index supports
- For every record that matches the where clause it puts the ROWID and the sorted columns in a temporary file
- Sorts the file
- Rereads the records using the ROWID
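As an illustration of when that temp-file pass kicks in, here is a minimal sketch; the table and index names are hypothetical:

```
/* Hypothetical example: assume Customer has an index on (Region, Name)
   but no index involving Balance. */

FOR EACH Customer NO-LOCK
    WHERE Customer.Region = "North"
    BY Customer.Name:       /* sort supported by the index: no SORT-ACCESS */
END.

FOR EACH Customer NO-LOCK
    WHERE Customer.Region = "North"
    BY Customer.Balance:    /* no supporting index: temp file, sort, re-read */
END.
```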
 

Cecil

This is getting embarrassing to ask for help, as I've been a Progress developer for 16 years and I should really know this stuff.

I've attached a section of the code; the part that is taking the longest is the FOR EACH BatchOutcome.

Code:
FOR EACH CampaignBatch NO-LOCK
    WHERE CampaignBatch.CampaignGUID EQ Campaign.CampaignGUID
      AND CampaignBatch.CampaignBatchGUID BEGINS chCampaignBatchGUID
    BREAK BY CampaignBatch.BatchCode:

    IF FIRST(CampaignBatch.BatchCode) THEN
        chBatch = CampaignBatch.BatchCode.

    IF LAST(CampaignBatch.BatchCode) AND
       chBatch NE CampaignBatch.BatchCode THEN
    DO:
        chBatch = SUBSTITUTE('&1 ~~ &2', chBatch, CampaignBatch.BatchCode).
        /* set-user-field('batches':U, chBatch). */
    END.

    FOR EACH BatchOutcome NO-LOCK
        WHERE BatchOutcome.CampaignBatchGUID EQ CampaignBatch.CampaignBatchGUID
          AND BatchOutcome.Date GE daDateRangeFrom
          AND BatchOutcome.Date LE daDateRangeTo
        BREAK BY BatchOutcome.ReportingMonth:

        inRecordCount = inRecordCount + 1.

        ACCUMULATE BatchOutcome.sales11 (TOTAL BY BatchOutcome.ReportingMonth).
        ACCUMULATE BatchOutcome.anp     (TOTAL BY BatchOutcome.ReportingMonth).

        IF LAST-OF(BatchOutcome.ReportingMonth) THEN
        DO:
            FIND ttMonthlySales
                WHERE ttMonthlySales.ReportingMonth = BatchOutcome.ReportingMonth
                NO-ERROR.

            IF NOT AVAILABLE ttMonthlySales THEN
            DO:
                CREATE ttMonthlySales.
                ttMonthlySales.ReportingMonth = BatchOutcome.ReportingMonth.
            END.

            ASSIGN
                ttMonthlySales.sales11  = ttMonthlySales.sales11 + (ACCUM TOTAL BY BatchOutcome.ReportingMonth BatchOutcome.sales11)
                ttMonthlySales.Anp      = ttMonthlySales.Anp     + (ACCUM TOTAL BY BatchOutcome.ReportingMonth BatchOutcome.anp)
                ttMonthlySales.GrossAnp = ttMonthlySales.Anp * 12.
        END.
    END.
END.
 

TheMadDBA

Does ReportingMonth correspond with the date ranges... like Jan-01-2014 to Jan-31-2014 means ReportingMonth 201401? If so add an index on CampaignBatchGUID and ReportingMonth and just use that.

But personally I would skip the BREAK BY/ACCUMULATE and just find the TT record and add it up as you go along. It is likely to be faster, especially as the data grows. Just make sure you have a unique index on ttMonthlySales.ReportingMonth.
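A sketch of that suggestion applied to the inner loop posted above (field and variable names as in the original; untested):

```
/* Inner loop without BREAK BY / ACCUMULATE. No sort order is requested,
   so no SORT-ACCESS is needed. Assumes ttMonthlySales has a unique
   index on ReportingMonth, so FIND resolves to at most one record. */
FOR EACH BatchOutcome NO-LOCK
    WHERE BatchOutcome.CampaignBatchGUID EQ CampaignBatch.CampaignBatchGUID
      AND BatchOutcome.Date GE daDateRangeFrom
      AND BatchOutcome.Date LE daDateRangeTo:

    inRecordCount = inRecordCount + 1.

    FIND ttMonthlySales
        WHERE ttMonthlySales.ReportingMonth = BatchOutcome.ReportingMonth
        NO-ERROR.

    IF NOT AVAILABLE ttMonthlySales THEN
    DO:
        CREATE ttMonthlySales.
        ttMonthlySales.ReportingMonth = BatchOutcome.ReportingMonth.
    END.

    /* Accumulate directly into the temp-table as we go. */
    ASSIGN
        ttMonthlySales.sales11  = ttMonthlySales.sales11 + BatchOutcome.sales11
        ttMonthlySales.Anp      = ttMonthlySales.Anp + BatchOutcome.anp
        ttMonthlySales.GrossAnp = ttMonthlySales.Anp * 12.
END.
```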
 

TomBascom

Curmudgeon
You are selecting using CampaignBatch.CampaignGUID and CampaignBatch.CampaignBatchGUID. Presumably there is an index for that.

You are then sorting the selected result set by CampaignBatch.BatchCode. That field isn't part of the selection criteria, so you get to make another pass through the result set.
 

Cecil

Also... you can steal the VST code from the tool and integrate it into your WebSpeed call for more fine-tuned stats gathering.

Last night I quickly whipped up a WebSpeed version of the DBActMon. Could this be of any use to anybody?

 

Attachments

  • WEB-DBActMon.html.png (93.3 KB)

Cecil

Does ReportingMonth correspond with the date ranges... like Jan-01-2014 to Jan-31-2014 means ReportingMonth 201401? If so add an index on CampaignBatchGUID and ReportingMonth and just use that.

But personally I would skip the BREAK BY/ACCUMULATE and just find the TT record and add it up as you go along. It is likely to be faster, especially as the data grows. Just make sure you have a unique index on ttMonthlySales.ReportingMonth.

The date range can be any arbitrary range; the user can select whatever dates they are interested in. So narrowing it down to just monthly blocks is not going to work.
I'll try and develop an alternative without using the break by & accumulate.
 

Cecil

You are selecting using CampaignBatch.CampaignGUID and CampaignBatch.CampaignBatchGUID. Presumably there is an index for that.

You are then sorting the selected result set by CampaignBatch.BatchCode. That field isn't part of the selection criteria, so you get to make another pass through the result set.

I might change the WHERE expression to use CampaignBatch.BatchCode rather than CampaignBatch.CampaignBatchGUID so it fits in with the BREAK BY. Several things to try out now.

Oh, yes, they are indexed.
 

Cecil

Yes please. :)

It's just a standard WebSpeed page, so WebSpeed is required.

In keeping with the design of the original code, it's just an all-in-one solution, i.e. no include files. It does make a web request to a CDN server to get a version of jQuery.

It was developed in a very short amount of time, so feel free to make changes etc. No warranties, blah blah blah.

Currently it does not report back the program stack.
 

Attachments

  • DBActMon.zip (4.9 KB)

Cecil

But personally I would skip the break by/accum and just find the TT and add it up as you go along. It is likely to be faster than break-by/accum, especially as the data grows. Just make sure you have a unique index on ttMonthlySales.ReportingMonth.

Quick update: the new code does show a small speed improvement, and it now only loops through 64,575 records compared to 129,150 before.

Need to release to the production environment to see how it really performs.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
More testing is still required and I forgot to make it auto refresh for n period of time.
For code like this I prefer manual refresh. Do something in the application, refresh to see its effect on I/O. Do something else, refresh again. Lather, rinse, repeat. For auto refresh, I have ProTop. :) (And it does manual too.)
 

TheMadDBA

Quick update: the new code does show a small speed improvement, and it now only loops through 64,575 records compared to 129,150 before.

Need to release to the production environment to see how it really performs.

The next step would be to look at what is happening inside of the loop... if it really is just the code you posted make sure -Bt and -tmpbsize are set high enough that the temp-table is never actually written to disk.
 

Cecil

For code like this I prefer manual refresh. Do something in the application, refresh to see its effect on I/O. Do something else, refresh again. Lather, rinse, repeat. For auto refresh, I have ProTop. :) (And it does manual too.)

I did have the idea of using a periodic timer to capture the stats so that I could use Highcharts to plot pretty charts/graphs over a period of time, but like you say, it does have a performance impact on the DB.
 

Cecil

The next step would be to look at what is happening inside of the loop... if it really is just the code you posted make sure -Bt and -tmpbsize are set high enough that the temp-table is never actually written to disk.

The -tmpbsize parameter has confused me in the past; the values can only be 1 or 8. I'm not sure what the default value is.
 

Rob Fitzpatrick

The -tmpbsize parameter has confused me in the past; the values can only be 1 or 8. I'm not sure what the default value is.
Default is 4 KB. Multiply that by -Bt and that's how much buffer memory you have for temp-tables. If your DBI file is growing, your buffer is too small, so you're reading and writing some data on disk.
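As a rough sketch of that arithmetic, the parameter values below are purely illustrative:

```
# Client session startup, illustrative values:
#   -tmpbsize 8  -> 8 KB temp-table block size
#   -Bt 2048     -> 2,048 temp-table buffer blocks
# Temp-table buffer pool = 2048 x 8 KB = 16 MB.
# If the session's temp-table data fits within that pool,
# the DBI file should not grow.
mpro mydb -Bt 2048 -tmpbsize 8
```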
 