Question: very slow queries

Emiliano

New Member
Hi all,

I have a Progress database running OpenEdge Release 11.3.
Users complain that queries are slow, and the application provider recommended that we do a reorganization. I did a dump, load, and index rebuild of the database, and the queries are still slow.

I have increased (tripled) the spin lock retries (-spin) value and there is a small improvement in query performance, but not enough.

Please, could someone tell me if there is another parameter in the database configuration that could also improve the queries? Thanks a lot.
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Please, could someone tell me if there is another parameter in the database configuration that could also improve the queries? Thanks a lot.
Yes. There are lots of broker and client parameters that may affect performance. But there are no magic values that are appropriate for all situations. The proper configuration for you depends on your database, your system(s), your storage, and your application. We don't have details on any of those things so we can't suggest values for you.

What we do know is that you use the on-premise application of an application provider. Remember that it is their responsibility to ensure that their application runs well in your environment. They own this problem until it is resolved to your satisfaction. You have done what they asked you to do: dump and load. (Did they provide you with guidance on doing that, or explain why they thought it would resolve your problem?) The ball is back in their court.

It doesn't help that they have you running a very old, and retired, version of OpenEdge. OE 11.3 was released 8 years ago and was retired 5 years ago. The current release is 12.4 and the current Long-Term Support (LTS) release is 12.2. You would have more performance-tuning options if you were using a newer release.

Tuning performance is a matter of starting with a reasonable configuration that takes advantage of available resources and then systematically identifying bottlenecks and eliminating them, finding the new bottlenecks, eliminating them, etc. In some cases, tuning is not enough. If your system just doesn't have enough resources or enough performance for your workload, database tuning alone will not raise performance to a level that satisfies users. Or if the application is badly written or designed, it may take changes to code, architecture, or schema to get the results you want.

Can you tell us more about the database? The size, the number of tables and indexes, the structure you used for your dump and load, the user count, the type of users and how they connect, the broker configuration; also the configuration of the OpenEdge clients.

For the database server: operating system type and version, CPUs, RAM, storage specifications.

For OpenEdge: the products you are using.

That would be a start.
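
If it helps, commands along these lines will gather some of that information (a sketch only; substitute your own database path for /db/mydb):

Code:
proutil /db/mydb -C describe                  # block size and enabled database features
prostrct statistics /db/mydb                  # storage areas, extents, and space usage
proutil /db/mydb -C dbanalys > dbanalys.txt   # table/index counts and sizes; run it off-hours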
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Regarding the -spin ("spin lock retries") primary broker parameter, that isn't the first parameter I would look at changing if I were investigating an application performance issue. It is a crude tuning mechanism, with changes affecting many different aspects of the database at once. Increasing it will likely only help if you have significant latch contention.
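
For reference, -spin is a startup parameter on the primary broker, and latch activity can be watched in promon before and after a change (the value below is only a placeholder, not a recommendation; the menu wording varies a bit by release):

Code:
proserve /db/mydb -spin 10000     # spin lock retries on the primary broker
promon /db/mydb                   # then R&D -> Activity Displays -> Latch Counts to watch latch waits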
 

TomBascom

Curmudgeon
Rob is, of course, correct that the application provider should be responsible for ensuring the performance of their application.

The unfortunate reality is that there are not very many of them that are good at doing so.

They tend to be very focused on the business functionality of their application and are not usually very good at improving performance or making good performance-oriented recommendations. All too frequently their recommendations are "cookie cutter" configurations that they hope will work for everyone forever. That doesn't always work out so well...

You can get excellent help via forums like this one so long as you are patient and provide the requested missing details. It is an iterative process that takes time but you can learn a lot. Contributors to forums are happy to help you out when they have free time. They are not getting paid for it, so sometimes they aren't going to respond instantly.

If you would like some tooling to help explore performance issues in OpenEdge databases, you should download ProTop.

If you haven't got a lot of time or patience you might be better served by directly engaging a consultant. Both Rob and I and several others who might pop into this thread can offer such services if you'd like.
 

Emiliano

New Member
Thank you very much for your responses.

The hardware has enough resources: memory usage is around 30%, CPU around 10%, and page file usage around 40%.
The database has 4 extents of 2 GB each for area I1 (area number 20, indexes).
The database has 17 extents of 2 GB each for area D1 (area number 10, data).
Tables: 715
Records: 116,498,000
For the dump and load I used proutil base -C dump and proutil base -C load.
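Roughly, the command forms were like this (the table and directory names below are just placeholders):

Code:
proutil base -C dump customer /dumpdir      # binary dump of one table to /dumpdir/customer.bd
proutil base -C load /dumpdir/customer.bd   # binary load of that table into the new database
proutil base -C idxbuild all                # rebuild all indexes after loading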

I'll use ProTop to see if I discover something.
 

TomBascom

Curmudgeon
Rob's point about the D&L was probably more targeted at getting good advice about configuring the new database storage areas.

One of the things that vendors typically do a really poor job of is defining the characteristics of the storage areas that you should assign your tables and indexes to.

The default approach is to let everything live in the schema area. This is bad because the schema area is a "type 1" area. That means that data blocks contain a mix of records from different tables. Everything is all jumbled together. 40 years ago that was somewhat understandable because it tends to optimize the disk space required. But it does a very poor job of optimizing performance.

So with version 10 we got "type 2" storage areas. These areas do NOT mix records from different tables in each block and they allow you to specify something called "cluster size" - the number of blocks which will be allocated together. (You also get to specify the rows per block on a per-storage-area basis - technically that was also true in type 1 areas, but the schema area is pre-defined with a fixed RPB and it was rare for vendors to use other storage areas in that era.)

If you are somewhat luckier than usual your vendor might have set up a "data area" separate from the schema area. They might have even gone ahead and created an "Index Area". That is a step in the right direction - at least you are no longer stuck with the schema area and it could, hopefully, be a type 2 area.

It sounds like you have a data area "d1" and an index area "i1" so that's a somewhat positive sign.

A very few vendors created "many" type 2 storage areas. Unfortunately they have almost universally organized such areas around application functions like "sales", "inventory", "history" and the like. This is unfortunate.

To take advantage of multiple storage areas you want to assign tables to those areas based on their *technical* characteristics. Chief among those is the average row size. You want to assign tables to areas whose rows-per-block setting is the next power of two greater than the block size divided by the average row size. In other words, if you have 4k db blocks (the default on most systems) and the table's average row is 150 bytes long, then 4096 / 150 = 27. The next power of 2 is 32, so you want that table assigned to a storage area whose RPB = 32. If you go too low (say RPB = 16) then you will waste space, since 16 * 150 = 2,400 and thus roughly 1,700 bytes will be wasted in every block (that's an oversimplification - blocks also have some overhead, but it is small enough that it can be ignored for our purpose). If you go too high you won't waste space but you will encourage fragmentation as records grow or are added.
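
As a purely hypothetical illustration for a 4k-block database, the structure (.st) file lines for such areas might look like this - the area names, numbers, and paths are made up; the number after the comma is the rows per block and the number after the semicolon is the blocks per cluster (8, 64, or 512 makes it a type 2 area):

Code:
d "Data_rpb32":100,32;64 /db/prod/prod_data32_1.d1 f 2048000
d "Data_rpb32":100,32;64 /db/prod/prod_data32_2.d1
d "Index":110,64;8 /db/prod/prod_index_1.d1 f 2048000
d "Index":110,64;8 /db/prod/prod_index_2.d1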

You do not need to follow the vendor's configuration. Changing the storage area configuration is invisible to the code. And doing so can have a dramatically positive impact on performance.

You might have to do some extra work to keep it properly maintained if the vendor is still shipping updates. If they send you new data definitions mapped to the schema area (or to their idea of proper areas) you will need to move those objects prior to adding much data. But other than that it is pretty easy to set things up optimally.
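
And if you do need to move an individual table or index after the fact, proutil can do that without another dump and load (a sketch - the database, table, index, and area names below are placeholders, and tablemove locks the table and can be slow on large tables, so test it during a quiet period):

Code:
proutil dbname -C tablemove customer "Data_rpb32" "Index"   # move a table (and its indexes) to new areas
proutil dbname -C idxmove customer.cust-num "Index"         # or move a single index on its own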
 

Rob Fitzpatrick

ProgressTalk.com Sponsor
Rob's point about the D&L was probably more targeted at getting good advice about configuring the new database storage areas.
Yes. Regarding database structure, you have told us the area names and numbers, which don't matter much. What matters are the area characteristics:
  • area records per block;
  • area blocks per cluster;
  • to a lesser extent, but still important:
    • extents per area;
    • extent sizes;
  • after imaging (not strictly performance-related, but more important than performance);
  • the assignment of storage objects (tables, indexes, LOBs) to storage areas.
Showing the content of the database structure file (dbname.st) will give us all but the last point. The last point you can get from the Storage Areas display in ProTop, e.g.:
Code:
┌─────────────────────────────────────────────────────────────────────── Storage Areas ────────────────────────────────────────────────────────────────────────┐
│     # BX Area Name                      Allocated  Variable     Tot GB  Hi Water   Free GB %Allo v %LastX BSZ RPB CSZ #Tbls #Idxs #LOBs #Exts Var *     Max% │
│  ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── │
│ >  11 B1 Order                               0.00      0.00       0.00      0.00      0.00    201%   100%   4  32   1     4    10     0     2 Yes     0.000% │
│     8 B1 Inventory                           0.00      0.00       0.00      0.00      0.00    191%   100%   4  32   1     8    15     0     2 Yes     0.000% │
│     9 B1 Cust_Data                           0.00      0.00       0.00      0.00      0.00     53%    35%   4  32   1     1     0     0     2 Yes     0.000% │
│    12 B1 Misc                                0.00      0.00       0.00      0.00      0.00     33%    15%   4  32   1     6    16     0     2 Yes     0.000% │
│    10 B1 Cust_Index                          0.00      0.00       0.00      0.00      0.00     26%     9%   4  32   1     0     5     0     2 Yes     0.000% │
│     7 B1 Employee                            0.00      0.00       0.00      0.00      0.00     21%     4%   4  32   1     6     8     0     2 Yes     0.000% │
│     1 B1 Control Area                        0.00      0.00       0.00      0.00      0.00      0%     0%   4  32   1     0     0     0     1 No      0.000% │
│     3 B1 Primary Recovery Area               0.00      0.00       0.00      0.00      0.00      0%     0%   8   1   1     0     0     0     1 Yes     0.000% │
│     6 B1 Schema Area                         0.00      0.00       0.00      0.00      0.00      0%     0%   4  32   1     0     1     0     1 Yes *   0.000% │
└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
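
If you don't have an up-to-date dbname.st on disk, prostrct will regenerate one from the database itself (substitute your own database path and file name):

Code:
prostrct list /db/mydb /db/mydb.st   # writes a current structure description file from the live database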

The hardware has enough resources: memory usage is around 30%, CPU around 10%, and page file usage around 40%.
This is what you were using at the moment you looked, not what you have. The system resources you will use, particularly from an OpenEdge perspective, will be a function of your configuration, which we still don't know. A percentage can be low for more than one reason: it could be because your configuration is appropriate for your needs and the system has a lot of resources; or it could be that the system is low on resources but your configuration uses very little of them.

Consider, for example, a 100 GB database with 100 users on a system with 8 GB of RAM. That is very likely not enough memory for the database to run well. But if -B is set to 3000, the database will use very little RAM and the percentage of RAM used will be low. Often, a system administrator who doesn't know what they are doing will look at this situation and conclude that the system has more RAM than it needs (not understanding what "need" means) and offer to reduce system RAM so it can be used elsewhere, when the appropriate response would be to give more of the RAM to the database so it can cache more data and improve application client performance, and perhaps to add RAM on top of what is there.
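
To put that example in perspective: -B is specified in database blocks, so with 4 KB blocks, -B 3000 is only about 12 MB of buffer pool. A broker startup that gives the database a more meaningful share of that 8 GB might look like this (an illustration only, not a recommended value for your system):

Code:
proserve /db/mydb -B 500000   # 500,000 x 4 KB blocks, roughly 2 GB of buffer cache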

I asked you for many different pieces of information. I am not going to list them again here; please reread my earlier post. You have not provided them, or indicated why you have not, or otherwise acknowledged that I asked for them. Trying to diagnose your three-word problem ("queries are slow") is challenging enough in a forum setting, with almost no information to start from. As Tom said, people here will try to help you, for free, in their spare time, if you participate in the process. The proof of that is in hundreds of other threads in this forum. But I have other ways to spend that time. Quite frankly, if I feel like you are not fully engaged because my questions are being ignored, then I am less likely to remain engaged with you to diagnose your problem.
 
Hello,
Although I am a rookie when it comes to Progress dump/load and so on, I was looking for some guidance myself and I stumbled onto this YouTube video, which highlights the basic design of a Progress database (it talks about Oracle, tablespaces, extents, etc.).

It sums up the concept very nicely, especially if someone has done the dump/load and it has not resolved the issue.

In my specific situation, my third party configured my Linux server as one huge RAID 5 partition (very bad practice!), so separating the different area types onto different disks is not currently possible. The best practice is to start with a RAID 10 configuration, which my third party is opposed to.

Regards,
Richard
 
Hi all,

I have a Progress database running OpenEdge Release 11.3.
Users complain that queries are slow, and the application provider recommended that we do a reorganization. I did a dump, load, and index rebuild of the database, and the queries are still slow.

I have increased (tripled) the spin lock retries (-spin) value and there is a small improvement in query performance, but not enough.

Please, could someone tell me if there is another parameter in the database configuration that could also improve the queries? Thanks a lot.

Why not show us your .st file so we can see how your data areas and extents are configured?

If you are not an expert... it takes a lot of time and experience to learn these things and to implement the correct strategy.
 