Answered: Saving a structured procedure takes too much time

The question remains why the compilation of this program takes an "enormous" amount of time relative to a larger program. That said, you haven't confirmed the size of all of the source involved.


Try to think of what is different about this particular program, compared with larger ones that compile much more quickly. Does it read its source from a different location than others (e.g. a network share versus a local directory)? Is the r-code saved to a different location? Does it connect to a database that is in a different place, e.g. a slower database server on a slower network? Or a larger number of databases? Or databases with much larger schema?

Does it take a similarly long time to do Compile | Check Syntax? I would expect that to be similar to a compile, apart from saving r-code. If it does, try tracing it with Process Monitor and see where the time is spent. If it doesn't, then trace the save operation.
So Check Syntax on this procedure is quite fast, practically instantaneous.

I think it could be related to the number of temp-tables I use (more than 10) and maybe the enormous quantity of data I load into them.
I have never written a program with this many temp-tables before.

Also, regarding the network and environment, I have no issue or lag with the database itself, even though it is on another server. The *.r is in the same directory as the *.p file.

I think I will try to redo it from scratch and see if I can detect at what point the compilation starts taking too long.
 

Rob Fitzpatrick

I think it could be related to the number of temp-tables I use (more than 10) and maybe the enormous quantity of data I load into them.
I have never written a program with this many temp-tables before.
The amount of data you write to temp-tables, and where it is written and how it is read, can certainly affect your performance significantly at run time. In terms of tuning that, you should look at how large your DBI file is at its peak size and tune your temp-table buffers (-Bt) accordingly. You should be careful about where it is as well. Don't put your temp directory (-T) on a network drive. Do you know where your -T is, for compile and run sessions?
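If you are not sure, you can check from inside each session. A minimal one-line sketch in ABL, querying the session handle:

/* Show the directory this session is using for -T. */
DISPLAY SESSION:TEMP-DIRECTORY FORMAT "x(60)" WITH WIDTH 70.

Run it in both the compile session and the run-time session; they may not be the same.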

There is no temp-table data at compile time. What would matter is the size of the temp-table schema. You do have more than ten temp-tables; twenty-five, in fact. That's a lot, in my experience. But it isn't such a large schema that it would grow your DBI file, no matter how small your temp-table buffers or block size. Something else is the cause of the slowness.

I note you have a couple of temp-table fields defined LIKE database fields. In my opinion, this is a practice to be avoided.
 
I understand about the runtime and will see what I can do.

For the number of temp-tables, I was thinking of splitting the application into multiple procedures, one for each output file.

I will correct this part. Why should it be avoided?

Best regards, and thank you for helping, as always.
 

Rob Fitzpatrick

I note you have a couple of temp-table fields defined LIKE database fields. In my opinion, this is a practice to be avoided.
Why should it be avoided?
This is getting off topic so I'll try not to spend too much time on it. As I said, this is my opinion and I know that other people have different opinions for their own reasons.

I think there is value in assuming that your code will outlive you. Perhaps not literally, but the next time it needs to be updated, you may be on another team, on another project, or working for a different employer or consultancy. So ask yourself: how comprehensible will my code be for the next person who needs to maintain it, perhaps without the benefit of having me around to explain it? Defining a field explicitly makes your design intent more obvious. It also limits dependencies and side effects.

Defining a temp-table field LIKE a database field may be convenient, as a syntactic shortcut, but it also introduces potential side effects. It means your temp-table design is tightly coupled to your database schema when it doesn't have to be, and it means your code can behave differently when run against different versions of your schema. Ultimately, though, it's a matter of personal preference. The sketch below illustrates the difference.
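A minimal sketch; the table and field names here are invented for illustration:

/* LIKE inherits the data type, format, and label from the database
   schema, so a schema change silently changes the temp-table too. */
DEFINE TEMP-TABLE ttOrder NO-UNDO
    FIELD orderNum LIKE order.ordernum.

/* An explicit definition states the design intent and does not
   depend on the version of the connected schema. */
DEFINE TEMP-TABLE ttOrderExplicit NO-UNDO
    FIELD orderNum AS INTEGER FORMAT ">>>>>9" LABEL "Order Num".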
 
Hello,
I did not read the complete post, but sometimes the compile is too long because there are fields in your source code that are not qualified with their table name.
(Using "ordernum" is bad; "orderline.ordernum" is better.)
You can have a look at the accesses (reads) of the VSTs, especially _Field. If the number of reads on this VST during the compilation seems too high, check in your source code that every field used is qualified with its table name; see the sketch below.
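A minimal sketch of the difference, using the sample sports table names purely for illustration:

/* Unqualified: the compiler has to search the schema to work out
   which table "ordernum" belongs to. */
FOR EACH orderline NO-LOCK:
    DISPLAY ordernum.
END.

/* Qualified: name resolution is direct, and the intent is explicit. */
FOR EACH orderline NO-LOCK:
    DISPLAY orderline.ordernum.
END.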

You can use ProTop (with a -basetable under -2 to see the VST _Field); refresh before and after the compile.

Patrice
 

tamhas

If you are up for restructuring this code, one design idea you might consider is encapsulating each temp-table in its own class, with access methods to operate on the temp-table. That isolates knowledge of the temp-table's specific schema to that class; everything else only knows about the class's methods. In your case, this would have the added benefit of creating a bunch of small compile units rather than one gigantic one. If any of the temp-tables have a parent-child relationship, put both of them in the same class, so that each class is about one thing. A sketch of the idea follows.
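A minimal sketch of such a class; the class, field, and method names are invented for illustration (saved as, say, OrderStore.cls):

/* Encapsulates a temp-table; callers only see the methods. */
CLASS OrderStore:

    DEFINE PRIVATE TEMP-TABLE ttOrder NO-UNDO
        FIELD orderNum  AS INTEGER
        FIELD orderDate AS DATE
        INDEX idxNum IS PRIMARY UNIQUE orderNum.

    /* Add one row to the encapsulated temp-table. */
    METHOD PUBLIC VOID AddOrder (piNum AS INTEGER, pdDate AS DATE):
        CREATE ttOrder.
        ASSIGN ttOrder.orderNum  = piNum
               ttOrder.orderDate = pdDate.
    END METHOD.

    /* Report whether a given order number has been stored. */
    METHOD PUBLIC LOGICAL HasOrder (piNum AS INTEGER):
        RETURN CAN-FIND(FIRST ttOrder WHERE ttOrder.orderNum = piNum).
    END METHOD.

END CLASS.

Callers then work only with NEW OrderStore() and its methods, never with the temp-table schema itself.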
 
Hello,
Good news: @Patrice Perrot is the winner. I found one procedure where I was referring to temp-table fields without the temp-table name in front. After correcting that, it compiles perfectly well.

Thank you again for all your help :)

I will work on rebuilding my code with OOP.

Best Regards,
 