Num records in table

TomBascom

Curmudgeon
Nope, proutil doesn't support -S.

It would be too slow anyway -- you'd be shipping the whole db across the network.

It should not, however, be too difficult to put together a utility that lets you invoke it via a socket or an app server or whatever...
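For example, something along these lines running server-side and exposed through an AppServer (a rough, untested sketch; the database path, temp file and output handling are placeholders):

Code:
/* rough sketch of a server-side wrapper that an AppServer client
   (or a socket listener) could call.  The db path and temp file
   are placeholders.                                               */
DEFINE OUTPUT PARAMETER cReport AS CHARACTER NO-UNDO.

DEFINE VARIABLE cLine AS CHARACTER NO-UNDO.

/* run the analysis locally on the db server and capture the report */
OS-COMMAND SILENT VALUE("proutil /db/mydb -C tabanalys > /tmp/tabanalys.out").

INPUT FROM VALUE("/tmp/tabanalys.out").
REPEAT:
    IMPORT UNFORMATTED cLine.
    cReport = cReport + cLine + CHR(10).
END.
INPUT CLOSE.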
 

BCM

Member
Tom, your reply reminds me of a serious "flaw" (in my opinion) in the Progress design: 4GL code is 4GL code; there is no distinction between 4GL code that drives screens and local processing and 4GL code that does nothing but access the database.
Over the past 16 years I have become accustomed to sending a request to a database engine that does the processing itself and returns the results to me. Consider (as I'm sure you did with your last post) the sheer inefficiency of moving a large table across the network simply to obtain a row count.

We can't all be working locally on the box with the database.
 

joey.jeremiah

ProgressTalk Moderator
Staff member
Tom, your reply reminds me of a serious "flaw" (in my opinion) in the Progress design: 4GL code is 4GL code; there is no distinction between 4GL code that drives screens and local processing and 4GL code that does nothing but access the database.

that part, embedded queries, is actually a good thing.

Over the past 16 years I have become accustomed to sending a request to a database engine that does the processing itself and returns the results to me. Consider (as I'm sure you did with your last post) the sheer inefficiency of moving a large table across the network simply to obtain a row count.

oh yeah! the progress network connection is a bitch!

but with a shared memory connection no data gets passed around, requests aren't queued for server processes etc. the server code is part of the client and the client accesses the database directly from shared memory.

i think the bigger problem is that the 4gl is focused on transaction processing, not analytical processing. progress doesn't write anything like 4gl reporting tools anymore, it doesn't get much attention and there aren't even a lot of samples in this area, they've sort of passed it over to the sql engine.

imo that's why there are no full table / no-index scans, no counting of records or queries that can be satisfied using only index entries, not to mention a query optimizer, and things like how the 4gl network connection behaves etc.

imo if progress gets back to writing 4gl reporting utils we'll start seeing these features. they're never going to understand what's needed until they start writing these kinds of applications themselves.

We can't all be working locally on the box with the database.

actually you can and should.

for argument's sake ... i would imagine that on any non-trivial architecture the business logic should run on the server side (probably on an appserver) close to the database server, instead of running on the client side together with the user interface.

i'm no big expert but it's kind of the difference between the older client-server style model and distributed, service-oriented architectures.
 

BCM

Member
Joey, maybe I understand what you are saying and maybe I don't.

I think what you are saying is that an application can be written to provide this type of information from the database, and that application would be run under the appserver on the same box as the database.

If that's what you are saying then my response is simply that a pure database engine is that program, and I don't have to reinvent that wheel.

If that's not what you are saying then please help my old head to understand better.

Thanks.
 

TomBascom

Curmudgeon
Proutil isn't 4GL -- it's a stand-alone utility.

You could also use the SQL-92 engine to obtain the sort of access that you're looking for. It will even do the table scan without an index. Unfortunately I'm not the best person to provide details on actually doing that. I probably ought to take some time to figure it out though.
 

BCM

Member
Tom,
The SQL-92 approach works fine. However, it is not especially fast. I am simply looking for a way to obtain the count programmatically in < 30 seconds per table regardless of row count.
 

tamhas

ProgressTalk.com Sponsor
So, one can dream about a function that counts the entries in an index without reading the records, or about the database maintaining such a count for you dynamically, and one can make enhancement requests and hope they are eventually implemented. But in the short run it isn't there, so you should focus on what you are going to do about it.

Does it need to be exact and up to the last second? Might a proutil run each night, loaded into a table, be good enough? Possibly a server-side process?

Time to think about what is possible, not about what isn't there.
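For instance, a rough sketch of the "loaded into a table" idea (untested; the TableCount table is hypothetical and the exact tabanalys report layout varies by version, so the parsing is only an illustration):

Code:
/* rough sketch (untested): load the per-table record counts from a
   nightly "proutil mydb -C tabanalys > tabanalys.out" run into a
   hypothetical TableCount table (TblName, RecCount, AsOf).         */
DEFINE VARIABLE cLine AS CHARACTER NO-UNDO.
DEFINE VARIABLE iRecs AS INTEGER   NO-UNDO.

INPUT FROM VALUE("tabanalys.out").
REPEAT:
    IMPORT UNFORMATTED cLine.

    /* collapse runs of spaces so ENTRY() can split the columns */
    DO WHILE INDEX(cLine, "  ") > 0:
        cLine = REPLACE(cLine, "  ", " ").
    END.
    cLine = TRIM(cLine).
    IF NUM-ENTRIES(cLine, " ") < 2 THEN NEXT.

    /* on a data line the second column is the record count */
    iRecs = INTEGER(ENTRY(2, cLine, " ")) NO-ERROR.
    IF ERROR-STATUS:ERROR THEN NEXT.   /* header or footer line */

    CREATE TableCount.
    ASSIGN TableCount.TblName  = ENTRY(1, cLine, " ")
           TableCount.RecCount = iRecs
           TableCount.AsOf     = TODAY.
END.
INPUT CLOSE.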
 

joey.jeremiah

ProgressTalk Moderator
Staff member
Joey, maybe I understand what you are saying and maybe I don't.

I think what you are saying is that an application can be written to provide this type of information from the database, and that application would be run under the appserver on the same box as the database.

If that's what you are saying then my response is simply that a pure database engine is that program, and I don't have to reinvent that wheel.

If that's not what you are saying then please help my old head to understand better.

Thanks.

i think there may be some confusion between database access and business logic. they are not the same thing. a business task could access the database many times (creates, updates, validations etc.) and for several tables.

with the client-server style model the whole program, business logic and user interface, was usually one big blob, with database access over a network. one of the downsides of this architecture was high volumes of messaging. i think that's what you had in mind, right?

in a distributed architecture the business logic is separated out into services (an soa). the business logic, which is actual code and logic and not just database access, runs on the server side on an appserver close to the database server. the advantages are mostly flexibility and manageability.

but why am i talking about architectures when we've got Thomas here :)
 

joey.jeremiah

ProgressTalk Moderator
Staff member
So, one can dream about a function that counts the entries in an index without reading the records, or about the database maintaining such a count for you dynamically, and one can make enhancement requests and hope they are eventually implemented. But in the short run it isn't there, so you should focus on what you are going to do about it.

Does it need to be exact and up to the last second? Might a proutil run each night, loaded into a table, be good enough? Possibly a server-side process?

Time to think about what is possible, not about what isn't there.

these discussions are usually more of a support group type thing :)
 

tamhas

ProgressTalk.com Sponsor
these discussions are usually more of a support group type thing

While there are times for a support group to express sympathy and understanding, there are also times for a support group to try to get a person to focus on moving forward and doing what they can to deal with their problem.
 

joey.jeremiah

ProgressTalk Moderator
Staff member
Tom,
The SQL-92 approach works fine. However, it is not especially fast. I am simply looking for a way to obtain the count programmatically in < 30 seconds per table regardless of row count.

... 9.1d was released quite a long time ago.
 

joey.jeremiah

ProgressTalk Moderator
Staff member
silly question, but since you're dumping the data every night why not keep a count of the records dumped that you could compare afterwards with the MSSQL db?
 

razmig

New Member
The ideal method would have been reading the record count information from the system tables, similar to other databases like MySQL, PostgreSQL.

I checked the _File and some other "hidden" tables in Progress, but it seems there is no such information, although I am convinced there should be some place where Progress writes the current sizes of the tables...

So, I believe the only method is counting, FOR EACH or the SQL COUNT.
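For the FOR EACH route, a minimal example (Customer and CustNum are just sample names; the FIELDS phrase keeps a client/server connection from dragging whole records across the network):

Code:
DEFINE VARIABLE iCount AS INTEGER NO-UNDO.

/* fetch as little as possible per record; CustNum is only a sample field */
FOR EACH Customer FIELDS (CustNum) NO-LOCK:
    iCount = iCount + 1.
END.
DISPLAY iCount.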
 

joey.jeremiah

ProgressTalk Moderator
Staff member
The ideal method would have been reading the record count information from the system tables, similar to other databases like MySQL, PostgreSQL.

I checked the _File and some other "hidden" tables in Progress, but it seems there is no such information, although I am convinced there should be some place where Progress writes the current sizes of the tables...

So, I believe the only method is counting, FOR EACH or the SQL COUNT.

Only if statistics are collected.

But not every shop uses SQL or even installs the SQL engine. And collecting statistics with earlier version 9 releases and large databases may not be practical and can take several days, weeks or even more. But from release 10 onwards collecting index statistics will most likely be way faster than any method available in the 4GL, and will also be accurate.

Code:
SELECT SYSTABLES.TBL, SYSTBLSTAT.VALUE
  FROM SYSTABLES, SYSTBLSTAT
 WHERE SYSTBLSTAT.TBLID = SYSTABLES.ID
   AND SYSTBLSTAT.PROPERTY = 4;

But if you're using SQL with a recent version you could also use SQL to count the number of records fast using only the index keys. SQL commands or scripts can also be called from 4GL using OS-COMMAND.
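for example, something like this from the 4gl (just a sketch, not tested; the db name, host, port, credentials and the exact sqlexp switches are placeholders you'd need to check against your own setup):

Code:
/* sketch only: count rows through the sql-92 engine from the 4gl.
   sports2000, localhost, 12345 and the user/password are placeholders. */
DEFINE VARIABLE cCmd AS CHARACTER NO-UNDO.

/* write the query to a script file */
OUTPUT TO "count.sql".
PUT UNFORMATTED "SELECT COUNT(*) FROM PUB.Customer;" SKIP.
OUTPUT CLOSE.

/* run it through sqlexp and capture the result */
cCmd = "sqlexp -db sports2000 -H localhost -S 12345"
     + " -user sysprogress -password secret"
     + " -infile count.sql -outfile count.out".
OS-COMMAND SILENT VALUE(cCmd).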
 

avdberg

New Member
Hello, a newbie to Progress here.

Every night I dump the tables to flat text files with the following code:

Output to "/data/export/micos/ContractRegel_xas4711i.csv".
For Each ContractRegel No-lock:
Export Delimiter "~t" ContractRegel.
End.
Output Close.

and I have more code like this for every table that I want to export to a flat file. But now, after every export, I want the record count written to an export.log file, with the table name and the count for that table.

Can someone help me with this?

Is there also a way to change the text delimiter to something other than a quote?
 

Serj HAMMER

Junior Racer
Hello, avdberg.

Simply add a counter inside your for-each loop:

DEFINE STREAM fLog.
DEFINE VARIABLE lCount AS INTEGER NO-UNDO.
lCount = 0.
For Each ContractRegel No-lock:
Export Delimiter "~t" ContractRegel.
lCount = lCount + 1.
End.
OUTPUT STREAM fLog TO VALUE("export.log") APPEND.
PUT STREAM fLog UNFORMATTED lCount "~tContractRegel" SKIP.
OUTPUT STREAM fLog CLOSE.

And, if you want, you could add the date and time at which each table's dump finished:

PUT STREAM fLog UNFORMATTED
TODAY " " STRING(TIME, "HH:MM:SS") " "
lCount "~tContractRegel" SKIP.

About the delimiter: you are using the tab delimiter (~t).
Quotes are used to enclose the text data.
 

avdberg

New Member
Hello, avdberg.

Simply add a counter inside your for-each loop:

DEFINE STREAM fLog.
DEFINE VARIABLE lCount AS INTEGER NO-UNDO.
lCount = 0.
For Each ContractRegel No-lock:
Export Delimiter "~t" ContractRegel.
lCount = lCount + 1.
End.
OUTPUT STREAM fLog TO VALUE("export.log") APPEND.
PUT STREAM fLog UNFORMATTED lCount "~tContractRegel" SKIP.
OUTPUT STREAM fLog CLOSE.

And, if you want, you could add the date and time at which each table's dump finished:

PUT STREAM fLog UNFORMATTED
TODAY " " STRING(TIME, "HH:MM:SS") " "
lCount "~tContractRegel" SKIP.

About the delimiter: you are using the tab delimiter (~t).
Quotes are used to enclose the text data.

I am now missing the line: Output to "/data/export/micos/ContractRegel_xas4711i.csv". And I want the count information in a separate file.

I also want to dump more tables. Is it possible to make a variable holding the tables that I want to export, and another variable holding the file names?
 

Osborne

Active Member
Avdberg, from the example posted by Serj use two streams:-

DEFINE STREAM ftables.
DEFINE STREAM fLog.

DEFINE VARIABLE lCount AS INTEGER NO-UNDO.

OUTPUT STREAM ftables TO "/data/export/micos/ContractRegel_xas4711i.csv".
OUTPUT STREAM fLog TO VALUE("export.log") APPEND.

lCount = 0.
For Each ContractRegel No-lock:
Export STREAM ftables Delimiter "~t" ContractRegel.
lCount = lCount + 1.
End.
PUT STREAM fLog UNFORMATTED lCount "~tContractRegel" SKIP.
OUTPUT STREAM ftables CLOSE.
OUTPUT STREAM fLog CLOSE.

For Delimiter I normally use either of the following:-
EXPORT DELIMITER "," or EXPORT DELIMITER ";".

Hope this helps.
 

avdberg

New Member
Avdberg, from the example posted by Serj use two streams:-

DEFINE STREAM ftables.
DEFINE STREAM fLog.

DEFINE VARIABLE lCount AS INTEGER NO-UNDO.

OUTPUT STREAM ftables TO "/data/export/micos/ContractRegel_xas4711i.csv".
OUTPUT STREAM fLog TO VALUE("export.log") APPEND.

lCount = 0.
For Each ContractRegel No-lock:
Export STREAM ftables Delimiter "~t" ContractRegel.
lCount = lCount + 1.
End.
PUT STREAM fLog UNFORMATTED lCount "~tContractRegel" SKIP.
OUTPUT STREAM ftables CLOSE.
OUTPUT STREAM fLog CLOSE.

For Delimiter I normally use either of the following:-
EXPORT DELIMITER "," or EXPORT DELIMITER ";".

Hope this helps.

Yes, this helps me very much. Now I want to do this for, say, 10 tables. Currently I copy the above code 10 times and change the table name. Is it possible to use an array where I put in the table names and the ftables output files, so that I only need the above code once?
 

Osborne

Active Member
Yes, this helps me very much. Now I want to do this for, say, 10 tables. Currently I copy the above code 10 times and change the table name. Is it possible to use an array where I put in the table names and the ftables output files, so that I only need the above code once?

There may be other/better ways to do this, as the only idea I have is to use query, buffer and field handles. E.g. something like this:-

DEFINE VARIABLE fCount AS INTEGER.
DEFINE VARIABLE lCount AS INTEGER.
DEFINE VARIABLE tCount AS INTEGER.
DEFINE VARIABLE qh AS HANDLE.
DEFINE VARIABLE bh AS HANDLE.
DEFINE VARIABLE fh AS HANDLE.
DEFINE VARIABLE expTables AS CHARACTER EXTENT 10.

DEFINE STREAM ftables.
DEFINE STREAM fLog.

expTables[1] = "ContractRegel".

OUTPUT STREAM ftables TO "/data/export/micos/ContractRegel_xas4711i.csv".
OUTPUT STREAM fLog TO VALUE("aaexport.log") APPEND.

DO tCount = 1 TO 10:
   IF expTables[tCount] <> "" THEN DO:
      CREATE BUFFER bh FOR TABLE expTables[tCount].
      CREATE QUERY qh.
      qh:SET-BUFFERS(bh).
      qh:QUERY-PREPARE("FOR EACH " + expTables[tCount]).
      qh:QUERY-OPEN.
      lCount = 0.
      REPEAT:
         qh:GET-NEXT().
         IF qh:QUERY-OFF-END THEN LEAVE.
         REPEAT fCount = 1 TO bh:NUM-FIELDS:
            fh = bh:BUFFER-FIELD(fCount).
            IF fh:DATA-TYPE = "CHARACTER" THEN
               PUT STREAM ftables UNFORMATTED QUOTER(fh:BUFFER-VALUE).
            ELSE
               PUT STREAM ftables UNFORMATTED fh:BUFFER-VALUE.
            IF fCount = bh:NUM-FIELDS THEN
               PUT STREAM ftables SKIP.
            ELSE
               PUT STREAM ftables ",".
         END.
         lCount = lCount + 1.
      END.
      PUT STREAM fLog UNFORMATTED lCount SPACE(1) bh:NAME SKIP.
      qh:QUERY-CLOSE().
      DELETE OBJECT fh NO-ERROR.
      DELETE OBJECT bh NO-ERROR.
      DELETE OBJECT qh NO-ERROR.
   END.
END.
OUTPUT STREAM ftables CLOSE.
OUTPUT STREAM fLog CLOSE.
 