[Progress Communities] [Progress OpenEdge ABL] Forum Post: RE: Bulk insert into OpenEdge database table

Status
Not open for further replies.

dbeavon

Guest
Mohd Sayeed Akthar >> As the number of transactions does not matter here, the "insert load" can be divided into 3-4 groups and fed to 3-4 different SQL clients.

While the "delete" and "insert" operations can happen in separate transactions, it is better if each of those two is atomic. I already use parallel operations quite often when I'm interacting with the OE database (especially when using "client/server" connectivity from ABL). But normally this is NOT for database updates - it is just when I'm pulling NO-LOCK (uncommitted) data out as fast as possible via FIND statements.

Rob Fitzpatrick >> Why is this ideal? A single long-running transaction with a lot of notes can cause BI growth and, in the event of a crash, could take a long time (and even more BI space) to undo.

My point is that it should NOT be "long-running" in terms of duration, but it must still insert lots of data. Whatever is reasonable to do for a "shared memory" client should be just as reasonable for a remote client. I.e., I'm looking for an insert operation that can happen over the network with performance comparable to doing it in "shared memory". Our PASOE servers (where all our ABL logic is hosted) are remote from the database and are load-balanced and fault-tolerant. They can accomplish just about everything that they could when connected in "shared memory", with a few exceptions (like large inserts). Sometimes we have to fall back on SQL92 for these types of things, although in the case of inserts, SQL does NOT seem to have bulk insert functionality yet either.

>> Can you say more about your use-case?

One is normal business operations, like calculating a large financial journal entry at the end of a fiscal week and inserting the calculated results quickly in bulk (i.e. hundreds or thousands of records).

Another would be for the purpose of running server-side joins. Let's say you have a U/I screen where the user has prepared a filtered list of 1000 arbitrary sales orders, and they would like to see all the related details. Instead of the (typical) approach of looping through these individually and gathering the data in separate round-trips (parallel or otherwise), it would be nice to insert the order numbers back INTO the database in bulk (into a utility table of some kind). That would facilitate subsequent server-side-join queries that gather all the related records joined to those order numbers.

The bottlenecks are NOT really at the database level (I/O or transaction limitations). They are related to application logic (limitations in "client-server" ABL, and limitations in SQL features). Conceptually speaking, bulk inserts aren't profoundly new or interesting for database engines. I think Progress should have this capability on *both* the SQL and the ABL side of things. For example, perhaps there could be a "COPY-BUFFER(S)" operation where you send an entire temp-table of data back into a database table from ABL in a single logical round-trip from the programmer's perspective. (_mprosrv should not have to rely on a long, chatty conversation with PASOE before it inserts every single record.) Of course, ABL bulk inserts might be harder than they seem; I would guess they would have to be disallowed in the presence of ABL triggers and such.
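To illustrate the workaround available today: ABL can already pass an entire temp-table as a parameter to a server-hosted procedure, which gets the data across the wire in one logical round-trip and lets the server do the record creation and the join locally. A minimal sketch follows; the procedure name `loadOrderFilter.p`, the handle `hAppServer`, and the tables `utilOrderFilter` and `OrderLine` are all hypothetical placeholders, not anything from the original thread.

```abl
/* --- Client side (e.g. PASOE agent) --- */
DEFINE TEMP-TABLE ttOrderNum NO-UNDO
    FIELD OrderNum AS INTEGER
    INDEX idxOrder IS PRIMARY UNIQUE OrderNum.

/* ... populate ttOrderNum from the U/I filter ... */

/* Ship the whole temp-table in one call; hAppServer is an
   already-connected server handle (hypothetical). */
RUN loadOrderFilter.p ON SERVER hAppServer
    (INPUT TABLE ttOrderNum).

/* --- Server side: loadOrderFilter.p (DB-connected) ---
   (requires the same DEFINE TEMP-TABLE ttOrderNum here) */
DEFINE INPUT PARAMETER TABLE FOR ttOrderNum.

/* Bulk-create rows in a utility table inside one short
   transaction; utilOrderFilter is a hypothetical table. */
DO TRANSACTION:
    FOR EACH ttOrderNum:
        CREATE utilOrderFilter.
        ASSIGN utilOrderFilter.OrderNum = ttOrderNum.OrderNum.
    END.
END.

/* Server-side join: one query instead of 1000 round-trips. */
FOR EACH utilOrderFilter NO-LOCK,
    EACH OrderLine NO-LOCK
        WHERE OrderLine.OrderNum = utilOrderFilter.OrderNum:
    /* ... gather the related details ... */
END.
```

This still pays per-record CREATE cost on the server, but because the temp-table travels as a single payload, it avoids the chatty per-record conversation between the client and _mprosrv that the post complains about.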
