How to Read XML Files

simedarby

New Member
Hi All,

We have a Progress-based application and a third-party web-based application (not Progress). We pull data from that third-party application by downloading XML files and saving them to the c:\temp directory.

Here's my question: does anyone have an idea, or even a piece of code, that will read XML files and save the data into a Progress database? Has anyone done this before, programming it in Progress 4GL without using any Progress tools?

Any ideas or suggestions? :confused:

Thanks in advance,

Simedarby
 
Hello,

Well, I'm not sure my answer is correct or optimal, but it depends a lot on the structure of that XML document. I have done similar work once, and as it was only a one-time import, I read the XML file one row after another with an input stream and decided what to do with each row inside the REPEAT block. I mean, when there is an opening element tag like <element1> you know you need to create a new record, and the next few rows will be attributes of that record...

So the logic for me looked like:

DEFINE STREAM stream1.
DEFINE VARIABLE c_Row     AS CHARACTER NO-UNDO.
DEFINE VARIABLE i_counter AS INTEGER   NO-UNDO.
DEFINE VARIABLE separator AS CHARACTER NO-UNDO INITIAL ">". /* whatever delimiter fits the file */

INPUT STREAM stream1 FROM "c:\temp\file.xml". /* the downloaded file in c:\temp */

main:
REPEAT:
    i_counter = i_counter + 1. /* just a counter */
    IMPORT STREAM stream1 UNFORMATTED c_Row NO-ERROR.
    IF ERROR-STATUS:ERROR THEN DO: /* from here it's only examples of BL */
        /* report */
        NEXT main.
    END.
    IF c_Row = "" THEN NEXT main.
    IF ENTRY(2, c_Row, separator) = "" THEN DO:
        /* another lot of logic */
    END.
END.

INPUT STREAM stream1 CLOSE.

There were about 60,000 records in that XML file and it was rather fast, even with a lot of checking etc.

Good luck, and sorry if this isn't what you were looking for.

EDIT: there's a nice article which might help you do it in a more civilised way:
http://www.psdn.com/library/entry!default.jspa?categoryID=1176&externalID=53&fromSearchPage=true
 
Options vary according to the version of Progress (which one should always note when asking a question). Using INPUT has got to be the hard way.

Even fairly antique versions of ABL support DOM parsing, so one can intelligently deal with nodes and structures in the XML. The downside of DOM is that it requires the entire document to be in memory at the same time, so it can be a problem with really large documents.
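
For example, a DOM pass over a document might look roughly like the sketch below. The file name, the "order" element, its "number" attribute and the ttOrder temp-table are made up for illustration; they would have to be replaced with whatever the third party actually sends.

/* minimal DOM sketch - names are illustrative assumptions */
DEFINE TEMP-TABLE ttOrder NO-UNDO
    FIELD orderNum AS CHARACTER.

DEFINE VARIABLE hDoc  AS HANDLE  NO-UNDO.
DEFINE VARIABLE hRoot AS HANDLE  NO-UNDO.
DEFINE VARIABLE hNode AS HANDLE  NO-UNDO.
DEFINE VARIABLE i     AS INTEGER NO-UNDO.

CREATE X-DOCUMENT hDoc.
CREATE X-NODEREF hRoot.
CREATE X-NODEREF hNode.

hDoc:LOAD("file", "c:\temp\orders.xml", FALSE). /* the whole document goes into memory here */
hDoc:GET-DOCUMENT-ELEMENT(hRoot).

/* walk the children of the root element */
DO i = 1 TO hRoot:NUM-CHILDREN:
    hRoot:GET-CHILD(hNode, i).
    IF hNode:SUBTYPE <> "ELEMENT" THEN NEXT.
    IF hNode:NAME = "order" THEN DO:
        CREATE ttOrder.
        ttOrder.orderNum = hNode:GET-ATTRIBUTE("number").
    END.
END.

DELETE OBJECT hNode.
DELETE OBJECT hRoot.
DELETE OBJECT hDoc.

From the temp-table it is then an ordinary FOR EACH to create the database records.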

With somewhat less antique versions you have SAX readers and writers. SAX is considerably faster and does its work in a single pass, so it does not have the same memory requirements. It is particularly well suited to picking out limited amounts of information from a file containing a lot of information that one doesn't want.
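
A SAX version of the same idea could look something like this; again the file name, the "order" element and its "number" attribute are assumptions for illustration. The reader streams through the file and calls back into the procedure at each tag, so only the current event is held in memory.

/* minimal SAX sketch - names are illustrative assumptions */
DEFINE TEMP-TABLE ttOrder NO-UNDO
    FIELD orderNum AS CHARACTER.

DEFINE VARIABLE hParser AS HANDLE NO-UNDO.

CREATE SAX-READER hParser.
hParser:HANDLER = THIS-PROCEDURE.  /* callbacks run in this procedure */
hParser:SET-INPUT-SOURCE("file", "c:\temp\orders.xml").
hParser:SAX-PARSE().               /* one streaming pass over the file */
DELETE OBJECT hParser.

/* ttOrder now holds one row per <order> element */

/* called by the SAX reader at every opening tag */
PROCEDURE StartElement:
    DEFINE INPUT PARAMETER pNamespaceURI AS CHARACTER NO-UNDO.
    DEFINE INPUT PARAMETER pLocalName    AS CHARACTER NO-UNDO.
    DEFINE INPUT PARAMETER pQName        AS CHARACTER NO-UNDO.
    DEFINE INPUT PARAMETER pAttributes   AS HANDLE    NO-UNDO.

    IF pLocalName = "order" THEN DO:
        CREATE ttOrder.
        ttOrder.orderNum = pAttributes:GET-VALUE-BY-QNAME("number").
    END.
END PROCEDURE.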

Again, with a sufficiently modern version and some luck, you might be able to use the READ-XML method on a temp-table to suck the XML in with a single operation into its temp-table equivalent. I say "luck" because READ-XML really only works dependably for things written in the same format as is produced by the WRITE-XML method. If you have trouble with the READ, I would hand-build a sample temp-table and experiment with WRITE-XML to see what the difference is from the input you are being provided. It is possible that some simple XSLT transformation will fix things up for you.
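
A rough READ-XML sketch, with a made-up temp-table and file name that would have to match whatever the third party sends:

/* minimal READ-XML sketch - temp-table layout is an illustrative assumption */
DEFINE TEMP-TABLE ttOrder NO-UNDO
    FIELD orderNum  AS CHARACTER
    FIELD orderDate AS DATE
    FIELD amount    AS DECIMAL.

DEFINE VARIABLE hTT AS HANDLE  NO-UNDO.
DEFINE VARIABLE lOk AS LOGICAL NO-UNDO.

hTT = TEMP-TABLE ttOrder:HANDLE.

/* pull the whole file into the temp-table in one call */
lOk = hTT:READ-XML("file", "c:\temp\orders.xml", "empty", ?, ?).

IF lOk THEN
FOR EACH ttOrder:
    /* create or update the real database records here */
    DISPLAY ttOrder.
END.

/* to see the layout READ-XML expects, write a hand-built sample back out: */
/* hTT:WRITE-XML("file", "c:\temp\sample.xml", TRUE). */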

There are samples of all this in the documentation and we can help with specific issues once you get into it.

FWIW, I heartily recommend Stylus Studio for working with XML documents ... does all kinds of things.
 