Garry Hall
Guest
My last attempt, I promise. Instead of calling WRITE-XML() on a LONGCHAR of the target codepage, write to a UTF-8 LONGCHAR first, then COPY-LOB it to a LONGCHAR of the target codepage. For example:

    DEFINE TEMP-TABLE tt1 NO-UNDO
        FIELD f1 AS CHARACTER
        INDEX ix1 f1.
    DEFINE TEMP-TABLE tt2 NO-UNDO
        FIELD f1 AS CHARACTER
        FIELD f2 AS CHARACTER
        INDEX ix2 f1 f2.
    DEFINE DATASET ds1 FOR tt1, tt2
        DATA-RELATION dr1 FOR tt1, tt2 RELATION-FIELDS (f1, f1).

    DEFINE VARIABLE lcds  AS LONGCHAR NO-UNDO.
    DEFINE VARIABLE lcds2 AS LONGCHAR NO-UNDO.

    DO TRANSACTION:
        CREATE tt1.
        ASSIGN tt1.f1 = "A".
        CREATE tt2.
        ASSIGN tt2.f1 = tt1.f1
               /* Turkish lowercase dotless i:
                  U+0131 = UTF-8 hex 0xC4B1, UTF-8 dec 50353 */
               tt2.f2 = CHR(50353).
    END.

    FIX-CODEPAGE(lcds) = "UTF-8".
    DATASET ds1:HANDLE:WRITE-XML("LONGCHAR", lcds,
        TRUE     /* formatted */,
        "UTF-8"  /* encoding  */).

    FIX-CODEPAGE(lcds2) = "1252".
    COPY-LOB lcds TO lcds2.

This gives me the following error:

    Large object assign or copy failed. (11395)

The error message is vague: it doesn't explain exactly what the problem is, but it does flag that further investigation is warranted. I believe this will be faster than a char-by-char comparison written in ABL. Depending on the size of your dataset, the memory consumption of the LONGCHARs could be significant.
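If you want the check to report a problem rather than raise error 11395 and stop, the COPY-LOB can be wrapped with NO-ERROR. A minimal sketch of that idea, reduced to a single unrepresentable character (this assumes the same UTF-8 session setup as above; variable names are mine):

    DEFINE VARIABLE lcUtf8   AS LONGCHAR NO-UNDO.
    DEFINE VARIABLE lcTarget AS LONGCHAR NO-UNDO.

    FIX-CODEPAGE(lcUtf8)   = "UTF-8".
    FIX-CODEPAGE(lcTarget) = "1252".

    /* U+0131 (dotless i) has no mapping in codepage 1252 */
    lcUtf8 = CHR(50353).

    COPY-LOB lcUtf8 TO lcTarget NO-ERROR.
    IF ERROR-STATUS:ERROR THEN
        MESSAGE "Data cannot be converted to 1252:" SKIP
                ERROR-STATUS:GET-MESSAGE(1)
            VIEW-AS ALERT-BOX.

This still only tells you that *some* character failed to convert, not which one; but as a cheap yes/no pre-flight check before writing XML in the target codepage, it avoids the char-by-char scan.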