Search results

  1. George Potemkin

    Question Cleanup after using a dataset

    Bingo! DELETE OBJECT ophDataSet. END PROCEDURE. /* Callee */ Tested: the caller gets all records from ophDataSet, and then the callee deletes the dataset. Thank you!
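    A minimal sketch of the pattern this post tests (the temp-table and field names are assumptions for illustration, not from the thread): the callee builds a dynamic dataset, returns it through the OUTPUT DATASET-HANDLE parameter, and deletes its own instance before END:

        /* Callee: builds and returns a dynamic dataset, then deletes
           its own instance (per the post, the caller still receives
           all the records). */
        PROCEDURE B:
            DEFINE OUTPUT PARAMETER DATASET-HANDLE ophDataSet.

            DEFINE VARIABLE hTable AS HANDLE NO-UNDO.

            CREATE TEMP-TABLE hTable.
            hTable:ADD-NEW-FIELD("ItemId", "INTEGER"). /* assumed field */
            hTable:TEMP-TABLE-PREPARE("ttItem").       /* assumed name  */

            CREATE DATASET ophDataSet.
            ophDataSet:ADD-BUFFER(hTable:DEFAULT-BUFFER-HANDLE).

            DELETE OBJECT ophDataSet. /* the cleanup tested in this post */
        END PROCEDURE. /* B */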
  2. George Potemkin

    Question Cleanup after using a dataset

    I was wrong. On return to the editor, Progress deletes all temp-tables linked to the orphaned datasets, but it just forgets to delete the datasets themselves. EMPTY-DATASET does crash a session when the dataset no longer has its temp-tables. The workaround does work! In the callee I’m using...
  3. George Potemkin

    Question Cleanup after using a dataset

    Unfortunately, it turned out to be impossible to re-use or to delete the "forgotten" datasets from the chain in SESSION:FIRST-DATASET. DELETE OBJECT hDataset has no effect. hDataset:EMPTY-DATASET() crashes a session whatever I do with the dataset before EMPTY-DATASET. IMHO, it looks like a bug...
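    A sketch of the kind of probe described here, using only the handles named in the post (SESSION:FIRST-DATASET, DELETE OBJECT, EMPTY-DATASET); per the post, DELETE OBJECT has no effect on the orphans, and calling EMPTY-DATASET() instead crashes the session:

        DEFINE VARIABLE hDataSet AS HANDLE NO-UNDO.
        DEFINE VARIABLE hNext    AS HANDLE NO-UNDO.

        hDataSet = SESSION:FIRST-DATASET.
        DO WHILE VALID-HANDLE(hDataSet):
            hNext = hDataSet:NEXT-SIBLING.      /* save before deleting */
            MESSAGE hDataSet:NAME hDataSet:NUM-BUFFERS.
            DELETE OBJECT hDataSet NO-ERROR.    /* no effect on orphans */
            hDataSet = hNext.
        END.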
  4. George Potemkin

    Question Cleanup after using a dataset

    PROCEDURE B: DEFINE OUTPUT PARAMETER DATASET-HANDLE ophDataSet. <code> END PROCEDURE. /* B */ It looks like the dataset is sent to the calling procedure at the END statement. We should not empty the dataset before the END. We can't empty the dataset after the END.
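    A companion sketch of the caller side (procedure and variable names are illustrative): since the dataset is marshaled to the caller only at B's END, any cleanup of the received copy has to happen in the caller:

        /* Caller: receives the dataset after B reaches its END. */
        PROCEDURE A:
            DEFINE VARIABLE hDataSet AS HANDLE NO-UNDO.

            RUN B (OUTPUT DATASET-HANDLE hDataSet).

            /* ... read the records from hDataSet ... */

            DELETE OBJECT hDataSet NO-ERROR. /* the caller owns its copy */
        END PROCEDURE. /* A */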
  5. George Potemkin

    Question Cleanup after using a dataset

    Can it be used to call a remote procedure? Demo.p calls a local procedure only because it's a demo.
  6. George Potemkin

    Question Cleanup after using a dataset

    #  Breakpoint                        Datasets  Buffers
    1  Test is beginning...                  0        0
    2  A: Before OUTPUT DATASET-HANDLE       0        0
    3  B: Before CREATE TEMP-TABLE           0        0
    4  B: After CREATE TEMP-TABLE            0        0
    5  B: Before CREATE DATASET              0        1
    6  B: After CREATE DATASET               1        1
    ...
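    A sketch of how counts like these can be collected (procedure and variable names are illustrative): walk the session's chains of dynamic datasets and dynamic buffers via NEXT-SIBLING:

        PROCEDURE CountObjects:
            DEFINE OUTPUT PARAMETER opiDataSets AS INTEGER NO-UNDO.
            DEFINE OUTPUT PARAMETER opiBuffers  AS INTEGER NO-UNDO.

            DEFINE VARIABLE hObject AS HANDLE NO-UNDO.

            hObject = SESSION:FIRST-DATASET.   /* dynamic datasets */
            DO WHILE VALID-HANDLE(hObject):
                opiDataSets = opiDataSets + 1.
                hObject = hObject:NEXT-SIBLING.
            END.

            hObject = SESSION:FIRST-BUFFER.    /* dynamic buffers */
            DO WHILE VALID-HANDLE(hObject):
                opiBuffers = opiBuffers + 1.
                hObject = hObject:NEXT-SIBLING.
            END.
        END PROCEDURE.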
  7. George Potemkin

    Question Cleanup after using a dataset

    The documentation says: https://docs.progress.com/bundle/openedge-prodatasets-guided-journey/page/Cleanup-after-using-a-dataset.html There are two recommended practices for cleaning up after using datasets in your code: DETACH-DATA-SOURCE EMPTY-DATASET IMHO, it’s only partially true. I wrote...
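    A minimal sketch of the two documented cleanup calls (the hDataSet handle is assumed to already hold a populated dataset): detach any attached data-sources from each buffer, then empty the dataset:

        DEFINE VARIABLE hDataSet AS HANDLE  NO-UNDO. /* assumed populated */
        DEFINE VARIABLE hBuffer  AS HANDLE  NO-UNDO.
        DEFINE VARIABLE iBuf     AS INTEGER NO-UNDO.

        DO iBuf = 1 TO hDataSet:NUM-BUFFERS:
            hBuffer = hDataSet:GET-BUFFER-HANDLE(iBuf).
            hBuffer:DETACH-DATA-SOURCE().
        END.
        hDataSet:EMPTY-DATASET().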
  8. George Potemkin

    Working with Progress 9.1A

    The link is back: https://cloud.mail.ru/public/GFD8/YjTxQ5PtZ
  9. George Potemkin

    Change AI Extent Sizes

    I totally agree with Rob, but a few questions still remain. Most of the time I see an arbitrary choice of the options, and I’m trying to collect the criteria to make the choice more “calculable”. I'm looking for a "formula". Let’s assume we should keep the replication running...
  10. George Potemkin

    Working with Progress 9.1A

    https://cloud.mail.ru/public/KHAm/3Qm87siCu It's the 9.1B.rar file - 67.2MB. Let me know when I can remove the file.
  11. George Potemkin

    Change AI Extent Sizes

    What would experts say about the optimal number of variable-length AI extents? 3 is the minimum. What is the best practice? At about (or less than) 1 MB/sec of AI writes (less than 1 GB per 16 min), 300 GB of disk space will give you 3.5 days to fix an issue (300 GB at 1 MB/sec is roughly 300,000 seconds, about 3.5 days). Great!
  12. George Potemkin

    Change AI Extent Sizes

    I hope your database does not have 300 AI extents of 1 GB each. It's a good idea. You can add the AI extents online. In V12.8, prostrct removeonline can remove only data extents online: "Unrecognized extent type sports.a3 for prostrct removeonline utility"
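    For the "add AI extents online" part, a hedged command-line sketch (the database and .st file names are assumptions):

        # add new AI extents online from a structure-description file;
        # removing an AI extent online is what V12.8 rejects above
        prostrct addonline sports add_ai.st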
  13. George Potemkin

    Working with Progress 9.1A

    I can share the documentation for Progress V9.1B if you need it. It's in PDF format. You can find the installed licenses, which will tell you what you can do with a database: %DLC%\bin\showfcfg.exe %DLC%\progress.cfg. You can check the database log (the dbname.lg file) to find out how the database is...
  14. George Potemkin

    Dumping with Multiple Threads

    If we stay with D&L… there are reasons not to use the multi-threaded load. The load will be the longest phase. Then the best choice, IMHO, is to dump and load in parallel using the multi-volume dump. Bonus: we will use less disk space.
  15. George Potemkin

    Dumping with Multiple Threads

    The max number of dump threads is defined by the number of keys in the root block of the index used for the dump. In other words, the number of dump threads depends on the chosen index. The larger the table, the more likely its root block has just a small number of index keys. Dbanalys does not...
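    A hedged example of a threaded binary dump (the database, table, directory, and index number are assumptions; -thread 1 enables threading and -threadnum sets an upper bound, but per the post the effective count is still capped by the number of keys in the root block of the chosen index):

        proutil sports -C dump customer /dumpdir -index 2 -thread 1 -threadnum 8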
  16. George Potemkin

    All webspeed broker agents locked suddenly

    To find the root cause one needs: 1. to gather as much information as possible; 2. to have a qualified support team who is able to analyze the huge volume of collected information; 3. to be lucky enough to get the answers after a minimal number of incidents, or to be patient enough to continue an...
  17. George Potemkin

    All webspeed broker agents locked suddenly

    All 5 old WS agents (PIDs 14181, 14184, 14192, 14195, 14201) died with a memory violation between 08:23:21 and 08:23:38. All 5 new WS agents (PIDs 31660, 31673, 31687, 31697, 31707) were LOCKED. The reason must be common to them. > also during this time a Java update was going on. Just curious what the...
  18. George Potemkin

    Long running transaction by a user and his activity code

    Oops! That is why I don't like the numbers! My friend the promon confirms: 1-Single, 2-Stack, 3-One Time
  19. George Potemkin

    Long running transaction by a user and his activity code

    Disagree is my second name. :cool: CachingType = 1 method: you set the value only once and check the statement cache serially - at times t1, t2, t3, etc. Say, every minute. You will get the call stacks at all borders of the stat intervals. CachingType = 3 method: I set CachingType = 3 at t1...
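    A sketch of how the CachingType value is set from ABL (field names per the _Connect VST; the filter is illustrative, and this should be tested outside production first):

        /* activate "One Time" (3) statement caching for all connections */
        FOR EACH _Connect EXCLUSIVE-LOCK
           WHERE _Connect._Connect-Usr <> ?:
            ASSIGN _Connect._Connect-CachingType = 3.
        END.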
  20. George Potemkin

    Long running transaction by a user and his activity code

    @TomBascom You know the customer of ours who taught me to be very careful about code that is proposed to run in the production environment. :-) Does anyone have code to monitor long-running transactions that they can share? A long-transaction watchdog? I know there is a simple code in Dan...
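    Not the code being asked for, but a minimal watchdog sketch under stated assumptions (the threshold and message format are illustrative; fields per the _Trans and _Connect VSTs):

        DEFINE VARIABLE iThreshold AS INTEGER NO-UNDO INITIAL 300. /* seconds */

        FOR EACH _Trans NO-LOCK
           WHERE _Trans._Trans-Duration > iThreshold,
           FIRST _Connect NO-LOCK
           WHERE _Connect._Connect-Usr = _Trans._Trans-Usrnum:
            MESSAGE "Long transaction" _Trans._Trans-Num
                    "by user" _Connect._Connect-Name
                    "running for" _Trans._Trans-Duration "sec".
        END.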