mparish
Guest

One option to consider is to use a Java Service Call-Out (SCO) as the first step in the ruleflow, which can read the data from the databases and construct the appropriate Corticon objects with associations. Then, as the last step in the ruleflow, call another SCO that performs all the database updates in a more efficient manner.

Mike

On Dec 3, 2014, at 12:54 AM, "hendrige" wrote:

> Speeding up database inserts
> Thread created by hendrige
>
> Hi all,
>
> We use Corticon Studio and Server version 5.4.1. We are building a batch process for a monthly run. Because of EDC limitations we are forced to read data from various sources (SQL Server tables and views, with no associations/join expressions possible) and do the matching within Corticon. This is a less efficient and somewhat more time-consuming, yet acceptable, option. With around 33,000 instances in one specific flow, the matching and calculation process takes around 32 seconds.
>
> The major drawback is the data writing. The flow uses separate rulesheets for reading data, creating non-persistent entity copies of every record read, and performing calculations on these non-persistent entities; every created CDO is then persisted to the database through a persistent entity in the final rulesheet. With 33,000 records, Hibernate generates 33,000 separate SQL INSERTs (one for each CDO), which is very time-consuming: the writes can take up to 55 minutes, whereas we only need 32 seconds for matching and calculating.
>
> Is there any way to speed up the database insert process in this situation? Would, for instance, the High Performance Batch Processor help us here?
>
> Thank you!
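To make the "final SCO does the writes" suggestion concrete, here is a minimal sketch of what that call-out's database code could look like: instead of letting Hibernate flush one INSERT per CDO, the SCO collects the calculated values and writes them with chunked JDBC batches over a single transaction. The table and column names (`RESULT`, `ID`, `AMOUNT`), the `Record` type, and the batch size of 1000 are all hypothetical placeholders, not anything from the thread.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of a write-side Service Call-Out body: collect the rule flow's
 * results and insert them with JDBC batching instead of row-at-a-time.
 * Table/column names and the Record type are illustrative assumptions.
 */
public class BatchInsertSketch {

    /** Hypothetical calculated record produced by the rule flow. */
    public static class Record {
        final long id;
        final double amount;
        public Record(long id, double amount) { this.id = id; this.amount = amount; }
    }

    /** Split the full record list into chunks so each executeBatch() stays bounded. */
    public static <T> List<List<T>> partition(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            chunks.add(items.subList(i, Math.min(i + chunkSize, items.size())));
        }
        return chunks;
    }

    /** Insert all records using one PreparedStatement and chunked JDBC batches. */
    public static void batchInsert(Connection conn, List<Record> records) throws SQLException {
        String sql = "INSERT INTO RESULT (ID, AMOUNT) VALUES (?, ?)";
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);              // one commit for the whole run
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (List<Record> chunk : partition(records, 1000)) {
                for (Record r : chunk) {
                    ps.setLong(1, r.id);
                    ps.setDouble(2, r.amount);
                    ps.addBatch();              // queue the row, don't execute yet
                }
                ps.executeBatch();              // one round trip per chunk
            }
            conn.commit();
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}
```

With 33,000 records and a chunk size of 1000, this issues 33 batched statements rather than 33,000 single-row round trips, which is where the bulk of the 55 minutes is likely going.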
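Since the slow path is Hibernate emitting one INSERT per entity, it may also be worth checking whether Hibernate's standard JDBC batching properties can be applied. These are real, documented Hibernate settings; whether the EDC configuration in Corticon 5.4.1 actually exposes them is an assumption that would need verifying with Progress support.

```properties
# Standard Hibernate JDBC batching properties (verify that Corticon's
# EDC configuration exposes them before relying on this):
hibernate.jdbc.batch_size=1000
# Group statements by entity so batches are not broken up mid-flush:
hibernate.order_inserts=true
hibernate.order_updates=true
```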