This weekend was supposed to be the crown jewel of my series of system upgrades over the past two months. I migrated our SX.Enterprise database (along with the character-client and AppServer code) to a shiny new VM running Red Hat AS 5 (64-bit). The new server is running OE 10.1C (64-bit); the old server is on OE 10.1B.
The migration itself went fine--I did a backup/restore (I didn't want to touch the actual data on the old server just in case...good plan, unfortunately), then a dump and load on the new server, upgraded our production terminal servers to 10.1C, and checked all of the various ODBC pieces. Everything went great, and there was no reason it shouldn't have--I had done the same thing three times previously and had various groups of users testing it beforehand. Then this morning comes along, and I start getting calls about people being unable to connect. This happened once we reached about 120 database connections (SX GUI users wind up with 2 or 3 database connections each, so I figure we had about 50 of our 140 users connected). The error says there is no server available for the database.
My 4GL broker setup (I run separate brokers for 4GL and SQL clients) has a minimum-clients-per-server value of 1, but no maximum. So I tried setting the max explicitly (starting at 20 and working my way down from there), but the database wouldn't start. Then I realized that the database's .lg file actually shows which max value the broker picks, so I compared the old server to the new one: both were picking an -Ma value of 3. While I'm surprised I never ran into a problem on the old system, I know we were not reaching 3 clients per server on the new system.
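As a sanity check on the arithmetic (the derivation rule and the -Mn figure below are my assumptions, not anything confirmed by the .lg file): if the broker derives -Ma from -n divided by -Mn when no max is given, then the hard ceiling on remote clients is roughly -Mn times -Ma, and a computed -Ma of 3 runs out quickly:

```python
# Sketch of how a computed -Ma can cap total connections.
# Assumption: with no explicit -Ma, the broker derives it roughly as
# maxusers (-n) / maxservers (-Mn); exact rounding may differ by release.

def computed_ma(max_users: int, max_servers: int) -> int:
    """Approximate the -Ma value the broker picks when none is set."""
    return max(1, max_users // max_servers)

def capacity(max_servers: int, ma: int) -> int:
    """Upper bound on remote client connections: servers * clients-per-server."""
    return max_servers * ma

# Illustrative numbers (not my actual settings): if -Mn were 40,
# then -Ma 3 would cap the broker at 40 * 3 = 120 connections --
# right around where the "no server available" errors started.
print(computed_ma(120, 40))   # derived -Ma
print(capacity(40, 3))        # total connection ceiling
```

If that model holds, the fix is raising -Mn (and/or -Ma) so the product comfortably exceeds the target connection count.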
I understand the overall function of the servers, but I'm at my limit right now. Ideally I'd like to be able to support 500+ connections (we're licensed for up to 1000 DB connections).
Below are the settings in the conmgr.properties file and a snippet from the .lg file showing the parms at startup. Both are from the old server; the only thing I've tried to change so far is blocksindatabasebuffers on the new server, but I even bumped that down to the original value.
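For anyone suggesting changes: the knobs in question live in conmgr.properties, since the database is AdminServer-managed. A minimal, purely illustrative fragment follows (the section names and every value here are assumptions for discussion, not my actual file; only blocksindatabasebuffers is quoted from my real config work above):

```
# Illustrative conmgr.properties fragment -- names/values assumed.
# maxusers maps to -n, maxservers to -Mn, blocksindatabasebuffers to -B,
# and the per-servergroup min/max clients map to -Mi/-Ma.

[configuration.sxdb.defaultconfiguration]
maxusers=1000
maxservers=200
blocksindatabasebuffers=250000

[servergroup.sxdb.defaultconfiguration.4gl]
minclientsperserver=1
maxclientsperserver=5
```

With values along those lines, the 4GL broker could in principle serve up to 200 x 5 = 1000 connections instead of topping out near 120.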