WebSpeed Broker Status

skevin.john

New Member
Hi,

Progress 9.1E
WebSpeed 3.1C

We have four different Solaris boxes (Production, Dev, ST and UAT). On
each box we have a broker running with the same name and on the same port.

When we check the status of the broker on Production and ST with wtbman
-port 30931 -i broker-name -q, we get the proper response below, as
expected.

"PROGRESS Version 9.1E04 as of Sat Apr 15 01:28:45 EDT 2006


Connecting to Progress AdminServer using rmi://localhost:30931/Chimera (8280)
Searching for broker-name (8288)
Connecting to broker-name (8276)

Broker Name : broker-name
Operating Mode : Stateless
Broker Status : ACTIVE
Broker Port : 8981
Broker PID : 18439
Active Agents : 1
Busy Agents : 0
Locked Agents : 0
Available Agents : 1
Active Clients (now, peak) : (0, 3)
Client Queue Depth (cur, max) : (0, 3)
Total Requests : 3426
Rq Wait (max, avg) : (10084 ms, 44 ms)
Rq Duration (max, avg) : (10085 ms, 45 ms)

PID State Port nRq nRcvd nSent Started Last Change
15577 AVAILABLE 08900 000482 000482 000482 Oct 10, 2012 13:22 Nov 3, 2012 15:47"


However, when we try to get the status on the UAT box, we get the
response below, even though the broker is running fine and we are able
to use the CRM.

"PROGRESS Version 9.1E04 as of Sat Apr 15 01:28:45 EDT 2006


Connecting to Progress AdminServer using rmi://localhost:30931/Chimera (8280)
Searching for broker-name (8288)
Connecting to broker-name (8276)
Broker: broker-name not running (8313)"


On the DEV box, wtbman gets stuck while connecting to the AdminServer,
even though both the AdminServer and the broker are running fine (we can
see both processes with grep):

wtbman -port 30931 -i broker-name -q

"PROGRESS Version 9.1E04 as of Sat Apr 15 01:28:45 EDT 2006


Connecting to Progress AdminServer using rmi://localhost:30931/Chimera (8280)"
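The grep check mentioned above could be sketched roughly like this (the patterns "adminserver" and "broker-name" are assumptions; use whatever actually appears in your ps output):

```shell
#!/bin/sh
# Sketch: check whether a process is up before querying it with wtbman.
# The patterns "adminserver" and "broker-name" below are assumptions --
# adjust them to match your installation.

is_running() {
    # match the pattern against full command lines; grep -v grep keeps
    # the grep process itself out of the result
    ps -ef | grep -i "$1" | grep -v grep > /dev/null
}

if is_running adminserver; then
    echo "AdminServer process found"
else
    echo "AdminServer process NOT found"
fi

if is_running broker-name; then
    echo "broker process found"
else
    echo "broker process NOT found"
fi
```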


Please help with the above issues.


Thanks,

Kevin Joe
 

RealHeavyDude

Well-Known Member
Do you see any errors in the AdminServer's log file when wtbman fails?
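If it helps, a minimal sketch for scanning the AdminServer log for recent errors (the log file name and location vary by installation; admserv.log under the Progress work directory is an assumption, so point the path at your actual log):

```shell
#!/bin/sh
# Sketch: show error-looking lines from the tail of an AdminServer log.
# The default path below is an assumption -- adjust it for your install.

recent_errors() {
    # scan only the last 200 lines so old noise is ignored
    tail -n 200 "$1" | grep -i -E 'error|exception|severe'
}

log="${WRKDIR:-/usr/wrk}/admserv.log"
if [ -f "$log" ]; then
    recent_errors "$log"
fi
```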

In our environments, wtbman -query sometimes fails on the first attempt for unknown reasons. As a workaround in our scripts (I know it's nasty) we always issue the command twice.
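That "issue it twice" workaround could be scripted roughly like this (the wrapper name and the two-second pause are my own assumptions; the wtbman arguments are the ones from the thread):

```shell
#!/bin/sh
# Sketch of the retry workaround: run a command, and if the first
# attempt fails, pause briefly and try exactly once more.

retry_once() {
    "$@" && return 0
    sleep 2   # brief pause before the second attempt (assumption)
    "$@"
}

# usage, as in the original post:
# retry_once wtbman -port 30931 -i broker-name -q
```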

Heavy Regards, RealHeavyDude.