AppServer broker out of memory error

Tony Brooks

Hi,

We are running OpenEdge 10.2B08 with Java 1.5 on AWS Linux 64-bit and are getting java.lang.OutOfMemoryError from the AppServer brokers. This is with the default heap size of 1 GB, i.e. no heap space setting in jvmArgs.
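For reference, this is roughly what such a setting would look like if we did add one (we currently have no such line; the section name is taken from our broker log and the 2048m value is only an illustrative figure, not a recommendation):

```ini
# Fragment of $DLC/properties/ubroker.properties (illustrative only).
# Section name assumed from the as_oamaru broker log above.
[UBroker.AS.as_oamaru]
    jvmArgs=-Xmx2048m
```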

This AppServer is not that busy: about 68k requests in 20 hours, only 12 end users, and only 4 agents running. It does not automatically start any more agents.

The system seems to run fine for a period of time, then we get the java.lang.OutOfMemoryError.

We can't seem to reproduce the issue, even if we take a snapshot and start up the same VM on different hardware.

Anyone else seen something like this?

Most of our other systems are on 10.2B07, but also on Red Hat.


[rev@revpdb1 bin]$ ./java -version
java version "1.5.0_22"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.5.0_22-b03)
Java HotSpot(TM) 64-Bit Server VM (build 1.5.0_22-b03, mixed mode)

[14/12/08@17:35:42.071+1300] P-031030 T-Main 1 --- --- /tunz/log/as_oamaru.broker.000001.log opened.
[14/12/08@17:35:42.071+1300] P-031030 T-Main 1 --- --- Logging level set to = 1
[14/12/08@17:35:42.071+1300] P-031030 T-Main 1 --- --- Log entry types activated: UBroker.Basic,
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- Unhandled exception caught in C-0003. (8419)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- java.lang.OutOfMemoryError: Java heap space
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at java.security.AccessController.getStackAccessControlContext(Native Method)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at java.security.AccessController.checkPermission(AccessController.java:411)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at java.lang.SecurityManager.checkPropertyAccess(SecurityManager.java:1285)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at java.lang.System.getProperty(System.java:628)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at com.progress.common.util.PropertyFilter.filterValue(PropertyFilter.java:46)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at com.progress.ubroker.util.ubProperties.getValueAsString(ubProperties.java:2508)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at com.progress.ubroker.util.ubProperties.getValueAsInt(ubProperties.java:2518)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at com.progress.ubroker.broker.ubClientThread.solicitEvent(ubClientThread.java:800)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at com.progress.ubroker.broker.ubClientThread.mainline(ubClientThread.java:421)
[14/12/09@14:08:35.096+1300] P-031030 T-C-0003 1 UB ---------- at com.progress.ubroker.broker.ubClientThread.run(ubClientThread.java:355)
[14/12/09@14:08:40.991+1300] P-031030 T-C-0007 1 UB ---------- Unhandled exception caught in C-0007. (8419)
[14/12/09@14:08:40.991+1300] P-031030 T-C-0007 1 UB ---------- java.lang.OutOfMemoryError: Java heap space
 
I would recommend generating a protrace of the process to see what shows up in the "Persistent procedures/Classes" section of the protrace file. We have found thousands of persistent procedures loaded in memory due to bad code. On Red Hat: kill -s USR1 <pid>, then go to the current working directory of the process to find the protrace file.
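Roughly, the steps look like this (the kill target PID is whatever ps shows for your AppServer process; the /proc lookup below is demonstrated with the shell's own PID, since the rest depends on your environment):

```shell
# 1. Find the AppServer process PID, e.g.:
#      ps -ef | grep <your AppServer name>
# 2. Ask it to dump a protrace file:
#      kill -s USR1 "$PID"
# 3. The protrace file lands in that process's current working
#    directory. On Linux you can read that directory from /proc;
#    shown here with this shell's own PID as a stand-in:
pid=$$
cwd=$(readlink "/proc/$pid/cwd")
echo "protrace would appear in: $cwd"
ls "$cwd"/protrace.* 2>/dev/null || true   # protrace.<pid> once dumped
```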

Another thing we have found that can cause OutOfMemoryErrors is the user that started the AdminServer and other processes running out of a resource. You might compare that user's ulimit settings on the different servers to make sure they all match.
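A quick way to capture these for comparison, run as the user that starts the AdminServer (which limits matter is a judgment call; open files and user processes are the usual suspects):

```shell
# Dump all soft limits for the current user; diff this output between
# the servers that do and don't hit the OutOfMemoryError.
ulimit -a

# The two limits most often implicated:
ulimit -n   # max open file descriptors
ulimit -u   # max user processes (native threads count against this)
```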
 