I sometimes get a PermGen space OutOfMemoryError when I do a lot of deployments on the Sun Java System Application Server 9.1_02 (build b04-fcs). The first hit when googling this error was a thread suggesting to increase the heap size. But not just the complete heap with the -Xmx argument, because the PermGen space is only a fixed part of that memory, used for class loading. You can increase that specific part with the following argument (the size here is just an example; tune it to your situation):

-XX:MaxPermSize=256m
Understanding the problem
Even better is to find out where it leaks. But before we can do so we need to understand the problem.
What is PermGen space anyway? The memory in the Virtual Machine is divided into a number of regions. One of these regions is PermGen. It’s an area of memory that is used to (among other things) load class files. The size of this memory region is fixed, i.e. it does not change while the VM is running. You can specify the size of this region with a command-line switch: -XX:MaxPermSize . The default is 64 MB on the Sun VMs.
This is a comprehensive explanation of the cause:
To summarize, a new instance of a custom Classloader is created by the Application Server whenever a new application (.ear, .jar, .war) is deployed to the server, and this Classloader is used to load all the classes and resources contained in that application. The benefit of this approach is that applications are self-contained and isolated from each other, and there are no conflicts between different applications. When an application is undeployed from the server, its associated Classloader is also unloaded, and it becomes subject to garbage collection by the JVM.
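The deploy/undeploy cycle described above can be sketched in a few lines. This is a minimal illustration, not the server's actual code: `DeployCycle`, `loadViaAppLoader`, and the empty URL array are invented for the example, and since no application jars are given, class loading simply delegates to the parent loader.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Minimal sketch of the per-application classloader lifecycle:
// each deployment gets its own loader, discarded again on undeploy.
public class DeployCycle {

    // "deploy": create a fresh, isolated loader for the application.
    // (No application jars here, so everything delegates to the parent.)
    static Class<?> loadViaAppLoader(String className) throws Exception {
        URLClassLoader appLoader =
                new URLClassLoader(new URL[0], DeployCycle.class.getClassLoader());
        try {
            return appLoader.loadClass(className);
        } finally {
            // "undeploy": close and drop the loader so it, and the classes
            // it defined in PermGen, become eligible for garbage collection
            appLoader.close();
        }
    }

    public static void main(String[] args) throws Exception {
        // JDK classes are delegated to the parent, so we get the same Class
        System.out.println(loadViaAppLoader("java.util.ArrayList")
                == java.util.ArrayList.class);
    }
}
```

As long as nothing outside `appLoader` keeps a reference to it or to the classes it defined, the loader can be collected after undeployment; the next section shows how that assumption breaks.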
As described in Frank’s blog, there are situations in which Classloaders cannot be garbage-collected because of dangling references to them through most unexpected places, and this will cause a memory leak in the PermGen space (a special section of heap). To find the cause of this problem, I used JDK 6.0’s jmap and jhat utilities to generate and analyze a memory dump, respectively.
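As a concrete (hypothetical) illustration of such a dangling reference, imagine a registry class loaded by the server's shared classloader that holds on to listener objects created by a deployed application. `SharedRegistry`, `LISTENERS`, and `register` are invented names for this sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "dangling reference" pattern: code loaded by the
// server's shared classloader keeps strong references to objects
// whose classes were loaded by a per-application classloader.
class SharedRegistry {
    // lives in the server's loader, so it survives any undeployment
    static final List<Object> LISTENERS = new ArrayList<Object>();
}

public class LeakDemo {
    static int register(Object appListener) {
        SharedRegistry.LISTENERS.add(appListener);
        return SharedRegistry.LISTENERS.size();
    }

    public static void main(String[] args) {
        // pretend this object's class was loaded by the webapp's classloader:
        Object listener = new Object() {};
        register(listener);
        // After undeploy, LISTENERS still reaches listener -> its Class ->
        // its ClassLoader -> every class that loader defined, so none of it
        // can be garbage-collected and PermGen fills up over redeployments.
        System.out.println(SharedRegistry.LISTENERS.size());
    }
}
```

This is exactly the kind of path jhat's reference chains reveal: a single strong reference from the wrong side of the classloader boundary pins the entire application in memory.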
Fixing the problem
First of all we need to trigger the problem: deploy your application, undeploy it again, and then look for classes that are still loaded but shouldn’t be. When you find such classes you can trace back the strong references to them from classes in another classloader. There are several tools for that. The easiest (as explained on the blog of Frank Kieviet) is to get a memory dump using jmap. For that we need to know the PID of the process, using jps:

jps

This will return a list of PIDs, one of which is your application server’s:
For GlassFish it is called PELaunch. For other containers I have no idea.
Using that PID we can get a memory dump using jmap (included in JDK 6):
jmap -dump:format=b,file=leak 1824
Where 1824 is the PID of the container you’re running and leak is the file you want the dump to be written to.
Then you can run jhat (included in JDK 6) to browse that dump:
jhat -J-Xmx512m leak
leak is the same file you’ve written the memory dump to. This will start a server you can reach by pointing your browser to http://localhost:7000 (7000 is the default port). Now you can browse the classes and find those that are leaking. A good idea is to look into the "exclude weak refs" links. Even better is to use the modified jhat code by Edward Chou that you can find on his blog. It also lets you list a hierarchy of classloaders (Show ClassLoader Hierarchy (added)), and it adds an extra link that shows only strong references from classes in a different classloader (Exclude weak refs, filter enabled (added)).
I have now found a more visual approach at http://dev.eclipse.org/blogs/memoryanalyzer/2008/05/17/the-unknown-generation-perm/. It’s an Eclipse plugin that visualises the heap dump created with jmap (give the file a .hprof extension so it’s recognized by the tool).
Check the following resources if you want to get into this problem: