I'm getting OutOfMemoryError

If your Jenkins has started choking with OutOfMemoryError, there are four possibilities.

  1. Your Jenkins is growing in data size, requiring a bigger heap space. In this case you just want to give it a bigger heap (see the sketch after this list for how to pass a larger heap to the JVM).
  2. Your Jenkins is temporarily processing a large amount of data (like test reports), requiring more headroom in memory. In this case, too, you just want to give it a bigger heap.
  3. Your Jenkins is leaking memory, in which case we need to fix that.
  4. The Operating System kernel is running out of virtual memory.
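
How you pass a larger heap depends on how Jenkins is launched; the lines below are only a sketch, and the exact file names and paths (jenkins.war, /etc/default/jenkins, jenkins.xml) may differ on your installation.

      # Standalone launch: pass the heap flags directly to the JVM
      java -Xms512m -Xmx2g -jar jenkins.war

      # Debian/Ubuntu package: extend JAVA_ARGS in /etc/default/jenkins
      JAVA_ARGS="-Xms512m -Xmx2g"

      # Windows service: add the same flags to the <arguments> element in jenkins.xml

The -Xms/-Xmx values above are placeholders; pick sizes based on what you observe in a profiler or the GC log rather than guessing.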

Which category your OutOfMemoryError falls into is not always obvious, but here are a few useful techniques to diagnose the problem.

  1. Use VisualVM, attach to the running instance, and observe the memory usage. Does the memory max out while loading Jenkins? If so, it probably just needs a bigger heap. Or is it slowly creeping up? If so, it may be a memory leak.
  2. Do you consistently see OOME around the same phase in a build? If so, maybe it just needs a bigger heap.
  3. In cases where virtual memory is running short, the kernel OOM (Out of Memory) killer may forcibly kill Jenkins or individual builds. If this occurs on Linux you may see builds terminate with exit code 137 (128 + the signal number for SIGKILL), and the dmesg output will contain log messages confirming the action the kernel took (see the sketch after this list).
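
For example, on Linux you can check for OOM killer activity with something like the following (a sketch; the exact kernel log wording varies by distribution and kernel version):

      # A build killed by the OOM killer typically exits with 137 (128 + 9 for SIGKILL)
      echo $?    # run immediately after the failed build step

      # Look for OOM killer entries in the kernel log
      dmesg | grep -iE 'out of memory|killed process'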

If you think it's a memory leak, the Jenkins team needs a heap dump to be able to fix the problem. There are several ways to obtain one (a combined sketch follows the list).

  • Run the JVM with -XX:+HeapDumpOnOutOfMemoryError so that it automatically produces a heap dump when it hits OutOfMemoryError.
  • Run jmap -dump:format=b,file=heap.hprof <pid>, where <pid> is the process ID of the target Java process. Please only do this on Java 6 or newer, as earlier versions have issues.
  • Use VisualVM, attach to the running instance, and obtain a heap dump
  • If your Jenkins runs at http://server/jenkins/, request http://server/jenkins/heapDump with your browser and you'll get the heap dump downloaded. (1.395 and newer)
  • If you are familiar with one of many Java profilers, they normally offer this capability, too.
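
A combined sketch of the first three options; the file names, the /var/tmp path, and the 2g heap are only examples:

      # Have the JVM write a dump automatically when it hits OOME
      java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp -jar jenkins.war

      # Take a dump from an already-running process (Java 6 or newer)
      jmap -dump:format=b,file=/var/tmp/jenkins-heap.hprof <pid>

      # Download a dump through Jenkins itself (1.395 and newer);
      # add --user USER:APITOKEN if your instance requires authentication
      curl -o jenkins-heap.hprof http://server/jenkins/heapDump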

Once you obtain the heap dump, please post it somewhere, then open an issue (or look for a duplicate issue), and attach a pointer to it. Please be aware that heap dumps may contain confidential information of various sorts.

In the past, the distributed build support has often been a source of leaks (as it involves distributed garbage collection). To check for this possibility, visit links like http://yourserver/jenkins/computer/YOURSLAVENAME/dumpExportTable. If this shows too many objects, they may be leaks.
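
If you want to watch this over time, something like the following gives a rough count of exported objects per slave (assuming anonymous read access; otherwise pass --user USER:APITOKEN to curl):

      # A count that keeps growing across builds suggests a leak in the export table
      curl -s http://yourserver/jenkins/computer/YOURSLAVENAME/dumpExportTable | wc -l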

Analyzing the heap dump yourself

If you cannot let us inspect your heap dump, we need to ask you to diagnose the leak yourself.

  • First, find the objects with the biggest retained size. Often they are various Maps, arrays, or buffers.
  • Next, find the path from that object to the GC root, so that you can see which Jenkins object owns those big objects (a quick-start sketch follows this list).
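
As a quick first pass before opening the dump in a GUI tool (VisualVM or Eclipse MAT, which is where you would follow the path to the GC root), a class histogram of the live process already highlights suspiciously large Maps, arrays, and buffers; <pid> below is the Jenkins process ID:

      # Shallow-size histogram of live objects, largest classes first (Java 6 or newer)
      jmap -histo:live <pid> | head -n 25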

Report the summary of those findings to the list and we'll take it from there.

  1. Apr 11, 2012

    Frank Merrow says:

    Could you please add instructions for how to increase the heap space?

    I assume this is the -Xmx parameter in Jenkins.xml . . . is that correct?

    Do you have a recommended size based on number of jobs (or some other criteria)?

    The default of -Xmx256m seems like a lot . . . some guidance on expected size would be nice.

    I expect this is trivial for you Java Gods, but not so much for us C# folks.  <wink>

    1. May 09, 2012

      rjohnst - says:

      Hi Frank

      We're using Tomcat as our container, so the way we do it is to add a setenv.sh script to our tomcat's bin directory, wherein we do the following things:

      CATALINA_OPTS="$CATALINA_OPTS -server"
      CATALINA_OPTS="$CATALINA_OPTS -Xms4096m"
      CATALINA_OPTS="$CATALINA_OPTS -Xmx4096m"
      CATALINA_OPTS="$CATALINA_OPTS -Xmn1024m"
      CATALINA_OPTS="$CATALINA_OPTS -XX:PermSize=128m"
      CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=128m"
      CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails"
      CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCApplicationConcurrentTime"
      CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCApplicationStoppedTime"
      CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDateStamps"
      CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintHeapAtGC"
      CATALINA_OPTS="$CATALINA_OPTS -Xloggc:/path/to/jenkins-garbage-collection.log"
      CATALINA_OPTS="$CATALINA_OPTS -Djava.awt.headless=true" 

      At any one time we'll have between 400 and 600 jobs active. Watching the garbage collection log is really the only way to tell what you need to do, not only to figure out how much memory to give Jenkins but also how to allocate that memory to prevent the JVM from doing full GCs all the time, which will slow your Jenkins down...

  2. Jun 08, 2012

    Rob de Heer says:

    I'm seeing a similar issue. We are using Jenkins 1.435 with six executors. All build jobs are slaved to the build server. When I start the slave agent on the build server, it immediately grabs about 4g of memory, and climbs from there. PID 3886 is the build slave agent.

     VIRT   PID USER      PR  NI  RES  SHR S %CPU %MEM    TIME+  SWAP COMMAND                                                                                                                                       
    6418m  3886 svnbuild  20   0 685m 4332 S  9.0  4.3  47:07.20 5.6g java                                                                                                                                         
    4041m  6392 svnbuild  20   0 769m 4912 S  4.0  4.8  53:43.42 3.2g java                                                                                                                                         
    2033m  1931 root      20   0    8    4 S  0.0  0.0   0:00.48 2.0g console-kit-dae                                                                                                                               
     977m  9385 svnbuild  20   0 141m 4732 S  0.0  0.9  66:26.71 835m java   

    The biggest problem we are seeing is that jobs are killed randomly, possibly because they can't get more memory. 

    I haven't combed through the heap to see what's taking most of the space. 

    Thanks, 
    Rob

    1. Jun 25, 2012

      Rob de Heer says:

      We are still seeing this problem. After doubling available memory from 16 to 32 gig, jenkins grabs almost all of the available memory. Here is a heap dump from the main slave process that runs most of our builds (about 100). 

      https://docs.google.com/open?id=0B1yvL2GloBmIQ1o0TkV2YmZOdDA

      Here's the dumpExportTable https://docs.google.com/open?id=0B1yvL2GloBmIZ1ZqUWpjM1JpLWs

      We also found that setting xmx on the slave to 4 gig slowed jenkins to a crawl. 

      Any insights would be appreciated.

      Rob

  3. Sep 04, 2013

    alexanderlink - says:

    There is another type of OutOfMemoryError which is not necessarily related to memory: "java.lang.OutOfMemoryError: unable to create new native thread".

    We currently struggle with this since on our Mac for some reason the native thread limit is about 2000 (in tests I got this exception after 2025 java threads) and our Jenkins has about 1800 threads and more.