Java & Docker: Java 10 improvements strengthen the friendship!

Until JDK 10, running Java applications in Linux containers was a bit tricky and required additional setup to avoid surprises. These issues do not affect only older versions of Java (prior to 10): they also affect tools that collect information from the execution environment, such as top, free and ps. That's because these tools, and even the JVM itself, were implemented before cgroups existed, and are therefore not optimized for running inside a container.
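
To illustrate the point about system tools, here is a quick check, sketched against the same 2GB Docker for Mac setup used later in this article: /proc/meminfo (the file that free and top read) still describes the whole Docker VM rather than the container.

docker container run -it -m=256M --entrypoint bash openjdk:8
# grep MemTotal /proc/meminfo

MemTotal comes back at roughly 2GB, not the 256MB granted to the container.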

Back to basics

Before digging deeper into the problem, let's start with a quick recap of the basics behind Docker containerization. Docker relies on two main Linux kernel components: namespaces and cgroups.

Anyone familiar with chroot already has a basic idea of what Linux namespaces can do and how they are generally used. Just as chroot allows processes to see an arbitrary directory as the root of the system (independently of the rest of the processes), Linux namespaces allow other aspects of the operating system to be independently modified as well. This includes the process tree, networking interfaces, mount points, inter-process communication resources and more. In Docker, namespaces isolate the containerized process from the other processes running on the same Docker machine.
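
A quick way to see the PID namespace at work (a minimal sketch; any small image with a shell will do, the stock ubuntu image is just an assumption here): the shell started by Docker believes it is process number 1, regardless of what else is running on the host.

docker container run --rm ubuntu sh -c 'echo $$'
1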

On the other hand, cgroups (i.e. control groups) provide a facility to limit the resource consumption of processes in a hierarchical way. cgroups allow allocating resources such as CPU time, system memory and network bandwidth, or combinations of these resources, to groups of processes.
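
Docker flags such as -m, -c and --cpuset-cpus are translated into cgroup settings for the container. As a rough check (a sketch assuming cgroups v1, the default at the time of writing), the limit set with -m=256M shows up directly in the container's cgroup filesystem:

docker container run -it -m=256M --entrypoint bash openjdk:8
# cat /sys/fs/cgroup/memory/memory.limit_in_bytes
268435456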

Java and Docker (Prior to 10): Not very good friends!

Java processes running inside Linux containers don't behave as expected when we let the JVM ergonomics set the default values for the garbage collector, heap size, and runtime compiler. When we execute a Java application without any tuning parameters, the JVM adjusts several parameters by itself to get the best performance out of the execution environment. Let's take the following example, using a Docker for Mac installation with 2GB of memory and 4 CPUs:

docker container run -it -m=256M --entrypoint bash openjdk:8
# docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize

    uintx MaxHeapSize                              := 524288000                           {product}
openjdk version "1.8.0_171"

Unless specified explicitly with the -Xmx option, the JVM allocates one-fourth of the system's memory for heap space, which explains the output above: the max heap size is about 500MB, roughly one-fourth of the host's 2GB, instead of being derived from the 256MB limit set on the container. These values are extracted directly from the underlying host rather than from the Docker container. In other words, regardless of how many containers are running in parallel or which CPU and/or memory limits are set on a particular container, the JVM always sees the configuration of the Docker host itself.

Furthermore, when the “-m 256M” parameter is specified, the Docker daemon limits memory usage to 256MB of RAM plus 256MB of swap. As a result, our Java process can allocate no more than 512MB in total; beyond that it is killed, leading to an Out of Memory failure.
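
Before Java 10, the usual way around this was to size the heap explicitly instead of trusting the ergonomics. A minimal sketch under the same 256MB container limit (the -Xmx value is only an illustrative choice):

docker container run -it -m=256M --entrypoint bash openjdk:8
# docker-java-home/bin/java -Xmx128M -XX:+PrintFlagsFinal -version | grep MaxHeapSize

    uintx MaxHeapSize                              := 134217728                           {product}

Capping the heap at 128MB leaves room for the JVM's non-heap memory within the 256MB budget.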

It is also worth mentioning that CPU limits can affect a Java application in various ways. From the JVM's perspective, the number of GC threads and JIT compiler threads is derived from the number of available processors, unless these are specified explicitly via JVM options.
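
Those thread pools can be pinned explicitly, which was a common mitigation before container awareness arrived. A sketch with illustrative values, again on the Java 8 image:

docker container run -it -m=256M --entrypoint bash openjdk:8
# docker-java-home/bin/java -XX:ParallelGCThreads=2 -XX:CICompilerCount=2 -version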

Java 10 <3 Docker

One of the early solutions to the problems above was the use of the base Docker image provided by the Fabric8 community. As for OpenJDK, efforts started in Java 8 (update 131) and continued in Java 9, but the problem was only fully solved in Java 10. Applying CPU and memory limits to our containerized JVMs is now straightforward: the JVM detects the hardware capability of the container correctly, tunes itself appropriately and gives the application an accurate picture of the available capacity. As a result, not only CPU Sets but also CPU Shares are now examined by the JVM. Furthermore, this is the default behaviour and can only be disabled via the -XX:-UseContainerSupport option.
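
For reference, the interim workaround that shipped with Java 8u131 and Java 9 was an experimental flag that makes the heap ergonomics honour the cgroup memory limit (later deprecated in favour of UseContainerSupport). A sketch on the Java 8 image:

docker container run -it -m=256M --entrypoint bash openjdk:8
# docker-java-home/bin/java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:+PrintFlagsFinal -version | grep MaxHeapSize

With these options, MaxHeapSize is derived from the container's 256MB limit (one-fourth of it by default) instead of the host's 2GB.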

Memory limits and available CPUs

Since Java 10 is container-aware, resource limits should take effect without any explicit configuration.

docker container run -it -m=512M --cpuset-cpus 0 --entrypoint bash openjdk:10
$ jshell

jshell> Runtime runtime = Runtime.getRuntime();
runtime ==> java.lang.Runtime@64bf3bbf

jshell> runtime.availableProcessors();
$2 ==> 1

jshell> runtime.maxMemory() / 1024 / 1024;
$3 ==> 123

The previous snippet shows that CPU Sets are handled correctly. Now let's try setting CPU Shares:

docker container run -it -m=512M -c=512 --entrypoint bash openjdk:10
$ jshell

jshell> Runtime.getRuntime().availableProcessors();
$1 ==> 1

As you can see, it’s working as intended ;) Magic!

Using -XX:-UseContainerSupport

The -XX:-UseContainerSupport option allows container support to be disabled. The UseContainerSupport flag defaults to true, i.e. container support is enabled by default.

To see it in action, I'm using the following Java class:

public class DockerJava10 {
  public static void main(String[] args) throws InterruptedException {
    Runtime runtime = Runtime.getRuntime();
    int  cpus = runtime.availableProcessors();
    long mmax = runtime.maxMemory() / 1024 / 1024;
    System.out.println("Cores : " + cpus);
    System.out.println("Memory: " + mmax);
  }
}

Running the above code results in:

docker container run -it -m=512M -c=512 -v $PWD/DockerJava10.java:/DockerJava10.java --entrypoint bash openjdk:10

$ docker-java-home/bin/javac DockerJava10.java
$ docker-java-home/bin/java -XX:-UseContainerSupport DockerJava10

Cores : 4
Memory: 500

This time, the JVM reads the configuration from the Docker host.
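
For comparison, running the same class without the flag in the same container should mirror the jshell sessions above (a sketch; the figures below are inferred from those earlier runs):

$ docker-java-home/bin/java DockerJava10

Cores : 1
Memory: 123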

Resources: