Advent calendar: 1001 ways from source code to Docker image

„Advent, Advent, ein Lichtlein brennt." ("Advent, Advent, a little light is burning.") – German saying
For me, the first advent Sunday marks the beginning of the Christmas season: thinking of presents, planning the end of the year, figuring out how to celebrate Christmas and New Year's Eve with family and friends, and a lot of stressful days. The days before Christmas are usually completely booked with work, planning and a lot of shopping. This year I promised myself to also include some tech topics, since shopping for presents is all online this time and I have already saved some time.
Last year, some of my colleagues ran a small advent calendar on Twitter, presenting tools, books, tips and tricks they use every day in their work. I loved the idea, so this year I asked colleagues whether they were interested in writing a small post for each advent Sunday about tools, ideas, tricks and all the little things they use to spend more time with their families and less time deploying code to production, planning the next retrospective or creating a new showcase for machine learning frameworks. A lot of my colleagues loved the idea. Over the coming four weeks, I will publish a post each advent Sunday about a certain field of work where other Novateccies have something to share.
This week, I asked software engineers as well as cloud trainers and users how they get their applications to the cloud, how they create containers, and whether they still use self-managed Dockerfiles for this. Together with Matthias Haeussler (better known as maeddes) we collected ways of working with Docker that you might want to look at and check whether they fit your case.
Multistage builds and buildx – Advanced Docker builds
I guess most software engineers, especially those who have had contact with cloud computing and cloud native applications, know Docker and Dockerfiles. And hopefully most of them build apps according to the twelve-factor guidelines. Factor 5 is especially interesting when looking at Dockerfiles: strictly separate the build and run stages of an application. This is often difficult, especially when you want to build an application in a container and then ship only the resulting binary. One option is to use two separate Dockerfiles, one that builds the application and one that takes the jar and runs it. But who wants to manage two Dockerfiles that are almost always run together? To really solve this, Docker offers another way: multistage builds. The Dockerfile below shows such a multistage build in action:
# build stage based on maven
FROM maven:3 as build
COPY . /src
RUN mvn clean package -Dspring.profiles.active=default -f src/

# openjdk 11 as base image for the run stage
FROM adoptopenjdk/openjdk11-openj9:jre as run
WORKDIR /app
# reference the target folder from the build stage (maven output lands in /src/target)
COPY --from=build /src/target/app.jar .
# assume app.jar as the final build name in maven
ENTRYPOINT ["java", "-jar", "app.jar"]
The advantage: no volume sharing or similar tricks are needed. Docker can access files from the build stage without any extra work from the developer, and all of this happens in one Dockerfile.
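To build the image, a plain docker build is enough. If you want to debug the build stage on its own, the --target flag stops after the named stage. A quick sketch, using the stage and image names from the example above:

$ docker build -t repo-name/image-name:tag .
$ docker build --target build -t repo-name/image-name:build .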
If this is not enough for you and you want more build options and features, such as automatic garbage collection or more complex branching in your multistage files, have a look at buildx and BuildKit. BuildKit allows for more complicated build setups for your container landscape, while buildx extends the Docker CLI with the features of BuildKit.
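As a small sketch of what that enables (the image name is only a placeholder), buildx can build and push a multi-platform image in a single command, which a plain docker build cannot do:

$ docker buildx create --use
$ docker buildx build --platform linux/amd64,linux/arm64 -t repo-name/image-name:tag --push .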
Paketo Buildpacks
With Paketo (paketo.io) there is an option to build container images without using any Dockerfiles at all. The underlying concept is based on the buildpack technology. Buildpacks were first conceived by Heroku in 2011. Since then, they have been adopted by Cloud Foundry and other PaaS offerings such as Google App Engine, GitLab, Knative and more. The basic idea is that the buildpack mechanism transforms your application source code into images and takes this duty away from the developers.
The advantage of this approach is a standardized mechanism for image construction, covering the selection of the base image, the application runtime and all other required container image building blocks. The disadvantage is that you are limited to the scope of available language runtimes.
Paketo Buildpacks provide a range of language runtime support for applications. They leverage the Cloud Native Buildpacks framework (buildpacks.io) to make image builds easy, performant, and secure.
One way to apply Paketo is to install the CLI tool 'pack'. For a Java application based on Maven or Gradle you can execute the pack command directly in the project's base directory and let Paketo do the rest. It requires a local Docker environment and will download and run a so-called builder image. Within this container your code is built and put into the resulting container image. The mechanism creates the individual image layers, and subsequent builds are much faster because only the changed layers are exchanged.
To execute it, simply run:

$ pack build repo-name/image-name:tag
The CLI works in an equivalent way for various programming languages. Refer to the docs for more details.
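If you do not want to rely on the default builder, pack also lets you pick one explicitly via the --builder flag. A hedged example, using one of the builders Paketo provides (the exact builder name may differ depending on the version you use):

$ pack build repo-name/image-name:tag --builder paketobuildpacks/builder:base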
In the special case of a Spring Boot application it is even easier, as buildpack support is part of the Spring Boot Maven plugin starting from version 2.3.
$ mvn spring-boot:build-image (optional: -Dspring-boot.build-image.imageName=repo-name/image-name:tag)
will not only build your source code, but also provide a resulting container image.
Jib
Jib is another mechanism to build container images. It comes from Google and, as the name already implies, its focus is on Java applications. It can easily be applied as a Maven or Gradle plugin. Information about Jib can be found on its GitHub page.
The main difference from Dockerfiles and Paketo is that it does not even require a local Docker daemon to build the image. Similar to Paketo, it also splits the build into layers, separating classes and dependencies. Hence it is also very fast in subsequent builds of applications and containers.
These layers, by default, are layered on top of a distroless base image.
In the case of a Maven setup, you need to authenticate to your container registry and invoke
$ mvn compile com.google.cloud.tools:jib-maven-plugin:2.6.0:build -Dimage=repo-name/image-name:tag
Alternatively, you can also build the image into your local Docker daemon instead of a registry. In that case use the dockerBuild goal instead of build; the '-Dimage' part can then be omitted.
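A quick sketch of that variant, assuming the same plugin version as above; without '-Dimage', Jib derives the image name from the Maven project coordinates:

$ mvn compile com.google.cloud.tools:jib-maven-plugin:2.6.0:dockerBuild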
If you plan to use it more frequently, it probably makes sense to add the plugin to your pom.xml or build.gradle file.
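For the Maven case, a minimal sketch of such a plugin entry could look like the following (the target image is again just a placeholder); afterwards a plain mvn compile jib:build is enough:

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>2.6.0</version>
  <configuration>
    <to>
      <image>repo-name/image-name:tag</image>
    </to>
  </configuration>
</plugin>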
Again, the docs are very helpful and provide examples.
buildah
Maybe you don't want to build Docker containers all the time. Nowadays most platforms don't care whether you build your container with Docker or any other software, as long as it follows the OCI standard. Internally, Docker builds exactly such OCI-standardized images anyway. For this scenario, buildah can be used. Buildah is a far smaller application that can build container images as well as run containers from them. It can build images from an existing Dockerfile as well as create an image from an existing container, all without the whole Docker stack installed locally. The benefit is that buildah focuses purely on building images. If you want to manage images, tag them and version them in some way, podman is a tool worth looking into.
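As a small example (the image name is a placeholder), buildah can consume an existing Dockerfile with its bud ("build-using-dockerfile") command and list the resulting images afterwards:

$ buildah bud -t repo-name/image-name:tag .
$ buildah images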
dive – Inspect what your Images look like
With all these tools to build new images, one thing is missing – transparency. What did I include in my image? Why is the new container so large? And so on. Usually a developer just looks into the Dockerfile, finds out what goes into the image build and can answer all of those questions. But with Paketo, Jib, buildah and the like, this gets much more difficult. This is where dive comes to the rescue. Dive allows you to inspect images and see exactly what is included in each layer of your container. In my private projects, I use Paketo and its Spring Boot integration to build images without writing my own Dockerfiles. This makes developing much faster and I don't have to manage a Dockerfile myself. Dive lets me see in an easy way what is included in each layer, without having to read the raw OCI JSON output, and shows me why the images change in size compared with a Dockerfile-based build.
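Using it is a one-liner (image name again a placeholder); dive then opens an interactive view of every layer and the files it adds or changes:

$ dive repo-name/image-name:tag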
Final Remarks
I hope that the presented ways help you on your journey to cloud native applications and containers. This list is of course not complete when it comes to options for building container images.
Have a nice first advent Sunday and enjoy the Christmas season. Hope to see you next week, when I present a bunch of efficiency tips and tools collected from my colleagues.
Authors
buildah, dive, buildx and multistage builds – Corvin Schapöhler
Paketo Buildpacks, Jib – Matthias Haeussler