Jeff Mesnil

Deliver Go applications as containers

October 31, 2024

I switched to an Apple Silicon laptop running on an ARM architecture, and I often want to develop Go applications that run both on ARM (so I can run them on my laptop) and on Intel.

Go makes it very easy to target different architectures at build time, but that makes the delivery of the software more complex as I have to provide multiple binaries (Linux on ARM, Linux on Intel, Darwin on ARM, Darwin on Intel, etc.). There are tools for this, such as GoReleaser, but they still make the deployment of the software more complex. A potential solution is a script that determines the OS & architecture of the target platform and then downloads the appropriate executable.
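For example, building a hypothetical myapp module for several targets is just a matter of setting the GOOS and GOARCH environment variables at build time (illustrative commands, the binary names are arbitrary):

$ GOOS=linux GOARCH=amd64 go build -o build/myapp-linux-amd64 .
$ GOOS=linux GOARCH=arm64 go build -o build/myapp-linux-arm64 .
$ GOOS=darwin GOARCH=arm64 go build -o build/myapp-darwin-arm64 .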

Another solution, which I am using more and more often, is to deliver the software as a multi-arch container image. The user then just has to pull the image and run it with Podman or Docker.

As a simplistic example, let's say I need to write a Go application that computes the SHA-256 checksum of strings.
To do so, I can create a Go module with a simple checksum application:

$ mkdir checksum
$ cd checksum
$ go mod init checksum
$ mkdir cmd
$ touch cmd/checksum.go

The content of cmd/checksum.go is:

package main

import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "os"
)

func main() {

        if len(os.Args) == 1 {
                fmt.Printf("No arguments\n")
                fmt.Printf("Usage: checksum <list of strings to hash>\n")
                os.Exit(1)
        }

        strs := os.Args[1:]

        hash := sha256.New()

        for i, str := range strs {
                hash.Reset()
                hash.Write([]byte(str))
                checksum := hash.Sum(nil)
                if i > 0 {
                        fmt.Printf(" ")
                }
                fmt.Printf("%s", hex.EncodeToString(checksum))
        }
        fmt.Printf("\n")
}

I can test that the application is working as expected by running it with go run:

$ go run ./cmd/checksum.go foo
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae
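As a sanity check, the same digest is reported by a standard tool (assuming shasum is available, as it is on macOS):

$ echo -n foo | shasum -a 256
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae  -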

Now, I need to create a container that provides this application for both the ARM and Intel architectures.

I need a simple Containerfile to do so:

FROM golang:1.23 AS go-builder

WORKDIR /workspace/
COPY . .
RUN GOOS=linux go build -o ./build/checksum ./cmd/checksum.go

FROM scratch

COPY --from=go-builder /workspace/build/checksum /
ENTRYPOINT [ "/checksum" ]

The container build is done in two stages:

  1. I use the golang:1.23 builder to compile the code, targeting the linux operating system.
  2. I create an image from scratch that only contains the executable compiled from the first stage.

Then I can use podman to build a multi-arch image (for both linux/amd64 and linux/arm64):

$ podman build --platform linux/amd64,linux/arm64 --manifest localhost/checksum .

The resulting localhost/checksum image is small and contains only the checksum executable.

I can run it locally with podman:

$ podman run localhost/checksum foo

2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae

Podman will run the linux/arm64 image on my ARM laptop, but a user on an Intel machine would get the linux/amd64 image. I can force Podman to use the Intel variant on my ARM laptop and it runs fine too (with a warning that the image does not match my platform):

$ podman run --platform linux/amd64 localhost/checksum foo

WARNING: image platform (linux/arm64) does not match the expected platform (linux/amd64)
2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae

At this point, to make the application available to others, I just need to push it to a container registry such as Quay.io or ghcr.io and they will be able to use it as I do on my laptop.
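For instance, pushing the multi-arch manifest (and all the images it references) to a hypothetical Quay.io repository could look like this:

$ podman manifest push --all localhost/checksum docker://quay.io/<user>/checksum:latest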

This solution works fine for programs that don't need heavy integration with the host operating system. If my application needed to access the file system, I would have to mount directories with -v to make them available inside the container. If the integration with the host becomes more complex, it would be better to provide a shell script that pulls the image and runs Podman with the right configuration parameters.

Dikembe Mutombo (1966-2024)

October 1, 2024

Dikembe Mutombo All Star Card

I learnt yesterday the sad news that Dikembe Mutombo passed away at the age of 58 from brain cancer.

He started his career in 1991, just as I was getting interested in the NBA. He was a great player to watch as I enjoy the defensive side of basketball.

I fondly remember him falling to the floor after upsetting the Seattle SuperSonics of Gary Payton and Shawn Kemp. That was the first time an 8th seed won against a 1st seed, and his joy was contagious.

23 years later, I still have some of his player cards, which I scanned for this post.

Dikembe Mutombo Player Card

Next time I play with my kids, I'll show them how he wagged his finger after blocking an opponent's shot :)

20-year anniversary of jmesnil.net

September 12, 2024

I missed the date, but I posted the first article on this blog on June 11th, 2004.

Twenty years and 350 posts later, my server is still there, serving all the content I have written from the start.

I went through different hosts and visual presentations until I settled on an AWS S3 Bucket to host this static site built with Awestruct.

Out of nostalgia, I played with the Wayback Machine to remember how my web site looked over the years.

The initial version was hosted on Movable Type using their standard template. I made some terrible visual choices that I quickly replaced with a cluttered look, before stripping everything down to the current content-focused design.

On the technical side, I don't remember all the solutions, but I used Movable Type, BlogPost, and WordPress. Eventually, I switched to static generation of the content from Markdown pages in a Git repository, pushed to an AWS S3 Bucket.

The latest technical changes were using a container image to manage all the Ruby dependencies of Awestruct and adding a TLS certificate to the server.

I have had bursts of activity on my web site and long periods of neglect, but it is still great to have a place I can call my own on the Web...

My Professional Mission Statement

September 10, 2024

[This week is Career Week at my company and a great opportunity to reflect on my career and what lies ahead]

A few years ago, my manager, Stefano, introduced me to the concept of defining personal Mission and Vision statements to help me guide my career.

I updated my About page some time ago with my professional mission statement but never elaborated on it. This post is a good opportunity to do so.

My professional mission statement is:

  I aim to build software based on sustainability, openness, and humane values, driving towards a more equitable world.  

I'm an engineer at heart so building software is what I do and what I enjoy the most.

We are facing climate change that impacts all of humanity, and it is our duty to ourselves and to future generations to do as much as we can to mitigate it. In my profession, the best I can do is build sustainable software that has a minimal carbon impact on the planet.

There are many approaches to building software, but I only want to do it in a way that fosters openness, transparency and human-centered values. I have been fortunate to work for many years with great colleagues and users who are kind, knowledgeable, willing to listen, and able to give and receive constructive feedback.

I can't imagine switching to another working environment as I believe in a variant of Conway's law:

Organizations which design systems are constrained to produce designs which copy the behaviours of these organizations.

By acting with transparency and empathy, we are more likely to build software that creates a more open and humane world.

All I am doing professionally is making sure that I contribute to a better world for my kids, and the world I want for them is an equitable one where every individual has opportunities to grow, learn, and accomplish their goals.

I do not want to build software that benefits a few at the expense of the rest of humanity. That is what attracted me to Open Source development at the beginning of my career, and I still believe in it more than 20 years later.

Unfortunately, my mission statement is at odds with most of the IT industry, which aims for infinite growth at the expense of users, citizens, its own employees and the planet.

I have to live with that dichotomy and do my best to align the world with my values and beliefs. My mission statement is a simple effective way to never lose sight of what is truly meaningful to me.

Engineering is Problem-solving

September 9, 2024

Last weekend, I discussed with a friend the different approaches we had in our jobs.

I told him an anecdote from my study years.

I have a mathematical background and studied applied mathematics with a sprinkle of computing and applied physics.

During one trimester, our physics teacher taught us a single theorem (sadly, I can't remember which one...). We had an upcoming exam, and what could it be about if not this theorem? On the day of the exam, we all applied the theorem as expected... and we all failed.

When he went over the exam with us, our teacher explained that we had all applied the theorem to the stated problem without verifying that its constraints and boundaries were applicable. They were not, and the theorem was not a solution to the problem.

The teacher explained to us that he "tricked" us on purpose. His objective was not to make us learn and apply the theorem but to make us think by ourselves:

  1. First, understand the problem, its constraints, and its boundaries.
  2. Then see if there is an equation or a theorem that could apply to its resolution.
  3. Finally, use this tool to solve the problem optimally.

I told this anecdote to my friend because it might be the best advice I got during all my studies.

As an engineer, my main task is to solve "problems" that our users face with the toolkits at their disposal.

The toolkits in the IT industry are ever expanding (Cloud! Microservices! Blockchains! Now AI!) and an increasing part of my work is to figure out if a tool is relevant for a given problem.

The reasoning should follow the advice of my teacher: first understand the problem to solve, then find the solutions that can be applied to it and finally find the optimal solution for the problem.

Of course, it is easier said than done, as there is a strong industry push to pick a technical solution first and then apply it to any problem.

  • Let's move all our workloads to the cloud!
  • Let's add a chatbot to our applications!
  • Let's split our monolithic application into microservices!

These tools are fine and definitely suitable for many cases, but they are never universally applicable.

Following trends without ensuring they address the specific issue at hand can lead to wasted effort and resources, leaving us with all the downsides of a new technology without reaping its benefits.

Problem-solving is the most creative aspect of my engineering job, the one I enjoy the most and (talking about the latest IT trend) the one least likely to be replaced by artificial intelligence.

WildFly - The GitOps Way

March 5, 2024

We have improved the WildFly Maven plugin so that it can provision and configure the WildFly application server directly from the application source code. This makes it very simple to control the server configuration and management model and to make sure it is tailor-made for the application requirements.

This is a good model for DevOps teams, where a single team is responsible for both the development and the deployment of the application.

However, we have users with a different organisational structure, where the Development team and the Operations team work in silos.

In this article, we will show how to leverage the WildFly Maven plugin to handle the configuration and deployment of WildFly separately from the application development, in a loose GitOps manner.

Provision the WildFly Server

We will use a Maven project to control the application server installation and configuration.

mkdir wildfly-gitops
cd wildfly-gitops
touch pom.xml

The pom.xml configures the provisioning and configuration of WildFly:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.wildfly.gitops</groupId>
  <artifactId>wildfly-gitops</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <packaging>pom</packaging>
  <properties>
    <!-- Specify the version of WildFly to provision -->
    <version.wildfly>31.0.0.Final</version.wildfly>
    <version.wildfly.maven.plugin>4.2.2.Final</version.wildfly.maven.plugin>
  </properties>

  <build>
    <plugins>
      <plugin>
        <groupId>org.wildfly.plugins</groupId>
        <artifactId>wildfly-maven-plugin</artifactId>
        <version>${version.wildfly.maven.plugin}</version>
        <configuration>
          <feature-packs>
            <feature-pack>
              <groupId>org.wildfly</groupId>
              <artifactId>wildfly-galleon-pack</artifactId>
              <version>${version.wildfly}</version>
            </feature-pack>
          </feature-packs>
        </configuration>
        <executions>
          <execution>
            <id>provision-wildfly</id>
            <phase>package</phase>
            <goals>
              <goal>provision</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>

This pom.xml will provision (install and configure) WildFly. The version of WildFly is configured with the version.wildfly property (set to 31.0.0.Final in the snippet above).

Let's now install it with:

mvn clean package

Once the execution is finished, you have a WildFly server ready to run in target/server and you can run it with the command:

cd target/server
./bin/standalone.sh

The last log line shows that we indeed installed WildFly 31.0.0.Final:

13:21:52,651 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 31.0.0.Final (WildFly Core 23.0.1.Final) started in 1229ms - Started 280 of 522 services (317 services are lazy, passive or on-demand) - Server configuration file in use: standalone.xml

At this point you can init a Git repository from this wildfly-gitops directory and you have the foundation to manage WildFly in a GitOps way.

The Maven Plugin for WildFly provides rich features to configure WildFly, including:

  • using Galleon Layers to trim the server according to the deployment capabilities
  • running CLI scripts to configure its subsystems (for example, the Logging Guide illustrates how you can add a logging category for your own deployments; a minimal script is sketched below)
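As an illustration (not taken from the wildfly-gitops repository), such a CLI script could be as small as a configure-logging.cli file containing:

# illustrative script: add a logger category for our own deployments
/subsystem=logging/logger=org.wildfly.gitops:add(level=DEBUG)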

[Aside] Create Application Deployments

To illustrate how to manage the deployments of applications in this server without direct control over the application source code, we must first create these deployments.

When the Dev and Ops teams are separate, the Dev team would have done these steps and the Ops team would only need to know the Maven coordinates of the deployments.

For this purpose, we will compile and install two quickstart examples from WildFly in our local Maven repository:

cd /tmp
git clone --depth 1 --branch 31.0.0.Final https://github.com/wildfly/quickstart.git
cd quickstart
mvn clean install -pl helloworld,microprofile-config

We have only built the helloworld and microprofile-config quickstarts and put them in our local Maven repository.

We now have two deployments that we want to deploy in our WildFly Server with the Maven coordinates:

  • org.wildfly.quickstarts:helloworld:31.0.0.Final
  • org.wildfly.quickstarts:microprofile-config:31.0.0.Final

Assemble The WildFly Server With Deployments

Now that we have deployments to work with, let's see how we can include them in our WildFly server in a GitOps manner.

We will use a Maven assembly to control the deployments in our server. To do so, we will create an assembly.xml file in the wildfly-gitops directory:

<assembly xmlns="http://maven.apache.org/ASSEMBLY/2.1.1"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/ASSEMBLY/2.1.1 http://maven.apache.org/xsd/assembly-2.1.1.xsd">
  <id>gitops-server</id>
  <formats>
    <format>dir</format>
  </formats>
  <fileSets>
    <fileSet>
      <directory>target/server</directory>
      <outputDirectory/>
    </fileSet>
  </fileSets>

  <dependencySets>
    <dependencySet>
      <includes>
        <include>*:war</include>
      </includes>
      <outputDirectory>standalone/deployments</outputDirectory>
      <outputFileNameMapping>${artifact.artifactId}.${artifact.extension}</outputFileNameMapping>
    </dependencySet>
  </dependencySets>
</assembly>

All this verbose file does is create a directory composed of:

  • the content of target/server (which contains the WildFly Server)
  • any WAR dependency, copied into the standalone/deployments directory of this assembly
    • and renamed to xxx.war (instead of the full Maven coordinates)

We also need to update the pom.xml to use this assembly:

<project>
  [...]
  <build>
    [...]
    <plugins>
      [...]
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
          <finalName>wildfly</finalName>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
            <configuration>
              <descriptors>
                <descriptor>assembly.xml</descriptor>
              </descriptors>
            </configuration>
          </execution>
        </executions>
      </plugin>
  [...]
</project>

We can now run a Maven command to assemble our server:

mvn clean package

When the command is finished, we have an assembled server in target/wildfly-gitops-server/wildfly:

cd target/wildfly-gitops-server/wildfly
./bin/standalone.sh

NOTE: There are 2 different "servers" after mvn package is executed:

  • target/server contains the provisioned WildFly Server
  • target/wildfly-gitops-server/wildfly contains the WildFly server (copied from the previous directory) with any additional deployments.

But we did not add any deployment! Let's do it now.


In the wildfly-gitops/pom.xml, we will add a dependency to specify that we want to include the helloworld quickstart:

<project>
  [...]
  <dependencies>
    <dependency>
      <groupId>org.wildfly.quickstarts</groupId>
      <artifactId>helloworld</artifactId>
      <version>31.0.0.Final</version>
      <type>war</type>
    </dependency>
  </dependencies>

And that's it!

Let's now run mvn clean package once more.

If we now list the standalone/deployments directory of the assembled server, the helloworld.war deployment is listed:

ls target/wildfly-gitops-server/wildfly/standalone/deployments
README.txt                      helloworld.war

When we run the assembled server, the HelloWorld application is deployed and ready to run:

cd target/wildfly-gitops-server/wildfly
./bin/standalone.sh

...
14:01:25,307 INFO  [org.jboss.as.server] (ServerService Thread Pool -- 45) WFLYSRV0010: Deployed "helloworld.war" (runtime-name : "helloworld.war")

We can access the application by opening our browser at localhost:8080/helloworld/

At this stage, we have complete control of the WildFly server and the application(s) we want to deploy on it from this wildfly-gitops Git repository.

Let's see what we could do from here.

Add Another Deployment to The Server

We can now add the microprofile-config deployment to the assembled server by adding it as a dependency in the pom.xml:

<project>
  [...]
  <dependencies>
    [...]
    <dependency>
      <groupId>org.wildfly.quickstarts</groupId>
      <artifactId>microprofile-config</artifactId>
      <version>31.0.0.Final</version>
      <type>war</type>
    </dependency>
  </dependencies>

Let's package the server again and start it:

mvn clean package
cd target/wildfly-gitops-server/wildfly
CONFIG_PROP="Welcome to GitOps" ./bin/standalone.sh

The microprofile-config application is deployed and can be accessed from localhost:8080/microprofile-config/config/value

We have added deployments using Maven dependencies, but it is also possible to include them in the assembled server by other means (copying them from a local directory, fetching them from the Internet, etc.); an illustrative fileSet for the local-directory case is sketched below. The Assembly Plugin documentation provides additional information for this.
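For example, a purely illustrative extra fileSet in assembly.xml could copy pre-built WAR files from a hypothetical local-deployments directory of the Git repository into the server:

    <fileSet>
      <!-- hypothetical directory in the Git repository holding pre-built WARs -->
      <directory>local-deployments</directory>
      <outputDirectory>standalone/deployments</outputDirectory>
      <includes>
        <include>*.war</include>
      </includes>
    </fileSet>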

Update The WildFly Server

The version of WildFly that we are provisioning is specified in the pom.xml with the version.wildfly property. Let's change it to use a more recent version of WildFly, 31.0.1.Final:

<project>
  [...]
  <properties>
    <!-- Specify the version of WildFly to provision -->
    <version.wildfly>31.0.1.Final</version.wildfly>

We can repackage the server and see that it is now running WildFly 31.0.1.Final:

mvn clean package
cd target/wildfly-gitops-server/wildfly
./bin/standalone.sh
...
14:15:23,909 INFO  [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 31.0.1.Final (WildFly Core 23.0.3.Final) started in 1938ms - Started 458 of 678 services (327 services are lazy, passive or on-demand) - Server configuration file in use: standalone.xml

Use Dependabot to Be Notified of WildFly Updates

WildFly provisioning uses Maven artifacts. We can take advantage of this to add a "symbolic" dependency on the WildFly Galleon Pack artifact in our pom.xml so that Dependabot will periodically check for and propose updates when new versions of WildFly are available:

<project>
  [...]
  <dependencies>
    [...]
      <!-- We add the WildFly Galleon Pack as a provided POM dependency
           to be able to use dependabot to be notified of updates -->
      <dependency>
        <groupId>org.wildfly</groupId>
        <artifactId>wildfly-galleon-pack</artifactId>
        <version>${version.wildfly}</version>
        <type>pom</type>
        <scope>provided</scope>
      </dependency>

We use a provided scope as we don't want to actually pull this dependency into the server; it only ensures that Dependabot is aware of it and proposes updates when a new version is available.
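For reference, the Dependabot configuration itself is a small .github/dependabot.yml file; a minimal sketch for this kind of repository could be:

version: 2
updates:
  # watch the Maven dependencies (including the WildFly Galleon Pack) for updates
  - package-ecosystem: "maven"
    directory: "/"
    schedule:
      interval: "weekly"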

Summary

In this article, we showed how you can leverage the WildFly Maven Plugin to manage WildFly in a GitOps way that is not directly tied to the development of the applications that are deployed to the server.

The code snippets used in this article are available on GitHub at github.com/jmesnil/wildfly-gitops.

WildFly and the Twelve-factor App Methodology

September 13, 2023

In my daily job at Red Hat, I'm focused these days on making WildFly run great on container platforms such as Kubernetes.

"Traditional" way to develop and run applications with WildFly

WildFly is a "traditional" application server for Enterprise Java applications. To use it on the cloud, we are making it more flexible and closer to "cloud-native" applications so that it can be used to develop and run a 12-Factor App.

Traditionally, if you were using WildFly on your own machine, the (simplified) steps would be:

  1. Download the WildFly archive and unzip it
  2. Edit its XML configuration to match your application requirements
  3. In your code repository, build your deployment (WAR, EAR, etc.) with Maven
  4. Start WildFly
  5. Copy your deployment
  6. Run tests

At this point, your application is verified and ready to use.

There are a few caveats to be mindful of.

  • Whenever a new version of WildFly is available, you have to re-apply your configuration changes and verify that the resulting configuration is valid.
  • You run tests against your local download of WildFly with your local modifications. Are you sure that these changes are up to date with the production servers?
  • If you are developing multiple applications, are you using different WildFly downloads to test them separately?

"Cloudy" way to develop and run applications with WildFly

When you want to operate such an application on the cloud, you want to automate all these steps in a reproducible manner.

To achieve this, we inverted the traditional application server paradigm.

Before, WildFly was the top-level entity and you deployed your applications (i.e. WARs and EARs) on it. Now, your application is the top-level entity and you are in control of the WildFly runtime.

With that new paradigm, the steps to use WildFly on the cloud are now:

  1. In your code repository, configure WildFly runtime (using a Maven plugin)
  2. Use Maven to build your application
  3. Deploy your application in your target container platform

Step (2) is the key, as it automates and centralizes most of the "plumbing" that was previously done by hand.

If we decompose this step, it actually achieves the following:

  1. Compile your code and generate the deployment
  2. "Provision" WildFly: download it and change its configuration to match the application requirements
  3. Deploy your deployment in the provisioned WildFly server
  4. Run integration tests against the actual runtime (WildFly + your deployments) that will be used in production
  5. Optionally create a container image using docker

Compared side by side, the two ways to develop and run WildFly can look deceptively similar. However, a closer examination shows that the "Cloudy" way unlocks many improvements in terms of productivity, automation, testing and, at least in my opinion, developer joy.

What does WildFly provide for 12-Factor App development?

The key difference is that your Maven project (and its pom.xml) is the single 12-factor's Codebase that tracks your application. Everything (your application dependencies, the WildFly version and its configuration changes) is tracked in this repository. You are sure that what is built from this repository will always be consistent. You are also sure that the WildFly configuration is up to date with the production servers because your project is where that configuration is updated. You are not at risk of deploying your WAR in a different version of the server or in a server that has not been properly configured for your application.

Using the WildFly Maven Plugin to provision WildFly ensures that all your 12-factor's Dependencies are explicitly declared. Whenever a new version of WildFly is released, you can be notified with something like Dependabot and automatically test your application with this new release.

We have enhanced WildFly configuration capabilities so that you can store your 12-factor's Config in your environment. WildFly can now use environment variables to change any of its management attributes or resolve their expressions. Eclipse MicroProfile Config is also available to store any of your application config in the environment.
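As a minimal sketch (the class and property name are hypothetical), reading such a value from the environment with MicroProfile Config looks like this:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class GreetingService {

    // "greeting.message" is a hypothetical property; MicroProfile Config can also
    // resolve it from a GREETING_MESSAGE environment variable
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;
}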

Connecting to 12-factor's Backing Services is straightforward: with the datasources feature pack, WildFly is able to connect to a database with a few environment variables representing its URL and credentials.

Using the WildFly Maven Plugin in your pom.xml, you can simply have different stages for 12-factor's Build, release, run and make sure you build your release artifact (the application image) once and run it on your container platform as needed.

Enterprise Java applications are traditionally stateful, so they do not adhere to 12-factor's Processes unless you refactor your Java application to make it stateless.

WildFly complies with 12-factor's Port binding and you can rely on accessing its HTTP port on 8080 and its management interface on 9990.

Scaling out your application to handle 12-factor's Concurrency via the process model depends on your application architecture. However, WildFly can be provisioned in such a way that its runtime exactly fits your application requirements and "trims" any capabilities that are not needed. You can also split a monolithic Enterprise Java application into multiple applications to scale the parts that need it.

12-factor's Disposability is achieved thanks to WildFly's fast boot time as well as its graceful shutdown capabilities, which let applications finish their tasks before shutting down.

12-factor's Dev/prod parity is enabled by continuous deployment and a single codebase that keep the gap between what we develop and what we operate small. Using WildFly with a container-based testing tool (such as Testcontainers) ensures that what we test is very similar (if not identical) to what is operated.

WildFly has extensive logging capabilities (for its own runtime as well as your application code) and works out of the box with 12-factor's Logs by writing everything to the standard output. For advanced use cases, you can switch to a JSON formatter to query and monitor its logs.

12-factor's Admin processes has been there from the start: WildFly provides an extensive CLI tool to run management operations on a server (running or not). The same management operations can be executed when WildFly is provisioned by the WildFly Maven Plugin to adapt its configuration to your application.

Summary

We can develop and operate Enterprise Java applications with a modern software methodology. Some of these principles resonate more if you are targeting cloud environments, but most of them are still beneficial for traditional "bare metal" deployments.

I. Codebase

  • Your application pom.xml (in its Git repository) is the single codebase tracking the application code, its dependencies, and the WildFly version and configuration.

II. Dependencies

  • All dependencies (including WildFly) are managed by your application pom.xml.

III. Config

  • Use Eclipse MicroProfile Config and WildFly capabilities to read configuration from the environment.

IV. Backing Services

  • Jakarta EE is designed on this principle (e.g. JDBC, JMS, JCA, etc.).

V. Build, release, run

  • With a single mvn package, you can build your release artifact and deploy it wherever you want. The WildFly Maven Plugin can generate a directory or an application image to suit either bare-metal or container-based platforms.

VI. Processes

  • WildFly can run stateless applications, but you will have to design them this way :)

VII. Port Binding

  • 8080 for the application, 9990 for the management interface :)

VIII. Concurrency

  • Enterprise Java applications have traditionally scaled up, so some architecture and application changes are needed to make them scale out instead. The lightweight runtime provided by WildFly is definitely a good opportunity for scaling out Enterprise Java applications.

IX. Disposability

  • WildFly boots fast and gracefully shuts down.

X. Dev/prod parity

  • Use the WildFly Maven Plugin to control WildFly, container-based testing to reduce the integration disparity and keep changes between dev, staging and production to a minimum.

XI. Logs

  • WildFly outputs everything on the standard output. Optionally, you can use a JSON formatter to query and monitor your application logs.

XII. Admin processes

  • WildFly tooling provides CLI scripts to run management operations. You can store them in your codebase to handle configuration changes, migrations, and maintenance operations.

Conclusion

Using the "cloudy" way to develop and operate enterprise applications unlocks many benefits regardless of the deployment platform (container-based or bare metal).

It automates most of the mundane tasks that reduce developer joy and efficiency, while improving the day-to-day operations of WildFly and, with them, operator joy and efficiency.

TLS certificate on jmesnil.net

September 13, 2023

Web browsers now treat sites served over HTTP as "Not secure". I finally caved in and added a TLS certificate to jmesnil.net.

Displayed Padlock achievement: completed

If you are visiting jmesnil.net, you can now browse it safely and be sure that your credit card numbers will not be stolen. That's progress, I suppose...

I host my site on Amazon AWS and use a bunch of their services (S3, Route 53, CloudFront, Certificate Manager) to redirect the HTTP traffic to HTTPS (and the www.jmesnil.net URLs to jmesnil.net). I will see how much this increases the AWS bill...

More interestingly, I used Let's Encrypt to generate the certificates. It took me less than 5 minutes (including the ACME challenge to verify ownership of the jmesnil.net domain name). This project is a great example of making a complex technology simple and accessible to web publishers.

Health Update

July 3, 2023

On October 17th of last year, while playing basketball, I suffered a ruptured Achilles tendon.

Unfortunately, an initial misdiagnosis and a lengthy waitlist for the necessary medical examinations meant I had to postpone surgery until December 9th. The tendon rupture measured approximately 6cm, necessitating the use of tissue from adjacent areas of my foot to construct a completely new tendon.

This led to a period of immobilization lasting 45 days. Although the Christmas break was not particularly enjoyable, I consider myself fortunate to have an incredible wife and children who provided unwavering support, showering me with love and kindness throughout the ordeal. My managers and colleagues at Red Hat were also very supportive so that I could focus on my health during that period.

When my boot was finally removed, I caught sight of my foot for the first time, revealing a 15cm scar that I could proudly boast about if I were on the "Jaws" boat :)

Scar of my Achilles tendon © Jeff Mesnil

By the end of January, I cautiously began walking again, albeit with a noticeable limp. Since then, my rehabilitation has been a gradual journey with its fair share of ups and downs. Yesterday, I was able to run 5 km, but today climbing stairs causes discomfort. I am hopeful that I will achieve a full recovery. As a symbolic "endpoint" to my rehab, I have set a goal of running a half-marathon next year.

Although my competitive basketball days are over, I am still enthusiastic about playing with my kids and continuing to enjoy the sport. I'll play it less and watch it more :) Walking, running, and hiking at my own pace have become my main physical activities, whether I'm by myself or accompanied by friends and family. They give me energy, focus and a deep appreciation of a functional body.