Holly Cummins, a colleague at Red Hat and Java Champion, has been producing a lot of great content about making software greener.
Her presentation focuses on Quarkus, a greenfield Java framework focused on Cloud-native applications, while my job is on WildFly and JBoss EAP, Java application servers that predate the Cloud.
Her presentation resonates with me professionally and personally.
Professionally, my focus these days is on making WildFly and EAP run great on the Cloud. Doing so in a sustainable manner is important as it has direct impact on our users' costs and performance.
Personally, and more importantly, it resonates with my vision of my work and my life. My entire career has been built around Open Source projects. They fit my idea of a good, humane society with core values of Openness, Meritocracy and Accountability. Developing Open Source code is now a given for me and I don't envision other ways to perform my job.
The next frontier is to develop Sustainable code that reduces our impact on the planet. I deeply believe that any advancements we are making, whether in the Cloud, Cryptocurrency or Machine Learning, cannot come at the expense of the planet and of future generations.
It's not simple to see how that translates into my daily tasks, but we are past the point where sustainability can be left out of our guiding principles, and I'm making a deliberate, conscious choice to embrace it now at my own level.
It is quite a milestone as it is the first specification released by the Eclipse MicroProfile project. It covers a simple need: unifying the configuration of Java applications from various sources with a simple API:
@Inject
@ConfigProperty(name = "app.timeout", defaultValue = "5000")
long timeout;
The developer no longer needs to check for configuration files, System properties, etc. He or she just specifies the name of the configuration property (and an optional default value). The Eclipse MicroProfile Config specification ensures that several sources will be queried in a consistent order to find the most relevant value for the property.
This implementation passes the MicroProfile Config 1.0 TCK. It can be used by any CDI-aware container/server (i.e. one that is able to load CDI extensions).
This project also contains a WildFly extension so that any application deployed in WildFly can use the MicroProfile Config API. The microprofile-config subsystem can be used to configure various config sources, such as directory-based ones for OpenShift/Kubernetes config maps (as described in a previous post); the properties can also be stored in the microprofile-config subsystem itself:
Finally, a Fraction is available for WildFly Swarm so that any Swarm application can use the Config API as long as it depends on the appropriate Maven artifact:
It is planned that this org.wildfly.swarm:microprofile-config Fraction will eventually move to Swarm's own Git repository so that Swarm will be able to autodetect applications using the Config API and load the dependency automatically. But, for the time being, the dependency must be added explicitly.
If you have any issues or enhancements you want to propose for the WildFly MicroProfile Config implementation, do not hesitate to open issues or propose contributions for it.
I recently looked at the Eclipse MicroProfile Healthcheck API to investigate its support in WildFly. WildFly Swarm is providing the sample implementation, so I am very interested in making sure that WildFly and Swarm can both benefit from this specification.
This specification and its API are still being designed and anything written in this post will likely be obsolete when the final version is released. But without further ado...
Eclipse MicroProfile Healthcheck
The Eclipse MicroProfile Healthcheck is a specification to determine the healthiness of an application. It defines a HealthCheckProcedure interface that can be implemented by an application developer. It contains a single method that returns a HealthStatus: either UP or DOWN (plus some optional metadata relevant to the health check). Typically, an application would provide one or more health check procedures to check the healthiness of its parts. The overall healthiness of the application is then determined by the aggregation of all the procedures provided by the application. If any procedure is DOWN, the overall outcome is DOWN. Otherwise, the application is considered UP.
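Since the specification is still in flux, here is a minimal standalone sketch of that aggregation rule in plain Java. The HealthStatus enum and HealthCheckProcedure interface below are simplified stand-ins, not the actual (still-evolving) spec types:

```java
import java.util.List;

// Minimal sketch of the Healthcheck aggregation rule: the overall
// outcome is DOWN as soon as any single procedure reports DOWN.
// These types are illustrative stand-ins for the spec's API.
public class HealthAggregation {

    enum HealthStatus { UP, DOWN }

    // A health check procedure reduced to its essence: a single
    // method returning UP or DOWN.
    interface HealthCheckProcedure {
        HealthStatus perform();
    }

    // Overall outcome: DOWN if any procedure is DOWN, UP otherwise.
    static HealthStatus overall(List<HealthCheckProcedure> procedures) {
        return procedures.stream()
                .map(HealthCheckProcedure::perform)
                .anyMatch(s -> s == HealthStatus.DOWN)
                ? HealthStatus.DOWN
                : HealthStatus.UP;
    }

    public static void main(String[] args) {
        HealthCheckProcedure db = () -> HealthStatus.UP;
        HealthCheckProcedure disk = () -> HealthStatus.DOWN;
        System.out.println(overall(List.of(db)));       // UP
        System.out.println(overall(List.of(db, disk))); // DOWN
    }
}
```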
The specification has a companion document that specifies an HTTP endpoint and JSON format to check the healthiness of an application.
Using the HTTP endpoint, a container can ask the application whether it is healthy. If it is not healthy, the container can take actions to deal with it. It can decide to stop the application and spin up a new instance. The canonical example is Kubernetes, which can configure a liveness probe to check this HTTP health URL (OpenShift also exposes this liveness probe).
WildFly Extension prototype
I have written a prototype of a WildFly extension to support health checks for applications deployed in WildFly and some provided directly by WildFly:
This HTTP endpoint can be used to configure OpenShift/Kubernetes liveness probe.
Any deployment that defines Health Check Procedures will have them registered to determine the overall healthiness of the process. The prototype has a simple example of a Web app that adds a health check procedure that randomly returns DOWN (which is not very useful ;).
WildFly Health Check Procedures
The Healthcheck specification mainly targets user applications that can apply application logic to determine their healthiness. However, I wonder if we could reuse the concepts inside the application server (WildFly in my case). There are "things" we could check to determine whether the server runtime is healthy, e.g.:
The amount of heap memory is close to the max
Some deployments have failed
Excessive GC
Running out of disk space
Some threads are deadlocked
These procedures are relevant regardless of the type of applications deployed on the server.
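As an illustration, one of the checks listed above, deadlocked threads, can be probed with nothing more than the JDK's standard management beans. This is a standalone sketch, not the actual WildFly procedure API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Sketch of a server-level check from the list above: detecting
// deadlocked threads with the standard ThreadMXBean. Plain JDK code,
// independent of any (still hypothetical) WildFly procedure API.
public class DeadlockCheck {

    // Returns true when no threads are deadlocked, i.e. the check is UP.
    static boolean isUp() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // findDeadlockedThreads() returns null when no deadlock exists.
        long[] deadlocked = threads.findDeadlockedThreads();
        return deadlocked == null;
    }

    public static void main(String[] args) {
        System.out.println(isUp() ? "UP" : "DOWN");
    }
}
```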
Subsystems inside WildFly could provide health check procedures that would be queried to check the overall healthiness. We could, for example, provide a health check verifying that the used heap memory is less than 90% of the max:
HealthCheck.install(context, "heap-memory", () -> {
    MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
    long memUsed = memoryBean.getHeapMemoryUsage().getUsed();
    long memMax = memoryBean.getHeapMemoryUsage().getMax();
    HealthResponse response = HealthResponse.named("heap-memory")
            .withAttribute("used", memUsed)
            .withAttribute("max", memMax);
    // status is DOWN if used memory is greater than 90% of max memory.
    HealthStatus status = (memUsed < memMax * 0.9) ? response.up() : response.down();
    return status;
});
Summary
To better integrate WildFly with Cloud containers such as OpenShift (or Docker/Kubernetes), it should provide a way to let the container check the healthiness of WildFly. The Healthcheck specification is a good candidate to provide such a feature. It is worth exploring how we could leverage it for user deployments and also for WildFly internals (when that makes sense).
This post is about some updates of the project status and work being done to leverage the Config API in OpenShift (or other Docker/Kubernetes-based environment).
Project update
Since last post, WildFly and Swarm projects agreed to host the initial work I did in their GitHub projects and the Maven coordinates have been changed to reflect this. For the time being, everything is hosted at wildfly-extras/wildfly-microprofile-config.
The Eclipse MicroProfile Config 1.0 API should be released soon. Once it is released, we can then release the 1.0 version of the WildFly implementation and subsystem. The Swarm fraction will be moved to Swarm's own Git repo and will be available with future Swarm releases.
Until the Eclipse MicroProfile Config 1.0 API is released, you still have to build everything from wildfly-extras/wildfly-microprofile-config and use the Maven coordinates:
We have added a new type of ConfigSource, DirConfigSource, that takes a File as a parameter.
When this ConfigSource is created, it scans that file (if it is a directory) and creates a property for each file in the directory. The key of the property is the name of the file and its value is the content of the file.
For example, if you create a directory named /etc/config/numbers-app and add a file in it named num.size with its content being 5, it can be used to configure the following property:
@Inject
@ConfigProperty(name = "num.size")
int numSize;
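The directory scan performed by DirConfigSource can be sketched with plain java.nio. This standalone DirScan class only illustrates the file-name-to-key mapping; the actual implementation in wildfly-extras/wildfly-microprofile-config implements the MicroProfile ConfigSource interface:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Standalone sketch of the DirConfigSource idea: each regular file in
// a directory becomes one property, keyed by the file name, valued by
// the file content. Illustration only, not the real implementation.
public class DirScan {

    static Map<String, String> scan(Path dir) {
        Map<String, String> properties = new HashMap<>();
        if (!Files.isDirectory(dir)) {
            return properties; // nothing to scan
        }
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir)) {
            for (Path file : files) {
                if (Files.isRegularFile(file)) {
                    // key = file name, value = trimmed file content
                    properties.put(file.getFileName().toString(),
                                   Files.readString(file).trim());
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return properties;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("numbers-app");
        Files.writeString(dir.resolve("num.size"), "5");
        System.out.println(scan(dir)); // {num.size=5}
    }
}
```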
There are different ways to use the corresponding DirConfigSource depending on the type of applications.
WildFly Application
If you are deploying your application in WildFly, you can add this config source to the microprofile subsystem:
If you are using the WildFly implementation of the Config API outside of WildFly or Swarm, you can add it to a custom-made Config using the Eclipse MicroProfile ConfigBuilder API.
OpenShift/Kubernetes Config Maps
What is the use case for this new type of ConfigSource?
It maps to the concept of OpenShift/Kubernetes config maps, so that an application using the Eclipse MicroProfile Config API can be deployed in a container and use its config maps as a source of its configuration.
I have added an OpenShift example that shows a simple Java application running in a variety of deployment and configuration use cases.
The application uses two properties to configure its behaviour. The application returns a list of random positive integers (the number of generated integers is controlled by the num.size property and their maximum value by the num.max property):
@Inject
@ConfigProperty(name = "num.size", defaultValue = "3")
int numSize;
@Inject
@ConfigProperty(name = "num.max", defaultValue = "" + Integer.MAX_VALUE)
int numMax;
The application can be run as a standalone Java application configured with System Properties:
This highlights the benefit of using the Eclipse MicroProfile Config API to configure a Java application: the application code remains simple and uses the injected values from the Config API. The implementation then figures out all the sources the values can come from (System properties, properties files, the environment, and container config maps) and injects the appropriate ones.
In this post, I will show how to use the Config API from a Java application. The remaining posts will be about developing such new features within the WildFly and Swarm ecosystem.
Eclipse MicroProfile
As stated on its Web site, the mission of the Eclipse MicroProfile is to define:
An open forum to optimize Enterprise Java for a microservices architecture by innovating across multiple implementations and collaborating on common areas of interest with a goal of standardization.
One of the first new APIs that they are defining is the Config API, which provides a common way to retrieve configuration coming from a variety of sources (properties files, system properties, environment variables, databases, etc.). The API is very simple and consists mainly of 2 things:
a Config interface that can be used to retrieve (possibly optional) values identified by a name from many config sources
a @ConfigProperty annotation to directly inject a configuration value using CDI
The API provides a way to add different config sources from which the properties are fetched. By default, they can come from:
the JVM System properties (backed by System.getProperties())
the OS environment (backed by System.getenv())
properties file (stored in META-INF/microprofile-config.properties)
Sample code using the Config API looks like this:
@Inject
Config config;
@Inject
@ConfigProperty(name = "BAR", defaultValue = "my BAR property comes from the code")
String bar;
@Inject
@ConfigProperty(name = "BOOL_PROP", defaultValue = "no")
boolean boolProp;
...
Optional<String> foo = config.getOptionalValue("FOO", String.class);
...
There is really not much to the Config API. It is a simple API that hides all the complexity of gathering configuration from various places so that the application can focus on using the values.
One important feature of the API is that you can define the importance of the config sources. If a property is defined in many sources, the value from the config source with the highest importance is used. This allows, for example, default values in the code or in a properties file (with low importance) to be used when the application is tested locally. When the application is deployed in a container, environment variables defined by the container have higher importance and are used instead of the default ones.
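This precedence rule can be sketched by giving each source an "ordinal": when several sources define the same property, the source with the highest ordinal wins. The Source type and the ordinal values below are illustrative stand-ins, not the spec's actual types or default values:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Sketch of the precedence rule described above: every source carries
// an ordinal (its importance); for a given property, the value from
// the source with the highest ordinal wins. Names and ordinals here
// are illustrative, not the spec's defaults.
public class ConfigPrecedence {

    record Source(String name, int ordinal, Map<String, String> props) {}

    static Optional<String> lookup(String key, List<Source> sources) {
        return sources.stream()
                // highest ordinal first, so the first hit wins
                .sorted(Comparator.comparingInt(Source::ordinal).reversed())
                .filter(s -> s.props().containsKey(key))
                .map(s -> s.props().get(key))
                .findFirst();
    }

    public static void main(String[] args) {
        Source file = new Source("properties-file", 100,
                Map.of("BAR", "from the properties file"));
        Source env = new Source("environment", 300,
                Map.of("BAR", "from the env"));
        // The environment (higher ordinal) overrides the file value.
        System.out.println(lookup("BAR", List.of(file, env)).get()); // from the env
    }
}
```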
Instructions
The code above comes from the example that I wrote as a part of my work on the Config API.
To run the example, you need to first install my project and then run the example project:
$ cd example
$ mvn wildfly-swarm:run
...
2017-04-14 10:35:24,416 WARN [org.wildfly.swarm] (main) WFSWARM0013: Installed fraction: Eclipse MicroProfile Config - UNSTABLE net.jmesnil:microprofile-config-fraction:1.0-SNAPSHOT
...
2017-04-14 10:35:30,676 INFO [org.wildfly.swarm] (main) WFSWARM99999: WildFly Swarm is Ready
It is a simple Web application that will return the value of some variable and fields that are configured using the Config API:
$ curl http://localhost:8080/hello
FOO property = Optional[My FOO property comes from the microprofile-config.properties file]
BAR property = my BAR property comes from the code
BOOL_PROP property = false
We then run the application again with environment variables:
$ BOOL_PROP="yes" FOO="my FOO property comes from the env" BAR="my BAR property comes from the env" mvn wildfly-swarm:run
If we call the application, we see that the environment variables are now used to configure the application:
$ curl http://localhost:8080/hello
FOO property = Optional[my FOO property comes from the env]
BAR property = my BAR property comes from the env
BOOL_PROP property = true
The example uses Swarm and, for those familiar with it, only requires adding two fractions to use the Config API:
I have not yet released a version of my implementation as it is not clear yet where it will actually be hosted (and which Maven coordinates will be used).
Conclusion
This first post is a gentle introduction to the Config API. The specification is not final and I have left outside some nice features (such as converters) that I will cover later.
The next post will be about my experience of writing an implementation of this API and the way to make it available to Java EE applications deployed in the WildFly application server or to MicroServices built with WildFly Swarm.
This web site is composed of static files generated by Awestruct. I have extended the basic Awestruct project to provide some additional features (link posts, article templates, photo thumbnails, etc.) Unfortunately, some of these extensions use Rubygems that depend on native libraries or uglier spaghetti dependencies.
I recently wanted to write an article but was unable to generate the web site because some Rubygems were no longer working with native libraries on my Mac. Using bundler to keep the gems local to this project does not solve the issue of native libraries being upgraded by either the OS or the package system (such as Homebrew).
This was the perfect opportunity to play with Docker to create an image that I could use to generate the web site independently of my OS.
I created this jmesnil/jmesnil.net image by starting from the vanilla awestruct image created by my colleague, Marek. I tweaked it because he only installed the Awestruct gem from the Dockerfile while I have a lot of other gems to install for my extensions.
I prefer to keep the gems listed in the Gemfile so that the project can also work outside of Docker, so I added the Gemfile to the Dockerfile before calling bundle install.
This web site is hosted on Amazon S3 and I use the s3cmd tool to push the generated files to the S3 bucket. The s3cmd configuration is stored in my home directory and I need to pass it to the Docker image so that when s3cmd is run inside it, it can use my secret credentials. This is done in a docker.sh script that I use to start the Docker image:
# Read the s3cmd private keys from my own s3cmd config...
AWS_ACCESS_KEY_ID=`s3cmd --dump-config | grep access_key | grep -oE '[^ ]+$'`
AWS_SECRET_ACCESS_KEY=`s3cmd --dump-config | grep secret_key | grep -oE '[^ ]+$'`
# ... and pass it to the s3cmd inside Docker using env variables
docker run -it --rm \
-e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
-e AWS_SECRET_KEY=$AWS_SECRET_ACCESS_KEY \
-v `pwd`/:/home/jmesnil/jmesnil.net \
-p 4242:4242 \
-w /home/jmesnil/jmesnil.net \
jmesnil/jmesnil.net
Once the Docker image is started by this script, I can then use regular Rake tasks to run a local Web server to write articles (rake dev) or publish to Amazon S3 (rake production).
It is a bit overkill to use a 1.144 GB Docker image to generate a 6 MB web site (which only contains text; all the photos are stored in a different S3 bucket) but it is worthwhile as the setup will no longer break every time I upgrade the OS or Brew.
The image is generic enough that it could serve as the basis of any Ruby project using Bundler (as long as required native libs are added to the yum install at the beginning of the Dockerfile).
This is a great review of the iPhone 6s camera and summary of its new features.
Marion has bought an iPhone 6s and the camera is a definite improvement when I compare her photos to the ones from my iPhone 6.
Better low light performance and higher resolution are always a plus, but the new feature I prefer is Live Photos. When Apple announced it, I found it superfluous, but I changed my mind after looking at Live Photos of Raphaël and hearing him twittering...
Our first child, Raphaël, was born last September and we will soon celebrate his first birthday.
This year has been the happiest of my life. However, having a baby means that I have less free time than before, and I want to spend this time with my family or doing personal projects I feel passionate about.
These days I feel more passionate about photography (mostly of Raphaël and Marion) than about writing software (I already have a full-time job at Red Hat where I enjoy coding on WildFly).
Before our baby's birth, I could spend evenings and weekends working on personal Open Source projects. After one year, it is time to admit that I do not want to do that anymore and act accordingly.
I have decided to flag my personal Open Source projects as no longer maintained. Some of these projects are still quite used; the three main ones all deal with messaging clients:
stomp.js - a JavaScript library to write web app/node.js clients for the STOMP protocol
StompKit - an Objective-C client for STOMP protocol
I'll modify these projects' READMEs to warn that they are no longer maintained (with a link to this post to give some context).
It's not fair to let users spend time using them and reporting bugs with no warning that their issues will be ignored.
If you are using these projects, I understand that you may be upset that your bugs are not fixed or that the enhancement you request will not be fixed in the original project. Fortunately, these projects are licensed under the Apache License V2. You can fork them (they are hosted on GitHub and use Git as their version control system) and modify them to suit your needs.
It is a tough decision to stop maintaining these projects, but it is a decision that I subconsciously made months ago. Now I just have to acknowledge it.
I may revisit this decision when my child is older or when I feel passionate about these projects again. Or I may create other Open Source projects, who knows?
The key thing is that by releasing these projects under an Open Source license, I ensured that their use could outlast my initial contributions to them.