I updated the stomp-websocket JavaScript library to fix a critical bug.
When the library received a WebSocket message, it parsed it to unmarshal a single STOMP frame. However, it is valid to send many STOMP frames in a single WebSocket message (ActiveMQ Apollo does this). I updated the code to take this into account (and to check for the content-length header too).
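To give an idea of the change, here is a simplified sketch (this is not the library's actual code, and splitFrames is only an illustrative name): a WebSocket message is scanned frame by frame, each frame normally ending at a NULL octet, and the content-length header, when present, gives the body size so that the body may itself contain NULL octets.

// illustrative sketch only, not the actual stomp-websocket code:
// split one WebSocket message into the STOMP frames it may contain.
// A frame is terminated by a NULL octet; when a content-length header is
// present, the body is read by byte count (approximated here by character
// count) instead. Line-ending handling is simplified to '\n'.
function splitFrames(data) {
  var frames = [];
  var start = 0;
  while (start < data.length) {
    var headerEnd = data.indexOf('\n\n', start);   // blank line ends the headers
    if (headerEnd === -1) break;
    var headers = data.substring(start, headerEnd);
    var match = headers.match(/content-length:\s*(\d+)/i);
    // with content-length, the NULL terminator sits right after the body;
    // without it, the frame ends at the first NULL octet
    var end = match ? headerEnd + 2 + parseInt(match[1], 10)
                    : data.indexOf('\0', headerEnd);
    if (end === -1 || end >= data.length) break;
    frames.push(data.substring(start, end));       // frame without its NULL terminator
    start = end + 1;                               // skip the NULL octet
    while (data.charAt(start) === '\n') start++;   // skip optional EOLs between frames
  }
  return frames;
}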
This should considerably improve performance when consuming STOMP messages from Web browsers.
Thanks to rfox90, who proposed a solution for this fix, and to Jeff Robbins, who tested it on many STOMP brokers to validate it.
The latest version of the library is available on GitHub.
Since I started working on AS7, I have been pleasantly surprised by its ease of configuration. My favorite thing about AS7 (after its fast boot time) is that its configuration is located in a single file.
If you need messaging with AS7, you can use its standalone/configuration/standalone-full.xml configuration file, which contains a messaging stack built on top of HornetQ. The whole messaging stack is configured in its messaging <subsystem>:
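Stripped of its details, the subsystem looks roughly like this (shown here with the 1.1 namespace; the exact version varies across AS7 releases):

<subsystem xmlns="urn:jboss:domain:messaging:1.1">
    <hornetq-server>
        <!-- connectors, acceptors, security settings, address settings,
             JMS connection factories and JMS destinations are all declared here -->
        ...
    </hornetq-server>
</subsystem>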
By default, AS7 only supports JMS as its messaging protocol. If your application needs to send and receive messages from platforms or languages other than Java, JMS is not an option and you need to enable STOMP too.
Fortunately, adding STOMP support to AS7 is dead easy:
add a HornetQ acceptor to let AS7 accept STOMP frames on a dedicated socket
configure this socket to bind to the default STOMP port
To add a HornetQ acceptor for STOMP, edit the standalone/configuration/standalone-full.xml file and add these lines to the <acceptors> section of <hornetq-server>:
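The acceptor declaration looks along these lines (the names stomp-acceptor and messaging-stomp are simply the ones used throughout this post):

<netty-acceptor name="stomp-acceptor" socket-binding="messaging-stomp">
    <param key="protocol" value="stomp"/>
</netty-acceptor>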
The stomp-acceptor is expected to receive STOMP frames on the socket binding named messaging-stomp.
At the end of the same standalone/configuration/standalone-full.xml file, add a <socket-binding> to the <socket-binding-group name="standard-sockets"> section:
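A single line in that group is enough, binding the acceptor to the default STOMP port 61613:

<socket-binding name="messaging-stomp" port="61613"/>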
With these two changes, we can now start JBoss AS7 with STOMP enabled:
$ ./bin/standalone.sh -c standalone-full.xml
...
14:14:38,027 INFO [org.hornetq.core.remoting.impl.netty.NettyAcceptor] (MSC service thread 1-2) Started Netty Acceptor version 3.2.5.Final-a96d88c 127.0.0.1:61613 for STOMP protocol
...
14:14:38,263 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: JBoss AS 7.1.2.Final-SNAPSHOT "Brontes" started in 2749ms - Started 163 of 251 services (87 services are passive or on-demand)
Before sending and receiving messages from a STOMP client, we also need to configure two things in AS7:
add a user (AS7 is secured by default and we must explicitly add an application user to connect to it)
add a JMS queue to send and receive messages on
To add a user, we use the add-user.sh tool:
$ ./bin/add-user.sh
What type of user do you wish to add?
a) Management User (mgmt-users.properties)
b) Application User (application-users.properties)
(a): b
Enter the details of the new user to add.
Realm (ApplicationRealm) : [type enter]
Username : myuser
Password : mypassword
Re-enter Password : mypassword
What roles do you want this user to belong to? (Please enter a comma separated list, or leave blank for none)[ ]: guest
About to add user 'myuser' for realm 'ApplicationRealm'
Is this correct yes/no? yes
Added user 'myuser' to file '[...]/standalone/configuration/application-users.properties'
Added user 'myuser' to file '[...]/domain/configuration/application-users.properties'
Added user 'myuser' with roles guest to file '[...]/standalone/configuration/application-roles.properties'
Added user 'myuser' with roles guest to file '[...]/domain/configuration/application-roles.properties'
Finally, to add a JMS queue, we use the JBoss CLI tool:
$ ./bin/jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help' for the list of supported commands.
[disconnected /] connect
[standalone@localhost:9999 /] /subsystem=messaging/hornetq-server=default/jms-queue=test/:add(entries=["java:jboss/exported/queue/test"])
{"outcome" => "success"}
We have now configured a user and a JMS queue. Let's send and receive STOMP messages using Ruby and its stomp gem (installed with sudo gem install stomp):
require 'rubygems'
require 'stomp'

# connect to AS7's STOMP acceptor with the application user added above
client = Stomp::Client.new "myuser", "mypassword", "localhost", 61613
# the JMS queue "test" is exposed to STOMP clients under the "jms.queue." prefix
client.publish 'jms.queue.test', 'Hello, STOMP on AS7!'
client.subscribe "jms.queue.test" do |msg|
  puts "received: #{msg.body}"
end
# wait on the subscription thread so the script does not exit before the message arrives
client.join
And it will output the expected message:
received: Hello, STOMP on AS7!
Conclusion
Enabling STOMP in AS7 boils down to two additions to its configuration file:
add a <netty-acceptor> with a stomp protocol param
add a <socket-binding> for the default STOMP port (61613)
Using STOMP with AS7 is simple to set up (4 lines to add to its configuration file) and allows any language or platform to send and receive messages to and from your application hosted in AS7.
I have a pet project that uses a Web application running on node.js with its data stored in Redis. Since I joined Red Hat, I took the opportunity to eat our own dog food and port this application from Heroku to OpenShift.
OpenShift does not (yet) support Redis in its list of database cartridges, but it is straightforward to build a Redis server from scratch directly on OpenShift by following these instructions.
The next step is to add the redis module to the list of dependencies in deplist.txt:
$ cat deplist.txt
...
redis
...
The redis node.js module does not expose a method to create a client from a Unix socket. We need to add our own function to do that and pass it the path to the socket, located in a temporary directory inside the OPENSHIFT_GEAR_DIR directory (I found the code snippet on Stack Overflow):
var express = require('express');
var net = require('net');
var __redis__ = require('redis');

// the redis module does not expose a way to create a client from a Unix socket,
// so we build the RedisClient ourselves on top of a net connection to the socket
var createSocketClient = function (path, options) {
  var client = new __redis__.RedisClient(net.createConnection(path), options);
  client.path = path;
  return client;
};

// the Redis server built on OpenShift listens on a Unix socket in the gear's tmp directory
var redis = createSocketClient(process.env.OPENSHIFT_GEAR_DIR + 'tmp/redis.sock');
The rest of the code was not changed at all:
var app = express.createServer();

// Handler for POST /test: increment the 'val' counter in Redis
// and return its new value as plain text
app.post('/test', function(req, res){
  redis.incr('val', function(err, value) {
    if (err) {
      res.writeHead(500);
      res.end();
    } else {
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end('val=' + value + '\n');
    }
  });
});
...
Once the code is committed and pushed to the OpenShift Git repository, the hook will build and start Redis (only the first time) and the Web application is then ready to serve data from Redis:
$ curl -X POST http://${appname}-jmesnil.rhcloud.com/test/
val=1
$ curl -X POST http://${appname}-jmesnil.rhcloud.com/test/
val=2
Even though Redis is not yet officially supported by OpenShift, it is still possible to use it now, and it is very simple to deploy and run.
I have played a little with OpenShift and it is very easy to use and deploy to. I ported a small application using node.js and Redis to it in a matter of minutes (I will write about it in my next post).
Over the past 6 months, we've "scaled" MongoDB by moving data off of it. [...] For key-value data, we switched to Riak, which provides predictable read/write latencies and is completely horizontally scalable. For smaller sets of relational data where we wanted a rich query layer, we moved to PostgreSQL.
As usual, take this with a grain of salt (performance varies with data size, use cases, read/write ratio, etc).
I have had a short experience developing with MongoDB and I liked it. Unfortunately, I did not have the opportunity to see how it behaves in production use.
What we're really trying to build is the Internet's file system. —Drew Houston
As a follow-up to my previous post, here is an interesting interview with Dropbox founder Drew Houston.
I use Dropbox heavily to synchronize content and share documents between my iDevices and my home/corporate laptops, and it works without any problem.
I have not used their API but I hear it is great. Given the number of apps installed on my iPhone and iPad that use it to share content, this is likely true.
The past six months have also shown that, without proper developer tools and a clear explanation of how things work in backend, things don't "just work" — in fact, quite the opposite: some developers have given up entirely on building iCloud apps for now, others are wishing for new APIs that would make the platform suitable to their needs, while the ones who did implement iCloud in their apps are torn between the positive feedback of "it just works" users, and the frustration of those struggling to keep their data in sync on a daily basis.
I played a little bit with iCloud on a personal iOS project. Unsurprisingly, it is hard to get syncing right.
It was obvious from the start that supporting iCloud would not be easy. There are many fallacies to avoid once an application depends on the network (especially when it is as unreliable and spotty as a cellular network), and version conflict resolution works best with a deep knowledge of your domain.
Apple's documentation for iCloud is too high-level to be helpful. The best resources I found about iCloud development are lectures 17 & 18 of Stanford's iPad and iPhone Application Development course by Paul Hegarty. These lectures give the correct mindset to comprehend and work with iCloud.
Supporting iCloud is not an easy task and it requires real thought. It is not just about dropping files in a directory. However, when it works seamlessly, it improves and simplifies the user experience; the user benefit is worth the developer pain. With the release of Mountain Lion, I expect Apple to continue to improve the iCloud API and features and to make more developer resources available as they learn from adding iCloud to their own apps.