OpenMAMA ZeroMQ Bridge 1.0 Rundown

In my previous post on the ZeroMQ bridge, I made a pretty big deal about how there were a bunch of enhancements I wanted to add before completing a ZeroMQ bridge 1.0 release. These mostly revolved around making various bridge- and middleware-specific functionality configurable, along with some architectural optimizations around locking for multithreaded applications.

It ended up being a little more than that, and included new, more modular data distribution patterns which allow for better fan-in and fan-out capabilities without requiring an intermediary. This is thanks to some new configuration parameters which allow a node to connect to more than one endpoint.

Inherently point-to-point

In the 0.x bridge way of doing things, there was no way to specify more than one publisher or subscriber, so the assumption was always that subscription requests would be sent to one particular address and received on one particular address, and that was the end of it. If fan-out or fan-in was required, you had to use an intermediary; if you preferred not to use one, you were limited to something like this:

                   +-------------+
                   |             |
                   |  PUBLISHER  |
                   |             |
                   +---^-----+---+
   Incoming Port Binds |     | Outgoing Port Connects
                       |     |
                       |     |
Outgoing Port Connects |     | Incoming Port Binds
                   +---+-----v---+
                   |             |
                   | SUBSCRIBER  |
                   |             |
                   +-------------+
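
For concreteness, that arrangement maps onto the bridge's incoming/outgoing URL parameters roughly as follows. I'm borrowing the 1.0 parameter names covered later in this post, the transport names and ports are purely illustrative, and (as in the examples below) a wildcard address implies a bind while a concrete address implies a connect:

# Publisher binds its incoming port, connects its outgoing port to the subscriber
mama.zmq.transport.pub.incoming_url_0=tcp://*:5556
mama.zmq.transport.pub.outgoing_url_0=tcp://localhost:5558

# Subscriber is the mirror image: binds incoming, connects outgoing
mama.zmq.transport.sub.incoming_url_0=tcp://*:5558
mama.zmq.transport.sub.outgoing_url_0=tcp://localhost:5556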

Reconfiguring for proxy-free fan-out

Then the idea came up to use the same approach, but to bind on the publisher side for both incoming and outgoing connections. This means that the publisher doesn't need to know in advance who to send market data subscription requests (_MD*) out to, as it will effectively send them to every connected node. So that looked something more like this:

                    +-------------+
                    |             |
                    |  PUBLISHER  |
                    |             |
                    +^-^-^---^-^-^+
                     | | |   | | | All incoming connections
                     | | |   | | | to binding ports
                     | | |   | | |
    +----------------+ | |   | | +----------------+
    |     +------------+ |   | +------------+     |
    |     |              |   |              |     |
    |     |              |   |              |     |
+---+-----+---+     +----+---+----+     +---+-----+---+
|             |     |             |     |             |
| SUBSCRIBER  |     | SUBSCRIBER  |     | SUBSCRIBER  |
|             |     |             |     |             |
+-------------+     +-------------+     +-------------+
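
In configuration terms, that simply means both of the publisher's URLs use a binding (wildcard) address and every subscriber connects both of its URLs to the publisher's ports. A minimal sketch, again with illustrative transport names and ports, would be:

# Publisher binds both directions so subscribers can come and go freely
mama.zmq.transport.pub.outgoing_url_0=tcp://*:5557
mama.zmq.transport.pub.incoming_url_0=tcp://*:5556

# Each subscriber just connects both of its URLs to the publisher
mama.zmq.transport.sub.outgoing_url_0=tcp://localhost:5556
mama.zmq.transport.sub.incoming_url_0=tcp://localhost:5557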

What about proxy-free fan-in?

This of course worked great for fan-out… but you would still need an intermediary for fan-in because subscribers had no way to connect to more than one publisher… until the 1.0 release.

See below for the new way to do fan-in and fan-out without requiring a proxy. Note that the binding ports are still all on the publisher side, but you can now specify more than one URL in each direction.

# This is where all messages will be published including requests
mama.zmq.transport.pub.outgoing_url_0=tcp://*:5557
# This is where all messages will be received including replies
mama.zmq.transport.pub.incoming_url_0=tcp://*:5556

# This is where all messages will be published including requests
mama.zmq.transport.pub2.outgoing_url_0=tcp://*:5559
# This is where all messages will be received including replies
mama.zmq.transport.pub2.incoming_url_0=tcp://*:5558

# Subscriber transport 'subdual' sends all outbound messages (including requests) here
mama.zmq.transport.subdual.outgoing_url_0=tcp://localhost:5556
# This is where it receives all inbound messages (including replies)
mama.zmq.transport.subdual.incoming_url_0=tcp://localhost:5557
# Connect to additional publisher 'pub2'
mama.zmq.transport.subdual.outgoing_url_1=tcp://localhost:5558
mama.zmq.transport.subdual.incoming_url_1=tcp://localhost:5559

So you can now finally do this without needing a proxy (just to finish off playing with asciiflow.com):

+-------------+       +-------------+
|             |       |             |
|  PUBLISHER  |       |  PUBLISHER  |
|             |       |             |
+----^---^----+       +----^---^----+
     |   |                 |   |
     |   +-----+     +-----+   |
     ^------|  |     |  |------+
           +---+-----+---+
           |             |
           | SUBSCRIBER  |
           |             |
           +-------------+

This is great, but why would anyone want to not use a proxy?

Of course a proxy is still the easiest option when it comes to configuration, but this flexibility is important to offer users, not just for TCP, but also for the multicast options provided by the OpenMAMA ZeroMQ bridge, where subscribing to multiple sources is a must. Plus, a lot of people have been bitten by daemon-based data distribution technologies in the past and have built up an instinctual aversion to them.
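
As a sketch of what that might look like for multicast - assuming the same incoming URL parameters accept ZeroMQ's pgm/epgm endpoint syntax, and with purely illustrative interface names, group addresses and ports - a subscriber listening on two multicast groups on its receive side would be along these lines:

# Hypothetical receive-side config for a subscriber on two multicast groups
mama.zmq.transport.submcast.incoming_url_0=epgm://eth0;239.192.0.1:7500
mama.zmq.transport.submcast.incoming_url_1=epgm://eth0;239.192.0.2:7500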

OK cool so what else is new in 1.0?

Well, the rest of it is pretty much what I said before - you can now tweak native ZeroMQ socket options to optimize for performance (e.g. high water marks and buffer sizes), and the memory pooling mechanism is much more efficient. This is also the first release to ship with pre-built binaries, so you no longer need to compile it yourself - shoot over to https://github.com/fquinner/OpenMAMA-zmq/releases to grab a copy.
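
As a rough sketch of what that tuning looks like in mama.properties - the option names below are illustrative assumptions, so check the project README for the exact supported parameters - high water marks and kernel buffer sizes would be set per transport along these lines:

# Illustrative only: hypothetical per-transport ZeroMQ high water marks (messages)
mama.zmq.transport.pub.zmq_sndhwm=100000
mama.zmq.transport.pub.zmq_rcvhwm=100000
# Illustrative only: hypothetical kernel send/receive buffer sizes (bytes)
mama.zmq.transport.pub.zmq_sndbuf=1048576
mama.zmq.transport.pub.zmq_rcvbuf=1048576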
