To provide massive scalability for a large messaging fabric, you typically want to connect many brokers together into a network, so that you can have as many clients as you wish, all logically connected together, while running as many message brokers as you need based on your number of clients and your network topology.
If you are using a client/server or hub-and-spoke style topology, then the broker you connect to becomes a single point of failure, which is another reason for wanting a network (or cluster) of brokers: you can survive the failure of any particular broker, machine or subnet.
From version 1.1 onwards, ActiveMQ supports networks of brokers, which allow us to support distributed queues and topics across a network of brokers.
This allows a client to connect to any broker in the network, and fail over to another broker if there is a failure, providing, from the client's perspective, an HA cluster of brokers.
N.B. By default a network connection is one-way only: the broker that establishes the connection passes messages to the broker(s) it is connected to. From version 5.x of ActiveMQ, a network connection can optionally be made duplex, which can be useful for hub-and-spoke architectures where the hub is behind a firewall, etc.
The easiest way to configure a network of brokers is via the Xml Configuration. There are two main ways to create a network of brokers: using a fixed list of URIs, or using discovery.
Here is an example of using a fixed list of URIs:
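A minimal sketch of such a configuration; the broker name, hostnames and ports are placeholders you would replace with your own:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <networkConnectors>
    <!-- one network connector is created per URI in the static list -->
    <networkConnector uri="static:(tcp://remotehost1:61616,tcp://remotehost2:61616)"/>
  </networkConnectors>
  <transportConnectors>
    <transportConnector uri="tcp://localhost:61616"/>
  </transportConnectors>
</broker>
```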
ActiveMQ also supports transports other than tcp for the network connector, such as http.
This example uses multicast discovery:
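A sketch of a multicast-based network, assuming the default multicast group; each broker both discovers peers and advertises itself on the same group:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <networkConnectors>
    <!-- discover and bridge to every broker advertising on the "default" multicast group -->
    <networkConnector uri="multicast://default"/>
  </networkConnectors>
  <transportConnectors>
    <!-- advertise this transport on the same multicast group so peers can find us -->
    <transportConnector uri="tcp://localhost:61616" discoveryUri="multicast://default"/>
  </transportConnectors>
</broker>
```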
By default, network connectors are started serially as part of the broker start-up sequence. When some networks are slow, they prevent other networks from starting in a timely manner. Version 5.5 supports the broker attribute networkConnectorStartAsync="true", which causes the broker to use an executor to start network connectors in parallel, asynchronously to the broker start.
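The attribute goes on the broker element itself; a brief sketch (broker name is a placeholder):

```xml
<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="brokerA"
        networkConnectorStartAsync="true">
  <!-- network connectors defined here will now be started in parallel -->
</broker>
```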
With static: discovery you can hard code the list of broker URLs. A network connector will be created for each one.
There are some useful properties you can set on a static network connector for retries:
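The retry behaviour is controlled by options on the static: discovery URI. The exact option set can vary by version; the ones shown here (initialReconnectDelay, maxReconnectDelay, useExponentialBackOff, backOffMultiplier) are commonly documented, and the values are illustrative only. Note that `&` must be escaped as `&amp;` inside an XML attribute:

```xml
<networkConnector uri="static:(tcp://remotehost:61616)?initialReconnectDelay=1000&amp;maxReconnectDelay=30000&amp;useExponentialBackOff=true&amp;backOffMultiplier=2"/>
```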
A common configuration option for a network of brokers is to establish a network bridge between a broker and an n+1 broker pair (master/slave). Typical configurations involve using the failover: transport, but there are some other non-intuitive options that must be configured for it to work as desired. For this reason, ActiveMQ v5.6+ has a convenience discovery agent that can be specified with the masterslave: transport prefix:
The URIs are listed in order: MASTER, SLAVE1, SLAVE2 ... SLAVEn.
The same configuration options available for static: are also available for masterslave:.
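A minimal sketch of a masterslave: network connector, with placeholder hostnames; the first URI is the master, the rest are slaves in order:

```xml
<networkConnector uri="masterslave:(tcp://master:61616,tcp://slave1:61616,tcp://slave2:61616)"/>
```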
Networks of brokers do reliable store-and-forward of messages. If the source is durable (persistent messages on a queue, or a durable topic subscription), a network will retain the durability guarantee.
Total message ordering is not preserved with networks of brokers. Total ordering works with a single consumer, but a network bridge introduces a second consumer. In addition, network bridge consumers forward messages via producer.send(..), so they go from the head of the queue on the forwarding broker to the tail of the queue on the target. If a single consumer moves between networked brokers, total order may be preserved if all messages always follow the consumer, but this can be difficult to guarantee with large message backlogs.
ActiveMQ relies on information about active consumers (subscriptions) to pass messages around the network. A broker interprets a subscription from a remote (networked) broker in the same way as it would a subscription from a local client connection, and routes a copy of any relevant message to each subscription. With topic subscriptions and more than one remote subscription, a remote broker would interpret each message copy as valid, so when it in turn routes the messages to its own local connections, duplicates would occur. Hence the default conduit behavior consolidates all matching subscription information to prevent duplicates flowing around the network. With this default behavior, N subscriptions on a remote broker look like a single subscription to the networked broker.
However - duplicate subscriptions is a useful feature to exploit if you are only using queues. As the load balancing algorithm will attempt to share message load evenly, consumers across a network will equally share the message load only if the flag conduitSubscriptions=false. Here's an example. Suppose you have two brokers, A and B, that are connected to one another via a forwarding bridge. Connected to broker A, you have a consumer that subscribes to a queue called Q.TEST. Connected to broker B, you have two consumers that also subscribe to Q.TEST. All consumers have equal priority. Then you start a producer on broker A that writes 30 messages to Q.TEST. By default (conduitSubscriptions=true), 15 messages will be sent to the consumer on broker A, and the remaining 15 messages will be sent to the two consumers on broker B. The message load has not been equally spread across all three consumers because, by default, broker A views the two subscriptions on broker B as one. If you had set conduitSubscriptions to "false", then each of the three consumers would have been given 10 messages.
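The flag is set on the network connector; a minimal sketch, with the broker B hostname as a placeholder:

```xml
<networkConnector uri="static:(tcp://brokerB:61616)" conduitSubscriptions="false"/>
```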
By default a network bridge forwards messages on demand in one direction over a single connection. When duplex=true, the same connection is used for a network bridge in the opposite direction, resulting in a bidirectional bridge. The network bridge configuration is propagated to the other broker, so the duplex bridge is an exact replica of the original.
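A sketch of a duplex connector, as a spoke broker behind a firewall might use to reach a hub (hostname is a placeholder):

```xml
<networkConnector uri="static:(tcp://hub:61616)" duplex="true"/>
```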
Conduit subscriptions ignore consumer selectors on the local broker and send all messages to the remote one. Selectors are then evaluated on the remote brokers before messages are dispatched to consumers. This can create problems when consuming from queues using selectors in a multi-broker network. Imagine a situation where a producing broker forwards messages to two receiving brokers, and each of these two brokers has a consumer with a different selector. Since no selectors are evaluated on the producing broker's side, you can end up with all messages going to only one of the brokers, so messages with a certain property will not be consumed. If you need to support this use case, turn off the conduitSubscriptions feature.
Networks of brokers rely heavily on advisory messages, as they are used under the hood to express interest in new consumers on the remote end. By default, when a network connector starts, it creates one consumer on the topic ActiveMQ.Advisory.Consumer.> (let's ignore temporary destinations for the moment). In this way, when a consumer connects to (or disconnects from) the remote broker, the local broker is notified and treats it as one more consumer it has to deal with.
This is all fine and well in small networks and environments with a small number of destinations and consumers. But as things start to grow, the default model (listen to everything, share everything) won't scale well. That's why there are several ways to filter the destinations that will be shared between brokers.
Let's start with dynamically configured networks. This means that we only want to send messages to the remote broker when there's a consumer there. If we want to limit this behavior to certain destinations, we use dynamicallyIncludedDestinations, like this:
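A sketch of such a connector; the destination names match the advisory-filter example discussed below, and the remote hostname is a placeholder:

```xml
<networkConnector uri="static:(tcp://remotehost:61616)">
  <dynamicallyIncludedDestinations>
    <queue physicalName="include.test.foo"/>
    <topic physicalName="include.test.bar"/>
  </dynamicallyIncludedDestinations>
</networkConnector>
```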
In earlier versions of ActiveMQ, the broker would still use the same advisory filter and express interest in all consumers on the remote broker; the actual filtering was done during message dispatch. This is a suboptimal solution in huge networks, as it creates a lot of "advisory" traffic and load on the brokers. Starting with version 5.6, the broker automatically creates an appropriate advisory filter and expresses interest only in dynamically included destinations. For our example it will be ActiveMQ.Advisory.Consumer.Queue.include.test.foo,ActiveMQ.Advisory.Consumer.Topic.include.test.bar. This can dramatically improve the behavior of the network in complex and high-load environments.
So what's to be done in older versions of the broker? Luckily, we can achieve the same thing with a slightly more complicated configuration. The advisory filter that controls which consumers we are interested in is defined with the destinationFilter connector property. Its default value is >, which is concatenated to the ActiveMQ.Advisory.Consumer. prefix. So to achieve the same thing we would need to do the following:
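A sketch of the equivalent pre-5.6 configuration, using the same placeholder destinations as above:

```xml
<networkConnector uri="static:(tcp://remotehost:61616)"
    destinationFilter="Queue.include.test.foo,ActiveMQ.Advisory.Consumer.Topic.include.test.bar">
  <dynamicallyIncludedDestinations>
    <queue physicalName="include.test.foo"/>
    <topic physicalName="include.test.bar"/>
  </dynamicallyIncludedDestinations>
</networkConnector>
```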
Note that the first destination doesn't have the prefix because it's already implied. It's a bit more complicated to set up and maintain, but it works. If you're using version 5.6 or newer of the broker, just including the desired destinations with dynamicallyIncludedDestinations should suffice.
This also explains why dynamic networks don't work if you turn off advisory support on the brokers: the brokers in that case cannot dynamically respond to new consumers.
If you wish to completely protect the broker from any influence of consumers on the remote broker, or if you wish to use the brokers as a simple proxy and forward all messages to the remote side whether or not there are consumers there, static networks are something you should consider.
The staticBridge parameter is available since version 5.6; it means that the broker will not subscribe to any advisory topic on the remote broker, i.e. it is not interested in any consumers there. Additionally, you need to add a list of destinations to staticallyIncludedDestinations. This has the same effect as having an additional consumer on those destinations, so messages are forwarded to the remote broker as well. As there's no staticBridge parameter in earlier versions of ActiveMQ, you can trick the broker by setting destinationFilter to listen on an unused advisory topic, like this:
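Sketches of both variants, with placeholder hostnames and queue names:

```xml
<!-- 5.6+: a pure static bridge, with no advisory subscription on the remote broker -->
<networkConnector uri="static:(tcp://remotehost:61616)" staticBridge="true">
  <staticallyIncludedDestinations>
    <queue physicalName="always.include.queue"/>
  </staticallyIncludedDestinations>
</networkConnector>

<!-- pre-5.6 workaround: point the advisory filter at an unused topic -->
<networkConnector uri="static:(tcp://remotehost:61616)" destinationFilter="NO_DESTINATION">
  <staticallyIncludedDestinations>
    <queue physicalName="always.include.queue"/>
  </staticallyIncludedDestinations>
</networkConnector>
```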
If configured like this, the broker will listen for new consumers on ActiveMQ.Advisory.Consumer.NO_DESTINATION, which will never receive messages, so it will be shielded from information about remote broker consumers.
This is part of an example configuration for a broker:
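A sketch of a broker fragment combining the filtering options above; the connector name, hostname and destination patterns are illustrative only:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <networkConnectors>
    <networkConnector uri="static:(tcp://remotehost:61616)" name="bridge">
      <dynamicallyIncludedDestinations>
        <!-- wildcards are allowed here and in excludedDestinations -->
        <queue physicalName="include.test.>"/>
      </dynamicallyIncludedDestinations>
      <excludedDestinations>
        <topic physicalName=">"/>
      </excludedDestinations>
    </networkConnector>
  </networkConnectors>
</broker>
```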
It is possible to have more than one network connector between two brokers. Each network connector uses one underlying transport connection, so you may wish to do this to increase throughput, or have a more flexible configuration.
N.B. You can only use wildcards in the excludedDestinations and dynamicallyIncludedDestinations properties.
By default, it is not permissible for a message to be replayed back to the broker from which it came. This ensures that messages do not loop when duplex or bidirectional network connectors are configured. Occasionally it is desirable to allow replay for queues. Consider a scenario where a bidirectional bridge exists between a broker pair. Producers and consumers randomly choose a broker using the failover transport. If one broker is restarted for maintenance, messages accumulated on that broker that crossed the network bridge will not be available to consumers until they reconnect to that broker. One solution to this problem is to force a client reconnect using rebalanceClusterClients. Another is to allow replay of messages back to the origin when there is no local consumer on that broker.
N.B.: When using replayWhenNoConsumers=true for versions < 5.9, it is necessary to also disable the cursor's duplicate detection using enableAudit=false, as the cursor could mark the replayed messages as duplicates (depending on the time window between playing and replaying these messages over the network bridge). The problem is fully explained in this blog post.
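Replay is enabled per destination through the destination policy; a sketch applying it to all queues:

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <!-- enableAudit="false" is only needed for versions < 5.9, per the note above -->
      <policyEntry queue=">" enableAudit="false">
        <networkBridgeFilterFactory>
          <conditionalNetworkBridgeFilterFactory replayWhenNoConsumers="true"/>
        </networkBridgeFilterFactory>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```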
The conditionalNetworkBridgeFilterFactory allows a rate limit to be specified for a destination, so that a network consumer can be throttled. Prefetch for a network consumer is largely negated by the fact that a network consumer relays messages and typically acks very quickly, so even with a low prefetch and decreased priority, a network consumer can starve a modestly quick local consumer. Throttling provides a remedy for this.
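A sketch of a rate-limited policy entry; the values shown (at most 100 messages per 1000 ms window to network consumers) are illustrative:

```xml
<policyEntry queue=">">
  <networkBridgeFilterFactory>
    <!-- throttle network consumers to rateLimit messages per rateDuration (ms) -->
    <conditionalNetworkBridgeFilterFactory rateLimit="100" rateDuration="1000"/>
  </networkBridgeFilterFactory>
</policyEntry>
```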
If you run the following commands in separate shells, you'll have 2 brokers auto-discovering each other and 2 clients using fixed URLs.
Or, to try the same thing again using Zeroconf discovery, you could try this: