One of the aims of ActiveMQ is to be a high-performance message bus. This means using a SEDA architecture to perform as much work as possible asynchronously. To achieve high performance it is important to stream messages to consumers as fast as possible, so that each consumer always has a buffer of messages in RAM ready to process, rather than having to explicitly pull each message from the broker, which adds significant latency per message.

There is a danger, however, that this aggressive pushing of messages could flood a consumer, since it is typically much faster to deliver messages to a consumer than it is for the consumer to actually process them.

So ActiveMQ uses a prefetch limit on how many messages can be streamed to a consumer at any point in time. Once the prefetch limit is reached, no more messages are dispatched to the consumer until the consumer starts sending back acknowledgements (to indicate that the messages have been processed). The actual prefetch limit value can be specified on a per-consumer basis.

It's a good idea to use a large prefetch limit if you want high performance and have high message volumes. If you have very few messages and each message takes a very long time to process, you might want to set the prefetch value to 1 so that a consumer is given one message at a time. Specifying a prefetch limit of zero means the consumer will poll for messages, one at a time, instead of having messages pushed to it.

Specifying the PrefetchPolicy

You can specify an instance of ActiveMQPrefetchPolicy on an ActiveMQConnectionFactory or ActiveMQConnection. This allows you to configure all the individual prefetch values, since each quality of service has its own default:

  • persistent queues (default value: 1000)
  • non-persistent queues (default value: 1000)
  • persistent topics (default value: 100)
  • non-persistent topics (default value: Short.MAX_VALUE - 1)
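As a minimal sketch, the policy can be set on the factory before creating connections. The broker URL and the specific prefetch values below are illustrative, not recommendations:

```java
// Sketch: configuring prefetch values via ActiveMQPrefetchPolicy.
// The broker URL and the values below are illustrative.
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

public class PrefetchExample {
    public static void main(String[] args) {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        ActiveMQPrefetchPolicy policy = new ActiveMQPrefetchPolicy();
        policy.setQueuePrefetch(500);        // persistent and non-persistent queues
        policy.setTopicPrefetch(1000);       // non-persistent topics
        policy.setDurableTopicPrefetch(50);  // persistent (durable) topic subscribers

        factory.setPrefetchPolicy(policy);   // applies to connections created from here on
    }
}
```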

It can also be configured on the connection URI used when establishing a connection to the broker.

To change the prefetch size for all consumer types you would use a connection URI similar to:
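For example, using the jms.prefetchPolicy.all URI option (the host, port, and value here are illustrative):

```
tcp://localhost:61616?jms.prefetchPolicy.all=50
```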

To change the prefetch size for just queue consumer types you would use a connection URI similar to:
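For example, using the jms.prefetchPolicy.queuePrefetch URI option (the host, port, and value here are illustrative):

```
tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=1
```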

It can also be configured on a per consumer basis using Destination Options.
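As a sketch, the consumer.prefetchSize destination option overrides the prefetch limit for a single consumer. The queue name is illustrative, and "session" is assumed to be an existing javax.jms.Session:

```java
// Sketch: overriding the prefetch limit for one consumer via a
// destination option. The queue name is illustrative; "session" is
// an existing javax.jms.Session.
Queue queue = session.createQueue("TEST.QUEUE?consumer.prefetchSize=10");
MessageConsumer consumer = session.createConsumer(queue);
```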

Pooled Consumers and prefetch

Consuming messages from a pool of consumers can be problematic due to prefetch. Unconsumed prefetched messages are only released when a consumer is closed, but with a pooled consumer the close is deferred (for reuse) until the consumer pool closes. This leaves prefetched messages unconsumed until the consumer is reused. This behaviour can be desirable from a performance perspective, but it can lead to out-of-sequence messages when there is more than one consumer in the pool.
For this reason, the org.apache.activemq.pool.PooledConnectionFactory does not pool consumers.
The problem is visible with the Spring DefaultMessageListenerContainer (DMLC) when the cache level is set to CACHE_CONSUMER and there are multiple concurrent consumers.
One solution to this problem is to use a prefetch of 0 for pooled consumers; that way, a consumer polls for a message on each call to receive(timeout). Another option is to enable the AbortSlowAckConsumerStrategy on the broker to disconnect consumers that have not acknowledged a message after some configurable time period.
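The prefetch-of-0 approach above can be sketched as follows. The queue name and timeout are illustrative, and "session" is assumed to be an existing javax.jms.Session obtained from the pooled connection:

```java
// Sketch: a pooled consumer that polls rather than prefetching.
// The queue name and timeout are illustrative; "session" is an
// existing javax.jms.Session from the pooled connection.
Queue queue = session.createQueue("POOLED.QUEUE?consumer.prefetchSize=0");
MessageConsumer consumer = session.createConsumer(queue);
Message message = consumer.receive(1000); // polls the broker for one message
```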

RAM versus Performance tradeoff

Setting a relatively high prefetch value leads to higher performance, so the default values are typically 1000 or more; they are usually higher for topics and for non-persistent messaging. The prefetch size dictates how many messages will be held in RAM on the client, so if your RAM is limited you may want to set a low value such as 1 or 10.

© 2004-2011 The Apache Software Foundation.
Apache ActiveMQ, ActiveMQ, Apache, the Apache feather logo, and the Apache ActiveMQ project logo are trademarks of The Apache Software Foundation. All other marks mentioned may be trademarks or registered trademarks of their respective owners.