Performance Improvements in ActiveMQ

(last updated 2013-Jul-31)

There are a number of ways to help scale ActiveMQ vertically and horizontally. By vertical scaling I mean pushing as many queues and as much traffic as possible through one server; horizontal scaling means making it easy to throw another server into the mix to take on load. Unless you only put a limited number of queues on each server, horizontal scaling generally needs vertical scaling as well.

Vertical scaling (usually needed for horizontal scaling anyway):
NIO instead of TCP: use nio for the transport connector setting in the broker, as in:
     <transportConnector name="openwire" uri="nio://0.0.0.0:61616"
               updateClusterClientsOnRemove="true" enableStatusMonitor="true"/>

If you want, you can also have both a TCP and an NIO transportConnector, with the TCP one reserved for the network-of-brokers configuration; make sure each has a different name and port number, and use that port number in the network-of-brokers configuration as well.
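As a sketch of that dual-connector setup (the names and port numbers here are placeholders, not taken from a real deployment):

```xml
<transportConnectors>
    <!-- NIO connector for regular client connections -->
    <transportConnector name="openwire-nio" uri="nio://0.0.0.0:61616"/>
    <!-- plain TCP connector on its own port; the other brokers'
         networkConnector URIs should point at this port -->
    <transportConnector name="openwire-tcp" uri="tcp://0.0.0.0:61617"/>
</transportConnectors>
```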
Why use NIO? With default settings and blocking I/O (i.e. not using NIO), there will be a thread per destination and a thread per connection, and that's before a network of brokers is turned on; thus it is important to turn NIO on.
[Image: view of ActiveMQ threads on producer/consumer links]

optimizedDispatch: set optimizedDispatch to true (in the queue policy). This only applies to queues and stops the system from using a separate thread for dispatching. optimizedDispatch is set in the broker's destination policy section, for example:
                <policyEntry queue=">" optimizedDispatch="true"/>
queue=">" means all queues (in ActiveMQ's configuration, > matches everything to the right and * matches one character).

dedicatedTaskRunner: turn off dedicatedTaskRunner.
Turning off dedicatedTaskRunner means that a pool of threads is used to handle the queues, as opposed to a new thread per queue. It can be turned off in the activemq script in the bin directory by adding -Dorg.apache.activemq.UseDedicatedTaskRunner=false to the ACTIVEMQ_OPTS line.
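For example (a sketch; this assumes the usual ACTIVEMQ_OPTS line in the bin/activemq script, which may differ between versions):

```shell
# In bin/activemq: extend the JVM options so queue work is handled by a
# pooled task runner rather than one dedicated thread per queue.
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
echo "$ACTIVEMQ_OPTS"
```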

It can be important to use both optimizedDispatch="true" and UseDedicatedTaskRunner=false together:
one relies on the thread pool and the other lets the thread pool be used. Discussions of JMS template gotchas also mention turning off dedicatedTaskRunner.

Caching: this is related more to issues with stuck and/or missing messages; try turning off caching on queues (and producer flow control), as in:
   <policyEntry queue=">" producerFlowControl="false" memoryLimit="1mb" useCache="false"/>
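Putting the vertical-scaling policy settings above together, the broker's destination policy section might look like this (the limits are illustrative, not recommendations):

```xml
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <!-- all queues: dispatch without a separate thread, disable the
                 message cache and producer flow control, cap per-queue memory -->
            <policyEntry queue=">" optimizedDispatch="true"
                         producerFlowControl="false"
                         memoryLimit="1mb" useCache="false"/>
        </policyEntries>
    </policyMap>
</destinationPolicy>
```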

Horizontal scaling:

Horizontal scaling is achieved by throwing more ActiveMQ instances at the problem, but we need the instances to be aware of each other (see the hybrid section below for when they aren't).
The key feature of the horizontal scaling configuration is the Network Connector.

Network Connector:
       <!-- use nio:// instead of tcp:// in the URI if desired.
            prefetchSize must be >0, as brokers don't poll for messages like consumers do.
            conduitSubscriptions="true" means multiple consumers subscribing to the same
            destination are treated as one consumer by the network.
            networkTTL controls how many times a message can move between brokers. -->
       <networkConnector uri="static:(tcp://localhost:61616)"
                         prefetchSize="1"
                         conduitSubscriptions="true"
                         networkTTL="3"/>
The networkTTL guidance comes from the ActiveMQ in Action book, which also has more info on configuring for HA. (Note that one presentation mentions prefetch=1000 on the network connector, which doesn't agree with other sources.)

A network of brokers will create higher thread counts because it relies on advisory topics: one thread for each topic, and one topic for each queue in use (where another thread was already handling the queue). The threading changes listed under Vertical Scaling will help keep the number of threads down.
One last thought: heavy load has been observed with the network-of-brokers solution.

Other Settings
Memory usage limits (relevant to producer flow control): version 5.6.0 added a warning to flag limit problems that previously went silent.

Increase memory:
 - give ActiveMQ more memory on startup via the activemq script in the bin directory by adding -Xmx2048M to the ACTIVEMQ_OPTS line (alongside the UseDedicatedTaskRunner flag).
 - set system usage (the elements nest inside a systemUsage section):

        <systemUsage>
            <systemUsage>
                <memoryUsage>
                    <!-- how much memory ActiveMQ can use; also the point at which a
                         producer will be blocked if sending too many messages;
                         the default is 64MB -->
                    <memoryUsage limit="1 gb"/>
                </memoryUsage>
                <storeUsage>
                    <!-- how much disk space persisted messages can use; will warn if
                         set higher than your available disk space; default is 100GB.
                         storeUsage won't work with 5.5.0 and lower versions -->
                    <storeUsage limit="100 gb"/>
                </storeUsage>
                <tempUsage>
                    <!-- how much disk space non-persisted messages can use; will error
                         if set higher than your available disk space; default is 50GB -->
                    <tempUsage limit="2 gb"/>
                </tempUsage>
            </systemUsage>
        </systemUsage>

The ActiveMQ docs (including ActiveMQ in Action) and the forums are a little inconsistent on whether memoryUsage applies to the whole broker or only to non-persistent messages.
To add to that, once the memoryUsage limit has been reached, producer flow control kicks in; if producer flow control is off, the sending thread is instead blocked until space is freed.

Issues moving messages:
Prefetch limits: set prefetch to a low value for network connectors (0 is only for consumer connections; network connectors need a value >= 1, since brokers don't poll for messages like consumers do).
Queue prefetch values can be changed in the broker's queue policy section using queuePrefetch.
Note that setting a low prefetch may have some negative impact on performance, because large numbers of messages are no longer handled as a group, which saves on overhead.
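For example, a per-queue prefetch can be set in the destination policy section (the value is illustrative):

```xml
<policyEntry queue=">" queuePrefetch="10"/>
```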

Hybrid scaling (traffic partitioning):
Vertical scaling is essential to get the most out of each ActiveMQ instance. Horizontal scaling can add more machines (although not everyone believes a network of brokers is a good choice).
Combining vertical with horizontal, while leaving out the overhead of a network of brokers, gives a hybrid approach that relies on partitioning traffic between servers. This could be done by putting all queues that begin with A on one instance and those beginning with B on another. The applications would need to know which ActiveMQ instance to send/receive messages and events on, i.e. you need to set these mappings manually. Hybrid scaling puts queue groups on specific brokers; it requires more application configuration, but it scales well, and the manual configuration is the known downside.
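As a minimal sketch of that partitioning (the class, queue names, and broker URLs are hypothetical; a real setup would likely load the mapping from configuration):

```java
import java.util.Map;

// Hypothetical traffic-partitioning lookup: queues whose names begin with "A"
// live on one broker, "B" on another. The URLs are placeholders.
public class BrokerRouter {
    private static final Map<String, String> BROKERS = Map.of(
        "A", "tcp://broker-a:61616",
        "B", "tcp://broker-b:61616");

    static String brokerForQueue(String queueName) {
        // The first letter of the queue name selects the broker instance.
        return BROKERS.get(queueName.substring(0, 1).toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(brokerForQueue("Accounts.Incoming")); // tcp://broker-a:61616
    }
}
```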

Producer Flow Control
Producer flow control sounds like a great idea, but one of the downsides is that it uses an extra thread per queue being controlled. In a network-of-brokers setup, the advisory topics are also flow controlled, and while you can turn off producer flow control on the queues, you can't on the advisory topics.
Possibly remove queue limits and set async close to false, or keep producer flow control with carefully sized per-queue limits:

<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb"/>

The memoryLimit should be less than the memoryUsage setting divided by the number of queues, so that the queues together cannot exhaust the broker-wide limit.
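The sizing rule above is simple arithmetic; as a sketch (the helper and its 20% headroom are assumptions of mine, not from the ActiveMQ docs):

```java
// Pick a per-queue memoryLimit (in MB) that keeps the sum of all queue
// limits safely under the broker-wide memoryUsage setting.
public class QueueMemorySizer {
    static double perQueueLimitMb(double memoryUsageMb, int queueCount) {
        // Leave 20% headroom below the broker-wide limit.
        return (memoryUsageMb * 0.8) / queueCount;
    }

    public static void main(String[] args) {
        // e.g. a 1 GB memoryUsage shared by 400 queues -> roughly 2 MB each
        System.out.println(perQueueLimitMb(1024, 400));
    }
}
```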

(Some of the discussion around producer flow control came from investigating CLOSE_WAIT connections.)
The above configuration changes are for scaling; however, scaling doesn't necessarily provide high availability. ActiveMQ has three basic options for HA, with one deprecated and due to be replaced by a new approach:
Shared nothing master/slave (note that this has been removed in ActiveMQ 5.8): this configuration doesn't have a good recovery method, so it's not ideal for many situations. Recovery requires downtime and copying files between systems. It relies on keeping a slave updated with all messages and changes, so that if the master goes down, the slave can take over.

Shared DB storage: relies on a database to provide storage and a locking location for two or more ActiveMQ instances to contend for. The one holding the lock is the master; should the master go down, the lock is released and a slave can take over. Simple configuration and relatively robust, but limited by the performance of the DB.
We ran this configuration extensively until one week when we were flooded with messages due to a new system going live (with no capacity planning). The message backlog grew, the database queries took longer and longer, and the performance of the system dropped to barely functioning levels. The situation was severe, and our best option was to switch to ActiveMQ's KahaDB disk-level storage for much faster throughput; our systems recovered quickly. We also had issues with the DB essentially locking for lengthy periods (10 minutes to hours) when someone pressed 'purge' on a large queue via the web admin UI. Because of those concerns about purging DB-backed queues, and because some of our systems generated too many DLQ messages, we wrote a script in the DB to delete messages from the corresponding DLQs; this was fairly easy, as the queue name is clear in the SQL table.

Shared storage master/slave: good if you have a SAN. Be sure to use NFSv4 or higher, and make sure that file locking works (and times out!). This configuration is much like shared DB storage but uses faster disk storage options, so higher throughput is attainable.

LevelDB replicated storage: coming in ActiveMQ 5.9. It seems to rely on the ActiveMQ brokers replicating state changes from the elected master to a number of slaves. When the master fails, the slave with the most recent updates is elected as the new master.

Other options
Consider Apache Apollo 1.0, which supports JMS, or HornetQ. RabbitMQ doesn't support JMS directly, otherwise it would be higher on our list of alternatives.
It looks like Apollo code may make it into ActiveMQ, so the prospects are looking better.

Dealing with performance issues
Even with a good setup there can be some performance issues (most of ours stemmed from the network of brokers). See other pages on this blog for more info, especially: Network of Brokers Revisited and Performance Issues.

