Thursday, September 27, 2012

Admin tools for ActiveMQ

Check out these pages on the ActiveMQ site

One important item to keep track of is the number of messages going through each queue.  This can be done with the activemq-admin tool. One of the most useful options of this tool is query which returns information about each queue.  The general format is:
   activemq-admin query

To get information about a particular queue, use -QQueue=queue_name; * works as a wildcard.

Due to our setup, we also need to set the RMI JMX details (port number, username, and password) via JMX flags: --jmxuser user_name --jmxpassword password_string --jmxurl service:jmx:rmi:///jndi/rmi://localhost:11223/jmxrmi
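Putting those pieces together, a full query invocation looks roughly like this; the install path is a placeholder and the credentials/port match the examples above:

```shell
AMQ_HOME=/opt/activemq   # placeholder install path - adjust for your machine
JMX_FLAGS="--jmxuser user_name --jmxpassword password_string --jmxurl service:jmx:rmi:///jndi/rmi://localhost:11223/jmxrmi"
# Query every queue matching my.* in one go (the * wildcard mentioned above)
echo "$AMQ_HOME/bin/activemq-admin query -QQueue=my.* $JMX_FLAGS"
```

Echoing the command first is a handy way to sanity-check the flags before running it against a live broker.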

This script collects stats, such as the number of dequeued messages, every 120 seconds into a file with the starting timestamp as part of the file name:

i=0
nd=`date +%d.%m.%y-%H.%M.%S`
file_out="queue_stats_$nd.log"
echo $nd
echo $file_out

while [ $i -lt 30 ]
do
 echo -n "date: " >> $file_out
 date >> $file_out
 ../bin/activemq-admin query -QQueue=my.important.queue --jmxuser user_name --jmxpassword password_string --jmxurl service:jmx:rmi:///jndi/rmi://localhost:11223/jmxrmi >> $file_out
 echo " " >> $file_out
 let i=$i+1
 sleep 120
done

Running activemq-admin query against a queue returns several useful statistics, such as the number of enqueued and dequeued messages.
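The stats file written by the script above can be mined for those counts. A minimal sketch, assuming the query output uses the "Name = value" line format our broker prints (the sample text below stands in for real output):

```shell
# Sample of activemq-admin query output - the exact fields are an assumption
sample='DestinationName = my.important.queue
EnqueueCount = 1500
DequeueCount = 1420
QueueSize = 80'
# Pull the counts out with awk and compute the backlog
enq=$(printf '%s\n' "$sample" | awk '$1=="EnqueueCount" {print $3}')
deq=$(printf '%s\n' "$sample" | awk '$1=="DequeueCount" {print $3}')
echo "backlog: $((enq - deq))"
```

Running the same extraction over successive snapshots in the stats file gives a rough message-flow rate per interval.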

We used the activemq-admin query command to diagnose a problem with message flow - more on that here.

For a way to find which queues have old messages and to list the number of old messages this post has more.

Monday, July 30, 2012

Network of Brokers Revisited

Is an ActiveMQ network of brokers a reliable choice? As mentioned in the performance improvements post, a network of brokers is a way to horizontally scale ActiveMQ. Is it a reliable choice, though? It looks increasingly unlikely based on our experience.

We switched to a KahaDB backed network of brokers configuration when our MS SQL Server backed master/slave configuration couldn't handle some heavy load. We didn't have shared filesystems (like NAS devices) so network of brokers was the only other failover option.

Our network connector is simple. One broker establishes a duplex connection to the other broker - only one broker has the connector.

Initially, the network of brokers ran well and was (and still is) faster than before, suffering from an issue only about once every three months. As the number of our queues grew (rapid development), the network of brokers became increasingly troublesome. It became clear that one broker (the receiver of the network connector) was so burdened by thread load from both the queues and the network connector (ActiveMQ's connector, not the tcp/ip network connection) that it was doing nothing except being a burden on the other broker. We used a number of the vertical scaling features mentioned in the performance post to bring that thread load under control and get both brokers back into operation.

The new configuration ran well until someone dumped 500k+ messages onto a couple of queues in a short amount of time. Even with the new configuration, the network connector broke under this load. We'd seen this happen in a few of our heavy, repeated load tests, but thought it might be an artifact of the way we were running the tests. Sadly, it doesn't look like it was.

We now feel that under heavy load the network of brokers will lose connections on certain queues, leaving the two brokers working in a split-brain setup - often with message producers on one broker and consumers on the other. The fix is to restart a broker, which resets the network connection and invokes failover behaviour (consumers and producers on one broker). Expect some delay (up to a few minutes) in restarting if this happens during heavy load, or rather with lots of db files, as ActiveMQ/KahaDB has to read a great deal of file data.
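A quick way to spot the split-brain state is to compare consumer and producer counts per broker: a queue showing producers but no consumers on one broker is suspect. A sketch, where the sample text stands in for an activemq-admin query run against one broker (the field names are an assumption):

```shell
# Stand-in for the query output from one broker
broker1='ConsumerCount = 0
ProducerCount = 4'
c1=$(printf '%s\n' "$broker1" | awk '$1=="ConsumerCount" {print $3}')
p1=$(printf '%s\n' "$broker1" | awk '$1=="ProducerCount" {print $3}')
# Producers with no consumers on this side suggests a lost network connection
if [ "$c1" -eq 0 ] && [ "$p1" -gt 0 ]; then
  echo "suspect: producers with no consumers - check the network connector"
fi
```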

The network of brokers was our configuration to handle failover and heavy load, but if it is unreliable during very heavy load, then it's not right for us.

What's the next step?  The next step is to configure a shared filesystem (using NFS v4) and try an active/passive configuration with shared KahaDB data store.

Sunday, July 15, 2012

KahaDB log files not clearing in ActiveMQ

After we upgraded to ActiveMQ 5.6.0 and changed a number of configuration options listed in the vertical scaling post, things were moving brilliantly (see the post on testing the new configuration as well).

However, after a couple of weeks, disk space on one of the brokers continued to grow. Looking at the data/kahadb directory, we saw that the log files all the way back to log 1 still existed. This sounded like a frequent problem with ActiveMQ where it doesn't clear its log files after use (users seem to log this issue every other release). Only the broker that received the duplex network connection was suffering; the broker that established that connection was fine. It seemed like a problem with the acknowledgement of consumed messages.
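A quick check while diagnosing this is to count the journal files and their disk usage; the temporary directory and fake files below are only a stand-in for ActiveMQ's data/kahadb:

```shell
# Stand-in for data/kahadb - point this at the real directory on a broker
KAHADB_DIR=$(mktemp -d)
touch "$KAHADB_DIR/db-1.log" "$KAHADB_DIR/db-2.log" "$KAHADB_DIR/db-3.log"
# How many journal files are piling up, and how much space do they take?
count=$(ls "$KAHADB_DIR"/db-*.log | wc -l)
echo "journal files: $count"
du -sh "$KAHADB_DIR"
```

A steadily growing count over days is the symptom described here; a healthy broker recycles old db-*.log files.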

We turned on KahaDB trace logging as detailed in the ActiveMQ docs, adding a kahadb appender to conf/log4j.properties along these lines:
   log4j.appender.kahadb=org.apache.log4j.RollingFileAppender
   log4j.appender.kahadb.file=${activemq.base}/data/kahadb.log
   log4j.appender.kahadb.layout=org.apache.log4j.PatternLayout
   log4j.appender.kahadb.layout.ConversionPattern=%d [%-15.15t] %-5p %-30.30c{1} - %m%n
   log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE, kahadb
and used jconsole's access to the reload-log4j method on the broker MBean to reload the logging file.

This showed that the broker didn't find any logs to clear - the first attempt to find a free log produced no candidates, and the clearing failed. This didn't add much information, except that there was a fundamental problem. We asked a question on the ActiveMQ forums, which was as useless as our previous questions. This left us with two obvious options:
1) restart the troublesome broker
2) clear any needed messages from the broker, shut it down, clear off the KahaDB files and start fresh.

We started with option 1, but after the broker started taking a while to load the many, many GBs of old logs, we grew concerned about message replay due to unacknowledged message consumption. So we shut it down (we had already saved any pending messages), cleared off the KahaDB files, and started the broker again.
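The reset we performed boils down to a few commands. A sketch only: the install path is an assumption, and the guard is there because the broker binary won't exist on an arbitrary machine:

```shell
AMQ_HOME=${AMQ_HOME:-/opt/activemq}   # assumed install location
if [ -x "$AMQ_HOME/bin/activemq" ]; then
  "$AMQ_HOME/bin/activemq" stop        # stop the troubled broker
  rm -rf "$AMQ_HOME/data/kahadb"       # clear the journal and index files
  "$AMQ_HOME/bin/activemq" start       # start with a fresh store
else
  echo "no broker install at $AMQ_HOME"
fi
```

Make sure any pending messages are saved or drained before removing the store - this deletes everything persisted in KahaDB.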

After several hours, the troubled broker looked healthy again and was clearing off early log files.  Issue closed for now!

Tuesday, June 26, 2012

Performance tests in ActiveMQ

Wanting to see what kind of performance ActiveMQ has (in preparation for making the changes mentioned in the scaling post), we did some performance testing. All of these tests were run using versions of the jmeter script in another post and ActiveMQ 5.6.0.

We've used JMeter's throughput metric (samples processed per unit of time) as the main indicator of performance. Since our threads only put messages on a queue or took messages off a queue, it serves as a close guide to the number of messages that ActiveMQ can send through the system.

Using my relatively new Windows 7 workstation and KahaDB, we saw ~70k messages/min move - that covered both putting them on and taking them off the queues.

On one of our live servers (otherwise not in use), again with KahaDB, adding an extra queue or two caused memory issues due to too many threads. While that test failed to run entirely, it showed how quickly threading can create issues.
Changing to more loops and fewer threads and running the test several times saw ~100k/minute for an 80s test using KahaDB (125k samples, dev 50, sample 7, average 25, median 12).
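The arithmetic behind the ~100k/min figure is just samples over elapsed minutes; reproducing it from the numbers in that run:

```shell
# Numbers from the 80-second KahaDB run above
samples=125000
secs=80
# Throughput = samples per elapsed minute
per_min=$((samples * 60 / secs))
echo "$per_min msgs/min"
```

125k samples in 80 seconds works out to 93750/min, which we rounded to ~100k/min.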

Running several tests on that live system with a SQL Server backed data store seemed to run at 32k/minute, but finished at 50k/minute (122k samples, dev 100, sample 6, aver 45, median 16, took 150 secs).

So far it looks like a definite performance gain for KahaDB, but further tests showed that SQL Server knew how to optimise for our tests and closed that gap significantly. The servers are all virtual machines running on the same hardware; however, the ActiveMQ/KahaDB server has 1 cpu and a few GBs of memory, while the SQL Server has several cpus dedicated to it and 32GB of memory - more than enough to hold all of that data in memory. The concern was that under normal live loads SQL Server wouldn't hold our data in memory and would show clearly slower performance.

Larger tests with more queues and fewer threads which looped more:
Write only - throughput on write is directly a function of the number of threads:
3 thread each (multiple runs): 35k/min throughput, 41secs
5 threads each: 50k/min
10 threads each: 92k/min
15 threads each: 132k/min
20 threads each: 145k/min
25 threads each: memory issues, 60k/min then 0k/min

Read only
3 threads: 12k samples, 3k/min, dev 288, ave 91, median 0, 230 secs
10 threads each: 94k/min
Further tests ran into a limitation with the setup.

SQL Server: with a long write-only job and 20 threads, SQL Server eventually gets close to KahaDB performance despite starting much lower. Read only achieves ~12k/min, which is far less.

Repeated runs of a read/write version have seen maximum values for KahaDB hit 200k/min (34s) and SQL Server around 150k/min. Readers seem faster on KahaDB as they close sooner.
With fresh data (tables), SQL Server runs at an average of ~80k/min (70k/min (68s), 2nd run: 101k/min (47s), 3rd 88k/min (55s). 4th 66k/min)
With fresh data stores, KahaDB averaged 150k/min (140k/min (34s), 2nd 148k/min, 3rd 156k/min, 4th 169k/min), making it definitely the performance winner.

We've glossed over a few details of the tests, largely because there are so many other factors (time of day, other load, order of execution, etc.) that might affect any one or many tests. In other words, this is an indicator of performance differences; any particular setup will differ.

Sunday, May 20, 2012

Performance Improvements in ActiveMQ

(last updated 2013-Jul-31)

There are a number of ways to scale ActiveMQ vertically and horizontally. Horizontal scaling (unless just putting a limited number of queues on each server) generally needs vertical scaling as well. (By vertical, I mean putting as many queues and as much traffic as possible through one server; horizontal means making it easy to throw another server into the mix to take the load.)

Vertical scaling (usually needed for horizontal scaling anyway):
NIO instead of TCP: use nio in the broker's transport connector setting, as in
     <transportConnector name="openwire" uri="nio://"
               updateClusterClientsOnRemove="true" enableStatusMonitor="true"/>

See also: if you want, you can have both a TCP and an NIO transportConnector for use with the network of brokers configuration - make sure to use a different port number and name, and use that port number in the network of brokers configuration as well.
Why use NIO? With default settings and blocking I/O (i.e. not using NIO), there will be a thread per destination and a thread per connection - and that's without a network of brokers turned on - so it is important to turn NIO on.
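You can see the per-queue and per-connection threads in a jstack dump of the broker. A sketch of counting them from a saved dump; the thread names below are illustrative samples rather than output from a real broker:

```shell
# Stand-in for a jstack dump of the broker process
dump='"ActiveMQ Transport: tcp:///10.0.0.5:53210" daemon prio=10
"QueueThread:queue://my.important.queue" daemon prio=5
"QueueThread:queue://my.other.queue" daemon prio=5'
# Count the per-queue dispatch threads
queue_threads=$(printf '%s\n' "$dump" | grep -c '^"QueueThread')
echo "per-queue threads: $queue_threads"
```

On a real broker, run jstack against the broker pid and grep the dump the same way; the count grows with the number of active destinations.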
View of ActiveMQ threads on producer/consumer links:

Optimized dispatch: set optimizedDispatch to true in the queue policy. This only applies to queues and stops the system from using a separate thread for dispatching. optimizedDispatch is set in the broker's destination policy section, for example:
                <policyEntry queue=">" optimizedDispatch="true"/>
queue=">" means all queues (in ActiveMQ's conf, > means everything to the right, * means one character).

Dedicated task runner: turn off dedicatedTaskRunner.
Turning off dedicatedTaskRunner means that a pool of threads is used to handle the queues, as opposed to a new thread per queue. This can be done in the activemq file in the bin directory by adding -Dorg.apache.activemq.UseDedicatedTaskRunner=false to the ACTIVEMQ_OPTS line.

It can be important to use both optimizedDispatch=true and UseDedicatedTaskRunner=false: one relies on the thread pool and the other lets the thread pool be used. This page also mentions turning off dedicatedTaskRunner, as well as mentioning JMS template gotchas.

Caching: this is related more to issues with stuck and/or missing messages - try turning off caching on queues (and producer flow control), as in:
   <policyEntry queue=">" producerFlowControl="false" memoryLimit="1mb" useCache="false">

Horizontal scaling:

Horizontal scaling is achieved by throwing more ActiveMQ instances at the problem, but we need the instances to be aware of each other (see hybrid for when they aren't aware).  
The key feature of the horizontal scaling configuration is the Network Connector.

Network Connector:
       <networkConnector uri="static:(tcp://localhost:61616)" <!-- or use nio instead of tcp -->
         prefetchSize="1" <!-- must be >= 1 as brokers don't poll for messages like consumers do -->
         conduitSubscriptions="true" <!-- multiple consumers subscribing to the same destination are treated as one consumer by the network -->
         networkTTL="3"/> <!-- controls how many times a message can move around brokers -->
Much of this comes from the ActiveMQ in Action book, which has more info on configuring for HA (note that one presentation mentions prefetch=1000 on the network connector, which doesn't agree with other sources).

A network of brokers will create higher thread counts as it relies on advisory topics - one thread for each topic, and one topic for each queue in use where another thread was already handling the queue. The threading changes listed under Vertical Scaling will help keep the number of threads down.
One last thought - check this post for an observation of heavy load with the network of broker solution.

Other Settings
Memory Usage Limits (relevant to producer flow control): version 5.6.0 added a warning comment to flag issues that previously went silent.

Increase memory:
 - give ActiveMQ more memory on startup via the activemq file in the bin directory, adding -Xmx2048M to the ACTIVEMQ_OPTS line (alongside the UseDedicatedTaskRunner flag above).
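Both JVM tweaks end up on the same line in bin/activemq. A sketch of the combined setting; the 2048M heap is an example value, not a recommendation:

```shell
# The combined OPTS line from bin/activemq - heap size is an example value
ACTIVEMQ_OPTS="-Xmx2048M -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
echo "$ACTIVEMQ_OPTS"
```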
- set system usage in the broker config, for example:
       <systemUsage>
         <systemUsage>
           <memoryUsage>
             <memoryUsage limit="1 gb"/> <!-- how much memory ActiveMQ can use; also the point at which a producer will be blocked if sending too many messages; the default is 64MB -->
           </memoryUsage>
           <storeUsage> <!-- won't work with 5.5.0 and lower versions -->
             <storeUsage limit="100 gb"/> <!-- how much disk space persisted messages can use; will warn if set higher than your available disk space; default is 100GB -->
           </storeUsage>
           <tempUsage>
             <tempUsage limit="2 gb"/> <!-- how much disk space non-persisted messages can use; will error if set higher than your available disk space; default is 50GB -->
           </tempUsage>
         </systemUsage>
       </systemUsage>

The ActiveMQ docs (including ActiveMQ in Action) and the forums are a little inconsistent on the meaning of memoryUsage - whether it applies to the whole broker or only to non-persistent messages.
The latter link seems to make it clear :)  To add to that, if the memoryUsage limit has been reached, producer flow control will kick in; if producer flow control is off, then the sending thread will be blocked until space is free.

Issues moving messages:
Prefetch limits: set prefetch to a low value (0 is only for consumer connections; network connectors need a value of at least 1, since brokers don't poll for messages like consumers do).
Queue values can be changed in the broker's queue policy section using queuePrefetch.
Note that setting a low prefetch may have some negative impact on performance, as large numbers of messages are no longer handled as a group, which saves on overhead.

Hybrid scaling (traffic partitioning):
Vertical scaling is essential to get ActiveMQ to produce the most per instance, and horizontal scaling can add more machines (although not all believe that networks of brokers are a good choice).
Combining vertical with horizontal, but leaving out the overhead of a network of brokers, is a hybrid approach relying on partitioning traffic between servers. This could be done by putting all queues that begin with A on one instance and B on another. The applications would need to know which ActiveMQ instance to send/receive messages/events for B - i.e. you set these mappings manually. Queue groups on specific brokers require more app configuration but provide better scaling; the manual configuration is the known downside.

Producer Flow Control
Producer flow control sounds like a great idea, but one of the downsides is that it uses an extra thread per queue being controlled. In a network of brokers setup, the advisory topics are also flow controlled, and while you can stop producer flow control on the queues, you can't on the advisory topics.
Possibly remove queue limits and set async close to false:

<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb"> The memoryLimit should be less than the memoryUsage setting divided by the number of queues, to avoid reaching memory limits.
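The sizing rule above is simple division. A worked example with illustrative numbers (a 1 GB broker-wide memoryUsage and 500 queues are assumptions, not our real figures):

```shell
# Broker-wide memoryUsage limit, in MB, and an assumed queue count
memory_usage_mb=1024
queues=500
# Per-queue ceiling: stay below broker limit / queue count
per_queue_mb=$((memory_usage_mb / queues))
echo "${per_queue_mb} MB per queue ceiling - so a 1mb policy limit is safe"
```

With these numbers the ceiling is 2 MB per queue, so the 1mb memoryLimit in the policyEntry above leaves headroom.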

Some links related to producer flow control (this and the last link were looking for CLOSE_WAIT reasons).
The above configuration changes are for scaling; however, scaling doesn't necessarily handle high availability. ActiveMQ has 3 basic options for HA, with one being deprecated and replaced with a new approach soon:
Shared nothing (note that this has been removed in ActiveMQ 5.8) - relies on keeping a slave updated with all messages and changes, such that if the master goes down, the slave can take over. The shared nothing master/slave configuration doesn't have a good recovery method - recovery requires downtime and copying files between systems - so it's not ideal for many situations.

Shared DB storage - relies on a DB to provide storage and a locking location for two or more ActiveMQ instances to compete over. The one holding the lock is the master; should the master go down, the lock is released and a slave can take over. Simple configuration and relatively robust, but limited by the performance of the DB.
We ran this configuration extensively until one week when we were flooded with messages due to a new system going live (with no planning made). The message backlog grew, the data queries took longer and longer, and the performance of the system dropped to barely functioning levels. The situation was severe, and our option was to go with ActiveMQ's KahaDB disk level storage for much faster throughput - our systems recovered quickly. We've also had issues with the DB essentially locking for lengthy periods (10 minutes to hours) if someone pressed 'purge' on a large queue via the web admin UI while the system was DB backed. Due to our concerns about purging DB backed queues and some of our systems generating too many DLQ messages, we wrote a script in our DB to delete messages from the corresponding DLQs; this was fairly easy, as the queue name is clear in the SQL table.
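A sketch of that DLQ cleanup; the table and column names assume the default ActiveMQ JDBC schema (ACTIVEMQ_MSGS with a CONTAINER column holding the destination name), and the statement would be run through your usual SQL client:

```shell
# Destination to clean - a placeholder DLQ name
DLQ='queue://ActiveMQ.DLQ'
# Delete its messages from the default ActiveMQ JDBC message table
SQL="DELETE FROM ACTIVEMQ_MSGS WHERE CONTAINER = '$DLQ';"
echo "$SQL"
```

Check the actual table and column names in your own schema first, and run inside a transaction so a mistaken queue name can be rolled back.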

Shared storage master/slave - good if you have a SAN. Be sure to use NFSv4 or higher and make sure that file locking works (and times out!). This configuration is much like the shared DB storage, but utilizes faster disk storage options - higher throughput is attainable.

LevelDB replicated storage - coming in ActiveMQ 5.9. It seems to rely on the ActiveMQ brokers communicating state changes from the elected master to a number of slaves. When the master fails, a slave will be elected master based on which slave has the most recent updates.

Other options
Consider Apache Apollo 1.0, as it supports JMS, or HornetQ. RabbitMQ doesn't support JMS directly; otherwise it would be higher on our list of alternatives.
It looks like Apollo code may make it into ActiveMQ, so the prospects are looking better.

Dealing with performance issues
Even with a good setup there can be some performance issues (most of ours stemmed from the network of brokers). See other pages on this blog for more info, especially: Network of Brokers Revisited and Performance Issues.

Tuesday, May 15, 2012

JMeter test file for ActiveMQ

Here is the text of the jmx file - start JMeter and read it in to get started:

<?xml version="1.0" encoding="UTF-8"?>
<jmeterTestPlan version="1.2" properties="2.2">
    <TestPlan guiclass="TestPlanGui" testclass="TestPlan" testname="Test Plan" enabled="true">
      <stringProp name="TestPlan.comments"></stringProp>
      <boolProp name="TestPlan.functional_mode">false</boolProp>
      <boolProp name="TestPlan.serialize_threadgroups">false</boolProp>
      <elementProp name="TestPlan.user_defined_variables" elementType="Arguments" guiclass="ArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
        <collectionProp name="Arguments.arguments"/>
      <stringProp name="TestPlan.user_define_classpath"></stringProp>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Subscribers" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">2</stringProp>
        <stringProp name="ThreadGroup.num_threads">200</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <longProp name="ThreadGroup.start_time">1336426286000</longProp>
        <longProp name="ThreadGroup.end_time">1336426286000</longProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
        <SubscriberSampler guiclass="JMSSubscriberGui" testclass="SubscriberSampler" testname="ActiveMQ-JMS Subscriber1" enabled="true">
          <stringProp name="jms.jndi_properties">false</stringProp>
          <stringProp name="jms.initial_context_factory">org.apache.activemq.jndi.ActiveMQInitialContextFactory</stringProp>
          <stringProp name="jms.provider_url">tcp://localhost:61616</stringProp>
          <stringProp name="jms.connection_factory">ConnectionFactory</stringProp>
          <stringProp name="jms.topic">dynamicQueues/read-write</stringProp>
          <stringProp name="jms.security_principle"></stringProp>
          <stringProp name="jms.security_credentials"></stringProp>
          <boolProp name="jms.authenticate">false</boolProp>
          <stringProp name="jms.iterations">10</stringProp>
          <stringProp name="jms.read_response">true</stringProp>
          <stringProp name="jms.client_choice">jms_subscriber_receive</stringProp>
          <stringProp name="jms.timeout">1000</stringProp>
      <ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Publishers" enabled="true">
        <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
        <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
          <boolProp name="LoopController.continue_forever">false</boolProp>
          <stringProp name="LoopController.loops">50</stringProp>
        <stringProp name="ThreadGroup.num_threads">100</stringProp>
        <stringProp name="ThreadGroup.ramp_time">1</stringProp>
        <longProp name="ThreadGroup.start_time">1336426413000</longProp>
        <longProp name="ThreadGroup.end_time">1336426413000</longProp>
        <boolProp name="ThreadGroup.scheduler">false</boolProp>
        <stringProp name="ThreadGroup.duration"></stringProp>
        <stringProp name="ThreadGroup.delay"></stringProp>
        <PublisherSampler guiclass="JMSPublisherGui" testclass="PublisherSampler" testname="JMS Publisher" enabled="true">
          <stringProp name="jms.jndi_properties">false</stringProp>
          <stringProp name="jms.initial_context_factory">org.apache.activemq.jndi.ActiveMQInitialContextFactory</stringProp>
          <stringProp name="jms.provider_url">tcp://localhost:61616</stringProp>
          <stringProp name="jms.connection_factory">ConnectionFactory</stringProp>
          <stringProp name="jms.topic">dynamicQueues/read-write</stringProp>
          <stringProp name="jms.security_principle"></stringProp>
          <stringProp name="jms.security_credentials"></stringProp>
          <stringProp name="jms.text_message">hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!hello!!!!</stringProp>
          <stringProp name="jms.input_file"></stringProp>
          <stringProp name="jms.random_path"></stringProp>
          <stringProp name="jms.config_choice">jms_use_text</stringProp>
          <stringProp name="jms.config_msg_type">jms_text_message</stringProp>
          <stringProp name="jms.iterations">10</stringProp>
          <boolProp name="jms.authenticate">false</boolProp>
      <ResultCollector guiclass="GraphVisualizer" testclass="ResultCollector" testname="Graph Results" enabled="true">
        <boolProp name="ResultCollector.error_logging">false</boolProp>
          <value class="SampleSaveConfiguration">
        <stringProp name="filename"></stringProp>


Testing ActiveMQ with JMeter

For testing ActiveMQ using JMeter, the JMeter site has some useful info.  Here's one particular version of this with some extra detail (well a little :)

One important piece of info: running these tests required activemq-all-5.4.0.jar or older, due to a dependency on a prefetchQueue class which newer jars don't have - something in the JMeter setup for JMS, I guess, although I haven't looked into it. The ActiveMQ site should have older versions of ActiveMQ available - grab one and grab the needed jar. On the day I went, they'd deleted all of their older versions, but I was able to find a copy via an online maven repository (and downloaded it from there).

Start up JMeter. In the test plan, add a Thread Group (I've called it Subscribers): Add -> Threads(users) -> ThreadGroup.  Then on Subscribers add a Sampler->JMS Subscriber and configure the JMS Subscriber similar to below.

Here is the test subscriber set up:
Initial Connection Factory: org.apache.activemq.jndi.ActiveMQInitialContextFactory
Provider URL: tcp://localhost:61616 (obviously, this can vary)
Connection factory: ConnectionFactory
Destination: dynamicQueues/MyQ4  (this requires the magic name 'dynamicQueues/' as a prefix. Obvious? No, it's not.  For Topics use dynamicTopics/)
Number of samples to aggregate: 10 (aggregate them, otherwise they'll arrive too fast to deal with)
Timeout (ms): 1000 - set this to something, otherwise the subscriber will sit waiting for a message

I'll upload the jmx file that backs these tests for ease of use.

To set up the Publisher, it is much the same - a new ThreadGroup (renamed Publisher here), then a new Sampler -> JMS Publisher, filling in the fields with similar/identical values, although you do get to configure the type of message.

Don't forget to add a Listener -> Graph Results to see a little output and turn on Log Viewer under Options for extra info.  Press the green button to run and check your ActiveMQ for traffic.
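Once the plan works from the GUI, it can also be run headless, which is kinder to the machine during long load tests. A sketch; the plan filename is an assumption, and jmeter is assumed to be on the PATH:

```shell
# Assumed name of the saved test plan
JMX_PLAN=activemq-test.jmx
# -n = non-GUI mode, -t = test plan, -l = results log
RUN_CMD="jmeter -n -t $JMX_PLAN -l results.jtl"
echo "$RUN_CMD"
```

The results.jtl file can then be loaded into a Graph Results listener afterwards for the same throughput view.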
