We recently decided to upgrade our ActiveMQ instances and thought we'd try replicated LevelDB, hoping it would be the answer to our shared-storage problems: a shared SQL store was too slow even on fast SQL Servers, and shared locking over NFS didn't actually lock. We ran three ActiveMQ brokers and three ZooKeeper instances, with the general setup and configuration following the Replicated_LevelDB page. We also put HAProxy in front as an interface to our front end so that one URL would serve all the brokers (any load balancer would do here). There was one feature we couldn't use with replicated LevelDB, delayed message queuing, but we could live with that.
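For reference, a broker configuration along the lines of the Replicated_LevelDB page looks roughly like this. The hostnames and paths below are placeholders, not our actual values:

```xml
<!-- In conf/activemq.xml on each of the three brokers.
     zkAddress lists all three ZooKeeper instances; replicas="3"
     means a quorum of two stores is needed to elect a master. -->
<persistenceAdapter>
  <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      bind="tcp://0.0.0.0:0"
      zkAddress="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
      zkPath="/activemq/leveldb-stores"
      hostname="broker1.example.com"/>
</persistenceAdapter>
```

Each broker gets the same `zkAddress` and `zkPath`, with `hostname` set to its own name; clients then connect through the load balancer rather than to any one broker directly.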
This three-node replicated configuration worked fine for weeks, although occasionally one of the nodes would fail and we'd need to bring it back; the logs indicated it had timed out and shut down. Then we had a cascade of these shutdowns: node 1 failed and restarted, then node 3, then node 2. To their credit, the nodes kept trying to restart, but after a few minutes of rapid failover and recovery they gave up and shut down entirely, often leaving one node up, but not as master, since a quorum wasn't available.
After restarting the nodes to get the system running again, we checked the logs and saw messages about ZooKeeper timeouts on the order of a few seconds (2-3s). These nodes are all in the same rack in the same data center; network times should always be 1-2ms (sometimes spiking to 7-10ms), not a thousand times higher. We left the cluster running to see whether it would recur, and within a month it did, in very much the same way. For operational sanity, we turned off replicated LevelDB and are back to a non-HA solution while we investigate. To put this in perspective, the problem could be our ZooKeepers and how we set them up; we have had to put effort into tuning ZooKeeper in the past. The ActiveMQ guys also mention (can't find the link now) that replicated LevelDB is cutting edge and might not be ready for full production use.
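One knob we would look at first if we revisit this: the store's ZooKeeper session timeout, which, if we remember the docs correctly, defaults to about 2s, suspiciously close to the timeouts in our logs. Something like the following would give the brokers more slack; the 10s value is a guess on our part, not a tested recommendation:

```xml
<!-- Hypothetical mitigation, not verified by us in production:
     raise the ZooKeeper session timeout so brief network or GC pauses
     don't cost a broker its session (and with it, master election). -->
<persistenceAdapter>
  <replicatedLevelDB
      directory="${activemq.data}/leveldb"
      replicas="3"
      zkAddress="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"
      zkPath="/activemq/leveldb-stores"
      zkSessionTimeout="10s"
      hostname="broker1.example.com"/>
</persistenceAdapter>
```

Note that ZooKeeper itself caps client session timeouts at `maxSessionTimeout` (20 × `tickTime` by default), so a matching change in `zoo.cfg` may be needed for a larger value to take effect.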