Features > Persistence

ActiveMQ Classic V5.14.2 / V5.17.0

ActiveMQ Classic 5.14.2 was the first release after LevelDB was deprecated; the implementation was removed entirely in 5.17.0. We once again recommend you use KahaDB.
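A minimal KahaDB setup is a small change to the broker's persistence adapter. The directory and journal size below are illustrative values, not requirements:

```xml
<broker brokerName="broker" xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <!-- KahaDB is the recommended store; directory and journalMaxFileLength
         are example values -->
    <kahaDB directory="activemq-data" journalMaxFileLength="32mb"/>
  </persistenceAdapter>
</broker>
```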

ActiveMQ Classic V5.9

In ActiveMQ Classic 5.9, the Replicated LevelDB Store was introduced. It uses Apache ZooKeeper to pick a master from a set of broker nodes configured to replicate a single LevelDB Store, then keeps all slave LevelDB Stores synchronized with the master by replicating every update to them. It was expected to become the preferred Master Slave configuration going forward.
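A replicated LevelDB setup along those lines could be configured roughly as follows; the host names, ports, and ZooKeeper path are placeholder values for illustration:

```xml
<broker brokerName="broker" xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <!-- replicas is the total number of nodes in the replication group;
         zkAddress/zkPath/hostname are placeholders for your environment -->
    <replicatedLevelDB
        directory="activemq-data"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="zoo1:2181,zoo2:2181,zoo3:2181"
        zkPath="/activemq/leveldb-stores"
        hostname="broker1"/>
  </persistenceAdapter>
</broker>
```

With `replicas="3"`, an update is acknowledged once a quorum of two nodes has it, and ZooKeeper decides which node acts as master.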

ActiveMQ Classic V5.8

In ActiveMQ Classic 5.8, the LevelDB Store was introduced. The LevelDB Store is a file based persistence database. It has been optimized to provide even faster persistence than KahaDB. Although not yet the default message store, we expect this store implementation to become the default in future releases.
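Selecting the (non-replicated) LevelDB store is a one-line change to the persistence adapter; the directory is an example value:

```xml
<persistenceAdapter>
  <!-- standalone LevelDB store; directory is an example value -->
  <levelDB directory="activemq-data"/>
</persistenceAdapter>
```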

ActiveMQ Classic V5.3

From 5.3 onwards we recommend you use KahaDB, which offers improved scalability and recoverability over the AMQ Message Store.
The AMQ Message Store, although faster than KahaDB, does not scale as well and takes longer to recover.
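For comparison, the legacy AMQ Message Store is selected the same way as the other adapters; the directory below is an example value:

```xml
<persistenceAdapter>
  <!-- legacy AMQ Message Store; directory is an example value -->
  <amqPersistenceAdapter directory="activemq-data"/>
</persistenceAdapter>
```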

ActiveMQ Classic V4

For long term persistence we recommend using JDBC coupled with our high performance journal. You can use JDBC alone if you wish, but it's quite slow.

Our out-of-the-box default configuration uses Apache Derby as the default database, which is easy to embed, but we support all the major SQL databases; just reconfigure your JDBC settings in the Xml Configuration.
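As a sketch, an embedded Derby datasource can be declared as a Spring bean and referenced from the persistence configuration; the bean id and database name below are illustrative:

```xml
<!-- embedded Derby datasource; bean id and databaseName are example values -->
<bean id="derby-ds" class="org.apache.derby.jdbc.EmbeddedDataSource">
  <property name="databaseName" value="derbydb"/>
  <!-- creates the database on first use -->
  <property name="createDatabase" value="create"/>
</bean>
```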

High performance journal - ActiveMQ Classic 4.x

To achieve high performance of durable messaging in ActiveMQ Classic V4.x we strongly recommend you use our high performance journal - which is enabled by default. This works rather like a database; messages (and transaction commits/rollbacks and message acknowledgements) are written to the journal as fast as possible - then at intervals we checkpoint the journal to the long term persistence storage (in this case JDBC).

It's common when using queues, for example, that messages are consumed fairly shortly after being published; so you could publish 10,000 messages and only have a few messages outstanding - so when we checkpoint to the JDBC database, we often have only a small number of messages to actually write to JDBC. Even if we have to write all the messages to JDBC, we still get performance gains with the journal, since we can insert the messages into the JDBC database in one large transaction batch, boosting performance on the JDBC side.
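In configuration terms, the journal sits in front of JDBC. The journaled-JDBC adapter sketched below (attribute values are examples) writes to local journal log files and checkpoints them to the configured datasource:

```xml
<persistenceAdapter>
  <!-- journal log file count/size and directory are example values;
       #my-ds refers to a datasource bean defined elsewhere in the file -->
  <journaledJDBC journalLogFiles="5"
                 journalLogFileSize="20mb"
                 dataDirectory="../activemq-data"
                 dataSource="#my-ds"/>
</persistenceAdapter>
```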

Our journal is based on much of the great work in the Howl project, and we keep close ties to the Howl community. However, since ActiveMQ Classic has to handle arbitrarily large message sizes, we've had to make our journal handle any size of message, so we don't use the fixed-size record model that Howl uses.

Configuring persistence

For full explicit control over configuration, check out the Xml Configuration. However, a quick way to set which persistence adapter to use is to set a system property to the class name of the PersistenceAdapter implementation.


When running the broker from the command line, we look for the activemq.xml on the classpath unless you specify one to use. e.g.
AMQ 4.x

activemq xbean:file:myconfig.xml

AMQ 3.x

activemq myconfig.xml

or just

activemq

Here is a sample XML configuration which shows how to configure the journal and the JDBC persistence.

AMQ 5.x

<beans xmlns="http://www.springframework.org/schema/beans">
  <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"/>
  <broker useJmx="true" xmlns="http://activemq.apache.org/schema/core">
      <!-- <networkConnector uri="multicast://default?initialReconnectDelay=100"/>
           <networkConnector uri="static://(tcp://localhost:61616)"/> -->
      <persistenceFactory>
          <journalPersistenceAdapterFactory journalLogFiles="5" dataDirectory="${basedir}/target"/>
          <!-- To use a different dataSource, use the following syntax: -->
          <!-- <journalPersistenceAdapterFactory journalLogFiles="5" dataDirectory="${basedir}/activemq-data" dataSource="#mysql-ds"/> -->
      </persistenceFactory>
      <transportConnectors>
          <transportConnector uri="tcp://localhost:61636"/>
      </transportConnectors>
  </broker>

  <!-- MySql DataSource Sample Setup -->
  <bean id="mysql-ds" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
    <property name="url" value="jdbc:mysql://localhost/activemq?relaxAutoCommit=true"/>
    <property name="username" value="activemq"/>
    <property name="password" value="activemq"/>
    <property name="poolPreparedStatements" value="true"/>
  </bean>
</beans>

For more details see the Initial Configuration guide.

JDBC Persistence without Journaling

To enable JDBC persistence of JMS messages without journaling, we need to change the message broker's default persistence configuration from
AMQ 4.x

  <journaledJDBC journalLogFiles="5" dataDirectory="../activemq-data"/>

to

  <jdbcPersistenceAdapter dataSource="#my-ds"/>

For AMQ 3.x, change

  <journalPersistence directory="../var/journal">
    <jdbcPersistence dataSourceRef="derby-ds"/>
  </journalPersistence>

to

  <jdbcPersistence dataSourceRef="derby-ds"/>

Make sure to send durable messages so that they are persisted in the database server while waiting to be consumed by clients. More information on configuring JDBC persistence is available at JDBC Support.
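Putting it together, a broker using plain JDBC persistence against a datasource such as the MySQL sample above might look like the sketch below; `createTablesOnStartup` is shown with a typical first-run value and should be verified against the JDBC Support page:

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <!-- #mysql-ds refers to a datasource bean defined elsewhere in the file;
         createTablesOnStartup="true" is typically only needed on first run -->
    <jdbcPersistenceAdapter dataSource="#mysql-ds" createTablesOnStartup="true"/>
  </persistenceAdapter>
</broker>
```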

Kaha Persistence

Disaster Recovery options

For people with high DR requirements we have various options for providing a Replicated Message Store to allow full failover in times of major data centre failure.

Disabling Persistence

If you don’t want persistence at all you can disable it easily via the Xml Configuration. e.g.

<broker persistent="false"> </broker>

This will make the broker use an in-memory message store instead. For an example of using a configuration URI see [How To Unit Test JMS Code](how-to-unit-test-jms-code).

Apache, ActiveMQ, Apache ActiveMQ, the Apache feather logo, and the Apache ActiveMQ project logo are trademarks of The Apache Software Foundation. Copyright © 2024, The Apache Software Foundation. Licensed under Apache License 2.0.