KahaDB is a file-based persistence database that is local to the message broker using it. It has been optimised for fast persistence and is the default storage mechanism from ActiveMQ 5.4 onwards. KahaDB uses fewer file descriptors and provides faster recovery than its predecessor, the AMQ Message Store.
You can configure ActiveMQ to use KahaDB as its persistence adapter, as shown below.
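For example, a minimal sketch based on the standard broker XML configuration (the directory and journalMaxFileLength values are illustrative):

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="broker">
  <persistenceAdapter>
    <!-- store the message data and journal files under ./activemq-data -->
    <kahaDB directory="activemq-data" journalMaxFileLength="32mb"/>
  </persistenceAdapter>
</broker>
```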
The kahaDB element supports the following configuration attributes:

directory - the path to the directory used to store the message store data and log files
indexDirectory - if set, configures where the KahaDB index files are stored; if not set, the index files are stored in the directory specified by the 'directory' attribute
indexWriteBatchSize - the number of indexes written in a batch
indexCacheSize - the number of index pages cached in memory
enableIndexWriteAsync - if set, indexes are written asynchronously
journalMaxFileLength - a hint to set the maximum size of the message data logs
enableJournalDiskSyncs - ensure every non-transactional journal write is followed by a disk sync (JMS durability requirement)
cleanupInterval - time (ms) between checks for message data logs that are no longer used and can be discarded or moved
checkpointInterval - time (ms) between checkpoints of the journal
ignoreMissingJournalfiles - if enabled, a missing message log file is ignored
checkForCorruptJournalFiles - if enabled, journal files are checked for corruption on startup and recovery is attempted
checksumJournalFiles - create a checksum for each journal file so that corrupted journals can be detected (default false; true from version 5.9)
archiveDataLogs - if enabled, a message data log is moved to the archive directory instead of being deleted
directoryArchive - the directory to move data logs to once all the messages they contain have been consumed
maxAsyncJobs - the maximum number of asynchronous messages that will be queued awaiting storage (should be the same as the number of concurrent MessageProducers)
concurrentStoreAndDispatchTopics - enable the dispatching of topic messages to interested clients to happen concurrently with message storage
concurrentStoreAndDispatchQueues - enable the dispatching of queue messages to interested clients to happen concurrently with message storage
archiveCorruptedIndex - if enabled, corrupted indexes found at startup will be archived rather than deleted
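As a sketch of how several of these attributes combine in one adapter definition (the values here are illustrative, not recommended tuning):

```xml
<persistenceAdapter>
  <kahaDB directory="activemq-data"
          journalMaxFileLength="32mb"
          enableJournalDiskSyncs="true"
          checkpointInterval="5000"
          cleanupInterval="30000"
          checkForCorruptJournalFiles="true"
          archiveDataLogs="true"
          directoryArchive="activemq-data/archive"/>
</persistenceAdapter>
```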
For tuning locking properties, please take a look at Pluggable storage lockers.
Slow file system access diagnostic logging
You can configure a non-zero threshold, in milliseconds, for database updates. If a database operation is slower than that threshold (for example, if you set it to 500), you may see messages like:
Slow KahaDB access: cleanup took 1277 | org.apache.activemq.store.kahadb.MessageDatabase | ActiveMQ Journal Checkpoint Worker
The threshold used to log these messages is configured with the org.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME system property; adjust it to your disk speed so that you can easily pick up runtime anomalies.
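For example, assuming the standard startup script, which passes the ACTIVEMQ_OPTS environment variable through to the JVM (the 500 ms threshold is illustrative):

```
ACTIVEMQ_OPTS="-Dorg.apache.activemq.store.kahadb.LOG_SLOW_ACCESS_TIME=500" ./bin/activemq start
```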
Multi(m) kahaDB persistence adapter
From ActiveMQ 5.6, it is possible to distribute destination stores across multiple kahaDB persistence adapters. When would you do this? If you have one fast producer/consumer destination and another periodic-producer destination with irregular batch consumption, your disk usage can grow out of hand because unconsumed messages get dotted across journal files. Having a separate journal for each ensures minimal journal usage. Also, some destinations may be critical and require disk synchronisation while others may not.
In these cases you can use the mKahaDB persistence adapter and filter destinations using wildcards, just like with destination policy entries.
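For example, an illustrative layout (the destination filter and attribute values are placeholders to adapt):

```xml
<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <!-- a dedicated store for queues matching the wildcard -->
      <filteredKahaDB queue="org.apache.>">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
      <!-- catch-all: no destination attribute, so it matches everything else -->
      <filteredKahaDB>
        <persistenceAdapter>
          <kahaDB enableJournalDiskSyncs="false"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```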
Transactions can span multiple journals if the destinations are distributed. This means that two-phase completion is necessary, which imposes a performance penalty (an additional disk sync) to record the commit outcome. This penalty is only incurred when more than one journal is involved in a transaction.
Each instance of kahaDB can be configured independently. If no destination is supplied to a filteredKahaDB, the implicit default value will match any destination, queue or topic. This is a handy catch-all. If no matching persistence adapter can be found, destination creation will fail with an exception. filteredKahaDB shares its wildcard matching rules with Per Destination Policies.
Automatic per destination persistence adapter
When the perDestination boolean attribute is set to true on the catch-all filteredKahaDB (the entry with no explicit destination set), each matching destination gets its own kahaDB instance.
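For example, a sketch in which each matching destination is given its own kahaDB store under the mKahaDB directory:

```xml
<persistenceAdapter>
  <mKahaDB directory="${activemq.base}/data/kahadb">
    <filteredPersistenceAdapters>
      <!-- one kahaDB instance per matching destination -->
      <filteredKahaDB perDestination="true">
        <persistenceAdapter>
          <kahaDB journalMaxFileLength="32mb"/>
        </persistenceAdapter>
      </filteredKahaDB>
    </filteredPersistenceAdapters>
  </mKahaDB>
</persistenceAdapter>
```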