Cluster Setup documentation for the dotCMS Content Management System

dotCMS Enterprise Professional and Prime support multi-node clusters in load-balanced, round-robin, or hot standby configurations. Clustering may be set up in either Auto-clustering or Manual Cluster Configuration mode.

This document describes the common configuration required for both clustering options. Please review the following sections before performing your clustering configuration:

For more information, please see the Auto-Clustering Configuration and Manual Cluster Configuration documentation.

Initial Setup

The following resources are shared among all servers in the cluster, and must be set up for all clustering configurations. These configuration steps must be completed before implementing the steps for either Auto-clustering or Manual Cluster Configuration.

  1. Common Location
  2. Shared Database
  3. Load Balancer
  4. Shared Assets Directory
  5. (Optional) Sharing OSGi Plugin Directories

1. Common Location

Clustering in dotCMS is designed to be used in a single location/datacenter with low network latency between servers. Clusters require significant inter-node communications, and having nodes in different locations can negatively affect performance and reliability of the cluster due to network latency or communication failures. Therefore all nodes in a cluster should be co-located in the same physical location.

Nodes Spanning Multiple Locations

Push Publishing is the recommended and fully supported solution to span multiple locations.

Although some other methods can be implemented to connect nodes spanning multiple locations, these require custom configuration and support. If you would like more information on other methods of spanning nodes in different physical locations (such as clusters with nodes in different data centers or on separate networks), please contact dotCMS Professional Services for assistance.

2. Shared Database

In order to cluster dotCMS, you must first create and set up your initial database. Though caches and indexes are stored separately on each node in the cluster, all nodes in a cluster connect to the same centralized database in order to sync data across the cluster.

3. Load Balancer

You will also need a load balancer, with sticky session support enabled, running in front of your dotCMS instances. Apache's mod_jk can provide load balancing in front of the Tomcat servers in a dotCMS cluster (please see the Apache mod_jk documentation for more information). However, direct HTTP-based load balancing via an appliance or software is much more commonly used.
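With mod_jk, sticky sessions rely on Tomcat's jvmRoute, which appends the worker name to the session cookie (JSESSIONID=&lt;id&gt;.&lt;worker&gt;). As a rough sketch (the helper name is hypothetical), you can extract that route suffix from a captured cookie value to confirm that repeated requests from the same client keep landing on the same node:

```shell
# Extract the worker route suffix from a JSESSIONID value.
# With mod_jk and jvmRoute, session ids look like "<id>.<worker>".
session_route() {
  echo "${1##*.}"
}

# Example: two requests from the same client should report the same route.
session_route "A1B2C3D4E5.node1"   # -> node1
```

In practice you would capture the cookie with something like `curl -s -c cookies.txt http://your-load-balancer/` and inspect the JSESSIONID line; if the suffix changes between requests from the same client, sticky sessions are not working.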

4. Shared Assets Directory

dotCMS requires a network share or NAS that shares the contents of the assets directory (/dotserver/tomcat-8.0.18/webapps/ROOT/assets) across all nodes in the cluster. If your assets directory exists in a different location, you can configure it by setting the ASSET_REAL_PATH variable in the dotmarketing-config.properties file.

Note: It is strongly recommended that all changes to the configuration file be made through a properties file extension rather than by editing the file directly.
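Once the share is mounted, a quick sanity check is to write a marker file on one node and read it back from another. The path below is a placeholder; point it at your real shared assets directory and run the read step on each other node:

```shell
# Hypothetical assets path -- substitute your shared assets directory.
ASSETS_DIR="${ASSETS_DIR:-./assets}"

mkdir -p "$ASSETS_DIR"
# Write a marker identifying this node...
echo "written-by-$(hostname)" > "$ASSETS_DIR/.cluster-share-test"
# ...then read it back from every other node; all nodes should see the
# same file containing the first node's hostname.
cat "$ASSETS_DIR/.cluster-share-test"
```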

Cluster Diagram

5. (Optional) Sharing OSGi Plugin Directories

You can share your OSGi plugins, and deploy and undeploy them across the whole cluster at the same time from any node. To do this, you must use the Shared Assets Directory for two of the OSGi folders. Each server in the cluster monitors these shared folders and deploys/undeploys any OSGi jars found there.

  1. Inside the Shared Asset Directory, create folders named /felix/load and /felix/undeployed.
    • e.g., dotCMS/assets/felix/load and dotCMS/assets/felix/undeployed.
  2. Replace the server's local OSGi folders under WEB-INF with symlinks to these shared folders, so that you have:
    • dotCMS/WEB-INF/felix/load symlinked to dotCMS/assets/felix/load
    • dotCMS/WEB-INF/felix/undeployed symlinked to dotCMS/assets/felix/undeployed
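The two steps above can be sketched as follows. Both paths are placeholders; substitute your installation's actual shared assets directory and WEB-INF location, and run step 2 on every node:

```shell
# Hypothetical locations -- adjust to your installation's layout.
ASSETS_DIR="${ASSETS_DIR:-./assets}"
WEBINF_DIR="${WEBINF_DIR:-./dotCMS/WEB-INF}"

# 1. Create the shared OSGi folders inside the shared assets directory.
mkdir -p "$ASSETS_DIR/felix/load" "$ASSETS_DIR/felix/undeployed"

# 2. On each node, replace the local OSGi folders with symlinks to the share.
mkdir -p "$WEBINF_DIR/felix"
rm -rf "$WEBINF_DIR/felix/load" "$WEBINF_DIR/felix/undeployed"
ln -s "$(cd "$ASSETS_DIR" && pwd)/felix/load" "$WEBINF_DIR/felix/load"
ln -s "$(cd "$ASSETS_DIR" && pwd)/felix/undeployed" "$WEBINF_DIR/felix/undeployed"
```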

Note: If you share your plugins across all nodes in your cluster, each node will try to run the code in the plugin's Activator class simultaneously. It is important to keep this in mind when performing setup-type work in the plugin's Activator.

Starting Up a Cluster

When you start a cluster for the first time (after installing or upgrading dotCMS), you must start a single node first. Then, once you can log in to the admin console on the initial node, you can start all the other nodes in the cluster, and they will automatically replicate the content from the initial node.

This is important because if you attempt to start multiple nodes at the same time after installing dotCMS or upgrading to a new version, several nodes will perform the database initialization and upgrade steps simultaneously. The nodes will then try to synchronize with each other, but since each node holds different data, this causes multiple errors and prevents the nodes from synchronizing properly. Ensuring that a single node starts fully before starting any other nodes avoids these synchronization problems.

Note that this requirement to start a single node in the cluster only applies when first installing dotCMS or after upgrading dotCMS to a new version. Once you have run a new version of dotCMS once, you may start up any number of nodes at the same time, and you may start the nodes in any order.

Clustering Method Configuration

After performing all the common configuration steps, you must perform additional configuration steps specific to the method of clustering you choose. Follow the appropriate link below for documentation on the additional configuration steps required for your clustering method.

Testing your Cluster

After completing all configuration - both in this document and either the Auto-clustering Configuration or Manual Cluster Configuration documents - you should test your cluster to ensure that everything is working properly. This section details 3 tests that will verify the operation of your cluster.

Test 1: Test your cache cluster startup

  1. Shut down and restart 1 node in the cluster.
  2. Open the log file for the restarted node and search for “ping”.

Result: When you restart the node, it should “ping” the other servers in the cluster, and you should see the results of those pings in the dotcms.log file. If you do not see pings to the other servers in the cluster, then your cluster cache settings are incorrect.
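A quick way to run this check from the command line, assuming the default Tomcat layout (the log path below is a placeholder; adjust it to your installation):

```shell
# Hypothetical log location -- adjust to your Tomcat layout.
LOG="${LOG:-dotserver/tomcat-8.0.18/logs/dotcms.log}"

if [ -f "$LOG" ]; then
  # Show the most recent cache-cluster ping entries.
  grep -i "ping" "$LOG" | tail -n 20
else
  echo "log not found: $LOG"
fi
```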

Test 2: Test your ElasticSearch index replicas

  1. Log into the back end of one dotCMS node and navigate to the System -> Maintenance tab.
  2. Select the “Index” tab and right click on the active index.
  3. Change the number of replicas for that index to a number higher than the number of servers in your cluster.
    • For example, if you have 2 nodes in your cluster, set the number of replicas for that index to 3.

Result: The change in the number of replicas should turn your index health yellow, and the change should be immediately visible from the other nodes in the dotCMS cluster.


  • If the change is not visible on the other nodes in the cluster, then your ElasticSearch indexes are not communicating properly.
  • The number of replicas is intentionally incorrect in this test, to verify that an incorrect setting or other problem will be visible from the other nodes.
    • Make sure to set the number of replicas back to the correct number after performing this test.
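The yellow status is expected here: Elasticsearch never assigns a replica shard to the same node as its primary, so at most (nodes - 1) replicas per shard can be allocated, and any replicas beyond that remain unassigned, which reports as yellow. A small sketch of that rule (the helper name is hypothetical):

```shell
# Predict index health from node count and configured replicas per shard.
expected_health() {
  nodes=$1; replicas=$2
  if [ "$replicas" -gt $((nodes - 1)) ]; then
    echo yellow   # some replicas cannot be assigned to a distinct node
  else
    echo green    # every replica fits on its own node
  fi
}

expected_health 2 3   # -> yellow (3 replicas cannot fit alongside primaries on 2 nodes)
expected_health 2 1   # -> green
```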

Test 3: Test your ElasticSearch cluster

  1. Enter a piece of content on one server in the cluster.
    • Note: It is best to do this with content that contains a binary file or image, to ensure that these assets are being shared properly among nodes in the cluster.
  2. Search for the content on the other nodes in the cluster.
  3. Display the content on the other nodes in the cluster.

Result: The content should be found when searched for on other nodes, and should display properly on all the other nodes.

Test Results

If your cluster passes all of these tests, then your servers are communicating properly and acting in unison.

If your cluster fails to pass any of these tests, your cluster configuration is incorrect, and you should review all the cluster installation steps (both above, and in the Auto-clustering Configuration or Manual Cluster Configuration as appropriate).