Setting Up | Ignite Documentation

Setting Up


Configuring .NET, Python, Node.js and other programming languages

  • .NET developers: refer to the Ignite.NET Configuration section

  • Developers of Python, Node.js, and other programming languages: use this page to configure your Java-powered Ignite cluster, and refer to the thin clients section to set up your language-specific applications that work with the cluster.

System Requirements

Ignite was tested on:

  • JDK: Oracle JDK 8, 11 or 17; Open JDK 8, 11 or 17; IBM JDK 8, 11 or 17

  • OS: Linux (any flavor), Mac OSX (10.6 and up), Windows (XP and up), Windows Server (2008 and up), Oracle Solaris

  • ISA: x86, x64, SPARC, PowerPC

  • Network: No restrictions (10G recommended)

Running Ignite with Java 11 or later

To run Ignite with Java 11 or later, follow these steps:

  1. Set the JAVA_HOME environment variable to point to the Java installation directory.

  2. Ignite uses proprietary SDK APIs that are not available by default. You need to pass specific flags to the JVM to make these APIs available. If you use the start-up script ignite.sh (or ignite.bat for Windows), you do not need to do anything, because these flags are already set up in the script. Otherwise, provide the following parameter to the JVM of your application:

    --add-opens java.desktop/java.awt.font=ALL-UNNAMED
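For example, when launching your application directly with the java command (the jar name and main class below are placeholders for your own application):

```shell
# Open the internal JDK package that Ignite needs on Java 11+;
# app.jar and com.example.MyApp are illustrative placeholders
java --add-opens java.desktop/java.awt.font=ALL-UNNAMED \
     -cp app.jar com.example.MyApp
```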

Using Binary Distribution

  • Download the appropriate binary package from Apache Ignite Downloads.

  • Unzip the archive into a directory.

  • (Optional) Set the IGNITE_HOME environment variable to point to the installation folder and make sure there is no trailing / in the path.
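Put together, the steps above can look like the following sketch (the version in the archive name is an example; use the release you actually downloaded):

```shell
# Unpack the distribution and point IGNITE_HOME at it (no trailing slash)
unzip apache-ignite-2.16.0-bin.zip -d /opt/ignite
export IGNITE_HOME=/opt/ignite/apache-ignite-2.16.0-bin

# Start a node using the bundled start-up script
$IGNITE_HOME/bin/ignite.sh
```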

Using Maven

The easiest way to start using Ignite is to add it to your pom.xml.
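A minimal dependency declaration looks like this (the version property is a placeholder; substitute the Ignite version you use):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-core</artifactId>
    <version>${ignite.version}</version>
</dependency>
```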



The 'ignite-core' library contains the core functionality of Ignite. Additional functionality is provided by various Ignite modules.

The following are the two most commonly used modules:

  • ignite-indexing: adds support for SQL querying and indexing.

  • ignite-spring: adds support for Spring-based configuration.

Using Docker

If you want to run Ignite in Docker, refer to the Docker Deployment section.

Configuring Work Directory

Ignite uses a work directory to store your application data (if you use the Native Persistence feature), index files, metadata information, logs, and other files. The default work directory is as follows:

  • $IGNITE_HOME/work, if the IGNITE_HOME system property is defined. This is the case when you start Ignite using the bin/ignite.sh script from the distribution package.

  • ./ignite/work otherwise; this path is relative to the directory where you launch your application.

There are several ways you can change the default work directory:

  1. As an environment variable:

    export IGNITE_WORK_DIR=/path/to/work/directory
  2. In the node configuration:

    XML:

    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="workDirectory" value="/path/to/work/directory"/>
        <!-- other properties -->
    </bean>

    Java:

    IgniteConfiguration igniteCfg = new IgniteConfiguration();
    igniteCfg.setWorkDirectory("/path/to/work/directory");

    C#:

    var cfg = new IgniteConfiguration
    {
        WorkDirectory = "/path/to/work/directory"
    };

    C++:

    IgniteConfiguration cfg;
    cfg.igniteHome = "/path/to/work/directory";

Enabling Modules

Ignite ships with a number of modules and has a number of extensions that provide various functionality. You can enable modules or extensions one by one, as required.

All modules are included in the binary distribution, but by default they are disabled (except for the ignite-core, ignite-spring, ignite-control-utility and ignite-indexing modules). Modules can be found in the libs/optional directory of the distribution package (each module is located in a separate sub-directory).

You can also download any of the Ignite extensions you require.

Depending on how you use Ignite, you can enable modules or extensions using one of the following methods:

  • If you use the binary distribution, move the libs/optional/{module-dir} directory to the libs directory before starting the node.

  • Add libraries from libs/optional/{module-dir} to the classpath of your application.

  • Add a module as a Maven dependency to your project.
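In the Maven case, a module is added as a regular dependency alongside ignite-core (ignite-log4j2 is used here for illustration; the version property is a placeholder for the Ignite version you use):

```xml
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-log4j2</artifactId>
    <version>${ignite.version}</version>
</dependency>
```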


The following modules have LGPL dependencies and, therefore, can’t be deployed on the Maven Central repository:

  • ignite-hibernate (Apache Ignite Extensions)

  • ignite-geospatial (Apache Ignite Extensions)

  • ignite-schedule (deprecated)

To use these modules, you will need to build them from sources and add them to your project. For example, to install the ignite-hibernate module into your local repository and create distribution binaries, run the following command from the Ignite Extensions sources:

mvn clean install -DskipTests -f modules/hibernate-ext -Pextension-release

The following modules are available:

  • The Ignite Cassandra Serializers module provides additional serializers to store objects as BLOBs in Cassandra. The module can be used in conjunction with the Ignite Cassandra Store module.

  • Ignite Cassandra Store provides a CacheStore implementation backed by the Cassandra database.

  • Ignite Direct IO is a plugin that provides a page store with the ability to write and read cache partitions in O_DIRECT mode.

  • SQL querying and indexing support.

  • Support for the Jakarta Commons Logging (JCL) framework.

  • Integration of Ignite transactions with JTA.

  • Ignite Kafka Streamer provides the ability to stream data from Kafka to Ignite caches.

  • The Ignite Kubernetes module provides a TCP Discovery IP Finder that uses a dedicated Kubernetes service for IP address lookup of Ignite pods containerized by Kubernetes.

  • Support for Log4j2.

  • Ignite ML Grid provides machine learning features and the relevant data structures and methods of linear algebra, including on-heap and off-heap, dense and sparse, local and distributed implementations. Refer to the Machine Learning documentation for details.

  • Ignite REST-HTTP starts a Jetty-based server within a node that can be used to execute tasks and/or cache commands in the grid using HTTP-based RESTful APIs.

  • This module provides functionality for scheduling jobs locally using UNIX cron-based syntax.

  • Support for the SLF4J logging framework.

  • The Ignite TensorFlow Integration module allows using TensorFlow with Ignite. In this scenario, Ignite serves as a data source for TensorFlow model training.

  • The Ignite URI Deploy module provides capabilities to deploy tasks from different sources such as File System, HTTP, or even Email.

  • An open-source command-line management and monitoring tool.

  • Ignite Web allows you to start nodes inside any web container based on servlet and servlet context listener. In addition, this module provides capabilities to cache web sessions in an Ignite cache.

  • ZooKeeper Discovery implementation.

The following extensions are available:

  • The Ignite AOP module provides the ability to turn any Java method into a distributed closure by adding the @Gridify annotation to it.

  • Cluster discovery on AWS S3. Refer to Amazon S3 IP Finder for details.

  • Ignite Azure provides an Azure Blob Storage-based implementation of the IP finder for TCP discovery.

  • Ignite Cloud provides Apache jclouds implementations of the IP finder for TCP discovery.

  • This module provides bridging components to make Ignite run seamlessly inside an OSGi container such as Apache Karaf.

  • This module contains a feature repository to facilitate installing Ignite into an Apache Karaf container.

  • Ignite GCE provides a Google Cloud Storage-based implementation of the IP finder for TCP discovery.

  • This module provides an implementation of the Spark RDD abstraction that enables easy access to Ignite caches.

  • Ignite Spring Data provides an integration with the Spring Data framework.

  • Ignite Spring Data 2.0 provides an integration with the Spring Data 2.0 framework.

  • Ignite Spring Data 2.2 provides an integration with the Spring Data 2.2 framework.

  • The Ignite SSH module provides capabilities to start Ignite nodes on remote machines via SSH.

  • TCP Discovery IP Finder that uses a ZooKeeper directory to locate other Ignite nodes to connect to.

Setting JVM Options

There are several ways you can set JVM options when starting a node with the start-up script. These ways are described in the following sections.

JVM_OPTS System Variable

You can set the JVM_OPTS environment variable:
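For example (the heap sizes here are illustrative):

```shell
# Options placed in JVM_OPTS are picked up by the start-up script
export JVM_OPTS="-Xms1g -Xmx4g"
./ignite.sh
```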


Command Line Arguments

You can also pass JVM options by using the -J prefix:
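For example (the option values are illustrative):

```shell
# Each -J-prefixed argument is stripped of the prefix and passed to the JVM
./ignite.sh -J-Xmx4g -J-DIGNITE_QUIET=false
```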


Setting Ignite System Properties

In addition to public configuration settings, you can adjust specific, usually low-level, Ignite behavior with internal system properties. You can find all the properties with their descriptions and default values by using the command below:

./ignite.sh -systemProps
.\ignite.bat -systemProps

Example of the output:
IGNITE_AFFINITY_HISTORY_SIZE                                    - [Integer] Maximum size for affinity assignment history. Default is 25.
IGNITE_ALLOW_ATOMIC_OPS_IN_TX                                   - [Boolean] Allows atomic operations inside transactions. Default is true.
IGNITE_ALLOW_DML_INSIDE_TRANSACTION                             - [Boolean] When set to true, Ignite will allow executing DML operation (MERGE|INSERT|UPDATE|DELETE) within transactions for non MVCC mode. Default is false.
IGNITE_ALLOW_START_CACHES_IN_PARALLEL                           - [Boolean] Allows to start multiple caches in parallel. Default is true.

Configuration Recommendations

Below are some recommended configuration tips aimed at making it easier for you to operate an Ignite cluster or develop applications with Ignite.

Setting Work Directory

If you are going to use either binary distribution or Maven, you are encouraged to set up the work directory for Ignite. The work directory is used to store metadata information, index files, your application data (if you use the Native Persistence feature), logs, and other files. We recommend you always set up the work directory.

Logging

Logs play an important role when it comes to troubleshooting and finding out what went wrong. Here are a few general tips on how to manage your log files:

  • Start Ignite in verbose mode:

    • If you use the ignite.sh script, specify the -v option.

    • If you start Ignite from Java code, set the following system property: IGNITE_QUIET=false.

  • Do not store log files in the /tmp folder. This folder is cleared up every time the server is restarted.

  • Make sure that there is enough space available on the storage where the log files are stored.

  • Archive old log files periodically to save on storage space.
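The tips above can be combined as in the following sketch (the work-directory path is illustrative):

```shell
# Keep logs under a persistent work directory rather than /tmp
export IGNITE_WORK_DIR=/var/lib/ignite/work

# Start the node in verbose mode
./ignite.sh -v
```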