In-Memory Data Grid

Ignite In-Memory Data Grid is an in-memory distributed key-value store that enables caching data in memory within distributed clusters. Ignite data grid can be viewed as a distributed partitioned hash map with every cluster node owning a portion of the overall data. This way the more cluster nodes we add, the more data we can cache.

Ignite data grid has been built from the ground up to scale linearly to hundreds of nodes, with strong semantics for data locality and affinity-based data routing that reduce redundant data movement across the network.

Ignite data grid is lightning fast and is one of the fastest distributed implementations of transactional and atomic caching available today. We know this because we benchmark it ourselves continuously.

Code Examples:
                            Ignite ignite = Ignition.ignite();

                            // Get an instance of named cache.
                            final IgniteCache<Integer, String> cache = ignite.cache("cacheName");

                            // Store keys in cache.
                            for (int i = 0; i < 10; i++)
                                cache.put(i, Integer.toString(i));

                            // Retrieve values from cache.
                            for (int i = 0; i < 10; i++)
                                System.out.println("Got [key=" + i + ", val=" + cache.get(i) + ']');

                            // Remove objects from cache.
                            for (int i = 0; i < 10; i++)
                                cache.remove(i);

                            // Atomic put-if-absent.
                            cache.putIfAbsent(1, "1");

                            // Atomic replace.
                            cache.replace(1, "1", "2");
                            Ignite ignite = Ignition.ignite();

                            // Clone every object we get from cache, so we can freely update it.
                            IgniteCache<Integer, Account> cache = ignite.cache("cacheName");

                            try (IgniteTx tx = Ignition.ignite().transactions().txStart()) {
                                Account acct = cache.get(acctId);

                                assert acct != null;

                                // Deposit $20 into account.
                                acct.setBalance(acct.getBalance() + 20);

                                // Store updated account in cache.
                                cache.put(acctId, acct);

                                tx.commit();
                            }

                            Ignite ignite = Ignition.ignite();

                            // Get an instance of named cache.
                            final IgniteCache<String, Integer> cache = ignite.cache("cacheName");

                            // Lock cache key "Hello".
                            Lock lock = cache.lock("Hello");

                            lock.lock();

                            try {
                                cache.put("Hello", 11);
                                cache.put("World", 22);
                            }
                            finally {
                                lock.unlock();
                            }
                            IgniteCache<Long, Person> cache = ignite.cache("mycache");

                            SqlFieldsQuery sql = new SqlFieldsQuery(
                              "select concat(firstName, ' ', lastName) from Person");

                            // Select concatenated first and last name for all persons.
                            try (QueryCursor<List<?>> cursor = cache.query(sql)) {
                              for (List<?> row : cursor)
                                System.out.println("Full name: " + row.get(0));
                            }
                            IgniteCache<Long, Person> personCache = ignite.cache("personCache");

                            // Select with join between Person and Organization to
                            // get the names of all the employees of a specific organization.
                            SqlFieldsQuery sql = new SqlFieldsQuery(
                                "select p.name "
                                    + "from Person p, \"orgCache\".Organization o where "
                                    + "p.orgId = o.id "
                                    + "and o.name = ?");

                            // Execute the query and obtain the query result cursor.
                            try (QueryCursor<List<?>> cursor = personCache.query(sql.setArgs("Ignite"))) {
                                for (List<?> row : cursor)
                                    System.out.println("Person name=" + row);
                            }
                            IgniteCache<Long, Person> personCache = ignite.cache("personCache");

                            // Select average age of people working within different departments.
                            SqlFieldsQuery sql = new SqlFieldsQuery(
                                "select avg(p.age) as avg_age, d.name as dpmt_name, o.name as org_name "
                                    + "from Person p, \"depCache\".Department d, \"orgCache\".Organization o "
                                    + "where p.depid = d.id and d.orgid = o.id "
                                    + "group by d.name, o.name "
                                    + "order by avg_age");

                            // Execute the query and obtain the query result cursor.
                            try (QueryCursor<List<?>> cursor = personCache.query(sql)) {
                                for (List<?> row : cursor)
                                    System.out.println("Average age by department and organization: " + row);
                            }
GitHub Examples:

Also see data grid examples available on GitHub.

Data Grid Features

Key-Value Store

Ignite data grid is an in-memory key-value store which can be viewed as a distributed partitioned hash map, with every cluster node owning a portion of the overall data. This way the more cluster nodes we add, the more data we can cache.

Unlike other key-value stores, Ignite determines data locality using a pluggable hashing algorithm. Every client can determine which node a key belongs to by plugging it into a hashing function, without a need for any special mapping servers or name nodes.
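As a sketch of the client-side mapping described above, the affinity API can be queried directly; the cache name and key below are placeholders, and a running Ignite node is assumed:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public class AffinityExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();

        // Affinity function for the cache; no mapping server or name node is consulted.
        Affinity<Integer> aff = ignite.affinity("cacheName");

        // Compute, purely on the client, which node owns key 1234.
        ClusterNode primary = aff.mapKeyToNode(1234);

        System.out.println("Partition: " + aff.partition(1234)
            + ", primary node: " + (primary == null ? "none" : primary.id()));
    }
}
```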

JCache (JSR 107)

Ignite is a 100% compliant implementation of the JCache (JSR 107) specification. JCache provides a simple-to-use yet powerful API for data caching.

Some of the JCache API features include:

  • Basic Cache Operations
  • ConcurrentMap APIs
  • Collocated Processing (EntryProcessor)
  • Events and Metrics
  • Pluggable Persistence
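As a minimal sketch of the collocated-processing feature (EntryProcessor), the snippet below increments a counter on the node that owns the key; the cache name and key are placeholders, and a running Ignite node is assumed:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryProcessor;

public class EntryProcessorExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();
        IgniteCache<String, Integer> cache = ignite.cache("cacheName");

        // The processor executes on the node that owns the key, so only this
        // small closure is shipped over the network, not the cached value.
        cache.invoke("counter", (CacheEntryProcessor<String, Integer, Void>)(entry, arguments) -> {
            Integer val = entry.getValue();
            entry.setValue(val == null ? 1 : val + 1);
            return null;
        });
    }
}
```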

Partitioning & Replication

Depending on the configuration, Ignite can either partition or replicate data in memory. Unlike REPLICATED mode, where data is fully replicated across all nodes in the cluster, in PARTITIONED mode Ignite will equally split the data across multiple cluster nodes, allowing for caching TBs of data in memory.

Ignite also allows you to configure multiple backup copies to guarantee data resiliency in case of node failures.
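A partitioned cache with one backup copy per partition can be configured as follows (the cache name is a placeholder, and a running Ignite node is assumed):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class PartitionedCacheExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();

        CacheConfiguration<Integer, String> cfg =
            new CacheConfiguration<>("partitionedCache");

        // Split the data across the cluster rather than fully replicating it.
        cfg.setCacheMode(CacheMode.PARTITIONED);

        // Keep one backup copy of every partition for fault tolerance.
        cfg.setBackups(1);

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
    }
}
```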

Collocated Processing

Ignite allows executing any native Java, C++, and .NET/C# code directly on the server-side, close to the data, in collocated fashion.

Self-Healing Cluster

An Ignite cluster can self-heal: clients automatically reconnect in case of failures, slow clients are automatically kicked out, and data from failed nodes is automatically propagated to other nodes in the grid.

Client-side Near Caches

A near cache is a local client-side cache that stores the most recently and most frequently accessed data.
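A near cache can be enabled when the cache is created; the sketch below (cache name and size are placeholders, running node assumed) keeps up to 1,000 hot entries client-side with LRU eviction:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

public class NearCacheExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();

        // Keep up to 1000 hot entries in a client-local near cache.
        NearCacheConfiguration<Integer, String> nearCfg = new NearCacheConfiguration<>();
        nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<>(1_000));

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(
            new CacheConfiguration<Integer, String>("cacheName"), nearCfg);
    }
}
```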

Page Memory

Apache Ignite Page Memory is a manageable off-heap memory architecture that splits memory into pages of fixed size. Ignite stores data in off-heap memory, with an option to store data on-heap.

Off-Heap Indexes

Ignite stores query indexes in off-heap memory. For every unique index declared in an SQL schema, Apache Ignite instantiates and manages a dedicated B+ tree instance.

Binary Protocol

Apache Ignite stores data in caches as BinaryObjects, which allows you to:

  • Read a serialized object's field without full object deserialization.
  • Dynamically change an object's structure.
  • Dynamically create an object.
ACID Transactions

Ignite provides fully ACID compliant distributed transactions that ensure guaranteed consistency.

Ignite supports OPTIMISTIC and PESSIMISTIC concurrency modes as well as READ_COMMITTED, REPEATABLE_READ, and SERIALIZABLE isolation levels.
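The concurrency mode and isolation level are chosen when a transaction is started; a minimal sketch using the Transaction API (class names may differ slightly across Ignite versions, and a running node with a transactional cache named "txCache" is assumed):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

public class TxModesExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();
        IgniteCache<Integer, Integer> cache = ignite.cache("txCache");

        // Pessimistic concurrency acquires locks on first access;
        // REPEATABLE_READ holds those locks until the transaction completes.
        try (Transaction tx = ignite.transactions().txStart(
                TransactionConcurrency.PESSIMISTIC,
                TransactionIsolation.REPEATABLE_READ)) {
            Integer v = cache.get(1);
            cache.put(1, v == null ? 1 : v + 1);
            tx.commit();
        }
    }
}
```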

Ignite transactions utilize 2PC protocol with many one-phase-commit optimizations whenever applicable.

Deadlock-Free Transactions

Ignite supports deadlock-free, optimistic transactions, which do not acquire any locks, and free users from worrying about the lock order. Such transactions also provide much better performance.

Transactional Entry Processor

Ignite transactional entry processor allows executing collocated user logic on the server side within a transaction.

Cross-Partition Transactions

In Ignite, transactions can be performed on all partitions of a cache across the whole cluster.


Locks

Ignite allows developers to define explicit locks enforcing mutual exclusion on cached objects.

SQL Queries

Ignite supports the standard SQL syntax (ANSI 99) to query the cache. You can use any SQL function, aggregation, or grouping.

Distributed Joins

Ignite supports distributed SQL joins as well as cross-cache joins.

Continuous Queries

Continuous queries are useful for cases when you want to execute a query and then continue to get notified about the data changes that fall into your query filter.
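A continuous query combines an optional initial query over existing data with a listener for subsequent updates; the sketch below (cache name and filter are placeholders, running node assumed) watches for entries with keys greater than 10:

```java
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ContinuousQueryExample {
    public static void main(String[] args) {
        Ignite ignite = Ignition.ignite();
        IgniteCache<Integer, String> cache = ignite.cache("cacheName");

        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // Optional initial query: returns entries that already match.
        qry.setInitialQuery(new ScanQuery<>((k, v) -> k > 10));

        // Called on this node for every subsequent matching update.
        qry.setLocalListener(evts -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                System.out.println("Updated: key=" + e.getKey() + ", val=" + e.getValue());
        });

        // The cursor stays open while you need notifications;
        // closing it unsubscribes the listener.
        try (QueryCursor<Cache.Entry<Integer, String>> cur = cache.query(qry)) {
            for (Cache.Entry<Integer, String> e : cur)
                System.out.println("Existing: key=" + e.getKey() + ", val=" + e.getValue());
        }
    }
}
```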

Query Indexing

For SQL queries, Ignite supports in-memory indexing, so all data lookups are extremely fast.

Query Consistency

Ignite provides full query consistency. Updates that happen after a query execution starts do not affect the query result.

Query Fault-Tolerance

Ignite queries are fault-tolerant, i.e. query result is always consistent and is not affected by cluster topology changes.

JDBC Driver

Ignite is shipped with a JDBC driver that allows you to retrieve distributed data from the cache using standard SQL queries and the JDBC API.
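A sketch of querying Ignite through plain JDBC; the URL below uses the thin-driver format from recent Ignite 2.x versions (older versions use a different URL scheme), and the host, table, and columns are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcExample {
    public static void main(String[] args) throws Exception {
        // Thin JDBC driver URL (Ignite 2.x); the host is a placeholder.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select firstName, lastName from Person")) {
            // Iterate the distributed result set like any other JDBC result.
            while (rs.next())
                System.out.println(rs.getString(1) + " " + rs.getString(2));
        }
    }
}
```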

ODBC Driver

The Ignite ODBC driver allows users to retrieve data from the cache using standard SQL queries and the ODBC API.


Write-Through Caching

Write-Through mode allows updating the data in the underlying database whenever it is updated in the cache.


Read-Through Caching

Read-Through mode allows reading the data from the underlying database on a cache miss.

Write-Behind Caching

Ignite provides an option to asynchronously perform updates to the database via Write-Behind Caching.
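Write-behind is enabled on the cache configuration; in the sketch below the store class and tuning values are illustrative placeholders, not Ignite defaults:

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfigExample {
    /** Hypothetical store; a real implementation would issue JDBC calls. */
    public static class PersonStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) { return null; /* SELECT */ }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) { /* INSERT/UPDATE */ }
        @Override public void delete(Object key) { /* DELETE */ }
    }

    public static void main(String[] args) {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("personCache");

        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
        cfg.setWriteThrough(true);

        // Batch updates and flush them to the database asynchronously.
        cfg.setWriteBehindEnabled(true);
        cfg.setWriteBehindFlushFrequency(2_000); // flush every 2 seconds (illustrative)
        cfg.setWriteBehindBatchSize(512);        // illustrative batch size
    }
}
```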

Automatic Persistence

Ignite can automatically connect to the underlying database and generate XML OR-mapping configuration and Java domain model POJOs.

Database Integration

Ignite can automatically integrate with external databases - RDBMS, NoSQL, and HDFS.

Web Session Clustering

Ignite data grid is capable of caching web sessions of all Java Servlet containers that follow Java Servlet 3.0 Specification, including Apache Tomcat, Eclipse Jetty, Oracle WebLogic, and others.

Web session caching becomes useful when running a cluster of application servers, to improve the performance and scalability of the servlet container.

Hibernate L2 Caching

Ignite data grid can be used as a Hibernate Second-Level Cache (or L2 cache), which can significantly speed up the persistence layer of your application.

Spring Caching

Ignite provides Spring-annotation-based way to enable caching for Java methods so that the result of a method execution is stored in the Ignite cache. If later the same method is called with the same set of parameters, the result will be retrieved from the cache instead of actually executing the method.
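A minimal sketch of the Spring annotation approach; it assumes an Ignite-backed Spring `CacheManager` bean is configured elsewhere, and the service and cache names are placeholders:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class GreetingService {
    // The result is stored in the Ignite cache named "greetings"; a repeat
    // call with the same argument returns the cached result instead of
    // executing the method body.
    @Cacheable("greetings")
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```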

Spring Data

Apache Ignite implements Spring Data CrudRepository interface that not only supports basic CRUD operations but also provides access to the Apache Ignite SQL capabilities via the unified Spring Data API.
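A sketch of a repository built on the Ignite Spring Data integration; `Person` and the cache name are hypothetical, and the annotation package names may vary between Ignite releases:

```java
import java.util.List;
import org.apache.ignite.springdata.repository.IgniteRepository;
import org.apache.ignite.springdata.repository.config.Query;
import org.apache.ignite.springdata.repository.config.RepositoryConfig;

// Person is a hypothetical value class stored in the "PersonCache" cache.
@RepositoryConfig(cacheName = "PersonCache")
public interface PersonRepository extends IgniteRepository<Person, Long> {
    // Derived query method: resolved from the method name.
    List<Person> findByFirstName(String firstName);

    // Explicit Ignite SQL via the @Query annotation.
    @Query("select * from Person where lastName = ?")
    List<Person> findByLastNameSql(String lastName);
}
```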


Ignite.NET

Ignite.NET is built on top of Ignite and allows you to perform almost all of the in-memory data grid operations, including ACID transactions, SQL queries, distributed joins, messaging and events, etc.


Ignite C++

Ignite C++ is built on top of Ignite and allows you to perform almost all of the in-memory data grid operations, including SQL queries and distributed joins.


JTA

Ignite can be configured with a Java Transaction API (JTA) transaction manager lookup class.

OSGi Support