In-Memory Cache
With Apache Ignite

Improve the performance and scalability of your applications,
databases, and microservices with Apache Ignite
Distributed In-Memory Cache

What Is In-Memory Cache?

In-memory cache is a storage layer placed between applications and databases. The cache keeps your hot data in memory to offload existing databases and accelerate applications.

Advantages of Distributed In-Memory Cache

A distributed in-memory cache is the most straightforward and scalable way to accelerate your existing applications and databases, thanks to:


Memory as a storage layer provides the lowest latency and highest throughput: by the laws of physics, access to RAM is orders of magnitude faster than access to disk.


Horizontal scalability lets you grow the cluster by adding nodes to accommodate increasing data sizes and throughput.

Unlike Standard In-Memory Caches, Apache Ignite
Supports Essential Developer APIs

  • ACID transactions to ensure data consistency
  • SQL query execution
  • Custom computations, e.g. in Java
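As a hedged illustration, a cache configured for these APIs might look like the following Java fragment. This is a sketch, not a complete program: `Person` and the cache name `personCache` are hypothetical, while the Ignite calls shown (`setAtomicityMode`, `setIndexedTypes`, `getOrCreateCache`) are part of Ignite's public API.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: configure a cache for transactions and SQL (Person is a hypothetical model class).
CacheConfiguration<Long, Person> cfg = new CacheConfiguration<>("personCache");

// TRANSACTIONAL atomicity enables ACID transactions on this cache.
cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

// Registering indexed types makes Person records queryable with SQL.
cfg.setIndexedTypes(Long.class, Person.class);

IgniteCache<Long, Person> cache = ignite.getOrCreateCache(cfg);

// Custom computations run as plain Java closures across the cluster, e.g.:
// ignite.compute().broadcast(() -> System.out.println("Runs on every node"));
```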

Read-Through / Write-Through Caching

How It Works

The read-through/write-through caching strategy can be classified as an in-memory data grid type of deployment.

When Apache Ignite is deployed as a data grid, the application layer begins to treat Ignite as the primary store.

As applications write to and read from the data grid, Ignite ensures that all underlying external databases stay updated and are consistent with the in-memory data.

When to Use It

This strategy is recommended for architectures that need to:

  • accelerate disk-based databases;
  • create a shared caching layer across various data sources.

Ignite integrates with many databases out-of-the-box and, in write-through or write-behind mode, can synchronize all changes to the databases.

The strategy also applies to ACID transactions: Ignite will coordinate and commit a transaction across its in-memory cluster as well as to a relational database.

Read-through capability implies that, if a record is missing from memory, a cache can read the data from an external database. Ignite fully supports this capability for key-value APIs.
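The read-through and write-through flows described above can be sketched in plain Java. This is a generic illustration of the pattern, with a `HashMap` standing in for both the in-memory layer and the external database; it is not Ignite's actual `CacheStore` API.

```java
import java.util.HashMap;
import java.util.Map;

// Generic sketch of read-through / write-through caching (not Ignite's API).
class WriteThroughCache<K, V> {
    private final Map<K, V> memory = new HashMap<>(); // in-memory layer
    private final Map<K, V> database;                 // external store (stand-in)

    WriteThroughCache(Map<K, V> database) { this.database = database; }

    // Read-through: on a miss, load the record from the database and cache it.
    V get(K key) {
        V v = memory.get(key);
        if (v == null) {
            v = database.get(key);
            if (v != null) memory.put(key, v);
        }
        return v;
    }

    // Write-through: every update goes to memory and the database together,
    // so the external store stays consistent with the in-memory data.
    void put(K key, V value) {
        memory.put(key, value);
        database.put(key, value);
    }
}
```

The application only ever talks to the cache; the cache itself keeps the database updated, which is what distinguishes this deployment from cache-aside.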

When you use Ignite SQL, you must preload the dataset into memory—because Ignite SQL can query on-disk data only if the data is stored in native persistence.

Cache-Aside Deployment

When It Works

This strategy works well in two cases:

  1. The cached data is relatively static, i.e. not updated frequently
  2. A temporary data lag is allowed between the primary store and the cache

It’s usually assumed that changes will be fully replicated eventually and,
thus, the cache and the primary store will become consistent.
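The cache-aside pattern described above can be sketched in plain Java. This is a generic illustration, with `HashMap`s standing in for the cache and the primary store; it is not an Ignite API. Note how the application, not the cache, coordinates the two layers, and how the cache briefly lags the store after an update.

```java
import java.util.HashMap;
import java.util.Map;

// Cache-aside sketch (generic, not an Ignite API): the application talks to the
// primary store directly and populates the cache on a miss.
class CacheAside {
    static final Map<String, String> cache = new HashMap<>();
    static final Map<String, String> primaryStore = new HashMap<>(); // stand-in DB

    static String read(String key) {
        String v = cache.get(key);
        if (v != null) return v;          // cache hit
        v = primaryStore.get(key);        // miss: application reads the store
        if (v != null) cache.put(key, v); // and populates the cache itself
        return v;
    }

    static void update(String key, String value) {
        primaryStore.put(key, value); // write to the primary store first
        cache.remove(key);            // invalidate; cache lags until the next read
    }
}
```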

Cache-Aside Deployment And Native Persistence

When Apache Ignite is deployed in a cache-aside configuration, its native persistence can be used as a disk store for Ignite datasets. Native persistence eliminates the time-consuming cache warm-up step.

As native persistence maintains a full copy of data on disk, you can cache a subset of records in memory. If a required data record is missing from memory, then Ignite reads the record from the disk automatically, regardless of which API you use — be it SQL, key-value, or scan queries.

  • Seconds needed for recovery
  • Full copy of cached records is duplicated on disk
  • Use any API: SQL, key-value, or scan queries
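A minimal sketch of enabling native persistence in Java follows. This is a configuration fragment rather than a complete program; the calls shown (`DataStorageConfiguration`, `setPersistenceEnabled`, `cluster().state(...)`) are from Ignite's public API, and explicit activation is needed because a persistence-enabled cluster starts inactive.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: make the default data region disk-backed via native persistence.
IgniteConfiguration cfg = new IgniteConfiguration();

DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
cfg.setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);

// A persistence-enabled cluster starts inactive; activate it before use.
ignite.cluster().state(ClusterState.ACTIVE);
```

With this configuration, a full copy of the data lives on disk, so only the hot subset needs to fit in memory, and restarts recover from disk instead of a cold warm-up.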


Raiffeisen Bank

As users transition to digital channels, the load on the bank's systems increases. Therefore, load reduction and system scaling are constant top priorities.

Ready to Start?

Discover our quick start guide and build your first application in 5-10 minutes

Quick Start Guide