Apache Ignite can be used as a distributed, in-memory cache that you can query with SQL, update atomically with ACID transactions, and use to execute custom computations written in languages such as Java, C#, and C++. Ignite provides all the components required to accelerate applications, databases, and microservices.
An Apache Ignite cluster can span multiple nodes, allowing your applications to use all the memory and CPU resources of a distributed environment. Your applications can interact with the cluster as they interact with a standard cache, by using simple key-value requests. Also, for more advanced operations, you can run distributed SQL queries that join and group datasets. If strong consistency is required, you can execute multi-node and cross-cache ACID transactions in both pessimistic and optimistic modes. Additionally, if an application runs compute or data-intensive logic, you can minimize data shuffling and network utilization by running co-located computations that are written in a contemporary programming language.
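As a sketch of these interaction styles, the following Java snippet shows a node joining a cluster, a simple key-value request, and a distributed SQL query. It assumes the `ignite-core` dependency and default discovery settings; the cache name `myCache` is illustrative, and the system-view query assumes a recent Ignite version that exposes the `SYS` schema.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ClusterAccessSketch {
    public static void main(String[] args) {
        // Start a node that joins the cluster (default discovery assumed).
        try (Ignite ignite = Ignition.start()) {
            // Simple key-value access, as with a standard cache.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");
            cache.put(1, "Hello");
            String value = cache.get(1);

            // A distributed SQL query; here against a built-in system view.
            SqlFieldsQuery qry = new SqlFieldsQuery("SELECT CACHE_NAME FROM SYS.CACHES");
            cache.query(qry).getAll().forEach(row -> System.out.println(row.get(0)));
        }
    }
}
```

Running this sketch requires a reachable Ignite cluster, so it is meant as an orientation to the APIs rather than a standalone program.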
There are two primary strategies for deploying Ignite as an in-memory cache: cache-aside deployment and read-through/write-through caching. We will review both strategies.
With the cache-aside deployment strategy, a cache is deployed separately from the primary data store and might not even know that the primary store exists. An application or change-data-capture (CDC) process becomes responsible for data synchronization between the two storage locations. For example, if a record is updated in the primary data store, then its new value needs to be replicated to the cache.
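The cache-aside flow described above can be sketched in plain Java, with concurrent maps standing in for Ignite and the primary data store (an assumption made purely for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Cache-aside sketch: the application keeps the cache and the primary
 * store in sync itself; the cache knows nothing about the store.
 * Plain maps stand in for Ignite and the primary database.
 */
public class CacheAsideSketch {
    private final Map<Integer, String> primaryStore = new ConcurrentHashMap<>();
    private final Map<Integer, String> cache = new ConcurrentHashMap<>();

    /** On update, write the primary store first, then replicate to the cache. */
    public void update(int key, String value) {
        primaryStore.put(key, value);
        cache.put(key, value); // a CDC pipeline could perform this step instead
    }

    /** On a miss, the application (not the cache) falls back to the store. */
    public String read(int key) {
        String v = cache.get(key);
        if (v == null) {
            v = primaryStore.get(key);
            if (v != null)
                cache.put(key, v);
        }
        return v;
    }
}
```

Note that the synchronization logic lives entirely in the application; if the replication step is skipped or delayed, the cache lags the primary store, which is exactly the temporary inconsistency discussed next.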
This strategy works well when the cached data is relatively static (not updated frequently) or when temporary data lag is allowed between the primary store and the cache. It's usually assumed that changes will be fully replicated eventually and, thus, the cache and the primary store will become consistent.
When Apache Ignite is deployed in a cache-aside configuration, its native persistence can serve as a disk store for Ignite datasets. Native persistence eliminates the time-consuming cache warm-up step. Furthermore, because native persistence maintains a full copy of the data on disk, you can keep only a subset of records in memory. If a required record is missing from memory, Ignite reads it from disk automatically, regardless of which API you use: SQL, key-value, or scan queries.
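Enabling native persistence is a configuration step. A minimal sketch, assuming Ignite 2.9 or later (earlier versions activate the cluster through a different call), looks like this:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceConfigSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Enable native persistence for the default data region, so the
        // full dataset lives on disk and memory holds a hot subset.
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();
        storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(storageCfg);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence enabled, the cluster must be activated once
            // before caches can be used.
            ignite.cluster().state(ClusterState.ACTIVE);
        }
    }
}
```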
The read-through/write-through caching strategy can be classified as an in-memory, data-grid type of deployment. When Apache Ignite is deployed as a data grid, the application layer begins to treat Ignite as the primary store. As applications write to and read from the data grid, Ignite ensures that all underlying external databases stay updated and are consistent with the in-memory data.
This strategy is recommended for architectures that need to accelerate disk-based databases or to create a shared caching layer across various data sources. Ignite integrates with many databases out-of-the-box and, in write-through or write-behind mode, can synchronize all changes to the databases. The strategy also applies to ACID transactions: Ignite will coordinate and commit a transaction across its in-memory cluster as well as to a relational database.
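In Ignite, the read-through/write-through behavior is driven by a `CacheStore` attached to the cache configuration. The following sketch shows the shape of such a configuration; the cache name `personCache`, the `PersonStore` class, and the SQL statements in the comments are hypothetical, and the JDBC plumbing is omitted:

```java
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

/**
 * Read-through/write-through sketch: Ignite calls the CacheStore to keep
 * an external database in sync with the in-memory data.
 */
public class WriteThroughConfigSketch {
    public static class PersonStore extends CacheStoreAdapter<Long, String> {
        @Override public String load(Long key) {
            // e.g. SELECT name FROM person WHERE id = ? (hypothetical schema)
            return null;
        }
        @Override public void write(Cache.Entry<? extends Long, ? extends String> e) {
            // e.g. INSERT or UPDATE against the external database
        }
        @Override public void delete(Object key) {
            // e.g. DELETE FROM person WHERE id = ?
        }
    }

    public static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("personCache");
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonStore.class));
        ccfg.setReadThrough(true);   // misses go to PersonStore.load()
        ccfg.setWriteThrough(true);  // updates propagate to PersonStore.write()
        return ccfg;
    }
}
```

Setting write-behind instead of write-through is a separate flag on the same configuration object and trades immediate consistency with the external database for batched, asynchronous updates.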
Read-through capability implies that, if a record is missing from memory, a cache can read the data from an external database. Ignite fully supports this capability for key-value APIs. However, when you use Ignite SQL, you must preload the dataset into memory—because Ignite SQL can query on-disk data only if the data is stored in native persistence.
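The read-through miss path, and why SQL needs preloading, can be sketched in plain Java, with a function standing in for the external database (in real Ignite, the configured `CacheStore` plays this role):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/**
 * Read-through sketch: on a key-value miss, the cache itself loads the
 * record from the external database. A Function stands in for the DB.
 */
public class ReadThroughSketch {
    private final Map<Integer, String> memory = new ConcurrentHashMap<>();
    private final Function<Integer, String> externalDb;

    public ReadThroughSketch(Function<Integer, String> externalDb) {
        this.externalDb = externalDb;
    }

    /** Key-value reads fall through to the external database on a miss. */
    public String get(int key) {
        return memory.computeIfAbsent(key, externalDb::apply);
    }

    /** SQL-style scans see only what is already in memory, which is why
     *  the dataset must be preloaded before querying it with Ignite SQL. */
    public long scanCount() {
        return memory.size();
    }
}
```

A key-value `get` for an uncached key succeeds by loading from the stand-in database, but a scan over memory sees nothing until records have been loaded; this mirrors why Ignite SQL over an external store requires preloading.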