Accelerate Existing Hadoop Deployments
With Apache Ignite

Accelerate the performance of Hadoop-based applications with Ignite
as a high-performance data access layer

Benefits Of Using Apache Ignite

Real-time analytics

Apache Ignite enables real-time analytics across operational and historical data silos stored in Apache Hadoop.

Low-latency and high-throughput operations

Ignite enables low-latency and high-throughput access while Hadoop continues to be used for long-running OLAP workloads.

How Does Apache Ignite Acceleration Work?

To accelerate Hadoop-based systems, deploy Ignite as a separate distributed store that holds the data sets required for your low-latency operations and real-time reports.

There are 3 basic steps:

01

Depending on the data volume and available memory capacity, you can enable Ignite native persistence to keep historical data sets on disk while dedicating memory to operational records.

You can continue to use Hadoop as storage for less frequently used data or for long-running and ad-hoc analytical queries.
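As a minimal sketch, enabling native persistence comes down to switching it on for the default data region before the node starts; the memory size below is illustrative:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cluster.ClusterState;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class PersistentNodeStartup {
        public static void main(String[] args) {
            DataStorageConfiguration storageCfg = new DataStorageConfiguration();

            // Keep hot, operational records in memory; older data spills to disk.
            storageCfg.getDefaultDataRegionConfiguration()
                .setPersistenceEnabled(true)
                .setMaxSize(8L * 1024 * 1024 * 1024); // 8 GB of RAM for the region (illustrative)

            IgniteConfiguration cfg = new IgniteConfiguration()
                .setDataStorageConfiguration(storageCfg);

            Ignite ignite = Ignition.start(cfg);

            // A cluster with persistence starts inactive; activate it once all nodes have joined.
            ignite.cluster().state(ClusterState.ACTIVE);
        }
    }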

02

Your applications and services should use Ignite native APIs to process the data residing in the in-memory cluster. Ignite provides SQL, compute (map-reduce), and machine learning APIs for various data processing needs.
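For example, a real-time report can be served straight from the in-memory cluster with the SQL API; the cache name, table, and columns below are hypothetical:

    import java.util.List;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.FieldsQueryCursor;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class RealTimeReport {
        public static void main(String[] args) {
            // Join an already running Ignite cluster as a client node.
            Ignition.setClientMode(true);

            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Long, Object> orders = ignite.cache("orders"); // hypothetical cache

                SqlFieldsQuery qry = new SqlFieldsQuery(
                    "SELECT customerId, SUM(total) FROM Orders GROUP BY customerId");

                // The query runs across the in-memory data set with low latency.
                try (FieldsQueryCursor<List<?>> cursor = orders.query(qry)) {
                    for (List<?> row : cursor)
                        System.out.println(row.get(0) + " -> " + row.get(1));
                }
            }
        }
    }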

03

Consider using the Apache Spark DataFrame API if an application needs to run federated or cross-database queries across the Ignite and Hadoop clusters.

Ignite is integrated with Spark, which natively supports Hive/Hadoop. Cross-database queries should be reserved for the limited scenarios in which neither Ignite nor Hadoop contains the entire data set.
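A federated query then looks roughly like the sketch below: the hot data set is loaded as an Ignite DataFrame (this requires the ignite-spark module on the classpath), the historical data comes from a Hive table, and the table names and configuration path are assumptions:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class FederatedQuery {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("ignite-hadoop-federated-query")
                .enableHiveSupport() // exposes Hive/Hadoop tables to Spark SQL
                .getOrCreate();

            // Operational data served by the Ignite cluster.
            Dataset<Row> hotOrders = spark.read()
                .format("ignite")
                .option("config", "/opt/ignite/config/default-config.xml") // assumed path
                .option("table", "ORDERS")                                 // assumed table
                .load();

            // Historical data that stays in Hadoop, exposed as a Hive table (assumed name).
            Dataset<Row> archivedOrders = spark.table("warehouse.orders_archive");

            // Combine both sources and run a single analytical query over them.
            hotOrders.unionByName(archivedOrders).createOrReplaceTempView("all_orders");

            spark.sql("SELECT customerId, SUM(total) AS revenue "
                + "FROM all_orders GROUP BY customerId").show();
        }
    }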


How Can You Split Data And Operations Between Ignite And Hadoop?

Use Apache Ignite for tasks that require:
– Low-latency response times (microseconds, milliseconds, seconds)
– High-throughput operations (thousands to millions of operations per second)
– Real-time processing

Continue using Apache Hadoop for:
– High-latency operations (tens of seconds, minutes, hours)
– Batch processing

5 Steps To Implement The Architecture In Practice

01
Download and install Apache Ignite on your system.
02
Select the operations to offload to Ignite.

The best candidates are operations that require low-latency response times, high throughput, or real-time analytics.

03

Consider enabling Ignite native persistence, or use Ignite as a pure in-memory cache or as an in-memory data grid that persists changes to Hadoop or another external database.

04
Update your applications.

Ensure they use Ignite native APIs to process data stored in Ignite and the Spark DataFrame API for federated queries.

05
If you need to replicate changes between Ignite and Hadoop clusters, use existing change-data-capture solutions:

– Debezium
– Kafka
– GridGain Data Lake Accelerator
– Oracle GoldenGate

To write changes through to Hadoop directly, implement Ignite's CacheStore interface.
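A minimal sketch of such a store, assuming each change is written to a file in HDFS keyed by the cache key (the class name, HDFS path, and plain-text format are illustrative):

    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    import javax.cache.Cache;
    import javax.cache.integration.CacheLoaderException;
    import javax.cache.integration.CacheWriterException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.ignite.cache.store.CacheStoreAdapter;

    // Pushes every cache update through to HDFS; loads are not implemented in this sketch.
    public class HdfsOrderStore extends CacheStoreAdapter<Long, String> {
        private final Path dir = new Path("/warehouse/orders"); // assumed HDFS directory

        @Override public String load(Long key) throws CacheLoaderException {
            return null; // read-through from Hadoop is out of scope for this sketch
        }

        @Override public void write(Cache.Entry<? extends Long, ? extends String> entry)
            throws CacheWriterException {
            try (FileSystem fs = FileSystem.get(new Configuration());
                 PrintWriter out = new PrintWriter(new OutputStreamWriter(
                     fs.create(new Path(dir, entry.getKey().toString()), true)))) {
                out.println(entry.getValue());
            }
            catch (IOException e) {
                throw new CacheWriterException("Failed to write entry to HDFS", e);
            }
        }

        @Override public void delete(Object key) throws CacheWriterException {
            try (FileSystem fs = FileSystem.get(new Configuration())) {
                fs.delete(new Path(dir, key.toString()), false);
            }
            catch (IOException e) {
                throw new CacheWriterException("Failed to delete entry from HDFS", e);
            }
        }
    }

The store is then registered on the cache configuration with write-through enabled (write-behind is an optional batching mode):

    // Imports: org.apache.ignite.configuration.CacheConfiguration, javax.cache.configuration.FactoryBuilder
    CacheConfiguration<Long, String> cacheCfg = new CacheConfiguration<>("orders");

    cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(HdfsOrderStore.class));
    cacheCfg.setWriteThrough(true);       // every cache update is also written to HDFS
    cacheCfg.setWriteBehindEnabled(true); // optional: batch and flush updates asynchronously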

Ready to Start?

Discover our quick start guide and build your first
application in 5-10 minutes

Quick Start Guide