Cross-cluster Replication Extension
Warning
Change Data Capture (CDC) and the Cross-cluster Replication Extension are experimental features. The API or design architecture might change.
Overview
The Cross-cluster Replication Extension module provides the following ways to set up cross-cluster replication based on CDC:
- Ignite2IgniteClientCdcStreamer - streams changes to the destination cluster using a Java Thin Client.
- Ignite2IgniteCdcStreamer - streams changes to the destination cluster using a client node.
- Ignite2KafkaCdcStreamer combined with KafkaToIgniteCdcStreamer - streams changes to the destination cluster using Apache Kafka as a transport.
Note
A conflict resolver should be defined for each cache replicated between the clusters.
Note
All implementations of the cross-cluster replication support replication of BinaryTypes and TypeMappings.
Note
To use SQL queries on the destination cluster over CDC-replicated data, set the same VALUE_TYPE in CREATE TABLE on both source and destination clusters for each table.
Ignite to Java Thin Client CDC streamer
This streamer starts a Java Thin Client that connects to the destination cluster. After the connection is established, all changes captured by CDC are replicated to the destination cluster.
Note
Instances of ignite-cdc.sh with the configured streamer should be started on each server node of the source cluster to capture all changes.
Configuration
Name | Description | Default value |
---|---|---|
caches | Set of cache names to replicate. | null |
destinationClientConfiguration | Client configuration of the thin client that connects to the destination cluster to replicate changes. | null |
onlyPrimary | Flag to handle changes only on the primary node. | false |
maxBatchSize | Maximum number of events to be sent to the destination cluster in a single batch. | 1024 |
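For reference, a minimal streamer bean declaration could look like the sketch below, written in the same style as the Kafka example later on this page. The class name, the destinationClientConfiguration property, and the destination address 127.0.0.1:10800 are assumptions made for illustration; verify them against the extension's Javadoc.
IgniteToIgniteClientCdcStreamer bean declaration in ignite-to-ignite-client-streamer-config.xml
<bean id="cdc.streamer" class="org.apache.ignite.cdc.thin.IgniteToIgniteClientCdcStreamer">
    <property name="caches">
        <list>
            <value>terminator</value>
        </list>
    </property>
    <property name="onlyPrimary" value="false"/>
    <property name="destinationClientConfiguration">
        <bean class="org.apache.ignite.configuration.ClientConfiguration">
            <!-- Address of a server node in the destination cluster (placeholder). -->
            <property name="addresses">
                <list>
                    <value>127.0.0.1:10800</value>
                </list>
            </property>
        </bean>
    </property>
</bean>
The file containing this bean is then passed to ignite-cdc.sh on each server node of the source cluster, as noted above.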
Metrics
Name | Description |
---|---|
EventsCount | Count of messages applied to the destination cluster. |
LastEventTime | Timestamp of the last event applied to the destination cluster. |
TypesCount | Count of binary type events applied to the destination cluster. |
MappingsCount | Count of mapping events applied to the destination cluster. |
Ignite to Ignite CDC streamer
This streamer starts a client node that connects to the destination cluster. After the connection is established, all changes captured by CDC are replicated to the destination cluster.
Note
Instances of ignite-cdc.sh with the configured streamer should be started on each server node of the source cluster to capture all changes.
Configuration
Name | Description | Default value |
---|---|---|
caches | Set of cache names to replicate. | null |
destinationIgniteConfiguration | Ignite configuration of the client node that connects to the destination cluster to replicate changes. | null |
onlyPrimary | Flag to handle changes only on the primary node. | false |
maxBatchSize | Maximum number of events to be sent to the destination cluster in a single batch. | 1024 |
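Similarly, a minimal bean declaration for this streamer might look like the following sketch. The class name, the destinationIgniteConfiguration property, and the client node settings are assumptions for illustration; the discovery configuration of the destination client node is omitted.
IgniteToIgniteCdcStreamer bean declaration in ignite-to-ignite-streamer-config.xml
<bean id="cdc.streamer" class="org.apache.ignite.cdc.IgniteToIgniteCdcStreamer">
    <property name="caches">
        <list>
            <value>terminator</value>
        </list>
    </property>
    <property name="onlyPrimary" value="false"/>
    <property name="destinationIgniteConfiguration">
        <bean class="org.apache.ignite.configuration.IgniteConfiguration">
            <!-- Client node configuration for the destination cluster; discovery settings omitted. -->
            <property name="clientMode" value="true"/>
        </bean>
    </property>
</bean>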
Metrics
Name | Description |
---|---|
EventsCount | Count of messages applied to the destination cluster. |
LastEventTime | Timestamp of the last event applied to the destination cluster. |
TypesCount | Count of binary type events applied to the destination cluster. |
MappingsCount | Count of mapping events applied to the destination cluster. |
CDC replication using Kafka
This way of replicating changes between clusters requires setting up two applications:
- ignite-cdc.sh with org.apache.ignite.cdc.kafka.IgniteToKafkaCdcStreamer, which captures changes from the source cluster and writes them to a Kafka topic.
- kafka-to-ignite.sh, which reads changes from the Kafka topic and writes them to the destination cluster.
Note
Instances of ignite-cdc.sh with the configured streamer should be started on each server node of the source cluster to capture all changes.
Important
CDC through Kafka requires a metadata topic with only one partition to guarantee sequential ordering.
IgniteToKafkaCdcStreamer Configuration
Name | Description | Default value |
---|---|---|
caches | Set of cache names to replicate. | null |
kafkaProperties | Kafka producer properties. | null |
topic | Name of the Kafka topic for CDC events. | null |
kafkaPartitions | Number of Kafka partitions in the CDC events topic. | null |
metadataTopic | Name of the topic for replication of BinaryTypes and TypeMappings. | null |
onlyPrimary | Flag to handle changes only on the primary node. | false |
maxBatchSize | Maximum number of concurrently produced Kafka records. When the streamer reaches this number, it waits for Kafka acknowledgements and then commits the CDC offset. | |
kafkaRequestTimeout | Kafka request timeout in milliseconds. | |
- The kafkaRequestTimeout property sets how long IgniteToKafkaCdcStreamer will wait for KafkaProducer to finish a request.
Note
kafkaRequestTimeout should not be set too low. If the wait time exceeds kafkaRequestTimeout, IgniteToKafkaCdcStreamer will fail with a timeout error.
- To specify KafkaProducer settings, use the kafkaProperties property. We suggest using a separate file to store all the necessary configuration properties and referencing it from the IgniteToKafkaCdcStreamer configuration '.xml' file. See the examples below.
kafka.properties
bootstrap.servers=xxx.x.x.x:9092
request.timeout.ms=10000
IgniteToKafkaCdcStreamer bean declaration in ignite-to-kafka-streamer-config.xml
<bean id="cdc.streamer" class="org.apache.ignite.cdc.kafka.IgniteToKafkaCdcStreamer">
<property name="topic" value="${send_data_kafka_topic_name}"/>
<property name="metadataTopic" value="${send_metadata_kafka_topic_name}"/>
<property name="kafkaPartitions" value="${send_kafka_partitions}"/>
<property name="caches">
<list>
<value>terminator</value>
</list>
</property>
<property name="onlyPrimary" value="false"/>
<property name="kafkaProperties" ref="kafkaProperties"/>
</bean>
<util:properties id="kafkaProperties" location="file:kafka_properties_path/kafka.properties"/>
Note
The request.timeout.ms Kafka producer property is mandatory for the streamer configuration. For more details, refer to the configuration section of the official Kafka documentation.
IgniteToKafkaCdcStreamer Metrics
Name | Description |
---|---|
EventsCount | Count of messages applied to Kafka. |
LastEventTime | Timestamp of the last event applied to Kafka. |
TypesCount | Count of binary type events applied to Kafka. |
MappingsCount | Count of mapping events applied to Kafka. |
BytesSent | Count of bytes sent to Kafka. |
MarkersCount | Count of metadata markers sent to Kafka. |
kafka-to-ignite.sh application
This application should be started near the destination cluster. kafka-to-ignite.sh reads CDC events from the Kafka topic and applies them to the destination cluster.
Important
kafka-to-ignite.sh implements the fail-fast approach: it simply fails in case of any error. The restart procedure should be configured with OS tools.
The number of application instances does not correlate with the number of destination server nodes; it should simply be enough to handle the source cluster load. Each instance of the application processes a configured subset of topic partitions to spread the load. A KafkaConsumer is created for each partition to ensure fair reads.
Installation
- Build the cdc-ext module with Maven:
$~/src/ignite-extensions/> mvn clean package -DskipTests
$~/src/ignite-extensions/> ls modules/cdc-ext/target | grep zip
ignite-cdc-ext.zip
- Unpack the ignite-cdc-ext.zip archive to the $IGNITE_HOME folder.
Now, you have the additional binary $IGNITE_HOME/bin/kafka-to-ignite.sh and the $IGNITE_HOME/libs/optional/ignite-cdc-ext module.
Note
Please enable the ignite-cdc-ext module (for example, by moving it from $IGNITE_HOME/libs/optional to $IGNITE_HOME/libs) to be able to run kafka-to-ignite.sh.
Configuration
Application configuration should be done using POJO classes or a Spring XML file, like a regular Ignite node configuration. The Kafka to Ignite configuration file should contain the following beans that will be loaded during startup:
- One of the following beans, defining the type of client that will connect to the destination cluster:
  - IgniteConfiguration bean: configuration of a client node.
  - ClientConfiguration bean: configuration of a Java Thin Client.
- java.util.Properties bean with the name kafkaProperties: single Kafka consumer configuration.
- org.apache.ignite.cdc.kafka.KafkaToIgniteCdcStreamerConfiguration bean: options specific to the kafka-to-ignite.sh application.
Name | Description | Default value |
---|---|---|
caches | Set of cache names to replicate. | null |
topic | Name of the Kafka topic for CDC events. | null |
kafkaPartsFrom | Lower Kafka partition number (inclusive) for the CDC events topic. | -1 |
kafkaPartsTo | Upper Kafka partition number (exclusive) for the CDC events topic. | -1 |
metadataTopic | Name of the topic for replication of BinaryTypes and TypeMappings. | null |
metadataConsumerGroup | Group for the Kafka consumer that polls the metadata topic. | ignite-metadata-update-<kafkaPartsFrom>-<kafkaPartsTo> |
kafkaRequestTimeout | Kafka request timeout in milliseconds. | |
kafkaConsumerPollTimeout | Kafka poll timeout in milliseconds. | |
maxBatchSize | Maximum number of events to be sent to the destination cluster in a single batch. | 1024 |
threadCount | Count of threads that process Kafka consumers. Each thread polls records from its dedicated partitions in a round-robin manner. | 16 |
metricRegistryName | Name for the metric registry. | cdc-kafka-to-ignite |
- The kafkaRequestTimeout property is used as the timeout for KafkaConsumer methods (except for KafkaConsumer#poll).
Note
kafkaRequestTimeout should not be set too low, otherwise you risk the application failing on method execution.
- The kafkaConsumerPollTimeout property is used as the timeout for the KafkaConsumer#poll method.
Note
A high kafkaConsumerPollTimeout setting might greatly affect replication performance. Kafka topic partitions are equally distributed among threads (see threadCount). Each thread can only poll one partition at a time, meaning no other partition assigned to the same thread will be polled from while the current one is being handled.
- To specify KafkaConsumer settings, use the kafkaProperties bean. Basically, you need to use a separate file to store all the necessary configuration properties and reference it from the KafkaToIgniteCdcStreamer configuration '.xml' file. See the examples below.
kafka.properties
bootstrap.servers=127.0.0.1:9092
request.timeout.ms=10000
group.id=kafka-to-ignite-dc1
auto.offset.reset=earliest
enable.auto.commit=false
Kafka properties bean declaration in kafka-to-ignite-streamer-config.xml
<util:properties id="kafkaProperties" location="file:kafka_properties_path/kafka.properties"/>
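A complete kafka-to-ignite-streamer-config.xml also contains the destination client bean and the streamer options bean described above. The sketch below shows one possible layout; the property names (kafkaPartsFrom, kafkaPartsTo, threadCount) follow the configuration table above, while the bean ids, partition range, and thin client address are placeholder values.
KafkaToIgniteCdcStreamerConfiguration bean sketch in kafka-to-ignite-streamer-config.xml
<!-- Java Thin Client connection to the destination cluster (address is a placeholder). -->
<bean id="client.cfg" class="org.apache.ignite.configuration.ClientConfiguration">
    <property name="addresses">
        <list>
            <value>127.0.0.1:10800</value>
        </list>
    </property>
</bean>

<!-- Options of the kafka-to-ignite.sh application; partition range and thread count are placeholders. -->
<bean id="streamer.cfg" class="org.apache.ignite.cdc.kafka.KafkaToIgniteCdcStreamerConfiguration">
    <property name="topic" value="${send_data_kafka_topic_name}"/>
    <property name="metadataTopic" value="${send_metadata_kafka_topic_name}"/>
    <property name="kafkaPartsFrom" value="0"/>
    <property name="kafkaPartsTo" value="16"/>
    <property name="threadCount" value="4"/>
</bean>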
Note
The request.timeout.ms Kafka consumer property is mandatory for the streamer configuration.
Metrics
Name | Description |
---|---|
EventsReceivedCount | Count of events received from Kafka. |
LastEventReceivedTime | Timestamp of the last event received from Kafka. |
EventsSentCount | Count of events sent to the destination cluster. |
LastBatchSentTime | Timestamp of the last batch sent to the destination cluster. |
MarkersCount | Count of metadata markers received from Kafka. |
Logging
kafka-to-ignite.sh uses the same logging configuration as the Ignite node does. The only difference is that the log is written to the "kafka-ignite-streamer.log" file.
Fault tolerance
It is expected that CDC streamers will be configured with onlyPrimary=false in most real-world deployments to ensure fault tolerance. This means the streamer will send the same change several times, equal to CacheConfiguration#backups + 1.
Conflict resolution
A conflict resolver should be defined for each cache replicated between the clusters. The Cross-cluster Replication Extension provides a default conflict resolver implementation.
Note
The default implementation only selects the correct entry and never merges entries.
The default resolver implementation is used when a custom conflict resolver is not set.
Configuration
Name | Description | Default value |
---|---|---|
clusterId | Local cluster id. Can be any value from 1 to 31. | null |
caches | Set of cache names to handle with this plugin instance. | null |
conflictResolveField | Value field to resolve conflicts with. Optional. Field values must implement java.lang.Comparable. | null |
conflictResolver | Custom conflict resolver. Optional. Field must implement CacheVersionConflictResolver. | null |
Conflict resolution algorithm
Replicated changes contain some additional data. Specifically, the entry's version from the source cluster is supplied with the changed data.
The default conflict resolution algorithm is based on the entry version and the conflictResolveField.
Conflict resolution based on the entry’s version
This approach provides the eventual consistency guarantee when each entry is updatable only from a single cluster.
Important
This approach does not replicate any updates or removals from the destination cluster to the source cluster.
- Changes from the "local" cluster always win. Any replicated data can be overridden locally.
- If both the old and the new entry are from the same cluster, entry version comparison is used to determine the order.
- Otherwise, conflict resolution fails: the update will be ignored and the failure will be logged.
Conflict resolution based on the entry’s value field
This approach provides the eventual consistency guarantee even when entry is updatable from any cluster.
Note
The conflict resolution field, specified by conflictResolveField, should contain a user-provided monotonically increasing value such as a query id or timestamp.
Important
This approach does not replicate removals from the destination cluster to the source cluster, because removals can't be versioned by the field.
- Changes from the "local" cluster always win. Any replicated data can be overridden locally.
- If both the old and the new entry are from the same cluster, entry version comparison is used to determine the order.
- If conflictResolveField is provided, field value comparison is used to determine the order.
- Otherwise, conflict resolution fails: the update will be ignored and the failure will be logged.
Custom conflict resolution rules
You’re able to define your own rules for resolving conflicts based on the nature of your data and operations. This can be particularly useful in more complex situations where the standard conflict resolution strategies do not apply.
Choosing the right conflict resolution strategy depends on your specific use case and requires a good understanding of your data and its usage. You should consider the nature of your transactions, the rate of change of your data, and the implications of potential data loss or overwrites when selecting a conflict resolution strategy.
A custom conflict resolver can be set via conflictResolver and allows you to compare or merge the conflicting data in any way required.
Configuration example
Configuration is done via an Ignite node plugin:
<property name="pluginProviders">
    <bean class="org.apache.ignite.cdc.conflictresolve.CacheVersionConflictResolverPluginProvider">
        <property name="clusterId" value="1"/>
        <property name="caches">
            <util:list>
                <bean class="java.lang.String">
                    <constructor-arg type="String" value="queryId"/>
                </bean>
            </util:list>
        </property>
    </bean>
</property>
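If entries are updatable from both clusters, the same plugin can additionally be pointed at a monotonically increasing value field, as described in the section on field-based conflict resolution above. A hypothetical variant of the previous example, where the field name reqTimestamp is a placeholder:
<property name="pluginProviders">
    <bean class="org.apache.ignite.cdc.conflictresolve.CacheVersionConflictResolverPluginProvider">
        <property name="clusterId" value="1"/>
        <property name="caches">
            <util:list>
                <bean class="java.lang.String">
                    <constructor-arg type="String" value="queryId"/>
                </bean>
            </util:list>
        </property>
        <!-- Value field used to order conflicting updates; "reqTimestamp" is a placeholder field name. -->
        <property name="conflictResolveField" value="reqTimestamp"/>
    </bean>
</property>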