Apache Ignite 3.1 improves the three areas that matter most when running distributed systems: performance at scale, language flexibility, and operational visibility. The release also fixes hundreds of bugs related to data corruption, race conditions, and edge cases discovered since 3.0.
Before you upgrade: This release introduces zone-based replication, which changes how RAFT groups are allocated. Persistent storage in upgraded 3.0 clusters will continue to use table-based replication. Read the Zone-Based Replication Migration section to understand your options.
What This Release Delivers
- Improved replication performance and reliability: Zone-based replication improves data colocation and reduces the required thread count and memory overhead, while also improving SQL performance.
- Extended client support: Python DB API driver, .NET distributed computing with ADO.NET integration, and enhanced C++ client support expand your options for working with data.
- Better cluster observability: 50+ new metrics covering checkpoints, SQL, transactions, and storage. New system views for cluster introspection.
- Improved APIs: Multiple schema support, improved query planning controls, and streamlined Java APIs.
Performance: Built for Scale
Zone-Based Replication Reduces Overhead
Apache Ignite 3.1 replaces the table-based replication model from 3.0 with zone-based replication.
Updated defaults: Zone-based replication is enabled by default for new 3.1 clusters.
Existing clusters: Clusters with persistent storage upgraded from 3.0 will continue to use table-based replication. To adopt zone-based replication and gain the performance benefits, you must migrate to a new 3.1 cluster. Table-based replication support will be discontinued in the Ignite 3.2 release.
DDL Operations Now Batch Automatically
Creating multiple tables no longer requires multiple round-trips. DDL operations batch automatically when possible, reducing setup time during schema initialization and testing.
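For example, several DDL statements can be handed to the cluster as a single SQL script. The sketch below is illustrative only: it assumes an already-connected ignite handle, and the table and index names are made up.
// Submit several DDL statements in one script so Ignite can batch them.
// Assumes an Ignite handle named `ignite` (embedded node or client) exists.
ignite.sql().executeScript(
        "CREATE TABLE IF NOT EXISTS customers (id INT PRIMARY KEY, name VARCHAR);"
        + "CREATE TABLE IF NOT EXISTS orders (order_id INT PRIMARY KEY, customer_id INT, total DECIMAL(10,2));"
        + "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id);");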
Partition Pruning and Partition Awareness
Apache Ignite 3.1 introduces two major SQL optimizations that dramatically improve query performance:
Partition Pruning: The query optimizer automatically eliminates unnecessary partition scans based on predicates. Queries with key-based filters only scan relevant partitions instead of the entire dataset.
Partition Awareness: Client queries route directly to nodes owning the data, eliminating coordinator hops. The client determines the exact target node for single-partition queries.
Use the new EXPLAIN MAPPING statement to verify query routing:
EXPLAIN MAPPING FOR SELECT * FROM orders WHERE order_id = 12345;
Combined impact: Queries with partition key predicates can see 10x+ performance improvements.
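As an illustration, the hedged Java client sketch below runs a key-based lookup that both optimizations can exploit: the order_id predicate lets the planner prune to a single partition, and partition awareness lets the client send the request straight to the owning node. The table and column names are hypothetical.
try (IgniteClient client = IgniteClient.builder()
        .addresses("localhost:10800")
        .build()) {
    // Single-partition lookup: pruned by the optimizer, routed directly by the client.
    try (ResultSet<SqlRow> rs = client.sql()
            .execute(null, "SELECT * FROM orders WHERE order_id = ?", 12345)) {
        rs.forEachRemaining(System.out::println);
    }
}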
Multi-Language Support
Python: PEP 249-Compliant Database Driver
Connect to Apache Ignite from Python with a standard DB API 2.0 compliant driver. SSL support and macOS compatibility are built in.
You can install the driver with pip:
pip install pyignite-dbapi
Then, you can import it into your application and initialize the connection:
# Import the driver, connect, and run a parameterized query
import pyignite_dbapi

customer_id = 42  # example parameter value
conn = pyignite_dbapi.connect(address='localhost:10800', use_ssl=True)
cursor = conn.cursor()
cursor.execute('SELECT * FROM orders WHERE customer_id = ?', (customer_id,))
rows = cursor.fetchall()
conn.close()
.NET: Distributed Computing and ADO.NET
Write compute jobs in C#, F#, or any .NET language and distribute them across your cluster:
public class HelloJob : IComputeJob<string, string>
{
public ValueTask<string> ExecuteAsync(IJobExecutionContext context, string arg, CancellationToken cancellationToken) =>
ValueTask.FromResult("Hello " + arg);
}
var jobDesc = new JobDescriptor<string, string>(
JobClassName: typeof(HelloJob).AssemblyQualifiedName!,
DeploymentUnits: [new DeploymentUnit("unit1")]);
var jobTarget = JobTarget.AnyNode(await client.GetClusterNodesAsync());
var jobExec = await client.Compute.SubmitAsync(jobTarget, jobDesc, "world");
ADO.NET integration brings familiar patterns to .NET developers:
var connStr = "Endpoints=localhost:10800";
await using var conn = new IgniteDbConnection(connStr);
await conn.OpenAsync();
DbCommand cmd = conn.CreateCommand();
cmd.CommandText = "DROP TABLE IF EXISTS Person";
await cmd.ExecuteNonQueryAsync();
Additional .NET features in 3.1:
- Platform Streamer Receiver: Custom data processing during streaming operations
- Batch SQL Execution: ISql.ExecuteBatchAsync for efficient multi-statement execution
- RunInTransaction: Automatic transaction retry mechanism for transient failures
- CancellationToken Support: Integrated cancellation for SQL and Compute APIs
See the extended blog post about .NET compute in Ignite 3.1 for a more in-depth explanation.
C++ Client
Use the improved C++ client to enhance your applications with several new production-ready features:
- Heartbeat Support: Connection health monitoring prevents timeout disconnects
- Transaction Timeouts: Configurable timeout settings for transaction operations
- Query Cancellation: An option to cancel long-running queries
ODBC Driver SSL Support
Use the newly added support for SSL/TLS to enable secure connections to the cluster from your ODBC applications.
SQL Capabilities
Multiple Schemas
Organize tables across multiple schemas instead of using only PUBLIC:
CREATE SCHEMA analytics;
CREATE TABLE analytics.events (id int primary key, timestamp timestamp, data varchar);
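Schema-qualified names then work anywhere a table name does. A minimal Java sketch, assuming a connected client or embedded node named ignite:
// Query the table through its schema-qualified name.
try (ResultSet<SqlRow> rs = ignite.sql()
        .execute(null, "SELECT id, data FROM analytics.events")) {
    rs.forEachRemaining(System.out::println);
}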
Query Plan Recalculation
Configure when query plans are recalculated based on data changes:
CREATE TABLE Person (
id INT PRIMARY KEY,
name VARCHAR,
age INT
) WITH (MIN STALE ROWS 1000, STALE ROWS FRACTION 0.15);
Ignite recalculates plans automatically when data changes exceed these thresholds.
Alternatively, manually invalidate plans to ensure they reflect current data:
sql planner invalidate-cache --tables=PUBLIC.Person
EXPLAIN Output Improvements
Use the improved EXPLAIN command to track which nodes execute queries and what data they access, making query execution plans clearer. The command now also supports the EXPLAIN MAPPING FOR option for inspecting data distribution.
New Functions
- GROUPING: Aggregate function for advanced grouping operations
- CURRENT_USER: Access the current user for auditing and access control
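Here is a rough sketch of both functions through the Java SQL API. The sales table, its columns, and the ROLLUP grouping are illustrative assumptions, not shipped examples.
// CURRENT_USER: handy for audit columns and access checks.
try (ResultSet<SqlRow> rs = ignite.sql().execute(null, "SELECT CURRENT_USER")) {
    rs.forEachRemaining(System.out::println);
}

// GROUPING: tells subtotal rows produced by ROLLUP apart from rows
// whose grouping key is genuinely NULL (hypothetical sales table).
try (ResultSet<SqlRow> rs = ignite.sql().execute(null,
        "SELECT region, GROUPING(region) AS is_total, SUM(amount) AS total "
        + "FROM sales GROUP BY ROLLUP(region)")) {
    rs.forEachRemaining(System.out::println);
}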
Code Deployment
Access deployment unit information directly from your compute jobs to better diagnose issues and validate your code:
public class DiagnosticJob implements ComputeJob<Void, String> {
@Override
public CompletableFuture<String> executeAsync(JobExecutionContext context, Void input) {
String deploymentInfo = context.deploymentUnits().stream()
.map(unit -> String.format("%s:%s at %s", unit.name(), unit.version(), unit.path()))
.collect(Collectors.joining(", "));
return CompletableFuture.completedFuture(deploymentInfo);
}
}
Deployment improvements in 3.1:
- ZIP archive support preserves folder structure for complex applications
- Files over 10 MB now supported
- Automatic unit loading at node startup
Production Operations
Metrics and Observability
Apache Ignite 3.1 adds comprehensive metrics across all major subsystems:
- Storage Metrics: Checkpoint operations, data regions, and storage I/O for the aipersist storage engine
- Table Metrics: Per-table operation statistics including read/write throughput
- Rebalance Metrics: Track rebalancing progress and performance
- SQL Query Metrics: Execution time, row counts, and query cache hit rates
- Transaction Metrics: Transaction lifecycle and duration tracking
- Topology Metrics: Node join/leave events and cluster state changes
- Throttling Metrics: Backpressure and flow control statistics
- Clock Drift Metrics: Monitor time synchronization across cluster nodes
Metric Log Exporter: Exports metrics to files. The exporter is used by default for all new clusters, providing guaranteed access to basic cluster metrics.
System Views for Cluster Introspection
New views expose internal cluster state:
- SYSTEM.SQL_CACHED_QUERY_PLANS: View cached query plans
- SYSTEM.INDEX_COLUMNS: Access index column information
- SYSTEM.SCHEMAS: List all schemas in the cluster
All system views now use standardized column naming. Old column naming is still supported for compatibility purposes.
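System views are ordinary SQL objects, so any client can query them. A minimal sketch, assuming a connected ignite handle:
// List every schema known to the cluster via the SYSTEM.SCHEMAS view.
try (ResultSet<SqlRow> rs = ignite.sql().execute(null, "SELECT * FROM SYSTEM.SCHEMAS")) {
    rs.forEachRemaining(System.out::println);
}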
Compute Job Lifecycle Events
New lifecycle events help you track compute jobs through submission, execution, completion, and failure. MapReduce task events provide visibility into distributed computations.
Cluster Management
Automatic Metastorage Node Selection
When a cluster is initialized, Ignite now automatically selects metastorage and cluster management group nodes based on cluster size:
- ≤3 nodes: all nodes participate
- 4 nodes: 3 nodes (maintains odd number for consensus)
- ≥5 nodes: 5 nodes (balances fault tolerance with overhead)
Multicast Discovery for Dynamic Environments
Nodes discover each other automatically using multicast, removing the need for static node lists in containerized deployments:
node config update ignite.network.nodeFinder.multicast.group=239.5.0.0
node config update ignite.network.nodeFinder.type=MULTICAST
Docker Enhancements
- BOOTSTRAP_NODE_CONFIG environment variable for configuration management
- ARM64 images for ARM-based systems
- Non-root default user improves security
- Java 17 and 21 images available
Distribution Zone Quorum Control
You can explicitly set quorum requirements in distribution zones:
CREATE ZONE exampleZone (REPLICAS 3, QUORUM SIZE 3) STORAGE PROFILES['default'];
Transaction Improvements
Automatic Transaction Retry
The new runInTransaction API automatically retries transactions that fail due to transient errors:
ignite.transactions().runInTransaction(tx -> {
// Transaction logic here
// Automatically retried on transient failures
});
Configurable retry policies handle common failure scenarios like lock conflicts and temporary connectivity issues.
Separate Read-Only and Read-Write Timeouts
New transaction timeout options let you set different timeouts for read-only and read-write transactions:
- readOnlyTimeoutMillis: Shorter timeout for read-only transactions
- readWriteTimeoutMillis: Longer timeout for complex write operations
This prevents read-only queries from timing out unnecessarily while protecting against long-running writes.
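These defaults apply cluster-wide; an individual transaction can still declare itself read-only and set its own timeout through TransactionOptions. The sketch below is a rough illustration that assumes the readOnly and timeoutMillis options of the 3.x Java API; the query and table are hypothetical.
// Explicitly read-only transaction with a short per-transaction timeout.
Transaction readTx = ignite.transactions().begin(
        new TransactionOptions().readOnly(true).timeoutMillis(5_000));
try (ResultSet<SqlRow> rs = ignite.sql().execute(readTx, "SELECT COUNT(*) FROM orders")) {
    rs.forEachRemaining(System.out::println);
}
readTx.commit();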
Java API Updates
- deleteAll(): Bulk delete operations
- ignite.cluster().nodes(): Returns nodes in the logical topology
- ignite.cluster().localNode(): Quick access to the local node in embedded mode
- CancelHandle API: Stop queries, transactions, and compute jobs
- Batched execution cancellation support
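A short sketch of a few of these together. The CancelHandle wiring is shown in outline only, since the exact overloads that accept a CancellationToken differ per API.
// Inspect the logical topology and, in embedded mode, the local node.
Collection<ClusterNode> nodes = ignite.cluster().nodes();
ClusterNode self = ignite.cluster().localNode();
System.out.println("Cluster size: " + nodes.size() + ", local node: " + self.name());

// Create a cancel handle; pass its token to a SQL, transaction, or compute
// call that accepts a CancellationToken, then cancel when needed.
CancelHandle cancelHandle = CancelHandle.create();
CancellationToken token = cancelHandle.token();
// ... start a long-running operation with `token` ...
cancelHandle.cancel();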
Disaster Recovery and Operational Tools
New CLI and REST APIs enable partition-level data cleanup and restart, letting you recover from corrupted partitions without restarting the whole cluster. The system properly destroys tables during node recovery and cleans abandoned transaction write intents during index builds.
Migration from Apache Ignite 2
Apache Ignite 3.1 includes a complete migration toolkit with DDL generator for automatic schema conversion, persistent data migration with progress tracking, and automatic type conversion for legacy Java time APIs. The toolkit supports authenticated operations and complex field mappings for key and value replication.
Breaking Changes and Deprecations
All breaking changes currently ship with backward compatibility support. Plan your cluster migration and update your code and configuration before 3.2, when the deprecated approaches will be removed.
Zone-Based Replication Migration
Zone-based replication changes how RAFT groups are allocated across tables. Clusters upgraded from 3.0 continue using table-based replication to preserve stability. To adopt zone-based replication and gain the performance
benefits, create a new 3.1 cluster and migrate data using SQL COPY INTO/COPY FROM commands. See the
3.0 to 3.1 Migration Guide for detailed workflow.
Configuration and API Changes
Configuration: Property names now include units (e.g. timeoutMillis instead of timeout). System properties were consolidated under ignite.system. Old formats continue to work temporarily.
SQL Syntax: CREATE ZONE syntax modernized to align with SQL standards. Old WITH clause syntax is deprecated but functional.
Java API: ignite.clusterNodes() deprecated in favor of ignite.cluster().nodes(). System view columns standardized with old names temporarily available.
Data Types: BINARY and CHAR removed. Use VARBINARY and VARCHAR instead. Maximum precision for VARCHAR/VARBINARY increased to
2GB.
Get Started
Download: Apache Ignite 3.1
Migration Guide: Upgrading from 3.0
Community: Join the Apache Ignite mailing list or Slack channel
Questions about upgrading? Ask on the dev list or user list.