Discover our quick start guide and build your first application in 5-10 minutes
Apache Ignite executes compute jobs on the nodes that hold the data. This colocated compute pattern eliminates network overhead: jobs access memory-resident data directly. Combined with schema-driven colocation, it enables complex operations at memory speed without data movement.
The Compute API schedules jobs on the nodes holding the relevant data partitions, so no data moves across the network. Jobs read and write local memory directly, eliminating the network bottleneck that limits traditional distributed processing.
Submit jobs with specific keys, and the system routes each job to the node holding those keys. This works with colocation to ensure jobs and data reside together, and the resulting single-hop execution delivers minimal latency for targeted operations.
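The routing idea can be sketched with a local simulation: a hypothetical `KeyRouter` hashes a key to a partition and looks up the owning node, which is where the job would run. The partition count, node names, and hash scheme here are illustrative stand-ins, not the Ignite API.

```java
import java.util.List;

// Local sketch of key-to-node routing; partition count and node list are illustrative.
public class KeyRouter {
    static final int PARTITIONS = 8;
    static final List<String> NODES = List.of("node-0", "node-1", "node-2");

    // Deterministic key -> partition mapping, as a stand-in for the real hash.
    static int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), PARTITIONS);
    }

    // Partition -> owning node; real assignments come from cluster metadata.
    static String nodeFor(String key) {
        return NODES.get(partitionFor(key) % NODES.size());
    }

    public static void main(String[] args) {
        String key = "customer-42";
        // The job is submitted directly to the node that owns the key's partition.
        System.out.println(key + " -> partition " + partitionFor(key) + " on " + nodeFor(key));
    }
}
```

Because the mapping is deterministic, every caller computes the same destination, which is what makes single-hop submission possible.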
Execute jobs across entire partitions. The job receives all rows in the partition as input and can process them sequentially or build in-memory indexes. This enables operations that require full partition visibility.
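A partition-scoped job body can be sketched locally: the function below receives every row of one partition and builds an in-memory index over them, which only works because the job sees the whole partition at once. The `Row` shape and method names are illustrative assumptions.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Local sketch of a partition-scoped job: it receives every row in one
// partition and builds an in-memory index over them. Row shape is illustrative.
public class PartitionIndexJob {
    record Row(long id, String city) {}

    // The "job body": full partition visibility allows grouping all rows at once.
    static Map<String, List<Row>> indexByCity(List<Row> partitionRows) {
        return partitionRows.stream().collect(Collectors.groupingBy(Row::city));
    }

    public static void main(String[] args) {
        List<Row> rows = List.of(new Row(1, "Oslo"), new Row(2, "Lima"), new Row(3, "Oslo"));
        Map<String, List<Row>> index = indexByCity(rows);
        System.out.println(index.get("Oslo").size()); // 2
    }
}
```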
Broadcast jobs to all nodes for cluster-wide operations. Each node processes its local partitions independently, and results aggregate at the coordinator. This pattern suits parallel aggregations and distributed transformations.
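The broadcast pattern can be modeled in plain Java: each "node" runs the same job over its own values, and the coordinator combines the per-node partials. The node layout and method names are illustrative, not cluster code.

```java
import java.util.List;
import java.util.Map;

// Local sketch of the broadcast pattern: each "node" computes a partial
// result over its own partitions, and the coordinator combines the partials.
public class BroadcastSum {
    // Per-node job: sum the values the node holds locally.
    static long localSum(List<Long> localValues) {
        return localValues.stream().mapToLong(Long::longValue).sum();
    }

    // Coordinator: aggregate the per-node partial results.
    static long aggregate(List<Long> partials) {
        return partials.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        Map<String, List<Long>> cluster = Map.of(
                "node-0", List.of(1L, 2L),
                "node-1", List.of(3L),
                "node-2", List.of(4L, 5L));
        List<Long> partials = cluster.values().stream().map(BroadcastSum::localSum).toList();
        System.out.println(aggregate(partials)); // 15
    }
}
```

Only the small partial results cross the network in the real pattern; the raw values never leave their nodes.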
Submit jobs that read data, perform calculations, and return results. No state persists between invocations, and jobs are implemented as plain Java code. The system handles serialization, routing, and result collection automatically.
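The stateless contract can be sketched as a minimal interface executed locally: a job takes an argument and returns a result, with nothing surviving between calls. The `Job` interface below mirrors the idea only; it is not the exact Ignite API.

```java
// Local sketch of the stateless job contract: a job takes an argument,
// computes, and returns a result; nothing survives between invocations.
// The interface name mirrors the idea, not the exact Ignite API.
public class StatelessJobDemo {
    interface Job<T, R> {
        R execute(T arg);
    }

    // A job is plain Java logic; the platform would handle serialization,
    // routing, and result collection around calls like this.
    static final Job<int[], Integer> MAX_JOB =
            arg -> java.util.Arrays.stream(arg).max().orElseThrow();

    public static void main(String[] args) {
        Integer result = MAX_JOB.execute(new int[] {7, 3, 9});
        System.out.println(result); // 9
    }
}
```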
Implement map-reduce patterns with compute jobs: the map phase executes on data-holding nodes and the reduce phase aggregates their results. The framework handles distribution and coordination, providing map-reduce semantics without a separate system.
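The two phases can be sketched as plain functions: `mapCount` is what each data-holding node would run over its local rows, and `reduce` is the coordinator-side aggregation. The row data and function names are illustrative assumptions.

```java
import java.util.List;
import java.util.Map;

// Local sketch of map-reduce over compute jobs: map runs where the data
// lives (one call per node), reduce combines the mapped results.
public class WordCountMapReduce {
    // Map phase: count matching rows among those a node holds locally.
    static long mapCount(List<String> localRows, String word) {
        return localRows.stream().filter(r -> r.contains(word)).count();
    }

    // Reduce phase: sum the per-node counts at the coordinator.
    static long reduce(List<Long> mapped) {
        return mapped.stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        Map<String, List<String>> nodes = Map.of(
                "node-0", List.of("error: disk", "ok"),
                "node-1", List.of("error: net", "error: cpu"));
        List<Long> mapped = nodes.values().stream().map(rows -> mapCount(rows, "error")).toList();
        System.out.println(reduce(mapped)); // 3
    }
}
```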
The Compute API returns CompletableFuture for non-blocking operations. Submit multiple jobs in parallel and compose operations with async combinators. This enables high-concurrency compute workloads without thread exhaustion.
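The composition style is plain `java.util.concurrent.CompletableFuture`, so it can be shown with the JDK alone. Here `submitJob` is a stand-in for an async job submission; the combinators (`allOf`, `thenApply`, `join`) are the real JDK API.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Plain-JDK sketch of the async pattern: submit several independent tasks,
// then compose their results without blocking a thread per job.
public class AsyncCompose {
    static CompletableFuture<Integer> submitJob(int input) {
        // Stand-in for an async job submission returning a future result.
        return CompletableFuture.supplyAsync(() -> input * input);
    }

    public static void main(String[] args) {
        List<CompletableFuture<Integer>> jobs =
                List.of(submitJob(2), submitJob(3), submitJob(4));
        // Combine with async combinators instead of blocking on each job in turn.
        CompletableFuture<Integer> total = CompletableFuture
                .allOf(jobs.toArray(CompletableFuture[]::new))
                .thenApply(v -> jobs.stream().mapToInt(CompletableFuture::join).sum());
        System.out.println(total.join()); // 29
    }
}
```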
Job exceptions propagate to the caller as CompletionException. The system handles node failures transparently: failed jobs retry on other nodes holding the same data partitions.
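The failure contract follows standard `CompletableFuture` semantics, shown here with the JDK alone: an exception thrown inside the async body surfaces to the caller wrapped in `CompletionException`, with the original exception as the cause. `failingJob` is a hypothetical stand-in for a job submission.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;

// Plain-JDK sketch of the failure contract: an exception thrown inside an
// async job surfaces to the caller wrapped in CompletionException.
public class JobFailureDemo {
    static CompletableFuture<String> failingJob() {
        return CompletableFuture.supplyAsync(() -> {
            throw new IllegalStateException("job failed on node");
        });
    }

    static String callAndDescribe() {
        try {
            return failingJob().join();
        } catch (CompletionException e) {
            // The original job exception is preserved as the cause.
            return "caught: " + e.getCause().getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(callAndDescribe()); // caught: job failed on node
    }
}
```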
Compute jobs access tables through RecordView and KeyValueView, with the same partition-aware semantics as client access. Local reads avoid network overhead, providing a consistent programming model across the client and compute layers.
Compute jobs can execute SQL queries on local partitions, filtering and aggregating local data with SQL. Combining procedural logic with declarative queries enables complex business logic at the data layer.
Compute jobs execute within transactions: begin a transaction in compute code, read and write data transactionally, and commit or roll back based on business logic. This ensures consistency for complex multi-step operations.
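The commit-or-rollback decision can be modeled with a small in-memory sketch: writes are buffered and applied only when the business rule holds, otherwise discarded. This models the contract described above, not Ignite's transaction API; the store and method names are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Local sketch of commit-or-rollback inside a job: writes go to a buffer
// and are applied atomically only on commit. Illustrative, not Ignite's API.
public class TxSketch {
    final Map<String, Long> store = new HashMap<>();

    // Run a multi-step transfer; commit only if the business rule holds.
    boolean transfer(String from, String to, long amount) {
        Map<String, Long> buffer = new HashMap<>(store); // begin: work on a copy
        buffer.merge(from, -amount, Long::sum);          // step 1: debit
        buffer.merge(to, amount, Long::sum);             // step 2: credit
        if (buffer.get(from) < 0) {
            return false;                                // rollback: discard buffer
        }
        store.clear();
        store.putAll(buffer);                            // commit: apply all writes
        return true;
    }

    public static void main(String[] args) {
        TxSketch tx = new TxSketch();
        tx.store.put("a", 100L);
        tx.store.put("b", 0L);
        System.out.println(tx.transfer("a", "b", 40));  // true: committed
        System.out.println(tx.transfer("a", "b", 500)); // false: rolled back
    }
}
```

Either both steps of the transfer take effect or neither does, which is the consistency guarantee transactions provide for multi-step job logic.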
Compute jobs operate on memory-resident data, with no disk I/O during execution. MVCC provides snapshot isolation for read operations, delivering the performance needed for real-time compute workloads.
Learn about compute job submission, execution patterns, and colocated processing
Compute Documentation