Apache Ignite Blog

Ignite 2.8 Released: Less Stress in Production and Advances in Machine Learning

March 11, 2020 by Denis Magda

With thousands of changes contributed to Apache Ignite 2.8, enhancing almost every component of the platform, it's easy to overlook the improvements that might convince you to upgrade sooner rather than later. While a quick check of the release notes will surface the anticipated bug fixes, this article guides you through the enhancements every Ignite developer should be aware of.

New Subsystem for Production Monitoring and Tracing

Several months of steady work on IEP-35: Monitoring & Profiling have resulted in a robust and flexible subsystem for production monitoring and diagnostics (aka profiling). It was driven by the needs of the many developers who deployed Ignite in critical environments and asked for a foundation that integrates with external monitoring tools and can be extended easily.

The new subsystem consists of several registries, each grouping individual metrics related to a specific Ignite component. For instance, you will find registries for the cache, compute, and service grid APIs. Since the registries are designed to be generic, dedicated exporters can expose the state of Ignite to a myriad of tools supporting various protocols. Out of the box, Ignite 2.8 ships exporters for traditional monitoring interfaces such as log files, JMX, and SQL views, as well as contemporary ones such as OpenCensus.
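As a rough illustration, the exporters are plugged into the node configuration as SPIs. The sketch below follows the class names proposed in IEP-35 (`JmxMetricExporterSpi`, `LogExporterSpi`); treat the exact package layout as an assumption and check the 2.8 javadoc before copying it:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.metric.jmx.JmxMetricExporterSpi;
import org.apache.ignite.spi.metric.log.LogExporterSpi;

public class MetricsConfigExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Expose all metric registries via JMX and periodically dump them
        // to the node's log file.
        cfg.setMetricExporterSpi(new JmxMetricExporterSpi(), new LogExporterSpi());

        Ignition.start(cfg);
    }
}
```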

Presently, the new subsystem is released in experimental mode to give Ignite users time to try the new API and suggest improvements. The developer community is already impatient to remove the experimental flag, so don't delay your feedback!

Advances in Ignite Machine Learning

The machine learning (ML) capabilities of Ignite 2.8 are so drastically different from previous versions that, if you've been waiting for the right moment to adopt the API, the time has come. We'll only scratch the surface here; you can find the full details on the updated documentation pages.

Model training is usually a multi-step process with preprocessing, training, and evaluation phases. The new pipelining API puts things in order by combining all the phases into a single workflow.
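A hedged sketch of what such a pipeline can look like, based on the `org.apache.ignite.ml.pipeline` package; the vectorizer, scaler, and trainer class names are assumptions for illustration, not verified snippets:

```java
// Combine preprocessing and training into one workflow.
Pipeline<Integer, Vector, Integer, Double> pipeline =
    new Pipeline<Integer, Vector, Integer, Double>()
        // Extract feature columns 0 and 1, use column 2 as the label.
        .addVectorizer(new DummyVectorizer<Integer>(0, 1).labeled(2))
        // Preprocessing phase: scale features to [0, 1].
        .addPreprocessingTrainer(new MinMaxScalerTrainer<Integer, Vector>())
        // Training phase: fit a decision tree classifier.
        .addTrainer(new DecisionTreeClassificationTrainer(4, 0));

// Run all phases as a single workflow over the distributed cache.
PipelineMdl<Integer, Vector> mdl = pipeline.fit(ignite, dataCache);
```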

In addition to the pipelining APIs, Ignite 2.8 introduces ensemble methods, which combine several machine learning models into one predictive model to decrease variance (bagging), decrease bias (boosting), or improve predictions (stacking).
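For example, bagging can be expressed by wrapping an existing trainer into an ensemble. The sketch below assumes a `TrainerTransformers.makeBagged` helper and a mean-based predictions aggregator, which match the 2.8 composition package in spirit; verify the exact signatures against the javadoc:

```java
// A single decision tree as the base learner.
DatasetTrainer<DecisionTreeNode, Double> baseTrainer =
    new DecisionTreeClassificationTrainer(5, 0);

// Wrap it into an ensemble of 10 trees, each trained on a 60% subsample
// of the data and a random 2-feature subspace; predictions are averaged.
DatasetTrainer<ModelsComposition, Double> baggedTrainer =
    TrainerTransformers.makeBagged(
        baseTrainer,
        10,    // ensemble size
        0.6,   // subsample ratio
        4,     // feature vector size
        2,     // features per subspace
        new MeanValuePredictionsAggregator());
```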

Furthermore, you can now import Apache Spark or XGBoost models into Ignite for further inference or for pipelining with other tasks. Feel free to keep training a model with your favorite framework and convert it to the Ignite representation once the model needs to be deployed in production and executed at scale.

Beyond Java: Partition-Awareness and Other Changes

Even though Ignite is Java middleware, it functions as a cross-platform database and computing platform used by applications developed in C#, C++, Python, and other programming languages.

The thin client protocol is the real enabler of support for other programming languages, and with Ignite 2.8 it received a significant performance optimization: partition awareness. This feature allows thin clients to send query requests directly to the nodes that own the queried data. Without partition awareness, an application connected to the cluster via a thin client executes all queries and operations through a single server node that acts as a proxy for the incoming requests.
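In the Java thin client, this boils down to listing several server endpoints and flipping a flag. The sketch below uses `setPartitionAwarenessEnabled`, which is the setter name I'd expect for the 2.8 experimental feature; treat it as an assumption and double-check the `ClientConfiguration` javadoc:

```java
ClientConfiguration clientCfg = new ClientConfiguration()
    // List several server addresses so the client can open direct
    // connections to the nodes that own the data.
    .setAddresses("node1:10800", "node2:10800", "node3:10800")
    // Route key-based requests straight to the primary node for the key.
    .setPartitionAwarenessEnabled(true);

try (IgniteClient client = Ignition.startClient(clientCfg)) {
    ClientCache<Integer, String> cache = client.cache("myCache");
    // With partition awareness, this put goes directly to the node
    // that owns key 1 instead of through a single proxy node.
    cache.put(1, "value");
}
```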

Check the detailed blog post by Pavel Tupitsyn, Ignite committer and PMC, who elaborates on the partition-awareness feature and introduces other .NET-specific enhancements.

Less Stress in Production

This section lists the top improvements that might not have striking or catchy names but can bring relief by automating and optimizing things, and by preventing data inconsistencies once you are in production.

The stop-the-world pauses triggered by Java garbage collectors affect the performance, responsiveness, and throughput of our Java applications. Apache Ignite has a partition-map-exchange (PME) process that, like a garbage collector, has phases that put all running operations on hold for the sake of cluster-wide consistency. In most Ignite usage scenarios, these phases complete promptly and go unnoticed. However, some low-latency or high-throughput use cases may see a momentary decline that impacts business operations. This wiki page lists all the conditions that can trigger a distributed PME, and with Ignite 2.8 some of them were taken off the list: a blocking PME no longer happens when a node belonging to the current baseline topology leaves the cluster or when a thick client connects to it.

Next, we all know that things break, and what really matters is how a system handles failures. With Ignite 2.8, we revisited the way the cluster handles crash recoveries on restarts while replaying write-ahead logs (check IGNITE-7196 and IGNITE-9420). Also, the read-repair feature was added to resolve data inconsistencies between the primary and backup copies on the fly.
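Read-repair is opted into per cache proxy. A minimal sketch, assuming a `withReadRepair()` decorator method on the cache API (the feature is experimental in 2.8, so verify the method name):

```java
// Reads through this proxy compare the primary and backup copies of the
// entry and repair any inconsistency before returning the value.
Person p = cache.<Integer, Person>withReadRepair().get(personKey);
```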

Furthermore, it’s worth mentioning that Ignite 2.8 became more prudent about disk space consumption by supporting the compaction of the data files and write-ahead logs of the native persistence. By sacrificing a few extra CPU cycles on the compaction algorithms, you can save a lot on the storage end.
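A hedged configuration sketch of the storage-saving knobs; `setWalCompactionEnabled` and `setWalPageCompression` are the setters I'd expect on `DataStorageConfiguration`, and LZ4 support typically requires an extra dependency on the classpath, so verify both against the docs:

```java
DataStorageConfiguration storageCfg = new DataStorageConfiguration()
    // Compact archived WAL segments to reclaim disk space.
    .setWalCompactionEnabled(true)
    // Compress page records written to the WAL.
    .setWalPageCompression(DiskPageCompression.LZ4);

storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storageCfg);
```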

Last but not least is the auto-baseline feature, which, for deployments with Ignite native persistence, adjusts the cluster's baseline topology without your intervention in many scenarios. Check this documentation page for more details.
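Enabling it programmatically can look like the following sketch; the `baselineAutoAdjustEnabled` and `baselineAutoAdjustTimeout` method names on `IgniteCluster` are assumptions to be checked against the 2.8 API:

```java
IgniteCluster cluster = ignite.cluster();

// Let the cluster manage the baseline topology itself.
cluster.baselineAutoAdjustEnabled(true);
// Adjust the baseline only after the topology has been stable for 60 s,
// so short node flaps don't trigger unnecessary rebalancing.
cluster.baselineAutoAdjustTimeout(60_000);
```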

Reach out to us on the community user list for more questions, details, and feedback.

Sincerely yours,
Ignite contributors and committers