
Google Kubernetes Engine Deployment

This page explains how to deploy an Ignite cluster on Google Kubernetes Engine.

We will consider two deployment modes: stateless and stateful. Stateless deployments are suitable for in-memory use cases where your cluster keeps the application data in RAM for better performance. A stateful deployment differs from a stateless deployment in that it includes setting up persistent volumes for the cluster’s storage.

Caution
This guide focuses on deploying server nodes on Kubernetes. If you want to run client nodes on Kubernetes while your cluster is deployed elsewhere, you need to enable the communication mode designed for client nodes running behind a NAT. Refer to this section.
Caution
This guide was written using kubectl version 1.17.

Creating a GKE Cluster

A cluster in GKE is a set of nodes that provision resources for the applications that are deployed in the cluster. You must create a GKE cluster with enough resources (CPU, RAM, and storage) for your use case.

The easiest way to create a cluster is to use the gcloud command line tool:

$ gcloud container clusters create my-cluster --zone us-west1
...
Creating cluster my-cluster in us-west1... Cluster is being health-checked (master is healthy)...done.
Created [https://container.googleapis.com/v1/projects/gmc-development/zones/us-west1/clusters/my-cluster].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-west1/my-cluster?project=my-project
kubeconfig entry generated for my-cluster.
NAME        LOCATION  MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION    NUM_NODES  STATUS
my-cluster  us-west1  1.14.10-gke.27  35.230.126.102  n1-standard-1  1.14.10-gke.27  9          RUNNING
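The defaults may not fit every workload. If you need to size the cluster explicitly, flags such as --num-nodes and --machine-type can be passed to the same command (the values below are illustrative, not recommendations):

gcloud container clusters create my-cluster --zone us-west1 \
    --num-nodes 3 --machine-type n1-standard-2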

Verify that your kubectl is configured correctly:

$ kubectl get nodes
NAME                                        STATUS   ROLES    AGE   VERSION
gke-my-cluster-default-pool-6e9f3e45-8k0w   Ready    <none>   73s   v1.14.10-gke.27
gke-my-cluster-default-pool-6e9f3e45-b7lb   Ready    <none>   72s   v1.14.10-gke.27
gke-my-cluster-default-pool-6e9f3e45-cmzc   Ready    <none>   74s   v1.14.10-gke.27
gke-my-cluster-default-pool-a2556b36-85z6   Ready    <none>   73s   v1.14.10-gke.27
gke-my-cluster-default-pool-a2556b36-xlbj   Ready    <none>   72s   v1.14.10-gke.27
gke-my-cluster-default-pool-a2556b36-z8fp   Ready    <none>   74s   v1.14.10-gke.27
gke-my-cluster-default-pool-e93974f2-hwkj   Ready    <none>   72s   v1.14.10-gke.27
gke-my-cluster-default-pool-e93974f2-jqj3   Ready    <none>   72s   v1.14.10-gke.27
gke-my-cluster-default-pool-e93974f2-v8xv   Ready    <none>   74s   v1.14.10-gke.27

Now you are ready to create Kubernetes resources.

Kubernetes Configuration

Kubernetes configuration involves creating the following resources:

  • A namespace

  • A cluster role

  • A ConfigMap for the node configuration file

  • A service to be used for discovery and load balancing when external apps connect to the cluster

  • A configuration for pods running Ignite nodes

Creating Namespace

Create a unique namespace for your deployment. In our case, the namespace is called “ignite”.

Create the namespace using the following command:

kubectl create namespace ignite

Creating Service

The Kubernetes service is used for auto-discovery and as a load-balancer for external applications that will connect to your cluster.

Every time a new node is started (in a separate pod), the IP finder connects to the service via the Kubernetes API to obtain the list of the existing pods' addresses. Using these addresses, the new node discovers all cluster nodes.

service.yaml
apiVersion: v1
kind: Service
metadata:
  # The name must be equal to KubernetesConnectionConfiguration.serviceName
  name: ignite-service
  # The name must be equal to KubernetesConnectionConfiguration.namespace
  namespace: ignite
  labels:
    app: ignite
spec:
  type: LoadBalancer
  ports:
    - name: rest
      port: 8080
      targetPort: 8080
    - name: thinclients
      port: 10800
      targetPort: 10800
  # Optional - remove 'sessionAffinity' property if the cluster
  # and applications are deployed within Kubernetes
  #  sessionAffinity: ClientIP
  selector:
    # Must be equal to the label set for pods.
    app: ignite
status:
  loadBalancer: {}

Create the service:

kubectl create -f service.yaml

Creating Cluster Role and Service Account

Create a service account:

kubectl create sa ignite -n ignite

A role is used to grant the service account access to the pods and endpoints in the namespace. The following file is an example of such a role and the corresponding role binding:

cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ignite
  namespace: ignite
rules:
- apiGroups:
  - ""
  resources: # Here are the resources you can access
  - pods
  - endpoints
  verbs: # That is what you can do with them
  - get
  - list
  - watch
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ignite
  namespace: ignite
roleRef:
  kind: Role
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: ignite
  namespace: ignite

Run the following command to create the role and a role binding:

kubectl create -f cluster-role.yaml

Creating ConfigMap for Node Configuration File

We will create a ConfigMap that holds the node configuration file for every node to use. This allows you to maintain a single instance of the configuration file for all nodes.

Let’s create a configuration file first. Two variants are given below: one without persistence (for a stateless deployment) and one with persistence (for a stateful deployment).

Configuration without persistence

We must use the TcpDiscoveryKubernetesIpFinder IP finder for node discovery. This IP finder connects to the service via the Kubernetes API and obtains the list of the existing pods' addresses. Using these addresses, the new node discovers all other cluster nodes.

The file will look like this:

node-configuration.xml
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">

        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <constructor-arg>
                            <bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
                                <property name="namespace" value="ignite" />
                                <property name="serviceName" value="ignite-service" />
                            </bean>
                        </constructor-arg>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>

Configuration with persistence

In this configuration file, we will:

  • Enable native persistence and specify the workDirectory, walPath, and walArchivePath. These directories are mounted in each pod that runs an Ignite node. Volume configuration is part of the pod configuration.

  • Use the TcpDiscoveryKubernetesIpFinder IP finder. This IP finder connects to the service via the Kubernetes API and obtains the list of the existing pods' addresses. Using these addresses, the new node discovers all other cluster nodes.

The file will look like this:

node-configuration.xml
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">

        <property name="workDirectory" value="/ignite/work"/>

        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="persistenceEnabled" value="true"/>
                    </bean>
                </property>

                <property name="walPath" value="/ignite/wal"/>
                <property name="walArchivePath" value="/ignite/walarchive"/>
            </bean>

        </property>

        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <constructor-arg>
                            <bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
                                <property name="namespace" value="ignite" />
                                <property name="serviceName" value="ignite-service" />
                            </bean>
                        </constructor-arg>
                    </bean>
                </property>
            </bean>
        </property>

    </bean>

The namespace and serviceName properties of the IP finder configuration must be the same as specified in the service configuration. Add other properties as required for your use case.
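For example, if your Kubernetes API server is not reachable at the default address, KubernetesConnectionConfiguration exposes additional properties such as masterUrl and discoveryPort. A sketch with illustrative values (the ones shown are the defaults):

<bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
    <property name="namespace" value="ignite" />
    <property name="serviceName" value="ignite-service" />
    <!-- Optional: address of the Kubernetes API server (illustrative value). -->
    <property name="masterUrl" value="https://kubernetes.default.svc.cluster.local:443" />
    <!-- Optional: port on which cluster nodes listen for discovery requests. -->
    <property name="discoveryPort" value="47500" />
</bean>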

To create the ConfigMap, run the following command in the directory with the node-configuration.xml file.

kubectl create configmap ignite-config -n ignite --from-file=node-configuration.xml

Creating Pod Configuration

Now we will create a configuration for pods. In the case of stateless deployment, we will use a Deployment. For a stateful deployment, we will use a StatefulSet.

Stateless Deployment

Our Deployment configuration will deploy a ReplicaSet with two pods running Ignite 2.16.0.

In the container’s configuration, we will:

  • Enable the “ignite-kubernetes” and “ignite-rest-http” modules.

  • Use the configuration file from the ConfigMap we created earlier.

  • Open a number of ports:

    • 47100 — the communication port

    • 47500 — the discovery port

    • 49112 — the default JMX port

    • 10800 — thin client/JDBC/ODBC port

    • 8080 — REST API port

The deployment configuration file might look as follows:

deployment.yaml
# An example of a Kubernetes configuration for pod deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  # Cluster name.
  name: ignite-cluster
  namespace: ignite
spec:
  # The initial number of pods to be started by Kubernetes.
  replicas: 2
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccountName: ignite
      terminationGracePeriodSeconds: 60000
      containers:
        # Custom pod name.
      - name: ignite-node
        image: apacheignite/ignite:2.16.0
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        - name: CONFIG_URI
          value: file:///ignite/config/node-configuration.xml
        ports:
        # Ports to open.
        - containerPort: 47100 # communication SPI port
        - containerPort: 47500 # discovery SPI port
        - containerPort: 49112 # default JMX port
        - containerPort: 10800 # thin clients/JDBC driver port
        - containerPort: 8080 # REST API
        volumeMounts:
        - mountPath: /ignite/config
          name: config-vol
      volumes:
      - name: config-vol
        configMap:
          name: ignite-config

Create a deployment by running the following command:

kubectl create -f deployment.yaml

Stateful Deployment

Our StatefulSet configuration deploys two pods running Ignite 2.16.0.

In the container’s configuration we will:

  • Enable the “ignite-kubernetes” and “ignite-rest-http” modules.

  • Use the configuration file from the ConfigMap we created earlier.

  • Mount volumes for the work directory (where application data is stored), WAL files, and WAL archive.

  • Open a number of ports:

    • 47100 — the communication port

    • 47500 — the discovery port

    • 49112 — the default JMX port

    • 10800 — thin client/JDBC/ODBC port

    • 8080 — REST API port

The StatefulSet configuration file might look as follows:

statefulset.yaml
# An example of a Kubernetes configuration for pod deployment.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  # Cluster name.
  name: ignite-cluster
  namespace: ignite
spec:
  # The initial number of pods to be started by Kubernetes.
  replicas: 2
  serviceName: ignite-service
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccountName: ignite
      terminationGracePeriodSeconds: 60000
      containers:
        # Custom pod name.
      - name: ignite-node
        image: apacheignite/ignite:2.16.0
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        - name: CONFIG_URI
          value: file:///ignite/config/node-configuration.xml
        - name: JVM_OPTS
          value: "-DIGNITE_WAL_MMAP=false"
        ports:
        # Ports to open.
        - containerPort: 47100 # communication SPI port
        - containerPort: 47500 # discovery SPI port
        - containerPort: 49112 # JMX port
        - containerPort: 10800 # thin clients/JDBC driver port
        - containerPort: 8080 # REST API
        volumeMounts:
        - mountPath: /ignite/config
          name: config-vol
        - mountPath: /ignite/work
          name: work-vol
        - mountPath: /ignite/wal
          name: wal-vol
        - mountPath: /ignite/walarchive
          name: walarchive-vol
      securityContext:
        fsGroup: 2000 # try removing this if you have permission issues
      volumes:
      - name: config-vol
        configMap:
          name: ignite-config
  volumeClaimTemplates:
  - metadata:
      name: work-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
#      storageClassName: "ignite-persistence-storage-class"
      resources:
        requests:
          storage: "1Gi" # make sure to provide enought space for your application data
  - metadata:
      name: wal-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
#      storageClassName: "ignite-wal-storage-class"
      resources:
        requests:
          storage: "1Gi"
  - metadata:
      name: walarchive-vol
    spec:
      accessModes: [ "ReadWriteOnce" ]
#      storageClassName: "ignite-wal-storage-class"
      resources:
        requests:
          storage: "1Gi"

Note the spec.volumeClaimTemplates section, which defines persistent volumes provisioned by a persistent volume provisioner. The volume type depends on the cloud provider. You can have more control over the volume type by defining storage classes.
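For example, a sketch of an SSD-backed storage class for GKE that the commented-out storageClassName lines above could reference (the class name is illustrative; the in-tree gce-pd provisioner matches the kubectl 1.17 era this guide was written for):

storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ignite-wal-storage-class
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd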

Create the StatefulSet by running the following command:

kubectl create -f statefulset.yaml

Check if the pods were deployed correctly:

$ kubectl get pods -n ignite
NAME                                READY   STATUS    RESTARTS   AGE
ignite-cluster-5b69557db6-lcglw   1/1     Running   0          44m
ignite-cluster-5b69557db6-xpw5d   1/1     Running   0          44m

Check the logs of the nodes:

$ kubectl logs ignite-cluster-5b69557db6-lcglw -n ignite
...
[14:33:50] Ignite documentation: http://gridgain.com
[14:33:50]
[14:33:50] Quiet mode.
[14:33:50]   ^-- Logging to file '/opt/gridgain/work/log/ignite-b8622b65.0.log'
[14:33:50]   ^-- Logging by 'JavaLogger [quiet=true, config=null]'
[14:33:50]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false or "-v" to ignite.{sh|bat}
[14:33:50]
[14:33:50] OS: Linux 4.19.81 amd64
[14:33:50] VM information: OpenJDK Runtime Environment 1.8.0_212-b04 IcedTea OpenJDK 64-Bit Server VM 25.212-b04
[14:33:50] Please set system property '-Djava.net.preferIPv4Stack=true' to avoid possible problems in mixed environments.
[14:33:50] Initial heap size is 30MB (should be no less than 512MB, use -Xms512m -Xmx512m).
[14:33:50] Configured plugins:
[14:33:50]   ^-- None
[14:33:50]
[14:33:50] Configured failure handler: [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0, super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]]]
[14:33:50] Message queue limit is set to 0 which may lead to potential OOMEs when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and receiver sides.
[14:33:50] Security status [authentication=off, tls/ssl=off]
[14:34:00] Nodes started on local machine require more than 80% of physical RAM what can lead to significant slowdown due to swapping (please decrease JVM heap size, data region size or checkpoint buffer size) [required=918MB, available=1849MB]
[14:34:01] Performance suggestions for grid  (fix if possible)
[14:34:01] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[14:34:01]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM options)
[14:34:01]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to JVM options)
[14:34:01]   ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[14:34:01]   ^-- Disable processing of calls to System.gc() (add '-XX:+DisableExplicitGC' to JVM options)
[14:34:01] Refer to this page for more performance suggestions: https://ignite.apache.org/docs/latest/perf-and-troubleshooting/general-perf-tips
[14:34:01]
[14:34:01] To start Console Management & Monitoring run ignitevisorcmd.{sh|bat}
[14:34:01] Data Regions Configured:
[14:34:01]   ^-- default [initSize=256.0 MiB, maxSize=370.0 MiB, persistence=false, lazyMemoryAllocation=true]
[14:34:01]
[14:34:01] Ignite node started OK (id=b8622b65)
[14:34:01] Topology snapshot [ver=2, locNode=b8622b65, servers=2, clients=0, state=ACTIVE, CPUs=2, offheap=0.72GB, heap=0.88GB]

The string servers=2 in the last line indicates that the two nodes have joined into a single cluster.

Readiness Probe Setup

To set up a readiness probe, add a readinessProbe section to the container spec in the pod configuration. A special REST command (PROBE) is used to determine when an Ignite Kernal has started.
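A sketch of what that section might look like inside the container spec (the timing values are illustrative; tune them to your startup times):

readinessProbe:
  httpGet:
    path: /ignite?cmd=probe
    port: 8080
  initialDelaySeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  timeoutSeconds: 10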

Liveness Probe

To set up a liveness probe, add a livenessProbe section to the container spec in the pod configuration. The REST command (VERSION) is used to verify that Ignite is running properly.
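Again a sketch, with illustrative timing values:

livenessProbe:
  httpGet:
    path: /ignite?cmd=version
    port: 8080
  initialDelaySeconds: 5
  failureThreshold: 3
  periodSeconds: 10
  timeoutSeconds: 10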

Activating the Cluster

If you deployed a stateless cluster, skip this step: a cluster without persistence does not require activation.

If you are using persistence, you must activate the cluster after it is started. To do that, connect to one of the pods:

kubectl exec -it <pod_name> -n ignite -- /bin/bash

Execute the following command:

/opt/ignite/apache-ignite/bin/control.sh --set-state ACTIVE --yes

You can also activate the cluster using the REST API. Refer to the Connecting to the Cluster section for details about connecting to the cluster’s REST API.
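For example, with curl against the service’s external address (recent Ignite versions support the setstate command, while older versions use cmd=activate; check the REST API reference for your version):

curl "http://13.86.186.145:8080/ignite?cmd=setstate&state=ACTIVE"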

Scaling the Cluster

You can add more nodes to the cluster by using the kubectl scale command.

Caution
Make sure your GKE cluster has enough resources to add new pods.

In the following example, we will bring up one more node (we had two).

To scale your Deployment, run the following command:

kubectl scale deployment ignite-cluster --replicas=3 -n ignite

To scale your StatefulSet, run the following command:

kubectl scale sts ignite-cluster --replicas=3 -n ignite

After scaling the cluster, change the baseline topology accordingly.

Caution
If you reduce the number of nodes by more than the number of partition backups, you may lose data. The proper way to scale down is to redistribute the data after removing a node by changing the baseline topology.
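For example, you can inspect and adjust the baseline topology with control.sh from inside one of the pods (the consistent ID below is a placeholder for a value printed by the first command):

# Print the current baseline topology and the consistent IDs of all server nodes.
/opt/ignite/apache-ignite/bin/control.sh --baseline

# Add the new node to the baseline topology.
/opt/ignite/apache-ignite/bin/control.sh --baseline add <consistentId> --yes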

Connecting to the Cluster

If your application is also running in Kubernetes, you can use either thin clients or client nodes to connect to the cluster.

Get the public IP of the service:

$ kubectl describe svc ignite-service -n ignite
Name:                     ignite-service
Namespace:                ignite
Labels:                   app=ignite
Annotations:              <none>
Selector:                 app=ignite
Type:                     LoadBalancer
IP:                       10.0.144.19
LoadBalancer Ingress:     13.86.186.145
Port:                     rest  8080/TCP
TargetPort:               8080/TCP
NodePort:                 rest  31912/TCP
Endpoints:                10.244.1.5:8080
Port:                     thinclients  10800/TCP
TargetPort:               10800/TCP
NodePort:                 thinclients  31345/TCP
Endpoints:                10.244.1.5:10800
Session Affinity:         None
External Traffic Policy:  Cluster

You can use the LoadBalancer Ingress address to connect to one of the open ports. The ports are also listed in the output of the command.

Connecting Client Nodes

A client node requires connectivity to every node in the cluster. The only way to achieve this is to start the client node within Kubernetes. You will need to configure the discovery mechanism to use TcpDiscoveryKubernetesIpFinder, as described in the Creating ConfigMap for Node Configuration File section.
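A minimal sketch of such a client node configuration, assuming the same namespace and service as above (the file name is illustrative); it is the server discovery configuration plus clientMode set to true:

client-node-configuration.xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">

    <!-- Start this node as a client node rather than a server node. -->
    <property name="clientMode" value="true"/>

    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                    <constructor-arg>
                        <bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
                            <property name="namespace" value="ignite" />
                            <property name="serviceName" value="ignite-service" />
                        </bean>
                    </constructor-arg>
                </bean>
            </property>
        </bean>
    </property>
</bean>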

Connecting with Thin Clients

The following code snippet illustrates how to connect to your cluster using the Java thin client. You can use other thin clients in the same way. Note that we use the external IP address (LoadBalancer Ingress) of the service.

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration().setAddresses("13.86.186.145:10800");
IgniteClient client = Ignition.startClient(cfg);

ClientCache<Integer, String> cache = client.getOrCreateCache("test_cache");

cache.put(1, "first test value");

System.out.println(cache.get(1));

client.close();

Partition Awareness

Partition awareness allows the thin client to send query requests directly to the node that owns the queried data.

Without partition awareness, an application that is connected to the cluster via a thin client executes all queries and operations via a single server node that acts as a proxy for the incoming requests. These operations are then re-routed to the node that stores the data that is being requested. This results in a bottleneck that could prevent the application from scaling linearly.

Without Partition Awareness

Notice how queries must pass through the proxy server node, where they are routed to the correct node.

With partition awareness in place, the thin client can directly route queries and operations to the primary nodes that own the data required for the queries. This eliminates the bottleneck, allowing the application to scale more easily.

With Partition Awareness

To enable partition awareness in a scaled Kubernetes environment, start the client within the cluster and configure it with KubernetesConnectionConfiguration. In this case, the client can connect to every pod in the cluster.

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.client.ThinClientKubernetesAddressFinder;
import org.apache.ignite.configuration.ClientConfiguration;
import org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration;

KubernetesConnectionConfiguration kcfg = new KubernetesConnectionConfiguration();
kcfg.setNamespace("ignite");
kcfg.setServiceName("ignite-service");

ClientConfiguration ccfg = new ClientConfiguration();
ccfg.setAddressesFinder(new ThinClientKubernetesAddressFinder(kcfg));

IgniteClient client = Ignition.startClient(ccfg);

ClientCache<Integer, String> cache = client.getOrCreateCache("test_cache");

cache.put(1, "first test value");

System.out.println(cache.get(1));

client.close();

Connecting to REST API

Connect to the cluster’s REST API as follows:

$ curl http://13.86.186.145:8080/ignite?cmd=version
{"successStatus":0,"error":null,"response":"2.16.0","sessionToken":null}