
Installing to an Existing Kubernetes Cluster

Steps to deploy Grainite in an existing Kubernetes cluster on Azure, AWS, GCP, and VMware vSphere with Kubernetes.

Prerequisites

Installing Grainite into an existing Kubernetes cluster requires at least three nodes dedicated to Grainite. Grainite pods run in a 1:1 relationship with the nodes in the cluster. For high availability, we recommend that the nodes within the cluster be spread across multiple availability zones. Grainite will remain available as long as two out of the three nodes within the cluster are operational.
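To see how the candidate nodes are spread across availability zones, you can list them with the standard topology label (assuming your cloud provider applies it to the nodes):
kubectl get nodes -L topology.kubernetes.io/zone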

Nodes within the cluster

To run a Grainite server, nodes within the cluster must be configured as follows:
  • Number of nodes within the cluster: 3
  • CPU requirement: 8 vCPU
  • RAM requirement: 32 GiB
  • Storage requirement: 2256 GiB SSD
  • Operating system: Ubuntu Linux 22.04 LTS
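A quick way to confirm that each node meets these requirements is to list node capacity and OS image using standard kubectl fields (note that memory is reported in KiB):
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory,OS:.status.nodeInfo.osImage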

Deployment virtual machine

The deployment virtual machine will be used to install the Grainite server into an existing Kubernetes cluster using Helm Charts. For this virtual machine, we recommend the following configuration:
  • CPU requirement: 2 vCPU
  • RAM requirement: 8 GiB
  • Storage requirement: 200 GiB
  • Operating system: Ubuntu Linux 22.04 LTS

Software packages to install on the deployment virtual machine

Helm
curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
Cloud provider CLI (gcloud, aws, or az, depending on your cloud provider)
Terraform
jq
sudo apt install -y jq
zip
sudo apt install zip
grepcidr
sudo apt install -y grepcidr
ipcalc
sudo apt install -y ipcalc
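To confirm the main tools are in place before proceeding, a quick version check is enough (kubectl is typically installed alongside your cloud provider CLI):
helm version --short
terraform version
jq --version
kubectl version --client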

Tokens

Make sure you have each of the credentials below, which were sent to you by the Grainite team. They will be passed as arguments to some of the commands in this guide:
  • Helm deploy token (same as GitLab deploy token)
  • Helm username
  • Quay username
  • Quay password

Cluster Preparation

Before you begin to install Grainite into your cluster, please ensure that the kubectl context is set correctly. The following commands help you get and set the context of the clusters currently running in your environment.
Get-Context
This displays one or many contexts available in your environment.
kubectl config get-contexts
Use-Context
Switch to the context of the cluster where Grainite will be installed
kubectl config use-context <context name>
Get-nodes
Obtain a list of nodes within the cluster
kubectl get nodes
Label-nodes
First, you may need to remove existing labels from the nodes. This is only required if the nodes already carry labels you want to replace.
kubectl label node <node_name> <label>-
Label the nodes where you would like to install Grainite
kubectl label node <node_name> grainite-env.node=grainite-node --overwrite
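If you are labeling several nodes, a small loop avoids repetition. The node names below are placeholders; substitute the names returned by kubectl get nodes:
for node in node-1 node-2 node-3; do
  kubectl label node "${node}" grainite-env.node=grainite-node --overwrite
done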
Create-namespace
kubectl create namespace <namespace>
Show-labels
Verify the nodes have been labeled correctly
kubectl get nodes --show-labels
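If the full label listing is hard to read, you can filter to only the nodes carrying the Grainite label:
kubectl get nodes -l grainite-env.node=grainite-node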
Show-namespace
Verify the namespace has been created correctly
kubectl get ns

Grainite Setup

Set up a Helm Chart repo
helm repo add grainite-repo https://gitlab.com/api/v4/projects/26443204/packages/helm/stable --username <Helm username> --password <Helm deploy token>
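Optionally, refresh the local chart index and confirm which chart versions the repo offers:
helm repo update
helm search repo grainite-repo/grainite --versions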
Download the Grainite Helm Chart
Latest version
helm pull grainite-repo/grainite
Specific version
helm pull grainite-repo/grainite --version <chart version>
Note: Helm Chart version is specified in the format 23.23.0
Verify file has been downloaded
~/grainite/scripts/bin$ ls -l grainite*.tgz
-rw-r--r-- 1 user user 5967 Jun  5 16:31 grainite-23.23.0.tgz
Install workload
GCP:
helm install <NAME> <CHART> -n <namespace> \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay username> \
--set imageCredentials.password=<Quay password> \
--set grainiteImage=quay.io/grainite/cluster:<version> \
--set cloudType=gcp \
--set cluster=<cluster name> \
--set configMap.dat_size_gb=1Ti \
--set configMap.num_servers=<nodes in cluster: default 3> \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-meta=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set volumeType=pd-ssd \
--set vpc_name=<vpc-name> \
--set internalLB=true \
[--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24 needed when internalLB=false]
Note: --create-namespace is optional and only needed if the namespace was not created in the earlier step.
Example:
helm install grainite ./grainite-23.23.0.tgz -n grainite \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay username> \
--set imageCredentials.password=<Quay password> \
--set grainiteImage=quay.io/grainite/cluster:2323 \
--set cloudType=gcp \
--set cluster=bob-2323 \
--set configMap.dat_size_gb=1Ti \
--set configMap.num_servers=3 \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-meta=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set volumeType=pd-ssd \
--set vpc_name=bob-vpc \
--set internalLB=false \
--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24
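If you prefer not to repeat a long list of --set flags, the same settings can be kept in a values file and passed with -f. The sketch below mirrors the keys used in the GCP example above (shown with internalLB=true and no sourceCIDRList); the file name values-gcp.yaml is arbitrary and the placeholders still need to be filled in:
cat > values-gcp.yaml <<'EOF'
kms_sa_name: missing
imageCredentials:
  registry: quay.io
  username: <Quay username>
  password: <Quay password>
grainiteImage: quay.io/grainite/cluster:2323
cloudType: gcp
cluster: bob-2323
configMap:
  dat_size_gb: 1Ti
  num_servers: 3
storeCapacity:
  grainite-dat: 1Ti
  grainite-meta: 1Ti
  grainite-sav: 64Gi
volumeType: pd-ssd
vpc_name: bob-vpc
internalLB: true
EOF
helm install grainite ./grainite-23.23.0.tgz -n grainite -f values-gcp.yaml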
AWS:
helm install <NAME> <CHART> -n <namespace> \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay Username> \
--set imageCredentials.password=<Quay Password> \
--set grainiteImage=quay.io/grainite/cluster:<version> \
--set cloudType=aws \
--set terminationGracePeriodSeconds=180 \
--set volumeType=gp3 \
--set configMap.num_servers=<nodes in cluster default:3> \
--set cluster=<cluster name> \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-meta=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set internalLB=true \
[--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24 needed when internalLB=false]
Note: --create-namespace is optional and only needed if the namespace was not created in the earlier step.
Example
helm install grainite ./grainite-23.23.0.tgz -n grainite \
--create-namespace \
--set kms_sa_name=missing \
--set imageCredentials.registry=quay.io \
--set imageCredentials.username=<Quay Username> \
--set imageCredentials.password=<Quay Password> \
--set grainiteImage=quay.io/grainite/cluster:2323 \
--set cloudType=aws \
--set terminationGracePeriodSeconds=180 \
--set volumeType=gp3 \
--set configMap.num_servers=3 \
--set cluster=bob-2323 \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-meta=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set internalLB=false \
--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24
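The AWS command assumes the cluster can provision gp3 EBS volumes. If you are unsure, checking the available storage classes and, if applicable, the EBS CSI driver pods before installing can save a failed deployment:
kubectl get storageclass
kubectl get pods -n kube-system | grep ebs-csi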
Azure:
helm install <NAME> <CHART> -n <namespace> \
--create-namespace \
--set imageCredentials.password=<Quay Password> \
--set imageCredentials.registry=quay.io/grainite \
--set imageCredentials.username=<Quay Username> \
--set grainiteImage=quay.io/grainite/cluster:<version> \
--set volumeType=StandardSSD_LRS \
--set storeCapacity.grainite-dat=<dat_size> \
--set storeCapacity.grainite-sav=<sav_size> \
--set storeCapacity.grainite-meta=<dat_size> \
--set configMap.dat_size_gb=<dat_size> \
--set configMap.num_servers=3 \
--set kms_sa_name=missing \
--set cloudType=azure \
[--version <chart_version> needed if installing directly from the Helm repo] \
--set cluster=<cluster_name> \
--set vpc_name=<vpc_name> \
--set externalTrafficPolicy=Local \
--set internalLB=true \
[--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24 needed when internalLB=false]
Note: --create-namespace is optional and only needed if the namespace was not created in the earlier step.
Example
helm install grainite ./grainite-23.23.0.tgz -n grainite \
--set imageCredentials.password=<Quay Password> \
--set imageCredentials.registry=quay.io/grainite \
--set imageCredentials.username=<Quay Username> \
--set grainiteImage=quay.io/grainite/cluster:2319.9 \
--set volumeType=StandardSSD_LRS \
--set storeCapacity.grainite-dat=1Ti \
--set storeCapacity.grainite-sav=64Gi \
--set storeCapacity.grainite-meta=1Ti \
--set configMap.dat_size_gb=1Ti \
--set configMap.num_servers=3 \
--set kms_sa_name=missing \
--set cloudType=azure \
--set cluster=bob-2319.9 \
--set vpc_name=bob-vpc \
--set externalTrafficPolicy=Local \
--set internalLB=false \
--set sourceCIDRList=1.2.3.4/24,5.6.7.8/24
VMware Tanzu:
helm install <NAME> <CHART> -n <namespace> \
--create-namespace \
--set imageCredentials.password=<Image repository password> \
--set imageCredentials.registry=<image repository url> \
--set imageCredentials.username=<Image repository username> \
--set grainiteImage=<grainite image name> \
--set storeCapacity.grainite-dat=<dat volume size> \
--set storeCapacity.grainite-sav=<sav volume size> \
--set storeCapacity.grainite-meta=<meta volume size> \
--set configMap.dat_size_gb=<dat volume size> \
--set configMap.num_servers=<num nodes in cluster> \
--set kms_sa_name=missing \
--set cloudType=tanzu \
--set namespace=<Grainite namespace>
Example
#!/bin/bash
# Derive the sav volume size from the dat volume size using a 16:1 dat:sav ratio
dat_size=64Gi
DAT_SAV_RATIO=16
dat_unit=$( echo ${dat_size} | sed 's/[0-9]//g' )
dat_num=$( echo ${dat_size} | sed 's/[GT]i//g' )
sav_num=$(( dat_num * 1024 / DAT_SAV_RATIO ))
if [[ "${dat_unit}" == "Gi" ]]; then
  sav_size="${sav_num}Mi"
elif [[ "${dat_unit}" == "Ti" ]]; then
  sav_size="${sav_num}Gi"
fi
image_pass=<Quay password>
image_user=<Quay username>
image_name="quay.io/grainite/cluster:2323.0"
helm install grainite grainite-repo/grainite -n grainite-ns --create-namespace \
  --set imageCredentials.password=${image_pass} \
  --set imageCredentials.registry=quay.io/grainite \
  --set imageCredentials.username=${image_user} \
  --set grainiteImage=${image_name} \
  --set storeCapacity.grainite-dat=${dat_size} \
  --set storeCapacity.grainite-sav=${sav_size} \
  --set storeCapacity.grainite-meta=${dat_size} \
  --set configMap.dat_size_gb=${dat_size} \
  --set configMap.num_servers=3 \
  --set kms_sa_name=missing \
  --set cloudType=tanzu \
  --set namespace=grainite-ns
Upon successful deployment, you should see a message similar to the one below
NAME: grainite
LAST DEPLOYED: Tue Jun 6 14:29:40 2023
NAMESPACE: grainite
STATUS: deployed
REVISION: 1
TEST SUITE: None
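The release status can be re-displayed at any time with Helm:
helm status grainite -n <namespace>
helm list -n <namespace>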

Verify the installation

The first step in verifying your installation is to ensure the correct namespace is set for kubectl.
kubectl config set-context --current --namespace=<namespace>
Ensure all your pods are running
kubectl get pods
You should see an output like the one below.
NAME    READY   STATUS    RESTARTS   AGE
gxs-0   1/2     Running   0          33h
gxs-1   1/2     Running   0          33h
gxs-2   1/2     Running   0          34h
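You can also confirm that the persistent volume claims are bound and locate the Service endpoint that clients will connect to (service names depend on the chart version):
kubectl get pvc
kubectl get svc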

Error Recovery

In case of errors, you can restart the Grainite installation by first uninstalling the pods from the Kubernetes cluster.
helm delete grainite -n <namespace>
If you also want to destroy the persistent volumes, delete their persistent volume claims:
kubectl get pvc | grep gxs | awk '{print $1}' | xargs kubectl delete pvc
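Before reinstalling, you may want to confirm that the claims and their backing volumes are actually gone (the grep prints nothing once they have been removed):
kubectl get pvc
kubectl get pv | grep gxs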