my first shot at deploying single-cluster k8ssandra on EKS
will use thingsboard ce as a platform to test the database
- set up an AWS account
- install: awscli, eksctl, kubectl, helm, jq
- configure environment:
aws configure --profile k8ssandra-test-triki
eksctl --profile k8ssandra-test-triki create cluster -f eks/cluster.yml \
--set-kubeconfig-context=false \
--auto-kubeconfig=false \
--write-kubeconfig=false
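for reference, a minimal eks/cluster.yml sketch - instance type, node count and volume size are assumptions; the IRSA service account matches the aws-load-balancer-controller install below:
# hypothetical eks/cluster.yml - adjust node sizing to your workload
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: k8ssandra-test-triki
  region: ap-south-1
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: aws-load-balancer-controller
        namespace: kube-system
      wellKnownPolicies:
        awsLoadBalancerController: true
managedNodeGroups:
  - name: workers
    instanceType: r5.xlarge
    desiredCapacity: 3
    volumeSize: 100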
aws eks --profile k8ssandra-test-triki update-kubeconfig --region ap-south-1 --name k8ssandra-test-triki --alias k8ssandra-test-triki
kubectl apply -f eks/gp3-sc.yml
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
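the gp3 StorageClass applied above could look like this - a sketch, assuming the EBS CSI driver addon is installed:
# hypothetical eks/gp3-sc.yml - makes gp3 the default StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true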
ALB controller - needed for thingsboard
cert manager - needed for cass-operator
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=k8ssandra-test-triki \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller
helm repo add jetstack https://charts.jetstack.io
helm repo update jetstack
helm install --version 1.17.2 cert-manager jetstack/cert-manager \
-n cert-manager \
--create-namespace \
--set crds.enabled=true \
--set serviceAccount.create=false \
--set cainjector.serviceAccount.create=false
since thingsboard will need to access the cassandra db, we will deploy them both in a separate namespace (thingsboard) - so the operator needs to be installed cluster-scoped
helm repo add k8ssandra https://helm.k8ssandra.io/stable
helm repo update k8ssandra
helm install k8ssandra-operator --version 1.24.0 k8ssandra/k8ssandra-operator \
-n k8ssandra-operator \
--create-namespace \
--set global.clusterScoped=true
kubectl apply -f thingsboard/namespace.yml
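thingsboard/namespace.yml is presumably just the namespace itself:
apiVersion: v1
kind: Namespace
metadata:
  name: thingsboard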
kubectl apply -f cassandra/k8ssandra-cluster.yml
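a rough sketch of what cassandra/k8ssandra-cluster.yml could contain, inferred from the pod/secret names used later (cluster "cassandra", dc "ap-south-1", rack "r1a"); server version, storage size, the extra racks and the S3 bucket name are assumptions:
# hypothetical cassandra/k8ssandra-cluster.yml
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: cassandra
  namespace: thingsboard
spec:
  cassandra:
    serverVersion: "4.1.5"
    datacenters:
      - metadata:
          name: ap-south-1
        size: 3
        racks:
          - name: r1a
          - name: r1b
          - name: r1c
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: gp3
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
  reaper: {}
  medusa:
    storageProperties:
      storageProvider: s3
      bucketName: <your-backup-bucket>
      region: ap-south-1
      storageSecretRef:
        name: medusa-s3-secret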
AWS Console -> Aurora and RDS -> Create database
see rds-params.png - crucial parameters are highlighted
IMPORTANT - save the master user password somewhere! you cannot read it after DB creation; we will put it inside a kubernetes secret in the next step
alternatively, you can use AWS Secrets Manager for this
RDS_SOURCE="jdbc:postgresql://$(aws --profile k8ssandra-test-triki rds describe-db-instances --region ap-south-1 | jq -r '.DBInstances[] | select(.DBInstanceIdentifier == "rds-for-k8ssandra-test-triki") | .Endpoint.Address'):5432/thingsboard"
RDS_USER=$(aws rds --profile k8ssandra-test-triki describe-db-instances --region ap-south-1 | jq -r '.DBInstances[] | select(.DBInstanceIdentifier == "rds-for-k8ssandra-test-triki") | .MasterUsername')
RDS_PASS=<paste master password here>
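to keep the password out of shell history, you can read it interactively instead:
read -rs RDS_PASS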
kubectl create -n thingsboard secret generic tb-rds-secret \
--from-literal=rds-datasource="$RDS_SOURCE" \
--from-literal=rds-username="$RDS_USER" \
--from-literal=rds-password="$RDS_PASS"
by default thingsboard creates its keyspace with RF=1 - so we need to pre-create the keyspace with a proper RF=3
to get cassandra creds:
kubectl -n thingsboard get secret cassandra-superuser -o json | jq -r '.data.username' | base64 --decode
kubectl -n thingsboard get secret cassandra-superuser -o json | jq -r '.data.password' | base64 --decode
run query:
kubectl -n thingsboard exec -it cassandra-ap-south-1-r1a-sts-0 -c cassandra -- cqlsh \
-u cassandra-superuser \
-e \
"CREATE KEYSPACE IF NOT EXISTS thingsboard \
WITH replication = { \
'class' : 'NetworkTopologyStrategy', \
'ap-south-1' : '3' \
};"
kubectl apply -f thingsboard/tb-node-configmap.yml
cd thingsboard/install
sudo chmod +x install-tb.sh
./install-tb.sh
cd ../..
kubectl apply -f thingsboard/tb-node-sts.yml
kubectl apply -f thingsboard/tb-nlb.yml
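tb-nlb.yml is essentially a LoadBalancer service picked up by the AWS Load Balancer Controller - a sketch, with the port and selector label being assumptions:
# hypothetical thingsboard/tb-nlb.yml
apiVersion: v1
kind: Service
metadata:
  name: tb-nlb
  namespace: thingsboard
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "external"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  type: LoadBalancer
  selector:
    app: tb-node
  ports:
    - name: http
      port: 80
      targetPort: 8080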
you can access the web UI from a browser via its EXTERNAL-IP
get the address from:
kubectl -n thingsboard get svc tb-nlb
default credentials are:
System Administrator: [email protected] / sysadmin
Tenant Administrator: [email protected] / tenant
Customer User: [email protected] / customer
medusa s3 secret snippet:
AWS_KEY_ID=demo12345
AWS_KEY_SECRET=demo12345
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: medusa-s3-secret
  namespace: thingsboard
type: Opaque
stringData:
  credentials: |
    [default]
    aws_access_key_id = ${AWS_KEY_ID}
    aws_secret_access_key = ${AWS_KEY_SECRET}
EOF
TODO - check that backups aren't also persisted locally
TODO - instructions for a dedicated IAM role for medusa s3 access
reaper ui:
kubectl -n thingsboard port-forward svc/cassandra-ap-south-1-reaper-service 8085:8080
localhost:8085/webui
kubectl -n thingsboard get secret cassandra-reaper-ui -o jsonpath='{.data.username}' | base64 --decode
kubectl -n thingsboard get secret cassandra-reaper-ui -o jsonpath='{.data.password}' | base64 --decode
manual backup:
DATE=$(date +%Y%m%d-%H%M) envsubst < cassandra/backup/manual.template.yml | kubectl apply -f -
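a sketch of what cassandra/backup/manual.template.yml might look like - the ${DATE} placeholder matches the envsubst call above, the dc name matches the cluster:
# hypothetical cassandra/backup/manual.template.yml
apiVersion: medusa.k8ssandra.io/v1alpha1
kind: MedusaBackupJob
metadata:
  name: manual-backup-${DATE}
  namespace: thingsboard
spec:
  cassandraDatacenter: ap-south-1
  backupType: full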
TODO - Prometheus
thingsboard data generators:
login as [email protected]
entities -> devices -> "+" -> import device -> use "devices.csv"
rule chains -> "+" -> import rule chain -> use "rule-chain.json"
this will generate 440 key-values written to cassandra per second (40 integers and 400 strings) - ~2.6 GB of data written daily, with per-row TTLs of 7, 10, 14 and 30 days applied in equal shares - total DB size should settle at about 40 GB once the 1-month TTL kicks in
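rough sanity check on those numbers: 440 values/s is ~38M rows/day, so ~2.6 GB/day works out to roughly 68 bytes per stored value; splitting the daily volume equally across the 7/10/14/30-day TTLs (~0.65 GB/day each) gives a steady state of ~0.65 GB x (7+10+14+30) days ≈ 40 GB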
^^^ TODO - explain that the medusa/reaper/cassandra resource requests/limits were sized for this data load
troubleshooting: https://docs.k8ssandra.io/tasks/troubleshoot/
monitoring - i will expose mcac (or vector?) metrics + a separate prometheus deployment; k8ssandra can also create ServiceMonitors for an existing prometheus-operator via telemetry in the K8ssandraCluster, see https://docs.k8ssandra.io/components/metrics-collector/
k8ssandra automatically deletes PVCs when the cluster resource is deleted
there is no retain/delete policy available on the k8ssandra side, so we need to ensure volumes are persisted via a reclaim policy patch:
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
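to patch all of them at once, something like this should work (it retains every PV bound to a claim in the thingsboard namespace):
for pv in $(kubectl get pv -o jsonpath='{range .items[?(@.spec.claimRef.namespace=="thingsboard")]}{.metadata.name}{"\n"}{end}'); do
  kubectl patch pv "$pv" -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
done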
if you need to re-bind PVs to PVCs after the k8ssandra cluster was deleted:
- check that the PVs have the same names, labels and size as described in the k8ssandra CRDs
- patch your PVs:
kubectl patch pv pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -p '{"spec":{"claimRef": null}}'
- after you re-deploy the k8ssandra cluster, the old PVs should automatically bind to the new PVCs
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
- migrate monitoring to a helm chart (that's also a dependency for K8ssandraCluster telemetry.prometheus)