Quickstart: Install Grey Matter on Kubernetes

Prerequisites

  1. git installed

  2. helm v3 installed

  3. envsubst installed (a dependency of our Helm charts)

  4. eksctl installed, or an already running Kubernetes cluster.
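
Before starting, it can help to confirm these tools are on your PATH. The following is a quick sanity check, assuming a typical Linux or macOS shell; exact version output will vary.

# Confirm the prerequisites are installed:
git --version
helm version --short     # should report v3.x
envsubst --version
eksctl version           # only needed if eksctl will be creating the cluster
kubectl version --client # kubectl is used throughout this guide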

Steps

1. Install Kubernetes

NOTE: if you already have a Kubernetes cluster up and running, skip ahead to step 2. Just verify you can connect to the cluster with a command like kubectl get nodes.

For this deployment, we'll use EKS to provision a Kubernetes cluster for us. The eksctl tool will use our pre-configured AWS credentials to create the control plane and worker nodes to our specifications, and will leave us with kubectl configured to manage the cluster.

The region, node type/size, etc. can all be tuned to your use case; the values given are simply examples.

eksctl create cluster \
--name production \
--version 1.15 \
--nodegroup-name workers \
--node-type m4.2xlarge \
--nodes=2 \
--node-ami auto \
--zones us-east-1a,us-east-1b \
--profile default
[] using region us-east-1
[] subnets for us-east-1a - public:192.168.0.0/19 private:192.168.64.0/19
[] subnets for us-east-1b - public:192.168.32.0/19 private:192.168.96.0/19
[] nodegroup "workers" will use "ami-0d373fa5015bc43be" [AmazonLinux2/1.15]
[] using Kubernetes version 1.15
[] creating EKS cluster "production" in "us-east-1" region
[] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --name=production'
[] CloudWatch logging will not be enabled for cluster "production" in "us-east-1"
[] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-1 --name=production'
[] 2 sequential tasks: { create cluster control plane "production", create nodegroup "workers" }
[] building cluster stack "eksctl-production-cluster"
[] deploying stack "eksctl-production-cluster"
[] building nodegroup stack "eksctl-production-nodegroup-workers"
[] --nodes-min=2 was set automatically for nodegroup workers
[] --nodes-max=2 was set automatically for nodegroup workers
[] deploying stack "eksctl-production-nodegroup-workers"
[] all EKS cluster resource for "production" had been created
[] saved kubeconfig as "/home/user/.kube/config"
[] adding role "arn:aws:iam::828920212949:role/eksctl-production-nodegroup-worke-NodeInstanceRole-EJWJY28O2JJ" to auth ConfigMap
[] nodegroup "workers" has 0 node(s)
[] waiting for at least 2 node(s) to become ready in "workers"
[] nodegroup "workers" has 2 node(s)
[] node "ip-192-168-29-248.ec2.internal" is ready
[] node "ip-192-168-36-13.ec2.internal" is ready
[] kubectl command should work with "/home/user/.kube/config", try 'kubectl get nodes'
[] EKS cluster "production" in "us-east-1" region is ready
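
Before moving on, it's worth confirming that kubectl is talking to the new cluster. This is the same check suggested above for pre-existing clusters; both commands below are standard kubectl usage.

# Confirm the kubeconfig written by eksctl is active and both worker nodes are Ready:
kubectl config current-context
kubectl get nodes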

2. Clone the Grey Matter Helm Charts Repo

Though Helm is not the only way to install Grey Matter into Kubernetes, it does make installation much easier and reduces a large number of individual configurations to a few charts. For this step, we'll clone the public Git repository that holds the Grey Matter Helm charts and cd into the resulting directory.

NOTE: this tutorial uses a release candidate, so only a specific branch is pulled. The entire repository can be cloned if desired.

git clone --single-branch --branch release-2.2 https://github.com/DecipherNow/helm-charts.git && cd ./helm-charts
Cloning into 'helm-charts'...
remote: Enumerating objects: 337, done.
remote: Counting objects: 100% (337/337), done.
remote: Compressing objects: 100% (210/210), done.
remote: Total 4959 (delta 225), reused 143 (delta 126), pack-reused 4622
Receiving objects: 100% (4959/4959), 1.09 MiB | 2.50 MiB/s, done.
Resolving deltas: 100% (3637/3637), done.
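
Optionally, confirm that the expected branch was checked out and that the chart directories used in the following steps (spire, fabric, edge, data, sense) are present. This is just a sanity check sketch, run from inside the helm-charts directory.

git branch --show-current    # expect: release-2.2 (requires a recent version of git)
ls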

3. Set Up Credentials

The helm-charts repository contains some convenience scripts to make setup easier. First, we need to create a credentials.yaml file that holds secret information such as usernames and passwords. Simply run make credentials and follow the prompts.

make credentials
./ci/scripts/build-credentials.sh
decipher email:
first.lastname@company.io
password:
Do you wish to configure S3 credentials for gm-data backing [yn] n
Setting S3 to false
"decipher" has been added to your repositories

Note that if your credentials are not valid, you will see the following response:

Error: looks like "https://nexus.production.deciphernow.com/repository/helm-hosted" is not a valid chart repository or cannot be reached: failed to fetch https://nexus.production.deciphernow.com/repository/helm-hosted/index.yaml : 401 Unauthorized
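
If you hit that error, one way to narrow it down is to list the configured Helm repositories and hit the repository index directly. This is a sketch using the repository URL shown in the error above; substitute the email and password you entered at the prompts (the password placeholder below is hypothetical).

# List the Helm repositories configured by `make credentials`:
helm repo list

# Check the chart repository index directly; 200 means the credentials work,
# 401 matches the error shown above.
curl -s -o /dev/null -w '%{http_code}\n' \
  -u 'first.lastname@company.io:YOUR_PASSWORD' \
  https://nexus.production.deciphernow.com/repository/helm-hosted/index.yaml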

4. Install Grey Matter Component Charts

Grey Matter is made up of a handful of components, each handling a different piece of the overall platform. Please follow each installation step in order.

  1. Update the dependencies for each chart, install the credentials file, and install the Spire server.

    helm dep up spire
    helm dep up edge
    helm dep up data
    helm dep up fabric
    helm dep up sense
    make secrets
    helm install server spire/server -f global.yaml
  2. Watch the Spire server pod.

    kubectl get pod -n spire -w

    Watch it until the READY status is 2/2, then proceed to the next step.

    NAME       READY   STATUS    RESTARTS   AGE
    server-0   2/2     Running   1          30s
  3. Install the Spire agent and the remaining Grey Matter charts.

    helm install agent spire/agent -f global.yaml
    helm install fabric fabric --set=global.environment=eks -f global.yaml
    helm install edge edge --set=global.environment=eks -f global.yaml
    helm install data data --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml
    helm install sense sense --set=global.environment=eks --set=global.waiter.service_account.create=false -f global.yaml

    If you receive Error: could not find tiller after running the above commands, then you're running an older version of Helm and must install Helm v3. If you need to manage multiple versions of Helm, we highly recommend using helmenv to easily switch between versions.

    While these are being installed, you can use kubectl to check whether everything is running. When all pods are Running or Completed, the install is finished and Grey Matter is ready to go. (An alternative that blocks until resources are ready, using kubectl wait, is sketched after this list.)

    kubectl get pods
    NAME                                    READY   STATUS      RESTARTS   AGE
    catalog-5b54979554-hs98q                2/2     Running     2          91s
    catalog-init-k29j2                      0/1     Completed   0          91s
    control-887b76d54-gbtq4                 1/1     Running     0          18m
    control-api-0                           2/2     Running     0          18m
    control-api-init-6nk2f                  0/1     Completed   0          18m
    dashboard-7847d5b9fd-t5lr7              2/2     Running     0          91s
    data-0                                  2/2     Running     0          17m
    data-internal-0                         2/2     Running     0          17m
    data-mongo-0                            1/1     Running     0          17m
    edge-6f8cdcd8bb-plqsj                   1/1     Running     0          18m
    internal-data-mongo-0                   1/1     Running     0          17m
    internal-jwt-security-dd788459d-jt7rk   2/2     Running     2          17m
    internal-redis-5f7c4c7697-6mmtv         1/1     Running     0          17m
    jwt-security-859d474bc6-hwhbr           2/2     Running     2          17m
    postgres-slo-0                          1/1     Running     0          91s
    prometheus-0                            2/2     Running     0          59s
    redis-5f5c68c467-j5mwt                  1/1     Running     0          17m
    slo-7c475d8597-7gtfq                    2/2     Running     0          91s
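
As an alternative to watching the pods by hand in steps 2 and 3 above, kubectl can block until a resource reports ready. This is a sketch assuming the pod and deployment names shown in the output above; adjust names, namespaces, and timeouts to your deployment.

# Block until the Spire server pod reports Ready (the 2/2 state from step 2):
kubectl wait pod server-0 -n spire --for=condition=Ready --timeout=300s

# Block until the edge deployment from step 3 is fully available:
kubectl wait deployment edge --for=condition=Available --timeout=600s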

5. Access the Dashboard

NOTE: for easy setup, access to this deployment is provisioned with quickstart SSL certificates, which can be found in the helm-charts repository at ./certs. To access the dashboard via the public access point, import the ./certs/quickstart.p12 file into your browser of choice.
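
If you want to inspect the quickstart bundle before importing it, openssl can dump its contents. This is a sketch; supply the bundle's export password at the prompt if one is set.

# Show the certificates inside the quickstart PKCS#12 bundle without extracting keys:
openssl pkcs12 -info -nokeys -in ./certs/quickstart.p12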

An Amazon ELB was created automatically because we specified the flag --set=global.environment=eks during installation. The ELB is accessible through the randomly generated URL attached to the edge service:

$ kubectl get svc edge
NAME   TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)                          AGE
edge   LoadBalancer   10.100.197.77   a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com   10808:32623/TCP,8081:31433/TCP   2m4s

Visit the URL (e.g. https://a2832d300724811eaac960a7ca83e992-749721369.us-east-1.elb.amazonaws.com:10808/) in your browser to access the Intelligence 360 Application.

Intelligence 360 Application
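
If you prefer to script the lookup rather than read the hostname out of the table above, the ELB hostname can be pulled straight from the edge service. This is a sketch; port 10808 is the edge port shown in the output above.

# Grab the ELB hostname attached to the edge service and print the dashboard URL:
EDGE_HOST=$(kubectl get svc edge -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "https://${EDGE_HOST}:10808/"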

Cleanup

Delete the Grey Matter Installation

make uninstall
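
If the make target isn't available in your checkout, the Helm releases installed in step 4 can also be removed directly. This is a sketch using the release names from above; it does not necessarily clean up everything that make uninstall does.

# Remove the releases in roughly the reverse order they were installed:
helm uninstall sense data edge fabric agent server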

Delete the EKS Cluster

NOTE: this deletion takes longer than the output indicates; resources continue terminating in the background. Attempting to create a new cluster with the same name will fail for some time, until all resources have been purged from AWS.

eksctl delete cluster --name production
[] using region us-east-1
[] deleting EKS cluster "production"
[] kubeconfig has been updated
[] cleaning up LoadBalancer services
[] 2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "prod" [async] }
[] will delete stack "eksctl-production-nodegroup-workers"
[] waiting for stack "eksctl-production-nodegroup-workers" to get deleted
[] will delete stack "eksctl-production-cluster"
[] all cluster resources were deleted
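
Since the CloudFormation stacks keep deleting in the background, you can poll until the cluster and its stacks disappear before reusing the name. This is a sketch, assuming the AWS CLI is configured with the same profile and region used above.

# The cluster should eventually drop out of this listing:
eksctl get cluster --region us-east-1

# Likewise, the backing CloudFormation stack should eventually report DELETE_COMPLETE or disappear:
aws cloudformation describe-stacks --stack-name eksctl-production-cluster --region us-east-1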