Install on Kubernetes
This guide covers the necessary steps to install and configure the Grey Matter service mesh on a cloud-based Kubernetes cluster. While these instructions should work with most Kubernetes versions, they have been tested and confirmed against versions 1.18, 1.19, and 1.20.
The Grey Matter mesh installed at the end of this guide is not intended for production use. Contact Grey Matter Customer Support for more information on a production deployment.

Prerequisites

    1. helm v3
    2. A cloud CLI for your provider (eksctl, gcloud, or the Azure CLI)
    3. kubectl
    4. Grey Matter credentials requested via Grey Matter Support
These instructions assume that the user installing Grey Matter has permissions to create ServiceAccounts, ClusterRoles and ClusterRoleBindings. The Helm Charts will automatically create the necessary RBAC permissions for Grey Matter in the Kubernetes cluster.
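If you are unsure whether your account has these privileges, you can check with kubectl auth can-i once you are connected to your cluster. This is a quick sketch; the verbs and resources below mirror the objects named above, and each command should print yes:
kubectl auth can-i create serviceaccounts
kubectl auth can-i create clusterroles
kubectl auth can-i create clusterrolebindings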

Steps

1. Create Kubernetes Cluster

If you already have a Kubernetes cluster up and running, verify you can connect to it with a command like kubectl get nodes, then move on to Step 2.
Amazon EKS
For this deployment, we'll use Amazon EKS to automatically provision a Kubernetes cluster for us. eksctl will use your preconfigured default AWS credentials to create master and worker nodes to our specifications, and configure kubectl so we can manipulate the cluster.
The regions, node type/size, and other settings used below may need to be tuned to your use case. The minimum Kubernetes version supported by each platform slowly moves over time, so this may need to be updated periodically. Run the following commands:
eksctl create cluster \
  --name production \
  --version 1.18 \
  --nodegroup-name workers \
  --node-type m5.2xlarge \
  --nodes=2 \
  --node-ami auto \
  --region us-east-1 \
  --zones us-east-1a,us-east-1b \
  --profile default
After 10 - 15 minutes, your cluster should be ready. You can test that your configuration is correct by running:
eksctl get cluster --region us-east-1 --profile default
eksctl get nodegroup --region us-east-1 --profile default --cluster production
Google GKE
For this deployment, we'll use Google GKE to automatically provision a Kubernetes cluster for us. gcloud will create master and worker nodes to our specifications, and configure kubectl so we can manipulate the cluster.
The regions, node type/size, and other settings used below may need to be tuned to your use case. The minimum Kubernetes version supported by each platform slowly moves over time, so this may need to be updated periodically. Run the following commands:
gcloud container clusters create production \
  --machine-type e2-standard-8 \
  --num-nodes 2 \
  --cluster-version 1.18 \
  --zone us-central1-a \
  --node-locations us-central1-a
After 3-5 minutes, your cluster should be ready. You can test that your configuration is correct by running:
gcloud container clusters list
Microsoft AKS
For this deployment, we'll use Microsoft AKS to automatically provision a Kubernetes cluster for us. az will create master and worker nodes to our specifications, and configure kubectl so we can manipulate the cluster.
The regions, node type/size, and other settings used below may need to be tuned to your use case. The minimum Kubernetes version supported by each platform slowly moves over time, so this may need to be updated periodically. Run the following commands:
az group create --name production --location eastus
az aks create --resource-group production \
  --name production \
  --node-vm-size standard_b8ms \
  --node-count 2 \
  --kubernetes-version 1.18.14 \
  --enable-addons monitoring \
  --generate-ssh-keys
Now we'll use az to retrieve our credentials and set up kubectl for access to the cluster.
az aks get-credentials --resource-group production --name production
After 3-5 minutes, your cluster should be ready. You can test that your configuration is correct by running:
az aks list

2. Set up Credentials

The credentials identified in the prerequisite steps are used to create a Kubernetes Image Pull Secret that grants access to the Grey Matter Docker images in the Grey Matter Nexus Repository.
If you do not have credentials yet, please contact Grey Matter support.
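For reference, the pull secret the charts create is a standard Kubernetes docker-registry secret built from the values in credentials.yaml (see step 4). You do not need to create it by hand; a roughly equivalent manual command is sketched below, where the secret name greymatter-image-pull is illustrative and the target namespace must already exist:
kubectl create secret docker-registry greymatter-image-pull \
  --docker-server=docker.greymatter.io \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<username> \
  -n greymatter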

3. Get the Grey Matter Helm Charts

The Grey Matter Helm Charts are available from our GitHub helm-charts repository. Run the following commands to add our helm chart repository and update your local chart index.
helm repo add greymatter https://greymatter-io.github.io/helm-charts
helm repo update
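You can confirm the repository was added and see the available chart versions with:
helm search repo greymatter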

4. Set up Secrets

Using your credentials, create the following file and save it as credentials.yaml.
dockerCredentials:
  - registry: docker.greymatter.io
    email: <username>
    username: <username>
    password: <password>

5. Generate Configurations

Run the following to download the base configuration file:
wget https://raw.githubusercontent.com/greymatter-io/helm-charts/main/global.yaml
This file is where you can specify any custom configuration for the installation. As downloaded, it will install Grey Matter with all default values. If you wish to modify the defaults, change the existing values in global.yaml.
Some configuration changes will change the installation process.
If you set global.spire.enabled to false, skip the server and agent release installations in step 6.
To set more complex configurations like image versions, service environment variables, etc., check out the values files in each of the Grey Matter helm charts. Any additional configurations you wish to set can be added to the global.yaml file with the same directory structure as found in the <chart>/values.yaml file.
For example, to set the version of the Grey Matter proxy used for the edge, simply replace the version as indicated below:
edge:
  version: '<desired-version>'
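To see every value a chart accepts and its default, you can inspect the chart's values file directly; for example, for the edge chart version used later in this guide:
helm show values greymatter/edge --version 4.0.2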

6. Install

Once you have set up your credentials.yaml and global.yaml files, run the following steps in order:
    1. Install the necessary secrets
    Grey Matter requires several Kubernetes secrets. The necessary secrets have been extracted into a single helm chart for ease of installation.
    helm install secrets greymatter/secrets --version 4.0.0 -f credentials.yaml -f global.yaml -n greymatter --create-namespace
    2. Install SPIRE server and agent
    This guide will use SPIRE to issue certificates to enable mTLS through the mesh. These commands will install the SPIRE server and agents into the spire namespace.
    helm install spire-server greymatter/server --version 4.0.1 -f global.yaml -n spire --create-namespace
    Before installing the agent, watch the server pod come up:
    kubectl get pods -n spire -w
    Wait until the server pod is 2/2:
    NAME       READY   STATUS    RESTARTS   AGE
    server-0   2/2     Running   1          3h54m
    Then, install the SPIRE agent:
    helm install spire-agent greymatter/agent --version 4.0.1 -f global.yaml -n spire
    Verify SPIRE installation
    kubectl get pods -n spire -w
    The SPIRE agent runs as a DaemonSet, so the number of Agent pods is directly related to how many nodes are in your cluster.
    NAME          READY   STATUS    RESTARTS   AGE
    agent-5d7q4   1/1     Running   0          3h54m
    agent-8c9lq   1/1     Running   0          3h54m
    agent-s2svz   1/1     Running   0          3h54m
    server-0      2/2     Running   1          3h54m
    3. Install Grey Matter Charts
    At this point, we're ready to install Grey Matter. Grey Matter is installed through a series of Helm Charts, each covering a different set of components. You'll notice we're also setting a few values on the command line. These could be set in the global.yaml file instead, but we wanted to call them to your attention here.
      global.environment: This is set to kubernetes to drive platform-specific configurations.
      edge.ingress.type: This is set to LoadBalancer so the Edge service is exposed as a LoadBalancer service type.
      global.waiter.service_account.create: This is set to false to prevent the Sense helm chart from attempting to create the waiter service account.
    helm install fabric greymatter/fabric --version 4.0.2 -f global.yaml --set=global.environment=kubernetes -n greymatter
    helm install edge greymatter/edge --version 4.0.2 -f global.yaml --set=global.environment=kubernetes --set=edge.ingress.type=LoadBalancer -n greymatter
    helm install sense greymatter/sense --version 4.0.1 -f global.yaml --set=global.environment=kubernetes --set=global.waiter.service_account.create=false -n greymatter
    Notice in the edge installation we are setting --set=edge.ingress.type=LoadBalancer; this value sets the service type for edge. The default is ClusterIP. In this example we want an AWS ELB to be created automatically for edge ingress (see below), so we set it to LoadBalancer. See the Kubernetes publishing services docs for guidance on what this value should be in your specific installation. An equivalent global.yaml snippet is shown after the example pod listing below.
    If you receive Error: could not find tiller after running the above helm commands, then you're running an older version of Helm and must install Helm v3. If you need to manage multiple versions of Helm, we highly recommend using helmenv to easily switch between versions.
    While these are being installed, you can use the kubectl command to check if everything is running. When all pods are Running or Completed, the install is finished and Grey Matter is ready to go.
    kubectl get pods -n greymatter
    The running output will look like the following:
    NAME                           READY   STATUS      RESTARTS   AGE
    catalog-75d4c66477-h9nnr       2/2     Running     0          49s
    catalog-init-v778j             0/1     Completed   0          49s
    control-78f8ccf5f6-krlbd       1/1     Running     0          111s
    control-api-0                  2/2     Running     0          111s
    control-api-init-pblsb         0/1     Completed   0          111s
    dashboard-dd95bddbc-hrzm5      2/2     Running     0          49s
    edge-d56fd6795-qdctl           1/1     Running     0          63s
    jwt-redis-6b84846ffc-fkpjg     1/1     Running     0          111s
    jwt-security-7bc8bfb9f-jbz6l   2/2     Running     0          111s
    mesh-redis-0                   1/1     Running     0          111s
    prometheus-0                   2/2     Running     0          49s
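    As noted above, the values passed here with --set flags can instead live in global.yaml. A sketch of the equivalent entries, mirroring the flags used in the install commands, would be:
    global:
      environment: kubernetes
      waiter:
        service_account:
          create: false
    edge:
      ingress:
        type: LoadBalancer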

7. Accessing the application

    1. Get the User Certificate
    By default, Grey Matter leverages mutual TLS (mTLS) communications for all traffic, including inbound traffic to the mesh. This means that all https requests must include TLS certificates, whether via a web browser or a RESTful client. The Grey Matter helm charts have the ability to generate random Ingress and User certificates to ensure unique certificates every time a cluster is launched. For web based authentication, these certificates can then be imported into a web browser to access resources in the mesh.
    Following the instructions in this guide, Grey Matter will automatically provision the required mTLS certificates for the server and the user.
    To get the user certificate, run these commands:
    kubectl get secret greymatter-user-cert -n greymatter -o jsonpath="{.data['tls\.crt']}" | base64 -d > tls.crt
    kubectl get secret greymatter-user-cert -n greymatter -o jsonpath="{.data['tls\.key']}" | base64 -d > tls.key
    kubectl get secret greymatter-user-cert -n greymatter -o jsonpath="{.data['ca\.crt']}" | base64 -d > ca.crt
    Then create a new p12 certificate and load it into your browser:
    openssl pkcs12 -export -out greymatter.p12 -inkey tls.key -in tls.crt -certfile ca.crt -passout pass:password
    If you want to provide your own valid certificates for ingress, set .Values.global.auto_generate_edge_certs to false and provide the cert information in the secrets chart, at .Values.edge.certificate.ingress. You will need to ensure you have a valid User certificate from the same Certificate Authority for Grey Matter to authenticate the user.
    2. Access Grey Matter
    When specifying --set=edge.ingress.type=LoadBalancer during installation, your cloud provider will automatically create a LoadBalancer that exposes Grey Matter publicly. We can get the IP or DNS address of our LoadBalancer by running the following command:
    kubectl get svc edge -n greymatter
    The output will look like the following:
    NAME   TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                          AGE
    edge   LoadBalancer   10.100.252.235   a255c23a43350427a93a860856d52155-1106205970.us-east-1.elb.amazonaws.com   10808:31098/TCP,8081:31216/TCP   4m
    GKE will return a single IP address rather than a DNS name.
    Visit the address listed under EXTERNAL-IP in the browser to access the Grey Matter application. For example:
    https://a255c23a43350427a93a860856d52155-1106205970.us-east-1.elb.amazonaws.com:10808/
    Since this is an ingress directly to the Kubernetes service, you must specify port 10808 on the DNS address from above.
Grey Matter application
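If you would rather check access from the command line than a browser, the same user certificate can be presented with curl. This is a sketch that assumes tls.crt, tls.key, and ca.crt are in the current directory; replace <EXTERNAL-IP> with the EXTERNAL-IP or DNS name from the output above, and add -k if the edge certificate does not match the load balancer hostname:
curl --cert tls.crt --key tls.key --cacert ca.crt https://<EXTERNAL-IP>:10808/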

Configure the Grey Matter CLI

In order to add or modify service configurations, make sure you have the greymatter CLI installed.
The Grey Matter CLI is configured by several environment variables. Below are the required variables and settings that will work with this deployment.
We need to capture the external DNS address from the Access Grey Matter step above to use in the Grey Matter CLI configuration. To do this, run the following command:
export GM_HOSTNAME=$(kubectl get svc edge -n greymatter -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")
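The command above assumes the load balancer is exposed by hostname (as on AWS). On GKE, where the LoadBalancer is exposed by IP address, read the ip field instead:
export GM_HOSTNAME=$(kubectl get svc edge -n greymatter -o jsonpath="{.status.loadBalancer.ingress[0].ip}")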
The references to GREYMATTER_API_SSL[CERT | KEY] refer to the TLS cert and key files that were created by extracting the data from Kubernetes in the "Accessing the application" step above. If you extracted tls.crt, tls.key, and ca.crt to a location other than $(pwd), update the references below to point to the correct location.
export EDITOR=vim
export GREYMATTER_API_HOST=${GM_HOSTNAME}:10808
export GREYMATTER_API_INSECURE=true
export GREYMATTER_API_PREFIX=/services/control-api/latest
export GREYMATTER_API_SSL=true
export GREYMATTER_API_SSLCERT=tls.crt
export GREYMATTER_API_SSLKEY=tls.key
export GREYMATTER_CATALOG_HOST=${GM_HOSTNAME}:10808
export GREYMATTER_CATALOG_PREFIX=/services/catalog/latest
export GREYMATTER_CATALOG_INSECURE=true
export GREYMATTER_CATALOG_SSL=true
export GREYMATTER_CATALOG_SSLCERT=tls.crt
export GREYMATTER_CATALOG_SSLKEY=tls.key
export GREYMATTER_CATALOG_MESH=zone-default-zone
export GREYMATTER_CONSOLE_LEVEL=debug
If you can run greymatter list cluster and greymatter list catalog-service with no errors, the CLI is properly configured.
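For example:
greymatter list cluster
greymatter list catalog-service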

Cleanup

Delete the Grey Matter Installation

helm uninstall -n greymatter sense edge fabric secrets
helm uninstall -n spire spire-agent spire-server
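Note that helm uninstall does not remove the greymatter and spire namespaces created with --create-namespace. If you want them removed as well, delete them explicitly:
kubectl delete namespace greymatter spire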

Delete The Kubernetes Cluster

Deleting the cluster takes longer than the command output suggests; attempting to create a new cluster with the same name will fail for some time until all resources are purged.
Amazon EKS
eksctl delete cluster --name production
[ℹ]  using region us-east-1
[ℹ]  deleting EKS cluster "production"
[ℹ]  kubeconfig has been updated
[ℹ]  cleaning up LoadBalancer services
[ℹ]  2 sequential tasks: { delete nodegroup "workers", delete cluster control plane "production" [async] }
[ℹ]  will delete stack "eksctl-production-nodegroup-workers"
[ℹ]  waiting for stack "eksctl-production-nodegroup-workers" to get deleted
[ℹ]  will delete stack "eksctl-production-cluster"
[ℹ]  all cluster resources were deleted
Google GKE
gcloud container clusters delete production --quiet
Deleting cluster production...done.
Deleted [https://container.googleapis.com/v1/projects/psychic-era-307017/zones/us-central1-a/clusters/production].
Microsoft AKS
az aks delete --resource-group production --name production -y