Before you begin installing greymatter.io, you will need to install the following prerequisites.
- kubectl, the Kubernetes CLI for interacting with Kubernetes clusters. We support versions
- git, the Git CLI for interacting with Git repositories. You may already have this on your machine. Check via
- jf, the CLI for JFrog (Artifactory), where we store release images and binaries.
- CUE, the CUE CLI for interacting with configurations written in the CUE language.
- Lens, an excellent application to easily interact with multiple Kubernetes clusters.
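A quick way to confirm the tools above are present is to run each CLI's standard version check (these are the stock version commands for each tool; compare the output against the supported versions noted above):

```shell
# Verify each prerequisite CLI is on your PATH and print its version.
kubectl version --client   # Kubernetes CLI
git --version              # Git CLI
jf --version               # JFrog CLI
cue version                # CUE CLI
```

If any command is not found, install that tool before continuing.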
You will need a greymatter.io account to gain access to our software. Contact us for more information.
To get started with greymatter, we recommend at least a 3-node cluster with 8 cores and 16 GB of memory on each node.
As a point of reference for production loads, we benchmarked greymatter using 300 total unique application containers. More specifically, in each of 10 namespaces an application edge proxy deployment, with 10 replicas, was configured to proxy traffic to each of the 30 unique upstream deployments in that namespace. Load was induced with 10 in-cluster Vegeta clients, each sending 100 requests per second to the edge proxies. The upstream services were targeted roughly evenly. This was deployed on an AKS cluster with 15 Standard_F8s_v2 nodes.
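The load described above follows Vegeta's standard attack/report workflow. A minimal sketch of one client's attack is below; the edge proxy address is a placeholder (substitute your cluster's actual edge route), and the duration is illustrative:

```shell
# Hypothetical edge proxy address; replace with your cluster's edge route.
echo "GET http://edge.tenant-1.svc.cluster.local:10808/" > targets.txt

# Send 100 requests per second for 60 seconds, then summarize latencies.
vegeta attack -targets=targets.txt -rate=100 -duration=60s | vegeta report
```

In the benchmark, 10 such clients ran in-cluster simultaneously, one per namespace.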
Given this load, the greymatter namespace in total consumed up to 2.5 cores and 5.5 GiB of memory, plus 0.75 cores and 700 MiB of memory per node for the greymatter-audits agents. The gm-operator namespace consumed up to 0.5 cores and 650 MiB of memory. These namespace totals are an aggregate of the default container resource limits for each greymatter component defined in greymatter-core under the
Tenants should plan for 0.2 cores and 512 MiB of memory for the greymatter Sync service, plus 0.33 cores and 115 MiB of memory per proxy (application edge and sidecars).
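Using the per-tenant figures above, a tenant's additional footprint is a simple linear function of proxy count. A small sketch (the constants come directly from this section; the helper function name is our own):

```python
def tenant_footprint(num_proxies: int) -> tuple[float, int]:
    """Estimate the extra (cores, MiB of memory) a tenant should plan for.

    Per this guide: 0.2 cores / 512 MiB for the Sync service, plus
    0.33 cores / 115 MiB per proxy (application edge and sidecars).
    """
    cores = 0.2 + 0.33 * num_proxies
    memory_mib = 512 + 115 * num_proxies
    return cores, memory_mib

# Example: a tenant running an edge proxy plus 9 sidecars (10 proxies total).
cores, mem = tenant_footprint(10)
print(f"{cores:.2f} cores, {mem} MiB")  # 3.50 cores, 1662 MiB
```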
A simple way to get started with Kubernetes on your local machine is Rancher Desktop. With Rancher Desktop, you can have a Kubernetes cluster up and running quickly, with tools that make it easy to change allocated CPU and memory and to reset your cluster back to a clean state.
There are a number of cloud providers that make it easy to set up a Kubernetes cluster in the cloud.
If you’re interested in deploying your own Kubernetes cluster to cloud or on-premise infrastructure, we recommend kOps.