1.18, 1.19, and 1.20.

You can verify that `kubectl` can reach your cluster with:

```sh
kubectl get nodes
```
`eksctl` will use your preconfigured default AWS credentials to create master and worker nodes to our specifications, and configure `kubectl` so we can manipulate the cluster.
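A minimal sketch of that step; the cluster name, region, version, and node settings below are illustrative, not prescribed values:

```sh
# Hypothetical example values; choose the name, region, version, and
# node size/count that match your environment.
eksctl create cluster \
  --name greymatter \
  --region us-east-1 \
  --version 1.20 \
  --nodes 3 \
  --node-type t3.xlarge
```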
`gcloud` will create master and worker nodes to our specifications, and configure `kubectl` so we can manipulate the cluster.
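A comparable GKE sketch, again with illustrative values:

```sh
# Hypothetical example values; gcloud writes kubectl credentials on success.
gcloud container clusters create greymatter \
  --zone us-central1-a \
  --cluster-version 1.20 \
  --num-nodes 3 \
  --machine-type e2-standard-4
```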
`az` will create master and worker nodes to our specifications, and configure `kubectl` so we can manipulate the cluster. We then use `az` to retrieve our credentials and set up `kubectl` for access to the cluster.
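An AKS sketch of both steps; the resource group and cluster name are illustrative:

```sh
# Hypothetical example values for resource group, cluster name, and size.
az group create --name greymatter-rg --location eastus
az aks create \
  --resource-group greymatter-rg \
  --name greymatter \
  --node-count 3

# Merge the cluster's credentials into your kubeconfig.
az aks get-credentials --resource-group greymatter-rg --name greymatter
```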
Installation configuration is captured in `global.yaml`. If you have set `global.spire.enabled` to `false`, skip the `server` and `agent` release installations in step 6.
Your `global.yaml` file should use the same directory structure as found in the `<chart>/values.yaml` file.
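For example, disabling SPIRE as mentioned above nests like this:

```yaml
# global.yaml -- keys mirror the layout of <chart>/values.yaml.
global:
  spire:
    enabled: false
```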
Once you have created your `credentials.yaml` and `global.yaml` files, run the following steps in order:
This installs the SPIRE `server` and `agent` into the `spire` namespace. The SPIRE agent runs as a DaemonSet, so the number of Agent pods is directly related to how many nodes are in your cluster.
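Since the agent is a DaemonSet, you can confirm one agent pod per node with:

```sh
kubectl get pods -n spire
```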
The following values are already set in the `global.yaml` file, but we wanted to call them to your attention here; they are shown as a YAML sketch after this list:

- `global.environment`: set to `eks` to drive EKS-specific configurations.
- `edge.ingress.type`: set to `LoadBalancer` to change the Edge service to type `LoadBalancer`.
- `global.waiter.service_account.create`: set to `false` to prevent the Sense helm chart from attempting to create the `waiter` service account.
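As YAML in `global.yaml`, those three values nest like this:

```yaml
global:
  environment: eks        # drives EKS-specific configuration
  waiter:
    service_account:
      create: false       # the Sense chart must not create the waiter service account
edge:
  ingress:
    type: LoadBalancer    # expose edge via a cloud load balancer
```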
`--set=edge.ingress.type=LoadBalancer`: this value sets the service type for edge. The default is `ClusterIP`. In this example we want an AWS ELB to be created automatically for edge ingress (see below), so we set it to `LoadBalancer`. See the Kubernetes publishing services docs for guidance on what this value should be in your specific installation.
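For illustration, this is how the flag might ride along on an install command; the release name `edge` and the chart reference `<edge-chart>` are placeholders, not the documented invocation:

```sh
# <edge-chart> is a placeholder for however you reference the edge chart.
helm install edge <edge-chart> -f global.yaml --set=edge.ingress.type=LoadBalancer
```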
If you see `Error: could not find tiller` after running the above helm commands, then you're running an older version of Helm and must install Helm v3. If you need to manage multiple versions of Helm, we highly recommend using helmenv to easily switch between versions.
Use the following `kubectl` command to check if everything is running. When all pods are `Running` or `Completed`, the install is finished and Grey Matter is ready to go.
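A cluster-wide pod listing covers this check:

```sh
kubectl get pods --all-namespaces
```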
All `https` requests must include TLS certificates, whether via a web browser or a RESTful client. The Grey Matter helm charts can generate random Ingress and User certificates to ensure unique certificates every time a cluster is launched. For web-based authentication, these certificates can then be imported into a web browser to access resources in the mesh.

If you want to provide your own valid certificates for ingress, set `.Values.global.auto_generate_edge_certs` to `false` and provide the cert information in the secrets chart, at `.Values.edge.certificate.ingress`. You will need to ensure you have a valid User certificate from the same Certificate Authority for Grey Matter to authenticate the user.
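A sketch of those overrides; the field names under `edge.certificate.ingress` are assumptions here, so verify them against the secrets chart's `values.yaml`:

```yaml
global:
  auto_generate_edge_certs: false
edge:
  certificate:
    ingress:
      # Assumed field names -- check the secrets chart before using.
      cert: |
        -----BEGIN CERTIFICATE-----
        ...
      key: |
        -----BEGIN PRIVATE KEY-----
        ...
```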
Because we passed `--set=edge.ingress.type=LoadBalancer` during installation, your cloud provider will automatically create a LoadBalancer that exposes Grey Matter publicly. We can get the IP or DNS address of our LoadBalancer by running the following command:
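If the edge service is named `edge` in the default namespace (an assumption here), its address can be read with:

```sh
kubectl get svc edge
```

The address appears in the `EXTERNAL-IP` column.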
Navigate to the `EXTERNAL-IP` in the browser to access the Grey Matter application, using the `10808` port number with the DNS address from above; for example, `https://<EXTERNAL-IP>:10808/`.
`GREYMATTER_API_SSL[CERT | KEY]` refer to the TLS cert and key objects that were created by extracting the data from Kubernetes in the "Accessing Grey Matter" step above. If you extracted the `tls.cert`, `tls.key`, and `ca.crt` to a different location than `$(pwd)`, update the below references to point to the correct location.
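For example, with the files extracted to the working directory, the exports might look like this (variable names expanded from the `GREYMATTER_API_SSL[CERT | KEY]` shorthand above):

```sh
# Adjust the paths if you extracted the files somewhere other than $(pwd).
export GREYMATTER_API_SSLCERT=$(pwd)/tls.cert
export GREYMATTER_API_SSLKEY=$(pwd)/tls.key
```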
If you can run `greymatter list cluster` and `greymatter list catalog-service` and there are no errors, the CLI is properly configured.