Prerequisites
- kubectl v1.21+ with cluster administrator privileges
- cue CLI
- An existing Kubernetes v1.19+ cluster
- A running greymatter.io operator within your cluster
- A configured Mesh custom resource that includes the `default` namespace in its `watch_namespaces` list (a sketch of such a resource follows this list)
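If you do not yet have such a Mesh resource, the sketch below shows roughly what one could look like. It is only an illustration: `watch_namespaces`, the `mesh-sample` name, and the `default-zone`/`greymatter` values are taken from later in this guide, while the apiVersion and the other spec field names are assumptions that may differ by operator version, so check the Mesh CRD shipped with your operator.

```yaml
# Illustrative sketch only -- field names other than watch_namespaces are
# assumptions and may differ between operator releases; consult your Mesh CRD.
apiVersion: greymatter.io/v1alpha1   # assumed API version
kind: Mesh
metadata:
  name: mesh-sample                  # the Mesh name referenced later in this guide
spec:
  install_namespace: greymatter      # assumed; matches control.greymatter.svc.cluster.local below
  zone: default-zone                 # matches the XDS_ZONE value shown later
  watch_namespaces:
    - default                        # namespaces the operator watches for annotated workloads
```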
Create a Deployment
The operator can assist deployments by optionally injecting a sidecar and, independently, configuring that sidecar to be accessible through the default edge gateway. If you add the necessary annotations to your Deployment or StatefulSet (and deploy it into a "watched" namespace according to the operator's core CUE configuration), the operator will respond accordingly.
To get started, create a new file called `workload.yaml` with the following contents. The `greymatter.io/inject-sidecar-to: "3000"` annotation asks the operator to inject a sidecar that proxies to the application's listening port (3000, the port passed to `http.server` below), and `greymatter.io/configure-sidecar: "true"` asks it to configure that sidecar so the service is reachable through the default edge gateway:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-server
  annotations:
    greymatter.io/inject-sidecar-to: "3000"
    greymatter.io/configure-sidecar: "true"
spec:
  selector:
    matchLabels:
      app: simple-server
  template:
    metadata:
      labels:
        app: simple-server
    spec:
      containers:
        - name: server
          image: python:3
          command: ["python"]
          args: ["-m", "http.server", "3000"]
```
Then apply `workload.yaml` to deploy the service to your Kubernetes cluster:

```sh
kubectl apply -f workload.yaml
```
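To watch the operator do its work, you can poll the resulting pods; once the sidecar has been injected, each pod should report two ready containers. The label selector below relies on the `app: simple-server` label from `workload.yaml`.

```sh
# Watch the pods created for the Deployment; expect READY 2/2 once the
# sidecar container has been injected alongside the server container.
kubectl get pods -l app=simple-server -w
```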
Creating a Deployment or StatefulSet in a namespace specified in a Mesh custom resource's `watch_namespaces` array signals the operator to include its resulting pods in the mesh network. This occurs in three steps:
- The operator will edit the Deployment/StatefulSet by adding labels to its nested Pod template spec based on its `metadata.name` field:
```yaml
spec:
  selector:
    matchLabels:
      app: simple-server
      # This label specifies the unique ID of a service in a greymatter.io mesh.
      greymatter.io/cluster: simple-server
      # This label specifies the secure workload identity for mutual TLS.
      # Note: `mesh-sample` refers to the Mesh this service is a part of.
      greymatter.io/workload: mesh-sample.simple-server
  template:
    metadata:
      labels:
        app: simple-server
        greymatter.io/cluster: simple-server
        greymatter.io/workload: mesh-sample.simple-server
```
- The operator will then inject into the resulting Pod(s) a data plane container that joins the mesh network. Its configuration is generated by the operator from the Mesh custom resource (commands to confirm the injected labels and container follow these steps):
```yaml
# spec.template.spec.containers[1]:
- name: sidecar
  image: docker.greymatter.io/development/gm-proxy:latest
  imagePullPolicy: IfNotPresent
  env:
    - name: ENVOY_ADMIN_LOG_PATH
      value: /dev/stdout
    - name: PROXY_DYNAMIC
      value: "true"
    - name: XDS_HOST
      value: control.greymatter.svc.cluster.local
    - name: XDS_PORT
      value: "50000"
    - name: XDS_ZONE
      value: default-zone
    - name: XDS_CLUSTER
      value: simple-server
    - name: SPIRE_PATH
      value: /run/spire/socket/agent.sock
  ports:
    - name: proxy
      containerPort: 10808
      protocol: TCP
    - name: metrics
      containerPort: 8081
      protocol: TCP
```
- The operator will bootstrap the mesh configurations needed for the data plane container to receive traffic from the edge data plane and proxy it to other containers in the same pod.
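Once the operator has reconciled the Deployment, you can confirm the first two steps with standard kubectl queries; the names and label selector below come from `workload.yaml` and the snippets above.

```sh
# Step 1: show the labels the operator added to the Pod template.
kubectl get deployment simple-server \
  -o jsonpath='{.spec.template.metadata.labels}{"\n"}'

# Step 2: list the containers in the running pod(s); expect "server" and "sidecar".
kubectl get pods -l app=simple-server \
  -o jsonpath='{range .items[*]}{.metadata.name}: {.spec.containers[*].name}{"\n"}{end}'
```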
Access your service through the mesh
After the operator configures the data plane container and control plane mesh configurations, you can connect to your service through the mesh via the edge data plane at an address of the form `http://{edge-address}/services/{workload-name}/`.
The `edge-address` will vary based on how your Kubernetes cluster is exposed to the internet.
In the case of our simple-server deployment, we can access the service through edge and the injected sidecar at:

`http://{edge-address}/services/simple-server/`
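If your cluster is not exposed externally yet, one quick way to try this locally is to port-forward the edge data plane and curl through it. The Service name (`edge`), namespace (`greymatter`), and port (`10808`) below are assumptions based on a default operator install; substitute the values from your own mesh.

```sh
# Forward a local port to the edge data plane (names/ports are assumptions;
# adjust to match your install), then request the service through the mesh.
kubectl port-forward -n greymatter svc/edge 10808:10808 &
curl http://localhost:10808/services/simple-server/
```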