greymatter.io Fabric supports service discovery from Kubernetes. See the greymatter.io Control Kubernetes discovery setup documentation for how to configure this with greymatter.io Control.
Kubernetes Deployments
Kubernetes (k8s) has a number of internal APIs used for the complex orchestration of containers. When Kubernetes is the underlying orchestration platform, greymatter.io Control can use some of these APIs to also provide straightforward service announcement and discovery.
The greymatter.io Control server discovers services based on their pod IP and container port. See the Kubernetes specifications in the setup docs for how to configure your deployments for service discovery, and see the example deployment below.
Behavior
As described in the greymatter.io Control service discovery Kubernetes docs, the greymatter.io Control server discovers from the namespaces specified at startup by the environment variable GM_CONTROL_KUBERNETES_NAMESPACES. If a namespace that Control is configured to discover from is, or becomes, undiscoverable, the Control server follows a specific pattern of behavior.
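This page does not show the exact value format for the variable; assuming a comma-separated list (a sketch, not confirmed here), the gm-control container spec would set it like:

```yaml
# Hypothetical fragment of the gm-control container spec.
# The comma-separated format and namespace names are assumptions;
# consult the greymatter.io Control setup docs for the authoritative syntax.
env:
  - name: GM_CONTROL_KUBERNETES_NAMESPACES
    value: "greymatter,default"   # namespaces Control will watch
```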
If a namespace goes down, the Control server marks the connection to that namespace as being in a bad state. It retains the last known state of discovered pods for the namespace and continues to generate clusters for them. If the namespace becomes discoverable again, Control updates its pod list accordingly. In the meantime, it continues to poll the unavailable namespace, and EDS is left to determine endpoint health for the last known state of the pods.
Example Deployment
The Kubernetes Deployment below is properly set up (label and named port) to be discovered by the greymatter.io Control server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  selector:
    matchLabels:
      app: example
  replicas: 1
  template:
    metadata:
      labels:
        app: example        # must match spec.selector.matchLabels
        gm_cluster: example # cluster label used by Control for discovery
    spec:
      containers:
        - name: example-service
          image: docker.greymatter.io/internal/example-service:latest
        - name: sidecar
          image: docker.greymatter.io/release/gm-proxy:1.2.1
          imagePullPolicy: Always
          ports:
            - name: http        # named port exposed to the mesh
              containerPort: 9080
            - name: metrics
              containerPort: 8081
          env:
            - name: PROXY_DYNAMIC
              value: "true"
            - name: XDS_CLUSTER
              value: example
            - name: XDS_HOST
              value: gm-control
            - name: XDS_PORT
              value: "50000"
Kubernetes Discovery
Kubernetes (k8s) has several internal APIs to orchestrate containers. When using k8s as the underlying orchestration platform, gm-control can use these APIs for easy service announcement and discovery.
Pod Label and Named Port
To discover services, you must configure two important pieces of information on each deployed pod:
- the cluster label
- the port name
Without these two pieces of information, the control plane will ignore the pods.
Cluster Label
The cluster label is a small piece of metadata attached to the pod that determines which service is running in the pod. All pods with the same cluster label are grouped together and load balanced in the mesh. The default metadata key is gm_cluster=<service_name>, but this is user configurable.
Port Name
The port name determines which named port to expose in the mesh. Since many ports can be open for different purposes, exactly one must be designated for routing normal traffic. This defaults to the port named http, but it is user configurable.
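Stripped down to just the discovery-relevant fields, a pod template therefore needs only the cluster label and the named port (a minimal sketch with default key and port name; all values here are illustrative):

```yaml
metadata:
  labels:
    gm_cluster: example      # cluster label: groups this pod into the "example" service
spec:
  containers:
    - name: sidecar
      ports:
        - name: http         # named port: the one port routed as normal mesh traffic
          containerPort: 9080
```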
Service Accounts
To discover services from the internal Kubernetes APIs, the pod running the Control server must have additional permissions granted by an admin. To grant this access, have a cluster admin apply one of the following resources (replacing the greymatter namespace with the namespace of the running Control server), and then reference the service account in the pod spec of gm-control:
spec:
  serviceAccountName: control
Single Namespace
This is used when gm-control only discovers services in the same namespace it's running in.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: control
  namespace: greymatter
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: control-manager
  namespace: greymatter
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: control-binding
  namespace: greymatter
subjects:
  - kind: ServiceAccount
    name: control
    namespace: greymatter
roleRef:
  kind: Role
  name: control-manager
  apiGroup: rbac.authorization.k8s.io
Multiple Namespaces
This is used when gm-control will discover services from multiple namespaces across the cluster.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: control
  namespace: greymatter
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: control-manager
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: control-binding
subjects:
  - kind: ServiceAccount
    name: control
    namespace: greymatter
roleRef:
  kind: ClusterRole
  name: control-manager
  apiGroup: rbac.authorization.k8s.io
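To confirm the permissions took effect, `kubectl auth can-i` can check the access as the service account (a verification sketch against a live cluster; adjust the namespace to wherever Control runs):

```shell
# Should print "yes" once the (Cluster)Role and binding are applied.
kubectl auth can-i list pods \
  --as=system:serviceaccount:greymatter:control \
  -n greymatter
```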