Deploy a Service
This guide is a step-by-step walkthrough of deploying a new service into an existing Grey Matter deployment. It assumes a SPIFFE/SPIRE-enabled deployment.

Prerequisites

1. An existing Grey Matter deployment running on Kubernetes
2. kubectl access to the cluster
3. The greymatter CLI set up with access to the deployment

Overview

1. Launch a pod with the service and sidecar
2. Create the Fabric configuration for the sidecar to talk to the service
3. Create the Fabric configuration for the Edge to talk to the sidecar
4. Add an entry in the Catalog service to display in the Grey Matter application

Steps

We'll launch a simple example service. It has one route, /fibonacci/{n}, that calculates the nth Fibonacci number. Note: All of the configuration necessary to launch the Fibonacci service into Kubernetes and Grey Matter is available at https://github.com/greymatter-io/deploy-a-service. Clone this repository and follow along inside it.

1. Launch pod

The first configuration is a Kubernetes Deployment object:
See deploy-a-service/deployment.yaml in the repository.
Note the SPIRE-specific configurations to the deployment - the volume and volume mount spire-socket and the environment variable SPIRE_PATH. These are the additions that will need to be made to any deployment for a service you wish to add to the mesh with SPIFFE/SPIRE.
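As a rough sketch, the SPIRE-specific additions look like the following. The socket path, container name, and volume type here are illustrative assumptions; deployment.yaml in the repository is authoritative.

```yaml
# Illustrative sketch only -- see deployment.yaml in the repository for the real values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fibonacci
spec:
  template:
    spec:
      containers:
        - name: sidecar
          env:
            - name: SPIRE_PATH                 # where the sidecar finds the SPIRE agent socket
              value: /run/spire/socket/agent.sock
          volumeMounts:
            - name: spire-socket               # mount the agent socket into the sidecar
              mountPath: /run/spire/socket
      volumes:
        - name: spire-socket
          hostPath:                            # the SPIRE agent exposes its socket on the host
            path: /run/spire/socket
```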
Apply with:
```shell
cd 1_kubernetes
kubectl apply -f deployment.yaml
```

2. Configure Local Routing

The next steps are to create objects in the Fabric API. These objects will create all the configuration for the Sidecar to handle requests on behalf of the deployed service.
This step creates and configures the Grey Matter objects necessary to allow the sidecar container in the deployment to route to the Fibonacci service container. We will refer to this as "local routing". The next step will configure the Edge proxy to route to the Fibonacci sidecar, thus fully wiring the new service into the mesh.
This guide goes over deploying a new service and configuring it for ingress routing. To configure a service for both ingress and egress routing within the mesh, see the guide.
For each Grey Matter object, create the local file and send it to the API using the greymatter CLI.
Move to the 2_sidecar directory:
```shell
cd ../2_sidecar
```

Domain

The first object to create is a Grey Matter Domain: the ingress domain for the Fibonacci sidecar. This object performs virtual host identification, but for this service we'll accept any host ("name": "*") that comes in on port 10808 (the port named proxy, i.e., the value of the Grey Matter Control environment variable GM_CONTROL_KUBERNETES_PORT_NAME, in the sidecar container).
See the domain documentation for more information.
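A minimal sketch of the domain object follows. The zone_key value is an assumption for illustration; domain.json in the repository is authoritative.

```json
{
  "domain_key": "fibonacci-domain",
  "zone_key": "default-zone",
  "name": "*",
  "port": 10808
}
```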
See deploy-a-service/domain.json in the repository.
Apply with:
```shell
greymatter create domain < domain.json
```

Listener

The next object is the ingress listener. This is the physical binding of the Sidecar to a host interface and port, linked by the domain_keys field to a specific domain. Together, the listener and domain configurations determine where the sidecar listens for incoming connections and what kind of connections it accepts.
The listener object is also the place to configure Grey Matter filters. See the listener documentation for more information.
Note the secret field. This field is required for service-to-service communication in a SPIFFE/SPIRE setup. The secret tells the sidecar to fetch its SVID (with ID spiffe://quickstart.greymatter.io/fibonacci) from Envoy and present it to incoming connections. It also sets a certificate validation context whose match subject alternative names field specifies that only incoming requests with SAN spiffe://quickstart.greymatter.io/edge are accepted. See the SPIRE documentation for specifics. The listener secret configuration will be important for the Edge to Fibonacci cluster.
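A sketch of what the listener object might look like, including the secret block described above. The exact field names of the secret block (and the zone_key and protocol values) are assumptions for illustration; listener.json in the repository is authoritative.

```json
{
  "listener_key": "fibonacci-listener",
  "zone_key": "default-zone",
  "name": "fibonacci",
  "ip": "0.0.0.0",
  "port": 10808,
  "protocol": "http_auto",
  "domain_keys": ["fibonacci-domain"],
  "secret": {
    "secret_key": "fibonacci-secret",
    "secret_name": "spiffe://quickstart.greymatter.io/fibonacci",
    "secret_validation_name": "spiffe://quickstart.greymatter.io",
    "subject_names": ["spiffe://quickstart.greymatter.io/edge"]
  }
}
```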
See deploy-a-service/listener.json in the repository.
Apply with:
```shell
greymatter create listener < listener.json
```

Proxy

The proxy object links a sidecar deployment to its Grey Matter objects. The name field must match the label on the deployment (in this case greymatter.io/control) that Grey Matter Control is looking for in its environment variable GM_CONTROL_KUBERNETES_CLUSTER_LABEL. It takes a list of domain_keys and listener_keys to link to the deployment with cluster label matching name.
See the proxy documentation for more information.
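A sketch of the proxy object, tying together the domain and listener above. The name field carries the value of the deployment's cluster label; key names here are illustrative, and proxy.json in the repository is authoritative.

```json
{
  "proxy_key": "fibonacci-proxy",
  "zone_key": "default-zone",
  "name": "fibonacci",
  "domain_keys": ["fibonacci-domain"],
  "listener_keys": ["fibonacci-listener"]
}
```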
See deploy-a-service/proxy.json in the repository.
Apply with:
```shell
greymatter create proxy < proxy.json
```

Local Cluster

The next object to create is a local cluster. The cluster is in charge of the egress connection from a sidecar to whatever service is located at its configured instances, and can set things like circuit breakers, health checks, and load balancing policies.
This local cluster will tell the sidecar where to find the Fibonacci container to send requests. From the deployment above, we configured the Fibonacci container at port 8080. Since the sidecar and Fibonacci containers are running in the same pod, they can communicate over localhost.
See the cluster documentation for more information.
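A sketch of the local cluster: since the sidecar and the Fibonacci container share a pod, the single instance points at localhost port 8080. Key names and the zone_key value are assumptions for illustration; cluster.json in the repository is authoritative.

```json
{
  "cluster_key": "fibonacci-cluster",
  "zone_key": "default-zone",
  "name": "fibonacci-local",
  "instances": [
    { "host": "127.0.0.1", "port": 8080 }
  ]
}
```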
See deploy-a-service/cluster.json in the repository.
Apply with:
```shell
greymatter create cluster < cluster.json
```

Local Shared Rules

The shared rules object is used to match routes to clusters. Shared rules support some of the same features as routes, such as setting retry_policies and appending response data, but they can also perform traffic splitting between clusters for operations like blue/green deployments.
This local shared rules object will be used to link the local route, in the next step, to the local fibonacci-cluster we just created.
See the shared rules documentation for more information.
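A sketch of the shared rules object pointing all traffic at the local cluster. The default/light/weight structure shown here is an assumption for illustration; shared_rules.json in the repository is authoritative.

```json
{
  "shared_rules_key": "fibonacci-local-rules",
  "zone_key": "default-zone",
  "name": "fibonacci-local-rules",
  "default": {
    "light": [
      { "constraint_key": "fibonacci", "cluster_key": "fibonacci-cluster", "weight": 1 }
    ]
  }
}
```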
See deploy-a-service/shared_rules.json in the repository.
Apply with:
```shell
greymatter create shared_rules < shared_rules.json
```

Local Route

Routes match against requests by things like URI path, headers, cookies, or metadata and map to shared_rules. Since this service only needs to forward everything it receives to the local microservice, the setup is fairly simple.
This local route will link the fibonacci-domain to the fibonacci-local-rules we just created. Since the fibonacci-local-rules object links routes to the fibonacci-cluster, once this route object is applied the Fibonacci sidecar will be configured to accept requests and route them to the Fibonacci service.
See the route documentation for more information.
The path indicates that any request coming into the sidecar with path / should be routed to the Fibonacci service. We will see in the next step when configuring edge routes that all requests from the Edge proxy to the Fibonacci service will come in at this path.
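A sketch of the local route, matching everything on the fibonacci-domain and handing it to the shared rules above. Field names such as route_match and match_type are assumptions for illustration; route.json in the repository is authoritative.

```json
{
  "route_key": "fibonacci-local-route",
  "domain_key": "fibonacci-domain",
  "zone_key": "default-zone",
  "path": "/",
  "route_match": { "path": "/", "match_type": "prefix" },
  "shared_rules_key": "fibonacci-local-rules"
}
```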
See deploy-a-service/route.json in the repository.
Apply with:
```shell
greymatter create route < route.json
```
The sidecar will now be configured to properly accept requests and route to the Fibonacci service. The next step will configure the Edge proxy to route to the sidecar.

3. Configure Edge Routing

Now that the Sidecar-to-Service routing has been configured, we will set up the Edge-to-Sidecar routing because we want this service to be available to external users.
The process will take similar steps to what was done before, but we only need to create a cluster, a shared_rules object pointing at that cluster, and two routes.
Move to the 3_edge directory:
```shell
cd ../3_edge
```

Edge to Fibonacci Cluster

This cluster will handle traffic from the Edge to the Fibonacci Sidecar. The Edge has an existing domain (with domain key edge), listener, and proxy, much like the ones we just created for the Fibonacci service. The first step in configuring the Edge for the Fibonacci service is to create a cluster telling it where to find the Fibonacci sidecar.
NOTE that there are several differences between this cluster and the local cluster created above:

1. The instances field is left as an empty array, whereas the fibonacci-local-cluster instances were configured explicitly. This is because Grey Matter Control will discover the Fibonacci deployment and populate the instances array automatically from service discovery: the instances will go up and down whenever the service scales or changes. For this to work (in the same way as described when creating the proxy object above), the name field must match the cluster label on the deployment.
2. This cluster has a secret set on it, and require_tls is true. Because the edge proxy and the Fibonacci sidecar run in different pods, they can't connect over localhost and must use their SPIFFE SVIDs for communication. The secret here mirrors the one set on the Fibonacci listener. As stated above, the cluster is in charge of the egress connection from a sidecar to whatever service is located at its instances. In this case, the secret tells the Edge proxy to fetch its SVID (with ID spiffe://quickstart.greymatter.io/edge) from Envoy SDS and present it on outgoing connections. It will also only accept connections that present a certificate with SAN spiffe://quickstart.greymatter.io/fibonacci. See the SPIRE documentation for specifics. As described in the secret configuration on the Fibonacci listener, these are opposites: requests from this cluster will be accepted by the Fibonacci sidecar, and vice versa.
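A sketch of the edge-to-Fibonacci cluster showing both differences: the empty instances array and the mirrored secret. Field names in the secret block and the zone_key value are assumptions for illustration; cluster.json in the 3_edge directory is authoritative.

```json
{
  "cluster_key": "edge-to-fibonacci-cluster",
  "zone_key": "default-zone",
  "name": "fibonacci",
  "instances": [],
  "require_tls": true,
  "secret": {
    "secret_key": "edge-secret",
    "secret_name": "spiffe://quickstart.greymatter.io/edge",
    "secret_validation_name": "spiffe://quickstart.greymatter.io",
    "subject_names": ["spiffe://quickstart.greymatter.io/fibonacci"]
  }
}
```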
See deploy-a-service/cluster.json (3_edge) in the repository.
Apply with:
```shell
greymatter create cluster < cluster.json
```

Edge to Fibonacci Shared Rules

The edge-to-Fibonacci shared_rules object links the edge-to-Fibonacci routes to the edge-to-fibonacci-cluster we just created.
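A sketch of this shared rules object, parallel to the local one but pointing at the edge-to-fibonacci-cluster. The default/light/weight structure is an assumption for illustration; shared_rules.json in the 3_edge directory is authoritative.

```json
{
  "shared_rules_key": "edge-to-fibonacci-rules",
  "zone_key": "default-zone",
  "name": "edge-to-fibonacci-rules",
  "default": {
    "light": [
      { "constraint_key": "edge-to-fibonacci", "cluster_key": "edge-to-fibonacci-cluster", "weight": 1 }
    ]
  }
}
```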
See deploy-a-service/shared_rules.json (3_edge) in the repository.
Apply with:
```shell
greymatter create shared_rules < shared_rules.json
```

Edge to Fibonacci Routes

In the same way that the local route was connected to the fibonacci-domain, these routes will be connected to the edge domain, and will configure how the edge sidecar sends requests meant for our fibonacci service.
The route_match and prefix_rewrite blocks send all traffic intended for /services/fibonacci/ (note the trailing /) to our fibonacci service via the appropriate shared_rules created above. Then, in order to support a URL without the trailing slash, the redirects block creates a permanent redirect from /services/fibonacci to /services/fibonacci/.
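The pair of routes might be sketched as follows. The first matches the /services/fibonacci/ prefix on the edge domain and rewrites it to / before forwarding; the second handles the no-trailing-slash URL with a permanent redirect. Route keys, match_type values, and the shape of the redirects block are assumptions for illustration; route.json in the 3_edge directory is authoritative.

```json
{
  "route_key": "edge-to-fibonacci-route",
  "domain_key": "edge",
  "zone_key": "default-zone",
  "path": "/services/fibonacci/",
  "route_match": { "path": "/services/fibonacci/", "match_type": "prefix" },
  "prefix_rewrite": "/",
  "shared_rules_key": "edge-to-fibonacci-rules"
}
```

```json
{
  "route_key": "edge-to-fibonacci-redirect",
  "domain_key": "edge",
  "zone_key": "default-zone",
  "path": "/services/fibonacci",
  "route_match": { "path": "/services/fibonacci", "match_type": "exact" },
  "redirects": [
    { "from": "^/services/fibonacci$", "to": "/services/fibonacci/", "redirect_type": "permanent" }
  ]
}
```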
See deploy-a-service/route.json (3_edge) in the repository.
Apply with:
```shell
greymatter create route < route.json
```
Once these routes are applied, the service is fully configured in the mesh! You should be able to access the service at https://{your-gm-ingress-url}:{your-gm-ingress-port}/services/fibonacci/ and receive the response Alive. To request a specific Fibonacci number, use https://{your-gm-ingress-url}:{your-gm-ingress-port}/services/fibonacci/fibonacci/<number>.
If you don't know your gm-ingress-url and you followed the Quickstart Install Kubernetes guide, run
```shell
kubectl get svc edge -n greymatter
```
and copy the EXTERNAL-IP and port (by default the port will be 10808).

4. Add Service to Grey Matter Catalog

The last step in deploying a service is to add the expected service entry to the Grey Matter Catalog service. This will interface with the control plane, and provide information to the Grey Matter application for display.
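A sketch of what a Catalog service entry might contain. The field names and values here are assumptions for illustration; entry.json in the repository is authoritative.

```json
{
  "mesh_id": "greymatter-mesh",
  "service_id": "fibonacci",
  "name": "Fibonacci",
  "version": "1.0",
  "description": "Calculates the nth Fibonacci number",
  "api_endpoint": "/services/fibonacci/"
}
```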
See deploy-a-service/entry.json in the repository.
Apply with:
```shell
cd ../4_catalog
greymatter create catalog-service < entry.json
```
If the addition was successful, you'll receive a JSON response with the object you added, along with a few additional read-only fields such as instances, status, protocols, and authorized.
When you navigate to the Grey Matter application, you should be able to see the service displayed.