
Setup Multi-Mesh

Connect greymatter.io meshes together (1.x)

Configuration

To get services in one mesh to talk to services in another, create a cluster that points to the host/IP(s) of the other mesh's ingress edge.

Example Cluster Configuration

The example below is a cluster configuration that could be applied in Mesh A and points to the location of Mesh B's ingress edge. It also tells any proxy that routes to this cluster which certs it should have on disk in order to connect.

{
  "cluster_key": "cluster-mesh-b",
  "zone_key": "zone-default-zone",
  "name": "mesh-b",
  "instances": [
    {
      "host": "192.168.99.102",
      "port": 31581
    }
  ],
  "require_tls": true,
  "ssl_config": {
    "cert_key_pairs": [
      {
        "certificate_path": "/etc/proxy/tls/sidecar/server.crt",
        "key_path": "/etc/proxy/tls/sidecar/server.key"
      }
    ],
    "trust_file": "/etc/proxy/tls/sidecar/ca.crt",
    "sni": ""
  }
}
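
How you apply this object depends on your tooling; if you use the greymatter CLI pointed at Mesh A's Control API, it would look something like the sketch below (the filename is hypothetical and the exact invocation may vary by CLI version):

greymatter create cluster < cluster-mesh-b.json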

Once you’ve configured a cluster, you can tell the mesh how to route to this cluster using a shared_rules object.

The light array contains the list of clusters to which requests will be sent. In this simple case you want all traffic routed to the Mesh B cluster, which you can reference by its cluster_key.

{
  "shared_rules_key": "mesh-b-shared-rules",
  "name": "mesh-b",
  "zone_key": "zone-default-zone",
  "default": {
    "light": [
      {
        "constraint_key": "",
        "cluster_key": "cluster-mesh-b",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 1
      }
    ]
  }
}
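
For reference, the light array can also split traffic across several clusters by weight. The sketch below is a hypothetical variant, not part of this setup: the shared_rules_key and the cluster-service-b-local cluster are assumptions, used only to illustrate sending roughly nine-tenths of traffic to a local cluster and one-tenth to Mesh B.

{
  "shared_rules_key": "mesh-b-split-shared-rules",
  "name": "mesh-b-split",
  "zone_key": "zone-default-zone",
  "default": {
    "light": [
      {
        "constraint_key": "",
        "cluster_key": "cluster-service-b-local",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 9
      },
      {
        "constraint_key": "",
        "cluster_key": "cluster-mesh-b",
        "metadata": null,
        "properties": null,
        "response_data": {},
        "weight": 1
      }
    ]
  }
}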

Service to Ingress Edge Setup

Once the Mesh B cluster has been created, routes can point to it just as they would to any other service within the mesh, as long as the sidecars making those requests have the correct certs on disk.

The following is an example route configuration for a service called service-a that uses the shared_rules object shown above (mesh-b-shared-rules) to route to Mesh B. If a request with the path /mesh-b/ is made to service-a's sidecar, the sidecar rewrites /mesh-b/ to a forward slash / and sends the request on to Mesh B.

{
  "route_key": "route-service-a-to-mesh-b",
  "domain_key": "domain-service-a",
  "zone_key": "zone-default-zone",
  "path": "/mesh-b/",
  "prefix_rewrite": "/",
  "shared_rules_key": "mesh-b-shared-rules"
}

If you wanted service-a in Mesh A to route only to a specific service in Mesh B (call it service-b), you could use prefix_rewrite to point directly to it:

{
  "route_key": "route-service-a-to-service-b",
  "domain_key": "domain-service-a",
  "zone_key": "zone-default-zone",
  "path": "/mesh-b/",
  "prefix_rewrite": "/services/service-b/1.0",
  "shared_rules_key": "mesh-b-shared-rules"
}
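
To make the two rewrites concrete, here is how a request for the path /mesh-b/ made to service-a's sidecar would be forwarded to Mesh B's edge under each route (shown as a plain mapping for illustration):

route-service-a-to-mesh-b:      GET /mesh-b/  ->  GET /
route-service-a-to-service-b:   GET /mesh-b/  ->  GET /services/service-b/1.0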

Service to Egress Edge Setup

Another way to achieve a multi-mesh setup is to stand up a dedicated egress edge that handles cross-mesh traffic.

Instead of pointing each service at the ingress edge of the other mesh as in the example above, only the egress proxy knows about the second mesh, and all services route through it.

To achieve this setup, deploy a standalone egress proxy like any other service, then add a route on the egress proxy's domain that points to the Mesh B cluster:

{
  "route_key": "route-egress-to-mesh-b",
  "domain_key": "domain-egress",
  "zone_key": "zone-default-zone",
  "path": "/",
  "shared_rules_key": "mesh-b-shared-rules"
}

Then update your service routes to point to the egress proxy’s shared_rules:

{
  "route_key": "route-service-a-to-mesh-b",
  "domain_key": "domain-service-a",
  "zone_key": "zone-default-zone",
  "path": "/mesh-b/",
  "prefix_rewrite": "/",
  "shared_rules_key": "egress-shared-rules"
}

Identity Propagation

The greymatter.io inheaders (Ingress Headers) filter should be enabled on each mesh’s ingress edge in order to correctly propagate user and service identity throughout the mesh.

This is configured on the proxy object by adding it to the active_proxy_filters array. gm_inheaders also has a debug option, which is helpful when looking at the proxy logs:

{
  "proxy_key": "edge-proxy",
  "zone_key": "zone-default-zone",
  "name": "edge",
  "domain_keys": [
    "edge"
  ],
  "listener_keys": [
    "edge-listener"
  ],
  "active_proxy_filters": [
    "gm.metrics",
    "gm.inheaders"
  ],
  "proxy_filters": {
    "gm_inheaders": {
      "debug": true
    }
    ...
  }
}
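
If you manage mesh configuration with the greymatter CLI, one way to make this change (assuming the CLI is configured against the mesh's Control API; the exact command may vary by version) is to edit the proxy object in place:

greymatter edit proxy edge-proxy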

User and Service Identity Procedure

User and service identity flows through the meshes as follows:

  1. The client uses a PKI or OAuth token to hit the Mesh A edge.
  2. The inheaders filter on the Mesh A edge proxy grabs the USER_DN from the incoming headers as well as the DN from the SSL certificate.
  3. The service-a proxy propagates these headers as the request flows through Mesh A.
  4. When the request exits Mesh A and hits the edge proxy of Mesh B, the inheaders filter checks what is already set. USER_DN already exists, so it keeps passing it along; however, it rewrites the EXTERNAL_SYS_DN and SSL_CLIENT_S_DN headers to reflect the DN of the last certificate in the chain. In this example, that is the DN of the server.crt you configured for cluster-mesh-b.
  5. service-b finally receives the request. It has the USER_DN of the client that first initiated the request and the identity of the service that last touched the request in Mesh A.
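
As a purely hypothetical illustration (every DN below is made up), the identity headers arriving at service-b's sidecar might look like this, with USER_DN preserved end to end and the other two rewritten at Mesh B's edge to the DN of Mesh A's outbound certificate:

USER_DN: CN=alice.smith,OU=people,O=example,C=US
EXTERNAL_SYS_DN: CN=mesh-a-sidecar,OU=mesh-a,O=example,C=US
SSL_CLIENT_S_DN: CN=mesh-a-sidecar,OU=mesh-a,O=example,C=US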