Tracing

Tracing can be set up to monitor and track requests, optimize performance and latency, improve observability, and perform root cause and service dependency analysis. Grey Matter supports Envoy's tracing capabilities for the visualization of call flows.

Configuration Reference

To set up tracing in Grey Matter, a tracing server must be running at a known address and port when the Sidecar is deployed. The Sidecar takes a series of runtime environment variables that register the tracing server as a static cluster and configure its HTTP tracer. The listener object then takes a tracing configuration that controls the specifics of the information recorded in spans.

Tracing can then be configured using the tracing_config field of any listener object. Properly configured, the Sidecar will send spans to the trace collector server with information on the request and its path through the mesh.
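
Under the hood, these environment variables drive a tracing block in the Envoy bootstrap configuration along the following lines. This is a minimal sketch assuming the Zipkin-compatible HTTP tracer implied by the 9411 and /api/v1/spans defaults; the tracer and cluster names here are illustrative, not the Sidecar's exact output:

{
  "tracing": {
    "http": {
      "name": "envoy.zipkin",
      "config": {
        "collector_cluster": "tracing",
        "collector_endpoint": "/api/v1/spans"
      }
    }
  }
}

Here collector_cluster refers to the static cluster the Sidecar builds from tracing_address and tracing_port, and collector_endpoint comes from tracing_collector_endpoint.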

Sidecar Runtime Config

| Environment Variable | Description | Type | Default |
| --- | --- | --- | --- |
| tracing_enabled | Indicates whether or not to enable tracing | bool | false |
| tracing_address | The host for the trace collector server | string | "localhost" |
| tracing_port | The port for the trace collector server | int | 9411 |
| tracing_collector_endpoint | The endpoint on the tracing server to send spans to | string | /api/v1/spans |
| tracing_use_tls | Use TLS to connect to the trace collector server. If true, tracing_ca_cert_path, tracing_cert_path, and tracing_key_path should be set. | bool | false |
| tracing_ca_cert_path | The path to the CA certificate | string | ./certs/egress_intermediate.crt |
| tracing_cert_path | The path to the certificate file | string | ./certs/egress_localhost.crt |
| tracing_key_path | The path to the key file | string | ./certs/egress_localhost.key |
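
As an example, connecting to a TLS-protected collector means setting the TLS-related variables alongside the basic ones. The sketch below uses the Kubernetes env format of the deployment example further down, with the default certificate paths from the table above; the collector hostname is illustrative:

- name: TRACING_ENABLED
  value: "true"
- name: TRACING_ADDRESS
  value: "jaeger"
- name: TRACING_PORT
  value: "9411"
- name: TRACING_USE_TLS
  value: "true"
- name: TRACING_CA_CERT_PATH
  value: "./certs/egress_intermediate.crt"
- name: TRACING_CERT_PATH
  value: "./certs/egress_localhost.crt"
- name: TRACING_KEY_PATH
  value: "./certs/egress_localhost.key"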

Fabric tracing_config

Set in the listener object.

| Attribute | Description | Type | Default |
| --- | --- | --- | --- |
| ingress | Whether the listener traces incoming (true) or outgoing (false) traffic | boolean | true |
| request_headers_for_tags | Headers to convert into trace tags | []string | null |

Detailed Configuration

Listener Tracing Config

ingress

The boolean value set for ingress determines the operation_name value in the Envoy HTTP connection manager tracing configuration. By default in both Grey Matter and Envoy, ingress is true, which sets "operation_name": "INGRESS". If ingress is set to false, it becomes "operation_name": "EGRESS". This determines the traffic direction recorded on the trace.
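
In the generated Envoy configuration, this corresponds roughly to the tracing block on the HTTP connection manager; for the default ingress: true, a sketch for illustration:

"tracing": {
  "operation_name": "INGRESS"
}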

request_headers_for_tags

This field takes a list of header names for which to create tags on the active span. By default it is null, and no tags are configured. If header names are configured and a named header is present on a request, a tag is created in the span with the header name as the tag name and the header value as the tag value.
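
For example, a tracing_config that tags spans with the request ID and user agent could look like the following (the header choices here are illustrative):

"tracing_config": {
  "ingress": true,
  "request_headers_for_tags": [
    "x-request-id",
    "user-agent"
  ]
}

A request carrying a user-agent header would then yield a user-agent tag on its span.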

Examples

Sidecar Deployment

The example below uses a Kubernetes Deployment for reference only. The key points are the TRACING_* environment variables set on the Sidecar container.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myService
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myService
  template:
    metadata:
      labels:
        app: myService
    spec:
      containers:
        - name: sidecar
          image: docker.greymatter.io/internal/example-service:latest
          ports:
            - name: http
              containerPort: 8080
            - name: metrics
              containerPort: 8081
          env:
            - name: PROXY_DYNAMIC
              value: "true"
            - name: XDS_CLUSTER
              value: myService
            - name: XDS_HOST
              value: gm-control
            - name: XDS_PORT
              value: "50000"
            - name: TRACING_ENABLED
              value: "true"
            - name: TRACING_ADDRESS
              value: "jaeger"
            - name: TRACING_PORT
              value: "9411"

Listener

Once the Grey Matter Sidecar is configured to talk to a trace server, setting the tracing_config on the Grey Matter listener for the desired service configures the mesh to start sending traces to that server.

The listener object with tracing_config set will look something like the following:

{
  "listener_key": "example-listener",
  ...
  "tracing_config": {
    "ingress": true,
    "request_headers_for_tags": null
  },
  ...
}

The values configured in this field are used to set the tracing options in Envoy and determine the specifics of the traces sent to the server.

Trace

Traces are sent to the server, and a trace object in JSON will look something like:

{
  "data": [
    {
      "traceID": "ccb5ba44bf450e9b",
      "spans": [
        {
          "traceID": "ccb5ba44bf450e9b",
          "spanID": "5b1cc9e0a6db97ab",
          "operationName": "localhost:8080",
          "references": [
            {
              "refType": "CHILD_OF",
              "traceID": "ccb5ba44bf450e9b",
              "spanID": "ccb5ba44bf450e9b"
            }
          ],
          "startTime": 1582125495267502,
          "duration": 1775,
          "tags": [
            { "key": "component", "type": "string", "value": "proxy" },
            { "key": "node_id", "type": "string", "value": "d885090f63ffa467" },
            { "key": "zone", "type": "string", "value": "default-zone" },
            { "key": "guid:x-request-id", "type": "string", "value": "8f313280-1fcc-93d9-8c34-678e528bc10c" },
            { "key": "http.url", "type": "string", "value": "http://localhost:8080/default/" },
            { "key": "http.method", "type": "string", "value": "GET" },
            { "key": "downstream_cluster", "type": "string", "value": "-" },
            { "key": "user_agent", "type": "string", "value": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.106 Safari/537.36" },
            { "key": "http.protocol", "type": "string", "value": "HTTP/1.1" },
            { "key": "request_size", "type": "string", "value": "0" },
            { "key": "upstream_cluster", "type": "string", "value": "service" },
            { "key": "http.status_code", "type": "string", "value": "304" },
            { "key": "response_size", "type": "string", "value": "0" },
            { "key": "response_flags", "type": "string", "value": "-" },
            { "key": "span.kind", "type": "string", "value": "client" },
            { "key": "internal.span.format", "type": "string", "value": "zipkin" }
          ],
          "logs": [],
          "processID": "p1",
          "warnings": null
        }
      ],
      "processes": {
        "p1": {
          "serviceName": "sidecar",
          "tags": [
            { "key": "ip", "type": "string", "value": "172.23.0.6" }
          ]
        },
        "p2": {
          "serviceName": "myService",
          "tags": [
            { "key": "ip", "type": "string", "value": "172.23.0.7" }
          ]
        }
      },
      "warnings": null
    }
  ],
  "total": 0,
  "limit": 0,
  "offset": 0,
  "errors": null
}

Jaeger UI

For a walkthrough example using Docker and Jaeger, see the tracing example. With tracing set up per that walkthrough, the Jaeger dashboard looks like the following screenshot:

Jaeger Dashboard

And below is what the Jaeger trace timeline looks like for a single trace:

Trace Timeline