`oc` set up with access to the cluster
`health_checker.http_health_check` will determine the endpoint that the health check request is sent to. Configure this value to an endpoint in the service that returns a
`200` response as long as the service is healthy.
`/objectives` every 10 seconds, with a timeout of 2 seconds. After 6 unhealthy responses, the SLO upstream cluster will be marked unhealthy.
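As a rough worked example using the numbers above: with a 10 second interval and an unhealthy threshold of 6, it can take up to a minute of consecutive failures before the cluster is marked unhealthy.

```shell
# Worst-case detection time is roughly interval x unhealthy_threshold.
interval=10            # seconds between health check requests
unhealthy_threshold=6  # consecutive failures required to mark the cluster unhealthy

echo $(( interval * unhealthy_threshold ))   # → 60 (seconds)
```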
We use `/objectives` here because the SLO service has an endpoint at that path that returns a
`200` as long as the service is healthy.
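To make the pass/fail rule concrete, here is a minimal sketch (illustrative only, not greymatter or Envoy code) of how the checker treats response codes: only a `200` counts as a passing check.

```shell
# Illustrative sketch: a health check passes only on a 200 response.
check_response() {
  if [ "$1" -eq 200 ]; then
    echo "pass"
  else
    echo "fail"
  fi
}

check_response 200   # → pass
check_response 503   # → fail
```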
Run `kubectl get pods -l greymatter=edge` to get the name of the Edge pod. Then exec into the Edge pod:
1. The values for `success` and `failure` are counts, so these stats reflect the total number of successful and failed health checks. The `healthy` value is a gauge indicating the status of the health check at that moment, determined by the responses to the health check requests sent to `/objectives` at the configured interval:
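For example, against an illustrative copy of the stats output (the numbers below are made up; the stat names follow Envoy's `cluster.<name>.health_check.*` pattern), you can pull out just these values with `grep`:

```shell
# Illustrative stats snapshot; real values come from the sidecar's admin stats endpoint.
stats='cluster.slo.health_check.attempt: 12
cluster.slo.health_check.success: 12
cluster.slo.health_check.failure: 0
cluster.slo.health_check.healthy: 1'

# Filter down to the health check counters and the healthy gauge.
echo "$stats" | grep 'health_check'
```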
Add the health check filter to the `active_http_filters` list, and the configuration below to the `http_filters` object:
Run `kubectl get pods -l deployment=slo` to get the name of the SLO pod. Then exec into the SLO pod:
`OK`. If you check the SLO sidecar logs again, you will now see:
`503` instead of a
`200` and no longer forwards the request. The sidecar knows this request is a health check from the `user-agent` header value,
`Envoy/HC`, that we configured in the filter. Other requests into the SLO service will not fail because of this configuration.
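The filter's matching rule can be sketched as a simple predicate (a sketch, not the filter's actual implementation): only requests whose `user-agent` equals the configured value are answered as health checks; everything else is forwarded as usual.

```shell
# Sketch: decide whether a request is a health check by its user-agent header.
is_health_check() {
  [ "$1" = "Envoy/HC" ]
}

if is_health_check "Envoy/HC"; then
  echo "health check: answered by the filter"
fi
if ! is_health_check "curl/7.79.1"; then
  echo "normal request: forwarded upstream"
fi
```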
Because this matches the `unhealthy_threshold` set on the cluster, the upstream cluster will be determined unhealthy, and the stats value for
`healthy` will be 0:
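The relationship between consecutive failures and the `healthy` gauge can be sketched like this (a simplification of Envoy's actual state machine, which also tracks a separate healthy threshold for recovery):

```shell
# Sketch: the healthy gauge drops to 0 once consecutive failures reach the threshold.
unhealthy_threshold=6
gauge_for() {
  if [ "$1" -ge "$unhealthy_threshold" ]; then echo 0; else echo 1; fi
}

echo "after 5 failures, healthy: $(gauge_for 5)"   # → after 5 failures, healthy: 1
echo "after 6 failures, healthy: $(gauge_for 6)"   # → after 6 failures, healthy: 0
```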