To follow along, you will need `kubectl` or `oc` set up with access to the cluster.

The `path` in `health_checker.http_health_check` determines the endpoint that the health check request is sent to. You should configure this value to an endpoint in the service that will return a 200 response as long as the service is healthy.

To enable health checking, add a `health_check` object to the SLO upstream cluster. This configuration will send a health check request to `/objectives` every 10 seconds, with a timeout of 2 seconds. If there are 6 unhealthy responses, the SLO upstream cluster will be set as unhealthy. A sketch of such an object follows.
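A minimal sketch of the object, assuming millisecond-based timeout and interval field names (an assumption); the thresholds, the `health_checker.http_health_check` nesting, and the `path` come from the surrounding text, but verify field names against your Grey Matter cluster schema:

```json
{
  "timeout_msec": 2000,
  "interval_msec": 10000,
  "unhealthy_threshold": 6,
  "healthy_threshold": 1,
  "health_checker": {
    "http_health_check": {
      "path": "/objectives"
    }
  }
}
```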
We use `/objectives` here because the SLO service has an endpoint at that path that will return a 200 as long as the service is healthy.

To verify that health checks are running, use `kubectl get pods -l greymatter=edge` to get the name of the Edge pod. Then exec into the Edge pod and check the health check stats for the SLO cluster (a sketch of the commands follows).
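A sketch of the commands, assuming the sidecar's Envoy admin interface listens on port 8001 (the pod name is a placeholder):

```sh
# Find the name of the Edge pod
kubectl get pods -l greymatter=edge

# Exec into the Edge pod; replace <edge-pod-name> with the real name
kubectl exec -it <edge-pod-name> -- sh

# Inside the pod, query the sidecar's Envoy admin stats for health check
# entries (admin port 8001 is an assumption)
curl -s localhost:8001/stats | grep health_check
```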
In the stats output, `healthy` is `1`. The values for `success` and `failure` are a count, so these stats will reflect the total number of failed and successful health checks. The value of `healthy` indicates the status of the health check at that moment, and is determined by the `healthy_threshold` and `unhealthy_threshold` values.

If you check the SLO sidecar logs, you will see health check requests hitting `/objectives`, the configured `path`, at the configured interval.
Next, to simulate an unhealthy service, add `"envoy.health_check"` to the `active_http_filters` list, and a health check filter configuration to the `http_filters` map (a sketch follows).
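A sketch of the filter configuration, assuming the options of Envoy's `envoy.health_check` HTTP filter: the `user-agent` match on `Envoy/HC` comes from the text below, while `pass_through_mode: false` is an assumption about how the guide configured the filter:

```json
{
  "envoy.health_check": {
    "pass_through_mode": false,
    "headers": [
      {
        "name": "user-agent",
        "exact_match": "Envoy/HC"
      }
    ]
  }
}
```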
Run `kubectl get pods -l deployment=slo` to get the name of the SLO pod. Then exec into the SLO pod and tell the sidecar to start failing health checks through Envoy's admin interface (a sketch of the commands follows).
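A sketch of the commands, again assuming the sidecar's admin interface is on port 8001; `/healthcheck/fail` is Envoy's admin endpoint for making the health check filter fail checks:

```sh
# Find the name of the SLO pod
kubectl get pods -l deployment=slo

# Exec into the SLO pod; replace <slo-pod-name> with the real name
kubectl exec -it <slo-pod-name> -- sh

# Inside the pod, make the sidecar's health check filter start failing
# health checks (admin port 8001 is an assumption)
curl -X POST localhost:8001/healthcheck/fail
```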
The response will be `OK`. If you check the SLO sidecar logs again, you will now see that the sidecar returns a `503` instead of a `200` and no longer forwards the request. It knows the request is a health check from the `user-agent` header value, `Envoy/HC`, that we configured in the filter. Other requests into the SLO service will not fail because of this configuration.

Once the number of failed checks reaches the `unhealthy_threshold` set on the cluster, the upstream cluster will be determined unhealthy, and the stats value for `healthy` will be `0`.
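A sketch of what the stats might look like from the Edge pod at that point (the cluster name `slo`, the admin port, and the counter values are all illustrative):

```sh
# Inside the Edge pod: health check stats for the SLO cluster
curl -s localhost:8001/stats | grep 'slo.health_check'
# cluster.slo.health_check.failure: 6   <- total failed checks (illustrative)
# cluster.slo.health_check.success: 42  <- total successful checks (illustrative)
# cluster.slo.health_check.healthy: 0   <- gauge: cluster currently unhealthy
```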