

Audit TCP Protocols

GSL · TCP · Available since: v1
The Network Observables Filter configures the Proxy to emit a JSON payload for every TCP request made to the microservice. The contents of this payload vary with the information and protocol of the request. For a raw TCP observable, the `schemaVersion` is 1.1 and the payload includes the request body and, if `emitFullResponse` is true, the response body. If the TCP observable is decoded into a protocol, the payload information changes accordingly.
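Based on the description above, a raw TCP observable might take roughly the following shape. This is illustrative only: apart from `schemaVersion` and the request/response bodies, the field names are assumptions, not the exact schema.

```json
{
  "schemaVersion": "1.1",
  "payload": {
    "request": "<raw TCP request body>",
    "response": "<raw TCP response body, present only when emitFullResponse is true>"
  }
}
```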

Observable publishing defaults to stdout but can also be published to a Kafka topic or location on disk.

Configuration

The base GSL type is `#AuditTCPFilter`.

| Option | Type | Description |
| --- | --- | --- |
| `useKafka` | Boolean | Publish observable messages to a Kafka topic. |
| `enforceAudit` | Boolean | Block requests until an observable has been successfully published to Kafka. Only applies if `useKafka` is `true`. |
| `timeoutMs` | uint32 | Timeout in ms for the Kafka producer. Only applies if `enforceAudit` is `true`. Default is 10000 ms. |
| `encryptionAlgorithm` | String | Type of encryption. Must be `"aes"` or blank. |
| `encryptionKey` | String | Must be blank or a base64-encoded string of 16, 24, or 32 bytes. 32 bytes is recommended. |
| `encryptionKeyID` | uint32 | User-supplied number to identify the key used in encryption. |
| `eventTopic` | String | The Kafka topic that will hold the published observable messages. |
| `kafkaZKDiscover` | Boolean | If true, Kafka will be discovered through a ZooKeeper node. Default is false. |
| `kafkaServerConnection` | String | Comma-delimited list of Kafka broker addresses, or, if `kafkaZKDiscover` is `true`, a list of ZooKeeper addresses. |
| `useKafkaTLS` | Boolean | Enable TLS communication to the supplied Kafka brokers. |
| `kafkaCAs` | String | List of file URLs pointing to trusts to be used when connecting to Kafka. |
| `kafkaCertificate` | String | File URL pointing to the certificate to use when connecting to Kafka over TLS. |
| `kafkaCertificateKey` | String | File URL pointing to the certificate key to use when connecting to Kafka. |
| `kafkaServerName` | String | Certificate server name to use when connecting to Kafka. |
| `decodeToProtocol` | String | Must be one of `""` or `"kafka"`. If `""`, a raw TCP observable is output. If `"kafka"`, the Proxy attempts to decode TCP data into the Kafka protocol and the observable `Payload` field is populated with Kafka-specific information. |
| `decodeSkipFail` | Boolean | If true, when data cannot be decoded into the protocol specified in `decodeToProtocol`, no observable is output. If false, a raw TCP observable is output. Default is false. |

Example

```
gsl.#AuditTCPFilter & {
  #options: {
    emitFullResponse: true
    useKafka: false
    enforceAudit: false
  }
}
```
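For comparison, a sketch of a configuration that publishes observables to a Kafka topic instead of stdout; the broker addresses and topic name are placeholders, and every field used here appears in the configuration table above:

```
gsl.#AuditTCPFilter & {
  #options: {
    useKafka: true
    // Block requests until the observable is published, with a 10 s producer timeout
    enforceAudit: true
    timeoutMs:    10000
    eventTopic:   "observables"
    kafkaServerConnection: "kafka-1:9092,kafka-2:9092"
  }
}
```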

Decode to Protocol

If the protocol of the incoming TCP message is known, the Observables TCP filter can be configured to attempt to decode the message into that protocol. The payload of the greymatter.io observable is then tailored to include protocol-specific information from the decoded message.

The current options for this field are "kafka" and "". Setting decodeToProtocol to "" will emit the raw TCP observable (the default behavior).

If there is an error decoding the request to the specified protocol, a raw TCP observable will be emitted unless decodeSkipFail is set to true, in which case no observable is output.
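A minimal sketch of a filter that decodes to the Kafka protocol and silently drops messages that fail to decode (both fields are documented in the configuration table above):

```
gsl.#AuditTCPFilter & {
  #options: {
    decodeToProtocol: "kafka"
    // Emit nothing on decode failure rather than falling back to a raw TCP observable
    decodeSkipFail: true
  }
}
```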

Kafka Protocol

If `decodeToProtocol` is set to `kafka`, the filter will take the incoming requests and attempt to decode them into a Kafka message. On a successful decode, any information in the message request and response will be added to the payload, and the `schemaVersion` will be 1.2. The type of request or response will be the key nested under `kafka.requestInfo` or `kafka.responseInfo`, and the information included will depend on this request/response type.
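A decoded Kafka observable might then look roughly like the following. The `schemaVersion` value and the `kafka.requestInfo` nesting come from the description above; the request-type key (`produce` here) and its contents are assumptions for illustration.

```json
{
  "schemaVersion": "1.2",
  "payload": {
    "kafka": {
      "requestInfo": {
        "produce": { "topic": "<topic the client wrote to>" }
      }
    }
  }
}
```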

Encryption

Users can roll over the encryption key dynamically by changing the Observables configuration in the Proxy.

To enable convenient decryption, each key should be assigned a unique key ID.
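A suitable 32-byte key for `encryptionKey` can be generated with OpenSSL, for example:

```shell
# Generate 32 random bytes and base64-encode them for use as encryptionKey
openssl rand -base64 32
```

The resulting string can then be set as `encryptionKey` alongside `encryptionAlgorithm: "aes"` and a fresh `encryptionKeyID` identifying the new key.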

Kafka Headers

We normally write the key ID to Kafka record headers. Record headers are only available in Kafka 0.11 and later.

Have an older version of Kafka? Avoid errors by using a key ID of zero. Note, however, that this means encryption keys cannot be rotated dynamically.