This article is a continuation of Monitoring Kafka with Burrow - Part 1.
Before diving into the evaluation rules, the HTTP endpoint API, and the notifiers, I would like to point out a few other tools that utilize Burrow.
The status of a consumer group in Burrow is determined by several rules evaluated against the offsets for each partition the group consumes. Thus, there is no need to set a discrete threshold for the number of messages a consumer is allowed to be behind before alerts go off. Because every partition the group consumes is evaluated, the health of the entire consumer group is assessed, not just the topics that are being explicitly monitored. This is very important for wildcard consumers, such as Kafka Mirror Maker.
The lagcheck configuration determines the length of the sliding window, specifying the number of offsets to store for each partition that a consumer group consumes. This window moves forward with each offset the consumer commits (the oldest offset is removed when the new offset is added). For each consumer offset, the following are stored: the offset itself, the timestamp that the consumer committed it, and the lag at the point Burrow received it.
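For illustration, a window of ten offsets per partition could be configured like this (the section name and key follow Burrow's sample configuration; treat the exact value as an assumption to adapt):

```
[lagcheck]
intervals=10
```

With `intervals=10`, each partition is evaluated against the ten most recently committed offsets.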
The lag is calculated as the difference between the broker's head offset and the consumer's committed offset. Because broker offsets are fetched on a fixed interval, this difference can be negative; in that case, by convention, the stored lag value is zero.
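As a worked example (with made-up offset values), the stored lag can be sketched as:

```shell
# Hypothetical values for one partition.
head_offset=1200        # latest (head) offset on the broker
committed_offset=1150   # consumer's last committed offset

# Lag is the head offset minus the committed offset. Because broker offsets
# are fetched on a fixed interval, the raw difference can be negative, in
# which case the stored value is clamped to zero.
lag=$(( head_offset - committed_offset ))
if [ "$lag" -lt 0 ]; then lag=0; fi
echo "$lag"   # prints 50
```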
The following rules are used for evaluation of a group's status for a given partition:

- If any lag within the window is zero, the status is OK.
- If the consumer offset does not change over the window, and the lag is either fixed or increasing, the consumer is in an ERROR state, and the partition is marked as STALLED.
- If the consumer offsets are increasing over the window, but the lag either stays the same or increases between every pair of offsets, the consumer is in a WARNING state.
- If the difference between the current time and the time of the most recent offset is greater than the difference between the most recent and the oldest offset in the window, the consumer is in an ERROR state and the partition is marked as STOPPED. However, if the consumer offset and the current broker offset for the partition are equal, the partition is not considered to be in error.
The HTTP Server in Burrow provides a convenient way to interact with both Burrow and the Kafka and Zookeeper clusters. Requests are simple HTTP calls and all responses are formatted as JSON. For bad requests, Burrow will return an appropriate HTTP status code in the 400 or 500 range. The response body will contain a JSON object with more detail on the particular error encountered. Examples of requests:
| Request | Endpoint | Description |
| --- | --- | --- |
| Healthcheck | GET /burrow/admin | Healthcheck of Burrow, whether for monitoring or load balancing within a VIP. |
| List Clusters | GET /v2/kafka, GET /v2/zookeeper | List of the Kafka (or Zookeeper) clusters that Burrow is configured with. |
| Kafka Cluster Detail | GET /v2/kafka/(cluster) | Detailed information about a single cluster, specified in the URL, including a list of the brokers and zookeepers that Burrow is aware of. |
| List Consumers | GET /v2/kafka/(cluster)/consumer | List of the consumer groups that Burrow is aware of from offset commits in the specified Kafka cluster. |
| Remove Consumer Group | DELETE /v2/kafka/(cluster)/consumer/(group) | Removes the offsets for a single consumer group from a cluster. This is useful when the topic list for a consumer has changed and Burrow believes the consumer is consuming topics that it no longer is. The consumer group is removed, but it will automatically be repopulated if the consumer continues to commit offsets. |
| List Consumer Topics | GET /v2/kafka/(cluster)/consumer/(group)/topic | List of the topics that Burrow is aware of, from offset commits by the specified consumer group in the specified Kafka cluster. |
| Consumer Topic Detail | GET /v2/kafka/(cluster)/consumer/(group)/topic/(topic) | Most recent offsets for each partition in the specified topic, as committed by the specified consumer group. |
| Consumer Group Status | GET /v2/kafka/(cluster)/consumer/(group)/status or GET /v2/kafka/(cluster)/consumer/(group)/lag | Current status of the consumer group, based on evaluation of all partitions it consumes. The evaluation is performed on request, using the consumer lag evaluation rules. The "/status" endpoint returns an object that includes only the partitions in a bad state; the "/lag" endpoint returns all partitions for the consumer regardless of their evaluated state, which can be used for full reporting of consumer message lag on all partitions. |
| List Cluster Topics | GET /v2/kafka/(cluster)/topic | List of the topics in the specified Kafka cluster. |
| Cluster Topic Detail | GET /v2/kafka/(cluster)/topic/(topic) | Head offsets for each partition in the specified topic, as retrieved from the brokers. Note that these offsets may be up to broker-offsets (a configuration parameter) seconds old. |
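The endpoints above can be exercised with plain curl. A minimal sketch, assuming Burrow's HTTP server listens on localhost:8000 and a cluster named "local" (both are placeholder assumptions of this example):

```shell
# Compose Burrow v2 endpoint URLs; the host, cluster, and group names
# below are placeholders for this example.
BURROW="http://localhost:8000"

burrow_url() {
  # usage: burrow_url v2 kafka local consumer my-group status
  local IFS=/
  echo "${BURROW}/$*"
}

burrow_url v2 kafka                                  # list Kafka clusters
burrow_url v2 kafka local consumer my-group status   # bad partitions only
burrow_url v2 kafka local consumer my-group lag      # all partitions
```

Each printed URL can be passed to curl, e.g. `curl -s "$(burrow_url v2 kafka local consumer my-group lag)"`; removing a stale group's offsets uses `curl -X DELETE` against the consumer group URL.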
Two notifier modules can be configured to check and report consumer group status: email and HTTP.
The email notifier is used to send out emails to a specified address whenever a consumer group is in a bad state. Multiple groups can be configured for a single email address, and the interval to check the status on (and send out emails on) is configurable per email address.
Before configuring any email notifiers, the [smtp] section needs to be set up in the Burrow configuration file. Example of configuration:

```
[smtp]
server=mailserver.example.com
port=25
auth-type=plain
username=emailuser
password=s3cur3!
from=firstname.lastname@example.org
template=config/default-email.tmpl
```
Multiple email notifiers can be configured in the Burrow configuration file. Each notifier configuration resides in its own section. Example of configuration:
```
[email "email@example.com"]
group=local,critical-consumer-group
group=local,other-consumer-group
interval=60
```
The email that is sent is formatted according to the template specified in the [smtp] configuration section. A default template is provided as part of the Burrow distribution in the config/default-email.tmpl file. The template format is the standard Golang text template. There are several good posts available online on how to compose Golang templates.
A timer is set up inside Burrow to fire every interval seconds and check the listed consumer groups. The current status is requested for each group, and if any group in the list is not in an OK state, an email is sent out with the status of all groups. This means that the email can contain listings for both good and bad groups, but no email will be sent out if everything is OK.
The HTTP notifier reports error states for all consumer groups to an external HTTP endpoint via POST requests. DELETE requests can also be sent to the same endpoint when a consumer group returns to normal.
The HTTP notifier is used to send POST requests to an external endpoint, such as a monitoring or notification system, on a specified interval whenever a consumer group is in a bad state. This notifier operates on all consumer groups in all clusters (excluding groups matched by the blacklist). When a consumer group goes bad, an incident with a unique ID is generated and maintained until that group transitions back to a good state. This allows notification systems to handle incidents, rather than individual reports of consumer group status, if needed.
The configuration for the HTTP notifier is specified under the [httpnotifier] heading. This is where the URL to connect to is configured, as well as the templates to use for POST and DELETE request bodies. Extra fields can also be specified; they are passed through to the templates. An example HTTP notifier configuration looks like this:
```
[httpnotifier]
url=http://notification.server.example.com:9000/v1/alert
interval=60
extra=field1=custom information
extra=field2=special info to pass to template
template-post=config/default-http-post.tmpl
template-delete=config/default-http-delete.tmpl
timeout=5
keepalive=30
```
The request body sent with each HTTP request is formatted according to the specified templates. Default templates are provided as part of the Burrow distribution in the config/default-http-post.tmpl and config/default-http-delete.tmpl files. The template format is the standard Golang text template. There are several good posts available online on how to compose Golang templates.
A timer is set up inside Burrow to fire every interval seconds. When the timer fires, all consumer groups in all Kafka clusters are enumerated, and the current status is requested for each group. For each group that is not in an OK state, a unique ID is generated (if it does not already exist) and a POST request is sent for that group. For each group that is in an OK state, a check is performed as to whether an ID currently exists for that group. If it does, the ID is removed (as the group has transitioned to OK), and, if the DELETE template is specified, a DELETE request is sent for that group.
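The incident lifecycle described above can be sketched for a single group as follows (handle_status and the echoed request lines are stand-ins for this illustration; Burrow implements this internally in Go):

```shell
# Track one open incident per consumer group; empty means no open incident.
incident_id=""

handle_status() {   # $1 = evaluated group status, e.g. OK or ERR
  if [ "$1" != "OK" ]; then
    # Generate a unique ID once and keep it until the group recovers.
    if [ -z "$incident_id" ]; then incident_id="inc-$$"; fi
    echo "POST $incident_id"      # report the bad state each interval
  elif [ -n "$incident_id" ]; then
    echo "DELETE $incident_id"    # group transitioned back to OK
    incident_id=""
  fi
}

handle_status ERR   # opens an incident and sends a POST
handle_status ERR   # same incident ID, another POST
handle_status OK    # sends a DELETE and closes the incident
```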
The most important metric to watch is whether the consumer is keeping up with the messages that are being produced. Before Burrow, the fundamental approach was to monitor the consumer lag and alert on that number. Burrow monitors the consumer lag and keeps track of the health of the consuming application, automatically monitoring all consumers for every partition they consume. It does this by consuming the special internal Kafka topic to which consumer offsets are written. Burrow provides consumer information as a centralized service, separate from any single consumer, based on the offsets the consumers are committing and the brokers' state.