
Building a keg scale alerting system

January 11, 2019

In a previous blog, I detailed how I used the weight of the cold brew coffee keg in our office to send Slack alerts to let us know when we’re running low and need a cold brew coffee keg refill.

In this second installment of our two-part series, I’ll show you how I built the scale. I wanted something that didn’t require a whole lot of time or money, and nothing already available met our needs.

Elasticsearch keg set up requirements

First of all, I needed to break down the requirements for the project:

  • Cheap (<$100)
  • Minimal care and feeding
  • Remotely managed
  • Easily report the keg level
  • Detect each time cold brew was taken from the keg (a stretch goal)
  • Most importantly, it needed to involve minimum time investment*

*I do have a day job and building keg scales is not in my job description.

Initial research

The first challenge was to figure out not only how to gather data on the keg, but what data to gather.

Some open source projects provide keg monitoring, and after a little research, the two major approaches to measuring are:

  • weight
  • flow rate, using a flow meter

Most of what’s already available uses flow rate meters. It’s a little trickier to determine whether the keg is “empty” or not with flow rates and you need some kind of user intervention to determine when the keg has been changed out. Plus, it requires putting a device in the actual coffee line, which has its own potential issues.

So despite the lack of examples to borrow, we made the call to start with a scale and use weight to determine the keg level.

The keg scale

For the actual scale hardware, I looked for off-the-shelf solutions that could connect to a small computer like a Raspberry Pi. There are tons of WiFi and Bluetooth scales out there, but most were not continuous-read, were battery powered, or were a hassle to interface with. A few keg-specific scales do exist, but they were too expensive.

Once again, I needed to go DIY.

I’m experienced in building with a Raspberry Pi, so I needed hardware to build a scale around one. I found a set of 50kg load cells with an HX711 amplifier on Amazon for exactly this use case.

There are lots of references out there for building an Arduino scale with the HX711, but far fewer for the Pi. Luckily, I found tutorials on Instructables coupled with a GitHub repo from user tatobari that included some Python HX711 utilities that fit the bill.

Overall architecture

First of all, I needed to create a simple schematic of what the overall system would look like:

[Diagram: Elasticsearch keg alert setup]

The hardware build

The first challenge was to build a scale thin enough to fit under the keg inside the kegerator, yet sturdy enough not to bend under the weight of a 150lb keg.

I tried multiple designs and methods to reinforce the scale, but they were either unstable or added too much thickness that ultimately put pressure on the top tubes.

The final build included an old grill grate, some small pieces of steel C-channel from the hardware store to reinforce the grate, and a bunch of duct tape and zip ties.
[Photos: the finished keg scale and the Kibana cold brew dashboard]

It’s not perfect, but it ultimately met all our design requirements. (Anyone out there have a sturdier solution in mind?)

Wiring it up

We needed to connect the HX711 board to the Raspberry Pi via 4 GPIO pins, and the scale to the HX711 board via the four wires from the Wheatstone bridge (see Instructables).

The kegerator is super space-constrained, and it can get pretty rough in there when kegs are loaded and unloaded, so locating the HX711 and the Raspberry Pi in with the keg didn’t seem like a good decision. Instead, we placed the Raspberry Pi and the HX711 together outside the kegerator and used standard cable (good old Cat 5 or 6) to make the longer run from the actual scale to the HX711. Add an RJ45 connector at the scale end and another at the HX711 end, and any old Ethernet cable can connect the two.

Since the HX711 and the Raspberry Pi sit right next to each other, we connected them with standard jumper wires.

The software

Using a Raspberry Pi means I can pretty much use anything that runs on Linux for the software.

The first piece is the code to interface with the HX711 amplifier via the GPIOs.

Although most of the HX711 projects out there are for Arduino, you can still find a decent number of GitHub repos out there with HX711 code for the Raspberry Pi, such as this one written in Python.

Using the HX711 example code, I wrote a small Python utility that simply writes out periodic weight readings to a file that we can ship to Elasticsearch. I expanded it to work from a config file and to include a few testing modes, but other than that it’s fairly simple code.
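As a rough sketch, the utility boils down to a loop like the one below. The hardware read is stubbed out here (on the Pi it would call into the hx711py library with a calibration factor); the file path, interval, and function names are illustrative, not the actual code:

```python
# Minimal sketch of the weight-reader loop. read_weight_lbs is a stand-in
# for the real HX711 read; everything else is standard library.
import time
from datetime import datetime, timezone

def format_reading(weight, when=None):
    """Render one log line: ISO8601 timestamp, ' - ', then the weight."""
    when = when or datetime.now(timezone.utc)
    return "{} - {:.1f}".format(when.strftime("%Y-%m-%dT%H:%M:%S%z"), weight)

def read_weight_lbs():
    # Placeholder: on the Pi this would read the HX711 via GPIO and apply
    # the scale's calibration factor.
    return 134.0

def main(outfile="/var/log/kegscale/weight.log", interval=5):
    while True:
        with open(outfile, "a") as f:
            f.write(format_reading(read_weight_lbs()) + "\n")
        time.sleep(interval)
```

Appending a timestamped line every few seconds keeps the Pi-side logic trivial and leaves all the parsing to the Elasticsearch side.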

Shipping the data to Elasticsearch

Next, I needed to ship the data to Elasticsearch.

I used Filebeat to read the weight files and ship the data to Elasticsearch. You could also write directly to Elasticsearch from the Python script, or use Logstash to read the files or call the script directly for on-the-fly readings, but Filebeat is super lightweight and simply required the least amount of work.
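A minimal Filebeat configuration along these lines might look like the following; the file path, hostname, credentials, and pipeline name here are all illustrative:

```yaml
# filebeat.yml (sketch) -- tail the weight log and ship to Elasticsearch,
# routing documents through an ingest pipeline on the cluster side.
filebeat.inputs:
- type: log
  paths:
    - /var/log/kegscale/weight.log

output.elasticsearch:
  hosts: ["https://my-cluster.example.com:9200"]
  username: "kegscale"
  password: "${ES_PASSWORD}"
  pipeline: "keg-scale-readings"
```

The `pipeline` setting is what hands each raw log line off to the ingest pipeline described below, so the Pi never has to parse anything itself.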

The only small complication with using Filebeat is that Elastic does not provide binaries for Raspberry Pi/ARM by default. I had to build it.

Luckily, the instructions on the Elastic site are pretty clear. As long as you pick the version of Go that matches the version of Filebeat you’d like to use, the build should be pretty straightforward. (It’s called out in the developer guide, but make sure you select the right Beats version in the version picker on that page.)

Once all of the components were working, the final step was to configure the scale reader scripts and Filebeat as systemd services and have them startup at boot.
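For the scale reader, the systemd unit can be as simple as the sketch below (the paths and unit name are hypothetical; Filebeat ships with its own unit file):

```ini
# /etc/systemd/system/kegscale.service
[Unit]
Description=Keg scale weight reader
After=network.target

[Service]
ExecStart=/usr/bin/python3 /opt/kegscale/read_scale.py
Restart=always

[Install]
WantedBy=multi-user.target
```

After dropping that in, `systemctl enable --now kegscale` starts it immediately and at every boot.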

Indexing the data

I needed to get that weight data indexed in Elasticsearch. Since I was using Filebeat, I decided to parse the data on the Elasticsearch side using an ingest pipeline. Once again, Logstash is an option, but the goal was to put the absolute minimum load on the Raspberry Pi so I could use it as a local dashboard and potentially expand it later with more sensors (like temperature).

Every few seconds, the simple weight reader spits out log lines like this: “2018-06-22T20:57:02+0000 – 134.0”. This is extremely simple to parse, as it’s just a timestamp followed by a “-” and a weight reading, so the ingest pipeline is equally simple:

  "description" : "Parse the readings from the keg scale",
  "processors" : [
  	"grok": {
    	"field": "message",
    	"patterns": ["^%{TIMESTAMP_ISO8601:readtime}\\s+-\\s+%{BASE10NUM:weight}$"]
  	"convert": {
    	"field" : "weight",
    	"type": "float"
  	"date" : {
    	"field" : "readtime",
                                            	"formats" : ["ISO8601"]
  "on_failure" : [
  	"set" : {
    	"field" : "ingest_error",
    	"value" : "{{ on_failure_processor_type }} - Error processing message - {{ _ingest.on_failure_message }} : {{ message }}"

The result is a document every couple of seconds with a floating point “weight” and a timestamp, along with the various metadata that Filebeat includes (which we could use later if we expand it to multiple kegs).

Elasticsearch cluster

ObjectRocket makes this part very easy, since I could spin up a basic cluster, open up a few ACLs, set up a couple of users, and then just start shipping data. Other than that, the only real requirement here is a functional Elasticsearch cluster with Kibana.

Wrapping up

With everything set up and in place, all I needed to do was plug it in.

Voila! It just runs.

You can see for yourself the initial results:

[Screenshot: Kibana keg dashboard]

We ended up with a simple dashboard showing:

  • Percentage of cold brew left in the keg
  • A Timelion trendline
  • Date of the most recent refill

The system has room for many more add-ons, with cool new projects like Canvas and Vega, but for now, it gets the job done.