An LED display for Home Assistant Part 1

I’ve been working on this particular project for about nine months now, on and off. It’s finally at a point where I consider it complete, though there is still room for improvement and further ideas. Because I’ve been working on it for so long without documenting it here, the write-up will have to be spread over two posts, otherwise I’d never finish it.

Unlike pretty much every other project I’ve ever tackled, this one did not start out as a single idea to be fulfilled. Instead it evolved after playing with various bits of technology.

It started out with looking at LED matrix displays. I was intrigued by these when we were on one of our typical British seaside holidays. As usual, we visited the local video arcade and they had a cool Space Invaders game. Not the 1978 version, but a two-player sit-down game played on a large LED matrix display and controlled with handheld “blaster” controllers:

(Image courtesy of: arcadeshenanigans.com)

Being a geek, I was not particularly interested in playing the game, but I was struck by the size and uniqueness of the display. Of course I looked around the back of the unit and established that it was made of a grid of smaller displays which were LED matrices.

After getting home I did some research and found out that you can get individual panels for relatively little money. A 64×32 RGB panel made to a 4 mm pitch – thus it’s about 256 mm by 128 mm – can be found for about £40, or half that if you don’t mind buying from lesser-known brands. The connector that controls the screen is a 16-pin IDC. I immediately bought this one, badged as Adafruit, from The Pi Hut.

The next thing to do was to figure out how to drive the display.

I was somewhat put out when I discovered the unit itself did not have any kind of integrated framebuffer; instead it uses a protocol known as HUB75. Rows must be lit in sequence, and the display relies on persistence of vision to present a flicker-free image to the human eye. However, I read that with effort it was possible to produce images with more than just the 7 different colours an RGB LED would suggest. In short, PWM would be required.

The 16 pins of the HUB75 connector are wired as follows:

The display has bottom and top halves, each of 16 rows, hence there are two sets of Red, Green and Blue inputs and only 4 bits of row address.

The basic protocol for clocking in a row of pixels is described by the following timing diagram:

(Image courtesy of justanotherelectronicsblog.com)

After the 4 bits of address inputs are set, pixels are clocked in on the rising edge of the clock. Once 64 pixels have been clocked in, the row is latched. During clocking, the display is turned off. The current row is then turned on, briefly, via the “blank” signal. Other sources, and my own project files, call this the Output Enable input, which is active low. After the current row is turned off the cycle repeats for the 15 other rows. This mechanism is sufficient for 7 colours (red, green, blue, magenta, cyan, yellow and white). Producing other colours, and levels of grey, requires repeating rows and shortening the Output Enable signal accordingly.
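The row-clocking sequence above can be modelled on the host without any hardware. The sketch below simulates one colour channel’s 64-bit shift register and latch, following the same clock → latch → output-enable order the timing diagram shows; it is an illustration of the protocol logic, not driver code for the panel.

```c
/* Host-side simulation of clocking one 64-pixel row into a HUB75
 * panel's shift registers, for one colour channel. */
#include <stdint.h>
#include <stdbool.h>

#define ROW_LEN 64

typedef struct {
    bool shift[ROW_LEN];   /* data captured on each rising clock edge */
    bool latched[ROW_LEN]; /* data copied to the output stage on LAT  */
    bool output_enabled;   /* OE is active low on the real panel      */
} hub75_row_sim;

/* Rising edge of CLK: shift the serial data along by one position. */
static void sim_clock(hub75_row_sim *p, bool r1) {
    for (int i = ROW_LEN - 1; i > 0; i--)
        p->shift[i] = p->shift[i - 1];
    p->shift[0] = r1;
}

/* LAT pulse: transfer the shift register to the output latches. */
static void sim_latch(hub75_row_sim *p) {
    for (int i = 0; i < ROW_LEN; i++)
        p->latched[i] = p->shift[i];
}

/* Drive one row: clock in 64 pixels with the display blanked, latch,
 * then briefly enable the output, as described above. */
static void sim_row(hub75_row_sim *p, const bool pixels[ROW_LEN]) {
    p->output_enabled = false;  /* OE high: display off while clocking */
    for (int i = 0; i < ROW_LEN; i++)
        sim_clock(p, pixels[ROW_LEN - 1 - i]); /* first pixel ends at index 0 */
    sim_latch(p);
    p->output_enabled = true;   /* OE low: row lit briefly */
}
```

On real hardware the same sequence is performed by toggling the CLK, LAT and OE pins, and the whole thing repeats per row address.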

But we are getting ahead of ourselves. After obtaining a panel, the next thing was to breadboard something up to see if I could reproduce the HUB75 protocol in code:

The astute will notice a breakout board with a battery wedged into the breadboard. This was a DS3231 (PDF) I2C RTC module. My original idea was to build a pretty looking clock, and then see where that would take me.

Also, the display was powered via my bench supply. These panels are rated for 5V at 2A, though the current draw varies massively with what is actually displayed on the screen at any one time. It’s certainly too much current to power from USB alone.

The microcontroller board was a Pi Pico W. I’d had a couple of these sitting on a shelf for a few months, and now seemed like a good chance to have a play with one. The Pico W is pretty much identical to the original Pico, as used by my SPI flash programmer described in the previous blog post, except it has Wi-Fi capabilities.

My initial attempt was simply to generate patterns on the screen. This was fairly easy, but at this point the code was only a hundred lines long and there were no complexities to speak of, just a single loop that drove the HUB75 pins:

At this point I was pretty pleased, and was pondering what I was actually going to do with the display.

The first step to doing something useful would be to introduce a multitasking environment. It seemed clear that if I wanted to do anything moderately complex, such as drawing into a framebuffer whilst that framebuffer was output to the display, a multitasking environment was going to be very useful. And whilst I could have explored the idea of implementing a task switcher and other low level OS routines myself, it wasn’t something I was particularly ecstatic about doing.

So, whilst there are several options for a light-weight Operating System to run on small controllers like the RP2040 (PDF), the logical choice was FreeRTOS.

FreeRTOS provides a rich suite of APIs for multitasking; everything from task management to timers. Whilst it abstracts away the details of the multitasking environment, the MCU-specific APIs are still there. It is not, therefore, a complete solution to re-targeting codebases that are written for one MCU to a different MCU, though it can help in that task.

In my case I was only really interested in the following functional areas:

  • Task-switching: that is, I wanted to be able to define C functions and run them as tasks that would share the MCU’s computing resources
  • Message passing: tasks would need to exchange data in a safe way
  • Timers: more specifically, tasks would need to be able to be scheduled to sleep for a specified amount of time

After following some basic instructions for building up FreeRTOS and adding the libraries to my project, it was time for some serious thinking about how the runtime environment (tasks) would need to be structured.

My initial design used just two tasks:

  1. An animation task, which drew into the framebuffer memory using simple graphical primitives
  2. A “Matrix” task which would manipulate the HUB75 output pins

It was important that the matrix task would not be held up by the animation task: if it was ever delayed, the display would flicker. The way this was overcome was to make use of a then-experimental feature of FreeRTOS when used with the RP2040: SMP support. The two tasks were thus run on separate cores, one each. The alternative would have been to raise the priority of the matrix task, but since it could be given its own dedicated core, this was not really required.
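For readers wanting to try the same arrangement, the configuration fragment below shows one plausible way of enabling the SMP scheduler and pinning each task to its own core. The macro and function names are from the FreeRTOS SMP API as I understand it (the config macro name has varied between releases, with older RP2040 ports using `configNUM_CORES`); the task names and stack sizes are illustrative, not my actual code.

```c
/* FreeRTOSConfig.h (fragment): run the scheduler on both RP2040 cores
 * and allow per-task core affinity. */
#define configNUMBER_OF_CORES    2
#define configUSE_CORE_AFFINITY  1

/* Task creation sketch: pin the HUB75 scan-out to one core so the
 * animation task can never delay it. */
static void start_tasks(void) {
    TaskHandle_t matrix, anim;
    xTaskCreate(matrix_task, "matrix", 1024, NULL, 1, &matrix);
    xTaskCreate(anim_task,   "anim",   1024, NULL, 1, &anim);
    vTaskCoreAffinitySet(matrix, 1u << 0);  /* core 0: HUB75 scan-out */
    vTaskCoreAffinitySet(anim,   1u << 1);  /* core 1: drawing        */
}
```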

The framebuffer API was quite a lot of fun to work on, and indeed, it is the part I most enjoyed working on throughout the whole project.

To make things a little cleaner, I chose to implement much of this code in a very thin C++ style, making use of classes to encapsulate some of the data structures required.

A pixel is defined as a uint32_t in the form of RGBx, where x is not used. However, to keep the code looking tidy, the uint32_t is paired (via a union) with a struct containing uint8_ts of red, green, blue and a dummy value for the unused byte.
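The union described above might look something like the following. Note that which byte of the word each component lands in depends on the target’s endianness (the RP2040, like most hosts, is little-endian); the field names here are my own.

```c
/* A pixel as a single 32-bit word, paired with its byte components. */
#include <stdint.h>

typedef union {
    uint32_t word;           /* the whole pixel as one value          */
    struct {
        uint8_t r, g, b;     /* colour components                     */
        uint8_t unused;      /* padding: keeps pixels word-aligned    */
    } c;
} pixel_t;

/* Convenience constructor for a pixel from its components. */
static pixel_t make_pixel(uint8_t r, uint8_t g, uint8_t b) {
    pixel_t p = { .c = { r, g, b, 0 } };
    return p;
}
```

Keeping the word form available makes framebuffer clears and copies a single 32-bit store per pixel, while drawing code can still address channels by name.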

The following drawing primitives are available in the current implementation. They were added over time as they were needed:

  1. clear: The 64×32 array of pixels is set to the specified colour value.
  2. point: The pixel at x, y is set to the specified colour.
  3. hollow_box: An empty box at x, y and width, height is drawn in the colour specified.
  4. filled_box: Same for a solid box.
  5. shadow_box: The pixels at x,y to width,height are read in and then updated using the given gamma (intensity) level. This is used to darken a portion of the screen so it can then be drawn over, probably with text.
  6. print_char: Prints a single character at the given x,y in the given colour and in the given font, which are discussed below.
  7. print_string: Prints a string in the given font at the given x,y, in the given colour.
  8. string_length: Returns the length of the string, if it were to be drawn in the given font. Useful for centering text and the like.
  9. print_wrapped_string: Prints longer strings across multiple lines. Strings are always printed starting from the left of the screen. Used for displaying very short paragraphs of text.
  10. show_image: Displays the given image at coordinates x,y. Images are discussed more below.
  11. show_image: An overloaded version of this method is provided that can output an image with a reduced intensity. It’s also possible to skip over black pixels in the source image, so that the output pixel for those pixels is not set to black but is instead left as-is.
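The darkening step used by shadow_box (and the reduced-intensity show_image) can be sketched as a per-channel scale. A simple linear scale is assumed here – the real code’s “gamma” handling may differ – and shadow_box itself would loop this over the x,y to width,height region before text is drawn on top.

```c
/* Sketch of the shadow_box darkening step: scale each channel of an
 * existing pixel by an intensity level (0 = black, 255 = unchanged).
 * A linear scale is an assumption on my part. */
#include <stdint.h>

typedef union {
    uint32_t word;
    struct { uint8_t r, g, b, unused; } c;
} pixel_t;

static pixel_t darken(pixel_t p, uint8_t level) {
    pixel_t out = p;
    out.c.r = (uint8_t)((p.c.r * level) / 255);
    out.c.g = (uint8_t)((p.c.g * level) / 255);
    out.c.b = (uint8_t)((p.c.b * level) / 255);
    return out;
}
```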

Three types of font are supported:

  1. Fixed width 8×8 fonts
  2. Proportional fonts, all 8 pixels high
  3. Grayscale fonts

The fixed width fonts are trivial enough; I chose the original IBM PC font as I’ve not used it in any projects before.

Proportional fonts are similar and are stored one byte per row. The first byte in a character is the count of horizontal pixels that are valid for this character, up to a limit of 8. I ended up creating my own font, pretty much from scratch, that is optimised to fit in a small amount of space but still be readable.
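The layout described above – one width byte followed by eight row bytes per glyph – makes string_length a simple sum. The toy glyphs and the 1-pixel inter-character gap below are my assumptions, not the real font data.

```c
/* Sketch of the proportional font layout: per glyph, one width byte
 * (up to 8) followed by 8 row bytes of pixel data. */
#include <stdint.h>
#include <stddef.h>

#define GLYPH_BYTES 9  /* 1 width byte + 8 row bytes */

/* A toy 3-glyph font covering 'A'..'C'; widths 5, 4 and 4 pixels. */
static const uint8_t toy_font[3][GLYPH_BYTES] = {
    { 5, 0x70,0x88,0x88,0xF8,0x88,0x88,0x88,0x00 },  /* 'A' */
    { 4, 0xE0,0x90,0xE0,0x90,0x90,0x90,0xE0,0x00 },  /* 'B' */
    { 4, 0x60,0x90,0x80,0x80,0x80,0x90,0x60,0x00 },  /* 'C' */
};

/* string_length equivalent: sum glyph widths plus a 1-pixel gap
 * between characters (but not after the last one). */
static int string_length(const char *s) {
    int len = 0;
    for (size_t i = 0; s[i]; i++) {
        len += toy_font[s[i] - 'A'][0];  /* width byte leads each glyph */
        if (s[i + 1])
            len += 1;                    /* assumed inter-character gap */
    }
    return len;
}
```

This is what makes centring text cheap: measure first with string_length, then print at (64 − length) / 2.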

The grayscale fonts are by far the most complicated. I found a neat website that can convert TrueType fonts into C arrays of anti-aliased glyphs.

The details of how an individual font is used in the code are abstracted; to use a font, all that is required is to obtain its struct pointer by opening it by name, and that pointer is then passed to one of the print routines.

Images are simple arrays of 4 x uint8_t pixels, with a record describing the width and height of each image. Again each image is named. It is possible to have multiple instances of the same named image, each with a different width and height.
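A minimal version of that record-plus-lookup arrangement might look as follows. The image names, and matching on name plus dimensions to pick between same-named instances, are my assumptions about the scheme described above.

```c
/* Sketch of named image records: raw RGBx pixel data plus a record of
 * name, width and height. Two records may share a name, so lookup
 * matches on name and size together. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    const char    *name;
    int            width, height;
    const uint8_t *pixels;   /* width * height * 4 bytes, RGBx */
} image_t;

/* Placeholder pixel data; generated C fragments would go here. */
static const uint8_t wifi_16x16[16 * 16 * 4];
static const uint8_t wifi_8x8[8 * 8 * 4];

static const image_t images[] = {
    { "wifi", 16, 16, wifi_16x16 },
    { "wifi",  8,  8, wifi_8x8 },   /* same name, different size */
};

/* Find the instance of a named image with the requested dimensions. */
static const image_t *find_image(const char *name, int w, int h) {
    for (size_t i = 0; i < sizeof images / sizeof images[0]; i++)
        if (strcmp(images[i].name, name) == 0 &&
            images[i].width == w && images[i].height == h)
            return &images[i];
    return NULL;
}
```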

To produce the C source code for an image I wrote a simple Go program to convert BMP images to C source code fragments.

I then set about exercising these routines by bouncing some blocks and text around the display:

(When I took this picture I was experimenting with approaches to diffuse the light from the LEDs in the matrix. I have yet to settle on a solution.)

At this point I’d also had some limited success at PWMing the Output Enable line to show pixels at different colour intensities. This worked by dividing each frame into 8 time slices and keeping the pixel on for a proportion of those slices depending on the colour intensity. 8 intensity levels is significantly fewer than 256, but it was better than just on or off.
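The slice scheme can be sketched as a quantisation of the 8-bit channel value down to a number of “on” slices, with the pixel lit in a given slice only while its level exceeds the slice index. The rounding choice here is mine.

```c
/* Sketch of the 8-slice intensity scheme: map an 8-bit channel value
 * to 0..8 lit slices out of each frame's 8 time slices. */
#include <stdint.h>
#include <stdbool.h>

#define SLICES 8

/* Number of slices a pixel of the given intensity stays lit for. */
static int slices_on(uint8_t intensity) {
    return (intensity * SLICES + 127) / 255;  /* 0..8, rounded */
}

/* Whether the pixel is lit during the given slice (0..7). */
static bool lit_in_slice(uint8_t intensity, int slice) {
    return slice < slices_on(intensity);
}
```

The matrix task would then consult lit_in_slice while scanning each row, so a half-intensity pixel is on for roughly half the slices and persistence of vision blends the result.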

After getting this working, it was again time to ponder what I might want to do next.

The first thing to do was to return to the idea of building a clock. This was pretty easy as I had most of the pieces required: a set of APIs for drawing text into the framebuffer, and a reliable way to display the framebuffer on the panel. The only missing piece was access to the I2C hardware on the RP2040, but this turned out to be largely trivial – certainly much easier than when I played with I2C on an AVR.

To keep things tidy I put the I2C work in its own task and periodically posted the BCD time data to the animation task using the FreeRTOS message queue APIs.
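The DS3231 reports its time registers in BCD, so somewhere along the way a conversion is needed before the digits can be printed. The conversion itself is standard; the message layout shown is an assumption about what the I2C task might post.

```c
/* Convert a DS3231-style BCD register value to plain decimal,
 * e.g. 0x59 -> 59. */
#include <stdint.h>

static uint8_t bcd_to_dec(uint8_t bcd) {
    return (uint8_t)(((bcd >> 4) * 10) + (bcd & 0x0F));
}

/* Illustrative message the I2C task might post once a second;
 * the real queue payload may differ. */
typedef struct {
    uint8_t hours, minutes, seconds;  /* after BCD conversion */
} time_msg_t;
```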

In Part 2 I will go into detail about how the Home Assistant integration was achieved, discuss some problems found with driving the HUB75 panel from the Pi Pico, and describe the hardware solution to that problem I went for.

In the meantime I’ve made a video of this project in operation:

I’m generally very pleased with how it turned out, though this video doesn’t really do it justice – the LED matrix itself in particular. It’s very difficult to make an LED matrix look great with a camera phone…

 
