It’s been eight months since I last wrote in this blog. To my regular readers, should I happen to have any, I’m really sorry about that.
The main reason for the large gap is that I had a full-time job, and therefore a massively reduced amount of free time. I say had because it's no longer the case, for reasons not of my choosing. The job was very interesting in that it exposed me to two new programming languages.
The first was Go. I’d played with Go previously, but had never written anything substantial in it. It has its quirks, like any language, but overall it is excellent. It has all the modern ecosystem characteristics of, say, Python, but it is compiled so performance is fantastic. Like all good languages, it is statically typed, but has type inference, reducing the programmer’s burden.
The other language was Rust.
I could write a tome about Rust, but suffice it to say the learning curve is nearly vertical in places. Overall I found it extremely challenging to learn, despite having been a software developer of one sort or another for more than 30 years.
The feature of Rust that people talk about the most is the Borrow Checker, but in fact once you get your head around it, working with it becomes fairly natural. The thing I really like about the language is the type system, especially its implementation of enumerated types and, related to that, Result types and error handling.
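To give a flavour of what I mean, here is a minimal sketch of enums and Result-based error handling. The `parse_port` function and its error variants are invented for illustration, not from any real crate:

```rust
// A sketch of why Rust enums and Result make error handling pleasant.
// Each failure mode is a distinct enum variant, and the compiler forces
// callers to deal with all of them.

#[derive(Debug, PartialEq)]
enum ConfigError {
    Empty,
    NotANumber(String),
    OutOfRange(u32),
}

fn parse_port(input: &str) -> Result<u16, ConfigError> {
    if input.is_empty() {
        return Err(ConfigError::Empty);
    }
    // `?` propagates the error upwards, so there is no invisible
    // exception path: failure is right there in the return type.
    let n: u32 = input
        .parse()
        .map_err(|_| ConfigError::NotANumber(input.to_string()))?;
    if n == 0 || n > 65535 {
        return Err(ConfigError::OutOfRange(n));
    }
    Ok(n as u16)
}

fn main() {
    // A match over the Result must handle both arms.
    match parse_port("8080") {
        Ok(port) => println!("listening on {port}"),
        Err(e) => println!("bad config: {e:?}"),
    }
}
```

The nice part is that the error cases are ordinary data, so they can be matched on, logged, or mapped into other error types without any hidden control flow.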
One thing I especially like about both Go and Rust is that neither makes use of exceptions, which I've never liked in C++ code as they make following execution paths so difficult. A thing I didn't like at first, but ended up loving, is that both languages have an enforced coding style. Thus there is no arguing amongst programmers about the placement of braces, etc.
Perhaps in my next role (I’m still looking, by the way!) I will be able to use these newly acquired skills.
To projects then.
First of all, neither MAXI030 nor my 32 bit softcore processor are dead.
On the MAXI030 front, the last thing I was working on was accelerated console graphics, and I nearly have it working. These changes replace the software implementation (kernel code) for copying screen data with a hardware blitter, and supplement that with hardware for drawing rectangles on the screen, all implemented within the video card FPGA. The main purpose of this is to speed up scrolling. Unfortunately the speed-up was not as great as I'd hoped, because the most costly console graphics function is not the actual scrolling but drawing new characters on the screen, something which is very hard to accelerate because the source character data (including colours) is not stored in graphics card memory.

It remains very frustrating that there appears to be no way to apply this hardware to the task of speeding up the X server, at least without writing my own X server. If this were possible, I'm pretty sure I could produce a fairly usable X environment.
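To make the scrolling scheme concrete, here is a software model of the two operations the blitter provides: a rectangular copy and a rectangular fill. The framebuffer layout, dimensions and function names are all invented for illustration; the real hardware interface lives in the video card FPGA, not in code like this:

```rust
// A software model of scrolling the console with two blitter-style
// operations: copy a band of the framebuffer upwards, then fill the
// newly exposed strip at the bottom. Dimensions are illustrative.

const WIDTH: usize = 640;
const HEIGHT: usize = 480;

// Copy `rows` scanlines starting at src_y to dst_y. Iterating
// top-to-bottom is safe when moving data upwards (dst_y < src_y),
// even though the regions overlap.
fn blit_copy(fb: &mut [u8], src_y: usize, dst_y: usize, rows: usize) {
    for row in 0..rows {
        fb.copy_within(
            (src_y + row) * WIDTH..(src_y + row + 1) * WIDTH,
            (dst_y + row) * WIDTH,
        );
    }
}

// Fill a horizontal band with a solid colour.
fn blit_fill(fb: &mut [u8], y: usize, rows: usize, colour: u8) {
    fb[y * WIDTH..(y + rows) * WIDTH].fill(colour);
}

fn main() {
    let glyph_h = 16; // assumed character cell height
    let mut fb = vec![0u8; WIDTH * HEIGHT];
    fb[(HEIGHT - glyph_h) * WIDTH] = 7; // a pixel on the bottom text row

    // Scroll up by one text row: copy everything below the top row up...
    blit_copy(&mut fb, glyph_h, 0, HEIGHT - glyph_h);
    // ...then clear the strip the copy exposed.
    blit_fill(&mut fb, HEIGHT - glyph_h, glyph_h, 0);

    // The marked pixel has moved up exactly one character cell.
    assert_eq!(fb[(HEIGHT - 2 * glyph_h) * WIDTH], 7);
}
```

The point of the exercise is also why the win is limited: drawing a *new* character into the exposed strip still needs per-glyph pixel data that only the CPU has, so that part stays in software.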
After that I intend to tackle the mammoth job of restructuring the MAXI030 main board FPGA design to include a DMA controller. This will be extremely involved because, with a DMAC driving the buses, the FPGA address bus pins, amongst others, must become bi-directional: the FPGA will drive them when the DMAC is bus master and treat them as inputs when the CPU is. Also, address decoding and wait state generation will no longer be controlled just by the CPU, but by the DMAC within the FPGA as well. In many ways the design for a DMAC would be much simpler if it were a separate FPGA with its own pins. I'm also worried about hitting the speed limits on the FPGA. Nonetheless, this all seems like a nice challenge and, once complete, should greatly speed up IDE disk access, amongst other uses. For reference, the last time I implemented something like this was for MAXI09, which had a simple DMAC within its main FPGA.
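The bus-mastership problem can be sketched in software terms (this is a toy model, not the VHDL, and the names and the IDE address window are entirely made up): the address pins are outputs only while the internal DMAC owns the bus, and address decoding has to respond to whichever master generated the address:

```rust
// A toy model of bus mastership. The FPGA's address pins flip between
// inputs (CPU driving) and outputs (internal DMAC driving), and the
// decode logic must work for either master. All names are invented.

#[derive(Clone, Copy, PartialEq, Debug)]
enum BusMaster {
    Cpu,
    Dmac,
}

#[derive(Clone, Copy, PartialEq, Debug)]
enum PinDirection {
    Input,  // high impedance; the CPU drives the address bus
    Output, // the FPGA's DMAC drives the address bus
}

fn address_pin_direction(master: BusMaster) -> PinDirection {
    match master {
        BusMaster::Cpu => PinDirection::Input,
        BusMaster::Dmac => PinDirection::Output,
    }
}

// Decoding cares only about the address on the bus, not who put it
// there. This hypothetical window stands in for the IDE interface.
fn selects_ide(addr: u32) -> bool {
    (0x00f0_0000..0x00f1_0000).contains(&addr)
}

fn main() {
    assert_eq!(address_pin_direction(BusMaster::Dmac), PinDirection::Output);
    // The same decode fires whether the CPU or the DMAC drove the address.
    assert!(selects_ide(0x00f0_0100));
}
```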
On the CPU32 front (a very poor name, and no mistake), the road ahead is much less clear.
I do know that I have very little interest in working on the Intermediate Representation translation side of my LLVM target. After getting very basic C code to generate what looks like reasonable assembly, I hit a brick wall of complexity in the underlying LLVM code generation frameworks. Basically, making a “complete” code generator capable of compiling even a subset of C requires an investment of time I'm not willing to make, because there are more interesting things to be doing. I will, though, continue to work on the assembler side of my LLVM target. I also need to push up my now very out-of-date LLVM fork so it doesn't get lost.
The long-term goal of producing a softcore design with a pipeline is also not dead, but it will be some time before I even think about starting on it.
Home Assistant next.
I’ve had an on/off thing with “smart” devices in the home for years, ever since I bought my first Squeezebox network music player (which, incidentally, does not require any cloud servers). I’ve also played with the Amazon Echo products, but have always been put off by the impact on privacy and by their general unreliability.
However, a few months ago a friend introduced me to Home Assistant, which he was looking at using to supplement some of the functionality he was getting from mainstream providers in his home. The idea of home automation without involving any external cloud services, and the privacy benefits that brings, seemed intriguing enough that I thought I would give it a try; privacy is a big “selling point” for Home Assistant.
A couple of months later and my house is now equipped with smart bulbs, switches, motion sensors and other gadgets.
Home Assistant is currently running in a Docker container on my fileserver. I have avoided using Wi-Fi for the link to the smart gadgets wherever possible, principally because of the security implications of running potentially dodgy firmware on devices on my LAN. Instead I’m using Zigbee with a ConBee II USB dongle.
A brief list of some of the Zigbee devices I’ve had good success with:
- IKEA TRÅDFRI dimmer
- IKEA STYRBAR dimmer
- Various unbranded bulbs from Amazon
- Aqara E1 blind motor
- Aqara temperature and humidity sensor
- Aqara cube
- Samotech dimmer
- Sonoff motion sensor
- Sonoff MINIR3 Smart Switch
This is not a guide to my Home Assistant setup. I suggest you check out the following YouTube channels if this topic interests you:
I will mention one solution I’ve employed, which I think could be useful to others.
One of the pitfalls of implementing smart lighting in the home is the wall-mounted switches. It’s often the case that someone will add smart bulbs to their home but leave the existing light switches in place, controlling the lighting with motion sensors or simply their phone. The downside to this is that someone, e.g. a guest, can easily turn a light off at the physical light switch.
There are several solutions to this:
- Cover up the old physical light switch
- Replace the physical light switch with a smart switch powered by the mains and use a smart bulb with it
- Use a “dumb” bulb with a smart relay mounted in the pattress box behind the old physical light switch
- Something else
In one room, my office, I have gone for the third solution, using a Sonoff MINIR3 Smart Switch. The relay sits in the backbox and attaches to the old physical light switch. I can control the light using Home Assistant automations, or with the old physical light switch. One downside is that the relay behind the physical switch makes a clicking noise when it switches state. An advantage, though, is that the light switch can control the light even if the Home Assistant service is not running, the server is dead, etc.
For most of the other lights my solution is an IKEA TRÅDFRI dimmer placed over the physically wired switch using a mount I bought off eBay:
The advantage with this approach is nearly all the lights in my house are now dimmable and can be automated, but if I ever have a problem with Home Assistant, like my server dies, I can still operate the lights by popping off the dimmer, which is attached to the mount with a magnet.
Incidentally, whilst I could have CADed up my own mount design, the ones I purchased from eBay are printed in nicer-than-PLA ABS plastic, and they have been treated to reduce layer lines; they basically look like injection-moulded parts. All in all, I’m happy to spend a little money on a better result.
Lastly, because of wanting to make my own “smart” devices, I’ve also been playing around with ESPHome. ESPHome is a platform for ESP32 (and ESP8266) based boards which turns them into smart devices for hooking up to Home Assistant. It can be used to produce sensors, displays, light panels, audio output devices, or combinations of the above.
I’d not previously played with any ESP32 boards, and they were an eye-opener. Compared to the AVRs, or even small ARM controllers, these devices are extremely sophisticated. The most interesting aspect is that they have built-in Wi-Fi and a simple TCP/IP stack, all on a 2 inch by 1 inch board.
ESPHome takes almost all the effort out of building simple (and not so simple) Home Assistant devices.
The project I undertook was a simple e-paper display, showing upcoming calendar events, weather forecasts, and temperature information from sensors inside and outside the house. The end result is pretty decent, I think:
Hardware wise, this used the following parts:
- WEMOS LOLIN32 ESP32 Lite
- EEMB Lithium Polymer battery 3.7V 1700mAh
- Best 4.2inch E-paper Display Module 400×300 Resolution
- 3D printed case of my own design
The case STL files have been uploaded to Thingiverse in case anyone finds them useful.
The wiring is trivial and doesn’t need a schematic: eight wires hook the display up to the ESP32 board, including power. The bus used is a unidirectional SPI connection.
Getting to the software, it’s a bit of a bodge really. ESPHome is great in its flexibility, but in my view it has a critical missing feature: you cannot enumerate the sensors published by Home Assistant from the ESPHome device. I have not bothered to upload the code because I’m not proud of it at all, but if someone is desperate to see it I can do so.
One ESP32 feature my e-paper display makes use of is deep sleep. In its current configuration the board will sleep for 59 minutes and 30 seconds of each hour, only waking up to grab the latest data and refresh the display before falling back to sleep. With luck, the panel should run for a few months between recharges.
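A rough back-of-the-envelope calculation shows why that duty cycle gives months of runtime. The current figures below are assumptions (typical-ish for an ESP32 with Wi-Fi active, and for a dev board whose regulator leaks a little in deep sleep), not measurements from my board:

```rust
// Rough battery-life estimate for a wake-for-30-seconds-per-hour duty
// cycle. All current figures are assumed, not measured.

fn estimated_runtime_days(
    battery_mah: f64,
    awake_ma: f64,
    sleep_ma: f64,
    awake_s_per_hour: f64,
) -> f64 {
    let sleep_s = 3600.0 - awake_s_per_hour;
    // Charge consumed per hour, in mAh.
    let mah_per_hour = (awake_ma * awake_s_per_hour + sleep_ma * sleep_s) / 3600.0;
    battery_mah / mah_per_hour / 24.0
}

fn main() {
    // 1700 mAh battery, ~80 mA awake (Wi-Fi up), ~0.15 mA in deep sleep,
    // awake for 30 s of each hour.
    let days = estimated_runtime_days(1700.0, 80.0, 0.15, 30.0);
    println!("roughly {days:.0} days between charges");
}
```

With those assumed numbers the estimate lands around three months, which squares with my experience so far; the sleep current dominates, so a board with a leakier regulator would do noticeably worse.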
It’s worth pointing out that I borrowed some weather-related code from another ESPHome based project at https://github.com/kotope/esphome_eink_dashboard, so the code is not all mine.
Anyway, that’s where things stand today. I’m looking forward to getting back to MAXI030, blowing off the dust and getting knee deep in the graphics card VHDL…
> but drawing new characters on the screen, something which is very hard to accelerate because the source character data (including colours) is not stored in graphics card memory.
Can’t you just store pre-rendered bitmaps in VRAM, and then “colorize” them on the fly? Like monochrome-to-color expansion in old XAA terms…
It’s all possible, it’s a question of the time needed to learn the existing code. Patching in acceleration of block copies and filled rectangles was simple enough, but this would not be simple at all.
Ultimately, for console mode, the display hardware should reproduce a character-addressable text mode. I’m just not sure how the Linux kernel abstracts that, like it does so beautifully for the framebuffer. If that could be done you’d have a jolly fast text console experience.