The last couple of times I’ve done a project involving a laser-cut Pi case, people have asked me to put together an in-depth tutorial on how to design them. So I’ve prepared this tutorial using an open-source software package called Inkscape to do just that.
Inkscape is a free vector-based graphics editor that is available for Windows, Mac and Linux, so you can even run it on your Raspberry Pi. If you don’t have it installed already, visit their downloads page to download it for your device.
This tutorial is going to focus mainly on the design of the case, so I’m not going to go into much detail on how to use the basic functions of Inkscape. There are loads of guides and tutorials for this already, so it’s worth getting somewhat familiar with the package before you start.
Once you’ve got Inkscape installed on your device, grab your Raspberry Pi and a vernier caliper or ruler to take measurements from it and you’re ready to start.
With the case design completed, let’s get the case components cut out and see how it looks.
I’m going to cut these on the Atomstack X20 Pro, which is a fantastic machine for cutting plywood and MDF sheets. The 20W diode is much faster than the 5W and 10W diodes and the air assist keeps the cuts really clean. I use LaserGRBL to control my diode laser machines as it’s easy to use and free.
I cut these components out using a speed of 250mm/min and the laser power at 90%.
With the components cut out, we can then glue them together. I usually use PVA wood glue and either clamp or tape the components together for an hour or two while the glue dries.
It looks like our Pi fits into our case perfectly.
So now you know how to design and build your own Pi cases using free software and a diode or CO2 laser cutter.
I hope you’ve found this tutorial helpful. Please let me know in the comments section below if you’ve got any design questions on the tutorial, or if there are any other tutorials you’re interested in.
A couple of weeks ago I was inspired by an old LTT video to try to make my own portable Bluetooth speaker. They used some 2″ full-range Dayton Audio drivers and 1″ tweeters along with an inexpensive Bluetooth amplifier module. They set themselves a goal of beating the $180 price tag that the LG XBOOM Go PL7 carried at the time. They came up with a pretty cool design; it had some quirks but overall it performed reasonably well.
They did however blow out quite spectacularly on the budget when they included their labour costs. So I thought I’d try out this type of project and see what I could come up with.
I started off by scouring the internet for hardware and some design inspiration. I settled on using some 2.5″ full-range Dayton Audio PC68-4 drivers, which would be powered by a ZK-502T Bluetooth amplifier.
I felt that the slightly larger 2.5″ drivers would provide a bit more bass than the 2″ ones they used and I didn’t want to go down the path of including tweeters and a sub as this would increase the size and cost quite substantially and would require a larger amplifier and crossovers.
I also liked that the amplifier had bass and treble controls so there was some opportunity to make adjustments to the sound to suit the final speaker enclosure design.
I primarily use a Bluetooth speaker in a fixed spot in my workshop or in my home office, so I don’t need it to be battery powered although this would be nice for portability. Rather than include a battery pack within the speaker design, I opted for a 12V inline UPS that I could use to provide portable power to the speaker if I needed it.
Designing The Bluetooth Speaker
With the hardware selected, it was time to start working on the speaker enclosure design. I started off looking at different ported speaker designs but was eventually drawn to the visual appeal and experimental nature of transmission line speakers. This was a rabbit hole if ever I’ve seen one! It turns out that the best way to design a transmission line speaker is to follow a pretty rough design guideline and then do a lot of trial and error adjustments until it sounds good.
To start, you need to use your speaker’s free air resonant frequency to calculate the corresponding wavelength, which is the speed of sound (roughly 343m/s) divided by the frequency. My speaker’s resonant frequency is 117.1 Hz, so the corresponding wavelength is 2.929m. We then need to divide this by four to get our recommended transmission line length, which for our speaker is 732mm.
So we essentially now need to design a transmission line housing with a 732mm path from the back of the speaker to the front of the housing. The easiest way to do this is by creating a labyrinth, or a path that crosses back and forth a number of times, within the enclosure.
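If you want to sanity-check those numbers yourself, here’s a quick Python version of the calculation (assuming the speed of sound in air is roughly 343m/s at room temperature):

speed_of_sound = 343.0      # m/s, approximate speed of sound in air at room temperature
resonant_frequency = 117.1  # Hz, the driver's free air resonance

wavelength = speed_of_sound / resonant_frequency  # roughly 2.929m
line_length = wavelength / 4                      # quarter wavelength

print(round(wavelength, 3))       # 2.929 (metres)
print(round(line_length * 1000))  # 732 (millimetres)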
So I sat down with Fusion360 and spent a few hours designing an enclosure to house the drivers, provide a 732mm path from the back of the Bluetooth speaker to the front again and house the amplifier. This is the design that I came up with.
The main internal parts of the speaker, the amplifier housing and the handle would be 3D printed and I’d then use some laser-cut acrylic panels as covers to box them up.
I liked this layout for a couple of reasons: it leaves the transmission line design visible, which I thought looked quite cool, and it also allows the sides to be opened up to add or remove damping material to get it to sound right. Another neat feature of this design is that the amplifier can be swapped out for a different model, or the speaker size can be changed without having to redesign the whole enclosure again. You can just redesign the new amplifier housing to drop in or scale the speaker enclosure to fit the new driver size.
Making Up The Speaker Components
Next came a lot of 3D printing. Each housing took around 36 hours to 3D print. I printed them using black PLA with a 20% infill.
We also had a couple of cold nights at the same time, causing some of the prints to fail by lifting at the corners, but I eventually got the four components made up.
I then laser-cut the side panels from 3mm clear acrylic. 3mm acrylic sheets are one of the most popular thicknesses, so you could easily replace the sides with other transparent or opaque colours or even just use matt black sheets if you don’t want them to stand out.
Assembling The Bluetooth Speaker
Now that we’ve got all of our components made up, we can start assembling the speaker.
Preparing The 3D Printed Parts
If you’ve printed your parts the way I have then you shouldn’t have any supports to remove, but we do need to add some brass inserts to the parts before assembling them. I did this because I figured I’d be taking the side panels off quite often while experimenting with the sound and they need to be held in place quite tightly so that they don’t vibrate, which I didn’t think plain 3D printed holes would handle.
There are a number of 4mm holes around the four prints that we need to melt brass inserts into.
All of the 4mm holes in the amplifier housing – four at the top for the cover and two on each side to connect to the speaker housings (8 in total).
And then almost all of the 4mm holes in each speaker housing – four for the driver, seven on each side for the clear covers and four on the bottom for the feet (22 in total for each housing). The holes that don’t require inserts are the two on the inside bracket that connects to the amplifier housing and the three on the top for the handle – these are all clearance holes for the screws to pass through.
Lastly, all of the holes on the handle – three on each side to connect to the speaker housings (6 in total).
The inserts are just melted into place using a soldering iron that’s set above the melting temperature of the 3D printing filament. Make sure that you get them set as close to square with the print as possible; if they go in crooked, try to straighten them up a bit before removing the soldering iron tip.
Preparing The Amplifier Housing
Next, let’s install our amplifier in its housing using the included standoffs. Look for the smallest M2 standoffs included with the amplifier, the ones with a short male thread on one side and a female thread on the other.
These need to be screwed into the four holes in the base of the amplifier housing. Use a small pair of needle nose pliers to do this. Alternatively, you can melt them into place with the soldering iron as well, but be really careful to set the correct height and ensure that they are perfectly upright.
Add the amplifier to the housing by feeding the potentiometer stems through the three holes on the front first, then gently pressing the back into position.
Secure the amplifier to the brass standoffs with the included black M2 screws.
It looks like my initial hole measurements were off for these, so my front standoffs don’t align with the holes, but the two at the back hold it in place well enough. I have corrected this in the model, so your prints should all align correctly.
Lastly, you’ll need to stick the included heatsink onto the chip in the centre of the board – the one with the shiny surface.
Assemble The Remaining Components
Before installing the drivers in the housing, I’m going to solder some two-core wire onto them to run to the amplifier. You can use speaker wire for this or any spare wire you have at home of a suitable gauge. I used some wire from an old printer power cable.
Push the drivers into the holes in the front of the housing, feeding the wire through first. The drivers are then held in place with some M3 x 8mm screws. I used black screws for all of the ones that are visible on the outside to keep with the general aesthetic.
The inner acrylic side panels can then be installed on the housings, again using some more M3 x 8mm screws.
We can then mount the amplifier housing between the two speaker housings. For this, I’m going to use slightly longer M3 x 12mm screws.
There are two holes in each speaker housing that feed through the 3D printed bracket at the bottom and through the clear acrylic cover to screw into the threaded insert in the amplifier housing.
Then we can install the handle on top of the speaker to provide some additional support and a place to carry the speaker around. This is a bit tricky to get the screws into from inside the housing, but you can get a hex key into the space to tighten them. I used M3 x 8mm screws for these as well.
Now let’s hook our speaker drivers up to the speaker outputs on the amplifier. These just hook up to positive and negative in the same way they’re connected to each driver. I tinned the ends of the speaker wires first before I screwed them into the terminals.
Finally, we can close up the remaining covers with some more M3 x 8mm screws.
I really like how the engraving has come out on the amplifier’s cover.
I’m going to throw some soft fabric into the bottom of the speaker enclosures as a starting point. You need to experiment with different sizes and types of damping material to absorb as much of the higher-frequency output through the line as possible, so this will probably need to be revisited a number of times, but it should be fine as a starting point.
To finish it off, I’m going to screw 8 rubber feet onto it so that it doesn’t vibrate on the surface that it’s placed on. These are also held in place with some M3 x 8mm screws – don’t screw these on too tightly or you risk bursting through the inside of the speaker housing.
Then we can press the silver knobs onto the amplifier’s controls.
And that’s our speaker complete. All that’s left to do is to plug it in and try it out.
Testing Our New 3D Printed Bluetooth Speaker
I have to admit that I didn’t have particularly high hopes for this project when I started it. I’ve got very little experience with audio projects and everything I’ve done here is based on a few hours of googling, but I’m actually quite impressed with the final product. There is definitely some room for improvement and I’ll play around with different materials within the speaker as well, but I’m really happy with this as a starting point.
Have a listen to the audio at the end of my build video to hear it for yourself. It’s obviously difficult to convey the sound well through a video and audio recording, but you can get some idea of what it sounds like and what its limitations are.
To make the Bluetooth speaker portable, we just need to put the UPS in line with the power supply for an hour or so to charge and we can then unplug the power cable to use it.
The controls on the amplifier are great for tuning it to the type of music you like to listen to and your listening preference.
Final Thoughts on the Bluetooth Speaker
Taking a look at the cost, the drivers and amplifier cost me $50, the UPS was another $35 for portability and the filament, screws, inserts, feet and acrylic cost me about another $25, so all up the hardware cost of this speaker was about $110. In terms of time, it took me about 30 hours in total to research, design and build the speaker, so even at minimum wage here in Australia, that is about another $450.
So if you’ve got time on your hands, $110 for the hardware is quite good value for money, but you can definitely get something a lot better than what I’ve built if you value your time.
I’m really happy with the finished product and I’m looking forward to using it in my workshop.
Let me know what you think of my Bluetooth speaker design in the comments section below.
I feel like I might look at adding a bass driver to the void in the middle of the speaker as an optional add-on in future, so let me know if you’ve got any suggestions for that.
If you’ve tried to buy a Raspberry Pi in the past year or so then you’ve probably experienced some level of difficulty in getting one. They’re out of stock almost everywhere, there are generally purchasing limits on any that are in stock, and they’re often being sold at way over their recommended retail price.
A big part of what makes Raspberry Pi boards so attractive is that they’ve got really good documentation and support and a large online community, so you’ll easily find projects, tutorials and answers to any issues you run into along the way.
With that said, there are a large number of single-board computers available that offer similar features to Raspberry Pis, so I thought it would be interesting to get a few and try them out.
The Raspberry Pi 4B is one of the most popular choices for current projects, so I looked for some alternatives that offered similar specs to the 4B and were similarly priced.
I’m not looking for high-end hardware and this isn’t meant to be a benchmarking exercise. My intention is for these boards to be suitable Raspberry Pi alternatives for tinkering with electronics as well as basic web browsing and video playback. There might be more powerful or newer versions of these boards available for an increased price, but I looked at the ones that I felt provided the best value for money for use as a tinkering board. I also had a brief look at the documentation available for each before buying them to make sure that they had some basic guidelines for getting started.
Here’s my video trying out the three boards, read on for the write-up:
The Raspberry Pi Alternatives That I Chose
After sifting through pages and pages of options, these are the three boards that I settled on.
First up is the Orange Pi 3 LTS:
This board runs an Allwinner H6 Arm Cortex A53 quad-core processor running at 1.8GHz. It’s got 2GB of DDR3 RAM and 8GB of onboard eMMC storage. It was the cheapest of the three boards at $35.
The second is the Khadas VIM2:
This board has got an 8-core Amlogic Arm Cortex A53 SoC running at 1.5GHz. It’s got 2GB of DDR4 RAM and 16GB of onboard eMMC storage. This was the midrange of the three at $80.
The third, and the most expensive of the three, is the ASUS Tinkerboard 2S:
This board runs a 6-core Rockchip RK3399 SoC consisting of a dual-core Arm Cortex A72 processor running at 2.0GHz and a quad-core Arm Cortex A53 processor running at 1.5GHz. It’s got 2GB of DDR4 RAM and 16GB of onboard eMMC storage.
This board cost the most, at $120, which is a little more than the recommended retail price of even the 8GB Pi 4B, but it looked like it had the most comprehensive documentation. It also looked like it was the most suited for electronics projects using the GPIO pins rather than being used as a media player or home server like the other two.
This was just my first impression when looking through the documentation of all three boards, so that’s why we’re going to try them out.
For each board, we’ll take a closer look at the hardware features, then have a quick look at the operating system that it is shipped with, then try to get an LED to blink using the GPIO pins (which may require a different operating system to be loaded) and finally we’ll look at the power consumption of each.
Trying Out The Orange Pi 3
Hardware
Let’s start by taking a look at the hardware around the board: we’ve got onboard WiFi and Bluetooth, an IR receiver, a 26-pin GPIO header, USB 2.0 and USB 3.0 ports, a 3.5mm audio jack, microphone, full-size HDMI port, power button, USB C power input and then a microSD card slot on the bottom.
The GPIO pins roughly mimic pins 1 to 26 on a Raspberry Pi, so you may be able to use some shields and adaptors that only use a few pins on the Pi, but my experience is that these are few and far between. It’s more likely that this layout will just be useful if you’re already familiar with the Pi’s GPIO layout.
Operating System It Ships With
The Orange Pi 3 ships out with an Android operating system image pre-installed on its eMMC storage, so let’s take a look at that first. This and the Khadas board look like they’re intended to be used primarily as media player devices – so this preloaded operating system is probably quite useful for that.
The Android operating system that it ships with is quite bare, you’ll need to install your own apps on it to get any meaningful use out of it. The pre-installed apps will just let you play content from a connected drive. So we can’t really do much without installing additional software.
Using The Orange Pi Debian Distribution
If we want to use the Orange Pi for an electronics project that makes use of the GPIO pins, we’re going to need to install Debian. They provide a Debian operating system image on their website, so let’s get that installed on a microSD card and boot it up.
For all three boards, I’m going to use Win32 Disk Imager to flash the operating system image to a 32GB SanDisk Ultra microSD card.
With Debian booted up, let’s try playing some video content to see how the hardware handles it. I’m going to try to play Big Buck Bunny on YouTube on each device to see how they perform with video streaming.
The Orange Pi 3 seemed to handle this first pass reasonably well, with only a few missed frames. It looked like the display was running on a low resolution though, and heading over to the settings confirmed this. So I switched over to 1080p and tried again.
This time the Orange Pi really struggled with the playback. It was noticeably stuttering and dropping frames, and it required some buffering during playback, which is not a limitation caused by my network. So you probably wouldn’t want to use this Pi running Debian for media playback, even at only 1080P.
Turning An LED On and Off Using the GPIO Pins
As far as documentation goes, the user manual covers a pretty broad range of tests to check the basic functionality of almost all of the features of the Orange Pi. It’s written reasonably well too. They have a section in the manual on using the GPIO pins, with one in particular for the control of the digital pins, so I’m going to work through that.
I ran an update, and then downloaded and compiled the wiringPi library, following the instructions.
Now let’s connect our LED to the GPIO pins. I first checked that the LED works when connected to a GND and 5V pin, so I knew that the pins are powered. I then connected it to Pin 7 to test.
Using the gpio readall command we can see what GPIO number corresponds to physical Pin 7 in the table, so that’s GPIO118 and wPi pin 2.
If we set it as an output pin we now see that the mode has changed to out.
Then we can try setting the pin high or low using a 0 or 1, and our LED is now turning on and off.
There are also a few examples in the wiringPi library to help you get started with coding your own projects that use the GPIO pins.
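If you’d rather blink the LED from a script than from the terminal, the same thing can be done through the Linux sysfs GPIO interface instead of wiringPi. This is just a rough Python sketch, assuming sysfs GPIO is available on the Debian image, using GPIO118 as the number gpio readall reported for physical pin 7 on my board (run it with sudo):

import time

GPIO_NUM = "118"              # GPIO number for physical pin 7, from gpio readall
GPIO_PATH = "/sys/class/gpio"

# Export the pin (ignore the error if it's already exported)
try:
    with open(f"{GPIO_PATH}/export", "w") as f:
        f.write(GPIO_NUM)
except OSError:
    pass

# Set the pin as an output
with open(f"{GPIO_PATH}/gpio{GPIO_NUM}/direction", "w") as f:
    f.write("out")

# Blink the LED five times
for _ in range(5):
    for state in ("1", "0"):
        with open(f"{GPIO_PATH}/gpio{GPIO_NUM}/value", "w") as f:
            f.write(state)
        time.sleep(0.5)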
So it was relatively easy to get an LED to turn on or off using the GPIO pins. They also have a dedicated forum with a reasonably active community. Most questions or issues raised get useful answers in a day or two and they cover a range of topics, from questions for beginners to troubleshooting assistance, help with drivers and even topics on various distributions – all of which seem to still be active.
Power Consumption
Taking a look at the power consumption on the Orange Pi 3, it uses around 2.3W at idle and around 4.3W when the CPU is loaded. So it’s quite an efficient board – that’s less than 1A draw at 5V, even when loaded.
So for $35, I’d be happy with the hardware and the community around the Orange Pi 3.
Trying Out the Khadas VIM2
Hardware
Taking a look at the hardware around the board, we’ve got two USB 2.0 ports (so no USB 3.0), we’ve got Gigabit Ethernet, a USB C power input, a PWM fan connector, reset, function and power buttons, an RTC header, a 40-pin GPIO header, infrared receiver, and onboard dual-band WiFi and Bluetooth. On the underside, we’ve also got a microSD card slot and then a range of pads for power input, MCU and GPIO connections which are great if you plan to use this board on an expansion module or PCB.
The VIM2 has a 40-pin GPIO header like the Raspberry Pi, but the pinout is quite different so you won’t be able to use any Raspberry Pi shields or hats on the VIM2 directly.
Operating System It Ships With
Like the Orange Pi, the VIM2 also ships out with an Android operating system pre-installed. This version of Android has a few useful apps pre-installed, including the Chrome browser, so we can actually try streaming Big Buck Bunny directly.
The VIM2 actually did a much better job at streaming this than the Orange Pi. This wasn’t really a fair test and is probably also partially to do with the lighter-weight operating system. To keep it fair, we’ll also see how well it runs on the Linux-based operating system. This is also running at 4K, so it’s at a much better resolution than the Orange Pi could handle as well.
Using The Khadas Ubuntu Distribution
To be able to use the GPIO pins to turn an LED on and off, we’re going to need to install a Linux image. They provide a list of up-to-date operating system images in their product documentation, so it’s as easy as heading over to the page for your board and downloading the image that you’d like to use.
With the operating system image loaded onto our microSD card, we now need to boot the VIM2 from the microSD card rather than from the built-in eMMC storage. To do this, we need to enter Keys mode using the side buttons.
Now that we’ve got it booted, let’s try streaming on it. Before playing the video, I also checked to make sure that it is running at 1080P like the Orange Pi was.
The VIM2 also struggles a bit with streaming HD content on the Linux-based operating system, with similar issues to the Orange Pi. So if you’re going to be using your board as a media player then you’re probably much better off running an operating system that’s designed for use as a media centre like Android, Plex or Kodi.
Turning An LED On and Off Using the GPIO Pins
Next, let’s try to plug the LED into the GPIO pins and turn it on. I’m going to plug it into GPIO pin 7. I again tested that the LED works on the 5V and GND pins first, so I knew that the GPIO pins have power at least.
In the documentation, they tell you that the Amlogic chips include two GPIO ranges and that you first need to figure out the range base for your GPIO pins using a terminal command. You can also get the pin index listed for each GPIO pin by entering another command. They provide this for both of the GPIO ranges, but there is no information on which range is used for what or how these actually map to the GPIO pins.
I found it easier to just get the information using the gpio readall command, as I did previously on the Orange Pi.
If we look at the table, physical pin 7 corresponds to GPIO number 471.
So now let’s run through the process to set that pin up as an output pin and turn on the LED.
If we set it as an output in the terminal and then check its status in the table, we’ve actually now got pin 6 set as an output.
If we cycle it on and off, the LED is not doing anything and from the table it looked like it was cycling pin 6 on and off. So I moved the LED to pin 6 and tried again.
Now we can turn our LED on and off.
This obviously seems like a trivial issue, but small issues like this can leave you wasting hours fault finding. If I hadn’t used the GPIO readall table, I probably wouldn’t have found this issue and I would have spent time going back through the setup and control steps trying to figure out what I had done wrong.
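For reference, the terminal process is just the standard sysfs GPIO sequence, so the same sort of Python sketch shown for the Orange Pi works here too, only the GPIO number changes. A minimal version (run with sudo), keeping in mind that on my board GPIO 471 ended up driving physical pin 6 rather than pin 7:

# Export GPIO 471 (the number listed for physical pin 7), set it as an output and turn it on
try:
    with open("/sys/class/gpio/export", "w") as f:
        f.write("471")
except OSError:
    pass                         # already exported

with open("/sys/class/gpio/gpio471/direction", "w") as f:
    f.write("out")

with open("/sys/class/gpio/gpio471/value", "w") as f:
    f.write("1")                 # write "0" to turn the LED off again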
Other Issues With The VIM2
In using the VIM2, I also ran into two issues that I found somewhat annoying.
The first is that the USB C power port is too close to the HDMI port, so unless you’re using a low-profile cable, you land up having to wedge the two in alongside each other. You can usually just force them into place but this puts unnecessary stress on the ports and you may land up eventually damaging the smaller USB C port.
The second was that the buttons on the side were really easy to push when trying to remove cables. When trying to plug or unplug a device or cable in (made worse by the above issue), I’d often press one of the buttons by mistake when holding the board. This then caused it to turn off or reset, which was frustrating. You could simply be trying to plug in a mouse dongle and you press the reset button by mistake and then have to wait for it to boot up again (and risk corrupting the software).
Khadas also have fairly good documentation. There is a lot to work with, and they have a good spread of information on the hardware and software side, but there are some obvious omissions. They also have an online community and forum which has open topics, but the community doesn’t seem to be as active as the Orange Pi community.
Power Consumption
Taking a look at the power consumption on the Khadas VIM2, it uses around 1.5W to 2.0W at idle and about 3.5W when loaded. So it’s a bit more efficient than the Orange Pi, and I already thought that that was quite good.
For $80, I’d say that this is probably a bit better than the Orange Pi for a media centre, but it looks like it’s got a smaller online community and a bit less support. So you’d probably want to stick with the Orange Pi for electronics projects and tinkering.
Trying Out the Tinker Board 2S
Hardware
The Tinker Board 2S, although the most expensive of the three, is probably the closest to a Raspberry Pi. It’s got the same footprint and general layout as a Pi 3b, with a couple of standout differences.
It’s got three USB 3.2 Gen 1 type A ports and a single USB 3.2 Gen 1 type C port, with the ability to drive an external display hooked up to the USB type C port – so you can run dual displays although it’s only got a single HDMI port. It’s also got dual-band WiFi and Bluetooth, a DSI and a CSI connector, a 5.5mm DC barrel jack for power, a 2-pin fan connector, an RTC battery connector and a 40-pin GPIO header, and on the back is a microSD card slot.
Another appealing feature of the Tinker Board 2S is that the GPIO layout is exactly the same as the Raspberry Pi. Since they share the same footprint as well, you should be able to use some of the same shields and hats on the Tinker Board.
Operating System It Ships With
I couldn’t find any information on whether the Tinker Board’s onboard eMMC storage was preloaded with a particular operating system, so let’s just plug it in and see whether it boots.
After a few minutes, nothing had come up. So I guess it isn’t preloaded with any operating system, which is a bit strange for a device with onboard storage. But we can now move on to loading the operating system onto the Tinker Board.
Using Tinker OS
Tinker OS is ASUS’ distribution of Debian that is designed to be run on the Tinker Board series. There are two options to boot the Tinker Board from: the first is to load the operating system image onto a microSD card and the second is to load the image onto the built-in eMMC storage. I’m going to load it onto the microSD card as that’s what I’ve done for the others.
From their website, you can download a prepared operating system image. Make sure that you select the correct version for your Tinker Board version. They also have some other operating system options available.
Now that we’ve got TinkerOS installed and booted up, let’s check that the monitor resolution is set to 1080P and then try streaming Big Buck Bunny.
Of the three boards, this one did the best by far when playing video content on Linux. There were a couple of stutters initially, but the image quality is great and the stream is actually quite usable.
Turning An LED On and Off Using the GPIO Pins
Unfortunately, the good start was short-lived. It was at this stage that I realised that the documentation was quite in-depth on the hardware side but was almost nonexistent for the software.
After about an hour of reading through forums and pages online, I found a GitHub repository that was linked to by a few sources as being the best way to start using the GPIO pins.
I tried this out a bunch of times in different ways and even on different versions of TinkerOS and just ran into errors – some of which said that this library could only be used on ASUS boards.
I eventually found an answer to another person’s question on a semi-unrelated topic saying that you don’t need to do the install that I had been trying to do as the libraries were already integrated into the later versions of TinkerOS.
This then led me to the next issue. All of the examples that I could find use GPIO pin numbers like 0, 10 or 12, but don’t ever say what physical pins these correspond to. These numbers aren’t mapped out on any diagram or in a table that I could find.
I eventually figured out that pin 12 referred to in the scripts mapped to CPU pin 146, which corresponds to physical pin 32, which was labelled GPIO4C2. Not exactly a logical sequence to follow.
So after a few more hours than I’d like to admit, I eventually got a basic Python script like this to turn the LED on pin 32 on and off.
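If you’d rather sidestep the pin-numbering confusion entirely, that same CPU pin number can also be driven through sysfs in the same way as on the other two boards. A rough sketch (run with sudo), assuming the sysfs GPIO interface is available on your TinkerOS image:

# GPIO4C2 = CPU pin 146 = physical pin 32 on the Tinker Board 2S
try:
    with open("/sys/class/gpio/export", "w") as f:
        f.write("146")
except OSError:
    pass                         # already exported

with open("/sys/class/gpio/gpio146/direction", "w") as f:
    f.write("out")

with open("/sys/class/gpio/gpio146/value", "w") as f:
    f.write("1")                 # write "0" to turn the LED off again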
Power Consumption
In the documentation, they claim that the Tinker Board uses 3.65W at idle and 8.18W under load. My testing produced a result of about 3.3W at idle and 8.5W under load, so this lined up with their documentation reasonably well.
The Tinker Board can also handle substantially more than this through power delivery to connected USB devices and that’s why they’ve opted for the 12 to 18V barrel jack rather than a USB C power input like the other two boards.
If low power consumption is your goal then this board is obviously not as low as the previous two that we’ve tested, but it is a lot more powerful.
Final Thoughts On The Tested Pi Alternatives
So, the question I set out to answer was: are any of these boards worthwhile Raspberry Pi alternatives, and would I recommend any of them?
I’d say that the Orange Pi 3 is a worthwhile option for tinkering with basic electronics projects using the GPIO pins. At $35, it’s fairly cheap and you get a good set of features for your money, with a reasonably active online community to help you out. You’ll probably manage with basic digital inputs and outputs just fine, but I suspect you’ll get stuck with any components that require established libraries or communication protocols to talk to the board.
The Khadas VIM2 is probably the best option of the three for a media server or TV box. Its Android software package seemed to handle video playback well, so I suspect it’ll do a good job with other media-related operating systems as well. You’ll probably run into issues if you try to use it for electronics projects and there isn’t a whole lot of online support for it.
The Tinker Board looked like a great option on paper, and the hardware was quite impressive too, but the documentation relating to the software leaves a lot to be desired. I wasted numerous hours going down the wrong paths on the basics and while this might not happen to everyone, you’ll likely eventually stumble upon a component or piece of software that you’d like to get working and aren’t able to. At $120, I just couldn’t justify buying this over even an overpriced Pi 3 or Pi 4.
Through using these three boards, I was reminded why Raspberry Pis are so sought after. Their documentation, software support and online community extend far beyond the actual hardware. Anyone can copy the hardware, but it’s so much harder to build a community around the product like they’ve done around the Raspberry Pi.
I literally spent about 18 hours working on these three boards to get the basic functions I’ve shown here to work, and nothing I’ve shown is anything remotely complex. It wouldn’t have taken me more than ten minutes to get a brand new Raspberry Pi running on a new operating system installation and blinking an LED. I would have also been able to find numerous tutorials to explain how to do so.
So if you value your time and you expect to build projects that require more complex electronics or software to function then I’d definitely still recommend spending the extra money or buying an older Raspberry Pi. You’re not just buying the hardware, you’re buying into a community, and you’ll save yourself a lot of frustration in doing so.
Last year Seeed Studios launched the reTerminal, a Raspberry Pi Compute Module 4 based touch display terminal with a pretty good list of features. One of the features that looked promising was their high-speed expansion interface on the back, which they said would be used to add plug-in modules to expand on the reTerminal’s functionality and IO.
At that stage, they hadn’t released any details on these expansion modules, but they reached out a few weeks ago and said that their first one has now been launched.
So here it is, the reTerminal E10-1, the first expansion module for the reTerminal.
Let’s open it up and see what it does and how it works.
Where To Buy The reTerminal E10-1
The reTerminal E10-1 is currently available through the Seeed Studio online store:
The reTerminal E10-1 is packaged quite similarly to the reTerminal, in a similarly sized box as well.
On the top, we’ve got a user manual and underneath it is the E10-1. They also include a small screwdriver and a pack of screws.
On the front of the E10-1 is the high-speed expansion port that’ll plug into the back of the reTerminal, along with a screw hole on each side to hold it in place.
On the left side, we’ve got some status LEDs, an Ethernet port and a power port.
You may be wondering why we’ve got the Ethernet and power ports, as these are both already on the reTerminal. That’s because this module allows you to power the reTerminal in a few additional ways. The Gigabit Ethernet port on the E10-1 supports power over Ethernet, so you can power your reTerminal through a PoE enabled network without having to use a separate power adaptor. If you don’t have a PoE network adaptor or aren’t using Ethernet for your project then you can use the 12V barrel jack to power the reTerminal instead of the 5V USB C input on the reTerminal. Additionally, the E10-1 also has a built-in UPS circuit that runs on two 18650 batteries. So this allows the reTerminal to function as a fully standalone wireless, battery-powered device, something that was requested quite a lot when the reTerminal was released.
On the right side are two industrial ports, a DB9 connector for the RS-232 interface and a 6-pin terminal connector for the onboard RS-485 and CAN interfaces. So you’ve now got a number of options for industrial interfaces on the reTerminal, something that’s not very common in the Raspberry Pi expansion board range.
Along the top are some rubber plugs, one of which is an antenna interface.
On the bottom are some vents to allow airflow for the internal fan and speaker.
The E10-1 is a bit thicker than the reTerminal, I guess that’s to allow enough space for the 18650 cells and the upright internal fan.
On the back we’ve just got the cover for the battery compartment. There isn’t another expansion port on the back of the E10-1, so you won’t be able to stack multiple modules together as more become available; you’ll have to use them one at a time.
Let’s get the E10-1 attached to the reTerminal and try it out.
Attaching and Using the reTerminal E10-1 for the First Time
To install the E10-1 on the reTerminal, we need to first remove the rubber plugs on the back of the reTerminal to allow the E10-1 to plug into it. We can then secure it with the two included screws.
Once installed, the entire reTerminal assembly is now quite thick.
I’m also going to install two 18650 cells into it so that we can try out the UPS functionality. These just go into the battery compartment on the back of the E10-1.
With the E10-1 installed, it feels solidly built and like a good quality device, but it’s a bit too bulky to be a truly handheld device. It would be best to have it installed on a wall panel or into an electrical enclosure, which is made easy with the multitude of threaded mounting points.
Let’s plug in our ethernet and power cable and power it up. The CM4 module in the reTerminal has onboard WiFi, so you can use a wireless connection if you’d like to.
It looks like it works right away, the reTerminal powered up and has booted to the desktop.
There is a driver that they say needs to be installed to use the functions of the E10-1. The driver is installed using the following terminal commands:
$ git clone https://github.com/Seeed-Studio/seeed-linux-dtoverlays.git
$ cd seeed-linux-dtoverlays
$ sudo ./scripts/reTerminal.sh
Reboot the reTerminal and then enter the following command to check that the overlay has been installed:
$ ls /boot/overlays/reTerminal-bridge.dtbo
I’m not sure what works with or without the drivers as I reloaded the operating system on my reTerminal to get Raspberry Pi OS Bullseye loaded. Part of this process is the installation of the latest reTerminal driver which appears to include the E10-1 drivers as well. I haven’t specifically installed the E10-1 driver and as far as I can tell everything I’ve tried has worked correctly, but I haven’t tested any of the industrial interfaces yet.
Testing Some of the reTerminal E10-1 Basic Functions
Inside the reTerminal E10-1 is a small cooling fan that is controlled using GPIO pin 23. This fan is off by default, so you need to turn it on through the terminal or through a script that runs in the background.
Let’s try turning it on through the terminal using the following command:
$ raspi-gpio set 23 op pn dh
You’ll then be able to hear a faint humming sound coming from the reTerminal E10-1.
I’m going to turn it off again as we probably don’t need it if we’re not using an SSD or something generating a lot of heat within the enclosure. This can be done with the following command:
$ raspi-gpio set 23 op pn dl
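If you don’t want to toggle the fan manually, a small background script can switch GPIO 23 based on the CPU temperature instead. This is just a rough sketch using the RPi.GPIO library and the standard Raspberry Pi OS thermal sysfs path; the 60°C/50°C thresholds are arbitrary values I’ve picked for illustration:

import time
import RPi.GPIO as GPIO

FAN_PIN = 23        # the E10-1 fan is controlled by GPIO 23
ON_TEMP = 60.0      # degrees C, example threshold to turn the fan on
OFF_TEMP = 50.0     # hysteresis so the fan doesn't rapidly cycle on and off

GPIO.setmode(GPIO.BCM)
GPIO.setup(FAN_PIN, GPIO.OUT, initial=GPIO.LOW)

def cpu_temp():
    with open("/sys/class/thermal/thermal_zone0/temp") as f:
        return int(f.read()) / 1000.0   # reported in millidegrees

try:
    while True:
        temp = cpu_temp()
        if temp >= ON_TEMP:
            GPIO.output(FAN_PIN, GPIO.HIGH)
        elif temp <= OFF_TEMP:
            GPIO.output(FAN_PIN, GPIO.LOW)
        time.sleep(10)                   # check every 10 seconds
finally:
    GPIO.cleanup()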
Now let’s see if it stays on when I remove the power supply. My batteries were partially charged before I put them into the reTerminal, so it shouldn’t need much time to charge first.
That looks like it has worked. It’s still running with the power cable removed.
The indicator LEDs on the side show when it’s receiving external power and when the internal batteries are charging.
I also wasn’t sure if the Ethernet port on the reTerminal is disabled when the E10-1 is plugged in, so I tried that out. Both ports worked equally well, so it looks like you can use either port if you’re not using PoE.
Opening Up the reTerminal E10-1
The reTerminal E10-1 is not just limited to external features, it’s also got a host of internal interfaces to allow for expandability. Let’s remove it from the reTerminal, then open it up and take a look at what’s inside it.
The main internal interfaces are the mini-PCIe connector, which allows you to add a 4G, LTE or LoRa module, and the M.2 B Key connector, which allows you to add an SSD, USB 3.0 ports, or a 4G or 5G wireless module.
Seeed have provided a list of devices that they’ve tested with the reTerminal on their product Wiki. I’m going to try one or two of them out in a future video.
We’ve also got a SIM card slot for the wireless modules, dual microphones and a speaker along the top, and the PoE adaptor for the Ethernet port.
Final Thoughts on the reTerminal E10-1
I think the reTerminal E10-1 and even the reTerminal itself are geared more heavily towards mild industrial applications than home use, but could certainly be useful in certain home applications.
The touch interface on the reTerminal, along with the UPS and industrial interfaces that the E10-1 adds, makes this a great device for building industrial HMIs to interact with machines, systems and sensors. It’s even great for creating home automation dashboards through applications like Home Assistant, which will now be battery backed. With the addition of a wireless 4G or 5G module, you can be notified of power outages and even run some security routines and still have some level of control when your home’s power is disabled or interrupted.
With the batteries and fan in the enclosure, the reTerminal E10-1 is quite a bulky add-on, but since it’s designed to be wall or panel mounted rather than handheld, this probably won’t affect most use cases.
Let me know what you think of the reTerminal E10-1 in the comments section below and let me know what kind of devices you’d like to see me test on it.
A while ago I did a bit of an experiment to compare the sound level between TMC2208 and A4988 stepper motor drivers. At the time, A4988 drivers were more commonly used on 3D printers and other hobby CNC devices. Since then, most 3D printer and CNC laser manufacturers have moved towards replacing at least the X and Y axis motors with the silent TMC2208 stepper motor driver or some other variant of silent motor driver. A question that has come up quite a lot in the video’s comments was how these drivers manage to drive the motors with such a significant sound reduction and if there was any trade-off.
So rather than just show you some diagrams, I thought I’d set the motor and drivers up again and try to show you through actual measurements.
Here’s my video of the test – read on for the write-up, although the video is the best way to hear the sound difference for yourself.
What You Need To Set Your Own Test Up
To set up your own test like I’ve done, you’ll need a few basic components:
I’m going to be using a Pokit multimeter to take current measurements using the oscilloscope function. You don’t need one of these if you just want to hear the sound difference or tinker with controlling the motors.
Understanding How Stepper Motors Work
There are some really good resources online to explain how stepper motors work, so I’m not going to go into too much detail. The simple explanation is that stepper motors have a number of poles and the driver energises the coils in the motor to align the rotor with these poles in a sequence to rotate it.
The simplest way to do this is to switch one coil fully on and the other fully off, causing the rotor to jump from one pole to the next. This is simple to do electrically but causes the most noise as it induces a lot of vibration within the motor.
We can reduce the noise by instead gradually energising one coil while de-energising the second, so that we gently pass the rotor from one step to the next. The best way to do this without producing vibration is to drive the coils with a sinusoidal current waveform.
The better the stepper motor driver can replicate a sinusoidal waveform, the quieter it’s going to be able to run the motor. But replicating a sine wave perfectly requires more expensive electronics, so there is a bit of a tradeoff.
There are a few other sources of noise or humming in a stepper motor caused by things like magnetic fields, current ripple and chopper frequency, but their contribution is generally much smaller than this.
So let’s have a look at the current waveform that the two drivers produce.
The TMC2208 Driver Test Setup and Code
I’ve got a similar setup to the last test with the two drivers hooked up in the same way to an Arduino.
The drivers are both connected to digital outputs 3 and 4 on the Arduino for step and direction control respectively. So we just need to plug our motor into the one we want to test. I’ve also added a 10K potentiometer, connected to analogue pin A0, to adjust the time delay between step pulses, which in turn will control the motor speed.
The Arduino sketch is very basic, just assigning the pin modes in the setup function and then looping through reading in the potentiometer position and stepping the motor with the measured time delay.
//The DIY Life
//Michael Klements
//30 April 2020
int stepPin = 3; //Define travel stepper motor step pin
int dirPin = 4; //Define travel stepper motor direction pin
int motSpeed = 5; //Initial motor speed (delay between pulses, so a smaller delay is faster)
void setup()
{
pinMode(stepPin, OUTPUT); //Define pins and set direction
pinMode(dirPin, OUTPUT);
digitalWrite(dirPin, HIGH);
}
void loop()
{
motSpeed = map(analogRead(A0),0,1023,50,1); //Read in potentiometer value from A0, map to a delay between 1 and 50 milliseconds
digitalWrite(stepPin, HIGH); //Step the motor with the set delay
delay(motSpeed);
digitalWrite(stepPin, LOW);
delay(motSpeed);
}
Testing the Waveforms from the A4988 and TMC2208 Stepper Motor Drivers
We’re going to start with the A4988 driver by first taking a look at the sound level at different speeds.
The sound level throughout the range of speeds was an average of around 50-60dB. The sound was obviously being amplified by the wooden desk and wouldn’t be that loud with a proper vibration damping mount, but this way you get a good idea of the improvement.
To measure the waveform I’m going to use this Pokit multimeter and oscilloscope and I’m going to connect it in series with one of the motor coils to measure the current flowing through the motor coil.
In the video, you may notice that the motor sounds a bit weird when it’s connected and the oscilloscope isn’t measuring anything. This is because the oscilloscope opens the circuit when it isn’t taking readings, so the motor effectively only has one coil connected to the driver. You’ll see the shaft isn’t turning any more and is just sort of jumping in the same spot. So we’re only interested in the sound the motor makes during readings, after I’ve pushed the red record button.
A4988 in Full Step Mode
With the A4988 driver running in standard full-step mode, you can quite clearly see that the driver is producing a very square wave.
It also doesn’t matter if we increase the motor speed, we still get a similar square wave that just repeats more often in the same timeframe. So this waveform is obviously quite far from a sine wave and therefore produces the most vibration within the motor, leading to the most noise being generated.
That’s not the end of the road for the A4988 driver, it can actually produce somewhat of a sine wave through microstepping.
Microstepping is essentially the ability of the driver to partially energise the coils to position the rotor in positions between two poles, and it does so in a way that resembles a sine wave. So the more positions (microsteps) you can create between each pole, the better your sine wave is going to look.
The A4988 can do half, quarter, eighth or sixteenth step microstepping by pulling a combination of three pins (MS1, MS2 and MS3) high. So let’s see what those look like – we’ll start with half step mode.
A4988 in Half Step Mode
With the A4988 driver running in half step mode, we’ve now got something that is starting to look a bit like a sine wave – but there is obviously still a lot of room for improvement.
The motor also sounded like it was running a little smoother than in full step mode. Looking at the waveform produced, you can clearly see two steps on our sine wave above and below 0.
A4988 in Eighth Step Mode
Now let’s try and improve upon our results with eighth step mode. So in this test, we should now have eight increments between the zero and the maximum on our sine wave.
The first thing you’ll notice is that the sine wave doesn’t fit into our timeframe anymore. That’s because the driver now only moves 1 micro step for each pulse, so our motor is effectively moving 8 times slower than it was in full step mode. So, for example, a motor with 200 steps per revolution running in eighth step mode will now have 1600 steps per revolution.
If we adjust the time scale, we can see our full sine wave and we’ll also notice that our motor is again moving smoother, and slower than it was when in half step mode.
A4988 in Sixteenth Step Mode
Lastly, let’s try sixteenth step mode, which is the most that this A4988 driver can do.
You’ll again notice that the motor is moving half as fast as it was in eighth step mode and we’re getting a wave that’s now looking a lot like a sine wave.
That’s now the end of the road for our A4988 driver. The microstepping has made it run much smoother and a bit quieter, but it’s still quite noisy. So let’s swap over to our TMC2208 driver now.
TMC2208 Running In Legacy Mode
For compatibility with the A4988 driver’s code, we’re going to be running the TMC2208 driver in Legacy Mode. This mode essentially allows the driver to act as a drop-in replacement for the A4988 driver.
If you watched the video, at this stage you probably wouldn’t have noticed that the motor was running. That’s obviously a significant improvement over the A4988 driver, which produced around 50-60dB. The TMC2208 driver operates nearly silently, even when you change the speed.
A big part of how it does this is that the TMC drivers produce 256 microsteps, so sixteen times more than what the A4988 drivers do.
Let’s now hook up the oscilloscope and see what the waveform looks like.
As with the previous test, the motor makes a bit of noise when the oscilloscope isn’t taking measurements as it’s only got a single coil connected, so it’s jumping back and forth around the same pole. It does however go silent again when the oscilloscope is running.
As with the A4988 driver, if we change up the speed we still get the same smooth sine wave, it just repeats more often in the same time interval.
So you can see that’s a significantly improved sine wave over even the best one that the A4988 driver was able to produce.
Final Thoughts on the TMC2208 Motor Driver Test
So now you have a basic understanding of what the TMC2208 drivers do differently to run almost silently.
As for drawbacks, there are two primary ones.
One is a slight reduction in incremental torque, which is not usually an issue unless you’re operating near the motor’s torque limitations.
The second is not so much to do with the motor but to do with the microcontroller telling the driver what to do. As I’ve mentioned earlier, microstepping requires more pulses from the microcontroller to move the motor a full step. So, running in sixteenth step mode requires your microcontroller to output 16 times more pulses than it would need to in full-step mode. If you’re doing this across multiple motors or while doing other tasks, your controller quickly gets bogged down just keeping the motors running and may not be able to keep up.
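To put some rough numbers to that, here’s the arithmetic for a single motor, using a typical 200 step-per-revolution stepper and an assumed speed of 120 RPM as an example:

steps_per_rev = 200    # typical 1.8 degree stepper motor
microsteps = 16        # sixteenth step mode
rpm = 120              # assumed example speed

pulses_per_second = steps_per_rev * microsteps * rpm / 60
print(pulses_per_second)   # 6400 step pulses per second, for just one motor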
Out of interest, during the tests, I was running the drivers with a 12V supply to the motor.
That’s it for today, I hope you’ve learned something and found this explanation useful. Let me know in the comments section what you’ve used these drivers for and check out some of my other projects for ideas.
I’ve been slowly adding more and more devices and sensors to my home automation setup and it’s gotten to a stage where I now have a pretty significant number of apps to control them on my phone and iPad. I’ve also wanted to set up automations and routines between devices, but the interfacing across platforms and between brands isn’t usually available or is buggy at best.
If you’ve done anything home automation related on a Raspberry Pi then you’ve probably heard of Home Assistant. It’s a free and open-source software package that is designed to be a central hub or control system for all of your smart home devices, and it’s got a pretty substantial online community working on integrations. So, for example, it allows you to do things you wouldn’t normally be able to do, like use an Ikea motion sensor to turn on a Philips Hue light – something that isn’t supported by either ecosystem individually.
So today I’m going to be installing Home Assistant onto a Raspberry Pi and I’m going to use a new laser cutter, the Atomstack X20 Pro, to laser cut a housing for it so that I can put it somewhere convenient in my house without it looking like a jumble of wires, dongles and PCBs.
Here’s my video of the build, read on for the full write-up:
What You Need to Build Your Own Home Assistant Hub
The X20 Pro is a new diode laser engraving and cutting machine from Atomstack that uses a clever quad diode laser module to deliver 20W of optical power. The laser is so powerful that they claim it can even cut 0.05mm sheet metal, which as far as I can tell is a first for consumer-level diode lasers. They also say that it can cut up to 12mm sheets of wood in a single pass and up to 8mm sheets of opaque acrylic.
The 20W laser module is quite a bit stockier than the one on the X7.
The control PCB and cooling fan are built into the metal housing and an air port on the top feeds down to a nozzle around the lens for the included air assist system. I really like how well the air assist system is integrated into the design of the module and doesn’t look like an afterthought.
The included air assist is their own branded system. I’ve used an industrial aquarium air pump previously on my K40 laser cutter, so I was expecting this to be something along those lines, but it’s actually a lot better. The unit apparently uses a two-cylinder compressor to deliver 10-25L/min of air to improve cutting and engraving quality and speed. We’ll see how it works in a bit.
At a little over $1,000, it has a hefty price tag, so I’m hoping that this machine can do some cutting that’s at least equivalent to most entry-level 40W CO2 lasers.
So let’s get it assembled.
As with the X7 model, the X20 Pro comes largely preassembled, so assembly is pretty straightforward.
There are a couple of pages for assembly in the manual and the components are labelled for each step, so they’ve made it really easy.
The gantry is all pre-assembled so you mainly need to assemble the four-sided frame and then mount the gantry and belts onto it along with the laser module. The only fiddly job is feeding the belts through the gantry wheels and toothed pulley on either side.
It took me about 20 minutes to assemble the X20 Pro and to adjust the legs so that it sat perfectly flat on my desk.
Test Cut and Engraving on The X20 Pro
I then tried turning it on, particularly to try the air assist pump to see how loud it would be. I have to say that I was pleasantly surprised. The industrial aquarium pump that I’ve used in the past is basically as loud as a standard workshop compressor. This system is substantially quieter in comparison.
It makes quite a noise if you turn the power all the way up, but you probably don’t need to use it at more than half power for most applications. You can feel a decent amount of air coming out of the nozzle at half speed, and you’ll then hardly hear it over the fan on the actual laser module (which is quite loud for a laser module). Even at full speed it’s quiet enough to comfortably talk over and you don’t feel like you need hearing protection when it’s running. It’s not something that you’re going to want to leave running unnecessarily but it’s definitely bearable for a small workshop.
If we plug in the included MicroSD card, there are two test files ready to go, one to cut and one to engrave.
So let’s try those out first. I’m going to get it moved to my workshop so I don’t burn a hole in my desk.
The first file is a dog that was labelled to be used on 2mm plywood. I’ve only got 3mm plywood so I thought I might need to do a second pass to cut all the way through. I used the offline controller to position the laser and run the test cutting file and I used the included distance tool to set the focus distance between the laser and the wood.
The laser seemed to cope just fine with the 3mm plywood and made quick work of the dog, cutting through the sheet in a single pass.
I then tried the engraving and that too produced a great quality finish with the air assist on about 30% power. There is some debate as to whether air assist is required when engraving as it tends to blow the smoke back onto the piece. I still prefer using some masking tape over the wood that I peel off after engraving – this produces flawless results every time.
It looks like the X20 Pro is ready to take on a project, so we need to design the housing to hold the Raspberry Pi.
Designing The Home Assistant Hub Housing
I’ve sketched up a cubic style housing with some feet to lift the Pi off the shelf or desk and a fan on the top for cooling.
I wanted to integrate the Home Assistant logo into the design in some way, so I initially planned on engraving it. That made the housing look a bit too much like an ordinary box, so I decided to laser cut the logo out of each side instead.
I can then glue some clear acrylic or clear plastic sheets onto the inside of the case to keep the dust out. The RGB lighting on the fan should light up the inside of the case just enough to give the logo a bit of a glow – which will hopefully look quite good.
Let’s get the components cut on the Atomstack X20 Pro. I’m going to be cutting the components from the same sheet of 3mm plywood. I’m cutting at 300mm/min and 90% power. I’ve prepared the files in LaserGRBL and I’m going to again use the microSD card and offline controller to do the actual cutting. I find this easier than having to set up a laptop near the laser.
The first piece came out perfectly and you can really see how the air assist has helped to keep the smoke away from the plywood. I started without it for the first USB C port cutout and you can see it’s surrounded by smoke stains. I then turned the air assist on to about 30% power and the rest of the cuts are really clean on the surface.
The underside gets marked by some of the reflected laser light, so I’ll probably look at adding a honeycomb bed at some stage. I also noticed that the localised heat from the laser caused pretty significant warping of the metal sheet once all of the pieces had been cut.
One thing that is a bit of an issue with all of these diode lasers is that there is no smoke extraction system, and cutting wood produces a lot of smoke. So you need to work in a well ventilated area.
Just as a test, I tried a piece of 6mm plywood that I had lying around. I set the laser to 200mm/min and 100% power and it had no problem cutting this out in a single pass either.
Assembling & Painting The Hub Housing
Now that we’ve got the pieces cut out, let’s glue them together and give the housing a light sand.
I’m just going to use regular PVA wood glue to glue it together and then I’ll leave it to dry for a few hours before sanding it.
I used a couple of strips of masking tape to hold the sides together while the glue dried.
I’m going to paint the housing with two coats of a white universal undercoat and then two colour coats. I couldn’t find the exact colour of the Home Assistant logo, but this colour (called Fish Pond for some reason) is about as close as I could find – so I’m going to try it out and see what it looks like.
Once the glue was dry I gave the corners, edges and faces a light sand with 240 grit sandpaper.
I then painted the housing with the two coats of undercoat and two coats of enamel paint, allowing each coat to dry for about half an hour before applying the next one.
After a second colour coat, it’s starting to look pretty good. I just want to fill in the edges a little more and it’ll then be done.
Engraving the Lid Using the X20 Pro’s App
Atomstack have also added an app on the software side that allows you to quickly import and engrave or cut shapes, sketches and images wirelessly, which is a nice way to speed up your workflow.
So I’m going to try and use the app to add some text to the lid of the housing. I’m going to quickly sketch my name in the app’s freehand editor and then engrave it onto the lid.
The app definitely has its limitations, but it’s a great way to quickly add details to pieces where accuracy isn’t particularly important. For anything important, I’d probably still resort to using my computer or the offline controller to control the laser more accurately.
Installing The Home Assistant Hub Electronics
Now let’s get our Pi and fan installed in the housing. I’ve intentionally left a bit of headroom in the top so that there’s space to add shields, adaptors or devices onto the GPIO pins in future as I need them.
The Pi is held in place with some M2.5 brass standoffs that are secured through the base of the housing with a nut each on the bottom.
The Pi is then secured to them with an M2.5 screw into each. You can use additional brass standoffs if you want to mount a hat or shield onto your Pi as well.
A small aluminium heatsink on the Pi will provide adequate cooling as the CPU isn’t going to be under much load during normal operation.
For the fan, I’m going to use a 40mm RGB fan to light up the inside of the case and I’m also going to use a small black dust screen between it and the plywood.
Like I’ve done previously, I’m going to press an M3 nut into each pocket in the fan to screw into. This is easiest done by putting the nuts on a desk or flat surface and pressing the fan pocket down onto them one by one.
The fan and dust screen are then held on the lid with four M3x8mm screws.
I’m going to flash the Home Assistant image onto a 32GB Sandisk Ultra microSD card, which we can plug in through the slot on the back of the housing. You’ll probably need to use some tweezers or needle-nose pliers to reach the card slot.
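If you’re writing the card from a Linux machine rather than a dedicated imaging tool, the general idea is just to decompress the Home Assistant OS image and write it straight to the card. Treat the image filename and the /dev/sdX device below as placeholders for your actual download and card reader, and double-check the device name before writing to it:
xz -dc haos_rpi4-64.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync
Balena Etcher will do the same job with a GUI if you prefer not to use the terminal.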
To finish the housing off, I’m going to stick some clear acrylic panels onto the inside behind each logo so that dust can’t get in around the logo cutouts. These will also provide a bit of support to the thin branches on the logos so that they’re less likely to get damaged or break off.
If you don’t have acrylic you can also use some clear plastic sheets or even old containers with clear flat sides.
I’m gluing the acrylic in place with hot melt glue in four spots along the edges.
Now we just need to plug the fan into the 5V and GND GPIO pins and we can close up the housing. You can also plug the fan into one of the 3.3V pins if you’d like it to run at a reduced speed and be a bit quieter.
Adding a Zigbee Gateway to the Hub
There are two main low-power communication protocols used by smart home devices – Zigbee and Z-Wave.
They’re both mesh networks, meaning that every device on the network connects to every other device in range of it and they then dynamically co-operate with each other to send data between nodes through the most efficient route.
I don’t really have a preference between the two, but most of the devices I’ve got so far operate on the Zigbee standard. So, rather than have my Home Assistant hub talk to each manufacturer’s own hub in order to communicate with its devices and sensors, I’m going to add a Zigbee gateway to the Home Assistant hub so that it can communicate with them directly.
This will also allow me to use 3rd party Zigbee devices and sensors that don’t have hubs or aren’t part of other ecosystems – so they’re generally cheaper.
The Gateway I’m going to be using is this little USB adaptor called the ConBee II as it seems to be the most well supported by Home Assistant.
Ideally I’d like to use one that uses the GPIO pins on my Pi so that I can keep it within the housing, so if you know of any that use the Pi’s GPIO pins and work well with Home Assistant please let me know in the comments section at the end of the post.
That’s basically it, we’re now ready to start using our new Home Assistant hub to control our smart home devices, let’s get it booted up.
Using the Home Assistant Smart Home Hub
Once set up, you can scan your network to find all of your compatible smart home devices and then start building dashboards, automations and routines to control them.
You can access your dashboards through any web browser on your network so you can take control of your home through your laptop, tablet and mobile phone, or even build your own dedicated dashboards with another Raspberry Pi and a touch display.
Check out Smart Home Solver’s channel for some great ideas for home automation routines and automations – he’s got some really creative and unique ideas using a range of sensors and devices.
Using Home Assistant, I’ve now got the motion sensor on my driveway camera set to brighten my porch light for a minute when it’s already on during the evening, and it’ll even turn the light on for a minute during the night if it’s off.
Next I’m going to be setting up some motion sensors or magnetic switches to turn on my pantry and closet lights when the doors are open.
Final Thoughts on the Atomstack X20 Pro
The Atomstack X20 Pro is without a doubt the best diode laser machine I’ve personally used. The powerful laser allows you to work with thicker materials and is now actually quite useful for thinner ones as well. I’m able to cut 3mm plywood three to four times faster than I could with a 5W laser. So it’s actually becoming a worthwhile alternative to my CO2 laser at this point.
The air assist works really well to get cleaner cuts and engravings and won’t leave your eardrums ringing after you’ve used it. And finally, the inclusion of WiFi and a phone app means that you’ve got another way to easily use the X20 Pro, streamlining your workflow.
I’ll definitely be looking to add a honeycomb mesh bed to the X20 Pro and I need to design an enclosure for it so that I can contain and direct the smoke out of my workshop.
Let me know what you think of the Atomstack X20 Pro in the comments section below. Also let me know if you’ve used Home Assistant in your home and what interesting devices and automations you’ve set up.
I’ve been using my Raspberry Pi in this case that I 3D printed almost two years ago. It’s been a great way to protect and cool my Pi and I’ve even made up a few other variants for UPS and SSD shields.
I printed these cases on my original Ender-3 Pro, so when Pergear reached out and said that they’d like me to try out the new Creality Ender-3 S1 Pro, I thought this would be a great opportunity to give my case a refresh.
The Creality Ender series has been my go-to 3D printer for the past three years, I started with an Ender 3 Pro, then got the Ender 3 V2 and then added a second Ender 3 V2. These three printers run for about 10 hours a day and have been doing so for two years now without giving me any significant problems.
I’ve kept them stock for the most part and have found that a well set up Ender-3 prints as capably as other printers that are 3-5 times more expensive. They also have a large online community, a range of upgrades and easily accessible spare parts. So I’m excited to see how the Ender-3 S1 Pro stacks up, as it’s got a number of upgrades and improvements over the original.
Here’s my video of the build, read on for the full write-up:
Let’s start out by getting the new case designed so that we’ve got something to print. I’m going to use Fusion 360 this time around for a more refined finish.
The previous case had a solid body with two clear sides, so I want to mix that up by now having a wrap-around clear panel from the side to the front. A small 45-degree section adds a bit of character to the design and will make the acrylic bends a bit more gradual, rather than a sharp 90-degree.
I’ve also put the USB and Ethernet ports on the back and left some headroom to add an Ice Cube cooler and fan. On the other side, we’ve got the power, HDMI and audio ports and I’ve added some vents above them for the exhaust air.
You can download the design from my Etsy store to 3D print your own case, or alternatively buy a kit that includes the case, bent acrylic side panel and screws so it’s ready to be assembled.
Now, let’s export the parts and get them printed on the Ender 3 S1 Pro. First, we need to get it unboxed and assembled.
Unboxing & Assembling The Ender-3 S1 Pro
Like with all my Enders, the Ender-3 S1 Pro comes well packaged and protected in a sturdy box with foam inserts.
Within the box, the Ender-3 S1 Pro is a lot more pre-assembled than the original. The whole gantry is ready to be mounted onto the base and you then just need to mount the extruder, add the display and add the filament holder.
The base is quite a bit bigger than the original Ender-3 and Ender-3 V2, so keep that in mind if you have limited desk space.
Assembly took around 15 minutes and is really simple with the included step-by-step instructions and tools.
The general shape and layout is similar to the original Ender-3 series, but they’ve made a number of quite significant upgrades with the S1 Pro.
The extruder is now a direct drive, full metal, dual gear design with a hot end that can reach 300°C. This opens up the possibility of printing with a wider range of filaments, including flexible and high-strength materials.
They’ve also added a filament runout sensor that’ll automatically pause the print if your filament runs out mid-print.
The display has been upgraded to a full-colour touch display, allowing them to do away with the rotary pushbutton on the older models.
They’ve also done away with a vertical axis limit switch, and have added their own CR Touch automatic bed levelling sensor to compensate for any print bed height differences. They also include a limit switch and cable as an option to add on if you don’t want to use the CR Touch sensor.
A new overhead LED light bar is a great addition for overnight prints and for keeping an eye on your prints remotely using a camera in a dark environment.
The print bed is now equipped with a spring steel magnetic build plate, and it’s got dual z-axis motors on the back, something that was a common first upgrade on the original Ender-3.
Those are the main upgrades over the original Ender-3 and Ender-3 V2, and it also gets a number of now fairly standard features like silent stepper motor drivers, a 32-bit control board and adjustable belt tensioners.
The Ender-3 S1 Pro currently retails for $499 on Pergear’s Amazon store or $480 on their web store. This is quite a bit more than the standard Ender-3 series, but you’re also getting a number of upgrades and features that are typically only available on higher-end printers.
Once assembled, I used the automatic bed levelling, set the nozzle offset and then set the printer to work on the rabbit test print with the included filament.
The results were really good – keep in mind that this is a print straight out of the box without any adjustments or tinkering with the printer. I didn’t even touch the bed levelling adjustment knobs, I just let the automatic bed levelling take care of it.
Making Up The Case Components
For my case print, I’m going to use black PLA for the print and I’ll use 100% infill as the walls are already quite thin. I used 0.2mm layer height, a wall thickness of 0.8mm and a top and bottom thickness of 1.2mm.
I’m going to print the two parts separately rather than print them at the same time so that there aren’t any imperfections or seams caused when moving between the two parts.
While the 3D print is being printed, let’s make up the acrylic side panel.
I’m cutting this panel from 2mm clear acrylic and I’ll then use a bending tool to heat up the two edges where we need to make the 45-degree bends. I’ve added a cutout for the fan and some guides for the two bend lines.
Let’s get the panel cut out on my laser cutter.
These prints came out really well for one of the first prints I’ve done on the Ender-3 S1 Pro. I’m impressed by the quality of the prints and how smooth the layer lines are, they look quite professional.
Now that the two halves are printed, we need to clean up the 3D printed parts by removing the print supports.
Next let’s bend the acrylic panel to fit the case. You’ll see the small laser-cut notches along the edges that I’m going to use as guides for my bend lines – so I just need to put the bending tool between these two points and allow it to soften the acrylic.
Once the first bend has been heated, I can bend it into place to follow the profile of the case, which I’ll do with the front edge.
Now let’s do the second bend in the same way. This one I’ll need to do in place as I can’t follow the front edge again or it’ll be too big.
I’ve designed guides along the edges to hold the acrylic, so I’ll use those guides to get the final shape right.
I think that’s come out quite nicely and it looks like the acrylic follows the profile of the case quite well.
Installing The Pi And Cooler Into The Case
For cooling, I’m going to use an Ice Cube cooler by Sunfounder. This cooler is an improvement over the Ice Tower I used previously as the base has now been designed to cover the CPU, RAM, Ethernet and USB controller chips rather than just the CPU – so this should provide better cooling to the whole board.
As with my previous design, I’m going to remove the fan from the Ice Cube and mount it onto the acrylic side panel instead, so that it draws cool air in from outside the case.
I’m going to be installing my 8GB Raspberry Pi 4B, running Raspberry Pi OS Bullseye from a 32GB Sandisk Extreme microSD card.
To install the Pi into the case, we need to first secure the brass standoffs in the base of the case. These protrude through the printed standoffs and are held in place with an M2.5 nut on the bottom.
Next we can position our Raspberry Pi on the standoffs and then add a second standoff onto each to hold it in place.
Lastly, we can install the Ice Cube cooler on the Pi. Remember to add the cooling pads to the heat sink before you install it.
Now we just use the included M2.5 screws to hold the cooler in place.
With the acrylic’s shape already formed, let’s mount our fan onto it using four M3x8mm button head screws and nuts. As I’ve done previously, I’m going to press the nuts into the pockets in the fan to screw into. This is easiest done by putting the nut on a flat surface and pressing the fan pocket down onto it.
I’ve also got this carbon fibre fan grill I found online that I’m going to install over the fan. You can skip this if you want to see the RGB fan more clearly.
We can then peel off the rest of the protective film and install the clear side panel.
The fan is plugged into the 5V and GND GPIO pins.
You can also use one of the 3.3V pins if you’d like the fan to run a bit quieter, but it’ll lose a bit of performance too.
Finally, the lid of the case is held in place with three more M3x8mm screws.
And that’s it, our case is now complete. So let’s boot it up and run a test to see how the Ice Cube cooler handles a full load.
Stress Testing The Raspberry Pi & Cooler
The stress test I’m going to use is called CPU burn. It’s one that I’ve used previously for a couple of thermal tests as it seems to generate the most heat out of the tests I’ve tried.
To download it on your Raspberry Pi, open a new terminal window and enter the following commands:
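The cpuburn-a53 test usually comes from ssvb’s cpuburn-arm repository on GitHub, so something along these lines should fetch the source and compile it – the repository path and filename here are my best guess, so check the repository if the download fails:
wget https://raw.githubusercontent.com/ssvb/cpuburn-arm/master/cpuburn-a53.S
gcc -o cpuburn-a53 cpuburn-a53.S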
Then CPU Burn can be run using the following command:
while true; do vcgencmd measure_clock arm; vcgencmd measure_temp; sleep 10; done& ./cpuburn-a53
So running at full load on all four cores pushed the temperature up quite quickly from 23 degrees to 26 degrees, and it seems to have stabilised there, which is not much of an increase at all.
Without a cooler, the Pi thermal throttles in a few seconds with this test, so these large coolers work really well.
I then tried overclocking the Pi to 2GHz and running the test again. At 2GHz it still stabilises at around 35 degrees, so there is probably room to overclock it a bit further if you’d like to try that. But for now, I’m really happy with the results and with how the case has turned out.
Final Thoughts on the Creality Ender-3 S1 Pro
Overall I’m really impressed with the print quality from the Ender-3 S1 Pro and I’m looking forward to trying out some more challenging materials. I’d like to try to print this case in a matt carbon fibre filament to see how that turns out.
I also like that Creality have paid attention to the community’s requests with this design, particularly in addressing the common issues that have been reported on the older models, like the dual z-axis and automatic bed levelling. Even relatively minor issues, like making the filament roller an actual roller, have been taken care of.
As with any 3D printer, I’m sure this one will have a weakness or two and I’ll post some updates here after I’ve used it for a few months. I’m interested to see how the fan angled towards the print bed holds up with pulling in strands of filament and dust.
Check out Pergear’s Amazon store or their web store to get your own Ender-3 S1 Pro, and visit my Etsy store to get your case kit to assemble your own Pi Desktop Case.
Let me know what you think of the Creality Ender-3 S1 Pro in the comments section below and let me know what you think of my new case design.
This is Bittle, a ready-to-run advanced open-source robot dog by Petoi that is based on the OpenCat robotic pet framework.
If you’ve ever wanted to explore building your own robotic quadruped, but have felt overwhelmed by the amount of information and options available or have been at a loss with where to start, then Bittle is the perfect product for you. So in this review, we’ll take a look at what Bittle is, how it works and what it can be used for.
Have a look at my video review to see Bittle in action, or read on for the written review:
Where To Get Bittle
Bittle is primarily available for purchase online through Petoi’s website or their Amazon store and comes in three packages:
Base Kit – Includes all of the parts required to assemble your own robot dog
Pre-assembled Kit – All of the components included in the base kit, but pre-assembled and ready-to-run
Developer Kit – The pre-assembled kit with 10 replacement servos and an extra battery pack
Petoi have sent me the pre-assembled kit to try out and share with you, so that’s the kit that we’ll be taking a look at in this review.
What’s Included In The Box
The base kit comes in a branded box with clear protective inserts to hold the included components in place.
Included is Bittle, along with a battery pack with an integrated charging circuit, and then an accessories kit.
The accessory kit includes an infrared remote, a spare servo and some screws, a calibration tool, a small screwdriver and a pack of modules that allow communication with Bittle. These modules include a USB programming module, a Bluetooth module and a WiFi module.
Assembling Bittle
If you’ve bought the base kit then you’ll need to do some assembly work before you can start using Bittle, including making up the legs, mounting the servos in place at the joints and connecting the wiring through to the control board that makes up Bittle’s body.
If you’ve got the pre-assembled kit, like I do, then you’ll just need to snap the head into place and plug in the battery. You’ll also need to move the servos to the correct starting position as they’re packed with the joints bent in the opposite direction to make Bittle more compact.
The body and components feel like they’re well made and are good quality. Part of what makes this robot dog look great and function so well is that they’ve taken the time to design and manufacture custom parts – like the servo arms that have been specifically designed to join the leg components with the inclusion of a spring to provide a bit of shock absorption.
Controlling Bittle With The Infrared Remote
Once assembled, the included 21 button IR (infrared) remote allows you to start playing around with some of the core functions of Bittle right away. It’ll allow you to walk, run, turn and do a couple of pre-programmed skills right out of the box using a small infrared receiver on Bittle’s back.
The arrow keys control Bittle’s walking/movement directions and speed settings, and 11 skill buttons allow you to execute some of the pre-programmed skills.
Getting the first movement out of Bittle is as easy as plugging in the battery pack and then aiming the remote at his back when you press one of the buttons.
Here’s Bittle waving hello…
Exploring Bittle’s Control Board
Once you’ve tried out Bittle using the IR remote, you can either dive right in to coding your own skills or you can download the mobile app (for iOS or Android) to unlock some additional functionality, including calibration and customized commands. Either way, you’ll need to remove the black cover on the top to get to the control board to plug in one of the communication modules.
Under the cover is a custom-designed controller called NyBoard with an integrated Atmega328P chip, PCA9685 PWM servo driver, MPU6050 motion sensor, an infrared sensor and a number of ports and interfaces to add sensors and devices to.
There appears to have been some revisions made to this board as some of the versions I’ve seen online have a row of RGB LEDs along one side. The core functionality however seems to be largely the same.
I really like that they haven’t trimmed this board down to only suit the functionality and IO that the standard Bittle configuration requires. Leaving additional servo outputs, I2C interfaces and digital IO ports gives you a lot of options to build upon the basic design and make your own modifications and additions to the robot dog. This, along with the open-source software, means that you’re getting a development platform to learn on, build upon and explore, rather than just a finished product that you’ll probably get bored with after a couple of weeks. Part of the fun in building your own quadruped or robotic pet is that you never really finish it – there is always something else you can add, tune or modify – and Bittle retains this.
Coding Routines, Skills And Features
Coding is best done through the Arduino IDE, and you’ll need to use the included communication module to allow your computer to program Bittle. This allows you to plug Bittle into your computer using the included micro-USB cable.
If you’re not comfortable with the Arduino IDE, you can use Python as an alternative. They even have a drag-and-drop coding interface for beginners. So there really is something for every level of experience.
Their documentation is really good and covers everything you may need to use and maintain Bittle, as well as instructions for adding your own sensors, skills and features.
Calibrating Bittle’s Leg Positions
In Petoi’s documentation, they mention that the pre-assembled kit is only coarsely tuned. So they recommend running through the calibration process for best results. I’m going to run through the calibration sequence using their iOS app. To use the app, I need to plug in the Bluetooth communication module to allow my phone to communicate with Bittle.
To help out with the calibration process, I also 3D printed their stand with the calibration arms built into it.
We can then open up the app to pair Bittle to the phone and start the calibration process. If you head over to calibration mode, the legs will move to their calibration positions and you can then make adjustments to their positions.
Coarse adjustment is made by removing the arm from the servo and aligning it as best you can. You’ll need to remove the screw that holds the servo arm to the servo in order to remove it.
Fine adjustment is then done in the app until Bittle’s legs are at perfect 90-degree angles, by aligning the legs with the stand or with the included calibration tool.
You can select each joint in the image at the top of the screen and then make adjustments to it using the + and – signs. It’ll only let you adjust the servo between an upper and lower limit before asking you to rather make a coarse adjustment.
The stand is also useful for trying out new movements and testing commands without having to worry about where Bittle is going or if it’s going to fall off your desk.
Working On Or Repairing Bittle
All of Bittle’s components either screw or snap into place. So it’s super easy to take apart if you need to swap out a servo, change a spring or make changes to the wiring or control board. You just need a screwdriver and you’re good to go.
If you’re doing a lot of work on it then you’ll want to get a better screwdriver than what’s included with the kit as it’s a bit small and cumbersome to work with.
The wiring is also all held in place and partially hidden by snap-on covers over the legs. These help ensure that they don’t interfere with the joint movements and also keep Bittle looking neat.
Using The iOS App To Control Bittle
We’ve already paired the app with Bittle in the calibration process, so now let’s try some customized commands. Bittle has a number of controls and skills that are preprogrammed, these can be set up to run individually or as part of routines using text inputs through the app or the Arduino IDE.
So let’s try one of them. The code to look or check around is ck, so we type in kck to run the command and we can give the quick command a name “Look Around”.
We now have a quick button to look around, which he’ll do each time we push the button.
We can try commands that aren’t available through the infrared remote, like play dead, or march on the spot. We can also string commands together to create routines and behaviour sequences.
The onboard IMU knows the orientation of Bittle, so if he stumbles or falls over, it will automatically activate a routine to flip him back over and onto his feet.
Bittle seems to manage quite well on most flat surfaces. It walks best on surfaces that are a little bit rough, like wood or concrete, but struggles on very uneven or loose surfaces like stones, sand or pebbles.
You can also use the IMU to allow Bittle to balance on uneven surfaces or when pushed or bumped.
Final Thoughts on Bittle
Petoi have clearly put a lot of time and effort into creating a good quality product that is great for a range of experience levels. If you’ve never programmed anything in your life, you’ll still be able to get started with the basic drag-and-drop interface, and the open-source code allows experienced programmers to make any changes they’d like to build upon and improve Bittle.
They also have a number of external sensors already available and are working on some additional ones to add functionality to Bittle.
These include sensors like obstacle avoidance and object tracking through a smart camera. So definitely check out the sensors if you’ve already got your own Bittle, and visit their web store if you’d like to get your own robot dog or cat.
Let me know what you think of Bittle in the comments section below and let me know if you have any project ideas that you’d like to see me try out with him.
Today we’re going to be looking at the reComputer Jetson-10, a palm-sized AI computer that can recognise people, animals and objects, while still being efficient enough to run on a battery pack.
The reComputer Jetson-10 is a new product by Seeed Studio that consists of a palm-sized aluminium case housing a passively cooled NVIDIA Jetson module. The module runs on their custom carrier board that is designed for AI application development and deployment. They have sent me their H0 model, which runs a Jetson Nano module with 128 NVIDIA CUDA cores that can deliver up to 0.5 TFLOPS of computing performance. It’s also got a quad-core ARM A57 CPU running at 1.43GHz, 4GB of LPDDR4 RAM and 16GB of eMMC storage.
Here’s my unboxing and testing video, read on for the write-up:
Unboxing And First Look At The reComputer Jetson-10
The reComputer Jetson-10 comes in a matt black box within a branded sleeve. The packaging is really good, with branded foam inserts to protect the reComputer and to divide the box into two compartments.
Included in the box along with the reComputer Jetson-10-1-H0 is a 12V, 2A power supply with some options to suit a variety of international power outlets. Mine came with two euro adaptors, which I assume was a packing mistake.
The case is a really minimalistic, aluminium design with three plain sides and all of the ports on the back.
On the bottom it’s got large ventilation holes around the edges and four slotted mounting points so that it can be mounted onto a wall for deployment.
One of the sides features rubber feet so that it can stand horizontally or vertically on a desk or table. If you’re running intensive applications then it’s probably best to position the reComputer on its side as this allows the hot air to rise up through the ventilation holes.
On the back, there is a bit of variation depending on the model, but the Jetson-10-1-H0 has got a 12V power input, HDMI and display ports, 4 USB 3.0 type A ports, gigabit Ethernet and a microUSB port which is for recovery and flashing the onboard storage.
The top is my favourite part of the case design. It’s clean and unassuming, but to access the Jetson module, you just push up on a silver rod hidden by one of the vents and this pops the magnetically latched top cover off.
So it’s super easy to access the Jetson module to connect a camera or use the GPIO, you don’t even need to use a screwdriver. The four magnets hold it in place very well, you really can’t tell that the top cover is held in place magnetically and isn’t screwed or snapped into place.
Under the top cover, you’ll see the large passively cooled heatsink on the Jetson Nano module that’s seated in the custom carrier board.
The board has a wide range of IO; some of the nice additions are support for PoE (Power over Ethernet), an optional 4-pin fan plug, and control and UART pins. On the bottom of the carrier board, you’ve got an M.2 M-key slot and an optional battery holder to supply the onboard RTC (real-time clock) module.
Preloaded Software On The reComputer
Now that we’ve had a look at the hardware, let’s plug it in and try running some software on it.
It comes preloaded with NVIDIA’s JetPack SDK, so it’s ready to plug in and boot up right away. The JetPack SDK includes the Jetson Linux Driver Package running on a Linux based operating system (Ubuntu) as well as CUDA-X accelerated libraries and APIs for deep learning, computer vision, accelerated computing and multimedia.
Through JetPack, the reComputer can run a wide range of fairly complex AI systems, including full native versions of machine learning frameworks like TensorFlow, PyTorch and MXNet. So you can use it for things like people, animal and object recognition, for smart systems like traffic control and vehicle detection, and even in manufacturing and logistics.
The tutorials use a number of TensorRT-accelerated deep learning neural networks, which you’ll need to build from source code. This process is all explained in detail and in doing so you’ll learn a lot along the way.
I’ll show you some of the cool things that you can do on the reComputer once you’ve worked through them.
Object Recognition Using ImageNet
The first neural network that I’m going to show you is one that does object recognition, and we’ll start off with a still image. We’ll send a neural network called ImageNet the still image and it’ll then use TensorRT and the imagenet class to recognise the object in the image. It’ll then overlay the classification result and its confidence level onto the image.
The package comes with a few sample images to try out, so if we run this on one of the sample images and then go to the image output folder, we can see that the reComputer is 99% sure that this is a banana.
So it’s pretty confident that it’s got this one right.
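If you want to see what the command looks like, it’s run from the bin folder of wherever you cloned and built the jetson-inference project, along these lines – the input and output filenames below are just placeholders for whichever image you want to classify:
cd jetson-inference/build/aarch64/bin
./imagenet.py images/example.jpg output.jpg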
I don’t like using sample images as they’re generally chosen so that the system generates good results, so I also tried this network on three of my own images as well. I used a picture of an elephant, one of my dog and one of the Sydney harbour bridge.
One thing I did notice when running these images is that they were much slower to process. This is because I sent the full-size original images to the program and not reduced-resolution images like the samples. You can see this from the overlay text, which now appears much smaller relative to the image.
So ImageNet is 62% sure that this is a tusker, which is sort of on the right track, although this elephant is missing its tusk on the camera side.
It’s also 56% sure that my dog is a Toy Poodle. I think this confidence is a bit low because there are a few different poodle type dogs that it has been trained to recognise and they’re all quite similar.
And finally, it’s 73% sure that this is a steel arch bridge. So it got all three of the objects correct in the still images.
We can also pass the program a saved video or a live video stream and it’ll do the same thing in real-time. To do this we obviously need to add a camera to our reComputer Jetson-10, so let’s plug that in first. You can use a CSI camera like the official Raspberry Pi camera module or use a USB camera. I’m using a CSI camera for this example.
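For the live version, the tool just takes a camera URI instead of a filename – csi://0 for a ribbon camera or /dev/video0 for a USB webcam – so the command is roughly:
./imagenet.py csi://0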
So if we try this out on different objects on my desk, you can see that we’re getting an ImageNet overlay telling us what objects have been detected in the image as well as their certainty. So it’s 98% sure that this is a teapot.
It eventually decided that the broccoli wasn’t a green lizard (better demonstrated in my YouTube video) although it wasn’t very confident in its decision.
You’ll also notice that a warning popped up when this live feed started saying that the heatsink is hot and shouldn’t be touched. So it was getting quite hot when running this neural network on a live video stream with a screen recording utility running in the background as well. It’s still quite impressive that the Jetson Nano is able to run this neural network at around 50-70 frames per second while also capturing the screen contents.
It was also able to recognise a keyboard, a pair of sunglasses and my MacBook.
Another interesting thing to look at is the data in the terminal during or after the network has been run. It displays information on the classes that the network thought were most applicable to the sample frame, along with its confidence level in each. It also displays the most likely object and its associated confidence level.
Object Recognition and Location with DetectNet
To actually use this object recognition functionality in a project, there is another network called DetectNet that’ll also give you the location and size of the object detected in the image.
So with this information, you could then build something like a robot car that follows a certain object, like your dog or cat, or a counter that keeps track of birds or certain wildlife visiting your garden.
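As a rough example, DetectNet is launched in much the same way as ImageNet from the jetson-inference bin folder, and it’ll draw bounding boxes with labels and confidence levels over whatever it finds in the camera feed:
./detectnet.py csi://0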
Pose Estimation Using PoseNet
The next network that I had some fun with is one that does pose estimation on people, or just their hands, called PoseNet. This neural network estimates the position of joints and body parts and again can be run on still images, videos or a live video feed.
This is really useful for building robots or machines that accept gestures as inputs, like AR or VR systems, or it can be used to build systems that monitor human behaviour, like counting people who are sitting or standing or estimating which direction people are walking in.
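PoseNet is run in the same way as the other examples – this sketch again assumes you’re in the jetson-inference bin folder and using the CSI camera:
./posenet.py csi://0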
So those are just some of the basic computer vision systems that you can run on the reComputer Jetson-10, but they should give you a good idea of the capability of the system.
Power Consumption of the reComputer Jetson-10
The last thing I wanted to have a look at is the power consumption. The reComputer comes with a 12V, 2A power adaptor that you’d use if you have it plugged into a permanent or non-portable setup.
Running from the power adaptor with its standard configuration, it uses around 2-3 watts when at idle on the desktop with no applications open.
It uses around 8-10 watts when it is heavily loaded – running one of the object recognition models I’ve shown you previously along with a screen recording utility.
8-10 watts is fairly low for a device running on mains power, but at this power consumption, you’d work through a set of batteries quite quickly. This is obviously not ideal for building robots and portable devices, so JetPack has a settings option that allows you to switch the Jetson module to a 5W low-power mode.
In this mode, the power consumption of the module is limited to a maximum of 5W. So if I turn this on through the toolbar, the power consumption drops.
Mine dropped to around 6W. This is probably higher than the 5W stated as I’ve got a keyboard, mouse and flash drive plugged into it as well. If you switch to low-power mode, you’ll also notice that the frames per second drop with the reduction in power. So this power reduction comes at the expense of performance, but still allows about 20-30 fps to be processed and you’ll get an improvement in battery life. So, depending on the project, this might be a suitable option for your application.
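If you’d rather do this from a terminal than the toolbar, the power modes can also be switched with the nvpmodel utility – on the Jetson Nano, mode 0 is the full 10W MAXN mode and mode 1 is the 5W mode, and -q queries the current setting:
sudo nvpmodel -m 1
sudo nvpmodel -q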
Final Thoughts
I’ve really enjoyed trying out the reComputer Jetson-10 over the past two weeks. It’s a neat, ready to run solution that would look right at home on your desk, but still has the versatility to be used in an actual project.
The magnetically latched lid makes tinkering with the carrier board or Jetson Nano a breeze, so it’s definitely one of my favourite features.
I think they could have possibly included an optional fan, as the Jetson Nano did get quite hot during my testing. This was running “flat out” though with a neural network running continuously and a screen recording utility capturing the display contents, so this is probably a worst-case scenario.
The reComputer Jetson-10 is fantastic for getting started with neural networks and deep learning on computers. Be sure to have a look at Seeed Studio’s product page and check out their store for loads of other tech and electronics products and project inspiration.
Let me know what you think of the reComputer Jetson-10 in the comments section below and let me know if there are any computer vision projects that you’d like to see me try out with it.
Today I’ve got an exciting package to share with you, it’s the new Turing Pi 2 which the guys at Turing Machines have sent me to try out and share with you. So a big thanks to them for making this project possible.
This is the successor to the original Turing Pi, and if you’re wondering what a Turing Pi is, it’s essentially an all-in-one solution for creating a compact Raspberry Pi cluster, without the hassle of sourcing power supplies, cables and network adaptors, and then finding a way to connect them together. Something that I know all too well from my last cluster build.
All of the components required to build your ARM cluster are built into a single board. The original allowed 7 Pi Compute Module 3s to be clustered together, while this new board has a number of improvements and upgrades over the original, the most significant being that it’s designed to use the newer Compute Module 4s, so it’s a lot more powerful.
Here’s a video of my unboxing and assembly of my Turing Pi 2 cluster, read on for the write-up:
It’s got an onboard managed gigabit ethernet switch that networks the 4 slots and makes them accessible through one of the two onboard Ethernet ports.
An onboard management controller manages things like fan speed through a J17 connector, interface buttons and LEDs, as well as power to each slot.
Each slot also has some additional interfacing associated with it, so you’ve got HDMI, GPIO and a mini PCIe port available to slot 1, a mini PCIe port available to slot 2, two SATA III ports available to slot 3 and four USB 3 ports available to slot 4.
If you’re going to be using CM4 modules, like I am, then you’ll need to use these adaptor boards to be able to plug them into the SO-DIMM slots.
These adaptor boards also have onboard SD card slots, which you’ll need for the operating system image if you’re using a Compute Module without onboard eMMC storage.
Preparing The CM4 Modules
If you can source the right CM4 modules, you can theoretically create a 16-core cluster with 32GB of RAM. Unfortunately, CM4 modules are pretty scarce at the moment, so I have to use what I’ve got available. I’ve got two 4GB CM4 modules with 32GB of onboard eMMC storage, and I’ve got two 2GB CM4 Lite modules, meaning that they don’t have any onboard storage. One of these Lite modules has WiFi and the other doesn’t, but we’re not going to be using that in this cluster in any case.
The CM4 modules just snap into place on the adaptor boards. There are four holes in the corners to hold them together with some machine screws, but I prefer not to use these as they tend to bend the CM4 modules if you don’t use the right size spacers.
On the two Lite modules, I’ll need to use micro-SD cards to load the operating system. I’m using Sandisk Ultra Plus cards for this, they’re reasonably cheap but are still fast and reliable.
The modules can then just be pressed into the SO-DIMM slots and they’re then ready to go. They are also apparently hot-swappable, meaning you can plug in or remove them from the slots without having to turn the power off, although I’d prefer not to chance this.
Before I plug all of them into the board, we need to do something to assist with keeping the modules cool. I’m going to be using these black aluminium heatsinks by Waveshare. They are just screwed into place over the CM4 module, using the four screw holes in the corners, with some thermal tape between the heatsink and the CPU and Ethernet controller.
Waveshare’s instructions are for the nuts to face outwards, but I think they look better with the brass standoffs and screws the opposite way around so that the screw heads face outwards. This doesn’t seem to cause any issues with the spacing, the nuts fit perfectly between the CM4 modules and the adaptor boards.
Let’s add the heatsinks to all of the modules and we can then plug them into our Turing Pi 2 board.
With that done, our cluster is basically assembled. All we need to finish it off is to plug in a power supply and an Ethernet cable.
Powering The Turing Pi 2
Power is supplied to the board through a 24 pin ATX connector from a typical computer power supply. They recommend using a compact supply, like the PicoPSU, mine hasn’t arrived yet, so I’m going to be using a 450 watt power supply from another project.
The board only needs a maximum of around 60 watts, so I’ll definitely be changing over to the PicoPSU as soon as it arrives.
Designing & Laser Cutting A Case For The Turing Pi 2
As I mentioned earlier, you can put the Turing Pi 2 board into any mini ITX case. I had a look online for some options, but they’re all too bulky for what I am going to be using the cluster for. I also like the look of the Turing Pi 2 board and modules, especially once all of the power and activity lights are on, so I’m going to design and cut my own from clear acrylic.
I started out with a similar form factor to my water-cooled Raspberry Pi build. Since the mini-ITX board already has screws in the four corners, I could use nylon standoffs and do away with the 3D printed corner pieces. So I could make an all-acrylic design.
I added cutouts for the ports at the back and cutouts for three 40mm 5V fans on the front. You could instead use a single 120mm fan on the side as a quieter solution, but they’re quite thick and the fan would then cover up the CM4 modules, which is what I wanted to avoid in the first place. I also added a cutout for a power button on the front panel and then some ventilation holes to allow the fans’ air to escape at the top and on the back.
With the design done, let’s get it cut out on my laser cutter.
I’m going to use 6mm clear acrylic for the larger side panels to give it some rigidity.
The other panels will all be cut from 3mm acrylic.
Assembling The Turing Pi 2 Case
Once the panels are all cut, we can start assembling our case.
As mentioned earlier, I’m going to be mounting the board using some M3 nylon standoffs. So let’s start by melting an M3 brass insert into each of the holes in the back side panel so that we’ve got something to screw the standoffs into. The melting temperature of acrylic is about 150-160°C, so if your soldering iron has an adjustable temperature setting then set it at 160°C or slightly higher.
Once those are in place, we can screw in our Nylon standoffs. I’m using 8mm standoffs on the bottom and then a series of 20mm standoffs on top of the board until we clear the CM4 modules.
So let’s screw in the 8mm standoffs first.
We can then place the board over them, with the ATX power cable and connector running beneath it. This is hopefully temporary and will be replaced with a small cable and barrel jack once the PicoPSU arrives.
Let’s then add the remaining nylon standoffs to each so that the front side panel clears the CM4 modules. I found that 3 x 20mm nylon standoffs provided enough room for the CM4 modules, so the overall internal width is 70mm.
Now we can peel the protective film off of our other acrylic pieces and push them into place.
Before we close up the main side panel, we also need to mount the power button and fans onto the front panel.
I’m going to use three 40mm RGB fans that I’ll screw into place using some M3 button head screws and nuts. I’m going to leave them unplugged for now as I’ll need to make up a harness to connect them to the 5V supply pins.
The power button I’m going to use is the same one I used for my water-cooled Pi build, the cable should just be long enough to reach the required pins on the opposite side of the board.
Once the fans and power button are secured on the front panel, we can re-insert the front panel into the slots on the 6mm back side panel.
The last thing we need to do is to place the 6mm front side panel over the top to lock the other pieces into place. We’re not going to do that just yet, as we first need to flash the operating system onto the CM4 modules and prepare the SD cards. So let’s move on to the software.
Loading The Operating System Onto The CM4 Modules
Before we can boot the Pis up, we need to load the operating system that we’re going to be using on each of them. This is where you have a few options, depending on what you’re going to be doing with your Turing Pi 2.
You could load different operating systems and/or apps onto each of your Pis and use them as individual servers on your network – for example, have Pi-hole running on one, OpenMediaVault on another, Home Assistant on the third and a Plex server on the fourth. Each Pi will have its own IP address, will be identifiable by its own MAC address, and will act in the same way it would if it were individually connected to any switch on your home network.
Another option, which is the option that I’m going to be setting up, is to install Raspberry Pi OS on each, then install Kubernetes. Kubernetes will have a master node and three worker or slave nodes, and I’ll then be able to just tell Kubernetes what apps I’d like deployed on the cluster and it’ll manage the deployment of the apps automatically. So it’ll decide which Pi to run each app on and can do things like load balancing and adjusting for a missing node if one is removed.
So I’m going to start by flashing Raspberry Pi OS onto each Pi. I’ll have to do this in two ways because two of my modules have onboard storage and two require SD cards.
The ones that have onboard storage need to be installed on the board (or another carrier board) and need to be powered up with boot mode disabled. They can then be individually connected to my computer using the slave USB port so that they act like SD cards, visible to Raspberry Pi Imager.
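If you haven’t flashed a CM4’s eMMC before, the usual way to get it to show up as a mass storage device is the rpiboot utility from the Raspberry Pi usbboot repository – roughly along these lines on a Linux machine, although the build dependencies may differ slightly on your system:
sudo apt install git libusb-1.0-0-dev pkg-config make gcc
git clone --depth=1 https://github.com/raspberrypi/usbboot
cd usbboot
make
sudo ./rpiboot
Once rpiboot has run with the module connected over USB, the CM4’s eMMC shows up like a regular drive that Raspberry Pi Imager can write to.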
For the ones without eMMC storage, I need to just flash two microSD cards using a microSD card reader.
In Raspberry Pi Imager, I’ll set the name of each node and turn SSH on so that we can access it over the network to continue the installation of Kubernetes.
I’ve put the SD cards back into nodes 3 and 4, which have our Lite modules on them, and I’ve flashed Raspberry Pi OS onto nodes 1 and 2. So we can now power it up.
To close up the case, the acrylic pieces need to be lined up with the slots in the main side panel and we can then push it down into place and secure it with four M3 button head screws into the nylon standoffs.
I’m not going to screw the side panel down just yet as I might need to open it up again to get to the modules or SD cards while setting it up.
Booting Up The Cluster For The First Time
I’ve now connected the fans up to get 5V from a USB port, so let’s try booting up our Pis and continue with the installation of Kubernetes.
When you push the power button, the board’s management system starts up each Pi in succession, so first node 1, then nodes 2, 3 and 4.
There are a number of LEDs assigned to each slot and on the adaptor boards. These show power to the slot, Ethernet activity, power on the adaptor board and activity for each CM4 module. So those are what I wanted to keep visible with the clear case design.
After a few minutes, the Pis should all have finished their first boot process. You can also monitor the progress on node 1 by plugging the Turing Pi 2 into a monitor.
You should notice significantly less flashing of the activity LED on the back of each carrier board. You can then move on to setting up Kubernetes.
Setting Up Kubernetes On The Turing Pi 2
I’m just going to go through a summary of the installation process of Kubernetes, if you want to set it up on your own cluster I suggest following Network Chuck’s video, he’ll take you through the entire process step-by-step.
The Kubernetes distribution that I’m going to be installing is called K3S, which is a lightweight distribution that is designed for use on resource-constrained devices like our Raspberry Pis.
After allowing the Raspberry Pis to boot up, we’ll need to SSH into them to install and set up Kubernetes. I’ve already assigned hostnames and static IP addresses to each node on my local network, this ensures that each node is given the same IP address by my router every time it comes online.
I’m going to SSH into each node using PuTTY on my Windows PC, and I’m going to start by setting up the master node.
We’ll install Kubernetes as the root user using a single line with some setup information following it:
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" sh -s -
Once it is installed, we’ll need to copy our master node’s key or token, as we need this to set up our worker nodes:
sudo cat /var/lib/rancher/k3s/server/node-token
We now have a basic cluster running, although it only consists of a single node. So let’s log into our other three nodes and install Kubernetes so that they can join our cluster.
We do this with a similar command to the master, but this time including the master node’s token and IP address:
curl -sfL https://get.k3s.io | K3S_TOKEN="<INSERT TOKEN>" K3S_URL="https://<INSERT SERVER IP>:6443" K3S_NODE_NAME="servername" sh -
Replace <INSERT TOKEN> and <INSERT SERVER IP> with the token that you copied from the master node and your master node’s IP address.
Once we have completed the setup on the fourth node, we should have our cluster ready.
We can confirm that all of our nodes are connected and available by again running the kubectl command on our master node:
kubectl get nodes
Our 4 nodes are now available and our cluster is ready for us to deploy apps onto. I’m not going to go into this here as this post will then be too long, but it essentially involves creating a .yaml configuration file for each app you’d like to deploy on your cluster and then a single command to deploy it from our master node, as shown below.
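As a rough idea of what that looks like, assuming you’ve written a deployment file called my-app.yaml for the app you want to run, deploying it and then checking which node its pods ended up on is just:
kubectl apply -f my-app.yaml
kubectl get pods -o wide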
Final Thoughts On The Turing Pi 2
Before we finish off, let’s take a look at its power consumption. The cluster uses around 25W once it is running a few apps, and when heavily loaded this goes up to a maximum of about 30W. So this is significantly less than running an old laptop or computer instead of the cluster.
It’s also worth keeping in mind that this is with a 450W power supply, so it’ll probably come down by about 5-10W once I switch the cluster over to a smaller PSU. I’ll post an update here when I do.
Overall, I really like how the case has turned out. It’s simple, protects the Turing Pi 2 and still allows you to see into it and see all of the activity and indication LEDs. One addition I might make on the next version is to add some space for one or two 2.5″ SATA drives to be mounted so that they can be easily plugged into the available ports.
Is there anything else you’d like to see me add to the case design? Let me know in the comments section below.
I think the Turing Pi 2 has a lot of potential; the upgrade to CM4 modules unlocks a significant amount of computing power and the all-in-one solution really makes it easy to get started. There is a lot of interfacing available on the board and it’ll hopefully all be made available and accessible through updates to the firmware in the coming months. I look forward to improving my cluster as the community evolves with it.
This Turing Pi 2 board and its firmware are still beta versions, so there will likely be a few tweaks and changes made before the final production runs. But the good news is that they’re launching on Kickstarter this week, so definitely go check their campaign out. I’ll leave a link to it as soon as it goes live. You can sign up for their newsletter and updates in the meantime to stay informed.
Let me know what you think of the Turing Pi in the comments section below, what are you going to use it to run?