Robot Project: Acrylic treat dispenser

Over the holidays, after evaluating alternatives for cutting the acrylic and aluminum, I got a bench top band saw.

processed_PXL_20201230_204526609.jpg

It has allowed me to make all the parts for the feeder quickly and reasonably straight. I even figured out how to make the treat ramp inside to keep them from getting stuck. I used CA glue to bond everything, which involved a bit of a learning curve. My new clamping surface, plus some masking tape to avoid gluing the parts to the bench, worked great. So, here is the assembled unit:

processed_PXL_20210101_210739393.jpg
processed_PXL_20210101_210757998.jpg

I'm very happy with how it came out. Next is installing the servos and getting it running under power. The placement is going to be tricky.

Robot Project: Audio In/Out, Cat Treat Feeder, Refining the Patrol code

Whenever we’re away for a long time we worry about our cats. There are people who come in to feed them, play with them, and change the cat box, but we still worry. Over the years I’ve developed more and more technological solutions to allow us to check in on them remotely. Now, before we go, I set up multiple WiFi cameras around the house, each in a likely cat activity space. I can reposition the cameras remotely, and they capture based on motion and save up a nice movie of all the activity, broken down by hour. They have infrared cameras, so we get twenty-four-hour coverage. This works pretty well and reduces worry a lot.

There was only one gap in this system: sometimes the cats decide to be completely outside the view of all of the cameras for many hours. Being an engineer, I felt compelled to try and build a solution to fill this gap. I decided that I would build a robot rover that could do the following:

  1. Autonomously roam one or more rooms of the house

  2. Take pictures during its patrol and upload them

  3. Record sounds

  4. Play our voices calling for the cats and offering treats

  5. Dispense treats

  6. Allow me to take over and control its functions manually over the internet

The theory is that while a robot would likely make the cats wary, something that sounded like us and dispensed tasty treats would make them get over that and come close enough to be seen on camera, somewhat on demand.

Now that I have the robot assembled and I’ve figured out how to program it, I’ve been working on the patrol code. It is still a work in progress, but I’ve managed to get ten probes into a room without getting stuck, and I have some ideas on how to get to the next level. Along the way I’ve discovered some interesting edge conditions in the Python API code. For example, in some places where they convert from degrees to a number of intervals from the wheel sensor, they use integer math. Not in general a problem, except when the division comes back as zero, in which case the onboard controller takes that as meaning “turn on the motors until you’re told to stop” instead of “stop after some number of wheel sensor signals.” So, every so often everything would be going well, then the robot would need to turn a small amount, and it would just start spinning in place until it hit the code to go forward, which at that point was in a random direction. There was a similar issue with the code to go forward a certain number of centimeters, resulting in the robot running right into whatever wall was in front of it.
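
The fix on my side is straightforward: never hand the API a move that rounds to zero pulses. Here’s a simplified sketch of the kind of guard I mean; the constant and the wrapper are illustrative names, not part of the GoPiGo API, and the real pulses-per-degree value depends on wheel and encoder geometry.

    # Illustrative guard against the "zero pulses means run until told to stop" behavior.
    # PULSES_PER_DEGREE is a placeholder; the real value depends on wheel size and
    # encoder resolution, and turn_fn stands in for the library's actual turn call.
    PULSES_PER_DEGREE = 0.5

    def safe_turn(degrees, turn_fn, min_pulses=1):
        """Only issue a turn if it translates to at least min_pulses encoder pulses."""
        pulses = int(abs(degrees) * PULSES_PER_DEGREE)
        if pulses < min_pulses:
            return False  # skip the move rather than let the robot spin indefinitely
        turn_fn(degrees)
        return True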

I’ve now got safeguards in my code to prevent those conditions, and it has been interesting debugging in both the code and the physical world. One of the mechanical things I ended up having to fix was that I had originally mounted the servo on the front and the ultrasound sensor on the lower deck of the robot (they didn’t say you shouldn’t, and it was out of the way of the camera). It turns out you don’t get the full range of motion of the sensor on the servo because the circuit board hits the upper deck. More importantly, for something that rolls around the floor, it was easy for the low-mounted sensor to miss things that would interfere with the top half of the robot. So, I moved it up to the upper deck. I just pivot it out of the frame when I use the camera.

I also learned that you shouldn’t operate the motors and take a picture at the same time: the ribbon cable (not shielded) for the camera runs right past the power circuit for the motors. There is a lot of RFI when they’re running, so you get some pretty funky images as a result.
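
So the patrol loop now always stops the motors and waits a beat before grabbing a frame. Roughly like this; the stop_motors callable stands in for whatever stop call your motor library provides, and the camera part is the standard picamera module:

    import time
    from picamera import PiCamera

    def take_clean_picture(stop_motors, path="patrol.jpg"):
        # Stop the drive motors first so the unshielded camera ribbon cable
        # isn't sitting next to an active motor power circuit during capture.
        stop_motors()
        time.sleep(0.5)           # let the motor current (and the RFI) die down
        with PiCamera() as camera:
            camera.capture(path)  # capture a still to a file
        return path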

Robot showing amplified speaker, microphone, and new placement of the ultrasonic sensor

I got a very cheap USB microphone dongle to use for audio input… bad idea… the gain on the device was terrible; it worked, but the audio was barely audible over the noise. The noise was because all of its circuits were, yes, you guessed it, right near the motor controller board. I ended up using a nice USB microphone on a USB cable clipped to the side of the robot; that worked like a champ and had enough gain to pick up a cat vocalizing.
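
For actually recording clips on the Pi, one simple approach is to shell out to ALSA’s arecord with the USB mic as the capture device; the device string below is just an example, so check arecord -l on your own Pi for how the mic enumerates:

    import subprocess

    def record_clip(path="cats.wav", seconds=10, device="plughw:1,0"):
        # Use ALSA's arecord to grab a CD-quality clip from the USB microphone.
        # The device string is whatever `arecord -l` reports for your mic.
        subprocess.run(
            ["arecord", "-D", device, "-f", "cd", "-d", str(seconds), path],
            check=True,
        )
        return path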

I also got a tiny rechargeable amplified speaker to plug into the Raspberry Pi’s headphone jack, and that worked fine. I got a 90-degree adapter to keep the wire away from the wheels and mounted it to the back with some double-stick tape. Using the pygame module, it is easy to play any audio file, so we should be able to record ourselves calling to the cats and use that.
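
Playing a clip through the amplified speaker really is just a few lines with pygame’s mixer; a minimal sketch:

    import time
    import pygame

    def play_clip(path):
        # Play an audio file out the Pi's headphone jack via pygame's mixer.
        pygame.mixer.init()
        pygame.mixer.music.load(path)
        pygame.mixer.music.play()
        while pygame.mixer.music.get_busy():  # block until playback finishes
            time.sleep(0.1)
        pygame.mixer.quit()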

Design for cat treat feeder…

The final item is the treat dispenser, which is an entire project in itself. I have a design that I’m working on, and I’ve created a prototype of the dispenser mechanism using components from the Lego Technic collection of parts. The great thing is that they have all kinds of shafts, bearings, gears, wheels, beams, etc… that are all compatible with each other. I’m not qualified to machine anything or even do a competent job designing something to be machined, so these are a godsend and have allowed me to build the core mechanism of the treat dispenser and get it working. The power for the treat dispenser will come from two additional servos, one on each side, which will each push a peg on a drive wheel; they are set 180 degrees apart (see the video for a demonstration in the prototype). Fortunately the folks who make the robot provide a kit with an add-on servo controller board that can control multiple servos, and it comes with a couple of big servos that should work. It also has its own batteries, so it won’t be dependent on the robot’s power.
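
The control side should just be alternating the two servos so each one pushes its peg in turn. A rough sketch of that idea; set_servo_angle is a hypothetical stand-in for whatever call the add-on servo board’s library actually provides, and the channels, angles, and timings are illustrative only:

    import time

    def dispense_one_treat(set_servo_angle, pushes=4, left=0, right=1):
        # set_servo_angle(channel, degrees) is a hypothetical stand-in for the
        # add-on servo controller's real API; the values here are illustrative.
        for i in range(pushes):
            servo = left if i % 2 == 0 else right  # alternate sides; the pegs are 180 degrees apart
            set_servo_angle(servo, 60)             # push the peg on the drive wheel
            time.sleep(0.4)
            set_servo_angle(servo, 0)              # retract for the next push
            time.sleep(0.4)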

You can see the up and down action of the servo being translated to rotation to drive the belt.

I’m going to make the case for the treat dispenser out of 1/8” acrylic sheet, and I’m going to have to cut these parts a lot more cleanly than the last ones because I intend to glue them together. My Dremel has a straight cutting bit and a guide collar, and I think that would work with the panel clamped firmly to a sacrificial piece of wood and the Dremel pressed against a guide fence. I’m going to be experimenting soon.

That’s where I’m at for the moment.







Robot Project: Adding pi camera and making a mounting bracket

I’ve started hacking away on a prototype program to have the robot drive around the room and investigate the objects it finds with the ultrasound sensor. Part of that was to take a picture, so I needed to get a camera. Raspberry Pi makes a nice add-on camera which just comes with a ribbon cable and the camera card. You can just let the ribbon cable anchor the camera to the upper deck of the GoPiGo robot, but it seemed to sit too low, so you’d always take pictures of the ultrasound sensor, and it wasn’t well anchored, so when the robot drove around it would flop back and forth. So I needed to fabricate a bracket.

I decided to make it in two parts: a lower right angle made out of .032” aluminum and an upper mounting plate made of the same 1/8” acrylic sheet that the rest of the body parts are made of. I made the cuts in both the acrylic and the aluminum using my Dremel tool and a cutting disk. It worked pretty well, just needing me to take my time and not heat the acrylic up too much. The only downside is that there isn’t a way to use a guide (or shall I say I haven’t figured that out yet), so the cut edges are a bit erratic. The resulting bracket is charmingly imperfect, but completely functional. I secured it to the upper deck with 8x3mm screws, nuts, and a compression washer; the mounting plate is attached to the aluminum angle in a similar manner. I almost had an issue because, unlike everything else on the GoPiGo, the camera board has 2mm holes. Fortunately I had a couple of 8x2mm screws and nuts and was able to use them to attach the board to the mounting plate. Because of the inaccuracies of my fabrication I had resigned myself to crooked pictures, but due to a happy accident my inaccuracies canceled out and the camera ends up quite level.

CameraMount.jpg

Now back to debugging my patrol code and adding in the image captures.
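
The capture step itself is only a few lines with the standard picamera module; a minimal sketch of the sort of capture routine I have in mind (the resolution, directory, and naming scheme are just placeholders):

    import os
    from datetime import datetime
    from picamera import PiCamera

    def capture_patrol_image(directory="/home/pi/patrol"):
        # Grab a timestamped still so each probe of the room gets its own file.
        os.makedirs(directory, exist_ok=True)
        path = "%s/patrol_%s.jpg" % (directory, datetime.now().strftime("%Y%m%d_%H%M%S"))
        with PiCamera() as camera:
            camera.resolution = (1024, 768)  # keep the files a reasonable size for upload
            camera.capture(path)
        return path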

Robot Project: Assembly and programming

Back in July of 2016 I guess I thought I’d have some time on my hands to play around with robots… that didn’t happen… but I did buy a GoPiGo kit from Dexter Industries along with some accessories. It had been in my office drawer for four years until this week, when I got it out and put it together. It is a nice kit that targets kids 10 years and older and is really quick and easy to assemble. I would say that the YouTube videos are the documents of record on how to assemble and get started with the GoPiGo. The robot is a simple rover powered by a Raspberry Pi and a Debian Linux distribution called Raspbian for Robots. It also has servos, cameras, and ultrasonic sensors that you can add on. There is a simple Python API for programming the robot and interacting with the servos and sensors.

The version that I have is the GoPiGo 2 and the current one is the GoPiGo 3; they have an archive of all of the software and documentation for old versions, and the YouTube videos for the old ones are also still there. I got it assembled with only one mishap where I cracked a small Plexiglas part by over-tightening a fastener; I was able to mend it and continue. The software setup was equally easy: I just plugged in an Ethernet cable, pointed a browser at it, and used their VNC-based desktop to configure WiFi. Once it is on the WiFi network you unplug the cable and you can just log on to the robot wherever it is via WiFi.

I created a github project for my programs, and using git I can just pull them onto the robot. That way I can code on my desktop and move things over via git. You can power the robot for programming purposes using a USB power adapter so that you don’t deplete the batteries while you’re debugging. I’ve successfully shipped over a couple of test programs and tested moving the robot, manipulating the servo, and reading the ultrasonic sensor. Now I’ve just got to think about what my first substantial robot project is going to be.
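
Those test programs are about as simple as it gets; something along these lines (not my exact code, and double-check the function names and the ultrasonic pin number against the Dexter documentation for your version):

    import time
    import gopigo  # Dexter Industries GoPiGo 2 library

    # Simple smoke test: drive briefly, sweep the servo, read the ultrasonic sensor.
    gopigo.set_speed(100)
    gopigo.fwd()             # start driving forward
    time.sleep(1)
    gopigo.stop()

    for angle in range(0, 181, 45):
        gopigo.servo(angle)               # sweep the servo holding the ultrasonic sensor
        time.sleep(0.5)
        print(angle, gopigo.us_dist(15))  # distance reading in cm (pin 15 is the usual sensor port)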

IMG_4712.JPG
IMG_4713.JPG
IMG_4714.JPG


terminal-dashboard v0.0.5

I’ve made a lot of progress on the testing for the dashboard tool; I’m at about 80% coverage across the whole codebase. It was interesting getting the fixtures written for each of the different data sources so that they could be tested. I also added some features to support testing and did some refactoring to make the code more testable. I moved most of the code for loading and processing the configuration file out of the main script and into a separate module. This allows those functions to be tested in isolation, and they now represent a usable API in their own right. I didn’t want to include the config code directly in the dashboard module since it is coupled to all of the different types of data sources and their dependencies. This way the dashboard code could be used with totally different graph objects or data table objects without any direct coupling to those specific implementations or their dependencies.
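
That separation means a config can be loaded and validated without importing any of the data-source dependencies. As a flavor of the shape of that API (the function and key names here are illustrative, not the tool’s actual module layout):

    import json

    # Illustrative sketch of a config loader kept separate from the dashboard
    # rendering code so it can be tested, and reused, in isolation. The function
    # and key names are hypothetical, not the tool's real API.
    def load_dashboard_config(path):
        """Load a JSON dashboard config and do some basic validation."""
        with open(path) as f:
            config = json.load(f)
        if "pages" not in config:
            raise ValueError("config is missing the required 'pages' section")
        return config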

I used a model of snapshot comparison to verify the graphics rendering code; this involved creating static data sets and controlling the screen size carefully. It also requires some honesty on the part of the tester to carefully verify the renderings and not accept bugs. I have at least one bug in my example dashboard that I so far haven’t been able to reproduce in the test automation… but it is on my list to track down.
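
The pattern boils down to: render a panel from a fixed data set at a fixed screen size, dump the characters that ended up on screen, and compare against a previously reviewed snapshot file. A rough sketch of that helper (the names and paths are illustrative, not my actual fixtures):

    import os

    def assert_matches_snapshot(rendered_text, snapshot_path):
        # On the first run, write the snapshot so a human can review it by eye.
        # On later runs, fail if the rendering drifts from the reviewed snapshot.
        if not os.path.exists(snapshot_path):
            with open(snapshot_path, "w") as f:
                f.write(rendered_text)
            raise AssertionError("snapshot created, please review: %s" % snapshot_path)
        with open(snapshot_path) as f:
            assert rendered_text == f.read(), "rendering differs from %s" % snapshot_path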

My plan is to finish getting through the CLI and config testing and then on to the next round of features and enhancements.

Software Project: terminal-dashboard v0.0.1

Back in 2017 I was working at Elastic and leading the Kibana team (Kibana, if you don’t know, is a web-based tool for querying, analyzing, and visualizing data stored in elasticsearch, and one of its core use cases is rendering dashboards for collections of operational monitoring data in data centers), and being the contrarian that I am, I thought: “What about the green screen folks who manage lots of systems, only use the command line and a terminal, and would balk at installing a whole server just to run Kibana? Why can’t we have a dashboard tool that presents graphs of the same data in a terminal window?” Probably an insane idea, but you’re reading the words of someone who wrote graphical games for the TRS-80 Model I, so it didn’t seem insane to me… So I started out using curses and block graphic characters to create a drawing package with all the essential graphics primitives. Then I built a data table abstraction to allow flexible access to the data without too much coupling… Then things got really busy at work, my attention span disappeared, and I put the project aside.
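
To give a flavor of what “curses plus block graphic characters” means, here’s a toy example in that spirit; it isn’t the package’s actual API, just the underlying idea of drawing bars out of block characters on a curses screen:

    import curses
    import locale

    locale.setlocale(locale.LC_ALL, "")  # so curses can emit UTF-8 block characters

    def draw_vbar(stdscr, x, bottom_y, height, ch="█"):
        # Draw a vertical bar of block characters growing upward from bottom_y.
        for i in range(height):
            stdscr.addstr(bottom_y - i, x, ch)

    def main(stdscr):
        curses.curs_set(0)
        for i, value in enumerate([3, 7, 5, 9, 2]):  # a tiny static "data set"
            draw_vbar(stdscr, 2 + i * 2, 15, value)
        stdscr.refresh()
        stdscr.getch()                               # wait for a key before exiting

    if __name__ == "__main__":
        curses.wrapper(main)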

Three intense years later I’m retired, and while spending some time converting old projects to Python 3 and moving old svn repos to git, I came across this project again… I have the time now, so I decided to push the project forward a bit more. I’ve managed to implement some basic chart types: line, bar, pie, and table. I’ve created data sources for reading and summarizing the syslog and for getting the machine’s process, memory, and disk statistics. I’ve written a driver script that loads a JSON configuration file that sets up the pages, panels, and graphs of the dashboard, displays them, and allows interaction as well as an unattended cycling kiosk mode. At this point I think this thing could actually be useful. Here’s a short demo of it running:

There’s a few things that I want to add in the near future:

  • Ability to monitor other systems via ssh

  • A flexible data source for querying elasticsearch so folks could use it with an existing Elastic Stack deployment

  • Management of credentials using the system keyring so it can auto connect and self deploy through ssh to multiple systems

I think the sub-modules might be useful for other applications, since they are pretty general and not coupled to this particular application.

It’ll be fun to show the next steps…

Cooking...

I’ve been getting back into cooking now that I feel like I have more time and mental bandwidth. I used to experiment a lot more and try new recipes and ingredients, but when my work life got to a certain level of intensity I fell back to more of a rotation of greatest hits and familiar things. Without a work life anymore, I find that it’s easier to take my time and look at a bunch of the cook books I’ve accumulated or search the interwebs for something interesting to cook.

It is also farm share season and the constant influx of a box of vegetables from Appleton Farm helps to drive cooking activities and planning for the week. Making dishes or food products that can be frozen or preserved is a part of this too, we can’t eat all those vegetables in one week, but over the rest of the year we do. Pickles and sauerkraut are also great things to make and enjoy later. I also have a fish share from Cape Ann Fresh Catch and this means that I'll be cooking at least one fish dish every week.

I also like classic dishes done in a classic way; they became classics for a reason, and they became clichés through overuse and losing focus on what was great about them. For example, I made a classic Caesar Salad from scratch a little while ago, and it was amazing, especially with ultra-fresh romaine lettuce from the farm. It was nothing like the watery, bland stuff that you get in most restaurants nowadays… The umami from the anchovies and egg yolks against the bite of the garlic and the acid from the lemon is just amazing.

Cooking, when you can do it in a relaxed way, is extremely satisfying: the process is engaging, the result happens in a finite time, and it is something you can share, so it is also gratifying beyond just consuming the food. When I was much younger I cooked in restaurants as a short-order cook, turning and burning as it were, and I learned how to relate to food, how to prep, how to be confident about the process, and how to multi-task to get things out at the same time. I remember closing the restaurant and feeling completely drained and yet calm after an especially intense shift. It also formed bonds with the rest of the team in the kitchen, even if you were very different people; the intensity that you shared drove you together for that time, and when you were flowing together and it worked, it was a great experience.

Software Project: slides-sound v1.0.0 shipped...

So, the final project that I’m going to convert to Python 3 has shipped. I restructured it into a standard Python project, converted it to Python 3, fixed a bunch of bugs and things that just weren’t good… added packaging, and did some manual testing. It is here on github (https://github.com/jpfxgood/slides-sound) and can be installed from pypi by doing python3 -m pip install slides-sound.

This is really an experiment more than anything, and I intend to keep iterating on it, so possibly massive changes will come. I’m not writing formal tests for this one because honestly it doesn’t warrant it yet. If it ever stabilizes then I’ll dig in and do that.

There were more changes than I expected because some of the packages I used had changed in unexpected ways, some of the ways I was doing things were completely inefficient, and stuff was broken because it was just hacked together ;-). So now it is a lot cleaner and less hacked…

You can check out the readme and the doc for further information…

This is the end of the Python 2 to Python 3 saga for now; there are a few other things that I haven’t published that I may convert in the future, but these are the ones I use most often and they’re published.

Here’s a parting video generated using the slides and music scripts:

Pictures from my several years commuting to Berlin.


Software Project: ped-editor v1.2.0 shipped!

A new feature release of the editor. The feature is that when you pipe content into the editor, like “find ~ | ped”, or you use the F10 shell command dialog, the buffer will update in the background for as long as the command produces output. You can press Ctrl-F in that buffer to toggle following, i.e. the screen will automatically show you the end of the stream and the latest content. If you want to follow a file on disk, you can use “tail -f filename.log” as the command and view it in the resulting buffer. All of the regular read-only buffer commands are available: copying content, searching with regular expressions, etc…

There are new tests, since the underlying StreamEditor class is used in the StreamSelect component, and the existing tests caught a lot of side effects that I had to clean up as I did this. I have to say, having sensitive tests has made this whole thing a lot safer; in the past I probably would have pushed my first cut of this and then discovered the problems later. Now, it all works and is covered with regression tests.

I’ve also updated the wiki usage documentation with up to date screen shots and more information about how the editor works.