
DotBot (Another Mastodon Bot)

Well, it’s been two weeks since I wrote about the @WeShouldBuild bot on Mastodon and somehow I created another bot. This one is called @DotBot and it creates a new piece of dot-based art once per day. (You can see an example above.)

I once again used Mastodon.py for this (GitHub and Docs) and there are twice as many lines of code for this one, which is still less than 50 lines. This one generates an image, saves it to disk, then posts that along with a small amount of text. A cron job kicks things off at 8:30am CST every day.
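For anyone curious what that flow looks like, here’s a minimal sketch of posting an image with Mastodon.py. This isn’t the actual DotBot source; the token file name, server, and image path are just placeholder assumptions, and the drawing part is waved away.

```python
# Minimal sketch of the media_post / status_post flow -- not the actual
# DotBot source. Token file, server, and image path are placeholders.
from mastodon import Mastodon

mastodon = Mastodon(
    access_token="dotbot_usercred.secret",   # hypothetical saved access token
    api_base_url="https://botsin.space",
)

image_path = "dots.png"  # stands in for whatever the dot-drawing code saves to disk

media = mastodon.media_post(image_path, description="Colorful dot-based art")
mastodon.status_post("Here are today's dots!", media_ids=[media])
```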

As for these dots? I realized it’s something I’ve been toying with for over a decade!

The image above was generated with a Processing sketch in April 2011. At that point I had probably been using Processing for less than a year. I had a lot of fun exploring creating images with code.

Here’s some fun I had projecting the images onto a large white wall and taking photos of myself in front of it.

I found a folder dated September 2017 that had a bunch of generated images which I think I may have compiled into a video with each image being a frame. (I’ve done this other times, I just can’t find an output file for these. I probably did them last minute for the Maker Faire Milwaukee Dark Room and never got a chance to document them!)

I did find these images in the folder though… Not sure what I was doing, but I like it. If I had to dig up some related things I’d pick annular (more info) from 2013 and maybe Shape Grid Circles from 2019. I realized there are a few other related things but maybe they warrant their own post. Especially if I can find the videos. (If not, I can recreate them.)

Okay, that went a bit off course (of course) but the important thing is, if you’re on the Fediverse and want to see new dot-based colorful art each day, just follow @dotbot@botsin.space

And if you’re not on the Fediverse you can still get the RSS feed! Just grab https://botsin.space/@dotbot.rss


We Should Build (A Mastodon Bot)

I joined Mastodon at the end of October 2022 (I’m @rasterweb@mastodon.social) and it’s been amazing. I can’t tell you how much I love a platform with no ads, no tracking, and no far-right extremists. So after (exactly) two months I’m pleased to be releasing my first Mastodon bot. If you’re interested, give a follow to @weshouldbuild@botsin.space

I’ve described it as “A helpful bot that will give you suggestions for things to build…” and it’s just simple and silly and it was an easy way for me to get started with bots on Mastodon. It comes from the work I did on the MMPIS at Milwaukee Makerspace.

I used Mastodon.py for this (GitHub and Docs) and even though I set out to dig deeper into Python back in 2020, things got derailed (ha, as did the world), so my Python skills are still lacking. But this is seriously less than 20 lines of code and I had a fun time writing it.
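To give a rough idea of what a bot like this looks like, here’s a minimal sketch of a “We Should Build” style bot in Mastodon.py. It is not my actual code; the word lists and token file are made-up examples.

```python
# A minimal sketch of a "We Should Build" style bot -- not the author's code.
# The word lists and token file name are made-up examples.
import random
from mastodon import Mastodon

adjectives = ["giant", "tiny", "solar-powered", "cardboard"]
things = ["robot", "birdhouse", "synthesizer", "catapult"]

mastodon = Mastodon(
    access_token="weshouldbuild_usercred.secret",  # hypothetical saved access token
    api_base_url="https://botsin.space",
)

suggestion = f"We should build a {random.choice(adjectives)} {random.choice(things)}!"
mastodon.status_post(suggestion)
```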

Besides my main Mastodon account I have another account that is really just a backup (for testing and… because backup) and two accounts on servers run by friends. I chose to put this bot on botsin.space because it’s a server specifically for Mastodon bots. (I also sent the admin a donation to help cover the server costs.)

I’m excited about creating another bot already, and I have an idea what I want to do. I think this will be a good way for me to flex my coding muscles since I’ve been writing the same sort of things for the past two years and more Python experience could certainly come in handy.


Game Boy Camera Dumper

As previously mentioned, I finally got my Game Boy Camera up and running with the ability to transfer images to a computer. In this post I’ll outline the method I am now using to do that, which is an update from the first post.

I put the code I am using on GitHub in a repository called Game Boy Camera Dumper. (I wasn’t sure if I should use the word “transfer” or “capture” instead of “dumper” but I’ve always been a fan of Perl’s Data::Dumper module, so that clinched it.)

You’ll need Python 3 installed, along with the pySerial library (argparse is part of Python’s standard library). pip (the Python Package Installer) should get pySerial installed easily enough.

Once you’re ready, type python3 gbc-dumper.py /dev/tty.wchusbserialfa440 in the terminal (after you are in the right directory) and wait for the magic to happen… (And yeah, you don’t type the /dev/tty.wchusbserialfa440 part unless that’s the serial port your Arduino shows up as on your computer. On Windows it’ll be COM3 or something else; on Linux… if you use Linux you’ve probably got this part figured out already.)

The script will echo back the serial port you specified (yeah, I should add something more useful there in the way of feedback) and then wait for you to “print” the photo from the Game Boy.

You may also notice a new file appear. Our output file gbc-output.txt has been created, though it’s still empty at this point.

So move over to your Game Boy Camera Unit and do the print thing! Send that data, and the Game Boy screen will show a “Transferring” image…

Here comes the data! You’ll see the data from the Game Boy/Arduino flowing into the terminal, and it’s also being saved to the gbc-output.txt file…

At some point the data will stop flowing. You can either end it here, or continue to “print” photos. The script will keep adding to the output file and easily hold all your images until you decide to stop. I usually transfer all of the photos on my Game Boy. (Though I don’t delete them all until I’m sure I’ve got them all safely on my computer.)

How do you end it? Hit Control-C to stop the script. The script will exit, the output file will be written and closed, and you’re done. For now. (I should add that bit of instruction to the script itself.)
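The heart of the script is just a read-and-append loop over the serial port. Here’s a rough sketch of that idea, not the actual gbc-dumper.py from the repository; the baud rate is an assumption, so match whatever your Arduino sketch uses.

```python
# Sketch of the read-and-append loop -- not the actual gbc-dumper.py.
# The baud rate is an assumption; match whatever your Arduino sketch uses.
import argparse
import serial

parser = argparse.ArgumentParser(description="Dump Game Boy Camera data from serial")
parser.add_argument("port", help="serial port, e.g. /dev/tty.wchusbserialfa440 or COM3")
args = parser.parse_args()

print(f"Listening on {args.port} -- press Control-C to stop.")
ser = serial.Serial(args.port, 115200, timeout=1)

with open("gbc-output.txt", "a") as outfile:
    try:
        while True:
            line = ser.readline().decode("ascii", errors="ignore")
            if line:
                print(line, end="")   # echo to the terminal
                outfile.write(line)   # and append to the output file
    except KeyboardInterrupt:
        pass                          # Control-C ends the dump

ser.close()
print("\nDone. Data saved to gbc-output.txt")
```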

Here’s our output file. You’ll want to open it and copy the contents to the clipboard. (Or you can just cat gbc-output.txt | pbcopy on macOS.)

You can head over to the decoder (choose V1 or V2 depending on what firmware your Arduino is running) and then paste in the text you copied and translate all that text into beautiful low-resolution images! (And then save them.)

Because I try not to rely on anything that isn’t running on my own computers if I don’t have to, I run a copy of the decoder locally, and I’ve stripped it down a bit to just show me the essentials. (Big thanks to Brian for publishing and sharing his work.)

Oh, there is one more thing I do. I upscale the images to 1280 x 1152 using another Python script. I made it a drag ‘n drop application on macOS using Platypus for ease of use. (The Python code is pretty simple, and you can see it here.)
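If you just want the gist without clicking through, here’s a guess at the upscaling step using Pillow. It is not my actual script, and the nearest-neighbor resampling is my assumption, chosen to keep the chunky pixels sharp.

```python
# A guess at the upscaling step using Pillow -- not the linked script.
# NEAREST resampling is an assumption, to keep the chunky pixels sharp.
import sys
from PIL import Image

for path in sys.argv[1:]:
    img = Image.open(path)
    big = img.resize((1280, 1152), Image.NEAREST)
    big.save(path.rsplit(".", 1)[0] + "-big.png")
```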

That’s my process for now. In a discussion with another Game Boy Camera enthusiast I suggested that it might be possible to write a Processing application to dump the image data, display the image, and then save the image to disk, but I don’t know that it’s something I’ll take on anytime soon.


Python Image Resizing

[Image: python-editing]

My journey to Python continues. I’ve been playing with a Raspberry Pi project and it’s been working quite well! In the meantime, I’ve still got work to do, and lately (at work) I’ve had to deal with a lot of screenshots that get emailed. When I started it was just a few, so I’d open the PNG files in Photoshop, resize them, and save them out as compressed JPG files. This started to get painful as I had to do more of them. As a fun little comparison, while Photoshop CS5 runs quite fast on my 2012 MacBook Pro, Photoshop CC 2019 is terrible on my 2017 iMac.

Side-rant: I hate Adobe’s forced subscription model, and Photoshop CS5 does most of what I need to do. Adobe’s Creative Cloud is caught in a forced update cycle where they have to add things, many of which may not be useful (to you) and are probably resource hogs, too. CC 2017 was quite a bit faster than CC 2019. (End of rant!)

Anyway, back to Python. Since I’m often looking for excuses to write Python lately, I checked to see if there was an easy way to resize images and convert from PNG to JPG. There sure is, and it took me just a few minutes to find some working code. This sped up my screenshot situation quite a bit! The one hiccup in finding Python example code online is that it’s often for Python 2.x instead of Python 3.x and needs some minor adjustments.
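Here’s a minimal Python 3 sketch of that resize-and-convert step using Pillow. The target width and JPEG quality are just example values, not whatever I actually settled on.

```python
# Minimal Python 3 sketch of resizing a PNG and saving it as a compressed JPG.
# The target width and JPEG quality here are just example values.
from PIL import Image

img = Image.open("screenshot.png")
ratio = 1200 / img.width                          # scale down to 1200px wide
img = img.resize((1200, round(img.height * ratio)))
img.convert("RGB").save("screenshot.jpg", "JPEG", quality=80)  # drop alpha for JPEG
```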

[Image: dropscript]

While typing Python commands in the terminal was working fine, I’ve been around the block before with this sort of thing… In fact, in 2012 Wilfredo Sanchez put out DropScript and I used it for many Perl scripts to make my life easier. Luckily I wasn’t the only fan of DropScript, and there’s a modern version of DropScript! Oh yeah, so the way it works is that you can take a script (written in whatever language) and make it run when you drop files on it. It’s a customizable droplet. I remember I had a big folder full of droplets for web development and file management. It was handy.

So it took another 10 minutes, but I converted my Python script to a DropScript application so now things are even faster… drag and drop, no typing!

I still need to make my Python script a little more flexible. For instance, I have to drop individual files on it, but it would be nice to be able to drop a folder full of files onto it. Baby steps, right? I’ll keep hacking away at Python as I get time. And I will admit, the cleanliness of Python is sort of nice…
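Purely as an illustration of that folder idea (my script doesn’t do this yet), one way it could work is to walk any dropped directory and collect the PNGs inside it:

```python
# Purely illustrative: one way a droplet could accept folders as well as files.
# Anything dropped that is a directory gets walked for PNG files inside it.
import os
import sys

def collect_pngs(paths):
    for path in paths:
        if os.path.isdir(path):
            for root, _dirs, files in os.walk(path):
                for name in files:
                    if name.lower().endswith(".png"):
                        yield os.path.join(root, name)
        elif path.lower().endswith(".png"):
            yield path

for png in collect_pngs(sys.argv[1:]):
    print("would resize:", png)   # hand each file off to the resize/convert code
```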


uArm Swift Pro Plotting

[Image: urm-swift-pro-00]

We recently got a uArm Swift Pro robot arm at Brinn Labs and I’ve been putting it through its paces. It comes with software called uArm Studio that lets you do a lot of things, but for this post I’ll focus on drawing (or “plotting”) using a pen.

[Image: heart-bw]

One of the tools I’ve used in the past with the Egg-Bot was StippleGen2 from our friends at Evil Mad Scientist Laboratories. StippleGen2 is a program written in Processing that takes an image and converts it into a series of lines, or more specifically, a single line, which is suitable for plotting.

Above is the image I started with, an “8-bit heart” as I call it. It’s a simple black and white image of a low-res heart. (Great for Valentine’s Day, right?)

[Image: heart-tsp]

After running the heart through StippleGen2 and choosing the appropriate complexity of the line drawing I wanted, I saved out the file as an SVG format vector file. Perfect for plotting. (In fact, since it’s a single line, the z axis never has to move up once it starts.)

[Image: uarm-studio]

I fired up uArm Studio and chose the Draw/Laser feature, and then loaded in the SVG file. I did have to scale it up a bit, as I still don’t have the exact dimensions I should use for artwork in uArm Studio.

Once the file is loaded, you hit start and there’s a step where you set the z axis so it knows where the pen hits the paper. At this step, I wish there was a little more control over how the z axis moves. I think the smallest increment is one millimeter, and I think it should be smaller. (Most CNC software has some adjustment to how much you move things, so I can see adding in 0.5mm and perhaps 0.2mm as well.)

[Image: urm-swift-pro-01]

While the uArm Swift Pro is awesome, I’m still going to be a little critical… One of the issues I’ve come across in the Draw/Laser part of the software is that the speed seems to be hard-coded, with no way to adjust it to go faster (or slower). As someone who understands G-code and how CNC machines work, I found this a little annoying…

[Image: urm-swift-pro-03]

So I set about finding a solution. I first posted a message on the forum, but then Chinese New Year hit and it seemed as though it would be two weeks (or more) before I got an answer. I had dug around and found that the Draw/Laser part of the software generates a G-code file and drops it at ~/uarm/Temp/files/gcode/tmp_pen.gcode, and I assumed that by editing the feedrate in that file I could speed things up… I was right!
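The edit itself is simple enough that it can be scripted. Here’s a sketch of the idea: rewrite every F word in the generated file to a faster value. The path is from the post above; the 2000 mm/min feedrate is a made-up number, not the one I actually used.

```python
# Sketch of the feedrate edit: rewrite every "F" word in the generated G-code.
# The path comes from the post; F2000 is a made-up example value.
import os
import re

path = os.path.expanduser("~/uarm/Temp/files/gcode/tmp_pen.gcode")

with open(path) as f:
    gcode = f.read()

# Replace any feedrate word (e.g. F1000 or F1000.0) with the new value.
gcode = re.sub(r"F\d+(\.\d+)?", "F2000", gcode)

with open(path, "w") as f:
    f.write(gcode)
```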

[Image: urm-swift-pro-04]

My first attempt was to connect with the arm using Universal Gcode Sender, which in this case was not universal, and failed to properly talk to the arm. I dug around a bit more and found simple_stream.py which is a Python script to stream G-code to a device. Sadly, it was not compatible with Python3, but luckily, I’ve been writing a lot of Python lately, so I fixed it.
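The idea behind simple_stream.py is straightforward: send one line of G-code, wait for the controller to answer “ok”, then send the next. Here’s a rough Python 3 sketch of that approach, not my fixed version (see the update at the end of the post for the actual code); the baud rate is an assumption.

```python
# Python 3 sketch of the simple_stream.py idea: send a line of G-code,
# wait for the controller to answer "ok", then send the next line.
# The baud rate is an assumption; see the linked repo for the real script.
import sys
import time
import serial

port, gcode_file = sys.argv[1], sys.argv[2]
ser = serial.Serial(port, 115200)
time.sleep(2)                      # give the board a moment to reset

with open(gcode_file) as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith(";"):
            continue               # skip blank lines and comments
        ser.write((line + "\n").encode("ascii"))
        while True:                # block until the controller acknowledges
            response = ser.readline().decode("ascii", errors="ignore").strip()
            if response.startswith("ok"):
                break

ser.close()
```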

After I got it working (that is, after much hacking at the original code) I found that I could easily speed up the drawing to half the time. In this case, 5 minutes with the Python script versus 10 minutes with uArm Studio. For TSP art, great precision tends not to matter too much, and I think I could speed it up even more.

I’ll work on cleaning up my Python code and seeing if I can get it online in case others want to muck around with it.

Update: Code is here: https://github.com/raster/uArm-GCode-Streamer