I built my first RepRap back in 2012, and it worked for a few years, and things broke, and I usually fixed them, but when I got my Maker Select Plus I sort of pushed the old RepRap off into a corner. I eventually loaned it to someone at Milwaukee Makerspace, who promptly broke it, and then I repaired it (again) and got it working, a bit…
To be honest, the machine is quite a mess, but I’ve decided to stop being sentimental about it. It’s existed for a while now as a “This is how we used to build printers!” example, but I decided that the time has come to take drastic measures, and it’s all coming apart.
Over the years I’ve managed to scrounge up some nice components for a new build. Some 450mm lead screws from an unnamed medical facility, some 12mm x 720mm smooth rods from an old laminator, etc. Couple that with the donor parts from the old RepRap and I’ve got most of what I need to build a new machine.
The one thing I don’t have is extrusion for the frame, but Mark (of SoM and UMMD fame) does. He’s got a pile of 40mm Aluminum extrusion which I might acquire to start on this new RepRap journey. (If I do not acquire it, I may end up going with 20mm extrusion. Not ideal, but possibly more affordable.)
I won’t go to the lengths that Mark has in building his heavy-duty industrial-style printers, but I’m headed in that direction just a bit. I’ve looked at the Wilson TS, other T-slot designs, and any other printer using Aluminum extrusion, and I’ve got a rough design figured out. I may try to use machined parts rather than printed parts where I can (meaning where it’s practical and affordable.)
I’ll probably stick with a 200mm x 200mm heated bed for now (since I’ve got two of them) but since I have the long lead screws already, I’ll be shooting for a 200mm x 200mm x 400mm build volume. Quite a bit more Z than the 180mm of my Maker Select Plus.
While I want this to be an economical build, mostly by using components I already have, I’m not trying to build a super-cheap 3D printer. I considered buying a second printer, but the pile of parts and a mostly functional donor machine convinced me to go the route of designing and building my own. Plus, this means I’ll have a printer to experiment on while still having another that actually works! (In theory, anyway.)
It’s that time! Time for another LinkDump post. It’s basically a blog post that links to other things on the World Wide Web, and often has little to no commentary. Every now and then I’ll just post some links to things I’ve read or looked at or need to check out in the future, or just want to share.
One of the end effectors included with the uArm Swift Pro is a stylus which can be used with a tablet instead of a human finger, which is handy, because robots don’t have human fingers.
Over at Brinn Labs we’ve been trying to diagnose a problem with an iPad in a kiosk that keeps going to a black screen. I’ve already done a few tests to diagnose it, but one test I couldn’t easily do was a stress test, running through the app over and over again… enter our robot overlords!
In uArm Studio you can use a “Blockly” interface to program the movements. If you’ve used Scratch or another block-based programming interface, Blockly is one of those.
Blockly is easy to use, but it can also be frustrating if you already know how to write code. I wanted to walk through this exercise using only Blockly, and for a non-coder it’s a great, easy-to-use interface… for someone who loves code, not so much.
And then I clicked on the JavaScript view. Aha! Real code! But! You cannot edit it… it appears to be read-only, just a rendering of the code you created with the block interface. That’s not fun.
But wait! There’s an XML view… featuring XML you’d probably never write by hand. The XML view is important, though, because that’s the format uArm Studio uses to store the file on disk. I haven’t tried editing the XML yet to see what uArm Studio does with it, but it might be worth a try.
To be honest, I’m much more interested in the uArm Python SDK which looks like something I’d enjoy digging into. (Especially with my new-found love of Python.)
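Just to give a sense of what that might look like, here’s a rough sketch using the SDK’s SwiftAPI wrapper. The serial port and coordinates are placeholders (and I haven’t run this exact snippet), but tapping the same spot over and over is basically just a loop of position moves:

```python
# Rough sketch using the uArm Python SDK's SwiftAPI wrapper.
# The serial port and coordinates below are placeholders, not values from the real setup.
from uarm.wrapper import SwiftAPI

swift = SwiftAPI(port='/dev/cu.usbmodem1411')   # hypothetical port name
swift.waiting_ready()

for _ in range(100):
    # hover over the target, then drop down so the stylus "taps" the screen
    swift.set_position(x=200, y=0, z=60, speed=10000, wait=True)
    swift.set_position(x=200, y=0, z=30, speed=10000, wait=True)

swift.disconnect()
```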
Anyway, here’s a video of the uArm Swift Pro in action touching the iPad to work through the app… and after that is a time lapse from a camera that was running over the weekend to make sure nothing went wrong.
My journey to Python continues. I’ve been playing with a Raspberry Pi project and it’s been working quite well! In the meantime, I’ve still got work to do, and lately (at work) I’ve had to deal with a lot of screenshots that get emailed to me. When it started it was just a few, so I’d open the PNG files in Photoshop, resize them, and save them out as compressed JPG files. That started to get painful as the volume grew, and as a fun little comparison, while Photoshop CS5 runs quite fast on my 2012 MacBook Pro, Photoshop CC 2019 is terrible on my 2017 iMac.
Side-rant: I hate Adobe’s forced subscription model, and Photoshop CS5 does most of what I need to do. Adobe’s Creative Cloud is caught in a forced update cycle where they have to keep adding things, many of which may not be useful (to you) and are probably resource hogs, too. CC 2017 was quite a bit faster than CC 2019. (End of rant!)
Anyway, back to Python. Since I’m often looking for excuses to write Python lately, I checked to see if there was an easy way to resize images and convert from PNG to JPG. There sure is, and it took me just a few minutes to find some working code. This sped up my screenshot situation quite a bit! The one hiccup in finding Python example code online is that it’s often for Python 2.x instead of Python 3.x and needs some minor adjustments.
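For the curious, here’s roughly what that looks like… a simplified sketch using the Pillow library; the 1280 pixel max size and JPEG quality are just example values, not necessarily what I settled on:

```python
# Simplified sketch: resize a PNG screenshot and save it as a compressed JPG.
# Uses Pillow; the 1280px max dimension and quality=80 are arbitrary examples.
import sys
from PIL import Image

for path in sys.argv[1:]:
    im = Image.open(path)
    im.thumbnail((1280, 1280))                  # shrink in place, keeping the aspect ratio
    out = path.rsplit('.', 1)[0] + '.jpg'
    im.convert('RGB').save(out, 'JPEG', quality=80)
    print('Wrote', out)
```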
While typing Python commands in the terminal was working fine, I’ve been around the block with this sort of thing before… in 2012 Wilfredo Sanchez put out DropScript, and I used it with many Perl scripts to make my life easier. The way it works is that you take a script (written in whatever language) and turn it into an application that runs when you drop files on it. It’s a customizable droplet. I remember having a big folder full of droplets for web development and file management. It was handy. Luckily I wasn’t the only fan of DropScript, and there’s a modern version of it!
So it took another 10 minutes, but I converted my Python script to a DropScript application so now things are even faster… drag and drop, no typing!
I still need to make my Python a little more accepting. For instance, right now I have to drop individual files on it, but it would be nice to be able to drop a folder full of files onto it. Baby steps, right? I’ll keep hacking away at Python as I get time. And I will admit, the cleanliness of Python is sort of nice…
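For what it’s worth, handling a dropped folder probably isn’t much extra work. Something like this (just a sketch, not what I’m actually running yet) would walk a folder and feed every PNG it finds through the same Pillow conversion:

```python
# Sketch: accept files or folders as arguments and convert every PNG found.
# Same Pillow approach as before; the size and quality are example values.
import sys
from pathlib import Path
from PIL import Image

def resize_to_jpg(png_path):
    im = Image.open(png_path)
    im.thumbnail((1280, 1280))
    im.convert('RGB').save(png_path.with_suffix('.jpg'), 'JPEG', quality=80)

for arg in sys.argv[1:]:
    p = Path(arg)
    if p.is_dir():
        for png in sorted(p.rglob('*.png')):   # walk the whole folder
            resize_to_jpg(png)
    elif p.suffix.lower() == '.png':
        resize_to_jpg(p)
```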
In the interest of not waiting too long after I build something to share it here, I’m going to drop a bunch of random info into this post, knowing that it may not be well written or cohesive, but I’ve decided that’s okay…
Logo is an educational programming language known for its use of turtle graphics. Commands for movement produce drawings on a screen that teach students to understand, predict and think about the turtle’s motion by imagining what they would do if they were the turtle.
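Fittingly, Python ships with a turtle module modeled on Logo’s turtle graphics, so the classic “draw a square” exercise looks something like this:

```python
# The classic Logo-style "draw a square" using Python's built-in turtle module,
# which is modeled on Logo's turtle graphics.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # walk forward 100 units, drawing a line
    t.right(90)      # turn 90 degrees to the right

turtle.done()        # keep the window open until it's closed
```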
And finally, if you’re not familiar with Brinn Labs (where this was created), we were the exhibit development group of a children’s museum, and we made safe, durable, fun, educational exhibits for museums and other institutions, mostly aimed at early learners.
Here’s the original concept I shared with the team. Things always change during the development process, but this is still pretty close to what we came up with.
In designing the physical board and pieces, I took inspiration from the block interface used by Scratch. In Scratch, pieces fit together to help you figure out where things go, and color also helps denote function.
Here’s a mock-up of the board where the pieces will go. The board has pockets that the pieces fit into, and in each pocket is a sensor that reads the back of the piece. Note that the pieces only fit into the pocket that matches the function. You can’t put a direction piece into the color pocket. The pieces also cannot fit in backwards or be rotated in the wrong direction. Designing this to work took some time, and it’s all there to help you get it right.
The right side shows the indent for the loop code, as well as the parentheses that many programming languages use. Note that the loop pieces are not required, but if you put in the beginning loop statement and not the end loop statement (or vice versa) you’ll get an error.
We’re using the QTR-8RC Reflectance Sensor Array boards from Pololu. Each pocket has one sensor board to read the barcode on the back of the pieces. The barcode allows us to use a binary code for each piece. This means no technology is embedded into the pieces, and they’re (fairly) cheap and easy to replace as needed.
Here’s a look at the print file for some of the pieces, showing the back. You can also (faintly) see the cut lines for the pieces.
Here’s a portion of the binary code for some of the pieces. We actually doubled it up as a sort of checksum to make sure things were read right.
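I won’t post the actual firmware, but the idea is simple enough to sketch in Python: each reflectance reading becomes a bit, and since the code is printed twice, the two halves have to agree before we trust the read. (The 4+4 bit split and the threshold below are made-up examples, not the real values.)

```python
# Illustrative only: each reflectance reading becomes a bit, and the piece's code
# is printed twice, so the two halves must match before we trust the read.
# The 4+4 bit split and the threshold value are made-up examples.
THRESHOLD = 500   # cutoff between "light" and "dark" readings

def decode_piece(readings):
    """readings: the eight sensor values from one pocket's QTR-8RC array."""
    bits = [1 if r > THRESHOLD else 0 for r in readings]
    first, second = bits[:4], bits[4:]
    if first != second:
        return None   # misread, or no piece in the pocket
    return int(''.join(str(b) for b in first), 2)   # piece ID, 0-15

print(decode_piece([900, 100, 900, 100, 900, 100, 900, 100]))   # -> 10
```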
Each sensor is mounted from the back of the panel with a 3D printed mount that sets it right below the surface. We found a gray filament that was a pretty close match to the gray HDPE we used for the front panel. The top part of the sensor mount matches the holes milled into the HDPE by the CNC router, and includes the radius of a 1/4″ router bit.
My favorite part about 3D printing the sensor mounts is that you can set the depth of the screw holes to match the screws you’re using and the material you’re screwing into, so the screws go in exactly as far as you want and no farther. No screws poking through the material, and no screws too shallow to get a good hold.
We couldn’t leave the sensors exposed, so we laser cut clear acrylic pieces to place into the pockets. Hopefully they’ll stand up to the abuse they’ll get, but if they get too scratched or worn, it’s easy to swap in a new one. (If you’re keeping track, we’ve used a 3D printer, a laser cutter, and a CNC machine so far…)
Here’s a look at a mock-up of the pieces. This is an early iteration but shows some of the color choices as well as all the pieces together.
I’m not showing you the inside of the cabinet, but all of the sensors connect to a microcontroller that is connected to a computer. The microcontroller reads the sensors, determines which pieces are in which pockets, and reads the button presses to run the code on the computer and reset things.
To program the turtle, you put the pieces in the tray and press “Run”. If you get an error it will show on screen and also blink an LED on the board at the “line” where the error is. Once you correct your error(s) you can hit “Run” again and watch your code execute. You can run it over and over; pressing the “Reset” button clears the screen and sends the turtle back home.
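If it helps to picture what happens behind the scenes, here’s a rough sketch of the “Run” flow… read the pockets, check that the program makes sense (including matched loop pieces), and only then let the turtle go. The piece names and error message here are made up for illustration:

```python
# Rough sketch of the "Run" flow, not the actual exhibit code: piece names and
# the error message are made up. Validate the program first, then execute it.

def validate(program):
    """program: list of piece names in pocket order, as decoded from the sensors."""
    errors = []
    if program.count('LOOP_START') != program.count('LOOP_END'):
        errors.append('Loop pieces must be used in pairs.')
    return errors

def run(program, execute):
    errors = validate(program)
    if errors:
        for e in errors:
            print(e)       # on the exhibit this shows on screen and blinks an LED
        return
    for piece in program:
        execute(piece)     # move the turtle, turn it, change the pen color, etc.

# A program with an unmatched loop start reports an error instead of running:
run(['LOOP_START', 'FORWARD', 'TURN_RIGHT'], execute=print)
```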
We didn’t implement pen up/pen down, because this is a simplified version meant for young children/early learners. I think that as a STEM component it turned out well, and creating it allowed me to combine my love of computers, programming, turtles, Logo, and design into something awesome.
I hope Seymour Papert would be proud of this. If you’re not familiar with his work, he was considered the world’s foremost expert on how technology can provide new ways to learn and teach mathematics, thinking in general, and other subjects.
People laughed at Seymour Papert in the 1960s, more than half a century ago, when he vividly talked about children using computers as instruments for learning and for enhancing creativity, innovation, and “concretizing” computational thinking.
Here are some kids having a good time learning about coding!