Scripting Illustrator

I never thought I would write JavaScript for Adobe Illustrator, but here we are. A script to rename artboards sequentially.

You can grab the script Rename Artboards.jsx from gist.github.com because it’s simply one file and I didn’t feel like making an official repository for it.

Why? I found that at work I was renaming artboards in Illustrator so they’d have sequential file names/numbers when exporting them as separate PNG images. I got sick of doing it manually and found a solution. Since Illustrator is from Adobe I doubted (at first) that it would be easily scriptable, but hey… it is! Well, with JavaScript. Meh. Anyway, it was easy enough to find some sample code that was close enough to what I needed.

I started by looking for a solution. This was my first result: How (to) batch rename artboards in Adobe illustrator?

I assumed Illustrator would have some sort of floating script palette, but no… So then I found: How to keep Illustrator Scripts handy?

That page points to Script Panel 2 and Adobe CC Scripts Panel. I’ve tried them both. I think I prefer the second, but both seem to work fine.

This seems pretty handy, so the next time I need to automate some task in Illustrator I’ll have to dig in and see if it can be solved with a script.

Copy Specific File Types

As I’ve been using Illustrator I’ve found it’s very common to need to reuse elements in new documents. As all of my projects consist of folders within folders many layers deep, it can take a bit of time to navigate to the correct folder to find the file I need. I’d also have to make sure I either opened an old file and copied out what I needed without making any changes (or worse, damaging the document), or made a copy and opened that, but then needed to remember to delete it later.

What I wanted was a destructible copy of all my old files I could easily browse through, open, mangle, destroy, etc. with no effect on my workflow. I’ve got something that seems to work for now. (Until I come up with something better.) My original plan was to write a Perl or Python script to walk the directory and copy any Illustrator files (with the .ai extension) to another folder named EXAMPLES. Before I got started writing code I did a few searches, first wondering if rsync could do it, and it can, but it didn’t seem elegant. I ended up reading a bunch of posts about how to do this and didn’t bookmark the page that had the closest solution, but my script is below.

#!/bin/sh

# copy just the .ai files
/usr/bin/find /Users/pete/Projects/ -name '*.ai' -exec cp -p {} /Users/pete/EXAMPLES/ \;

Difficult to read the code? See the gist on GitHub.

Yup, the good old find command to the rescue. It’s not perfect, as files might overwrite other files if the names are not unique. In this case, if names are the same, it’s probably because I’ve got the same source file in multiple locations. With a bit more code I could deal with that, but again, it doesn’t matter for this use case.
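
For what it’s worth, here’s a rough sketch (not my actual setup) of one way to deal with the name collisions: flatten each file’s relative path into its destination name. The SRC and DEST paths just mirror the script above; adjust as needed.

#!/bin/sh

# sketch: copy .ai files, but flatten each file's relative path into the
# destination name so files with the same name don't overwrite each other
SRC=/Users/pete/Projects
DEST=/Users/pete/EXAMPLES

/usr/bin/find "$SRC" -name '*.ai' | while IFS= read -r f; do
  # "Client/Logos/mark.ai" becomes "Client_Logos_mark.ai"
  rel=${f#"$SRC"/}
  cp -p "$f" "$DEST/$(printf '%s' "$rel" | tr '/' '_')"
done

(Filenames with embedded newlines would still trip this up, so treat it as a sketch rather than a bulletproof solution.)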

The nice thing about this is that I can just create a cron job to run it every night and I get all the fresh files copied into the EXAMPLES folder, ready to reference. The files are (mostly) tiny so it takes very little in the way of resources or space.
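
The cron entry itself is nothing fancy; something along these lines (the script path is made up, point it at wherever you saved the script):

# run the copy script every night at 2:00 AM
0 2 * * * /Users/pete/bin/copy-examples.sh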

This is one of those things I’m posting because there’s a 97% chance I’ll find this useful in the future. And if anyone else finds it useful… You’re Welcome! Keep Being Awesome.

Google Reader Subscription List backup shell script

If you’re interested in exporting (or backing up) your Google Reader Subscription List you can log into Google Reader, go to Manage Subscriptions, then Import/Export, and export your subscriptions as an OPML file (which is basically an XML file).

Google Reader - Export Subscription List OPML

If you want to automate this process, there are a few steps involved… I used curl, which is easy, but other tools can also work.

The first thing you need to do is get an Auth code:


curl -daccountType=GOOGLE -d Email=[USERNAME]@gmail.com -d Passwd=[PASSWORD] -d service=reader https://www.google.com/accounts/ClientLogin 

Substitute your own Google username for [USERNAME].

Substitute your own Google password for [PASSWORD].

Once you do this, you’ll get 3 lines returned, that look something like this:


SID=HFDY49j4ljlkfgdg4tfh03fdkjgldkhfl945840598djglkjh40hi5h... 
LSID=HFDY49j9ljlkfh03fhgfh565dkjgldkhfl945840598djglkjhhi5h... 
Auth=HFDY49j7ljlkfh03fdkjgldkhfl945840598djglkjhjgh6640hi5h... 

Note: I’ve shortened these (and made them up) but it’s basically 3 keys, SID, LSID, and Auth, and their associated values. You’ll need the value for the one labeled Auth.

Now, use curl to request the following:


curl -H "Authorization:GoogleLogin auth=HFDY49j7ljlkfh03fdkjgldkhfl945840598djglkjhjgh6640hi5h..."  http://www.google.com/reader/subscriptions/export 

Again, I’ve shortened the Auth code (it’s really long!). You’re basically passing the authorization in the header of the request. It should go without saying that the SID, LSID, and Auth should be kept private. (Which is why I just made up a random string in the example above.)

OK, if it all worked, curl returned your subscription list as OPML. Hooray! Also, you just used Google’s ClientLogin authentication, so Double Hooray!

And here’s our shell script, which will download/backup your subscription list as an OPML file. (It’s similar to our mysql backup shell script.)


#!/bin/bash

DT=`date +"%Y%m%d"`

curl -s -o /home/backups/SubscriptionList-$DT.opml -H "Authorization:GoogleLogin auth=HFDY49j7ljlkfh03fdkjgldkhfl945840598djglkjhjgh6640hi5h..."  http://www.google.com/reader/subscriptions/export 

Each time you run it, it will get the date with the year, month, and day and use it in the name. So %Y%m%d would produce something like 20100816. This should work fine if you run just one backup per day. (And of course you can store it somewhere besides /home/backups/ if you like. cron is your friend here.)
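
If you’d rather not paste that long Auth value into the script by hand, the two curl calls can be chained together. Here’s a sketch using the same [USERNAME] and [PASSWORD] placeholders as above:

#!/bin/bash

DT=`date +"%Y%m%d"`

# log in, keep only the Auth= line, and strip off the "Auth=" part
AUTH=`curl -s -d accountType=GOOGLE -d Email=[USERNAME]@gmail.com -d Passwd=[PASSWORD] -d service=reader https://www.google.com/accounts/ClientLogin | grep '^Auth=' | cut -d= -f2`

# request the subscription list, passing the Auth value in the header
curl -s -o /home/backups/SubscriptionList-$DT.opml -H "Authorization:GoogleLogin auth=$AUTH" http://www.google.com/reader/subscriptions/export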

I know most people believe that Google will never lose their data, or that if the day comes when they want to export this data they’ll just go to the site and export it, but this lets you prepare for the day you can’t get to the site to export your data… or the day Google loses it, or deletes it, or whatever.

By the way… I found most of this information in the Google Reader API wiki. It’s nice that Google is providing an API for things, I just wish some of the info was easier to find… as of this post, that’s the only damn page in that entire wiki!

This is all part of my renewed interest in putting my own data into my own hands, and I may be bugging Jason (@plural) a bit more in the future. ;)

Update: Jason reminded us of dataliberation.org, which I’ll discuss in another post. :)

And just for fun: This gem from 2007: Data Loss At Google Reader.

Pretty Print XML with Perl

Let’s say you’ve got a file named “file.xml” and want it pretty printed, all indented nice and everything…

For just such an occasion I have a Perl script named “pretty.pl” and I just run my XML file through it like so: cat file.xml | perl pretty.pl

Here’s the code I use:

#!/usr/bin/perl

use XML::Twig;
use XML::Parser;

# set up a twig that re-indents everything it parses
my $xml = XML::Twig->new(pretty_print => 'indented');

# read the XML from STDIN so the script works in a pipeline
$xml->parse(\*STDIN);

# print the re-indented document to STDOUT
$xml->print();

You can even pass it through right as it comes in over the wire: curl http://example.com/data/file.xml | perl pretty.pl

Here’s an example of data from Foursquare without pretty printing. (I used curl to grab the data. Also, I added in some line breaks, just to make it a little more readable.):

<?xml version="1.0" encoding="UTF-8"?>
<checkins><checkin><id>123847273</id>
<created>Mon, 09 Aug 10 00:50:33 +0000</created>
<timezone>America/Chicago</timezone><venue><id>2357761</id>
<name>The Kiltie</name><primarycategory><id>79067</id>
<fullpathname>Food:Ice Cream</fullpathname><nodename>Ice Cream</nodename>
<iconurl>http://foursquare.com/img/categories/food/icecream.png</iconurl>
</primarycategory><address></address><city></city><state></state>
<geolat>43.107391</geolat><geolong>-88.464475</geolong></venue>
<display>Pete P. @ The Kiltie</display></checkin></checkins>

And here’s the same data, again using curl to grab it, and then passing it through the pretty.pl script:

<?xml version="1.0" encoding="UTF-8"?>
<checkins>
  <checkin>
    <id>123847273</id>
    <created>Mon, 09 Aug 10 00:50:33 +0000</created>
    <timezone>America/Chicago</timezone>
    <venue>
      <id>2357761</id>
      <name>The Kiltie</name>
      <primarycategory>
        <id>79067</id>
        <fullpathname>Food:Ice Cream</fullpathname>
        <nodename>Ice Cream</nodename>
        <iconurl>http://foursquare.com/img/categories/food/icecream.png</iconurl>
      </primarycategory>
      <address></address>
      <city></city>
      <state></state>
      <geolat>43.107391</geolat>
      <geolong>-88.464475</geolong>
    </venue>
    <display>Pete P. @ The Kiltie</display>
  </checkin>
</checkins>

I still find Perl extremely useful for this sort of task… I’m sure there are other command line ways to do this, but this one works for me.
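
For example, if you happen to have libxml2’s xmllint around, it can do roughly the same thing:

# same idea with xmllint instead of the Perl script
curl -s http://example.com/data/file.xml | xmllint --format -

No modules to install, which is nice if the box already has libxml2 on it.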

(Hat tip to A Curious Programmer, where I picked up this Perl code…)

Finding modified files

I needed to find all files on a Linux box that had been modified in the last 24 hours. Years ago I used to have a custom Perl script for such things, which I would use for various software development projects, but I found this awesome command:

find / -type f -mtime -1 -exec ls -al {} \;

You can change the ‘/’ if you want to look somewhere specific, like your home directory, or /etc. Of course you can pipe it to grep to grab just certain matches. The -1 specifies 1 day. I always love finding simple commands that are so powerful.
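
For example, to only look under /etc and only show files with “.conf” in the name (just an illustration, swap in whatever path and pattern you’re after):

find /etc -type f -mtime -1 -exec ls -al {} \; | grep '\.conf'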

Oh, I found that bit at DZone Snippets; it’s titled Search for files modified the last … days.