Copy Specific File Types

As I’ve been using Illustrator I’ve found it’s very common to need to reuse elements in new documents. Since all of my projects consist of folders within folders many layers deep, it can take a bit of time to navigate to the correct folder to find the file I need. I’d also have to make sure I either opened an old file and copied out what I needed without making any changes (or worse, damaging the document), or made a copy and opened that, but then I’d need to remember to delete it later.

What I wanted was a destructible copy of all my old files I could easily browse through, open, mangle, destroy, etc. with no effect on my workflow. I’ve got something that seems to work for now. (Until I come up with something better.) My original plan was to write a Perl or Python script to walk the directory tree and copy any Illustrator files (with the .ai extension) to another folder named EXAMPLES. Before I got started writing code I did a few searches, first wondering if rsync could do it, and it can, but it didn’t seem elegant. I ended up reading a bunch of posts about how to do this and I didn’t bookmark the page that had the closest solution, but my script is below.

#!/bin/sh

# copy just the .ai files
/usr/bin/find /Users/pete/Projects/ -name '*.ai' -exec cp -p {} /Users/pete/EXAMPLES/ \;

Difficult to read the code? See the gist on GitHub.

Yup, the good old find command to the rescue. It’s not perfect, as files might overwrite other files if the names are not unique. In this case, if names are the same it’s probably because I’ve got the same source file in multiple locations. With a bit more code I could deal with that (a rough sketch is below), but again, it doesn’t matter for this use case.
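
If collisions ever did matter, one way to handle them would be to rename a copy when its name is already taken. Here’s a minimal sketch (not what I actually run; prefixing with the parent folder’s name is just one idea, and it doesn’t handle every edge case):

#!/bin/sh

# collision-aware variation: if a file with the same name already exists
# in EXAMPLES, prefix the new copy with its parent folder's name
/usr/bin/find /Users/pete/Projects/ -name '*.ai' | while read -r f; do
    base=$(basename "$f")
    dest="/Users/pete/EXAMPLES/$base"
    if [ -e "$dest" ]; then
        parent=$(basename "$(dirname "$f")")
        dest="/Users/pete/EXAMPLES/${parent}-${base}"
    fi
    cp -p "$f" "$dest"
done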

The nice thing about this is that I can just create a cron job to run it every night and I get all the fresh files copied into the EXAMPLES folder ready to reference. The files are (mostly) tiny so it takes very little in the way of resources or space.
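
For reference, the cron entry would look something like this, assuming you save the script somewhere like /Users/pete/bin/copy-ai.sh (that path is just an example) and make it executable:

# run the copy script every night at 2:00 AM
0 2 * * * /Users/pete/bin/copy-ai.sh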

This is one of those things I’m posting because there’s a 97% chance I’ll find this useful in the future. And if anyone else finds it useful… You’re Welcome! Keep Being Awesome.


Google Reader Subscription List backup shell script

If you’re interested in exporting (or backing up) your Google Reader Subscription List, you can log into Google Reader, go to Manage Subscriptions, then Import/Export, and export your subscriptions as an OPML file (which is basically an XML file).

Google Reader - Export Subscription List OPML

If you want to automate this process, there are a few steps involved… I used curl, which is easy, but other tools can also work.

The first thing you need to do is get an Auth code:


curl -d accountType=GOOGLE -d Email=[USERNAME]@gmail.com -d Passwd=[PASSWORD] -d service=reader https://www.google.com/accounts/ClientLogin

Substitute your own Google username for [USERNAME].

Substitute your own Google password for [PASSWORD].

Once you do this, you’ll get 3 lines returned that look something like this:


SID=HFDY49j4ljlkfgdg4tfh03fdkjgldkhfl945840598djglkjh40hi5h... 
LSID=HFDY49j9ljlkfh03fhgfh565dkjgldkhfl945840598djglkjhhi5h... 
Auth=HFDY49j7ljlkfh03fdkjgldkhfl945840598djglkjhjgh6640hi5h... 

Note: I’ve shortened these (and made them up), but it’s basically 3 keys, SID, LSID, and Auth, and their associated values. You’ll need the value for the one labeled Auth.

Now, use curl to request the following:


curl -H "Authorization:GoogleLogin auth=HFDY49j7ljlkfh03fdkjgldkhfl945840598djglkjhjgh6640hi5h..."  http://www.google.com/reader/subscriptions/export 

Again, I’ve shortened the Auth code (it’s really long!) You’re basically passing the authorization in the header of the request. It should go without saying that the SID, LSID, and Auth should be kept private. (Which is why I just made up a random string in the example above.)

OK, if it all worked, curl returned your subscription list as OPML. Hooray! Also, you just authenticated against a Google API from the command line, so Double Hooray!

And here’s our shell script, which will download/backup your subscription list as an OPML file. (It’s similar to our mysql backup shell script.)


#!/bin/bash

DT=`date +"%Y%m%d"`

curl -s -o /home/backups/SubscriptionList-$DT.opml -H "Authorization:GoogleLogin auth=HFDY49j7ljlkfh03fdkjgldkhfl945840598djglkjhjgh6640hi5h..."  http://www.google.com/reader/subscriptions/export 

Each time you run it, it will get the date with the year, month, and day and use it in the name. So %Y%m%d would produce something like 20100816. This should work fine if you run just one backup per day. (And of course you can store it somewhere besides /home/backups/ if you like. cron is your friend here.)
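
If you want to skip the manual token step entirely, here’s a rough sketch that fetches a fresh Auth value and grabs the OPML in one go (same [USERNAME] and [PASSWORD] placeholders as above; pulling the value out with grep/cut is just one way to do it). Keep in mind this puts your password in a plain text file, so guard it accordingly.

#!/bin/bash

DT=`date +"%Y%m%d"`

# get a fresh Auth token, then keep just the value after "Auth="
AUTH=`curl -s -d accountType=GOOGLE -d Email=[USERNAME]@gmail.com -d Passwd=[PASSWORD] -d service=reader https://www.google.com/accounts/ClientLogin | grep '^Auth=' | cut -d= -f2-`

# use the token to download the subscription list as OPML
curl -s -o /home/backups/SubscriptionList-$DT.opml -H "Authorization:GoogleLogin auth=$AUTH" http://www.google.com/reader/subscriptions/export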

I know that most people believe that Google will not lose their data, or that if the day comes when they want to export this data they’ll just go to the site and export it, but this lets you prepare for the day you can’t get to the site and export your data… or the day Google loses it, or deletes it, or whatever.

By the way… I found most of this information in the Google Reader API wiki. It’s nice that Google is providing an API for things; I just wish some of the info was easier to find… as of this post, that’s the only damn page in that entire wiki!

This is all part of my renewed interest in putting my own data into my own hands, and I may be bugging Jason (@plural) a bit more in the future. ;)

Update: Jason reminded us of dataliberation.org, which I’ll discuss in another post. :)

And just for fun: This gem from 2007: Data Loss At Google Reader.


mysql backup shell script

This is what I tend to use for a simple MySQL database backup script… I wanted to post this so I can look it up when I need it. There are probably better ways to do this (tell me about them!) but this works for me.

#!/bin/bash

DT=`date +"%Y%m%d%H%M%S"`

mysqldump -u [USERNAME] -p[PASSWORD] [DATABASENAME] > /home/backups/[DATABASENAME]-$DT.dump

gzip /home/backups/[DATABASENAME]-$DT.dump


Substitute your MySQL user for [USERNAME]. (There should be a space between the ‘-u’ and the [USERNAME])

Substitute your MySQL user’s password for [PASSWORD]. (There should not be a space between the -p and the [PASSWORD])

Substitute your MySQL user’s database for [DATABASENAME].

Each time you run it, it will get the date with the year, month, day, hours, minutes, seconds, and use it in the name. So %Y%m%d%H%M%S would produce something like 20100711090854. If you are running one backup per day, you could shorten it to %Y%m%d.

This would put the files in the /home/backups directory. Set this to wherever you want the files to go.

The gzip command compresses the dumped database file. If you don’t want to compress it (and save disk space) then don’t use it.
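
And if you ever need to restore one of these backups, it’s basically the reverse. The filename here is just an example using the date format above:

# decompress the dump and feed it back into MySQL
gunzip < /home/backups/[DATABASENAME]-20100711090854.dump.gz | mysql -u [USERNAME] -p[PASSWORD] [DATABASENAME]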

(BTW, you don’t type the [ brackets ]. They are just there to highlight the words you need to fill in.)


Finding modified files

I needed to find all files on a Linux box that had been modified in the last 24 hours. Years ago I used to have a custom Perl script for such things, which I would use for various software development projects, but I found this awesome command:

find / -type f -mtime -1 -exec ls -al {} \;

You can change the ‘/’ if you want to look somewhere specific, like your home directory, or /etc. Of course you can pipe it to grep to grab just certain matches. The -1 specifies 1 day. I always love finding simple commands that are so powerful.
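
For instance, a quick (made-up) example that narrows it down to config files changed under /etc in the last day:

# list only .conf files under /etc modified in the last 24 hours
find /etc -type f -mtime -1 -exec ls -al {} \; | grep '\.conf'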

Oh, I found that bit at DZone Snippets; it’s titled “Search for files modified the last … days.”


Recursive FTP using wget

Here’s the scenario… you don’t have ssh access, but you do have ftp access, and need all the files…

wget -r ftp://username:password@domain.com/directory/

Let wget do its thing for a bit, and you should have all the files you need.
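
If the transfer gets interrupted partway through, re-running it with -c lets wget resume partial files instead of starting from scratch (worth double-checking against your wget version):

# -c tells wget to continue partially downloaded files where possible
wget -c -r ftp://username:password@domain.com/directory/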

(Of course you really shouldn’t be running plain old insecure ftp when sftp is available…)