This page collects small problems I've run into on Linux and solved with very short scripts. Eventually I'll organize everything with an index and make it all fancy, but for the moment I've just put in misc hacks off the top of my head.
Be very careful if you decide to use this; start by just printing the
files that will be deleted. The following is an example of how to delete
all .obj files that were not modified in the month of August. It greps the
date column of ls -l, so a filename containing "Aug" would also be spared,
and it breaks on filenames with spaces.
find . -type f -name '*.obj' | xargs ls -l | grep -v "Aug" | awk '{print $NF}' | xargs rm -f
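Before running the destructive version, it is worth doing a dry run that ends in plain output instead of rm -f. A minimal sketch, using a scratch directory and made-up file names purely for illustration:

```shell
# Set up a throwaway directory with a couple of sample files (names are illustrative).
dir=$(mktemp -d)
cd "$dir"
touch old.obj new.obj notes.txt

# Same shape as the pipeline above, but printing candidates instead of deleting.
# awk '{print $NF}' pulls the path out of each `ls -l` line, which is less
# fragile than the original cut/sed dance (still breaks on paths with spaces).
find . -type f -name '*.obj' | xargs ls -l | awk '{print $NF}'
```

Only once the printed list looks right should the `| xargs rm -f` stage go back on the end.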
Download full websites by following links. Note that by default wget stays
on the host you started from; add -H if you also want it to follow links
to other sites (and expect it to pull in a lot more).
wget -r -l [depth] http://www.cs.hmc.edu/~aarvey/index.html
With curl, you can specify numeric ranges of URLs to download
curl -O http://www.library.cornell.edu/nr/bookcpdf/c1-[0-4].pdf
Converting a comma-separated values file to a tab-delimited file. (GNU sed
understands \t; note a naive substitution like this breaks on quoted fields
that contain commas.)
sed -e "s/,/\t/g" $FILES
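A quick check on an inline sample (the /tmp path and data are just for illustration):

```shell
# Write a tiny sample CSV, then convert commas to tabs with sed.
printf 'name,age\nalice,30\nbob,25\n' > /tmp/sample.csv
sed -e "s/,/\t/g" /tmp/sample.csv
```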
Using sed on all .cpp files. commands.sed contains the sed commands.
Files are changed in place and backups are made with extension .bak
(note the suffix must be attached directly to -i, with no space or '=').
See above for how to remove all .bak files if the operation succeeds.
find . -type f -name '*.cpp' | xargs sed -i.bak -f commands.sed
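An end-to-end sketch of the in-place edit, with an illustrative commands.sed (the patterns and file names here are made up for the demo):

```shell
# Scratch directory so nothing real gets touched.
mkdir -p /tmp/seddemo && cd /tmp/seddemo

# Example commands.sed: rename a function and strip trailing whitespace.
cat > commands.sed <<'EOF'
s/oldName/newName/g
s/[[:space:]]*$//
EOF

printf 'int oldName(void);\n' > demo.cpp

# Edit in place; sed keeps the originals with a .bak suffix.
find . -type f -name '*.cpp' | xargs sed -i.bak -f commands.sed

cat demo.cpp        # now reads: int newName(void);
ls demo.cpp.bak     # backup holding the original contents
```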
Move all .bak files back to their original names if the
operation fails.
TO_CHANGE=`find . -type f -name '*.cpp.bak'`
for FILE in $TO_CHANGE; do
    mv -f "$FILE" "`echo "$FILE" | sed -e 's/\.bak$//'`"
done
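A self-contained check of the restore loop, again in a scratch directory with made-up file names:

```shell
# Simulate a failed edit: the .bak file holds the original, the .cpp is mangled.
dir=$(mktemp -d)
cd "$dir"
printf 'original\n' > main.cpp.bak
printf 'mangled\n'  > main.cpp

# Restore every .cpp.bak over its .cpp counterpart.
TO_CHANGE=`find . -type f -name '*.cpp.bak'`
for FILE in $TO_CHANGE; do
    mv -f "$FILE" "`echo "$FILE" | sed -e 's/\.bak$//'`"
done

cat main.cpp   # back to: original
```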