
Move with wildcards December 30, 2010

Posted by maxmil in : bash , add a comment

Just found problems with a directory of images that had some extensions in uppercase (JPG) and others in lowercase (jpg).

I wanted to move all the uppercase JPG to lowercase.

This is the script that did it:

for f in *.JPG; do
    mv "$f" "${f%.JPG}.jpg"
done
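Run in a scratch directory, the loop behaves like this (the file names below are made up for the demo; `${f%.JPG}` strips the uppercase extension):

```shell
# Demo of the lowercase-extension rename in a throwaway directory.
dir=$(mktemp -d)
touch "$dir/photo1.JPG" "$dir/photo2.JPG" "$dir/already.jpg"
cd "$dir"
for f in *.JPG; do
    # ${f%.JPG} removes the trailing ".JPG"; quoting survives spaces
    mv "$f" "${f%.JPG}.jpg"
done
ls
```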

List all files in directory tree by modification date April 1, 2009

find app -type f -printf "%T@\t%t\t%p\n" | sort -r -n
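For reference, `%T@` prints the modification time in seconds since the epoch (good for numeric sorting), `%t` the same time in ctime format, and `%p` the path. A self-contained sketch, with timestamps fabricated via GNU `touch -d` and a scratch directory standing in for `app`:

```shell
# Newest-first listing; touch -d (GNU) fabricates the mtimes.
dir=$(mktemp -d)
touch -d '2020-01-01' "$dir/old.txt"
touch -d '2021-01-01' "$dir/new.txt"
# Sort on the numeric timestamp, then drop it for display.
find "$dir" -type f -printf "%T@\t%p\n" | sort -r -n | cut -f2
```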

List a directory's contents by size September 19, 2008

This is not complicated but very useful:

du --max-depth=1 /usr/local/apps | sort -n -r
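With GNU coreutils the same listing can use human-readable sizes, since `sort -h` understands the K/M/G suffixes that `du -h` emits. A sketch on a scratch tree (the directory is a stand-in):

```shell
# Largest entries first, with human-readable sizes (GNU sort -h assumed).
dir=$(mktemp -d)
mkdir "$dir/big" "$dir/small"
dd if=/dev/zero of="$dir/big/blob" bs=1024 count=64 2>/dev/null
du -h --max-depth=1 "$dir" | sort -h -r
```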

Find and replace text in files October 22, 2007

Taken from here

To replace all occurrences of a string:

find /your/home/dir -name "*.txt" | xargs perl -pi -e 's/stringtoreplace/replacementstring/g'

To replace the first occurrence:

find /your/home/dir -name "*.txt" | xargs perl -pi -e 's/stringtoreplace/replacementstring/'

To replace all files in a folder:

for arg in `ls -C1`; do perl -pi -e 's/stringtoreplace/replacementstring/g' $arg; done

You can do more cool tricks with the shell's for command, as demonstrated above, and add more specific searches. However, you might be better off just writing a shell script. Here is the first find rewritten as a loop:

for arg in `find /your/home/dir -name "*.txt"` ; do perl -pi -e 's/string/replacement/g' $arg; done;

Find and remove on the command line September 14, 2007

In order to find and remove files from a directory tree:

find ./ -name "file-or-directory-name-to-remove" | xargs rm -Rf
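The xargs form mangles names containing spaces; `-exec` passes them intact. A sketch in a throwaway tree (the names are examples; `-depth` makes find finish traversing before the batched delete runs):

```shell
# Delete matches by name, space-safe, via -exec ... +.
dir=$(mktemp -d)
mkdir -p "$dir/keep" "$dir/junk dir"
find "$dir" -depth -name "junk dir" -exec rm -Rf {} +
ls "$dir"
```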

Shell script to prepare photos for the web July 9, 2007

Just needed to optimize a folder of JPEGs for the Web. I needed to:

1) Reduce their dimensions so that their longest dimension was at most 700px.
2) Reduce their quality to 75%

Using the convert command that forms part of imagemagick this was easy. The script takes two parameters, an input directory and an output directory. Both without slashes.

if [ $# -ne 2 ]; then
    echo "usage: $0 [input directory] [output directory]" >&2
    exit 1
fi

for file in "$1"/*; do
    convert "$file" -resize 700x700 -quality 75 "$2/`basename $file`"
done

Negate find July 22, 2006

In order to search for files that don’t match a certain pattern…

find . -not -name "myfilename"
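A quick check in a throwaway directory (`-type f` keeps the directory entry itself out of the listing):

```shell
# Everything except one name.
dir=$(mktemp -d)
touch "$dir/myfilename" "$dir/other"
find "$dir" -type f -not -name "myfilename"
```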

Command line stuff June 12, 2006

Some tricks to learn from http://www.die.net/doc/linux/HOWTO/VMS-to-Linux-HOWTO-11.html

UNIX's core idea is that there are many simple commands that can be linked together via piping and redirection to accomplish even really complex tasks. Have a look at the following examples. I'll only explain the most complex ones; for the others, please study the above sections and the man pages.

Problem: ls is too quick and the file names fly away.


$ ls | less

Problem: I have a file containing a list of words. I want to sort it in reverse order and print it.


$ cat myfile.txt | sort -r | lpr

Problem: my data file has some repeated lines! How do I get rid of them?


$ sort datafile.dat | uniq > newfile.dat
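The same dedup works in one step: `sort -u` folds the uniq pass into sort. A sketch on a fabricated file:

```shell
# sort -u is equivalent to sort | uniq for plain deduplication.
dir=$(mktemp -d)
printf 'beta\nalpha\nbeta\n' > "$dir/datafile.dat"
sort -u "$dir/datafile.dat" > "$dir/newfile.dat"
cat "$dir/newfile.dat"
```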

Problem: I have a file called ‘mypaper.txt’ or ‘mypaper.tex’ or some such somewhere, but I don’t remember where I put it. How do I find it?


$ find ~ -name "mypaper*"

Explanation: find is a very useful command that lists all the files in a directory tree (starting from ~ in this case). Its output can be filtered to meet several criteria, such as -name.

Problem: I have a text file containing the word ‘entropy’ in this directory, is there anything like SEARCH?

Solution: yes, try

$ grep -l 'entropy' *

Problem: somewhere I have text files containing the word ‘entropy’, I’d like to know which and where they are. Under VMS I’d use search entropy […]*.*;*, but grep can’t recurse subdirectories. Now what?


$ find . -exec grep -l "entropy" {} \; 2> /dev/null

Explanation: find . outputs all the file names starting from the current directory, -exec grep -l "entropy" is an action to be performed on each file (represented by {}), and \; terminates the command. If you think this syntax is awful, you're right.
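Modern GNU grep can recurse by itself, so the whole pipeline collapses to one command; a sketch on a scratch tree:

```shell
# grep -r -l: recurse into subdirectories, print only matching file names.
dir=$(mktemp -d)
mkdir "$dir/sub"
echo "entropy is everywhere" > "$dir/sub/notes.txt"
echo "nothing to see"        > "$dir/plain.txt"
grep -r -l "entropy" "$dir"
```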

Alternatively, write the following script:

# rgrep: recursive grep
if [ $# != 3 ]; then
    echo "Usage: rgrep --switches 'pattern' 'directory'"
    exit 1
fi
find $3 -name "*" -exec grep $1 $2 {} \; 2> /dev/null

Explanation: grep works like search, and combining it with find we get the best of both worlds.

Problem: I have a data file that has two header lines, then every line has ‘n’ data, not necessarily equally spaced. I want the 2nd and 5th data value of each line. Shall I write a Fortran program…?

Solution: nope. This is quicker:

$ awk 'NR > 2 {print $2, "\t", $5}' datafile.dat > newfile.dat

Explanation: awk is actually a programming language: for each line after the second in datafile.dat, print out the second and fifth field, separated by a tab. Learning some awk saves a lot of time.
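A runnable check of the one-liner on a fabricated data file (note the variable is NR, awk's record counter):

```shell
# Skip the two header lines, print fields 2 and 5 of every data line.
dir=$(mktemp -d)
printf 'header A\nheader B\n1 2 3 4 5\n10 20 30 40 50\n' > "$dir/datafile.dat"
awk 'NR > 2 {print $2, "\t", $5}' "$dir/datafile.dat"
```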

Problem: I’ve downloaded an FTP site’s ls-lR.gz to check its contents. For each subdirectory, it contains a line that reads “total xxxx”, where xxxx is size in kbytes of the dir contents. I’d like to get the grand total of all these xxxx values.


$ zcat ls-lR.gz | awk '$1 == "total" { i += $2 } END {print i}'

Explanation: zcat outputs the contents of the .gz file and pipes to awk, whose man page you’re kindly requested to read ;-)
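The summation can be checked on a fabricated ls-lR.gz:

```shell
# Sum the sizes on the "total" lines of a gzipped listing.
dir=$(mktemp -d)
printf 'total 10\n-rw-r--r-- f1\ntotal 32\n' | gzip > "$dir/ls-lR.gz"
zcat "$dir/ls-lR.gz" | awk '$1 == "total" { i += $2 } END {print i}'
# prints 42
```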

Problem: I’ve written a Fortran program, myprog, to calculate a parameter from a data file. I’d like to run it on hundreds of data files and have a list of the results, but it’s a nuisance to ask each time for the file name. Under VMS I’d write a lengthy command file, and under Linux?

Solution: a very short script. Make your program look for the data file ‘mydata.dat’ and print the result on the screen (stdout), then write the following script:

# myprog.sh: run the same command on many different files
# usage: myprog.sh *.dat
for file in $*   # for all parameters (e.g. *.dat)
do
    # append the file name to results.dat
    echo -n "${file}: " >> results.dat
    # copy the current argument to mydata.dat, run myprog
    # and append the output to results.dat
    cp ${file} mydata.dat ; myprog >> results.dat
done

Problem: I want to replace `geology’ with `geophysics’ in all my text files. Shall I edit them all manually?

Solution: nope. Write this shell script:

# replace $1 with $2 in $*
# usage: replace "old-pattern" "new-pattern" file [file…]
OLD=$1        # first parameter of the script
NEW=$2        # second parameter
shift ; shift # discard the first 2 parameters: the rest are the file names
for file in $* # for all files given as parameters
do
    # replace every occurrence of OLD with NEW, saving to a temporary file
    sed "s/$OLD/$NEW/g" ${file} > ${file}.new
    # rename the temporary file to the original name
    /bin/mv ${file}.new ${file}
done
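Exercising the script on a throwaway file (it is saved as replace.sh here, a name chosen for the demo):

```shell
# Write the replace script to a scratch dir and run it once.
dir=$(mktemp -d)
cd "$dir"
cat > replace.sh <<'EOF'
OLD=$1
NEW=$2
shift ; shift
for file in $*
do
    sed "s/$OLD/$NEW/g" ${file} > ${file}.new
    /bin/mv ${file}.new ${file}
done
EOF
echo "geology is fascinating" > paper.txt
sh replace.sh geology geophysics paper.txt
cat paper.txt
```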

Problem: I have some data files, I don’t know their length and have to remove their last but one and last but two lines. Er… manually?

Solution: no, of course. Write this script:

# prune.sh: removes the n-1th and n-2th lines from files
# usage: prune.sh file [file…]
for file in $* # for every parameter
do
    LINES=`wc -l $file | awk '{print $1}'` # number of lines in file
    LINES=`expr $LINES - 3`                # LINES = LINES - 3
    head -n $LINES $file > $file.new       # output the first LINES lines
    tail -n 1 $file >> $file.new           # append the last line
done
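The pruning logic, checked on a five-line file: lines 3 and 4 should disappear and the last line should survive.

```shell
# Keep the first N-3 lines, then re-append the last one.
dir=$(mktemp -d)
printf 'l1\nl2\nl3\nl4\nl5\n' > "$dir/data.txt"
LINES=`wc -l "$dir/data.txt" | awk '{print $1}'`
LINES=`expr $LINES - 3`
head -n $LINES "$dir/data.txt" > "$dir/data.txt.new"
tail -n 1 "$dir/data.txt" >> "$dir/data.txt.new"
cat "$dir/data.txt.new"
# prints l1, l2, l5
```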

I hope these examples whetted your appetite…

ps syntax June 4, 2006

To get a full list of information on the currently running processes:

ps -eo pid,ppid,rss,vsize,pcpu,pmem,cmd --sort=pid

Replace --sort=pid with whatever column you're interested in.

Using grep to search in files April 28, 2006

This searches recursively for the text "my text string" in all files under /home and its subdirectories:

grep -r "my text string" /home/*

Note: if the search string does not contain spaces, the quotes are not necessary.
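A sketch of the search on a scratch tree instead of /home:

```shell
# Recursive content search; prints file:match lines.
dir=$(mktemp -d)
mkdir "$dir/sub"
echo "my text string lives here" > "$dir/sub/a.txt"
grep -r "my text string" "$dir"
```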