This article is part of the article series "Bash One-Liners Explained."

I love being super fast in the shell so I decided to do a new article series called Bash One-Liners Explained. It's going to be similar to my other article series - Awk One-Liners Explained, Sed One-Liners Explained, and Perl One-Liners Explained. After I'm done with this bash series, I'll release an e-book by the same title, just as I did with the awk, sed, and perl series. The e-book will be available as a pdf and in mobile formats (mobi and epub). I'll also be releasing bash1line.txt, similar to the perl1line.txt that I made for the perl series.

In this series I'll use the best bash practices, various bash idioms and tricks. I want to illustrate how to get various tasks done with just bash built-in commands and bash programming language constructs.

Let's start.

Part I: Working With Files

1. Empty a file (truncate to 0 size)

$ > file

This one-liner uses the output redirection operator >. Redirection of output causes the file to be opened for writing. If the file does not exist it is created; if it does exist it is truncated to zero size. As we're not redirecting anything to the file it remains empty.

If you wish to replace the contents of a file with some string or create a file with specific content, you can do this:

$ echo "some string" > file

This puts the string "some string" in the file.

2. Append a string to a file

$ echo "foo bar baz" >> file

This one-liner uses a different output redirection operator >>, which appends to the file. If the file does not exist it is created. The string appended to the file is followed by a newline. If you don't want a newline appended after the string, add the -n argument to echo:

$ echo -n "foo bar baz" >> file

3. Read the first line from a file and put it in a variable

$ read -r line < file

This one-liner uses the built-in bash command read and the input redirection operator <. The read command reads one line from the standard input and puts it in the line variable. The -r parameter makes sure the input is read raw, meaning the backslashes won't get escaped (they'll be left as is). The redirection command < file opens file for reading and makes it the standard input to the read command.

The read command trims leading and trailing characters found in the special IFS variable. IFS stands for Internal Field Separator; it's used for word splitting after expansion and for splitting lines into words with the read built-in command. By default IFS contains space, tab, and newline, which means that leading and trailing tabs and spaces get removed. If you wish to preserve them, you can set IFS to nothing for the duration of the command:

$ IFS= read -r line < file

This will change the value of IFS just for this command and will make sure the first line gets read into the line variable really raw with all the leading and trailing whitespaces.
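To see the difference, here's a quick demo (it creates a throwaway file called file whose first line has leading and trailing spaces):

```shell
# create a file whose first line has surrounding whitespace
printf '   hello world   \n' > file

read -r line < file
echo "[$line]"        # [hello world] - whitespace trimmed

IFS= read -r line < file
echo "[$line]"        # [   hello world   ] - whitespace preserved
```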

Another way to read the first line from a file into a variable is to do this:

$ line=$(head -1 file)

This one-liner uses the command substitution operator $(...). It runs the command in ..., and returns its output. In this case the command is head -1 file that outputs the first line of the file. The output is then assigned to the line variable. Using $(...) is exactly the same as `...`, so you could have also written:

$ line=`head -1 file`

However $(...) is the preferred way in bash as it's cleaner and easier to nest.
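Nesting shows why $(...) is cleaner. Compare:

```shell
# $(...) nests without any escaping
echo $(basename $(dirname /path/to/file.ext))    # to

# with backticks the inner pair must be escaped
echo `basename \`dirname /path/to/file.ext\``    # to
```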

4. Read a file line-by-line

$ while read -r line; do
    # do something with $line
done < file

This is the one and only right way to read lines from a file one by one. This method puts the read command in a while loop. When the read command encounters end-of-file, it returns a non-zero exit code (failure) and the while loop stops.
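You can see read's exit code at work by running it against a line of input and then against empty input:

```shell
# read succeeds (exit code 0) when it gets a line...
echo "hello" | { read -r line; echo "exit code: $?"; }   # exit code: 0

# ...and fails (non-zero exit code) at end-of-file, ending the while loop
read -r line < /dev/null; echo "exit code: $?"           # exit code: 1
```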

Remember that read trims leading and trailing whitespace, so if you wish to preserve it, clear the IFS variable:

$ while IFS= read -r line; do
    # do something with $line
done < file

If you don't like to put < file at the end, you can also pipe the contents of the file to the while loop:

$ cat file | while IFS= read -r line; do
    # do something with $line
done

Note, however, that piping into the loop runs the loop in a subshell, so variables you set inside it won't be visible after the loop ends.

5. Read a random line from a file and put it in a variable

$ read -r random_line < <(shuf file)

There is no clean way to read a random line from a file with just bash, so we'll need to use some external programs for help. If you're on a modern Linux machine, then it comes with the shuf utility that's in GNU coreutils.

This one-liner uses the process substitution <(...) operator. This process substitution operator creates an anonymous named pipe, and connects the stdout of the process to the write part of the named pipe. Then bash executes the process, and it replaces the whole process substitution expression with the filename of the anonymous named pipe.

When bash sees <(shuf file) it opens a special file /dev/fd/n, where n is a free file descriptor, then runs shuf file with its stdout connected to /dev/fd/n and replaces <(shuf file) with /dev/fd/n so the command effectively becomes:

$ read -r random_line < /dev/fd/n

Which reads the first line from the shuffled file.
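You can actually see the filename that bash substitutes by echoing a process substitution (the exact descriptor number varies):

```shell
echo <(true)    # prints something like /dev/fd/63
```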

Here is another way to do it with the help of GNU sort. GNU sort takes the -R option that randomizes the input.

$ read -r random_line < <(sort -R file)

Another way to get a random line in a variable is this:

$ random_line=$(sort -R file | head -1)

Here the file gets randomly sorted by sort -R and then head -1 takes the first line.

6. Read the first three columns/fields from a file into variables

$ while read -r field1 field2 field3 throwaway; do
    # do something with $field1, $field2, and $field3
done < file

If you specify more than one variable name to the read command, it splits the line into fields (splitting is done based on what's in the IFS variable, which contains space, tab, and newline by default), puts the first field in the first variable, the second field in the second variable, etc., and puts all the remaining fields in the last variable. That's why we have the throwaway variable after the three field variables. If we didn't have it, and the file had more than three columns, the third variable would also get all the leftovers.

Sometimes it's shorter to just write _ for the throwaway variable:

$ while read -r field1 field2 field3 _; do
    # do something with $field1, $field2, and $field3
done < file

Or if you have a file with exactly three fields, then you don't need it at all:

$ while read -r field1 field2 field3; do
    # do something with $field1, $field2, and $field3
done < file

Here is an example. Let's say you wish to find out the number of lines, the number of words, and the number of bytes in a file. If you run wc on a file you get these three numbers plus the filename as the fourth field:

$ cat file-with-5-lines
x 1
x 2
x 3
x 4
x 5

$ wc file-with-5-lines
 5 10 20 file-with-5-lines

So this file has 5 lines, 10 words, and 20 chars. We can use the read command to get this info into variables:

$ read lines words chars _ < <(wc file-with-5-lines)

$ echo $lines
$ echo $words
$ echo $chars

Similarly you can use here-strings to split strings into variables. Let's say you have a string "20 packets in 10 seconds" in a $info variable and you want to extract 20 and 10. Not too long ago I'd have written this:

$ packets=$(echo $info | awk '{ print $1 }')
$ time=$(echo $info | awk '{ print $4 }')

However given the power of read and our bash knowledge, we can now do this:

$ read packets _ _ time _ <<< "$info"

Here the <<< is a here-string, which lets you pass strings directly to the standard input of commands.
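Here-strings work with any command that reads standard input, not just read. For example:

```shell
# count words in a string without echo or a pipe
wc -w <<< "foo bar baz"              # 3

# split a string into variables
read -r first rest <<< "one two three"
echo "$first"                        # one
echo "$rest"                         # two three
```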

7. Find the size of a file, and put it in a variable

$ size=$(wc -c < file)

This one-liner uses the command substitution operator $(...) that I explained in one-liner #3. It runs the command in ..., and returns its output. In this case the command is wc -c < file that prints the number of chars (bytes) in the file. The output is then assigned to the size variable.

8. Extract the filename from the path

Let's say you have the path /path/to/file.ext in the $path variable, and you wish to extract just the filename file.ext. How do you do it? A good solution is to use the parameter expansion mechanism:

$ filename=${path##*/}

This one-liner uses the ${var##pattern} parameter expansion. This expansion tries to match the pattern at the beginning of the $var variable. If it matches, then the result of the expansion is the value of $var with the longest matching pattern deleted.

In this case the pattern is */ which matches at the beginning of /path/to/file.ext and as it's a greedy match, the pattern matches all the way till the last slash (it matches /path/to/). The result of this expansion is then just the filename file.ext as the matched pattern gets deleted.
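Here's the greedy ## next to its non-greedy cousin #, which deletes the shortest matching pattern instead:

```shell
path=/path/to/file.ext
echo "${path##*/}"    # file.ext - longest */ match deleted (/path/to/)
echo "${path#*/}"     # path/to/file.ext - shortest */ match deleted (just /)
```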

9. Extract the directory name from the path

This is similar to the previous one-liner. Let's say you have /path/to/file.ext in the $path variable, and you wish to extract just the path to the file, /path/to. You can use parameter expansion again:

$ dirname=${path%/*}

This time it's the ${var%pattern} parameter expansion that tries to match the pattern at the end of the $var variable. If the pattern matches, the result of the expansion is the value of $var with the shortest matching pattern deleted.

In this case the pattern is /*, which matches at the end of /path/to/file.ext (it matches /file.ext). The result then is just the dirname /path/to as the matched pattern gets deleted.
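And here's % next to its greedy cousin %%, which deletes the longest matching pattern instead:

```shell
path=/path/to/file.ext
echo "${path%/*}"     # /path/to - shortest /* match deleted (/file.ext)
echo "${path%%/*}"    # empty - longest /* match deleted (the whole path)
```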

10. Make a copy of a file quickly

Let's say you wish to copy the file at /path/to/file to /path/to/file_copy. Normally you'd write:

$ cp /path/to/file /path/to/file_copy

However you can do it much quicker by using the brace expansion {...}:

$ cp /path/to/file{,_copy}

Brace expansion is a mechanism by which arbitrary strings can be generated. In this particular case /path/to/file{,_copy} expands to the two words /path/to/file and /path/to/file_copy, and the whole command becomes cp /path/to/file /path/to/file_copy.
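You can preview what a brace expansion generates by passing it to echo:

```shell
echo /path/to/file{,_copy}    # /path/to/file /path/to/file_copy
echo file.{txt,md,pdf}        # file.txt file.md file.pdf
```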

Similarly you can move a file quickly:

$ mv /path/to/file{,_old}

This expands to mv /path/to/file /path/to/file_old.


Enjoy the article and let me know in the comments what you think about it! If you think that I forgot some interesting bash one-liners related to file operations, let me know in the comments as well!



Person Permalink
June 01, 2012, 17:09

""If you don't like to put < file at the end, you can also pipe the contents of the file to the while loop:""

Or just stick it at the front:

$ < file while IFS= read -r line; do
    echo $line
done

$ packets=$(echo $info | awk '{ print $1 }')
$ time=$(echo $info | awk '{ print $4 }')

awk is bogus, use cut instead

June 01, 2012, 17:38

You can't put it in the front because you must put redirections at the end of a compound command. You can put them anywhere in a simple command. Also it should always be echo "$line" not echo $line because of word splitting happening if you don't quote it. I don't get your comment about awk being bogus.

June 01, 2012, 19:24

Here's how to get lowercase or uppercase from a variable. Let's see an example:

USER is a shell variable with the current user. For the purpose USER="admin".

- Uppercase
echo ${USER^}  # Admin (min match)
echo ${USER^^} # ADMIN (max match)

Now we have the opposite situation, USER="DBADMIN".

- Lowercase
echo ${USER,}  # dBADMIN (min match)
echo ${USER,,} # dbadmin (max match)

Most people would be amazed by the functionalities you can find in bash.

Nice post ;)

June 01, 2012, 19:46

These are awesome! I'll cover these when I write about working with strings.

June 03, 2012, 10:21

I didn't know that I would need it!
Thanks! :-)

Gaurav Permalink
June 05, 2012, 17:06

Nice one... really, this is an amazing thing. It also works on other variables:

$ echo $SHELL

$ echo ${SHELL^^}

June 08, 2012, 10:51

Be careful with using this feature of bash, as it requires bash version 4 or higher. So on a mac (including Lion), this functionality isn't there.

dave @ [ bahamas10 :: (Darwin) ] ~ $ echo "$BASH_VERSION"
dave @ [ bahamas10 :: (Darwin) ] ~ $ echo "${BASH_VERSION^^}"
-bash: ${BASH_VERSION^^}: bad substitution
dave @ [ bahamas10 :: (Darwin) ] ~ $ 

The only safe way to do this in older versions of bash is to use tr, and even then, to make sure you are using a locale-safe way of translating text.


dave @ [ bahamas10 :: (Darwin) ] ~ $ echo "$BASH_VERSION" | tr '[[:lower:]]' '[[:upper:]]'

Ugly, I know, but safe.

June 01, 2012, 19:27

8: You could also use basename(1):

filename=$(basename $path)

June 01, 2012, 19:46

Oh right. And for 9 also dirname=$(dirname $path). I'll update the article with these!

Richard Michael Permalink
June 01, 2012, 20:54

basename and dirname are not bash built-ins. So what makes 8 and 9 interesting is the param expansion solution - thanks!

June 02, 2012, 10:50

Also true!

Gustavo Chaves Permalink
June 01, 2012, 20:30

Very interesting. I learned a few very useful bits from this article. Thanks.

Perhaps you should mention that item 4's last solution deserves a Useless Use Of Cat Award, though. :-)

June 01, 2012, 20:54

Great to hear you found it useful!

Michiel Permalink
June 01, 2012, 22:54

Hi Peter,

great reading as always. I wondered if you had a solution for a different user preference in #10: I personally prefer to keep the extension intact when copying or renaming the file.

I.e. cp /path/to/file.ext /path/to/file_bk.ext

Probably not something I want to do by hand every time, but would be great as a small bash script e.g. 'bk' to first create a safe backup of a file I want to mess with and then automatically launch vim to edit the original.


June 02, 2012, 17:57

What do you think about this?:

cp /long/path/to/file{,_bk}.ext

June 02, 2012, 20:32

Here is a function called bk that does it. It works with files both of the form file.ext and of the form file (without an extension). It also checks that the destination file doesn't exist. If it does, it appends _bk until such a file doesn't exist. After it has copied the file, it opens the original in vim.

function bk {
  if [[ -z $1 ]]; then
    echo "Usage: bk <file>"
    return 1
  fi

  file=$(basename "$1")
  dir=$(dirname "$1")
  bk="_bk"

  while :; do
    case $file in
      *.*) newfile=$(echo $file | sed 's/\(.*\)\.\(.*\)/\1'$bk'.\2/') ;;
      *)   newfile="$file$bk" ;;
    esac
    if [[ ! -e "$dir/$newfile" ]]; then
      break
    fi
    bk="${bk}_bk"
  done

  cp "$1" "$dir/$newfile"
  if [[ $? -eq 0 ]]; then
    vim "$1"
  fi
}

June 03, 2012, 10:24

Helpful script :-)

Michiel Permalink
June 04, 2012, 21:32

Awesome. That is a tutorial in and by itself! The infinite loop is a construct I hadn't used before in bash.

Thank you so much.

jalal hajigholamali Permalink
June 02, 2012, 12:47


I recently received 'Bash One-Liners Explained, Part I: Working with files '

this is a very good and useful article

thanks a lot

Cam Hutchison Permalink
June 04, 2012, 11:40

You have to be careful doing a read loop over stdin, as any programs inside the loop will also have their stdin attached to the same source as the read command. It occasionally produces screwy results and causes a lot of head scratching.

An alternative is to read from a different file descriptor:

exec 3< input_file.txt  # open input_file.txt on fd 3
while read -u 3 -r line; do
    # do stuff here
done
exec 3<&-  # close fd 3

This potentially has the same problem because fd 3 will be inherited by the programs in the loop, but there is very little likelihood that they will try and read from it, as they may with fd 0 (stdin).

And now we're getting even further from a one-liner...

June 04, 2012, 20:30

Great comment!

Gaurav Permalink
June 05, 2012, 15:46

Hi Peter, these are all very nice and useful descriptions...

Numbers 8 and 9 are not working on my system... what could be the reason?

and one more interesting command i would like to give that will give you size of each directory recursively.

find ./ -type d -exec sh -c "echo -n {}' ' ; du -sh {}" \;

June 08, 2012, 10:48

Great post, you covered awesome stuff, and unlike most bash tips, you actually understand bash and all the gotchas it has.

One quick note on #4, where you cat the file and | into while read line, be careful with this approach. The while loop is now created in a subshell, so any variables modified within the context of the while loop will not persist when the loop exits.

ex. (let's say foo.txt has 10 lines)

cat foo.txt | while read line; do ((i++)); done
echo "$i"

will print 0.

April 15, 2014, 07:33

Do it like this:
i=0;cat foo.txt | (while read line; do ((i++)); done; echo "$i")

Peteches Permalink
June 12, 2012, 16:32

nice work, I Love bash scripting, looking forward to the rest of this series!

For number 4 though you say that the while read line format is the only way to read in a file line by line, however bash 4.x mapfile allows you to read in a file to an indexed array more efficiently, you can then run the for loop over the array. eg

mapfile ARRAY < file

There are also a number of options which allow you to control input
-s count - skip the first count lines
-n count - read in at most count lines
-c quanta - set a quanta for the -C option
-C command - run command every quanta lines passing the index of the array about to be assigned
-O index - start assigning at array[index] instead of 0
-t - strip trailing newline
-u fd - read from file descriptor fd instead of stdin

excellent alternative to the while read loop in my opinion


Vishal Permalink
June 20, 2012, 13:12

Why < < is required in:
read -r line < <(head -1 InstId.txt)
where as not in:
diff -q <(sort InstId.txt | uniq) <(sort secId.txt | uniq)
or not in,
comm -12 <(sort InstId.txt | uniq) <(sort secId.txt | uniq)

peteches Permalink
June 21, 2012, 11:46

Hi Vishal

When you use the <( cmd ) construct bash replaces the <( cmd ) with the path to the fd of cmd's stdout. eg

[0]pete.uttley@jackfrog::1901$ echo <(cat /tmp/frog-uuid-patch )

as diff takes file paths as arguments it can handle this with no problem. It doesn't take them as stdin.

Read expects stdin to assign to its variable, so you need the extra < to redirect the contents of the named pipe into read's stdin.

charith Permalink
June 21, 2012, 06:03

Thanks. perl1line.txt is awesome, and I'm waiting for bash1line.txt!

jadenity Permalink
August 24, 2012, 06:41

Thanks for the helpful introduction! Also thanks to everyone for the very helpful comments/additions!

Liam Permalink
November 29, 2012, 00:44

No mention of for loops? For example,

for f in *.avi; do ffmpeg -i "$f" -vn "${f/%avi/mp3}"; done

will convert every avi file in a directory into an MP3. I'm sure you could come up with a more appropriate example using standard bash commands.

Other than that, this is a really neat collection of tips and I've learned some things that I'll use in the future, thank you.

October 16, 2013, 12:12

Good article. May I translate it into Chinese and share it with more friends?

Todd Ramsey Permalink
October 28, 2013, 12:23


I am trying to use the example in #6 to read a file with pairs of machine names to do a diff. The list looks like this:

machine1a machine1b
machine2a machine2b

Here is the relevant code section for that:

cat /path/to/filename | while read -r aside bside; do
    echo $aside
    echo $bside
    echo $aside >> localDiffList
    echo $bside >> localDiffList
    diff <(ssh root@$aside '/bin/cat /path/to/configfile') <(ssh root@$bside '/bin/cat /path/to/configfile') >> localDiffList
    echo >> localDiffList
done

The problem seems to be that after it reads the first pair I guess it gets a return of 0 and then exits out. How can I get this to read all of the lines in the file? Thanks.

Todd Ramsey Permalink
October 28, 2013, 16:27

If anyone sees this then disregard. I found another solution.

December 09, 2013, 18:07

cat file | while... should be avoided. It isn't a like or dislike but is technically different. The pipe creates a subprocess, which has overhead and variable scope issues.

This is only a comment, but it would be better to show them double-quoted. The comment might change to an echo and newbies might get the wrong idea. Quoting for splitting, expansion, and special characters is probably the biggest gotcha and the most misunderstood bash topic.

# do something with $field1, $field2, and $field3
echo do something with "$field1", "$field2", and "$field3"

These should be quoted too; instead of:

echo $lines
echo $words
echo $chars

use:

echo "$lines"; echo "$words"; echo "$chars"

December 09, 2013, 19:51

Cam's could also be written:

while read -r line <&3; do
# do stuff here
done 3< input_file.txt

I don't believe the explicit close is necessary.
I added lsof -c "${0##*/}" >log"${id}" in and outside the
loop and fd3 was not present after the while.

April 15, 2014, 02:57

I cannot be super fast in the shell because I need more than an hour to figure out how to assign a string ending in a newline to a variable.

pingulino=´$(echo -en "tqbfjotld\n")´

What kind of escape-quote combination must I use instead of ´ and "? Is that the problem, or is it something else?

And everything is like that. The worst thing is that I have no clue where to find that basic information.

April 15, 2014, 04:49

You write very clearly, thank you, but this time I think something like this would be clearer

>>"Let's say you have a /path/to/file.ext, and you wish to extract just the filename "file.ext". How do you do it? A convenient solution is to assign the whole path to a variable (mypath in this case) and use the parameter expansion mechanism:

$ filename=${mypath##*/}

This one-liner uses the..."

Surya Sabulal Permalink
December 30, 2018, 16:14

It was a really helpful article!! Really feeling grateful to have come across this article.. Thanks a lot for all the efforts put in for sharing this knowledge!!

