How to delete a directory automatically when an executable is killed

Quite often, we run an executable that needs to read and write some temporary files. We usually create a temporary directory, run the executable there, and delete the directory when the script is done.

I want to delete the directory even if the executable is killed. I tried to wrap it in:

#!/bin/bash
dir=$(mktemp -d /tmp/foo.XXXXXXX) && cd "$dir" && rm -rf "$dir"
/usr/local/bin/my_binary

The idea is that when my_binary dies, the kernel will delete the directory, as the script is the last process holding that inode; but I can’t create any files in the already-deleted directory:

#!/bin/bash
dir=$(mktemp -d /tmp/foo.XXXXXXX) && cd "$dir" && rm -rf "$dir"
touch file.txt

outputs touch: file.txt: No such file or directory
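To make the distinction explicit (a minimal sketch, not from the original question, assuming bash on Linux): the unlinked directory remains usable as a working directory for the process already inside it, but no new entries can be created in it.

```shell
#!/bin/bash
# Sketch: an unlinked directory can still be the cwd,
# but creating new entries in it fails with ENOENT.
dir=$(mktemp -d /tmp/gone.XXXXXX)
cd "$dir" || exit 1
rm -rf "$dir"            # directory is unlinked but still our cwd
pwd                      # bash still prints the old logical path
touch file.txt 2>/dev/null && echo "created" || echo "cannot create"
```
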

The best I could come up with is to delete the temp directory when the process dies, trapping the most common exit paths, plus a cron job to clean up any leftovers:

#!/bin/bash
dir=$(mktemp -d /tmp/d.XXXXXX) && cd "$dir" || exit 99
trap 'rm -rf "$dir"' EXIT
/usr/local/bin/my_binary

Is there some simple way to create a really temporary directory that gets deleted automatically when the current binary dies, no matter what?


Solution 1

Your last example is the most fail-safe.

trap 'rm -rf "$dir"' EXIT

This will execute as long as the shell itself is still functional. Basically, SIGKILL is the only signal it won’t handle, since that forcibly terminates the shell. (Perhaps SIGSEGV too; I didn’t try, but it can be caught.)
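To illustrate, here is a small self-contained demo (not from the original answer; it assumes bash, which runs the EXIT trap even when the shell dies from a trappable fatal signal such as SIGTERM):

```shell
#!/bin/bash
# Demo: a child shell installs an EXIT trap and is then SIGTERMed.
# The trap still runs, so the temporary directory gets removed.
tmp=$(mktemp -d /tmp/demo.XXXXXX)

bash -c "trap 'rm -rf \"$tmp\"' EXIT; sleep 10" &
pid=$!

sleep 1                        # let the child install its trap
kill -TERM "$pid"              # trappable: the EXIT trap fires
wait "$pid" 2>/dev/null || true

[ -d "$tmp" ] && echo "still there" || echo "cleaned up"
```

Killing the child with SIGKILL instead would skip the trap and leave the directory behind, which is why the cron sweep is still worth keeping.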

If you don’t leave it up to the shell to clean up after itself, the only other alternative is to have the kernel do it. This is not normally a kernel feature, but there is one trick you can do, though it has its own issues:

#!/bin/bash
mkdir /tmp/$$
mount -t tmpfs none /tmp/$$   # mounting tmpfs requires root
cd /tmp/$$
umount -l /tmp/$$             # lazy unmount: detached now, destroyed when no longer in use
rmdir /tmp/$$

do_stuff

Basically you create a tmpfs mount, and then lazily unmount it. Once the script is done, it’ll be removed.
The downside, other than being overly complex, is that if the script dies for any reason before the unmount, you’ve now got a mount lying around.
This also uses tmpfs, which will consume memory. But you could make the process more complex by using a loopback filesystem instead, and removing the file backing it after it’s mounted.


Ultimately the trap is best as far as simplicity and safety go, and unless your script is regularly getting SIGKILLed, I’d stick with it.

Solution 2

You could use the wait builtin to wait for a background job to finish:

blah &
wait
rm -rf "$DIR"
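On its own, that snippet still misses cleanup when the shell is signalled; a hedged sketch of combining it with a trap (sleep stands in for blah, and the path is illustrative). The reason to background the job at all: bash defers trap execution until a foreground command finishes, whereas a trapped signal interrupts the wait builtin immediately.

```shell
#!/bin/bash
# Sketch: background the workload and wait for it, with an EXIT trap
# for cleanup. A trapped signal interrupts `wait` right away instead
# of being deferred until a foreground command completes.
DIR=$(mktemp -d /tmp/demo.XXXXXX) || exit 1
trap 'rm -rf "$DIR"' EXIT

sleep 1 &          # placeholder for the real workload (blah)
wait $!
# The EXIT trap removes "$DIR" when the script ends, however it ends.
```
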


All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
