How can I count all the files on my system from root?

I want to know the total number of files on my server. Can this be done?

Here are the solutions:

Solution 1

Depending on what exactly you want to count, you are better off doing this per filesystem rather than counting all files under root. Counting everything under root would also include /proc and /sys files, which you may not want.

To count everything on the root filesystem using GNU find, you could do:

find / -xdev -type f -printf '\n' | wc -l

The -printf '\n' just prints a newline for every file found, instead of the filename. This way there are no problems with filenames that contain newlines, which would otherwise be counted as multiple files.
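
If you want to see the pitfall for yourself, here is a quick demonstration in a throwaway directory (bash quoting, GNU find; the file name is made up):

cd "$(mktemp -d)"
touch $'bad\nname'
find . -type f | wc -l                 # prints 2: the newline splits one name across two lines
find . -type f -printf '\n' | wc -l    # prints 1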

With a POSIX find you could simply do:

find / -xdev -type f | wc -l

Or, staying POSIX while avoiding files whose names contain newlines being counted more than once:

{ printf 0; find / -xdev -type f -exec sh -c 'printf "+ $#"' sh {} +; echo; } | bc

Here each file becomes a separate argument to sh, which then prints the total number of its arguments. In case more than one sh process is invoked, as will happen with many files, each sh's output is summed by bc.
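
To see what bc receives, here is a minimal sketch of the same mechanism, with made-up numbers standing in for the per-batch argument counts:

{ printf 0; printf '+ %s' 3 4; echo; } | bc    # bc receives "0+ 3+ 4" and prints 7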

Update

A simpler (but slower) POSIX solution:

find / -xdev -type f -exec printf '%s\0' {} + | tr '\n\0' '?\n' | wc -l

Here each file name is printed NUL-terminated; tr then turns any embedded newlines into ? and the NUL terminators into newlines, so wc -l sees exactly one line per file.

Update 2

As noted by @Gilles, using -type f with find only counts regular files. To also include device files, you could use -type f -o -type b -o -type c. To count directories as well, don’t use any -type option.
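
Note that -o binds less tightly than the implicit -a, so when you combine the alternation with another action such as GNU find's -printf, group it with escaped parentheses. A sketch:

find / -xdev \( -type f -o -type b -o -type c \) -printf '\n' | wc -l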

Another point by Gilles was that files with multiple hard links will be counted as different files. This may not be desirable on a filesystem where, for example, incremental backup trees have been created by hard-linking unchanged files in a newer tree to those in an older one. To overcome this with GNU tools you could do:

find / -xdev -type f -printf '%i\n' | sort -u | wc -l

Using POSIX tools:

find / -xdev -type f -exec ls -iq {} + | sort -buk 1,1 | wc -l

There are no problems with newlines in filenames here, since the -q option to ls replaces them with ?.
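
You can see the substitution with a quick test (bash quoting; the file name is made up):

touch $'bad\nname' ; ls -q    # the file is listed as bad?name, on a single line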

Solution 2

df -i / gives you the number of used inodes on the root filesystem (on filesystems that use inodes, which includes the ext2/ext3/ext4 family but not btrfs). This is the number of files on that filesystem, plus a few inodes that are preallocated for the use of fsck.
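
If you just want the bare number, here is a small sketch that extracts the used-inode column (assuming the usual GNU df column layout, where IUsed is the third field):

df -i / | awk 'NR==2 {print $3}'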

If you want the number of files in a directory tree, then you can use the following command:

du -a /some/directory | wc -l

Add the option -x to du if you don’t want to traverse mount points, e.g. du -ax / | wc -l. Note that this will return a larger count if you have file names containing newlines (a bad idea, but not impossible).

Another way to count is

find /some/directory | wc -l

or, to cope with file names containing newlines, with GNU find (non-embedded Linux or Cygwin):

find /some/directory -printf . | wc -c

Add -xdev (e.g. find /some/directory -xdev -printf .) to skip mount points. Note that this counts directory entries, not files: if a file has multiple hard links, then each link is counted (whereas the df and du methods count files, i.e. multiple hard links to the same file are only counted once).
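
If you want this find-based count to treat hard links the way df and du do (one count per file), you can deduplicate on inode numbers with GNU find, as in Solution 1:

find /some/directory -xdev -printf '%i\n' | sort -u | wc -l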

Solution 3

Update

This is the fastest fully portable method I can imagine. I use tar below because it will automatically add hard-linked files only once (the file list is fed to tar on stdin via its -T option):

find /./ -xdev ! -type d | tar -cf - -T - | tar -t | sed -n '\|^/\./|c.' | wc -l

Portable and very fast:

find / -xdev | sed -n '/^[./]/c\.' | wc -l

I don’t believe you need all of the rest, though @Graeme was correct that the version above can miscount: a fragment after an embedded newline may itself begin with a dot and match. This version, however, does not have the same shortcoming:

find /./ -xdev | sed -n '\|^/\./|c.' | wc -l

All you need to do is ensure a full path to root and you don’t have to jump through all of the other hoops. Every real record in find’s output then begins with /./, while a continuation line produced by a newline in a file name never can (file names cannot contain a slash), so the sed address matches exactly once per file.

NOTE: As Gilles points out, using -type f is an egregious error. I have updated this post to reflect this.

Also for a more accurate count, you need only do:

du / --inodes -xs

Provided your tools are recent (du gained --inodes in GNU coreutils 8.22), that will give you the exact number of inodes in use on your root filesystem. This can be used for any directory as well.
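
For example, to count the inodes under a single directory tree without crossing mount points:

du --inodes -xs /some/directory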

Here’s a means of getting an exact count of all files in any filesystem or subdirectory, excluding hard links, with only very commonly available tools. First, isolate the target root with a bind mount:

mkdir /tmp/ls ; sudo mount -B / $_ ; cd $_

Next, count your files:

sudo ls -ARUi1 ./ | 
grep -o '^ *[0-9]*' | 
sort -un | 
wc -l
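
When you are done, undo the bind mount and remove the scratch directory:

cd / ; sudo umount /tmp/ls ; rmdir /tmp/ls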

I did something very similar to this for another answer a few days ago, though that one was a little more complicated because the goal was to sort subdirectories by the highest file counts, so the command there looks a little different from the one here. It is still very fast, by the way.

Anyway, the heart of that is the ls command. It -Recursively searches the entire tree, listing -Almost-all -inodes -Unsorted at -1 file per line. Then we grep -only the inode [0-9]numbers, sort on -unique -numbers and wcount -lines.

Solution 4

From your root directory:

find . -type f | wc -l

You can change the path (here .) to whatever directory you want to count the files in.
If you don’t want to descend into subdirectories, add the option -maxdepth 1, as shown below.
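
For example, counting only the files directly in the current directory:

find . -maxdepth 1 -type f | wc -l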

All methods were sourced from stackoverflow.com or stackexchange.com and are licensed under CC BY-SA 2.5, CC BY-SA 3.0, or CC BY-SA 4.0.
