In this post we will see why the file system underlying TileCache can become an issue because of the high number of tiles it generates.
Currently, the main Linux distros use the Ext3 file system by default, which works with a fixed number of inodes and, hence, supports a limited number of files. Section 1 presents the problem in more detail. Section 2 presents two alternative file systems, JFS and XFS, which are less prone to running out of inodes.
1. “No space left on device”: running out of inodes.
TileCache can generate a large number of tiles. Suppose we have a map resolution of 285 m/pixel, a scale quantization factor of 2, 10 zoom levels, and a cached area of 16,200 square km; then, for each layer, the number of tiles will be over 1,000,000. If, for instance, 25 layers must be cached, the grand total will be over 25,000,000 tiles. When tiles are stored in the file system – cache types ‘Disk’ or ‘GoogleDisk’ – the obvious problem we have to deal with is ensuring that the file system has room for all the tiles.
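As a sanity check, the figures above can be approximated with a short script. This is only a back-of-the-envelope sketch: it assumes 256×256-pixel tiles and estimates the tile count per level as the cached area divided by the footprint of one tile; all numeric values are the ones from the example, not TileCache defaults.

```python
TILE_PX = 256            # assumed tile size in pixels
base_res = 285.0         # m/pixel at the coarsest zoom level
area_km2 = 16_200.0      # cached area
levels = 10              # zoom levels
layers = 25              # layers to cache

total = 0.0
for i in range(levels):
    res = base_res / 2 ** i                 # scale quantization factor of 2
    tile_side_km = TILE_PX * res / 1000.0   # ground footprint of one tile
    total += area_km2 / tile_side_km ** 2   # approximate tiles at this level

print(f"tiles per layer: {total:,.0f}")           # a bit over 1,000,000
print(f"tiles for {layers} layers: {total * layers:,.0f}")
```

Each zoom level halves the resolution, so the tile count per level grows by a factor of 4; the sum is dominated by the finest level, which is why adding one more zoom level roughly quadruples the cache size.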
However, there is a less obvious way a file system can run out of space: all of its inodes[2, 3] have been consumed. An inode “is a data structure on a filesystem on Linux and other Unix-like operating systems that stores all the information about a file except its name and its actual data.” File systems like Ext3 or Ext4[4, 5] use inodes in such a way that their number is fixed and cannot be easily altered [note 1]. Thus, given that each file consumes one (and only one) inode, it is possible to run out of them even though the partition still has unused space. In a scenario like this, error messages like “No space left on device” will be seen.
$ df -i /path/to/tiles/dir/
Filesystem            Inodes    IUsed   IFree IUse% Mounted on
/dev/hdc1           13762560 13762560       0  100% /path/to/tiles/dir/
$ df -h /path/to/tiles/dir/
Filesystem            Size  Used Avail Use% Mounted on
/dev/hdc1             207G  151G   46G  77% /path/to/tiles/dir/
$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>      <options>         <dump> <pass>
proc            /proc           proc        defaults          0      0
/dev/hda1       /               ext3        errors=remount-ro 0      1
/dev/hda9       /home           ext3        defaults          0      2
/dev/hda8       /tmp            ext3        defaults          0      2
/dev/hda5       /usr            ext3        defaults          0      2
/dev/hda6       /var            ext3        defaults          0      2
/dev/hda7       none            swap        sw                0      0
/dev/hdc        /media/cdrom0   udf,iso9660 user,noauto       0      0
/dev/fd0        /media/floppy0  auto        rw,user,noauto    0      0
/dev/hdc1       /path/to/tiles/dir/ ext3    defaults          0      2
2. Two suitable file systems for TileCache.
When using Linux – and likely other flavors of Unix – there are file systems other than Ext3 that can be suitable for TileCache. Two of them are IBM’s Journaled File System and Silicon Graphics’ XFS.
2.1 IBM’s Journaled File System.
IBM’s Journaled File System[6, 7] (JFS) is a 64-bit, metadata-only journaling file system created by IBM and licensed under the GNU General Public License since December 1999. One of the main features of JFS is dynamic inode allocation, which means that (a) space for new disk inodes is allocated as necessary and (b) unused inodes are freed when no longer needed. Thus, we do not have to worry about the number of available inodes.
Support for JFS has been available in the Linux kernel since version 2.4.18pre9-ac4 (year 2002). Many Linux distros support JFS, among them Debian since its version 3.0, RedHat since 7.3, SuSE since 7.3, TurboLinux since 7.0…
2.2 Silicon Graphics’ XFS.
Silicon Graphics’ XFS [note 2] is a 64-bit, metadata journaling file system created by Silicon Graphics [note 3] and licensed under the GNU General Public License since May 2000. This file system also seems a suitable candidate because it performs dynamic inode allocation, which suggests that the number of inodes is not an issue. However, according to an entry in the XFS FAQ, their number is limited when using 32-bit inodes:
By default, with 32bit inodes, XFS places inodes only in the first 1TB of a disk. If you have a disk with 100TB, all inodes will be stuck in the first TB. This can lead to strange things like “disk full” when you still have plenty space free, but there’s no more place in the first TB to create a new inode. Also, performance sucks.
To come around this, use the inode64 mount options for filesystems >1TB. Inodes will then be placed in the location where their data is, minimizing disk seeks.
Beware that some old programs might have problems reading 64bit inodes, especially over NFS. Your editor used inode64 for over a year with recent (openSUSE 11.1 and higher) distributions using NFS and Samba without any corruptions, so that might be a recent enough distro.
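For reference, the `inode64` behaviour described in the FAQ is enabled at mount time. A sketch of both ways to do it – the device name and mount point below are placeholders, not values from this setup:

```shell
# One-off mount of a >1 TB XFS volume with 64-bit inode allocation
mount -o inode64 /dev/sdb1 /path/to/tiles

# Or persistently, as a line in /etc/fstab:
# /dev/sdb1   /path/to/tiles   xfs   inode64   0   2
```

This is a configuration fragment rather than something to run as-is; as the FAQ warns, check that all clients (especially over NFS) can handle 64-bit inode numbers first.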
Support for XFS has been available in the Linux kernel since version 2.4.25. Many Linux distros support XFS, among them are Debian, Ubuntu, SuSE, Fedora Core, Gentoo and Slackware.
2.3 Some notes about file system benchmarks.
Justin Piszcz[9, 10] and Hans Ivers[11] have published file system benchmarks comparing the Ext2, Ext3, XFS, JFS, Reiser3 and Reiser4 file systems. Both authors’ choice was XFS.
While JFS is not the fastest file system, it is not a slow one either: it shows good performance under many different kinds of load. Another remarkable feature of JFS is that its CPU usage is very low, even under heavy disk activity. It has been reported that JFS performs better when the Linux kernel uses the Deadline I/O scheduler [12, 13].
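On a recent kernel the active scheduler can be inspected and changed through sysfs. The commands below are a sketch – the device name `sda` is a placeholder, and writing to sysfs requires root:

```shell
# Show the available schedulers for a disk; the active one is in brackets
cat /sys/block/sda/queue/scheduler

# Switch that disk to the deadline scheduler (takes effect immediately)
echo deadline > /sys/block/sda/queue/scheduler

# To make it the default for all disks, add elevator=deadline
# to the kernel command line in the boot loader configuration
```

The sysfs change is not persistent across reboots, hence the kernel command-line alternative.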
[note 1] The number of inodes depends on the partition size and the bytes-per-inode ratio. This ratio is set at file system creation time, typically when formatting the partition. Once the file system has been created, it seems that the number of inodes cannot be altered without re-creating the file system. It also seems that partition resizing – a tricky and risky task – will not alter the number of inodes.
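If re-creating an Ext3 file system for the tile cache is an option, the bytes-per-inode ratio can be lowered at creation time so that more inodes are available. A sketch – the device name is a placeholder, the default ratio quoted is the usual `mke2fs` default but may differ per distro, and the command destroys all existing data:

```shell
# mkfs.ext3's -i flag sets the bytes-per-inode ratio; the common default
# (16384, from /etc/mke2fs.conf) yields one inode per 16 KB of disk space.
# Small tiles may call for a denser ratio, e.g. one inode per 4 KB:
mkfs.ext3 -i 4096 /dev/hdc1    # WARNING: destroys all data on /dev/hdc1

# Verify the resulting inode count afterwards:
tune2fs -l /dev/hdc1 | grep -i 'inode count'
```

Halving the ratio doubles the inode count at the cost of some disk space reserved for inode tables, so it is a trade-off worth sizing against the expected number of tiles.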
[note 3] An interesting article, divided into six chapters, on the rise and fall of Silicon Graphics (also known as SGI) is “What led to the Fall of SGI?”.
[1] Types of TileCache caches: http://tilecache.org/docs/Caches.html
[2] inode, Wikipedia entry: http://en.wikipedia.org/wiki/Inode
[3] inode definition, Linux Information Project: http://www.linfo.org/inode.html
[4] “Ext4”, Linux Kernel Newbies: http://kernelnewbies.org/Ext4
[5] “Anatomy of ext4”, IBM developerWorks: http://www.ibm.com/developerworks/linux/library/l-anatomy-ext4/
[6] “IBM Journaled File System”, Wikipedia entry: http://en.wikipedia.org/wiki/IBM_Journaled_File_System_2_(JFS2)
[7] “JFS for Linux”, documentation: http://jfs.sourceforge.net/jfslldoc.html
[8] “XFS filesystem”, Wikipedia entry: http://en.wikipedia.org/wiki/XFS
[9] “Benchmarking Filesystems”, Linux Gazette issue #102: http://linuxgazette.net/102/piszcz.html
[10] “Benchmarking Filesystems Part II”, Linux Gazette issue #122: http://linuxgazette.net/122/TWDT.html#piszcz
[11] “Filesystems (ext3, reiser, xfs, jfs) comparison on Debian Etch”, Debian Administration: http://www.debian-administration.org/articles/388
[12] “JFS Filesystem”, archlinux.org: https://wiki.archlinux.org/index.php/JFS_Filesystem#Deadline_I.2FO_Scheduler
[13] “Kernel Korner – I/O Schedulers”, linuxjournal.com: http://www.linuxjournal.com/article/6931?page=0,2