Tag: TileCache

TileCache and the Ext3, JFS and XFS file systems.

Posted by – Friday 2010-12-24

In this post we will see why the underlying file system used by TileCache can be an issue because of the high number of generated tiles.

Currently, the main Linux distros use the Ext3 file system by default, which works with a fixed number of inodes and, hence, a limited number of files. In section 1 the problem is presented in more detail. In section 2, two ‘alternative’ file systems, JFS and XFS, which are less prone to running out of inodes, are presented.

1. “No space left on device”: running out of inodes.

TileCache can generate a large number of tiles. Let’s suppose we have a map resolution of 285 m/pixel, a scale quantization factor equal to 2, 10 zoom levels, and a cached area of 16,200 square km; then, for each layer the number of tiles will be over 1,000,000. Say, for instance, that we have 25 layers that must be cached; then, the grand total will be over 25,000,000 tiles. In case tiles are stored in the file system – cache types[1] ‘Disk’ or ‘GoogleDisk’ – the obvious problem we have to deal with is ensuring that the file system used will have room for all the tiles.
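To get a feel for these numbers, the tile count above can be estimated and then compared against the free inodes of the cache file system. The sketch below assumes a square cached area, and uses ‘/’ as a stand-in for the actual cache path:

```python
import os

def estimate_tiles(max_resolution_m, factor, levels, tile_px, area_km2):
    """Approximate tile count per layer for a given cached area."""
    total = 0.0
    for z in range(levels):
        res = max_resolution_m / factor ** z    # metres per pixel at level z
        tile_side_km = tile_px * res / 1000.0   # tile edge length in km
        total += area_km2 / tile_side_km ** 2   # tiles needed at this level
    return int(total)

per_layer = estimate_tiles(285, 2, 10, 256, 16_200)
total_tiles = per_layer * 25
print(per_layer, total_tiles)  # over 1,000,000 and over 25,000,000

# Compare against the free inodes on the cache file system
# ("/" is a placeholder for the real cache path).
st = os.statvfs("/")
print("free inodes:", st.f_ffree)
print("enough inodes:", st.f_ffree > total_tiles)
```

On an Ext3 volume created with the default inode ratio, the last line can easily print False, which is exactly the problem this post is about.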

More…

Benchmarking TileCache’s metatiling.

Posted by – Saturday 2010-12-04

In WMS cache systems, metatiling consists of requesting tiles larger than the ‘regular’ tile size – for us, 256×256 pixels – and then splitting them into tiles of regular size. TileCache has this feature, and it can be used to dramatically speed up the WMS cache seeding process.

In the first section we will see how to enable TileCache metatiling on a Debian server. In the second section we will see a benchmark showing that speed-up in the seeding process, and a simple method to find the best metatile size. Finally, in the third section we will see how to avoid two frequent errors when using an instance of MapServer as the Web Map Service (WMS) server.

1. Enabling metatiling

In TileCache, metatiling is enabled in the configuration file – tilecache.cfg – and must be set for every layer we want to take advantage of this feature. Below a configuration file snippet is shown:

[cache]
type=Disk
base=/path/to/wms-cache/directory/

[municipalities]
debug=yes
type=WMSLayer
url=http://myserver/sdi-lugo?service=WMS&transparent=true
extension=png
size=256,256
bbox=580000,4688000,680000,4850000
layers=municipios
srs=EPSG:23029
extent_type=loose
maxResolution=285
levels=10
metaTile=yes
metaSize=15,15
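A quick bit of arithmetic shows what metaSize=15,15 means in practice (TileCache’s optional metaBuffer padding is left out of this sketch):

```python
# With metaSize=15,15 and 256x256 px tiles, one metatile request replaces
# 225 individual WMS GetMap requests.
tile_px = 256
meta_w, meta_h = 15, 15
request_px = (meta_w * tile_px, meta_h * tile_px)  # pixel size of one metatile request
tiles_per_request = meta_w * meta_h                # regular tiles cut from it
print(request_px, tiles_per_request)  # (3840, 3840) 225
```

So each request sent to the WMS server is a 3840×3840 px image, and the per-request overhead (HTTP round trip, map setup) is paid once per 225 tiles instead of once per tile.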

More…

Some TileCache deficiencies (and workarounds).

Posted by – Sunday 2010-11-14

TileCache, like any sufficiently complex piece of software, is not free of deficiencies. In this post I write about some TileCache features (or lack thereof) which, in my opinion, are deficiencies. These are: (a) a TileCache layer will only work with one set of map resolutions, (b) poor logging, (c) the TileCache seeder always stops on error, (d) the TileCache seeder cannot be restarted from a given point, and (e) the lack of support for distributed computing. In some cases a workaround to partially overcome these limitations is given.

TileCache version 2.11 is covered in this post.

1. Only one set of map resolutions per TileCache layer.

By nature, WMS caches work with finite sets of map resolutions. In my opinion, one deficiency of TileCache is that a layer will only work with one set of map resolutions. For each layer, the set of map resolutions is set in the configuration file by using the ‘resolutions’ parameter or by giving both the ‘maxResolution’ and ‘levels’ parameters.
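The two forms are equivalent: the explicit ‘resolutions’ list is just a geometric sequence derived from the maximum resolution and the number of levels. The helper below is illustrative, not part of TileCache:

```python
def resolutions(max_resolution, levels, factor=2):
    """Generate the explicit resolutions list equivalent to
    giving maxResolution/levels with a fixed quantization factor."""
    return [max_resolution / factor ** z for z in range(levels)]

print(resolutions(200, 5))  # [200.0, 100.0, 50.0, 25.0, 12.5]
```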

The problem comes when you have to work with more than one set of map resolutions. Say you have a WMS layer named ‘municipalities-boundaries’ that is cached and must be shown at two different sets of map resolutions, say one with a maximum map resolution of 100 and the other with a maximum map resolution of 200. In this case there would be two possible workarounds. One of them is creating two TileCache layers, say ‘municipalities-boundaries-200’ and ‘municipalities-boundaries-100’, as shown in the following TileCache configuration file snippet:

More…

TileCache + lighttpd + FastCGI logging.

Posted by – Thursday 2010-09-30

In this post we will see how to configure the lighttpd HTTP server so that the activity of TileCache (version 2.10) gets written to the lighttpd log files when the two communicate via the FastCGI protocol.

Finally, we will see what seems to be a lighttpd bug that prevents the TileCache debug log from being written to the lighttpd log file, and how to work around this issue.

1. TileCache logging.

TileCache sends its debug output to the standard error stream by default. This output includes information like the requested tile (its bounding box and x, y, z coordinates, where z represents the zoom level), the time, and whether it was a cache hit or a cache miss. This information is indispensable for debugging or for estimating how long the seeding of a WMS cache will take.
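For instance, a back-of-the-envelope seeding estimate can be made from the timing information in the debug log. All figures below are hypothetical:

```python
# Hypothetical figures: the average render time per tile, read off the
# TileCache debug log, and the number of tiles still to be seeded.
avg_seconds_per_tile = 0.08
tiles_remaining = 1_000_000
hours = avg_seconds_per_tile * tiles_remaining / 3600
print(f"estimated seeding time: {hours:.1f} hours")  # 22.2 hours
```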

2. Configuring lighttpd to receive TileCache debug output.

The very first step in getting lighttpd to receive TileCache debug output is making sure that TileCache actually generates debug output. In the TileCache configuration file, the ‘debug’ parameter must be set to ‘yes’ for every layer whose debug output is desired.

[cache]
type=Disk
base=/mnt/geodata/tilecache/
 
[municipalities]
debug=yes
type=WMSLayer
url=http://myserver/sdi-lugo?service=WMS&transparent=true
extension=png
size=256,256
bbox=580000,4688000,680000,4850000
layers=municipalities
srs=EPSG:23029
extent_type=loose
maxResolution=200
levels=10

More…

Running two (or more) instances of TileCache.

Posted by – Thursday 2010-09-16

TileCache is a caching system that caches a Web Map Service (WMS) at a discrete set of map resolutions. Sometimes two (or more) WMS caches working at different sets of map resolutions are needed, and a possible solution is running two (or more) instances of TileCache.

In the first section we briefly introduce WMS caches and TileCache.

In the second section, we will see why running two instances of TileCache is needed. We have created a dynamic map, based on the OpenLayers library, which is embedded in a web page. Depending on the client’s display resolution, the map can be shown cropped. To avoid this, on map initialization the map resolution (map units per pixel) is changed as a function of the client’s display resolution. A side effect of this solution is that, if the WMS layers are cached, we will need to run more than one instance of TileCache.
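The adjustment itself is simple arithmetic: pick the resolution so that a fixed map extent fits the client’s display width. A sketch with illustrative numbers (the extent width matches the bbox used in the configuration snippets, 680000 − 580000 map units):

```python
# Choose the map resolution (map units per pixel) so that a fixed-width
# extent exactly fills the client's display width in pixels.
extent_width = 100_000  # e.g. bbox 580000..680000

def fit_resolution(display_px):
    return extent_width / display_px

print(fit_resolution(1024))  # ~97.66 units/px on a 1024 px wide display
print(fit_resolution(500))   # 200.0 units/px on a 500 px wide display
```

Because each such display width yields a different resolution set, and a TileCache layer only serves one set, the caches multiply, hence the need for more than one TileCache instance.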

In the third section we show how to install and configure one instance of TileCache on a Linux server.

Finally, in the fourth section the previous setup is generalized to install two instances of TileCache.

1. About WMS caches and TileCache.

Rendering WMS images is a CPU-intensive task, usually resulting in high load times on the client side. Thus, WMS cache systems, which store already rendered images and send them in response to WMS GetMap requests, are of interest for two reasons: on the server side, the rendering work already done is reused, and on the client side, load times decrease by orders of magnitude.

WMS caches are conceptually very similar to other types of caches. When a WMS GetMap request is received by a WMS cache, either a cache hit or a cache miss will happen. In case of a cache hit, the requested image is already rendered and stored in the WMS cache, and it is sent to the requesting client. In case of a cache miss, the requested image is not present in the WMS cache, so (a) the request is forwarded to the WMS server, (b) the WMS server renders the image, (c) the rendered image is sent to the WMS cache, (d) the rendered image is sent to the client, and (e) it is also stored in the WMS cache. Note that it is assumed that clients always send all their WMS requests to the WMS cache, not to the WMS server.
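The hit/miss flow above can be sketched in a few lines. Here ‘render’ stands in for the upstream WMS server and a dict for the cache store; the names are illustrative, not TileCache’s actual API:

```python
cache = {}

def get_map(key, render):
    if key in cache:      # cache hit: image already rendered and stored
        return cache[key]
    image = render(key)   # cache miss: forward the request to the WMS server
    cache[key] = image    # store the rendered image for next time
    return image          # and send it to the client

first = get_map(("municipalities", 3, 0, 0), lambda k: f"rendered {k}")
second = get_map(("municipalities", 3, 0, 0), lambda k: "never called")
print(first == second)  # True: the second request is a cache hit
```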

More…