Imagine you are a designer working on the next Lord of the Rings movie. You have seen thousands of images, graphics, and photos, yet you can recall only a few characteristics of each (perhaps one had a blue sky, another sand dunes). How do you find visually similar images? Or perhaps you are a journalist who needs to compare New Year celebrations from around the world. How do you find the right video shots? Visual Information Retrieval (VIR) is concerned with finding exactly this kind of visual imagery.

As databases have become more and more widespread, the demand for storing and querying images in them has appeared as well. However, specialized tools for these problems cannot be used in every case, for example when an image collection is only an extension of an existing large database of text data (e.g., police records). In such situations, storing the images in the legacy database is more cost-effective than procuring a special new database engine just for the images. In either case (but especially in the latter), image retrieval is based on a matching strategy, which in most cases is built around a particular matching algorithm.
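As a rough illustration of the legacy-database approach, the following Python sketch stores raw image bytes in a BLOB column next to existing textual data. This is only an assumption of how such a schema might look, not a prescription: the table, column, and file names (persons, photo, mugshot.jpg) are invented for this example, and SQLite is used simply because it needs no separate server.

    import sqlite3

    # Minimal sketch: extending an existing text-oriented table with an image
    # column instead of introducing a dedicated image database engine.
    # All names below are illustrative placeholders.
    conn = sqlite3.connect("registry.db")
    conn.execute(
        """CREATE TABLE IF NOT EXISTS persons (
               id     INTEGER PRIMARY KEY,
               name   TEXT NOT NULL,
               record TEXT,   -- the pre-existing textual data
               photo  BLOB    -- raw image bytes stored alongside it
           )"""
    )

    # "mugshot.jpg" is a placeholder for whatever image file is being registered.
    with open("mugshot.jpg", "rb") as f:
        image_bytes = f.read()

    conn.execute(
        "INSERT INTO persons (name, record, photo) VALUES (?, ?, ?)",
        ("J. Doe", "registered 1999-04-12", image_bytes),
    )
    conn.commit()

    # Retrieval is still driven by the textual columns; any image matching
    # strategy has to be applied by the application after the rows are fetched.
    row = conn.execute(
        "SELECT photo FROM persons WHERE name = ?", ("J. Doe",)
    ).fetchone()

The point of the sketch is that the database itself knows nothing about image content; the matching strategy lives entirely in the application layer.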

Storing images and, above all, retrieving them from databases differ from storing and retrieving ordinary, non-multimedia data. With the spread of the newer Object-Relational and fully Object-Oriented databases, further possible solutions have emerged. Nevertheless, the matching algorithms and strategies currently used for retrieving images from databases provide little support for complex matching.

There are several retrieval paradigms used in Visual Information Retrieval. When text annotation is available, it can be used directly for keyword-based searches. In many situations, however, text annotation does not exist or is incomplete.
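A minimal sketch of keyword-based search over annotations is shown below. The annotation texts and image names are invented for illustration, and a real system would typically use an inverted index rather than a linear scan.

    # Hypothetical annotations: image id -> free-text description.
    annotations = {
        "img_001.jpg": "sunset over sand dunes, orange sky",
        "img_002.jpg": "new year fireworks over the river",
        "img_003.jpg": "blue sky above a mountain lake",
    }

    def keyword_search(query, annotations):
        """Return image ids whose annotation contains every query keyword."""
        keywords = query.lower().split()
        return [
            image_id
            for image_id, text in annotations.items()
            if all(kw in text.lower() for kw in keywords)
        ]

    print(keyword_search("blue sky", annotations))   # -> ['img_003.jpg']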

When text annotation is unavailable, we must turn to content-based retrieval, in which the search is performed on features derived from the raw visual media, such as colour or texture. The VIR paradigms include querying for similar images, sketch queries, and iconic queries. In a similar-image query, the user selects a query image, and the system returns a set of images similar to it. In a sketch-based query, the user manually draws a sketch, which serves as the basis of the query. In an iconic search, the user places symbolic icons where the corresponding visual features should appear.
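The following sketch illustrates one simple way a similar-image query can work: each image is reduced to a global colour histogram, and candidate images are ranked by their histogram distance to the query image. This is only an illustrative assumption, not the method of any particular system; it relies on the Pillow imaging library, and the file names are placeholders.

    from PIL import Image  # assumes the Pillow library is installed

    def colour_histogram(path, size=(64, 64)):
        """Normalised RGB histogram used as a simple global colour feature."""
        img = Image.open(path).convert("RGB").resize(size)
        hist = img.histogram()          # 3 x 256 bins (R, G, B concatenated)
        total = sum(hist)
        return [h / total for h in hist]

    def histogram_distance(h1, h2):
        """L1 distance between two histograms; smaller means more similar."""
        return sum(abs(a - b) for a, b in zip(h1, h2))

    def similar_images(query_path, candidate_paths, k=5):
        """Rank candidate images by colour similarity to the query image."""
        query_hist = colour_histogram(query_path)
        scored = [
            (histogram_distance(query_hist, colour_histogram(p)), p)
            for p in candidate_paths
        ]
        return [p for _, p in sorted(scored)[:k]]

    # Usage (file names are illustrative):
    # similar_images("query.jpg", ["a.jpg", "b.jpg", "c.jpg"], k=2)

Colour histograms are of course only one of many possible features; texture, shape, or layout descriptors can be plugged into the same ranking scheme.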

In my talk, I will introduce the strategies and paradigms used by the most popular image-querying systems. Based on my recent results, I will also discuss some possible directions for future research.