Go Beyond Keywords! Perform a Visual Image Search: Page 2

Searching graphics files via keywords can be tedious. Learn how to use image-matching technology to find images by matching shapes, patterns, colors, and textures.


Step 1: Analyze new images
Before you can use eVe to search and retrieve an image from a database or flat file, you must process the image. When the toolkit processes a new image, it automatically segments it into regions that correspond roughly to objects, or parts of objects, in the image. It then applies statistical pattern recognition techniques to extract four distinct attributes from each region:
  • Color
  • Shape
  • Texture
  • 3D shading
eVe stores a condensed descriptor of these attributes in a set of vectors called the Visual Signature. During the Search phase, you compare images based on any single attribute (color, shape, texture, or 3D shading) or on a weighted sum of the attributes (see Figure 1; an illustrative sketch of such a weighted combination follows the figure).

Figure 1: A Visual Signature contains four visual attributes for every object in the image: color, shape, texture, and 3D shading.
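To make the weighted sum of attributes concrete, here is a small illustrative sketch in plain Java. It is not part of the eVe SDK; the class name, method name, and the assumption that each attribute contributes a distance normalized to the range 0 to 1 are hypothetical, used only to show how four per-attribute scores can be blended into one overall score.

// Illustrative only -- not part of the eVe SDK.
public class WeightedSimilarity {
    // Combine four per-attribute distances (assumed normalized to [0, 1],
    // where 0 means identical) into one distance using caller-supplied weights.
    public static double combinedDistance(double colorDist, double shapeDist,
                                           double textureDist, double shadingDist,
                                           double wColor, double wShape,
                                           double wTexture, double wShading) {
        double totalWeight = wColor + wShape + wTexture + wShading;
        return (wColor * colorDist + wShape * shapeDist
                + wTexture * textureDist + wShading * shadingDist) / totalWeight;
    }
}

Setting one weight to 1 and the rest to 0 reproduces a single-attribute comparison; nonzero weights give the blended, multi-attribute search described above.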

To organize and keep track of the images, eVe stores a thumbnail of the image and all the information related to that image in a MediaObject. The MediaObject includes keywords or descriptions (textual metadata), the file name and path, the Visual Signature, and the segmentation mask (see the section "Step 3: Segmentation Map & User Selection" later in this article).
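To make that structure easier to picture, the outline below names the pieces a MediaObject holds as a plain Java class. This is only an illustrative sketch; the field names and types are assumptions, not the actual eVe class definition.

import java.util.Map;

// Illustrative outline only -- not the actual eVe MediaObject class.
public class MediaObjectOutline {
    String fileName;                  // original file name
    String filePath;                  // path to the image on disk
    byte[] thumbnail;                 // small preview stored by eVe
    Map<String, String> metadata;     // textual metadata such as keywords
    float[][] visualSignature;        // per-region color, shape, texture, and shading vectors
    int[][] segmentationMask;         // which region each pixel belongs to
}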

To start the project, you create a new MediaObject, insert an image into it, and analyze it with the analyze() method of the Analyze class from the eVe SDK.

First, you need to import the eVe SDK classes.

import com.evisionglobal.eve.*;
import com.evisionglobal.eve.kernel.*;
...

Next, create a String that holds the input path to the image.

String imagePath = "/myMediaCollection/image/myImage.jpg";

Now instantiate the MediaObject and load the image into it.


MediaObject myMediaObject = (MediaObject) Eve.newMediaObject();
myMediaObject.loadImage(imagePath);

Finally, instantiate an Analyze object and call its analyze() method, passing the MediaObject containing the image as a parameter. The analyze() method extracts the Visual Signature for the image. The eVe.properties file (an editable text file stored in /com/evisionglobal/eve/eve.properties) defines the default values of the parameters: eVe.maxRegions and eVe.maxIterations are set to 3 and 999, respectively. eVe.maxRegions is an upper limit on the number of object regions into which the image will be divided; depending on the complexity of the image, the analysis may yield fewer regions. eVe.maxIterations is the maximum number of iterations the analysis is allowed to perform; the segmentation process starts with an initial partition and iteratively improves it, and some images require far fewer iterations than others. Increasing maxRegions and maxIterations improves accuracy, but setting them too high results in very long analysis times.

Analyze myAnalyzer = (Analyze) Eve.newAnalyze();
myAnalyzer.analyze(myMediaObject, Eve.maxRegions, Eve.maxIterations);
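The call above relies on the defaults described earlier. For reference, the relevant portion of eve.properties would look roughly like the excerpt below; the exact layout of the file is an assumption, but the two property names and their default values come from the discussion above.

# Excerpt from /com/evisionglobal/eve/eve.properties (layout assumed)
# Upper limit on the number of object regions per image
eVe.maxRegions=3
# Upper limit on segmentation iterations
eVe.maxIterations=999

Raising these values can improve accuracy, but, as noted above, it also lengthens analysis time.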

At this point, you can also add metadata, such as keywords. For example, to store the original filename of the image in the MediaObject, use:

myMediaObject.setProperty("originalFilename", "myImage.jpg");

The MediaObject now has a metadata key "originalFilename" with the value "myImage.jpg".
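Assuming setProperty() accepts arbitrary key/value strings, as the example above suggests, you could attach keywords or a description the same way; the key names and values below are illustrative, not names the SDK requires.

myMediaObject.setProperty("keywords", "sunset, beach, ocean");
myMediaObject.setProperty("description", "Sunset over the Pacific coast");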

