Mobile Image Webs
The widespread availability of digital cameras and ubiquitous Internet access have facilitated the creation of massive image collections. These collections can be highly interconnected through implicit links between image pairs viewing the same or similar objects. We build graphs called Image Webs to represent such connections.
In this project, our goal is to use Image Webs to provide useful services for mobile users, including content-based image retrieval (CBIR), query expansion, image annotation, and augmented reality.
An Image Web discovers the connections between images in a collection induced by shared objects, which lets us connect images that are visually quite different.
Connection through repeated objects
The town hall of Calais and the Palace of Westminster are connected by a cast of Rodin's sculpture.
Connection through mobile objects
Two different buildings at Stanford are connected by a campus bus.
The structure of image webs
- The idea of Image Webs is to interlink images through a variety of link types.
- An Image Web is a graph whose vertices are associated with regions of images and whose edges represent relations between those regions.
- PRPL: Personal-Cloud Computing Infrastructure
- Compact structures for image retrieval on mobile devices
- Commercial CBIR apps on iPhone or Android (Google Goggles, SnapTell, LookTel)
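To make the graph structure concrete, here is a minimal sketch of an Image Web as described above: vertices are image regions, and edges link regions that view the same object. The class and field names are illustrative, not the project's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Region:
    image_id: str   # which image the region belongs to
    region_id: int  # index of the region within that image

@dataclass
class ImageWeb:
    # Adjacency map: Region -> set of Regions it is linked to.
    edges: dict = field(default_factory=dict)

    def link(self, a: Region, b: Region) -> None:
        # Undirected edge: the two regions view the same object.
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def neighbors(self, r: Region) -> set:
        return self.edges.get(r, set())

# The Calais/Westminster example from above: two visually different
# buildings linked through a shared Rodin cast.
web = ImageWeb()
calais = Region("calais_town_hall.jpg", 0)
westminster = Region("westminster.jpg", 3)
web.link(calais, westminster)
```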
First, affine-covariant local features are extracted from the images: Harris-affine and Hessian-affine detectors find keypoints, which are then described with the SIFT descriptor. Next, images are indexed using a bag-of-words model, and for each image, matching candidates are found with a traditional CBIR algorithm. To keep only correct matches among these candidates, we apply the RANdom SAmple Consensus (RANSAC) algorithm to find a maximal set of feature matches such that features in one image can be mapped to their corresponding features in the other by an affine transformation.
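The geometric-verification step can be sketched as follows. This is a self-contained, simplified illustration using synthetic matches: it assumes feature detection and putative matching have already produced point correspondences, and fits an affine model with a basic RANSAC loop (the production pipeline would use the real detectors and descriptors named above).

```python
import numpy as np

def affine_from_3(src, dst):
    # Fit dst = [src | 1] @ M from 3 point correspondences (M is 3x2).
    A = np.hstack([src, np.ones((3, 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def ransac_affine(src, dst, iters=200, tol=3.0, seed=0):
    """Return a boolean mask of matches consistent with one affine map."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        M = affine_from_3(src[idx], dst[idx])
        err = np.linalg.norm(np.hstack([src, ones]) @ M - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# Synthetic demo: 20 correct matches under one affine map, 5 outliers.
rng = np.random.default_rng(1)
src = rng.uniform(0, 100, (25, 2))
dst = src @ np.array([[1.1, 0.2], [-0.1, 0.9]]) + np.array([5.0, -3.0])
dst[20:] += rng.uniform(30, 60, (5, 2))  # corrupt the last 5 matches
inliers = ransac_affine(src, dst)
```

With this setup RANSAC recovers exactly the 20 consistent matches, mirroring how a candidate image pair is accepted only if a large affine-consistent match set exists.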
Example collections: Art museum (1200 images); Stanford Campus (a fraction of 5000 images).
We use both CBIR and EdgeRank link selection to increase the connectivity of the graph, so that the Image Web can be constructed efficiently. The following construction times were measured using up to 500 compute nodes:
- Art museum: 1,200 images in ~0.8 minutes
- London: 18,000 images in ~14 minutes
- Pittsburgh: 50,000 images in ~80 minutes
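The connectivity idea can be illustrated with a small sketch. This is a simplification, not the actual EdgeRank algorithm: it tracks connected components with union-find and verifies component-merging candidate links first, since those grow the web's connectivity fastest; intra-component candidates are verified afterwards to densify the graph. The `verify` callback stands in for geometric verification of a candidate pair.

```python
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(b)] = self.find(a)

def build_web(n_images, candidate_pairs, verify):
    """Verify candidate links, prioritizing pairs that merge components."""
    uf = UnionFind(n_images)
    edges, deferred = [], []
    for a, b in candidate_pairs:
        if uf.find(a) == uf.find(b):
            deferred.append((a, b))  # already connected: lower priority
            continue
        if verify(a, b):             # e.g. RANSAC geometric verification
            uf.union(a, b)
            edges.append((a, b))
    for a, b in deferred:            # densify within components afterwards
        if verify(a, b):
            edges.append((a, b))
    return edges

# Demo: by the time (0, 3) is considered, images 0 and 3 are already
# connected through 0-1-2-3, so it is deferred and verified last.
pairs = [(0, 1), (1, 2), (2, 3), (0, 3)]
edges = build_web(4, pairs, verify=lambda a, b: True)
```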
Peer to Peer Image Webs
When two mobile users each have their own image collection and Image Web, we need to find similar images across the two collections without sending all images to a central server. We use the computing power of the mobile devices to compute a compact signature for each image and to measure distances between images from these signatures. Multiple mobile users can then share similar images with each other and merge their Image Webs in a peer-to-peer fashion.
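One common way to build such compact signatures (the project's actual signature format is not specified above, so this is an illustrative stand-in) is to hash each image's bag-of-words histogram into a short binary code via random projections; peers then exchange only a few bytes per image and compare codes by Hamming distance.

```python
import numpy as np

def signature(hist, n_bits=64, seed=42):
    """Hash a bag-of-words histogram into an n_bits binary code."""
    # A shared seed lets all peers agree on the same random projections.
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((n_bits, len(hist)))
    return (proj @ hist > 0).astype(np.uint8)

def hamming(a, b):
    # Number of differing bits between two signatures.
    return int(np.count_nonzero(a != b))

# Demo histograms: a near-duplicate image pair and an unrelated image.
rng = np.random.default_rng(0)
h1 = rng.random(1000)               # bag-of-words histogram of an image
h2 = h1 + 0.01 * rng.random(1000)   # near-duplicate of the same scene
h3 = rng.random(1000)               # unrelated image
```

Similar histograms yield nearby codes, so two peers can detect shared content from signatures alone before deciding which full images to exchange.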
Cooperative Image Annotation
The Image Web is stored in the cloud. Users can upload new images and corresponding annotations to the Image Web, and labels are propagated through the web using a graph-learning algorithm. When a mobile user adds a new image to the Image Web, existing labels transfer to the new image, and the user receives both the annotations and related images.
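The propagation step can be sketched with a basic iterative label-propagation scheme (the exact graph-learning algorithm used by the project is not specified above, so this is a hypothetical minimal version): annotated images are clamped to their labels, and every other image repeatedly averages its neighbors' label scores over the Image Web graph.

```python
def propagate(adj, seeds, labels, iters=20):
    """adj: {node: [neighbors]}; seeds: {node: label}; labels: label list."""
    # One score per label for every node; seed nodes start at 1 for theirs.
    score = {n: [0.0] * len(labels) for n in adj}
    for n, lab in seeds.items():
        score[n][labels.index(lab)] = 1.0
    for _ in range(iters):
        new = {}
        for n, nbrs in adj.items():
            if n in seeds or not nbrs:
                new[n] = score[n]            # clamp annotated images
            else:
                new[n] = [sum(score[m][k] for m in nbrs) / len(nbrs)
                          for k in range(len(labels))]
        score = new
    # Assign each node the label with the highest propagated score.
    return {n: labels[max(range(len(labels)), key=s.__getitem__)]
            for n, s in score.items()}

# Tiny Image Web: a chain 0-1-2-3 where image 0 is annotated "Rodin"
# and image 3 is annotated "bus"; the middle images inherit labels
# from whichever annotated image is closer in the graph.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
result = propagate(adj, seeds={0: "Rodin", 3: "bus"}, labels=["Rodin", "bus"])
```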
We are working to combine the Image Web with the PRPL infrastructure. Each user has an Image Web and can access the public images shared by friends, and image annotations are propagated through the community.
K. Heath, M. Ovsjanikov, M. Aanjaneya, N. Gelfand, and L. J. Guibas, "Image Webs: Computing and Exploiting Connectivity in Image Collections," Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010).