Deliverable 5.2 - the School of Engineering and Design - Brunel ...
ICT Project 3D VIVANT – Deliverable 5.2
Contract no.: 248420
Search & Retrieval Mechanisms & Tools
4/03/2013

4 SEARCH FOR SIMILAR CONTENT

In the online phase of the framework, the database is already prepared and waiting for queries. The framework supports two kinds of querying method. The first is the typical method in which a user poses a query multimedia object (a single query; see section 4.1) and the system replies with a results list. The second is a "batch search" method, in which a user may need to pre-compute the results for a set of queries in order to use them later or outside the framework (section 4.2). This second approach is used in the hyperlinker demo process and will be presented in detail in Deliverable D5.4.

4.1 SINGLE FRAME SEARCH

In the single frame search functionality, the user selects to perform a new query from the menu or the toolbar icon. A new search thread is generated and a new window is presented in the UI (Figure 12). The window provides an "Open" button that opens the platform's native "file open" dialog. The user selects the integral frame file that he wants to use as a query and loads it into the window. The "Load mask" button loads the corresponding segmentation mask from the database. The segmentation mask loading opens the mask image file and either reads the bounding boxes and clickable areas for each object in the frame from the corresponding XML file or computes them on the fly. After this process the user is able to click the exact object (see Figure 13) in the loaded integral image and use it as a query. In this case the query is only the selected object and not the entire query image. If the user chooses not to load the mask and simply clicks the search button, the whole integral image is used as the query image.

Figure 12: Query formulator window
Figure 13: Segmentation mask is loaded and the user may click the object to use as a query

After the query is submitted with either of the two methods, the search process is started. The descriptor extractor process runs and the descriptors are generated. Then the manifold learning algorithm is called to project the set of selected descriptors into the multimodal space. This final descriptor vector is used to query the index for similar content.

The final results of the process appear in a new window (see Figure 14) that displays a list of objects ordered from the most to the least similar one, showing both a central viewpoint image and the original modality object (e.g. an integral image or a 3D model).
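The online query pipeline described above can be summarised in a short sketch: extract a descriptor, project it into the multimodal space, and rank the indexed objects by distance. The linear projection and the brute-force index scan below are placeholders, since this excerpt does not specify the actual manifold learning algorithm or index structure.

```python
import numpy as np

def project(descriptor: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Project a raw descriptor into the multimodal space.
    A plain linear map stands in for the manifold learning step."""
    return W @ descriptor

def rank_similar(query: np.ndarray, index: np.ndarray) -> list[int]:
    """Return indexed object ids ordered from most to least similar,
    i.e. from smallest to largest Euclidean distance to the query."""
    dists = np.linalg.norm(index - query, axis=1)
    return list(np.argsort(dists))
```

The ranked id list is exactly what the results window needs: for each id, the UI would show the central viewpoint image alongside the original modality object, in order of decreasing similarity.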