Adaptive Multimodal Feature Fusion for Content-Based Image Classification and Retrieval

PP: 699-708

DOI: 10.18576/amis/140418

Author(s)

Samy Bakheet, Mahmoud Mofaddel, Emadedeen Soliman, Mohamed Heshmat

Abstract
Content-Based Image Retrieval (CBIR) applies computer vision to the image retrieval problem, searching large-scale image databases according to a user's request expressed as a query image. The semantic gap remains an endemic and awkward challenge for the development of high-accuracy CBIR systems; it arises from the inherent difference between the digital representation of images by machines and the high-level semantic concepts of images. In this paper, we introduce an adaptive feature fusion framework for Content-Based Image Classification and Retrieval (CBICR) based on stacked random forests, in which salient multimodal features, including low-level visual features (e.g., color, edge histograms, and Hu moments), are automatically extracted from image regions. The particular sampling and classification mechanisms of random forests are then exploited to adaptively fuse these features. To assess the effectiveness of the proposed method, various experiments are carried out on a large-scale dataset of real and synthetic images. The results demonstrate the desirable performance of the proposed method in terms of efficiency, effectiveness, and robustness.
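The sketch below is a minimal illustration, not the authors' implementation, of the kind of pipeline the abstract describes: extracting low-level visual features (a color histogram, an edge-orientation histogram, and Hu moments) and feeding their concatenation to a random forest classifier. It uses OpenCV, NumPy, and scikit-learn; the dataset loader and query image at the end are hypothetical placeholders.

import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(image_bgr):
    """Concatenate a color histogram, an edge histogram, and Hu moments into one vector."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Normalized 2D histogram over hue and saturation
    color_hist = cv2.calcHist([hsv], [0, 1], None, [8, 8], [0, 180, 0, 256]).flatten()
    color_hist /= (color_hist.sum() + 1e-8)

    # Edge histogram: orientation histogram of Canny edge pixels
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx)[edges > 0]
    edge_hist, _ = np.histogram(angles, bins=8, range=(-np.pi, np.pi))
    edge_hist = edge_hist / (edge_hist.sum() + 1e-8)

    # Hu moments, log-scaled for numerical stability
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    return np.concatenate([color_hist, edge_hist, hu])

# Hypothetical usage: load_dataset and query_image are placeholders, not part of the paper.
# images, labels = load_dataset(...)
# X = np.vstack([extract_features(img) for img in images])
# clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
# prediction = clf.predict(extract_features(query_image).reshape(1, -1))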