Bing is rolling out a new feature called Bing Visual Search, available Thursday as part of its existing image search tools, that automatically detects intent and returns better search results for images.
Bing already provides a ‘Search by Image’ capability that lets users search using an image. But what if a user wants to search for a particular object within a web image, or within a photo they took themselves? This is where the new Visual Search tool comes in, alongside the existing image search tools.
The tool is available now on Bing.com on both desktop and mobile, as well as in the Bing mobile app. In addition, Bing is making visual search available to developers via its Image Search APIs.
Explaining how the visual search tool works, Bing gives an example: when you search Bing for kitchen decoration inspiration and are drawn to an image, clicking the thumbnail result brings up the ‘Detail View’, with a magnifying-glass icon in the top left of the image called the “visual search button.”
Clicking this button displays a visual search box on the image; click and drag it to cover just the object of interest, or simply draw a box around the object. Bing then instantly triggers a visual search using the selected portion as the query.
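Conceptually, the selected region becomes the query: the client crops the pixels inside the box and submits just that crop for search. A minimal sketch in Python (the box format and function name here are illustrative, not Bing's actual client code):

```python
def crop_region(pixels, box):
    """Extract the user-selected region from an image.

    pixels: 2-D list of pixel values (rows of columns).
    box:    (left, top, right, bottom) in pixel coordinates,
            right/bottom exclusive -- a hypothetical convention.
    """
    left, top, right, bottom = box
    return [row[left:right] for row in pixels[top:bottom]]

# Toy 6x8 "image" whose pixels record their own (row, col) position.
image = [[(r, c) for c in range(8)] for r in range(6)]
query = crop_region(image, (2, 1, 6, 4))
print(len(query), len(query[0]))  # 3 4  -- a 3-row, 4-column crop
```

Only the cropped region is then used as the visual query, which is why tightening the box around the object changes the results.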
For related products, Bing automatically detects shopping intent and, in addition to the regular image search, also runs a product search for matching products. You can then click on the right product, pick a merchant on the detail page, and complete the purchase.
Not happy with the results? You can continue exploring similar images by clicking “Related Images.”
Bing advises users to “tweak the visual search box to fully capture the object of interest to get the best results.”
Bing shared some example screenshots of its new visual search.
So how does it all work under the hood?
Bing explains, “As a first step to understand the query image, we run an Image Processing Service to perform object detection and extract various image features, including DNN features, recognition features, and additional features used for duplicate detection.
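Bing does not publish the exact duplicate-detection features it extracts. A classic lightweight stand-in is a perceptual “average hash”: pixels above the image mean become 1-bits, so near-duplicate images (re-encoded or slightly edited copies) differ in only a few bits. A toy illustration:

```python
def average_hash(gray):
    """Perceptual hash of a small grayscale image (2-D list of ints).

    Pixels above the mean map to 1, others to 0. Illustrative stand-in
    only -- Bing's actual duplicate-detection features are not public.
    """
    flat = [p for row in gray for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

img = [[10, 200], [220, 30]]
near_dup = [[12, 198], [221, 29]]   # slightly re-encoded copy
print(hamming(average_hash(img), average_hash(near_dup)))  # 0 bits differ
```

A small Hamming distance flags two images as likely duplicates, so only one needs to survive into the final result set.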
Next, we generate the best text query to represent the input image, based on a pipeline used in Bing image search to identify BRQs (‘best representative queries’). Subsequently, leveraging expertise from Bing answers, a model is triggered to identify different scenarios for search by image.
In the next step, matching, we employ a technique known as Visual Words, which narrows the set of candidates down from billions to several million.
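The Visual Words idea can be sketched with an inverted index: quantize each local descriptor of an image to the id of its nearest codebook centroid (its “visual word”), then keep as candidates only the images that share at least one word with the query. A toy Python illustration (the codebook and descriptors are invented for the example; real systems use thousands of words over high-dimensional descriptors):

```python
def nearest_word(desc, codebook):
    """Quantize a descriptor to the id of its nearest codebook centroid."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(desc, codebook[i])))

# Hypothetical tiny codebook of 2-D "visual words".
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# Indexed images: image id -> local descriptors.
images = {
    "img_a": [(0.1, 0.0), (0.9, 0.1)],
    "img_b": [(0.0, 0.9), (1.0, 1.1)],
    "img_c": [(0.9, 0.9)],
}

# Inverted index: visual word id -> set of image ids containing it.
inverted = {}
for img_id, descs in images.items():
    for d in descs:
        inverted.setdefault(nearest_word(d, codebook), set()).add(img_id)

# Only images sharing a word with the query survive as candidates.
query_descs = [(0.8, 1.0)]
candidates = set().union(*(inverted.get(nearest_word(d, codebook), set())
                           for d in query_descs))
print(sorted(candidates))  # ['img_b', 'img_c']
```

The expensive distance computations in later ranking stages then run only over this much smaller candidate set.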
After the matching step, we enter the stage of multilevel ranking, where millions of image candidates must be ranked based on feature-vector distances. To speed up the calculations, we use an algorithm developed by Microsoft Research in collaboration with the University of Science and Technology of China called ‘Optimized Product Quantization’ (OPQ), which decomposes the original high-dimensional vector into many low-dimensional sub-vectors that are then quantized separately.
Then, the distances between query-image and result-image vectors are calculated through a table lookup against a set of pre-calculated values, to speed things up even further.
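Those two steps, splitting the vector into separately quantized sub-vectors and then scoring candidates by table lookup, can be sketched as plain product quantization. OPQ additionally learns a rotation of the data before splitting, which this minimal sketch omits; the tiny codebooks below are invented for illustration:

```python
def pq_encode(vec, codebooks):
    """Split a vector into len(codebooks) sub-vectors and quantize each
    against its own small codebook, storing only the centroid ids."""
    m = len(codebooks)
    d = len(vec) // m
    code = []
    for i, cb in enumerate(codebooks):
        sub = vec[i * d:(i + 1) * d]
        code.append(min(range(len(cb)),
                        key=lambda j: sum((a - b) ** 2
                                          for a, b in zip(sub, cb[j]))))
    return code

def adc_distance(query, code, codebooks):
    """Asymmetric distance: precompute query-to-centroid distances per
    sub-space once, then score any stored code with pure table lookups."""
    m = len(codebooks)
    d = len(query) // m
    tables = []
    for i, cb in enumerate(codebooks):
        sub = query[i * d:(i + 1) * d]
        tables.append([sum((a - b) ** 2 for a, b in zip(sub, c)) for c in cb])
    return sum(tables[i][code[i]] for i in range(m))

# Hypothetical tiny setup: 4-D vectors split into two 2-D sub-spaces,
# each with a 2-centroid codebook (real systems use e.g. 256 centroids).
codebooks = [[(0.0, 0.0), (1.0, 1.0)], [(0.0, 0.0), (1.0, 1.0)]]
code = pq_encode((0.9, 1.1, 0.1, 0.0), codebooks)   # -> [1, 0]
query = (1.0, 1.0, 0.0, 0.0)
print(adc_distance(query, code, codebooks))         # 0.0
```

The payoff is that each candidate image is stored as a few small centroid ids instead of a full float vector, and ranking millions of candidates reduces to summing precomputed table entries.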
As a result, we perform multiple levels of ranking over the images, followed by de-duplication, to end up with the final result set that is returned to the user.”