By expanding its product taxonomy to cover more than 70 fashion product categories and adding 225 new attributes, ViSenze is making it possible for shoppers to find the exact outfit they are looking for.
SAN FRANCISCO & SINGAPORE–(BUSINESS WIRE)–#artificialintelligence—ViSenze, the market-leading AI company powering visual commerce, shares in its latest whitepaper that 65% of online shoppers who use text search type in more than one attribute. This shows why deep and wide product attribution helps shoppers find what they want quickly and easily, and can also lead to higher conversions.
According to the results (below) from the ViSenze Annual Visual Shopping Survey 2019, one of the biggest obstacles on the path to conversion is the difficulty of finding a product. Many potential sales never have a fair chance to happen because consumers cannot find what they want.
A poor or inaccurate search experience remains one of the top frustrations consumers have when shopping online:
18% of consumers don’t know how to describe a product
43% of consumers can’t get the right search results
39% of consumers feel there are too many search results
“Shoppers have the luxury of options with new patterns, colours, and designs being churned out every season. Many of them know what they want and are able to describe products with greater detail because they know that optimizing their search results gets them closer to what they want. Without deep attribution, a product can easily fall through standard search filters and might never be found,” says Oliver Tan, CEO, ViSenze.
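To illustrate the point about products falling through standard search filters, here is a minimal sketch with invented product data and attribute names (none of these fields or values come from ViSenze's actual taxonomy): a multi-attribute query only matches a product if that product carries every attribute the shopper specified, so a sparsely tagged item is simply never returned.

```python
# Hypothetical example: the same dress tagged with shallow vs. deep attribution,
# filtered by a shopper's multi-attribute text query.

def matches(product_attrs, query):
    # A product matches only if it carries every attribute the shopper asked for.
    return all(product_attrs.get(k) == v for k, v in query.items())

shallow = {"category": "dress", "colour": "red"}
deep = {"category": "dress", "colour": "red",
        "neckline": "v-neck", "occasion": "evening", "sleeve": "sleeveless"}

query = {"category": "dress", "colour": "red", "neckline": "v-neck"}

print(matches(shallow, query))  # False: no neckline tag, so the dress is never found
print(matches(deep, query))     # True: deep attribution surfaces the product
```

The sparsely tagged dress is exactly the item the shopper wanted, but without a `neckline` attribute it cannot surface for this query, no matter how relevant it is.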
Besides improving the shopper search experience, a wider and deeper attribute taxonomy also helps improve personalisation for shoppers.
“A deeper, wider and more comprehensive taxonomy delivers greater relevancy, offering more personalized choices for shoppers through enhanced attribute-based recommendations, like apparel necklines and occasion wear, that serve their shopping intent,” Tan continued.
Beyond direct consumer benefits, ViSenze also trains its AI models to improve sketch-tagging use cases for fashion designers.
“We’ve been training our models on an exciting use case for fashion within the design and production phase, where we can now provide accurate attribute tagging on technical flat drawings, also known as ‘Flat Sketches’. This allows design teams to effortlessly tag all attributes in their flat sketches at scale for convenient sales performance recall and to pull visual trend analysis data to help inspire and inform future designs,” adds Guangda LI, CTO, ViSenze.
Click here to download our whitepaper.
ViSenze powers visual commerce at scale for retailers and publishers. The company delivers intelligent image recognition solutions that shorten the path to action as consumers search and discover on the visual web. Retailers like Rakuten and ASOS use ViSenze to convert images into immediate product search opportunities, improving conversion rates. Media companies use ViSenze to turn any image or video into an engagement opportunity, driving incremental revenue. http://visenze.com/
Cheryl Guzman Ng
Global Head of Marketing