How This Teen’s Artificial Intelligence App Became a Global Success by Integrating an API

Julia Gallagher
Published in CloudSight
3 min read · Jan 24, 2017


For the blind, certain daily activities can be frustrating. Selecting matching articles of clothing, choosing a CD to listen to, or simply picking out a snack from the kitchen can be daunting for anyone who is visually impaired. However, one 12th-grader from Ontario is aiming to make such everyday tasks that much easier for the blind and visually impaired community.

Anmol Tukrel is a senior at Holy Trinity School in Toronto, and while most teenagers are learning how to drive, Tukrel is learning how to navigate the global app space with iDentifi, an app that takes a photo of any object and then speaks a description of it aloud to the user.

iDentifi identifies the image as “White and gold Beats headphones on wooden surface” (Apple iTunes Store)

His inspiration to help the blind came from traveling when he was younger. Growing up, Tukrel visited his aunt in Pune, India, who worked at the K. K. Eye Institute, an eye care center that specializes in patients with eye disorders. Those early experiences learning about his aunt's profession, combined with an internship at a firm that used computer vision for advertising, led Tukrel to explore how artificial intelligence could help the blind.

“Don’t be overwhelmed or intimidated by how complicated computer vision seems at first glance.”

Knowing the possibilities of computer vision, Tukrel set out to create his own neural network to power iDentifi. After much research, however, he opted for a publicly available API instead, which he found far easier to incorporate and less time-consuming than building his own network. "There are a lot of resources online to improve your knowledge about computer vision, and lots of great APIs to help you get your projects off the ground," he says. "Don't be overwhelmed or intimidated by how complicated computer vision seems at first glance."

Using a pre-trained, hosted API has a lot of advantages: there is no training or model development needed to get started, so developers can immediately build applications that understand what they see. Tukrel eventually chose CloudSight's API, with its growing database of over 400 million images, to power iDentifi's computer vision. "I found CloudSight's accuracy to be extremely good," he says, and the effort saved by not building his own neural network freed him to focus on design and on integrating support for more than 25 languages into the app. Today, iDentifi has users in 86 countries, has been formally commended by Ontario's Minister of Innovation and Science, and made TechCrunch's Top Artificial Intelligence Stories of 2016.
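To give a sense of why this approach is so quick to get off the ground, here is a minimal sketch of the workflow: upload a photo to a hosted image-recognition service and read back a plain-English caption. The endpoint URL, API key, parameters, and response fields below are illustrative assumptions for a generic captioning service, not CloudSight's actual API.

```python
# Minimal sketch: send a photo to a hosted image-captioning API and read back
# a plain-English description. The endpoint, parameters, and response shape
# are illustrative assumptions, not a specific vendor's real API.
import requests

API_URL = "https://api.example-vision.com/v1/describe"  # hypothetical endpoint
API_KEY = "your-api-key-here"                           # hypothetical credential

def describe_image(image_path: str, language: str = "en") -> str:
    """Upload an image and return the caption the service produces."""
    with open(image_path, "rb") as image_file:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": image_file},
            data={"language": language},  # e.g. one of the 25+ languages the app supports
            timeout=30,
        )
    response.raise_for_status()
    return response.json()["description"]

if __name__ == "__main__":
    # An app like iDentifi could then hand this string to a text-to-speech engine.
    print(describe_image("snapshot.jpg"))
```

Because the heavy lifting happens server-side, the client code stays this small, which is what lets a solo developer spend time on accessibility features and localization instead of model training.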

As for the future of iDentifi, Tukrel wants to develop the app for Android, and the ease of implementing an existing API gives him the time and freedom to expand his horizons. Tukrel says his goal is to help as many in the visually impaired community as possible, and we’re confident he’ll be able to do so.
