We've all scrolled past that delicious slice of lasagna, elaborately constructed sushi roll or sophisticated piece of toast with garlicky aioli, tomatoes and snap peas on Instagram, and felt the cravings that follow. Now MIT researchers have developed a way to work out the recipe for the delectable meal you spot on Instagram, just by analyzing a picture of the food.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analyzing photos of food could contribute to a better understanding of people's eating habits. In a new paper, the team trained an AI system to look at images of food, predict their ingredients and suggest similar recipes.
“This could potentially help people figure out what’s in their food when they don’t have explicit nutritional information,” Nick Hynes, a graduate student at CSAIL and lead author of the paper alongside Amaia Salvador of the Polytechnic University of Catalonia in Spain, said in a statement.
Here’s how the system, dubbed Pic2Recipe, works. First, the researchers built a database of food photos to train their algorithms. By combing websites like All Recipes and Food.com, they assembled more than 1 million recipes, annotated with information about the ingredients in a wide range of dishes. They then used that data to train a neural network to find patterns connecting food images with their corresponding ingredients and recipes.
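The core idea behind this kind of training is mapping photos and recipes into a shared space where matching pairs land close together. The sketch below illustrates that with toy numbers only; the random vectors and the `embed` helper are stand-ins, not the features the CSAIL model actually learns:

```python
import numpy as np

# Toy stand-ins for learned feature vectors; in the real system these
# would come from a neural network trained on the recipe database.
rng = np.random.default_rng(0)

def embed(v):
    """L2-normalize a vector so dot products become cosine similarities."""
    return v / np.linalg.norm(v)

# Pretend each recipe and its photo map to nearby points in a shared space:
# the image vector is its recipe vector plus a little noise.
recipe_vecs = np.stack([embed(rng.normal(size=8)) for _ in range(5)])
image_vecs = np.stack([embed(r + 0.1 * rng.normal(size=8)) for r in recipe_vecs])

# Training pushes matching image/recipe pairs together, so each image's
# nearest recipe in the similarity matrix should be its own.
similarity = image_vecs @ recipe_vecs.T
print(similarity.argmax(axis=1))  # -> [0 1 2 3 4]
```

Once images and recipes share a space like this, finding a recipe for a new photo reduces to a nearest-neighbor lookup.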
Given a photo of a food item, Pic2Recipe could identify ingredients like flour, eggs and butter, and then suggest several recipes that it determined to be similar to images from the database. In a series of experiments, the system returned the 10 most similar recipes for a photo, and the correct recipe was among them 65 percent of the time.
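That 65 percent figure is a "recall at 10" style measure: a query counts as a success if the true recipe appears anywhere in the top ten suggestions. A minimal sketch of how such a score is computed (the function name and toy data here are illustrative, not from the paper):

```python
def recall_at_k(ranked_lists, true_ids, k=10):
    """Fraction of queries whose true recipe ID appears in the top k results."""
    hits = sum(true_id in ranked[:k]
               for ranked, true_id in zip(ranked_lists, true_ids))
    return hits / len(true_ids)

# Toy data: 4 photo queries, each with a ranked list of candidate recipe IDs.
ranked = [
    [3, 7, 1],  # true recipe (7) ranked second -> hit
    [5, 2, 9],  # true recipe (9) ranked third  -> hit
    [4, 6, 8],  # true recipe (0) not returned  -> miss
    [1, 0, 2],  # true recipe (1) ranked first  -> hit
]
print(recall_at_k(ranked, [7, 9, 0, 1], k=3))  # -> 0.75
```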
The system did particularly well with desserts like cookies or muffins, since those were heavily represented in the database. It had more difficulty, however, determining the ingredients of more ambiguous foods, like sushi rolls and smoothies.
In the future, the team hopes to improve the system so that it can understand food in even more detail. That could mean inferring how a food is prepared (stewed versus diced, for instance) or distinguishing between variations of the same dish.