Ever tried to replicate the Colonel’s secret recipe? Facebook AI has now created a computer vision program that can tell us how to make anything, all from a photo!
Lock up your cookbooks!
Social media is changing the way we eat. Whether it comes from a health page, a food blogger, or a travel account, food imagery makes up a huge proportion of posts.
Almost half of all social media users view videos or posts about food. This has spawned popular hashtags such as #foodporn, which currently has 216 million posts on Instagram.
Indeed, even Heston Blumenthal has taken note! The notoriously inventive chef recently complained that diners spend so much time photographing their Michelin-starred food that they let it go cold.
Anybody who follows a particular diet or lifestyle will find a wealth of resources on social media. From vegan recipes, paleo or keto diet suggestions, weight loss diet plans through to gluten-free and FODMAP recipes, social media provides daily access to millions of ideas.
However, what if you see something that you’d love to replicate that doesn’t have a recipe?
Previous programs worked more like a dictionary. They would look at an image or video and find the most similar dish within their files.
Inverse cooking is a far more sophisticated AI program. It uses computer vision to analyze digital images and videos, leveraging two neural networks: one breaks the food down into a list of ingredients, and the other generates a recipe to combine them.
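To make the two-stage idea concrete, here is a minimal, purely illustrative sketch in Python. The function names, scores, and templates are all hypothetical stand-ins, not Facebook's actual code: stage one treats ingredient detection as multi-label prediction over the image, and stage two generates cooking steps conditioned on whichever ingredients were detected.

```python
# Hypothetical sketch of the two-stage "inverse cooking" pipeline.
# All names, scores, and thresholds here are illustrative assumptions.

def predict_ingredients(ingredient_scores, threshold=0.5):
    """Stage 1: multi-label prediction -- keep every ingredient whose
    score (e.g. from an image encoder plus classifier head) clears a threshold."""
    return {name for name, score in ingredient_scores.items() if score >= threshold}

def generate_recipe(ingredients, step_templates):
    """Stage 2: generate steps conditioned on the predicted ingredient set.
    The real system uses an attention-based sequence model; simple templates
    stand in here just to show the conditioning."""
    steps = []
    for template, required in step_templates:
        if required <= ingredients:  # only emit steps whose ingredients were detected
            steps.append(template.format(**{r: r for r in required}))
    return steps

# Toy scores standing in for the image encoder's output:
scores = {"cheese": 0.92, "broccoli": 0.81, "tomato": 0.77, "lettuce": 0.12}
found = predict_ingredients(scores)

templates = [
    ("Chop the {broccoli} and {tomato}.", {"broccoli", "tomato"}),
    ("Grate the {cheese} on top.", {"cheese"}),
    ("Shred the {lettuce}.", {"lettuce"}),  # skipped: lettuce was not detected
]
print(sorted(found))                  # ['broccoli', 'cheese', 'tomato']
print(generate_recipe(found, templates))
```

The key design point survives the simplification: the recipe generator never sees the photo directly, only the ingredient list produced by the first network.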
Cooking to the test
The system is the brainchild of Michal Drozdzal and Adriana Romero, Research Scientists at Facebook AI Research.
The scientists tested it out on a savory English muffin. Their inverse cooking system accurately identified all the ingredients, including cheese, broccoli, and tomato. An older retrieval system threw in a cracker, lettuce, and, interestingly, Miracle Whip.
The recipes are still a little hit and miss, but 55% of people judged the inverse cooking recipe to be a success, whereas only 48% felt the retrieval system's recipe was accurate.
How does it work?
As with all AI, the program only knows what it has been taught. This is the biggest limitation of artificial intelligence at its current stage of development.
Teaching an AI to a competent level of understanding can be time-consuming, and huge data sets are required to carry out seemingly simple tasks.
Food recognition is a tough area for AI to work with because the constructs are so variable. AI scientists call this ‘high intraclass variability’.
For example, vegetables come in multiple colors. They are chopped and cut into different shapes and sizes. They may be cooked in various ways or eaten raw, and their color and texture change with the cooking method and the ingredients they are combined with.
Drozdzal and Romero used the Recipe1M data set with its 17,000 ingredients and one million recipes. They condensed this down to 1,500 ingredients and 350,000 recipes.
The neural networks are trained to recognize common ingredient combinations and 250,000 unique words related to cooking and foods.
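Condensing 17,000 raw ingredients down to 1,500 suggests some form of vocabulary pruning. The sketch below shows one common approach, frequency thresholding; the actual criteria Drozdzal and Romero used may well differ, and the threshold and toy recipes here are assumptions for illustration only.

```python
from collections import Counter

# Illustrative vocabulary pruning, in the spirit of condensing Recipe1M's
# ingredient list. The rule and threshold are assumptions, not the
# authors' exact procedure.

def prune_vocabulary(recipes, min_count=2):
    """Keep only ingredients appearing in at least `min_count` recipes,
    then drop recipes left with fewer than two usable ingredients."""
    counts = Counter(ing for r in recipes for ing in r)
    vocab = {ing for ing, c in counts.items() if c >= min_count}
    kept = [[ing for ing in r if ing in vocab] for r in recipes]
    kept = [r for r in kept if len(r) >= 2]
    return vocab, kept

recipes = [
    ["flour", "sugar", "egg"],
    ["flour", "egg", "saffron"],  # "saffron" appears only once -> pruned
    ["sugar", "egg"],
    ["durian"],                   # too rare and too short -> recipe dropped
]
vocab, kept = prune_vocabulary(recipes)
print(sorted(vocab))   # ['egg', 'flour', 'sugar']
print(len(kept))       # 3
```

Pruning like this explains why the recipe count shrinks alongside the ingredient count: recipes that rely on rare ingredients no longer have enough vocabulary left to be useful training examples.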
I think it sounds like a lot of work for the perfect cinnamon bun!
However, the tool itself could prove useful in understanding what we eat, and perhaps in creating recipes with healthier alternatives. I wonder whether an inverse cooking app could be an accessible way to tackle the growing obesity crisis.
Given Facebook’s new focus on preventative health, I wonder if this is the game plan for this new AI.
Perhaps that is thinking a little outside the box, and it will be used to make beautiful food that tastes just like your favorite restaurant.
Would you try an inverse cooking app? In what ways do you think this could be useful?