Data Scientists Dictate What We Eat
The biggest factor influencing what the average American eats is the margin the grocery store makes on the products it sells -- and behind it all is data science.
[Editor's note: This article was originally published at Data Science Central and is reprinted by permission of the author.]
By Mirko Krivanek, Data Science Central
Indirectly, of course. There are other factors too, such as regulations that make it illegal to sell unpasteurized milk, horse meat, foie gras, etc. However, the biggest factor influencing what the average American eats is the margin the grocery store makes on the products it sells. This explains why you can't get red currants or passion fruit anymore, but you'll find plenty of high-energy drinks and food rich in sugar. Of course, there's a feedback loop: Americans like sweet products, so many companies produce sweet food; due to large-scale processing, it's cheap, can be priced efficiently by grocery stores, and sells well.
Behind all of this is data science, which helps answer the following questions:
- Which new products should be tested? Red currant pie? Orange wine? French-style cherry pie? Wild boar meat? Purple cheese? Red eggs? Cheese shaped like a ball? (Anything not shaped like a rectangular parallelepiped is suboptimal from a storage point of view, but that's another data science issue.)
- How do you determine success or failure for a new product? How do you test a new product (a design-of-experiments issue)?
- Which products should be eliminated? (Passion fruit, passion fruit juice, and authentic Italian salami have already been banned.)
- How do you measure lift (increased revenue)? Do you factor in the cost of marketing and other expenses?
- How do you price an item?
- How do you cross-sell? Do you identify products to cross-sell via data mining techniques (see the sketch after this list)?
- How do you optimize ROI on marketing campaigns?
- When and where should each product be sold? (Seasonal and local trends come into play.)
- How do you forecast inventory?
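To make the cross-selling question concrete, here is a minimal sketch of one classic data-mining approach: mining association rules from basket data and scoring them by support, confidence, and lift (lift here in the market-basket sense, not the incremental-revenue sense used above). The baskets, product names, and cut-offs are invented for illustration.

```python
# Toy association-rule miner: count items and item pairs across baskets,
# then score each pair by support, confidence, and lift.
# All baskets, products, and thresholds below are hypothetical.
from collections import Counter
from itertools import combinations

baskets = [
    {"plain yoghurt", "granola", "berries"},
    {"low-fat yoghurt", "granola"},
    {"plain yoghurt", "berries"},
    {"energy drink", "chips"},
    {"plain yoghurt", "granola"},
]

n = len(baskets)
item_counts = Counter(item for bk in baskets for item in bk)
pair_counts = Counter(pair for bk in baskets for pair in combinations(sorted(bk), 2))

# Only one direction of each pair is scored, to keep the sketch short.
for (a, b), count in pair_counts.items():
    support = count / n                       # P(A and B)
    confidence = count / item_counts[a]       # P(B | A)
    lift = confidence / (item_counts[b] / n)  # P(B | A) / P(B)
    if support >= 0.4 and lift > 1.2:         # arbitrary cut-offs for the sketch
        print(f"{a} -> {b}: support={support:.2f}, "
              f"confidence={confidence:.2f}, lift={lift:.2f}")
```

On these toy baskets the only rule that survives the cut-offs is "berries -> plain yoghurt"; a real retailer would run the same idea over millions of baskets with a scalable frequent-itemset algorithm.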
The last time I went to a grocery store, I wanted to buy plain, unsweetened yoghurt. It took me 10 minutes to find the only container left in the store -- the brand was Danone. I'm ready to pay three times more to get that yoghurt (a product that has been consumed worldwide by billions of people over several millennia) rather than the two alternatives: low-fat, or plain but sweet. Ironically, the "low fat" version has 180 calories per serving while the old-fashioned plain yoghurt has 150. This is because they added corn starch to the low-fat product.
Over time, I've seen the number of product offerings shrink. More old products are eliminated than new products are introduced. Clearly, the products eliminated are those with a smaller market, such as passion fruit, but could data science do a better job of deciding what goes on the shelves, when and where, in what proportions, and at what price?
I believe the answer is yes. Better, more granular segmentation, with lower variance in per-product forecasts of sales and revenue thanks to models with higher predictive power, is the solution. In the case of the yoghurt, while most people avoid fat, there are plenty of thin people on the West and East coasts who don't mind eating plain yoghurt. So it could make sense not to sell plain yoghurt in Kansas City but to sell it in Seattle. Maybe just a few containers with a high price tag, among tons of cheap low-fat yoghurt.
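To illustrate what that granular, per-segment decision might look like, here is a toy sketch: compare a deliberately conservative demand forecast for plain yoghurt in two metro areas against the opportunity cost of the shelf slot it occupies. Every number below (sales histories, margin, shelf cost) is invented, and a real model would use far richer features than a mean-minus-one-standard-deviation rule.

```python
# Toy per-segment stocking decision: forecast weekly demand per metro area,
# convert it into expected margin, and keep the product only where that margin
# beats the (hypothetical) cost of the shelf slot.
from statistics import mean, stdev

weekly_units = {                      # hypothetical units sold per week, by metro
    "Seattle":     [42, 38, 45, 40, 44, 41],
    "Kansas City": [6, 4, 7, 5, 3, 6],
}
margin_per_unit = 1.20                # assumed gross margin ($) per container
shelf_cost_per_week = 25.00           # assumed opportunity cost of the shelf slot

for metro, sales in weekly_units.items():
    mu, sigma = mean(sales), stdev(sales)
    conservative_units = mu - sigma   # forecast one standard deviation below the mean
    expected_margin = conservative_units * margin_per_unit
    decision = "stock" if expected_margin > shelf_cost_per_week else "drop"
    print(f"{metro}: ~{conservative_units:.1f} units/week, "
          f"margin ${expected_margin:.2f} -> {decision}")
```

With these made-up numbers the sketch keeps plain yoghurt in Seattle and drops it in Kansas City, which is exactly the kind of store-by-store call a higher-resolution model makes cheap to automate.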
It also creates new opportunities for grocery stores like Puget Consumers Co-operative, a natural-foods co-op in the Pacific Northwest, to sell precisely what supermarkets have stopped selling -- as long as it is "sellable." In short, to sell stuff that generates a profit but that supermarkets, due to poor retail analytics, have written off.