5 Minutes with a Data Scientist: Dean Abbott of Abbott Analytics
To provide you with interesting, current information, Upside is collecting quick perspectives from working professionals. Today, meet Dean Abbott, president of Abbott Analytics.
- By James E. Powell
- November 1, 2016
What's the best part about being a data scientist? What personality trait do data scientists need to succeed? Dean Abbott, president of Abbott Analytics, answers these and other questions.
UPSIDE: What's the one thing you wish people knew about your job?
Dean Abbott: My job is not primarily about math or statistics. Yes, knowing math and statistics is very helpful in building predictive models and advanced analytics solutions, but most of my time is spent thinking creatively about the models that were built and what they mean to the organization.
For data scientists (at least the machine learning part of data science), in the words of David Hand, "Data is king." We care about the data and what it means. I don't care about the math behind the algorithms nearly as much as what the algorithms tell me about the data.
What's your favorite part about being a data scientist? Your least favorite part?
We'll start with my least favorite part. I don't enjoy the data cleansing part of data preparation. It's very detailed, tedious, and never-ending. That's not to say it's not critically important; it is. The other parts of predictive modeling are just more interesting.
My favorite part is building predictive models iteratively, finding ways to improve the accuracy and efficiency of the models with each iteration. The icing on the cake is when insights into why the models are accurate can be gleaned from the predictive models; gaining that "Wow, I never knew that!" from the models is the best kind of "aha" moment.
If you could go back in time, what's the one thing you would tell yourself as a new data scientist?
I'm at the back end of my career now, so I get asked this kind of question a lot by new data scientists in the field. What I tell them (only dodging the question slightly) is: get a job where you can apply data science, and apply the techniques to real-world problems for several years.
Algorithms and programming are taught well in undergraduate and graduate school, but the application to real-world problems is harder to teach. Learning to balance algorithm strengths against the pressure to push out practical solutions is a good exercise for all of us.
The second thing I recommend is to find and follow people you can learn from and be mentored by. If they are within your organization, all the better. If not, read their blogs, listen to their YouTube presentations, and ask questions. Learning from the experiences of others accelerates your sense of how best to accomplish data science tasks. Again: read, read, and read. Ask questions and listen. Augment your experience with others' experiences.
What's a personality trait you think people need to succeed at your job?
The thing I look for the most when hiring predictive modelers is intellectual curiosity. They need to look at a solution and wonder why the model behaves the way it does, dig into the data, and discover something about the business process that no one ever knew before. This is the part of data science that is hardest to teach, but it's potentially the most valuable for the business.
What's a typical day like for you? Do you work mostly with a team or mostly alone?
When I was a consultant, I spent perhaps 75 percent of my time more or less alone, working with data, building models, and assessing models. The other 25 percent was spent with the broader business team -- either defining the problems, recapping what the models were telling us about the data, or building the processes for deploying the models.
What's your biggest pet peeve (abused buzzword, overhyped idea, etc.) and why?
There are so many to choose from it's hard to pick one. I'll actually break it up into two pet peeves: one about hype and one about misleading approaches.
The most overhyped technology right now is deep learning. Now, I love what deep learning can do for us in the industry, and it's phenomenally accurate on many problems. The issue I have is with the perspective of some deep learning advocates that deep learning is always the best approach. We've been there and done that. If deep learning truly were always the best technique, we would see only deep learning networks winning competitions (such as those on Kaggle). However, this isn't the case.
The other pet peeve is how we're told to approach building classification models on severely biased target variable populations -- for example, when 99 percent of the target variable equals 0 and 1 percent equals 1. The conventional wisdom is to resample the data so the populations are equal, so that the model doesn't "always classify the record as a 0."
It turns out that for most algorithms, you don't need to resample; the algorithms will work just fine without resampling. The reason the classifier "calls everything a 0" is purely the software's interpretation of the predicted probability -- the default decision threshold -- not anything to do with the probabilities the classifier produces. I've presented and written on this phenomenon extensively (for example, here are my slides from Predictive Analytics World 2013).
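Abbott's point about thresholds can be illustrated with a minimal sketch. The scores below are hypothetical (not from any real model): a classifier trained on a 99/1 problem produces small probabilities for every record, so software that labels records positive only at P >= 0.5 "calls everything a 0" even though the model ranks the true positives correctly; moving the threshold down to roughly the base rate flags them without any resampling.

```python
# Hypothetical output of a classifier trained on a ~99%/1% problem.
# Maps record id -> (true label, predicted P(y=1)).
# Probabilities are small across the board, but the two positives
# ("d" and "f") still score higher than every negative.
scores = {
    "a": (0, 0.002), "b": (0, 0.004), "c": (0, 0.003),
    "d": (1, 0.060), "e": (0, 0.001), "f": (1, 0.045),
}

def classify(scores, threshold):
    """Label a record 1 when its predicted probability meets the threshold."""
    return {rec: int(p >= threshold) for rec, (_, p) in scores.items()}

# Default software behavior: threshold at 0.5 -> "everything is a 0".
default_labels = classify(scores, 0.5)
print(sum(default_labels.values()))   # 0 positives flagged

# Threshold near the base rate (1% here, so 0.01) instead:
adjusted_labels = classify(scores, 0.01)
print(sum(adjusted_labels.values()))  # both true positives flagged
print(adjusted_labels["d"], adjusted_labels["f"])
```

The model never changed; only the cutoff applied to its probabilities did, which is Abbott's argument for why resampling is usually unnecessary.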
Where is data analytics/data science headed in the next few years?
Data science is moving to the cloud where bigger data can be used. Big data combined with cloud computing means problems that even five years ago were intractable are now possible to solve.
The advances in speed and scale don't just make new problems possible. They also free analysts to think more deeply about the problems they are solving and less about how to reduce those problems to a simpler, solvable form. We can therefore try more permutations, build more versions of the models, and even try more target variables than ever before.
James E. Powell is the editorial director of TDWI, including the Business Intelligence Journal and Upside newsletter.