What ‘not’ to do when you’re starting to learn Data Science

Sarthak Arora
6 min read · May 13, 2021

Richard Feynman, the great physicist, said that in order to understand a subject, you should understand its fundamentals well enough to explain them to a 10-year-old kid.

Now, if you look back at how you were taught Mathematics (or any other subject, for that matter) in pre-school, you started from the absolute building blocks: digits, numbers, counting, arithmetic operations and so on. A longer path was taken to reach a particular level of understanding.

Do you see what I am getting at?

Think of it this way: if you have to go from A to B, what do you generally do? Try to jump directly from A to B, right?

Can you think of places where most of us are lacking? Why can’t we crack the ‘simple’ interviews? Why are we not able to answer elementary questions? We are getting shortlisted for the interviews but are having a hard time clearing them.

If you are just starting your journey, this article is for you. Or, if you have already learnt a few concepts, well, this article will help you to realign and prioritise the important stuff first.

It’s my fifth month at Paisabazaar, a Fintech company, as Assistant Manager (Analytics), and my opinions about what a Data Science job involves have changed.
In other words, I have learnt what one’s priorities should be before starting to apply or work somewhere.

Let’s do some myth-busting!

This is what I believe most of us freshers think-
1. SQL and Databases are secondary to Python
2. ML Algorithms are just one-line codes in Scikit-learn
3. Model Building is the most fascinating job on this planet
4. Deep Learning is a pre-requisite to land a job in Data Science
5. Linear Regression and Logistic Regression cannot solve problems
6. Writing clean code is not necessary

I guess, for someone who has just started, these are some of the ‘facts’ they have learnt or read somewhere. Let’s talk about them one by one.

  1. SQL and Databases are secondary to Python-

As a Data Scientist/Analyst, what’s your raw material to make wonderful dashboards/models/summaries? It’s Data, right? And where is Data stored? Obviously, in Databases. Now, if you cannot handle your raw material, how can you expect to do the cooking?

Rather, databases should be your friends. Using SQL/Hive to extract and summarize data is a skill you should possess. To be honest, I didn’t work enough on it and am still trying to get used to it.
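To make that concrete, here is a minimal sketch of the kind of extract-and-summarize query you end up writing every week. The table, columns and connection are hypothetical, and SQLite is only a stand-in for whatever warehouse (Hive, Postgres, etc.) you actually query.

```python
import sqlite3

import pandas as pd

# SQLite as a stand-in for the warehouse you actually connect to
conn = sqlite3.connect("analytics.db")  # hypothetical local database

# Pull a monthly summary straight from the database instead of dumping
# the raw table and aggregating everything in Python.
query = """
SELECT
    product_type,
    strftime('%Y-%m', disbursed_at) AS month,
    COUNT(*)                        AS num_loans,
    AVG(loan_amount)                AS avg_amount
FROM loans                          -- hypothetical table
WHERE disbursed_at >= '2021-01-01'
GROUP BY product_type, month
ORDER BY month, product_type;
"""

summary = pd.read_sql(query, conn)
print(summary.head())
```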

When you start a job, you have to understand the business and the kind of data they collect and use to solve problems. To perform, you have to understand the context first. You are gonna spend a lot of time understanding the databases and tables that are being used across the organisation. Only then can you deliver.

Focus more on the exploration of data. Once you are in a job, you might not be handed a modelling problem right away. Pandas and NumPy should be your priority rather than Scikit-learn.
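For the exploration side, a handful of Pandas/NumPy calls usually tells you more about a new table than any model will. The file and column names below are only illustrative.

```python
import numpy as np
import pandas as pd

# Hypothetical extract pulled from the warehouse
df = pd.read_csv("loans_extract.csv")

# First pass at understanding the data, long before any modelling
print(df.shape)                   # how much data do we actually have?
print(df.dtypes)                  # are the column types what we expect?
print(df.isna().mean().round(3))  # share of missing values per column
print(df.describe())              # basic distributions of numeric columns

# A quick grouped summary often answers the business question on its own
print(df.groupby("product_type")["loan_amount"].agg(["count", "mean", "median"]))

# NumPy for a quick derived sanity check on a skewed amount column
df["log_amount"] = np.log1p(df["loan_amount"])
```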

2. ML Algorithms are just one-line codes in Scikit-learn-

Different kinds of models have different use cases. Modelling isn’t just about running the default model to get some output. Optimizing a model, making it parsimonious (so that it can handle new data effectively and give good results), tuning the parameters: you have to do all of that to design a model.
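As a rough illustration of the gap between the “one-liner” and an actual modelling pass, here is a sketch on a scikit-learn toy dataset; the parameter grid is just an example, not a recommendation.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# The "one-line" version: default hyperparameters, no thought involved
default_model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("default test accuracy:", default_model.score(X_test, y_test))

# A first pass at actual tuning: pick a grid, cross-validate, and prefer
# the simpler (more parsimonious) model if the gain is marginal.
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5, None],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,
    scoring="accuracy",
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("tuned test accuracy:", search.best_estimator_.score(X_test, y_test))
```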

3. Model Building is the most fascinating job on this planet-

As I mentioned in the last point, using a one-line code isn’t enough. Before you even start to model, you need to brainstorm about what kind of data you need to build it on: the variables that can impact the output. Data Collection from various databases/stakeholders, Data Manipulation, Data Transformation and Data Imputation take up about 80% of the time it takes to build a model.

Making sure you are using the right kind of data to build a model is much more important than running a model on it. You can build a skeleton only after you have a sufficient number of bones.

Moreover, after you have picked a model, you have to deploy it, track it and make improvements if it is not working as desired. Make sure you have spent a sufficient amount of time validating it before deploying.
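Here is a minimal sketch of what that unglamorous 80% plus the validation step can look like in scikit-learn: imputation and transformation wrapped in a pipeline, then cross-validation before anyone mentions deployment. The dataset, columns and target below are hypothetical.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical modelling table assembled by joining a few warehouse tables
df = pd.read_csv("model_table.csv")
numeric_cols = ["age", "income", "loan_amount"]   # assumed columns
categorical_cols = ["product_type", "city"]       # assumed columns
X, y = df[numeric_cols + categorical_cols], df["defaulted"]

# Most of the work lives here: imputation and transformation, not the model
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

model = Pipeline([("prep", preprocess),
                  ("clf", LogisticRegression(max_iter=1000))])

# Validate properly before anyone talks about deployment
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cv AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```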

4. Deep Learning is a pre-requisite to land a job in Data Science-

Coming back to my point: we try to go from A to B directly, thereby skipping steps X, Y and Z. After doing that, you will definitely feel like a winner. But in the long run, you are the one who is gonna lose.

I see that many freshers try to ‘deep’ dive into Deep Learning and Machine Learning to bag a job in Data Science. It’s almost like buying unwanted and extra ingredients to make a simple dish in the kitchen. These ingredients would definitely complement your profile, but what’s the point of advancing if you falter on the basics?

Moreover, the person who interviews you for the post may not even know deep learning themselves. Yes, that’s very much a possibility.

Rather, your focus should be to get the basics right, have some projects under your name and work on how to answer questions in the interviews. Yes, that is gonna help you land a job rather than trying to learn everything and anything in this world.

5. Linear Regression and Logistic Regression cannot solve problems-

Deep Learning models are ‘Black-Box models’, or in other words, they are non-interpretable. While they may well solve the problem at hand, you might not be able to explain the result to the stakeholder. What if the concerned person asks you, “How is X variable impacting our target?” It’s tough to answer that question if you have implemented a Black-Box Model.

Linear Regression and Logistic Regression are the starting points of modelling. When you use them, you can easily understand how different variables affect the target variable. There is gonna be a tradeoff between interpretability and accuracy, and most businesses choose interpretability because it helps them make business decisions at a granular level.
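A small sketch of that interpretability payoff: fit a logistic regression and read the coefficients off directly. The dataset here is a scikit-learn toy dataset, purely for illustration.

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer(as_frame=True)   # toy dataset, just for illustration
X, y = data.data, data.target

# Scale first so the coefficients are roughly comparable across features
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# The interpretability payoff: one coefficient per variable.
# Positive pushes the prediction towards class 1, negative towards class 0.
coefs = pd.Series(
    model.named_steps["logisticregression"].coef_[0],
    index=X.columns,
).sort_values()
print(coefs.head(5))   # strongest negative drivers
print(coefs.tail(5))   # strongest positive drivers
```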

Spend enough time on regression and only when you are comfortable, move ahead. Do not forget to understand the assumptions behind every model.

6. Writing clean code is not necessary-

When you are in a job, many of your tasks are going to be repetitive. If you sit down and write that code from scratch every time you are given the same task, you are wasting a lot of time that could be used somewhere else. And if you cannot understand your own code, why are you even doing the task in the first place?
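A small sketch of what that looks like in practice: wrap the repetitive step in one well-named function and reuse it everywhere. The summary logic, file and column names are just examples.

```python
import pandas as pd


def monthly_summary(df: pd.DataFrame, date_col: str, value_col: str) -> pd.DataFrame:
    """Return the count and mean of `value_col` per calendar month.

    A task like this comes up every week; writing it once as a function
    beats pasting the same few lines into every new notebook.
    """
    return (
        df.assign(month=pd.to_datetime(df[date_col]).dt.to_period("M"))
          .groupby("month")[value_col]
          .agg(["count", "mean"])
          .reset_index()
    )


# Reuse it across tasks instead of rewriting it each time (file/columns hypothetical)
loans = pd.read_csv("loans_extract.csv")
print(monthly_summary(loans, "disbursed_at", "loan_amount"))
```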

Your code should be efficient. I used to think that as long as I was completing a task, how I was doing it did not matter. The truth is that if you are not learning from every task you do, there is something wrong with your approach.

If some technique is slightly tough to learn but can help you do your tasks efficiently, invest some time to learn it.

Mind you- I used the word ‘invest’ instead of ‘spend’.

If you are still reading this article, I wish you all the best. I will be more than happy to carry out meaningful discussions with you. That’s the beauty of this field: there’s scope for a lot of peer learning. You can connect with me by clicking here.

Let’s go from A to B via X, Y and Z!

Also, please give me a follow here, on Medium. That will motivate me to create content frequently for you. Cheers!


Sarthak Arora

Data Scientist @ Jupiter.co | Ex - Assistant Manager in Analytics @ Paisabazaar | I write about Data Science and ML | https://www.linkedin.com/in/iasarthak/