Starting a 100 Days of Data Science Challenge
Consistently learning and building Data Science projects :)
I am starting a 100 Days of Data Science Challenge to ensure I consistently learn and build Data Science projects.
Background
Back in June 2016, Alexander Kallaway made a public commitment to code for at least 1 hour a day for the next 100 days (Original Join the #100DaysOfCode Article). His goal was to become a better developer, and he wanted a challenge that would keep him accountable and ensure he practiced on a consistent basis.
Since then, countless people have participated in the 100 Days of Code challenge. There is even a 100 Days of Code Website, as well as a 100 Days of X Challenges Website to help you join a community of people around any habit or activity you are trying to cultivate.
So Data Science?
Earlier this year, I publicly shared my Artificial Intelligence Study Plan and started working on Data Science Challenges. This proved to be a great combination, as I was able to apply the concepts I was learning to real-world challenges, such as working with APIs to gather and analyze data and scraping websites to create custom datasets.
Yet something was missing. Thanks to life, I wasn’t able to do this on a regular basis. I had good weeks and bad weeks. I wanted to change this: I wanted to work on my courses and projects more consistently. Well, there is nothing more consistent than working on something every day.
That is why I decided to start this 100 Days of Data Science challenge.
What’s the plan?
The plan is to work through my AI Study Plan, while continuing to build projects via Data Science Challenges.
More specifically, I’m going to finish up the Probability and Statistics section of DataQuest’s Data Scientist Path. Afterwards, I will pivot to work through their Data Engineer Path.
- DataQuest’s Data Engineer Path
- DataQuest’s Data Scientist Path
- Data Science Challenges
Alongside this, I will focus on gaining familiarity with the following:
- NoSQL databases
- Building end-to-end data pipelines with API/Web App interfaces
- Using Spark to analyze larger datasets
- Using Airflow to schedule ETL processes
- Building and deploying Machine/Deep Learning Models
I will do this by building projects, which will continue to reinforce what I learn and expand my skills.
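To give a flavor of what I mean by something like scheduling an ETL process with Airflow, here is a rough sketch of a daily job. The DAG id, the API URL, and the extract/load logic are just placeholders rather than code from an actual project, and the import paths assume Airflow 2.x.

```python
# Minimal sketch of a daily ETL step scheduled with Airflow (assumes Airflow 2.x).
# The DAG id, API URL, and extract/load logic are hypothetical placeholders.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    """Pull JSON from a placeholder API and report how many records arrived."""
    response = requests.get("https://api.example.com/data", timeout=30)
    response.raise_for_status()
    records = response.json()
    # In a real pipeline, this step would write the records to a database.
    print(f"Fetched {len(records)} records")


with DAG(
    dag_id="daily_api_etl",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,               # don't backfill missed runs
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```

The projects I build over the 100 days should end up looking like fleshed-out versions of sketches like this one, with real data sources and proper storage behind them.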
How will I track this?
I have created a 100 Days of Data Science GitHub repository, which will contain the 100 Days of Data Science Log.
Per the challenge instructions, I will update the log daily with a summary of my work for that day and, whenever possible, a link to my work (code or article).
Part of the challenge is to share your progress publicly by posting on social media. I will post my daily progress on Twitter and, where it makes sense, on LinkedIn (for milestones such as 10 days, 20 days, 50 days, etc.).