Chapter 1
INTRODUCTION TO DATA SCIENCE
1.1 WHY DATA SCIENCE?
Data science is one of the fastest-growing fields in the world, with 6.5 times as many job openings in 2017 as in 2012.1 Demand for data scientists is expected to keep increasing. For example, in May 2017, IBM projected that yearly demand for "data scientist, data developers, and data engineers will reach nearly 700,000 openings by 2020."2 InfoWorld reported that the #1 "reason why data scientist remains the top job in America"3 is that "there is a shortage of talent." That is why we wrote this book: to help alleviate the shortage of qualified data scientists.
1.2 WHAT IS DATA SCIENCE?
Simply put, data science is the systematic analysis of data within a scientific framework. That is, data science is the
- adaptive, iterative, and phased approach to the analysis of data,
- performed within a systematic framework,
- that uncovers optimal models,
- by assessing and accounting for the true costs of prediction errors.
Data science combines the
- data-driven approach of statistical data analysis,
- the computational power and programming acumen of computer science, and
- domain-specific business intelligence,
in order to uncover actionable and profitable nuggets of information from large databases.
In other words, data science allows us to extract actionable knowledge from under-utilized databases. Thus, data warehouses that have been gathering dust can now be leveraged to uncover hidden profit and enhance the bottom line. Data science lets people leverage large amounts of data and computing power to tackle complex questions. Patterns can emerge from the data that could not have been uncovered otherwise. These discoveries can lead to powerful results, such as more effective treatment of medical patients or greater profits for a company.
1.3 THE DATA SCIENCE METHODOLOGY
We follow the Data Science Methodology (DSM),4 which helps the analyst keep track of which phase of the analysis he or she is performing. Figure 1.1 illustrates the adaptive and iterative nature of the DSM, using the following phases:
- Problem Understanding Phase. How often have teams worked hard to solve a problem, only to find out later that they solved the wrong problem? Further, how often have the marketing team and the analytics team not been on the same page? This phase attempts to avoid these pitfalls.
- First, clearly enunciate the project objectives,
- Then, translate these objectives into the formulation of a problem that can be solved using data science.
- Data Preparation Phase. Raw data from data repositories are seldom ready for the algorithms straight out of the box. Instead, they need to be cleaned, or "prepared for analysis." When analysts first examine the data, they inevitably uncover data quality problems. It is in this phase that we fix these problems. Data cleaning/preparation is probably the most labor-intensive phase of the entire data science process. The following is a non-exhaustive list of the issues that await the data preparer.
- Identifying outliers and determining what to do about them.
- Transforming and standardizing the data.
- Reclassifying categorical variables.
- Binning numerical variables.
- Adding an index field.
The data preparation phase is covered in Chapter 3.
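Several of the preparation steps above can be sketched briefly in Python with pandas. The data values, column names, and the |z| > 3 outlier rule below are illustrative assumptions, not taken from the text:

```python
import pandas as pd

# Hypothetical example data; values and column name are illustrative only
df = pd.DataFrame({"income": [38000, 41000, 45000, 52000, 250000]})

# Standardize the variable: z = (x - mean) / standard deviation
df["income_z"] = (df["income"] - df["income"].mean()) / df["income"].std()

# Flag outliers; |z| > 3 is a common rule of thumb
df["income_outlier"] = df["income_z"].abs() > 3

# Bin the numerical variable into three equal-width categories
df["income_bin"] = pd.cut(df["income"], bins=3,
                          labels=["low", "medium", "high"])

# Add an index field so each record can be tracked through later phases
df["index"] = range(len(df))
```

Chapter 3 treats these steps in detail, including better-motivated binning strategies than the equal-width split used here.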
- Exploratory Data Analysis Phase. Now that the data are nice and clean, we can begin to explore them and learn some basic information. Graphical exploration is the focus here. Now is not the time for complex algorithms. Rather, we use simple exploratory methods to help us gain some preliminary insights. You might find that you can learn quite a bit just by using these simple methods. Here are some of the ways we can do this.
- Exploring the univariate relationships between predictors and the target variable.
- Exploring multivariate relationships among the variables.
- Binning based on predictive value to enhance our models.
- Deriving new variables based on a combination of existing variables.
We cover the exploratory data analysis phase in Chapter 4.
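As a small sketch of the first and last items above, the snippet below examines one predictor's relationship with the target and derives a new variable from existing ones. The toy data, column names, and derived flag are our own illustrative assumptions:

```python
import pandas as pd

# Hypothetical example: one categorical predictor and a binary target
df = pd.DataFrame({
    "marital_status": ["single", "married", "single", "married", "single"],
    "response":       [1,        0,         1,        0,         0],
})

# One predictor at a time: the proportion of responders in each category
response_rate = df.groupby("marital_status")["response"].mean()

# Derive a new variable from a combination of existing variables
df["is_single_responder"] = ((df["marital_status"] == "single")
                             & (df["response"] == 1)).astype(int)
```

In practice these summaries would typically be plotted, for example as bar charts of response rate per category, since graphical exploration is the focus of this phase.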
- Setup Phase. At this point we are nearly ready to begin modeling the data. We just need to take care of a few important chores first, such as the following:
- Cross-validation, either twofold or n-fold. This is necessary to avoid data dredging. In addition, your data partitions need to be evaluated to ensure that they are indeed random.
- Balancing the data. This enhances the ability of certain algorithms to uncover relationships in the data.
- Establishing baseline performance. Suppose we told you we had a model that could predict correctly whether a credit card transaction was fraudulent or not 99% of the time. Impressed? You should not be. The non-fraudulent transaction rate is 99.932%.5 So, our model could simply predict that every transaction was non-fraudulent and be correct 99.932% of the time. This illustrates the importance of establishing baseline performance for your models, so that we can calibrate our models and determine whether they are any good.
The Setup Phase is covered in Chapter 5.
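The partitioning and baseline ideas above can be sketched as follows. The fraud rate and data are invented for illustration (the text's figure of 99.932% refers to real transaction data, not this toy set):

```python
import pandas as pd

# Illustrative data: roughly 1 fraudulent transaction per 100 (hypothetical)
df = pd.DataFrame({"fraud": [0] * 99 + [1]})

# Twofold cross-validation: a random 50/50 partition of the records.
# In practice, the partitions should also be checked for randomness,
# e.g. by comparing the fraud rate in train versus test.
train = df.sample(frac=0.5, random_state=0)
test = df.drop(train.index)

# Baseline: always predict the majority class ("not fraudulent").
# Any candidate model must beat this accuracy to be worth keeping.
baseline_accuracy = (df["fraud"] == 0).mean()
```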
- Modeling Phase. The modeling phase represents the opportunity to apply state-of-the-art algorithms to uncover some seriously profitable relationships lying hidden in the data. The modeling phase is the heart of your data scientific investigation and includes the following:
- Selecting and implementing the appropriate modeling algorithms. Applying inappropriate techniques will lead to inaccurate results that could cost your company big bucks.
- Making sure that our models outperform the baseline models.
- Fine-tuning your model algorithms to optimize the results. Should our decision tree be wide or deep? Should our neural network have one hidden layer or two? What should be our cutoff point to maximize profits? Analysts will need to spend some time fine-tuning their models before arriving at the optimal solution.
The modeling phase represents the core of your data science endeavor and is covered in Chapters 6 and 8-14.
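As a minimal illustration of the fine-tuning question "should our decision tree be wide or deep?", the sketch below uses scikit-learn to compare a shallow tree against a slightly deeper one on toy data we made up; it is not a full modeling workflow:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data (illustrative): target is 1 only when both predictors are 1
X = [[0, 0], [0, 1], [1, 0], [1, 1]] * 10
y = [0, 0, 0, 1] * 10

# Compare tree depths: a depth-1 "stump" versus a depth-2 tree
scores = {}
for depth in (1, 2):
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    scores[depth] = model.fit(X, y).score(X, y)
```

Here the deeper tree captures the interaction between the two predictors that the stump cannot; in a real project, such comparisons would be made on held-out data from the Setup Phase, not on the training data.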
- Evaluation Phase. Your buddy at work may think he has a lock on his prediction for the Super Bowl. But is his prediction any good? That is the question. Anyone can make predictions. It is how the predictions perform against real data that is the real test. In the evaluation phase, we assess how our models are doing, whether they are making any money, or whether we need to go back and try to improve our prediction models.
- Your models need to be evaluated against the baseline performance measures from the Setup Phase. Are we beating the monkeys-with-darts model? If not, better try again.
- You need to determine whether your models are actually solving the problem at hand. Are your models actually achieving the objectives set for them back in the Problem Understanding Phase? Has some important aspect of the problem not been sufficiently accounted for?
- Apply error costs intrinsic to the data, because data-driven cost evaluation is the best way to model the actual costs involved. For instance, in a marketing campaign, a false positive is not as costly as a false negative. However, for a mortgage lender, a false positive is much more costly than a false negative.
- You should tabulate a suite of models and determine which model performs the best. Choose either a single best model, or a small number of models, to move forward to the Deployment Phase.
The Evaluation Phase is covered in Chapter 7.
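The asymmetric error costs discussed above can be sketched with a simple cost calculation. The confusion-matrix counts and the per-error costs below are invented for illustration; real costs would come from the business problem itself:

```python
# Hypothetical confusion-matrix counts (illustrative only)
tp, fp, fn, tn = 80, 30, 20, 870

# Marketing-style costs: a missed responder (false negative) hurts
# more than a wasted contact (false positive); values are assumptions
cost_fp, cost_fn = 2.0, 25.0

# Total model cost weights each error type by its real-world cost
total_cost = fp * cost_fp + fn * cost_fn

# Plain accuracy ignores the cost asymmetry entirely
accuracy = (tp + tn) / (tp + fp + fn + tn)
```

Comparing models on total cost rather than raw accuracy is what lets the analyst choose the model that actually makes (or saves) the most money.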
- Deployment Phase. Finally, your models are ready for prime time! Report to management on your best models and work with management to adapt your models for real-world deployment.
- Writing a report of your results may be considered a simple example of deployment. In your report, concentrate on the results of interest to management. Show that you solved the problem and report on the estimated profit, if applicable.
- Stay involved with the project! Participate in the meetings and processes involved in model deployment, so that the deployment stays focused on the problem at hand.
Figure 1.1 Data science methodology: the seven phases.
It should be emphasized that the DSM is iterative and adaptive. By adaptive, we mean that sometimes it is necessary to return to a previous phase for further work, based on some knowledge gained in the current phase. This is why there are arrows pointing both ways between most of the phases. For example, in the Evaluation Phase, we may find that the model we crafted does not actually address the original problem at hand, and that we need to return to the Modeling Phase to develop a model that will do so.
Also, the DSM is iterative, in that sometimes we may use our experience of building an effective model on a similar problem. That is, the model we created serves as an input to the investigation of a related problem. This is why the outer ring of arrows in Figure 1.1 shows a constant recycling of older models used as inputs to examining new solutions...