
Thursday, March 12, 2015

The trinity of data science: the wall, the nail and the hammer

In Leo Breiman's legendary paper, "Statistical Modeling: The Two Cultures," a trend in statistical research was criticized: people hold on to their models (methods) and look for data to apply those models (methods) to. "If all you have is a hammer, everything looks like a nail," as Breiman quoted.

A similar research pattern has been observed recently in data science. The availability of large unstructured data sets (i.e., Big Data) has sparked the imagination of quantitative researchers (and would-be data scientists) everywhere. Challenges such as the LinkedIn Economic Graph Challenge have invited people to think hard about creative ways to unlock the information hidden in vast "data" that can potentially lead to novel data products. The exploratory nature of this trend, to some extent, resembles the quest of a hammer in search of nails. Only this time, it is a vast expanse of under-utilized wall in search of deserving nails. Once the most deserving nails (or hooks and other wall installations) are identified, the most appropriate hammers (or tools) will be found or crafted for installing them.

In every data science project, there is a trinity of basic components: the problem to be addressed, the data to be used and the method/model for the hack.
The Trinity of Data Science

Traditional statistical modeling usually starts with the problem, assumes a certain generative mechanism (model) for a potential data source, and devises a suitable method. The hack sequence is then problem-data-methods (first start with the nail, then choose which wall to use, and then decide which hammer to use, considering the nature of the nail and the wall).

Data mining explores data using suitable methods to reveal interesting patterns and eventually suggests discoveries that address important scientific problems. The hack sequence is then data-methods-problem (the wall, the available hammers, and the nails).

Data scientists enter this tri-fold path at different points, depending on their career paths. Those from an applied domain have most likely entered from the problem entry point, then moved to data and then to methods. It is often very tempting to keep using the same methods when one moves from one data set to the next. The training of these application-domain data scientists often comes with a "manual" of popular methods for their data. Data evolve. So should the adopted methods, especially given the advancements in the methodological domain. The hammer that used to be the best for the nail/wall might no longer be the best given the current collection of hammers. It is time to upgrade.

Methodological data scientists such as statisticians enter from the methodological perspective due to their training. When looking for ways to apply or extend their methods, they should consider problems where their methods might be useful and then find good data for those problems. In the process, one should never take for granted that the method can simply be applied to the problem-data duo in its original form.

Computational data scientists and engineers often start from manipulating large data sets, with algorithms motivated by previously studied problems or models. When a similar large data set becomes available, the most interesting problems that can be answered with it might differ from the ones addressed before with another data set. One should use creative methods to identify novel patterns in such data and discover interesting problems to answer.

Looking at this trinity map of data science, it is easy to understand that, for some, there will naturally be phases when one knows a few methods (from training, or from prior hacks of data sets) and looks for (other) data sets or problems to hack; for others, there will be phases when one has a big data set and looks for problems that can be answered with it. And there will also be those who start with a problem, find or collect data and apply existing or novel methods to the data.

These are all valid and natural "entry points" into data science. The most important thing is to remember that there are many different hammers, many different nails and many different walls. The quest of a data scientist should always be to find the best match among the wall, the nail and the hammer, and to be willing to change, improve and create.

Wednesday, February 11, 2015

A statistical read about gender splits in teaching evaluations

A recent article on NYT's The Upshot shared a visualization project of teaching evaluations on "Rate My Professors", 14 million of them. Reading this article after a long day in the office made me especially "emotional." It confirmed that I have not been delusional.
It suggests that people tend to think more highly of men than women in professional settings, praise men for the same things they criticize women for, and are more likely to focus on a woman’s appearance or personality and on a man’s skills and intelligence.
Actually, the study didn't find people focus on a woman's appearance as much as expected. 

The article made an important point: "The chart makes vivid unconscious biases." The 14 million reviews were not posted to intentionally paint a biased picture of female professors compared with their male colleagues. The universities didn't deliberately assign star male professors to teach alongside mediocre female professors. Online teaching evaluations have known biases, as people who feel strongly about what they have to say are more likely to post reviews. But this selection bias cannot explain away the "gender splits" observed. They are due to "unconscious biases" towards women.

What does this term, "unconscious biases", actually suggest? It suggests that if you think you are being fair to your female colleagues, female students or professors, you are probably not. If the biases are unconscious, how can we possibly assert that we do not have them? Most of those who wrote the 14 million reviews must have felt they were giving fair reviews. Therefore, statistically speaking, if 14 million intended "fair" reviews carried so much unconscious bias, we have to act more aggressively than just being fair to offset these unconscious biases.

Friday, January 30, 2015

Lego, sampling and bad-behaving confidence intervals

Yesterday, during the second lecture of our Introduction to Data Science course for students in non-quantitative programs, we did a sampling demo adapted from Andrew Gelman's teaching book (A Bag of Tricks).

Change from candies to legos. The original teaching recipe uses candies. A side effect of that is that the instructor always ends up with lots of leftover candies, as the students are getting more and more health conscious. So this time, I decided to use lego pieces. One advantage of this change is that we can skip the kitchen scale and just count the number of studs (or "points") on the lego pieces.

Preparation. The night before, I counted out two bags of 100 lego pieces each: population A and population B. Population A consists of about 30 large pieces and 70 tiny pieces. Population B consists of 100 similar-sized pieces (4 studs, 6 studs and 8 studs).

In-class demo. At the beginning of the lecture, we explained to the students what they needed to do and passed one bag to half of the class and the other bag to the other half, along with data recording sheets.

Results. Before class, I asked Ke Shen, an MA student in our program who is very good at visualization and R, to create an RShiny app for this demo, where I can quickly key in the numbers and display the confidence intervals.

Here are population A samples.
Here are population B samples. 
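For anyone who wants to try this at home, below is a minimal sketch (not Ke Shen's actual RShiny app) of how keyed-in stud counts can be turned into a confidence-interval plot in base R. The sample values, sample size and true mean are made up for illustration.

# Each element: the stud counts recorded by one group (samples of 5 pieces each)
# Values are hypothetical, for illustration only
samples_A <- list(c(8, 8, 1, 2, 8),
                  c(4, 8, 8, 1, 8),
                  c(8, 2, 8, 8, 1))
true_mean_A <- 3.1   # hypothetical true mean number of studs per piece

# 95% t-based confidence interval for the mean of one sample
ci <- function(x, level = 0.95) {
  n <- length(x)
  mean(x) + c(-1, 1) * qt(1 - (1 - level) / 2, df = n - 1) * sd(x) / sqrt(n)
}

cis   <- t(sapply(samples_A, ci))   # one row of (lower, upper) per sample
means <- sapply(samples_A, mean)

# Plot each sample mean and its interval against the true mean (red line)
plot(means, seq_along(means), xlim = range(cis, true_mean_A), pch = 19,
     xlab = "studs per piece", ylab = "sample", yaxt = "n")
axis(2, at = seq_along(means))
segments(cis[, 1], seq_along(means), cis[, 2], seq_along(means))
abline(v = true_mean_A, col = "red")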

Conclusion. Several things we noticed from this demo:
  1. Sampling lego pieces can be pretty noisy.
  2. All samples of population A over-estimated the true population mean (the red line); samples of population B seemed to be doing better.
  3. Population variation affects the width of the confidence intervals.
  4. But even the wider confidence intervals missed the true mean, due to the large bias.

Wednesday, December 17, 2014

Stacked bar-plot to show different allocation profiles.

In our 2010 paper on estimating personal network sizes, we used the following graph to show the non-random mixing matrix we estimated for personal networks:


Each group of bars represents the composition of a certain ego group's social networks, broken down into eight groups of alters (or types of acquaintances). This figure demonstrates the homophily phenomenon in social networks: individuals tend to form ties with others who are similar to themselves.

Today, Shirin asked me how I made this plot. Despite its "busy" appearance, it is actually pretty easy to make. Assume you have two ego groups. Then you have two vectors of proportions (compositions) of length 8, where we assume the first 4 entries are for male alter groups and the second 4 are for female alter groups.
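Below is a minimal sketch in base R with made-up proportions (the actual figure used the mixing matrix estimated in the paper): each ego group's length-8 composition vector becomes one column of a matrix, and barplot() stacks the columns by default.

# Two ego groups; each column gives the proportions of the eight alter groups
# (first four male, last four female). Numbers are made up for illustration.
comp <- cbind(ego1 = c(0.20, 0.15, 0.10, 0.05, 0.18, 0.12, 0.12, 0.08),
              ego2 = c(0.05, 0.10, 0.15, 0.20, 0.08, 0.12, 0.12, 0.18))
rownames(comp) <- c(paste("male", 1:4), paste("female", 1:4))

# One stacked bar per ego group; shades distinguish the eight alter groups
barplot(comp, col = gray.colors(8), legend.text = rownames(comp),
        args.legend = list(x = "topright", cex = 0.7),
        xlab = "ego group", ylab = "proportion of alters")

With more ego groups you simply add more columns; passing beside = TRUE would give grouped rather than stacked bars.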



Wednesday, December 10, 2014

Circlize your visualization!

Using a circular layout is a good way of presenting a system of information such as a network; such a plot is also known as a chord diagram.


I found a nice R package called circlize that provides functions to create a whole range of cool visualizations. Read their tutorials to have as much fun as you would like. Here is the example I like the most: you start with a grid of images like this (Keith Haring's doodle)

and make it into a cool circular adaptation:
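For a first try before attempting anything that fancy, a basic chord diagram takes only a couple of lines. Here is a minimal sketch with a made-up 4-by-4 flow matrix; chordDiagram() draws one ribbon per nonzero cell, with width proportional to the value.

# A toy chord diagram with circlize; the matrix is made up for illustration
library(circlize)

set.seed(1)
mat <- matrix(sample(1:10, 16, replace = TRUE), nrow = 4,
              dimnames = list(paste0("from", 1:4), paste0("to", 1:4)))

chordDiagram(mat)  # one ribbon per cell, width proportional to the value
circos.clear()     # reset the circular layout before the next plot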