Sunday, April 26, 2015

Forget P-values? or just let it be what it is

The p-value has always been controversial. It is required for certain publications, banned from some journals, hated by many yet quoted widely. Not all p-values are loved equally. Because of a convention someone popularized some 90 years ago, the small values below 0.05 have been the crowd's favorite.

When we teach hypothesis testing, we explain that the entire spectrum of the p-value serves a single purpose: quantifying the "agreement" between an observed set of data and a statement (or claim) in the null hypothesis. Why do we single out the small values then? Why can't we talk about any specific p-value the same way we talk about today's temperature, i.e., as a measure of something?

First of all, the scale of the p-value is hard to talk about, unlike temperature. The difference between 0.21 and 0.20 is not the same as that between 0.02 and 0.01. It almost feels like we should use the reciprocal of the p-value to discuss how unlikely the corresponding observed statistic is, assuming the null hypothesis is true. If the null hypothesis is true, it takes, on average, 100 independent tests to observe a p-value below 0.01. A p-value under 0.02 is twice as likely to occur, taking only about 50 tests to observe. Therefore 0.01 is twice as unlikely as 0.02. By the same calculation, 0.21 and 0.20 are almost identical in terms of likeliness under the null.

In introductory statistics, a test of significance is said to have four steps: stating the hypotheses and a desired level of significance, computing the test statistic, finding the p-value, and drawing a conclusion given the p-value. It is step 4 that requires us to draw a line somewhere on the spectrum of the p-value between 0 and 1. That line is called the level of significance. I never enjoyed explaining how one should choose the level of significance, and many of my students felt confused. Technically, if a student derived a p-value of 0.28, she can claim it is significant at a significance level of 0.30. This is silly because the significance level should convey a certain sense of rare occurrence, so rare that it is deemed contradictory to the null hypothesis. No one with common sense would argue that a chance close to 1 out of 3 represents rarity.
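As a concrete illustration of the four steps, here is a one-sample z test sketched in Python (all numbers, including the hypothesized mean and the data, are made up for illustration):

```python
import math

def z_test_p_value(xbar, mu0, sigma, n):
    """Two-sided p-value for a one-sample z test with known sigma."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    # P(|Z| >= |z|) from the standard normal CDF, Phi(z) = (1 + erf(z/sqrt(2)))/2
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05                                              # step 1: hypotheses and level
p = z_test_p_value(xbar=172.1, mu0=170, sigma=6, n=36)    # steps 2-3: statistic and p-value
print(f"p = {p:.4f}; reject H0 at {alpha}? {p < alpha}")  # step 4: conclusion
```

The point of the example is that steps 1-3 are mechanical; only step 4 forces a line to be drawn on the (0, 1) spectrum.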

What common sense fails to deliver is how rare is contradictory enough. Why does 1/20 need to be a universal choice? It doesn't. Statisticians are not much bothered by "insignificant results," as we think 0.051 is just as interesting as 0.049. Whenever possible, we would rather report the actual p-value than state that we reject/accept the null hypothesis at a certain level. We use p-values to compare the strength of evidence across variables and studies. Sometimes, however, we don't have a choice, so we get creative.

For any particular test of a null hypothesis against an alternative, a representative (i.e., free of selection bias) sample of p-values would offer a much better picture than the current published record of a handful of p-values under 0.05 out of who-knows-how-many trials. There have been suggestions to publish insignificant results to avoid the "cherry-picking" of p-values. Despite the apparent appeal of such a reform, I cannot imagine it being practically possible. If we assume that most people have been following the 0.05 "rule," publishing all the insignificant results would mean a roughly 20-fold increase in the number of published studies. It probably would, though, create a very interesting data set for data mining. What would be useful is a public database of p-values from repeated studies of the same test (not just the same null hypothesis, as the test often depends on the alternative as well). In this database, the p-value can finally be just what it is: a measure of agreement between a data set and a claim.

Friday, April 17, 2015

My new cloud computer, which may never need an upgrade?

What has been stopping me from upgrading my computer is the pain of migrating from one machine to the other. Over the years, I have intentionally made myself less reliant on any one computer by sharing files across machines using Dropbox or Google Drive. I still have a big and old (see note below) office desktop that holds all my career (or most of it). Every time I need to work on something, I put a folder in Dropbox and work on a laptop on the go, at home, or in a coffee shop.

Using Columbia's LionMail Google account, we receive 10 TB of cloud storage on Google Drive. This is more than enough to hold all my files. A personal computer has the following components: file storage, operating system, input/output devices, user software and user content. I have just decided to move the file storage/user content component of my computer onto the cloud. The next step would be moving the most essential user software onto the cloud and connecting to it remotely from any web browser to work. I can't wait to set up a remote cloud-based RStudio server to try this out.

This whole thing started when I was searching for a faster desktop replacement. I compared the price of the best available desktop with the pricing of cloud computing engines. The $8000-or-higher price tag of the most powerful PC/Mac would afford me non-stop computing on a 16-core cloud engine with 64 GB of memory or more for two full years. Considering the idle time a PC is likely to have, that may be equivalent to 4-5 years. It seems to me the best "personal" computer now is on the cloud.
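The back-of-envelope arithmetic goes roughly like this (the hourly rate is my assumption for illustration; actual cloud pricing varies by provider and changes over time):

```python
# Back-of-envelope comparison; the hourly rate below is an assumed figure,
# not a quote from any particular cloud provider.
desktop_cost = 8000.0   # high-end workstation price tag
hourly_rate = 0.45      # assumed rate for a 16-core, 64 GB cloud instance

hours_afforded = desktop_cost / hourly_rate
years_nonstop = hours_afforded / (24 * 365)
print(f"{hours_afforded:,.0f} cloud hours ~= {years_nonstop:.1f} years non-stop")

# A desktop mostly sits idle; if it were actively computing only ~40% of the
# time, the same budget covers an equivalent amount of work for much longer.
years_at_40pct_use = years_nonstop / 0.40
print(f"~{years_at_40pct_use:.1f} years at 40% utilization")
```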

Will this remove the need to go through upgrades of our dearest work laptops? Not completely. We will still buy new laptops to work on. But it will be more like upgrading an iPhone or iPad. This thought is enough to make my heart sing.

Note: my office PC was born in 2006 and is still going strong. Following advice my mom, a computer science professor, gave me about 20 years ago, I bought the best specification possible at the time.

Thursday, March 12, 2015

The trinity of data science: the wall, the nail and the hammer

In Leo Breiman's legendary paper, "Statistical Modeling: The Two Cultures," a trend in statistical research was criticized: people, holding on to their models (methods), look for data to which to apply those models (methods). "If all you have is a hammer, everything looks like a nail," Breiman quoted.

A similar research scheme has been observed recently in data science. The availability of large unstructured data sets (i.e., Big Data) has sparked the imagination of quantitative researchers (and data-scientist wannabes) everywhere. Challenges such as the LinkedIn Economic Graph Challenge invite people to think hard about creative ways to unlock the information hidden in vast "data" that can potentially lead to novel data products. The exploratory nature of this trend, to some extent, resembles the quest of a hammer in search of nails. Only this time, it is a vast expanse of under-utilized wall in search of deserving nails. Once the most deserving nails (or hooks and other wall installations) are identified, the most appropriate hammers (or tools) will be found or crafted to install them.

In every data science project, there is a trinity of basic components: the problem to be addressed, the data to be used and the method/model for the hack.
The Trinity of Data Science

Traditional statistical modeling usually starts with the problem, assumes a certain generative mechanism (model) for a potential data source, and devises a suitable method. The hack sequence is then problem-data-methods (or: first start with the nail, then choose which wall to use, and then decide which hammer to use, considering the nature of the nail and the wall).

Data mining explores data using suitable methods to reveal interesting patterns and eventually suggests discoveries that address important scientific problems. The hack sequence is then data-methods-problem (the wall, the available hammers, and the nails).

Data scientists enter this tri-fold path at different points, depending on their career paths. Those from an applied domain have most likely entered from the problem entry point, then moved to data and then to methods. It is often very tempting to keep using the same methods when one moves from one set of data to the next. The training of these application-domain data scientists often comes with a "manual" of popular methods for their data. But data evolve, and so should the adopted methods, especially given the advancements in the methodological domain. The hammer that used to be the best for the nail/wall might no longer be the best given the current collection of hammers. It is time to upgrade.

Methodological data scientists such as statisticians enter from the methodological perspective due to their training. When looking for ways to apply or extend their methods, they should consider problems where their methods might apply and then find good data for the problem. In the process, one should never take for granted that the method can simply be applied to the problem-data duo in its original form.

Computational data scientists and engineers often start from manipulating large data sets. Their algorithms were motivated by previously studied problems of interest or models. When a similar large data set becomes available, the most interesting problems that can be answered with it might be different from the ones addressed before in another data set. One should use creative methods to identify novel patterns in such data and discover interesting problems to answer.

Looking at this trinity map of data science, it is then easy to understand that, for some, there will naturally be phases when one knows a few methods (from training, or prior hacks of data sets) and looks for (other) data sets or problems to hack; for others, there will be phases when one has a big data set and looks for problems that can be answered with it; and there will also be those who start with a problem, find or collect data, and apply existing or novel methods to the data.

These are all valid and natural "entry points" into data science. The most important thing is to remember that there are many different hammers, many different nails and many different walls. The quest of a data scientist shall always be to find the best match among the wall, the nail and the hammer, and to be willing to change, improve and create.

Wednesday, February 11, 2015

A statistical read about gender splits in teaching evaluations

A recent article on NYT's The Upshot shared a recent visualization project of teaching evaluations on Rate My Professors, 14 million of them. Reading this article after a long day in the office made me especially "emotional." It confirmed that I have not been delusional.
It suggests that people tend to think more highly of men than women in professional settings, praise men for the same things they criticize women for, and are more likely to focus on a woman’s appearance or personality and on a man’s skills and intelligence.
Actually, the study didn't find people focus on a woman's appearance as much as expected. 

The article made an important point: "The chart makes vivid unconscious biases." The 14 million reviews were not posted to intentionally paint a biased picture of female professors compared with their male colleagues. The universities didn't assign only star male professors to teach alongside mediocre female professors. Online teaching evaluations have known biases, as people who feel strongly about what they have to say are more likely to post reviews. But this selection bias cannot explain away the "gender splits" observed. They are due to "unconscious biases" towards women.

What does this term, "unconscious biases," actually suggest? It suggests that if you think you are being fair to your female colleagues, students or professors, you are probably not. If the biases are unconscious, how can we possibly assert that we do not have them? Most of those who wrote the 14 million reviews must have felt they were giving fair reviews. Therefore, statistically speaking, if 14 million reviews intended to be "fair" carried so much unconscious bias, we have to act more aggressively than just being fair to offset these unconscious biases.

Friday, January 30, 2015

Lego, sampling and bad-behaving confidence intervals

Yesterday, during the second lecture of our Introduction to Data Science course for students in non-quantitative programs, we did a sampling demo adapted from Andrew Gelman and Deborah Nolan's teaching book (a bag of tricks).

Change from candies to Legos. The original teaching recipe uses candies. A side effect is that the instructor always ends up with so many leftover candies, as the students are getting more and more health conscious. So this time, I decided to use Lego pieces. One advantage of this change is that we can skip the kitchen scale and just count the number of studs (or "points") on the Lego pieces.

Preparation. The night before, I counted out two bags of 100 Lego pieces each: population A and population B. Population A consists of about 30 large pieces and 70 tiny pieces. Population B consists of 100 similar pieces (4 studs, 6 studs and 8 studs).

In-class demo. At the beginning of the lecture, we explained to the students what they needed to do and passed one bag to half of the class and the other bag to the other half, along with data recording sheets.

Results. Before class, I asked an MA student in our program, Ke Shen, who is very good at visualization and R, to create an RShiny app for this demo, where I could quickly key in the numbers and display the confidence intervals.

Here are population A samples.
Here are population B samples. 

Conclusion. Several things we noticed from this demo:
  1. sampling Lego pieces can be pretty noisy.
  2. all samples of population A over-estimated the true population mean (the red line); samples of population B seemed to do better.
  3. population variation affects the width of the confidence intervals.
  4. even the wider confidence intervals missed the truth due to large bias.
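The over-estimation for population A is consistent with size-biased grabbing: large pieces are simply easier to pick out of a bag. Here is a sketch of the demo in Python; the stud counts assigned to population A's "large" and "tiny" pieces, the sample size, and the size-biased, with-replacement sampling mechanism are my assumptions for illustration, not the actual class data:

```python
import random
import statistics

# Population make-ups follow the post; stud counts for A's pieces are assumed.
pop_a = [8] * 30 + [1] * 70              # ~30 large, ~70 tiny pieces
pop_b = [4] * 34 + [6] * 33 + [8] * 33   # 100 similar pieces

def ci_from_sample(sample, z=1.96):
    """Naive 95% confidence interval for a population mean from one sample."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5
    return m - z * se, m + z * se

def grab(pop, n, rng, size_biased=False):
    """Draw n pieces (with replacement, for simplicity); larger pieces are
    easier to grab when size_biased is True."""
    weights = pop if size_biased else [1] * len(pop)
    return rng.choices(pop, weights=weights, k=n)

rng = random.Random(1)
true_a, true_b = statistics.mean(pop_a), statistics.mean(pop_b)

covered_a = sum(
    lo <= true_a <= hi
    for lo, hi in (ci_from_sample(grab(pop_a, 10, rng, size_biased=True))
                   for _ in range(1000)))
covered_b = sum(
    lo <= true_b <= hi
    for lo, hi in (ci_from_sample(grab(pop_b, 10, rng))
                   for _ in range(1000)))

print(f"A (size-biased grabs): CIs cover the true mean {covered_a / 10:.0f}% of the time")
print(f"B (random grabs):      CIs cover the true mean {covered_b / 10:.0f}% of the time")
```

Under these assumptions, population B's intervals cover the true mean close to the nominal 95% of the time, while population A's intervals, no matter how wide, mostly miss it, because the bias shifts the whole interval, which is exactly what points 2 and 4 describe.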