Welcome to the blogging hodgepodge for this month. I wanted to clear out my reading queue and present some of these ideas and articles in an entertaining way. The topics include modeling, significant results, and hidden variables (though perhaps not discussed in a conventional manner). As a bonus, we get career advice for scientific researchers and related discussion.
Mutant phenotypes from the Fukushima area of Japan. COURTESY: National Geographic.
Flawed Models Cannot Be Made Idealistic
"Essentially, all models are wrong, but some are useful" -- George Box. What makes for a bad model? Poor assumptions, oversimplification/vagueness, or underfitting with respect to the available data? These articles address some of these issues, with particular attention to their societal consequences.
Kirchner, L. When Big Data Becomes Bad. ProPublica, September 2 (2015).
O'Neil, C. Big Data, Disparate Impact, and the Neoliberal Mindset. Mathbabe blog, September 7 (2015).
Schuster, P. Models: From Exploration to Prediction -- Bad Reputation of Modeling in Some Disciplines Results from Nebulous Goals. Complexity, doi:10.1002/cplx.21729 (2015).
Rickert, J. How do you know if your model is going to work? Part 2: In-training set measures. R-bloggers, September 8 (2015).
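The underfitting failure mode mentioned above can be made concrete with a toy sketch (my own illustration, not drawn from any of the cited articles): a model that is too simple for the structure in the data carries a large residual error no matter how much data it sees.

```python
import numpy as np

# Toy illustration of underfitting: a straight line is fit to clearly
# quadratic data, and its residual error dwarfs that of a quadratic fit.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(scale=0.3, size=x.size)  # quadratic signal + noise

linear_fit = np.polyval(np.polyfit(x, y, deg=1), x)     # underfit model
quadratic_fit = np.polyval(np.polyfit(x, y, deg=2), x)  # matches the signal

mse_linear = np.mean((y - linear_fit) ** 2)
mse_quadratic = np.mean((y - quadratic_fit) ** 2)
print(f"linear MSE: {mse_linear:.2f}, quadratic MSE: {mse_quadratic:.2f}")
```

The point is not that more complex models are always better (overfitting is the opposite trap); it is that a model's usefulness depends on whether its assumptions match the process that generated the data.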
Once upon a time, this was a viable model of how nature worked. COURTESY: Geocentric Model, Redorbit.
The Real World is Complex, Idealized Methods Notwithstanding
The debate over replicability in Psychology (and by extension sciences that are not particle physics) rages on. This month, a shot was fired from the "Psychology is not very replicable" camp. The Open Science Collaboration published a paper in Science showing that many replications of experiments fail to reproduce the same levels of statistical significance and power as the original studies.
Critics have blamed this lack of replicability on a number of culprits, including shortcomings of the NHST approach itself. Two potential culprits I have pointed to previously are complexity and cultural context, the latter of which we will return to in a bit.
What explains these replication results? COURTESY: Figure 1, Science, 349, doi:10.1126/science.aac4716 (2015) AND Loria, TechInsider.
Open Science Collaboration. Estimating the reproducibility of psychological science. Science, doi:10.1126/science.aac4716 (2015).
Loria, K. Everything that's wrong with psychology studies in 2 simple charts. TechInsider, August 28 (2015).
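Part of the statistical-power point can be seen in a toy simulation (my own sketch, using a normal-approximation two-sample test; the effect size and sample size are illustrative, not taken from the paper): with a small true effect and modest samples, even an exact repeat of a study fails to reach p < 0.05 much of the time, before any flaws in the original work are invoked.

```python
import numpy as np

# Toy simulation: how often does an identically run two-group study
# (true effect d = 0.3, n = 50 per group) reach conventional significance?
rng = np.random.default_rng(42)
true_effect, n, trials = 0.3, 50, 2000

def one_study():
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(true_effect, 1.0, n)
    # two-sample z statistic (normal approximation to the t-test)
    z = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1)/n + b.var(ddof=1)/n)
    return z > 1.96  # crosses the two-sided alpha = 0.05 threshold

power = np.mean([one_study() for _ in range(trials)])
print(f"share of identical studies reaching significance: {power:.2f}")
```

Under these numbers only roughly a third of identically run studies come out "significant," so a failed replication by itself does not distinguish a flawed original from an underpowered design.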
The Unreasonable Effectiveness of Cultural Context*
* a play on: Wigner, E. The Unreasonable Effectiveness of Mathematics in the Natural Sciences.
Vanderbilt, T. Why Futurism Has a Cultural Blindspot. We predicted cell phones, but not women in the workplace. Nautil.us blog, September 10 (2015).
* the latest critique of futurism, this time from a sociological perspective.
* our contemporary Economic World, according to Pierre Bourdieu (as told by Leigh Wells).
Career Advice (Not Avarice):
Hossenfelder, S. How to publish your first scientific paper. Backreaction blog, September 11 (2015).
* this blog post not only provides advice on how to get started as a published researcher, but also gives advice on how to formulate research ideas and structure manuscripts that will garner the interest of editors and reviewers.
Curry, S. Peer review, preprints and the speed of science. Guardian, September 7 (2015).
* yet another article in favor of the open-science movement, in this case advocating for mechanisms (e.g. preprint servers, open peer review) that have the potential to speed up and otherwise improve the research enterprise.
McDonnell, J.J. Creating a Research Brand. Science, 349, 758 (2015).
The author uses a marketing metaphor to help improve the efficiency of a researcher's efforts. The advice boils down to the following:
* promote results, publications, and lectures all around a central theme.
* find the right breadth of research. This should be greater than a hyper-specialized topic, but narrow enough to constitute a unique niche.