May 5, 2013

The significance of influence metrics: some fun with Klout and Google Scholar


Here is the main Klout interface with my personal Klout score and its trend over time. Klout is (please pick the one that is most applicable):

a) a popularity time-series, like brain electrical activity or stock market performance, and is probably (in an indirect fashion) related to both [1].

b) based on what exactly? I know, this puzzled me initially as well. Find out more here.

c) an excellent way to parlay your Facebook playtime into professorial tenure [2]. Not really. But it would not be at all surprising to me, given the almost-reflexive reverence paid to social media by business and journalistic culture.

d) a way to determine your personal worth, if your life is all about social media. And, really, what person's life isn't these days? It's the new religion. Praise Zuckerberg [3]!

e) all of the above. Hope this raises my Klout score!

UPDATE: Since cross-posting this to Tumbld Thoughts, my Klout score has indeed increased by 8 (it is 40 as of May 5th). Looks like it is due to an increase in Facebook activity (or perhaps more targeted Facebook activity -- suggesting that the Klout score could be incredibly easy to manipulate). World domination, here I come!

Of course, this says nothing about my "official" academic research footprint. Or does it? Perhaps we can learn more about these types of metrics by looking at Google Scholar. Holly Dunsworth at the blog Mermaid's Tale has looked at what exactly constitutes her h-index measurement. In short, just because papers are cited does not mean that they are cited for the same reasons, and thus they do not carry a uniform degree of influence across citations [4].

My h-index [5] is 1 (across 27 papers -- some being Figshare documents), and is highly asymmetrical. A single Nature Reviews Neuroscience paper accounts for most of the citations taken into account. I'm actually not sure what database they are using to calculate citations (and thus influence), since it is not taking into account a number of peer-reviewed conference papers and book chapters (and blog posts, for that matter - [6]).
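For readers unfamiliar with how the h-index is actually computed, here is a minimal sketch (the function name and example citation counts are my own illustration, not taken from any particular database). It shows why a profile dominated by a single highly cited paper, like the one described above, still yields an h-index of 1:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Hypothetical profile: one well-cited paper among 26 uncited items.
print(h_index([30] + [0] * 26))  # prints 1
```

This makes the asymmetry concrete: no matter how many citations that one paper accumulates, the h-index cannot rise until a second paper crosses the two-citation threshold.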

Does this dude abide? Apparently, I've only been influencing people in a significant manner since 2011. However, the analytics engine at does things a bit differently. Does this capture additional (and useful) information?

Okay, I have not reached a definitive opinion about this as of yet. It's just food for thought, so please discuss.

[1] Here is a fun paper (for theoretical physicists, at least) on this topic from Marcus Raichle's group: He, B.J., Zempel, J.M., Snyder, A.Z., and Raichle, M.E.   The temporal structures and functional significance of scale-free brain activity. Neuron, 66(3), 353-369 (2010).

[2] a very apt April Fools' joke deftly executed by C. Titus Brown on his blog Living in an Ivory Basement.

[3] the classic "Microsoft buys the Catholic Church" internet meme is probably appropriate here.

[4] Here is another critical assessment of citation statistics: Adler, R., Ewing, J., and Taylor, P.   Citation Statistics. arXiv:0910.3529 (2009).

In addition, Audrey Watters at Hack (Higher) Education blog (hosted by Inside Higher Ed) has a post on the problems she's experienced with Google Scholar.

[5] Of course, the h-index is just one possible way to measure research output. But caveats of the h-index and then some apply to all alternative methods.

[6] This was a problem for Jonathan Eisen (Tree of Life blog) as well. At least Google tries to be accommodating in these cases.

In general, "The Secret History of Rock" by Roni Sarig might enlighten this discussion a bit. In many cases, relatively (or in some cases absolutely) obscure bands such as the Dead Kennedys have influenced much more popular (but perhaps less influential) musicians and bands. Influence networks are the mechanism that distinguishes absolute from relative influence. While the h-index does not capture this phenomenon well, the Klout score might be better at uncovering it.
