I've had a long-term interest in research on cancer in general, and cancer stem cells (CSCs) in particular. See, for example, "A stem cell model of human tumor growth: implications for tumor cell clonogenic assays", J Natl Cancer Inst. 1983 Jan;70(1):9-16 [PubMed]. I've been trying to keep up with the current literature about CSCs, and have found the task to be a challenging one.
Effective ways to filter the voluminous academic literature are badly needed. Social media have provided a possible route to this goal. I've been exploring a few such media, and especially Twitter.
I've been a member of Twitter since December 2008. I've posted over 3000 tweets since then. Almost all of them have been about either CSCs or open access (OA).
My tweets about CSCs have included the hashtag #cancerSC. I usually post about 15-25 tweets with this hashtag per month. Previous tweets can be accessed by searching within Twitter for the #cancerSC hashtag. A Google search for the same hashtag will provide access to the same archive of tweets.
As sources of information for recent news and publications about CSCs, I've used the following:
a) Google Alerts, to monitor the web for interesting new content about the keywords "cancer stem".
b) PubMed searches for "cancer stem", with the results sent via PubMed RSS to the RSS reader Feedly. My main focus is on articles published within the last month. PubMed is my main source of relevant information (a scriptable version of this search is sketched just after this list).
c) Other contributors to Twitter, such as @cancerscnews.
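For those who'd like to script source (b) rather than rely on an RSS reader, essentially the same search can be run against PubMed's E-utilities. The sketch below is only an illustration of that idea, not my actual workflow; the "cancer stem" term and the 30-day window are assumptions chosen to mirror the feed described above.

```python
import json
import urllib.parse
import urllib.request

# Minimal sketch: ask PubMed's E-utilities (esearch) for records
# matching "cancer stem" that entered PubMed in the last 30 days.
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": '"cancer stem"',  # same keywords as the RSS feed
    "reldate": 30,            # restrict to roughly the last month
    "datetype": "edat",       # date the record entered PubMed
    "retmax": 100,
    "retmode": "json",
})

with urllib.request.urlopen(f"{BASE}?{params}") as response:
    result = json.load(response)["esearchresult"]

print(result["count"], "recent PubMed records match")
for pmid in result["idlist"]:
    print("https://www.ncbi.nlm.nih.gov/pubmed/" + pmid)
```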
These sources (especially PubMed) provide a cornucopia of information about what's new in CSC research and development. My major challenge has been an editorial one: which aspects of all this information should be selected and tweeted about?
Screening Step 1: A useful screening tool has been the Altmetric Bookmarklet. At present, this Bookmarklet only works on PubMed, the arXiv repository, or pages containing a digital object identifier (DOI).
Using the bookmarklet, I screen the results sent by the PubMed RSS, and note which articles have non-zero article-level metrics. Whether or not Altmetric has picked up sharing activity around an article, I proceed to Screening Step 2. (For those not familiar with Altmetric.com: it's a start-up that attempts to assess article-level metrics, or "altmetrics".)
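The bookmarklet is a point-and-click tool, but Altmetric also offers a free public API that returns the same attention data for a given DOI, so this first screen could in principle be automated. A minimal sketch, assuming the free v1 endpoint (which answers HTTP 404 when no attention has been tracked); the DOI shown is only a placeholder:

```python
import json
import urllib.error
import urllib.request

def altmetric_score(doi):
    """Return the Altmetric attention score for a DOI, or 0 if
    Altmetric has tracked no activity (the API answers 404)."""
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url) as response:
            return json.load(response).get("score", 0)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # no attention recorded for this DOI
            return 0
        raise

doi = "10.1000/example.doi"  # placeholder DOI for illustration
# A non-zero score marks an article worth carrying forward.
print(f"{doi}: Altmetric score {altmetric_score(doi)}")
```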
Screening Step 2: The next screening step is to put the title of each article into Topsy, which allows one to search for tweets that have included this title. If a search using Topsy reveals at least two tweets about the article, I go on to Screening Step 3. Sometimes, secondary sources (such as 7thSpace Interactive) are identified in multiple tweets. If so, I again go on to Screening Step 3.
Screening Step 3: I'm a supporter of the Open Access movement, so I next check whether or not the article is freely accessible (no paywalls). If there are no paywalls, I prepare a tweet about the article. If I do run into a paywall, I prepare a tweet only if the Altmetric details, the Topsy results, or my own reading of the article yields a very positive impression. In that case, I indicate in the tweet that the article is "not OA".
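I do this paywall check by eye, but a rough automated proxy is possible: NCBI's ELink utility can report whether a PubMed record has a PubMed Central version, which guarantees free full text. A hedged sketch of that idea follows; it would miss articles that are free on a journal's own site but absent from PMC, and the PMID is only a placeholder.

```python
import json
import urllib.parse
import urllib.request

ELINK = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/elink.fcgi"

def in_pmc(pmid):
    """Rough proxy for Screening Step 3: True if ELink finds a
    PubMed Central version of this record (free full text)."""
    params = urllib.parse.urlencode({
        "dbfrom": "pubmed",
        "db": "pmc",
        "id": pmid,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{ELINK}?{params}") as response:
        linksets = json.load(response)["linksets"]
    # "linksetdbs" appears only when at least one PMC link exists.
    return any("linksetdbs" in ls for ls in linksets)

pmid = "12345678"  # placeholder PMID for illustration
suffix = "" if in_pmc(pmid) else " (not OA)"
print("draft tweet text here" + suffix)
```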
The target audience for my tweets is anyone interested in current research on CSCs, not only those active in research on CSCs. Hence the somewhat higher priority given to articles that have no paywalls.
Of course, there's no way to avoid some subjectivity in an editorial process of this kind. So, I occasionally ignore the results of the screening process and tweet about articles that I especially liked. And, no doubt, some interesting articles will be missed. The greater the sensitivity and specificity of the screening process, the more likely it is that all of the relevant articles will be found and the irrelevant ones rejected.
For an example of a positive view about tweets, see: Can tweets predict citations? Metrics of social impact based on twitter and correlation with traditional metrics of scientific impact by Gunther Eysenbach (2011).
Examples of positive views about altmetrics are: Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact by Jason Priem, Heather Piwowar & Bradley Hemminger (2012); Value all research products by Heather Piwowar (2013).
I'm aware of criticisms of a screening process which relies heavily on altmetrics and tweets. For examples of such criticisms, see: Twitter buzz about papers does not mean citations later by Richard Van Noorden (2013); Why you should ignore altmetrics and other bibliometric nightmares by David Colquhoun & Andrew Plested (2014); Article-level metrics: An ill-conceived and meretricious idea by Jeffrey Beall (2013).
My own view is that tweets and altmetrics merit further exploration as indicators of "attention". Of course, one needs to watch out for "gaming" (see: Gaming altmetrics). However, my own examination of tweets and altmetrics related to CSCs has yielded no unequivocal evidence of "gaming". Instead, the tweets I've seen (coverage by altmetrics other than Twitter seems to be low) appear to be the result of authentic-looking attention from real people.
I do not believe that Impact Factors should be regarded as the unquestioned gold standard for indicators used to assess impact (see, for example, Impact Factors: A Broken System by Carly Strasser, 2013). Of course, the gold standard for oneself is one's own opinion upon reading a publication. But no one can read everything.
An article, How to tame the flood of literature by Elizabeth Gibney in Nature (03 September 2014), provides comments about emerging literature-recommendation engines. I haven't yet used any of these, but they clearly merit attention.
I'd be very grateful for any suggestions about ways to improve the efficiency, sensitivity and specificity of a screening process of the kind outlined in this post.
Update: See What Jeffrey Beall gets wrong about altmetrics by Stacy Konkiel and Jason Priem, September 9, 2014.
Update: When the article that's the basis for a tweet is behind a paywall, I now replace "(not OA)" in the tweet with "($)", which uses fewer characters.
Update: About Screening Step 2: I now put the title of an article into Topsy only if that article has a non-zero Altmetric score. My experience to date has been that it's very rare for articles with an Altmetric score of zero to yield any tweets on Topsy.
Update: A shortened link to this post (Tweets about cancer stem cells): http://bit.ly/1xmScvh
Update: Some users of Twitter focus their attention on the literature related to a particular topic. One example is Hypoxia Adaptation, "A feed for hypoxia related papers published in NCBI, ArXiv, bioArxiv, and PeerJ". Another is epigenetics_papers, "Chromatin & epigenetics paper feed from #Pubmed and #Arxiv". It's unclear what criteria (other than the topic of interest) are used as the basis for tweets from these users. So, I'm currently discounting such tweets, relative to tweets that do not originate from automated feeds such as these.
Update: The correct link to epigenetics_papers on Twitter is https://twitter.com/epigen_papers.
Update: Screening Step 2 has been modified. As of 16 December 2015, Topsy is no longer available (see: Apple shuts down Twitter analytics service Topsy). To replace Topsy in this screening step, I'm simply using Twitter Search, and then selecting the "Live" option.
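The "Live" view can also be reached directly by URL, which makes it easy to build the search from an article title. A small sketch, assuming that Twitter's f=live query parameter is what selects the Live tab (an undocumented detail that could change):

```python
import urllib.parse

def live_search_url(title):
    """Build a Twitter search URL for an exact article title,
    restricted to the "Live" view of the results."""
    query = urllib.parse.quote(f'"{title}"')
    return f"https://twitter.com/search?q={query}&f=live"

print(live_search_url(
    "A Tumor Suppressor Function for Notch Signaling "
    "in Forebrain Tumor Subtypes"))
```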
For example, a search for the title (within quotation marks) "A Tumor Suppressor Function for Notch Signaling in Forebrain Tumor Subtypes" yields more than 35 tweets. An unusual feature of these tweets is that many of them seem to be associated in some way with AlexisSfakianakis (Αλέξανδρος Σφακιανάκης), an ENT physician in Greece. This is a situation where I've discounted many of the tweets.
Another example, where little discounting was necessary, is "Barcoding reveals complex clonal dynamics of de novo transformed human mammary cells", an article that has attracted attention.