Sunday, December 31, 2017

Tweets about cancer stem cells v2

This is Version 2 of a previous post dated September 5, 2014.

I've had a long-term interest in research on cancer in general, and cancer stem cells (CSCs) in particular. See, for example, "A stem cell model of human tumor growth: implications for tumor cell clonogenic assays", J Natl Cancer Inst. 1983 Jan;70(1):9-16 [PubMed]. I've been trying to keep up with the current literature about CSCs, and have found the task to be a challenging one.

Effective ways to filter the voluminous academic literature are badly needed. Social media have provided a possible route to this goal. I've been exploring a few such media, and especially Twitter.

I've been a member of Twitter since December 2008. I've posted over 4,500 tweets since then. Almost all of them have been about either CSCs or open access (OA).

My tweets about CSCs have included the hashtag #cancerSC. I usually post about 5-10 tweets with this hashtag per month. Previous tweets can be accessed by searching within Twitter for the #cancerSC hashtag.

As sources of information for recent news and publications about CSCs, I've used the following:

a) PubMed searches for "cancer stem", with the results sent via PubMed RSS to the RSS reader Feedly. My main focus is on articles published within the last month. PubMed is my main source of relevant information. (A sketch of this kind of query in script form appears just after this list.)

b) Google Alerts, to monitor the web for interesting new content about the keywords "cancer stem".

c) Occasionally, other contributors to Twitter.
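
For readers who prefer a script to an RSS reader, here's a minimal sketch of the PubMed query in Python, using NCBI's E-utilities esearch endpoint. The search term and 30-day window mirror my own setup; everything else (function name, the retmax limit) is illustrative.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def recent_pubmed_ids(term="cancer stem", days=30, retmax=100):
    """Return PubMed IDs for articles matching `term` from the last `days` days."""
    params = {
        "db": "pubmed",
        "term": term,
        "reldate": days,     # look back this many days
        "datetype": "pdat",  # filter by publication date
        "retmax": retmax,
        "retmode": "json",
    }
    data = requests.get(EUTILS, params=params).json()
    return data["esearchresult"]["idlist"]

print(recent_pubmed_ids()[:10])  # the ten most recent matches
```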

These sources (especially PubMed) provide a cornucopia of information about what's new in CSC research and development. My major challenge has been an editorial one: which aspects of all this information should be selected and tweeted about?

Screening Step 1: A useful screening tool has been the Altmetric Bookmarklet. At present, this Bookmarklet only works on PubMed, the arXiv repository, or pages containing a digital object identifier (DOI). Twitter mentions (noted by Altmetric) are only available for articles published since July 2011.

Using the bookmarklet, I screen the results sent by the PubMed RSS, and select for further examination those articles that have non-zero article-level metrics. If Altmetric has picked up sharing activity around an article, I proceed to Screening Step 2. (For anyone not familiar with Altmetric.com, it's a site that provides assessments of article-level metrics, or altmetrics. The Altmetric score is now called the Altmetric Attention Score.)
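
This screen can also be done programmatically: Altmetric offers a public v1 API alongside the bookmarklet, including a PMID-based endpoint. A hedged sketch follows; the 404-means-no-attention behaviour and the "score" field are my reading of that API, and the PMID list is a placeholder.

```python
import requests

def altmetric_score(pmid):
    """Return the Altmetric Attention Score for a PubMed ID, or 0 if none recorded."""
    r = requests.get(f"https://api.altmetric.com/v1/pmid/{pmid}")
    if r.status_code == 404:  # Altmetric has no attention data for this article
        return 0
    r.raise_for_status()
    return r.json().get("score", 0)

pmids = ["12345678"]  # placeholder: IDs taken from the PubMed RSS results
step2_candidates = [p for p in pmids if altmetric_score(p) > 0]
```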

Screening Step 2: The next screening step is to subject the title of each article to a Twitter Search, which allows one to find tweets that have included this title. If such a search reveals at least two tweets about the article, I go on to Screening Step 3. I currently do a Twitter Search only if the article has a non-zero Altmetric score; my experience has been that it's extremely rare for articles with an Altmetric score of zero to yield any tweets in a Twitter Search.
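
In script form, this step might look like the sketch below, using Twitter's v1.1 search endpoint (which, as I understand it, only covers roughly the last week of tweets, so it suits recently published articles). The bearer token and the article title are placeholders; in practice I simply use the Twitter Search page.

```python
import requests

BEARER_TOKEN = "..."  # placeholder: an application-only auth token

def tweet_count(title):
    """Count recent tweets quoting an article title (v1.1 search covers ~7 days)."""
    r = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": f'"{title}"', "count": 100},
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    )
    r.raise_for_status()
    return len(r.json()["statuses"])

if tweet_count("Some candidate article title") >= 2:  # hypothetical title
    print("proceed to Screening Step 3")
```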

Screening Step 3: I'm a supporter of open access, so I next check whether the article is freely accessible (no paywalls). If there are no paywalls, I prepare a tweet about the article. If I do run into a paywall, I prepare a tweet only if the Altmetric Attention Score, the results of a Twitter Search, or my own reading of the article yields a very positive impression. In that case, I indicate in the tweet that the article is not OA by putting ($) after its title.
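
One could automate the paywall check too, for example with the Unpaywall API. In this sketch, the is_oa field and the required email parameter reflect that service's documentation as I understand it, and the title, DOI, and email address are placeholders.

```python
import requests

def is_open_access(doi, email="reader@example.org"):  # Unpaywall requires an email
    """Ask Unpaywall whether a free-to-read copy of the article is known."""
    r = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": email})
    r.raise_for_status()
    return r.json().get("is_oa", False)

title = "Some candidate article"   # hypothetical
doi = "10.1234/example"            # hypothetical
marker = "" if is_open_access(doi) else " ($)"  # flag paywalled articles
print(f"{title}{marker} #cancerSC")
```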

Some users of Twitter focus their attention on the literature related to a particular topic. One example is Hypoxia Adaptation, "A feed for hypoxia related papers published in NCBI, ArXiv, bioArxiv, and PeerJ". Another is epigenetics_papers, "Chromatin & epigenetics paper feed from #Pubmed and #Arxiv". It's unclear what criteria (other than the topic of interest) these users apply when tweeting. So, I currently give such tweets less weight than tweets that do not originate from automated feeds of this kind.

The target audience for my tweets is anyone interested in current research on CSCs; the tweets are not aimed only at those active in research on CSCs. Hence the somewhat higher priority given to articles that have no paywalls. It should be noted that only a very small percentage of articles (less than 5%) reach Screening Step 3.

Of course, there's no way to avoid some subjectivity in an editorial process of this kind. So, I occasionally ignore the results of the screening process and tweet about articles that I especially like. And, no doubt, some interesting articles will be missed. The greater the sensitivity and specificity of the screening process, the more likely it is that all of the relevant articles will be found and the irrelevant ones rejected.

For an example of a positive view about tweets, see: Can tweets predict citations? Metrics of social impact based on twitter and correlation with traditional metrics of scientific impact by Gunther Eysenbach (2011).

Examples of positive views about altmetrics are: Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact by Jason Priem, Heather Piwowar & Bradley Hemminger (2012) and Value all research products by Heather Piwowar (2013).

I'm aware of criticisms of a screening process which relies heavily on altmetrics and tweets. For examples of such criticisms, see: Twitter buzz about papers does not mean citations later by Richard Van Noorden (2013), Why you should ignore altmetrics and other bibliometric nightmares by David Colquhoun & Andrew Plested (2014) and Weaknesses of Altmetrics (undated, and authors not identified).

My own view is that tweets and altmetrics merit further exploration as indicators of "attention". Of course, one needs to watch out for "gaming" (see: Gaming altmetrics). However, my own examination of tweets and altmetrics related to CSCs has yielded little evidence of gaming. The tweets I've seen (note that coverage of all the altmetrics except Twitter seems to be low) almost always appear to be the result of authentic attention from real people. On the rare occasions when I have seen evidence of gaming, the articles involved haven't survived the screening procedure.

I do not believe that Impact Factors should be regarded as the unquestioned gold standard for indicators used to assess impact (see, for example, Impact Factors: A Broken System by Carly Strasser, 2013). Of course, the gold standard for oneself is one's own opinion upon reading a publication. But, no one can read everything.

An article, How to tame the flood of literature by Elizabeth Gibney in Nature (03 September 2014), provides comments about emerging literature-recommendation engines. I haven't yet tested all of these, but they do clearly merit attention.

I'd be very grateful for any suggestions about ways to improve the efficiency, sensitivity and specificity of a screening process of the kind outlined in this post.