Monday, December 31, 2018

A Decade on Twitter

I joined Twitter in August of 2008. My emphasis has been mainly on #cancerSC and #OpenAccess. For several years, I've used Twitter, rather than this blog, as a means for providing information about selected recent research on cancer stem cells. How much attention has been paid to these tweets?

I currently have 965 followers on Twitter. The recently acquired followers appear to be either scientists, or people who have no obvious interest in either #cancerSC or #OpenAccess. Is 965 a satisfactory number of followers? Neither of the main foci of my Twitter posts can be regarded as a mainstream topic, so my modest target has been 1000 followers. The total is almost there (after a decade!). A substantial proportion of those followers who identify themselves as scientists mention an interest in stem cells and/or regenerative medicine. Fewer reveal any interest in Open Access. I cannot provide any useful quantitative information about the individual interests of all 965 followers, because of major uncertainties about how best to classify their interests.

Perhaps the attention paid to individual tweets might provide a quantitative assessment of their impact? If so, the results are sobering. Over the past year, no tweets hashtagged #cancerSC have earned "retweets" or "likes" beyond the single digits. (A few tweets hashtagged #OpenAccess have accumulated retweets and likes beyond the single digits, but the most popular ones have been ones that I retweeted, rather than ones that I posted myself.)

How to account for these rather unimpressive statistics? Several explanations come to mind.

One is that many people who become followers on Twitter don't actually look at many tweets. Also, perhaps those who do look seldom retweet them (or indicate that they like them).

Another explanation for the apparent lack of attention is that most of the tweets are indeed uninteresting (or far too specialized) except perhaps to a very few who do find them useful.

There are other possible explanations, but perhaps it's time to ask another question. If the individual tweets have little impact, as measured by retweets and likes, why post them?

I have two main answers to this question. Firstly, these posts provide an incentive for me to keep up with current publications about cancer stem cells. The literature on all of stem cell research is too extensive for one person to manage, but a subset of particular interest isn't. Secondly, the total number of followers (965) isn't, to my mind, a trivial number (see also above).

One last question. Are most of the followers still there because they haven't bothered to unfollow, or because they occasionally find some tweets to be of interest, or because they are interested in the author of the tweets? I have no relevant data, so I don't know.

Best wishes for 2019!

Sunday, December 31, 2017

Tweets about cancer stem cells v2

This is Version 2 of a previous post dated September 5, 2014.

I've had a long-term interest in research on cancer in general, and cancer stem cells (CSCs) in particular. See, for example, "A stem cell model of human tumor growth: implications for tumor cell clonogenic assays", J Natl Cancer Inst. 1983 Jan;70(1):9-16 [PubMed]. I've been trying to keep up with the current literature about CSCs, and have found the task to be a challenging one.

Effective ways to filter the voluminous academic literature are badly needed. Social media have provided a possible route to this goal. I've been exploring a few such media, and especially Twitter.

I've been a member of Twitter since December 2008. I've posted over 4,500 tweets since then. Almost all of them have been about either CSCs or open access (OA).

My tweets about CSCs have included the hashtag #cancerSC. I usually post about 5-10 tweets with this hashtag per month. Previous tweets can be accessed by searching within Twitter for the #cancerSC hashtag.

As sources of information for recent news and publications about CSCs, I've used the following:

a) PubMed searches for "cancer stem", with the results sent via PubMed RSS to the RSS reader Feedly. My main focus is on articles published within the last month. PubMed is my main source of relevant information. (A sketch of a comparable scripted query appears after this list.)

b) Google Alerts, to monitor the web for interesting new content about the keywords "cancer stem".

c) Occasionally, other contributors to Twitter.
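For readers who'd rather script such a query than rely on an RSS reader, a roughly equivalent search can be run against NCBI's public E-utilities API. This is a minimal sketch mirroring the workflow above (the "cancer stem" term and 30-day window come from this post; the script itself is an illustration, not the setup I actually use):

```python
# Sketch: replicate a monthly PubMed "cancer stem" query via NCBI E-utilities.
# Illustrative only; the workflow described above uses PubMed RSS read in Feedly.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def recent_pubmed_ids(term="cancer stem", days=30, retmax=100):
    """Return PubMed IDs for articles matching `term` published in the last `days` days."""
    resp = requests.get(f"{EUTILS}/esearch.fcgi", params={
        "db": "pubmed",
        "term": term,
        "reldate": days,      # restrict to the last N days...
        "datetype": "pdat",   # ...by publication date
        "retmax": retmax,
        "retmode": "json",
    })
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    for pmid in recent_pubmed_ids():
        print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")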

These sources (especially PubMed) provide a cornucopia of information about what's new in stem cell research and development. My major challenge has been an editorial one: which aspects of all this information should be selected and tweeted about?

Screening Step 1: A useful screening tool has been the Altmetric Bookmarklet. At present, this Bookmarklet only works on PubMed, the arXiv repository, or pages containing a digital object identifier (DOI). Twitter mentions (noted by Altmetric) are only available for articles published since July 2011.

Using the bookmarklet, I screen the results sent by the PubMed RSS, and select for further examination those articles that have non-zero article-level metrics. If Altmetric has picked up sharing activity around an article, I proceed to Screening Step 2. (For anyone not familiar with Altmetric.com, it's a site that provides assessments of article-level metrics, or altmetrics; the Altmetric score is now called the Altmetric Attention Score.)
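The bookmarklet is a manual tool, but Altmetric also exposes a free public API that reports the same attention data, so this screening step can be automated. A minimal sketch, assuming the v1 endpoint and its JSON field names (heavier use requires an API key):

```python
# Sketch: Screening Step 1 via the Altmetric public API instead of the bookmarklet.
# Assumes the free v1 endpoint (api.altmetric.com/v1/doi/<DOI>); field names such
# as "score" and "cited_by_tweeters_count" are taken from that API's JSON output.
import requests

def altmetric_attention(doi):
    """Return (attention_score, tweeter_count) for a DOI, or None if Altmetric
    has recorded no activity (the API answers 404 in that case)."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:
        return None  # no recorded attention: fails Screening Step 1
    resp.raise_for_status()
    data = resp.json()
    return data.get("score", 0), data.get("cited_by_tweeters_count", 0)

if __name__ == "__main__":
    print(altmetric_attention("10.1000/xyz123"))  # placeholder DOI
```

An article passes this step if the call returns a non-None result with a non-zero score.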

Screening Step 2: The next screening step is to subject the title of each article to a Twitter Search, which allows one to search for tweets that have included this title. If such a search reveals at least two tweets about the article, I go to Screening Step 3. I currently do a Twitter Search only if the article has a non-zero Altmetric score. My experience has been that it's extremely rare for articles with an Altmetric score of zero to yield any tweets, as assessed by a Twitter Search.
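This step could in principle also be scripted against Twitter's search API, though that requires registered application credentials, and the standard search only covers roughly the most recent week of tweets. The sketch below assumes the standard v1.1 search endpoint with app-only authentication; the bearer token is a placeholder:

```python
# Sketch: Screening Step 2 -- count tweets quoting an article title.
# Assumes Twitter's standard v1.1 search endpoint and app-only ("bearer token")
# authentication; BEARER_TOKEN is a placeholder for real application credentials.
import requests

BEARER_TOKEN = "YOUR-APP-BEARER-TOKEN"  # placeholder; obtained via a Twitter app

def tweet_count_for_title(title, max_results=20):
    """Return how many recent tweets include the article title verbatim."""
    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        params={"q": f'"{title}"', "count": max_results, "result_type": "recent"},
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    )
    resp.raise_for_status()
    return len(resp.json().get("statuses", []))

# An article passes Screening Step 2 if at least two tweets mention it:
# passes = tweet_count_for_title("Some article title") >= 2
```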

Screening Step 3: I'm a supporter of Open Access. So, I next check whether or not the article is freely accessible (no paywalls). If there are no paywalls, I prepare a tweet about the article. If I do run into a paywall, I only prepare a tweet if the Altmetric Attention Score, the results from a Twitter Search, or my own reading of the article yields a very positive impression. I indicate in the tweet that the article is not OA by putting ($) after the title of the article.
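The paywall check can also be partly automated. One option (a suggestion, not part of the workflow described above, which is a manual check) is the Unpaywall REST API, which reports whether a legal free-to-read copy of a DOI is known:

```python
# Sketch: Screening Step 3 -- check OA status of a DOI via the Unpaywall API.
# A suggested automation, not the method described in this post (a manual check).
# Unpaywall asks callers to identify themselves with a contact email address.
import requests

EMAIL = "you@example.com"  # placeholder; Unpaywall requires a contact email

def is_open_access(doi):
    """Return True if Unpaywall knows of a free-to-read copy of the article."""
    resp = requests.get(f"https://api.unpaywall.org/v2/{doi}", params={"email": EMAIL})
    resp.raise_for_status()
    return bool(resp.json().get("is_oa"))

def tweet_title(title, doi):
    """Append ($) to the title for paywalled articles, as described above."""
    return title if is_open_access(doi) else f"{title} ($)"
```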

Some users of Twitter focus their attention on the literature related to a particular topic. One example is Hypoxia Adaptation, "A feed for hypoxia related papers published in NCBI, ArXiv, bioArxiv, and PeerJ". Another is epigenetics_papers, "Chromatin & epigenetics paper feed from #Pubmed and #Arxiv". It's unclear what criteria (other than the topic of interest) are used as the basis for tweets from these users. So, I'm currently discounting such tweets, in comparison with others that do not originate from feeds such as these.

My tweets are aimed at anyone interested in current research on CSCs, not only at those active in research on CSCs. Hence the somewhat higher priority given to articles that have no paywalls. It should be noted that only a very small percentage of articles (less than 5%) reach Screening Step 3.

Of course, there's no way to avoid some subjectivity in an editorial process of this kind. So, I occasionally ignore the results of the screening process and tweet about articles that I especially liked. And, no doubt, some interesting articles will be missed. The greater the sensitivity and specificity of the screening process, the more likely it is that the relevant articles will be found and the irrelevant ones rejected.

For an example of a positive view about tweets, see: Can tweets predict citations? Metrics of social impact based on twitter and correlation with traditional metrics of scientific impact by Gunther Eysenbach (2011).

Examples of positive views about altmetrics are: Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact by Jason Priem, Heather Piwowar & Bradley Hemminger (2012) and Value all research products by Heather Piwowar (2013).

I'm aware of criticisms of a screening process which relies heavily on altmetrics and tweets. For examples of such criticisms, see: Twitter buzz about papers does not mean citations later by Richard Van Noorden (2013), Why you should ignore altmetrics and other bibliometric nightmares by David Colquhoun & Andrew Plested (2014) and Weaknesses of Altmetrics (undated, and authors not identified).

My own view is that tweets and altmetrics merit further exploration, as indicators of "attention". Of course, one needs to watch out for "gaming" (see: Gaming altmetrics). However, my own examination of tweets and altmetrics related to CSCs has yielded little evidence of gaming. Instead, the tweets I've seen (note that the coverage of all the altmetrics except for Twitter seems to be low) almost always appear to be the result of authentic-looking attention from real people. Occasionally, I've seen some evidence of gaming, but such articles haven't survived the screening procedure.

I do not believe that Impact Factors should be regarded as the unquestioned gold standard for indicators used to assess impact (see, for example, Impact Factors: A Broken System by Carly Strasser, 2013). Of course, the gold standard for oneself is one's own opinion upon reading a publication. But, no one can read everything.

An article, How to tame the flood of literature by Elizabeth Gibney in Nature (03 September 2014), provides comments about emerging literature-recommendation engines. I haven't yet tested all of these, but they do clearly merit attention.

I'd be very grateful for any suggestions about ways to improve the efficiency, sensitivity and specificity of a screening process of the kind outlined in this post.

Thursday, February 4, 2016

Canadian cancer stem cell Dream Team announced

A headline in today's Globe and Mail: "Canadian ‘dream team’ to probe stem-cell link to brain cancer". The article, by Ivan Semeniuk (@ivansemeniuk), is available online here.

Monday, August 31, 2015

The #cancerSC hashtag on Twitter

From the Editor: I switched some time ago to the use of Twitter, instead of this blog, as a place to post items about selected recent news or research reports related to cancer stem cells. See: https://twitter.com/hashtag/cancerSC

For a summary of the methods used to select items to be identified using the #cancerSC hashtag, see: Tweets about cancer stem cells v2.

Wednesday, October 22, 2014

SU2C Canada Cancer Stem Cell Dream Team Research Funding


Description (see: http://www.aacrcanada.ca/pages/stemcell.aspx)

The Stand Up To Cancer (SU2C) Canada Cancer Stem Cell Dream Team Research Funding represents a new, focused effort to implement advances in Cancer Stem Cell research as rapidly as possible through the creation of a collaborative, translational cancer research "Dream Team." The most talented and promising researchers across Canadian institutions will be assembled into a pan-Canadian Dream Team, forming an optimal configuration of expertise needed to solve key problems in Cancer Stem Cells and positively impact patients in the near future. This Dream Team will span multiple disciplines and utilize the new tools of modern biology, with an emphasis on genomics, to attack research questions in a coordinated way.

The Dream Team will have mechanisms for sharing of resources and platforms (knowledge, talent, tools, technologies, etc.) across the Team including existing platforms and resources as well as those to be developed, incorporating new methods and technologies into the research groups, and training and networking across the Dream Team. Mechanisms to foster collaborations within and among the Dream Teams will be employed, an approach that promotes the sharing of information and a goal-oriented focus on measurable milestones of progress.
There are currently $10.6 million CAD available for this SU2C Canada Cancer Stem Cell Dream Team, with funds from the CSCC (through Genome Canada and CIHR) and SU2C Canada. The SU2C Canada Cancer Stem Cell Dream Team will be funded for a four-year term.
In addition to the funds available from the CSCC (through Genome Canada and CIHR) and SU2C Canada, the Ontario Institute for Cancer Research (OICR) has made available up to $3 million of supplemental funds to support clinical trial activities should the successful proposal include clinical trial activities within the province of Ontario (please see "SU2C Canada-OICR Cancer Clinical Trials: Canadian Dream Team Supplementary Funding" document which can be downloaded from proposalCENTRAL). It should be noted that clinical trial activities in any region of the country can be supported by the $10.6 million available from Genome Canada, CIHR, and SU2C Canada. Continued efforts to partner and collaborate with other organizations to support the objectives of the SU2C Canada Cancer Stem Cell Dream Team are ongoing. As other partnerships are confirmed the relevant information will be communicated to potential applicants. Applicants are also encouraged to explore potential partnering and collaboration opportunities.

The SU2C website is at: http://www.standup2cancer.org/

SU2C on Twitter is at: https://twitter.com/SU2C

Tuesday, October 14, 2014

Stand Up To Cancer Canada Dream Teams

A tweet posted today links to a press release entitled: "SU2C Canada Announces Up To $22.6 Million CAD Available for Stand Up To Cancer Canada Dream Teams". Two excerpts:
Two Dream Team funding opportunities are available, one for translational research focused on breast cancer and the other on cancer stem cells:
.......
The Stand Up To Cancer Canada Cancer Stem Cell Dream Team will provide approximately $10.6 million CAD over a four-year term, with funds from the CSCC (through Genome Canada and CIHR) and SU2C Canada. The Call for Ideas seeks to support a single, integrated, and cohesive pan-Canadian team bringing together key stakeholders – researchers, clinicians, industry, nongovernmental organizations, and funders – with the goal of improving the outcomes of hard-to-treat cancers by focusing on the role of cancer stem cells and stem cell programs on resistance and treatment failure in cancer. The team will employ new tools of modern biology, with an emphasis on genomics.
• Additionally, the two qualifying Dream Teams may each receive supplementary funds up to $3 million over four years from OICR, to support clinical trial activities in the province of Ontario.

Friday, September 5, 2014

Tweets about cancer stem cells

I've had a long-term interest in research on cancer in general, and cancer stem cells (CSCs) in particular. See, for example, "A stem cell model of human tumor growth: implications for tumor cell clonogenic assays", J Natl Cancer Inst. 1983 Jan;70(1):9-16 [PubMed]. I've been trying to keep up with the current literature about CSCs, and have found the task to be a challenging one.

Effective ways to filter the voluminous academic literature are badly needed. Social media have provided a possible route to this goal. I've been exploring a few such media, and especially Twitter.

I've been a member of Twitter since December 2008. I've posted over 3000 tweets since then. Almost all of them have been about either CSCs or open access (OA).

My tweets about CSCs have included the hashtag #cancerSC. I usually post about 15-25 tweets with this hashtag per month. Previous tweets can be accessed by searching within Twitter for the #cancerSC hashtag. A Google search for the same hashtag will provide access to the same archive of tweets.

As sources of information for recent news and publications about CSCs, I've used the following:

a) Google Alerts, to monitor the web for interesting new content about the keywords "cancer stem".

b) PubMed searches for "cancer stem", with the results sent via PubMed RSS to the RSS reader Feedly. My main focus is on articles published within the last month. PubMed is my main source of relevant information.

c) Other contributors to Twitter, such as @cancerscnews.

These sources (especially PubMed) provide a cornucopia of information about what's new in stem cell research and development. My major challenge has been an editorial one: which aspects of all this information should be selected and tweeted about?

Screening Step 1: A useful screening tool has been the Altmetric Bookmarklet. At present, this Bookmarklet only works on PubMed, the arXiv repository, or pages containing a digital object identifier (DOI).

Using the bookmarklet, I screen the results sent by the PubMed RSS, and note which articles have non-zero article-level metrics. Whether or not Altmetric has picked up sharing activity around an article, I proceed to Screening Step 2. (For those not familiar with Altmetric.com, it's a start-up that attempts to assess article-level metrics, or altmetrics.)

Screening Step 2: The next screening step is to put the title of each article into Topsy, which allows one to search for tweets that have included this title. If a search using Topsy reveals at least two tweets about the article, I go to Screening Step 3. Sometimes, secondary sources (such as 7thSpace Interactive) are identified in multiple tweets. If so, I again go to Screening Step 3.

Screening Step 3: I'm a supporter of the Open Access movement. So, I next check whether or not the article is freely accessible (no paywalls). If there are no paywalls, I prepare a tweet about the article. If I do run into a paywall, I only prepare a tweet if the Altmetric data, the results from Topsy, or my own reading of the article yields a very positive impression. I indicate in the tweet that the article is "not OA".

My tweets are aimed at anyone interested in current research on CSCs, not only at those active in research on CSCs. Hence the somewhat higher priority given to articles that have no paywalls.

Of course, there's no way to avoid some subjectivity in an editorial process of this kind. So, I occasionally ignore the results of the screening process and tweet about articles that I especially liked. And, no doubt, some interesting articles will be missed. The greater the sensitivity and specificity of the screening process, the more likely it is that the relevant articles will be found and the irrelevant ones rejected.

For an example of a positive view about tweets, see: Can tweets predict citations? Metrics of social impact based on twitter and correlation with traditional metrics of scientific impact by Gunther Eysenbach (2011).

Examples of positive views about altmetrics are: Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact by Jason Priem, Heather Piwowar & Bradley Hemminger (2012); Value all research products by Heather Piwowar (2013).

I'm aware of criticisms of a screening process which relies heavily on altmetrics and tweets. For examples of such criticisms, see: Twitter buzz about papers does not mean citations later by Richard Van Noorden (2013); Why you should ignore altmetrics and other bibliometric nightmares by David Colquhoun & Andrew Plested (2014); Article-level metrics: An ill-conceived and meretricious idea by Jeffrey Beall (2013).

My own view is that tweets and altmetrics merit further exploration, as indicators of "attention". Of course, one needs to watch out for "gaming" (see: Gaming altmetrics). However, my own examination of tweets and altmetrics related to CSCs has yielded no unequivocal evidence of "gaming". Instead, the tweets I've seen (the coverage of all the altmetrics except for Twitter seems to be low) appear to be the result of authentic-looking attention from real people.

I do not believe that Impact Factors should be regarded as the unquestioned gold standard for indicators used to assess impact (see, for example, Impact Factors: A Broken System by Carly Strasser, 2013). Of course, the gold standard for oneself is one's own opinion upon reading a publication. But, no one can read everything.

An article, How to tame the flood of literature by Elizabeth Gibney in Nature (03 September 2014), provides comments about emerging literature-recommendation engines. I haven't yet used any of these, but they clearly merit attention.

I'd be very grateful for any suggestions about ways to improve the efficiency, sensitivity and specificity of a screening process of the kind outlined in this post.