Tracking scholarly digital footprints
Like many research-active academics, I communicate my scholarly research across multiple platforms, in a range of activities that have now become routine. These include, for example:
- maintaining a personal archive of my publications, presentations and research reports – accessible through the publications and presentations tab on this web site
- uploading papers to open access repositories – mine go to the Edinburgh Napier repository, and are also listed with links from the web pages of the Institute for Informatics and Digital Innovation
- uploading videos of conference presentations – see for example the output from the AHRC-funded DREaM project in 2011-12 on the Library and Information Science Research Coalition Vimeo account
- sharing PowerPoint presentations – I maintain a SlideShare account
- blogging about research – here at hazelhall.org, and between 2009 and 2012 at http://lisresearch.org
- submitting papers to open access journals – I have submitted, for example, to Library and Information Research and Evidence Based Library and Information Practice
- tweeting links to my work – as @hazelh
I also maintain a presence on a number of researcher registries, as listed from my About.me page. In doing so I leave scholarly digital footprints that others can follow, often in real-time. These can be measured as alternative, or “alt”, metrics alongside “traditional” bibliographic measures of output such as citation counts. Altmetrics are becoming increasingly important for established researchers, and are likely to be even more so for those who are at the start of their academic publishing careers, such as PhD students.
The idea of an altmetrics day at the SGS summer school 2013
When earlier this academic year the Scottish Graduate School for Social Science (SGS) invited its subject pathways to propose sessions for the summer school in June 2013, I suggested that the Information Science pathway run a day on altmetrics. I argued that this would be worthwhile to give PhD students from across the full range of subject domains represented in the SGS the chance to learn about altmetrics, debate their value, and examine the digital trails that their own activity has already generated, and is likely to generate, in the future. It is important that PhD students be aware of the strategies that they can adopt to ensure that their work has reach, and impact, in an environment where altmetrics count.
Six months later on Thursday 20th June I hosted the Information Science Pathway event Altmetrics: achieving and measuring success in communicating research in the digital age at the 2013 ESRC SGS Summer School. I was joined by (1) Professor Blaise Cronin of Indiana University, who led the morning seminar on the “metricisation” of scholarship, altmetrics and their place in scholarly communication, and (2) Brian Kelly of the University of Bath, who facilitated a workshop in which participants worked on their online presences, taking into account the insights gained from the morning session. This is my report of the day.
Morning seminar led by Blaise Cronin
The first part of Blaise’s presentation considered publishing ecologies. Blaise set the scene by referring to the first formalised records of research from the seventeenth century, and noted that the journal publishing industry is now big business dominated by four main players. Elsevier, Informa, Springer and Wiley generate profit margins of approximately 30-40%, employing a business model that depends on the (largely) free labour of authors, reviewers and editors in universities. Given that the universities themselves pay for the journals in which this work is published, it can be argued that academics routinely, and freely, give their content away to the publishers, who then sell the same material back to them in published form. This business model was secure until the development of technologies that make it possible for others to disseminate and preserve codified knowledge. The one advantage that traditional publishers continue to enjoy, however, is that they endow the published artefact with a seal of approval or certification, like a quality brand does in retail.
Blaise explained how open access models of journal publication have disrupted the traditional model. They have been particularly successful in certain subject areas, for example PLOS ONE in science and medicine. Publishing a paper by open access is often a much quicker process than following the “traditional” route. One of the students contributed to the discussion by mentioning his experience of an open access publisher making available all the artefacts around papers, such as the related correspondence, including peer review comments. Blaise responded by confirming that the nature of a paper changes when it is possible to supply this additional information, and pointed to a future where amendments, comments and annotations relevant to the paper could all be gathered together post-publication. Thus the artefacts of the publishing process become scholarly products in their own right. Open peer review, which may be employed alongside open access publishing, can extend the dialogue around a submission and take advantage of the wisdom of the crowd.
A key question for higher education, however, is who will pay for open access publishing in a system where revenue is no longer generated from sales. Brian Kelly contributed to the discussion at this point by saying that his own institution currently operates a “first come, first served” system for open access publishing fee payments. He predicted that this approach will become unsustainable as more scholarly research is disseminated by open access. One possible mechanism might be for universities to subscribe to particular open access titles, so that charges for publishing are paid through subscription fees.
While new technologies have changed the publishing industry, they have also made it possible to audit research output in many new and diverse ways, just as it is now possible to audit all sorts of activity in other fields such as sports performance. Traditional metrics consider individual output across a limited number of scholarly formats – journal articles, conference papers, and research monographs – and subsequent citations of that output. Now the range of outputs that may have research impact is much greater, including, for example, highly rated blog posts, and “citations” to such work come in many different forms. Added to this, it is now possible to harvest metrics on aspects of academic impact that previously would have been impossible to capture, for example how frequently a particular author’s work appears on student reading lists. Data from which altmetrics might be generated in the future include how often a researcher’s work is:
- included in syllabi
- quoted in the press
- cited in policy documents
- recommended by others
- praised by opinion leaders
- mentioned in social media
Blaise also referred to 56 indicators of impact. This list includes measures such as the h-index and services such as ImpactStory. Blaise compared these with the Q score, a measure of who and what is “hot” in terms of branding. Some of these new metrics are particularly welcomed because they share the credit for work more evenly than has been the case in the past. For example, strong contributions to empirical work often merit credit in acknowledgements rather than co-authorship. Using acknowledgement data as a measure of contribution recognises this work more readily. Blaise predicted that with access to more finely grained metrics, and the general growth in co-authorship across scholarly publishing, in the future credits for publications will look more like movie credits: many more people will be named, and their specific contributions noted.
One of the students in the class mentioned at this point that he had heard that some publishers are suspicious of single-authored publications. Blaise confirmed that these are becoming less common as interdisciplinarity in research becomes the norm, and scientific endeavour a super-collective activity. He also noted that co-authored research papers are more highly cited than those by lone authors, and that you can be more “productive” as a team author because the task of writing is shared. So co-authorship is useful for research reputation both in terms of volume of output, and how often it might be seen to be consumed.
The role of altmetrics was also discussed in the session, and in particular the extent to which they might be considered complements, correlatives or alternatives to “traditional” measures. Other issues covered related to their validity, reliability, utility, comparability and ethical value. For example, which data elements should be captured and counted, by whom, and for what purposes?
Blaise cited some research that confirms altmetrics as indicators of research value. For example, research by Gunther Eysenbach has shown that if a paper attracts attention on Twitter when first published (a real-time indicator), then it is likely to be well cited in the future (where citations may be considered time-lagged indicators). In addition, in some fields – such as medicine and the biological sciences – downloads are an indicator of subsequent citations. In others, such as the arts and humanities, they are not – and of course downloads are not an accurate proxy for actual consumption of scholarly publications. In some cases, however, altmetrics are not indicators of academic value at all. For example, tweets about publications are often more to do with odd article titles than with scholarly merit. So, just as not all citations are equal – the value of an individual citation depends on who cites whom, where and when the citation appears, and how the citation is made (and the shape of the citation curve is as important as the quantity of citations to a particular paper) – so it is the case with altmetrics.
A further issue is how to synthesise rankings that emerge from the somewhat “feral” and “social” altmetrics with the more traditional scholarly measures. Blaise pointed to the dangers of looking at particular metrics in isolation. He also highlighted how academics can have significant impact in their careers in ways that don’t show up in publication metrics at all, whether traditional or new. For example, some academics are highly influential in mentoring PhD students, yet there is no established measure for this kind of contribution. This reflects research on high-impact publications which shows that the most influential papers are not published in the high-impact journals.
The last part of Blaise’s seminar discussion considered the question of reputation. It is clear that some academics are now displacing effort from scholarly work to the promotion of scholarly work. Excessive attempts at self-promotion may, of course, just be another manifestation of established practices which to some are mild mischief and to others unethical, such as ghost and gift authoring. Nonetheless, given that academics are generally most incentivised by symbolic and social capital, such as the winning of awards and being well-networked, it is entirely predictable that the interest of higher education in altmetrics is set to grow. This being the case – and even though it is acknowledged that altmetrics are at best not entirely reliable, and at worst absurd – Blaise made some suggestions for how they could be used. First he presented a mocked-up buzzometer for altmetric indicators of the future. Then he showed how a series of spider web diagrams could display an individual’s altmetric footprint across various platforms at different career points, thus giving a sense of progression in terms of scholarly impact over time.
Blaise’s session ended with the students thanking him and remarking on how stimulating they had found his presentation.
Afternoon lab session led by Brian Kelly
Brian Kelly’s practical session in the afternoon gave the PhD students a chance to work on their own online presences. Brian himself is a role model for ensuring that his scholarly work is made accessible: the slides that he used in his introduction were already publicly available online when we took our seats in the lab.
We started the afternoon with a short discussion of the students’ practice in the use of social media tools for research purposes. It was evident that there was mixed experience within the group. Everyone was keen to progress from testing the water to actually taking the plunge, i.e. to effect a transformation from general lurking to fuller participation. In response to this ambition Brian emphasised that at first it is best to engage in a lightweight way by signing up for the “main” services such as LinkedIn and Twitter, where the former might be regarded as an online CV and the latter as an interactive business card. Brian also recommended that if you have any reservations about a service, you should not register with it.
The main questions that the students hoped would be answered in the session were also discussed at the start of the lab. The students were particularly interested in which social media applications are “best” for research purposes, how to build a personal and integrated online identity in a sensible way, and learning about any new tools with which they were not already familiar.
Brian then made a short presentation drawing on the content of his slides. After this the students experimented with, and evaluated, a number of tools including:
- Search and CV services, such as LinkedIn
- Resource sharing sites, such as SlideShare
- Researcher ID services, such as ORCID
- Researcher profile services, such as ResearchGate
- Blogging platforms, such as WordPress
- Twitter metrics services, such as TwentyFeet
- Twitter analytics services, such as Klout
- Altmetric tools, such as ImpactStory
- Citation analysis tools, such as Google Scholar
By the time we reached the plenary discussion after the practical it was evident that the students had a clearer view of how social media can enhance the visibility of their research outputs, help develop professional networks, and provide a wider range of fora in which to discuss research. Although all recognised that to set up and maintain an active online profile for research purposes can be time-consuming, each participant readily explained how they would apply their learning when they returned to their offices. The students thanked Brian enthusiastically for leading the afternoon session, and all three facilitators for a very rewarding day overall.
Further resources
- Brian Kelly’s slides from the session
- Paper: Using social media to enhance research activities by Brian Kelly
- Paper: Empowering users and their institutions: a risks and opportunities framework for exploiting the potential of the social web by Brian Kelly and Charles Oppenheim
- Paper: Scholarship: beyond the paper by Jason Priem
- Video: How networks qualitatively change our capacity by Cameron Neylon