Follow that! A gold-medal example of Public Engagement with Research to learn from
Funding for British sport at the Olympics is pretty cut-throat. If your sport is unlikely to win a medal, you are likely to get nothing from UK Sport. So 1st to 3rd means money; 4th or below means none. This has been a spectacularly successful policy in terms of medal results, though controversial in wider terms of how it relates to levels of participation in sports, how the individual sports can develop sustainably, and how broad the funding should be in this all-or-none system.
Funding research
The situation is a little similar for how research is funded at UK universities. A large chunk of the government funding that universities receive is calculated from their performance in the Research Excellence Framework (REF), albeit on a 7-year cycle rather than the Olympics’ 4.
A major part of each submission is a set of Case Studies, in which a piece of research, and the impact it has had, is submitted and ranked on a 5-point scale. Ignoring the bottom (unclassified) rank, the other four levels all sound worthy of funding, yet a 1* or 2* ranking (‘quality that is recognised nationally / internationally in terms of originality, significance and rigour’) doesn’t get you less – it gets you nothing. A 3* (internationally excellent in terms of originality, significance and rigour) gets you some funding, but a 4* – the gold medal of the REF – gets you 4 times as much, with an estimated £324K for the last round, REF2014.
You have to engage with the public
Evidence of Public Engagement with Research (PER) is one of the elements that REF is pushing for, to “strengthen relevance, responsiveness and accountability and to build trust” in science. But PER is alien to many researchers, who find it hard to understand why it’s needed, what is actually required in practice, how to fit it into lives already busy and stressed with delivering the research itself, and even how to get started. There is also a sneaking suspicion that, despite fine words, it isn’t actually seen as that important compared to ‘real research’ by the assessors. When push comes to shove, either in getting grants funded or in scoring highly in the REF, is it the labwork and the publications – the familiar world of researchers – that are really being judged? Is the PE just ‘froth’ that is in practice ignored in favour of hard science?
A gold-medal example
How much notice is really taken of PER was put to the test by a group from the University of Southampton, who submitted an impact case study for REF2014 that was 100% engagement-based. I got in touch with the lead author, Jon Copley, to ask some questions.
The case study focused on ‘Public Engagement with Deep Ocean Research’, and in an open-access paper just published, he explains how the PER data were collected, and how they defined and measured indicators of impact.
Universities and researchers aren’t formally told how their case studies have scored, but Copley’s university web page confirms that this case study got a 4* ‘gold medal’ score. “Yes, we received feedback through these sort of informal academic networks that I’m sure a lot of people did”, said Copley. He was uncomfortable finding out this way and thought that knowing the grade should be automatic, both so it is fair and because “if we have this goal of trying to achieve the best impact we can with our research, then surely feedback is desirable?”.
The case study was based on a research programme looking at life in deep-sea environments such as hydrothermal vents. The team discovered new examples of vents on the ocean floor and identified a bunch of new species. So even in this example, which focuses solely on engagement, there has to be a strong research story: we are talking about PER, not general engagement with science.
What can we learn from it?
Firstly, I thought it was helpful to know for sure that impact from PER can be evidenced well enough to get a 4* grade – and especially impact that sits in the ‘Inform/Inspire’ skin of the PE ‘onion’, which is particularly hard to provide convincing impact data for. That should motivate scientists, and also university managers, to take PER seriously.
PER grid useful
Secondly, I saw that planning, organizing and recording aspects of PER within a project can be very straightforward. For me, one of the most useful parts of the paper was a timeline grid with research and engagement activities blocked in.
This might be bread-and-butter to an advertising or comms professional, who works from a list of agreed social media channels and press-release dates and is used to developing campaigns with every output planned. It is, however, quite alien to most research scientists, and simply seeing this grid felt enlightening. I could see that it’s not rocket science, and that I might do this myself, even if with different activities or at a much reduced level.
So I love the grid that Copley used in his case study, reproduced in the paper and also here. This is where you show, for example, the linkage between being awarded a grant or having a paper published and putting out a press release and some social media. Not only does it start to feel organized, it also becomes easier to be clear about what is ‘enough’. And as Copley said, “We realized there was a lot of stuff we were already doing that we weren’t joining up, that we weren’t necessarily making count, and that was an easy win, harnessing that together.”
Clarity on Engagement channels
This grid also shows clearly what forms of engagement they used, divided into Media, Direct and Online engagement. For each of these PER approaches, the choices they made are clearly visible. Your channels don’t have to be comprehensive, just available and relevant for you and your target audiences.
Copley was a science journalist in a former life, and he accepts that this was useful, but mainly in terms of understanding how journalists work and what motivates them: their motivations are different from those of researchers, and any alignment is a happy coincidence.
Measuring Impact – Reach and Significance
That’s what they did, but did it work? The aspect that troubles people most here is how to assess impact, both because it is so hard to get a grip on and because it is the ultimate goal. It is King Pellinore’s Questing Beast, or Harry Potter’s Golden Snitch.
The words everyone uses here are Reach and Significance. Reach – how many people you engaged with – depended on the type of engagement, but it was good to see that if you are working face-to-face with local people, then it is OK for that number to be small.
One thing that surprised Copley was that audience numbers for many publishing outlets were near-impossible to get. “If you ask for reader numbers for that webpage for that Guardian article, you won’t get it,” he said, “it’s commercially sensitive information”. However, public outlets such as the BBC did give numbers, so he started to concentrate on outlets that gave him feedback he could use. The lesson seems to be to know in advance whether you can get numbers, and not to spend time in areas where you can’t, unless there is a particular reason to.
The other surprise for Copley was about Significance. “There were some very loose suggestions at the start. Things like, you could show significance of impact if, for example, you do work with schools and the number of students nationally taking science subjects goes up. But how do you link your specific interaction with those particular people to that general outcome?” Instead, he found a more local and manageable solution: “We could get professional testimonials from people like teachers who could say yes, you worked with our school over several years and amongst our pupils we’ve seen an increase in applications to study it year on year.”
Many natural scientists, with the need for statistical significance drummed into them from an early age, are uncomfortable with qualitative, apparently anecdotal data. But such data are the lifeblood of other disciplines, such as the social sciences. Copley agreed: “We could have probably saved ourselves time by engaging with social scientists who have the research background in this, but thinking carefully about exactly what the impact is”.
I was really encouraged by the fact that, in looking for impact, the most useful markers of significance seemed to be relatively easy to come by: for example, reported comments from the people involved, be they teachers, students, local residents or lifelong learners. Keep it simple, keep it local, keep it personal seems to be the message.
Most useful
I pushed Copley on what was most useful if you had limited resources – what would he keep and what would he cut if he halved his PER? “I think I’d be a lot more discerning about online engagement. I’m recognizing the differences between different social media platforms in terms of their audiences. I think in some cases the ones that I’ve fallen into using are not actually the best for engagement. They’re probably a bit of an echo chamber. It’s not necessarily the audiences I’d really like to reach.” So he thought Twitter helped him keep in contact with specific groups such as science journalists, but that to reach a lot of the wider public audiences he should be using Facebook.
I was concerned that all the PER work could be a drain on time and energy, but he said not: “Actually the kind of logging of impact after an engagement becomes routine, and you’re doing that anyway to check whether it’s effective”.
Headspace
How much headspace did these PER aspects consume? Copley said, “It does take up head space, but I think it is a good thing, because it makes you reflect on your research and research questions, their context, and how different audiences respond – what they latch on to, what they connect to about what you’re doing. That helps with grant proposals, and you get a lot from that as a researcher.”
Conclusions
There’s a story I tell myself about researchers who struggle to get to grips with PER, and it’s about how much time they actually spend thinking about PER compared to laboratory experiments. As a scientist, I would spend enormous amounts of time thinking about experiments, during the day, evenings, nights, and weekends. My guess is that the time many researchers spend actually thinking about PER is vanishingly small. It seems unfamiliar, hard to define, complex, and potentially overwhelming, and these factors make it hard to get started, and easy to stay stuck.
What Copley’s paper says to me is that it can be much simpler than we imagine, and here’s one way to do it that we can all follow. It might not get you a gold medal, but it might get you in the game.
Resources:
Copley, J. (2018) Providing evidence of impact from public engagement with research: A case study from the UK’s Research Excellence Framework (REF). Research for All, 2(2), pp. 230–243. https://doi.org/10.18546/RFA.02.2.03
REF2014 Case Study: Explore the Deep: Public Engagement with Deep-Ocean Research. https://impact.ref.ac.uk/CaseStudies/CaseStudy.aspx?Id=42992
Jon Copley: @expeditionlog
Originally published on the London Public Engagement Network (PEN) blog