What should the relationship be between evidence generated through research and what policy makers and social care practitioners do? This entails questions of what we would like the relationship to be, and of what practical limitations there are to what it can be. This has long been an issue for debate and research.
Research about the use of research (if that doesn’t seem too self-absorbed) is an important and well-established topic of scholarship. It’s also highly relevant to the re-emerging debate on the quality of the social care evidence base, prompted by a National Audit Office suggestion that social care has a ‘weak’ evidence base (Brindle, 2014). This study asks vital questions about how research evidence is actually used in policy decisions. It’s no use having a strong evidence base if its use remains weak.
One possible view is that all policy and practice should follow what the research evidence tells us. This would be a neat, linear relationship and a rational process. Yet a moment’s consideration shows us that this isn’t always possible. Sometimes the evidence isn’t clear or well enough developed to provide clear guidance.
Practical and cultural divides between researchers and policy makers and practitioners can also hinder this process. Then we have to consider the need to contextualise evidence, such as when working with an individual who doesn’t neatly fit the profile of a client found in the research evidence. And, of course, for good or bad, there will always be some politics in the process.
If we are willing, then, to allow some movement away from the neat, rational, even simple, model of research directing policy and practice, we certainly do not end up arguing that these two can occur in an evidence vacuum. We want our practitioners and policy makers to be like you, dear reader: aware of the evidence in an area and reflecting deeply on it and its implications for action.
There is an argument that we are some way from this and that there is a gap between the evidence produced by researchers and its uptake in policy and practice. Understanding this gap may help us to find ways of closing it so that we move to a more desirable research-policy/practice relationship.
This is where this article by Cherney and colleagues comes in, as they look at the access that a group of policy makers in Australia have to research evidence.
What, then, did Cherney and the team do? They surveyed employees in federal and state government agencies across Australia. Full details of how the survey was conducted are given by the authors, and this is something we’ll come back to in the discussion below.
There were many practical difficulties in conducting the survey, but the authors achieved an impressive final sample size of 2084 respondents.
Amongst the authors’ findings are:
- Respondents said that the most important source of information for them, and the most frequently consulted, was colleagues in their organisations.
- University researchers were seen as an important source of information, but were not nearly so frequently consulted.
- 58% of respondents said that they used electronic databases of research publications. Those who did not use databases had a range of reasons for not doing so, including lack of access, lack of skills in using them, and a preference for consulting colleagues and/or using internet search engines.
- A lack of time to read relevant studies was seen by many respondents as a significant practical barrier to using research.
- The authors undertook a statistical analysis (logistic regression modelling) of their data to explore the relationships between variables and the relative significance of different factors. They found that ‘an overall organisational ethos and professional culture that value research have a bearing on the uptake of academic research among policy personnel, well above any perceived deficits in individual skills’ (page 13).
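To make the modelling approach in the last bullet point concrete, here is a toy sketch of the kind of logistic regression one might fit to survey responses. This is not the authors’ analysis: the variable names, Likert scales, and simulated data below are entirely hypothetical, and serve only to show how such a model can compare the weight of organisational culture against individual skills in predicting research use.

```python
# Hypothetical sketch: logistic regression on simulated survey-style data.
# None of these variables come from the Cherney et al. dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Two invented 1-5 Likert-style predictors:
org_culture = rng.integers(1, 6, n)  # "my organisation values research"
skills = rng.integers(1, 6, n)       # "I am confident using research databases"

# Simulate a binary outcome (uses academic research: 1/0) in which
# culture is deliberately given more weight than skills, echoing the
# pattern the authors report.
logit = -4.0 + 0.9 * org_culture + 0.2 * skills
p = 1.0 / (1.0 + np.exp(-logit))
uses_research = rng.binomial(1, p)

# Fit the model and inspect the coefficients.
X = np.column_stack([org_culture, skills])
model = LogisticRegression().fit(X, uses_research)
print(dict(zip(["org_culture", "skills"], model.coef_[0])))
```

On this simulated data the fitted coefficient for organisational culture should come out larger than the one for individual skills, which is the shape of the comparison the authors draw, though their actual model and variables are described in the paper itself.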
Discussion and conclusions
Two obvious issues to consider with this study are i) it is Australia-based, and ii) how the sample of survey respondents was drawn.
On the first one, how different is the Australian context of policy making to that of Britain? Undoubtedly there will be differences and we need to hold this in mind, but we should not dismiss the findings as completely irrelevant to understanding the same issues in the UK. Many of the issues raised in the paper have been raised by others researching the same theme in other countries, as discussed by the authors.
On the second issue, the authors are clear that they were not able to draw a random sample of staff in the government agencies. For various practical reasons they had to ask the agencies to pass on the questionnaire to members of staff who fitted the criteria for the survey. The authors, then, have no way of knowing how representative the respondents are of the agencies’ actual employees. They discuss this limitation openly, and we can note that the sample was rather large, which may go some way to easing the concern.
Values and culture
We could engage in a more detailed discussion than the authors do of the ways in which research evidence might be transmitted to policy makers. As the authors acknowledge, for example, colleagues could be a very important channel for finding out about evidence. The paper, though, does highlight some very important points about understanding and closing the gap between the generation of research findings and their use in policy making.
The most important element is that people work in an organisational culture that actively and explicitly values the use of research evidence. As the authors wrote, ‘an overall willingness to seek out academic research will be determined by whether the occupational milieu in which policy personnel work is one that values academic research and sees it as important’ (page 13).
The authors then highlight a whole set of practical issues that could be improved to close the gap, some of which can be addressed by policy organisations, some by individual policy makers and others by researchers themselves. These need to be carefully addressed, but doing so without improving the organisational culture and the message it transmits about the value of research evidence will undermine the work done to remove or reduce the practical barriers.
The same point most likely applies to policy and practice in social care in the UK. We need to ensure that, where they haven’t already, organisations develop the kind of culture that Cherney and colleagues highlight as vital, so that research is widely and appropriately used by policy makers and practitioners.
Cherney A, Head B, Povey J, Ferguson M & Boreham P (2014) Use of academic social research by public officials: exploring preferences and constraints that impact on research use. Evidence & Policy. Print ISSN 1744 2648. Online ISSN 1744 2656 [Abstract]
Brindle D (2014) Mounting pressure on social care to build evidence base. The Guardian Social Care Network, 13 March 2014.