July 14, 2020

Earlier this year, OPEN spoke to David Poynter from Anglicare Victoria and Dr Bengianni Pizzirani from Monash University about their Rapid Response™ Trial evaluation. You can read more about this program in the case study here.

In this blog, we expand on the case study by sharing some of David’s and Bengianni’s thoughts and lessons about research partnerships between academic and ‘real world’ organisations. Their insights provide food for thought for other organisations that may be considering a similar undertaking.

This partnership involved Bengianni working within Anglicare Victoria, with David as the practice lead for the project. In this model, the academic researcher is part of an integrated team, interacting continually with practitioners throughout the data collection process and helping to build evaluation capacity into everyday practice.

A genuine research-sector partnership benefits both parties

For the organisation, a research partnership means that the evaluation can be co-designed in a pragmatic way, with data collection becoming a seamless part of program delivery that can help to answer operational questions directly affecting the quality of service delivery.

For the academic researcher, a partnership offers a way of gaining a deeper understanding of the data through direct involvement with the program, and can highlight the relevance of the research. For someone usually university-based, Bengianni says, ‘seeing real (rather than just theoretical) effects can be very satisfying’.

It’s very common for a university researcher to come to a project as a consultant rather than a partner. In that approach, evaluation is grafted onto the program as a separate element, often towards the end. This can limit the amount and kind of information being collected. It also means the evaluation is unlikely to improve the way the program is being delivered in a dynamic way.

Researchers need to be embedded within the organisation

Bengianni describes his role as that of an ‘industry-based researcher’: being present within the organisation running the program and working directly with practice staff. This includes regular meetings to discuss the details of information gathering and the interpretation of findings, paying attention to both research and practice perspectives and building connections between them.

Within the Rapid Response evaluation, post-graduate students have also been added to the mix, bringing a new dimension to program implementation by helping to build the evaluation capability of practitioners, and sharing learning (in both directions).

Through this process research becomes part of everybody’s job, not something that occurs ‘over there’.

Many different forms of data have value

Quantitative and qualitative data both add significant insight, and often work together to inform each other. Systematically recording the qualitative observations of practitioners provides valuable insight in its own right, but also helps researchers make sense of what they are observing in the quantitative data.

Administrative data can also be useful and is essentially ‘free’, having already been recorded for other purposes. Any relevant information can be useful, and even a small amount of data collected in any given situation is better than none.

While the overall focus is usually on evaluating the success of a program, a ‘failure log’ can be invaluable during implementation. Failure is a key driver of improvement in an intelligent learning system.

The ongoing evaluation process is as important as the final outcome

When money is spent in a sector, the funded organisations need to show the worth of the work and report back to the funder that their investment (and perhaps re-investment) is warranted. In doing this, there is a tendency to concentrate on measuring and demonstrating effectiveness at the end of a program, often against quite narrow criteria.

However, if this is all that happens, there is a lost opportunity to fine-tune a program. Good evaluation maps both the implementation and the end point of a program. It can help project teams achieve the most effective outcomes while showing that learning along the way can improve the quality and efficiency of delivery.

We need a better space to share this kind of research

To date, there has not been much reporting of evaluations that are based in practice. It can be hard for academics to publish studies in traditional journals when those studies are not strictly experimental.

It would be valuable to have more discussion that is genuinely sector driven, to de-mystify the research process and to advocate for different types of evidence in driving best practice.
