Greater than the sum of the parts: Evaluating the collective impact of complex programs

Idea In Brief

Collective impact

Advance Queensland is a $755 million initiative that aims to develop, attract and retain scientific and entrepreneurial talent, stimulate collaboration, address big innovation challenges and attract investment. Our work evaluating its performance revealed insights into capturing the collective impact of complex programs.

Four lessons

How do you measure the ‘X factor’ of policymaking? We found four big lessons: create an explicit link between each program and the portfolio objectives; plan evaluations and data collection at three levels – micro, meso and macro; clearly define the measures of collective impact; and quantify impact on the system, beyond direct recipients.

Act now

As program design increasingly aims to achieve collective impact to solve complex challenges, evaluation approaches need to evolve. Without sophisticated evaluation, policymakers will not be able to see the systemic impact of the investment or have the full picture of evidence to direct future programs.

Public policy seeks to address a wide array of issues, so it cannot rely on a single policy lever. Instead, tackling knotty problems or capitalising on big opportunities requires all the tools in a policymaker’s toolkit. But how does that policymaker then evaluate the impact of the resulting complex programs?

That was the problem faced by the Queensland Government when it launched Advance Queensland (AQ).

AQ is a $755 million flagship initiative that aims to develop, attract and retain scientific and entrepreneurial talent, stimulate collaboration, address big innovation challenges and attract investment into the Queensland innovation ecosystem.

AQ has all the hallmarks of a complex program. It comprises about 140 programs and activities delivered by nine government departments and coordinated by the Department of Tourism, Innovation and Sport (DTIS). It includes programs aimed at a diverse range of stakeholders – entrepreneurs, scientists and industry – and targets three priority groups – Aboriginal and Torres Strait Islander peoples, women, and regional and remote innovators. There are various types of programs, including grants, large-scale innovation events, mentoring programs, and the appointment of a Chief Entrepreneur for Queensland.

DTIS needed to understand how this multi-pronged investment changed the innovation ecosystem in Queensland. The department needed robust evidence to measure reach and progress, communicate success, identify challenges where further work is needed, and confidently direct future investment for the benefit of all Queenslanders.

It was a challenge we took on as evaluation specialists at Nous.

Understanding system-level changes from complex programs requires measuring collective impact

Multi-faceted programs such as AQ aim to achieve collective impact – that is, individual programs are more successful due to the simultaneous investment in multiple, sometimes seemingly unconnected, parts of the ecosystem.

Collective impact is a policymaker’s ideal outcome. But how do you identify when it has occurred? How do you measure the ‘X factor’ of policymaking?

Typical program-level evaluations will provide insights into how one program contributed to overarching policy objectives. They will generally provide very limited, if any, understanding of how the program worked in concert with others, let alone the overall impact of multiple investments, or any displacement (or crowding out) of outcomes elsewhere. This means government rarely gets a holistic picture of the impact of its investment at the system level.

In evaluating Advance Queensland, we used our collective impact evaluation approach to assess the AQ portfolio of programs. The results were positive. Our evaluations found AQ contributed toward:

  • increased community understanding of innovation and entrepreneurialism
  • enhanced domestic and international reputation for Queensland as a place to work and do business
  • strong collaboration outcomes, including confidence-boosting connections and formal business partnerships resulting in profit and jobs
  • increased investment in research and development
  • job creation and growth, particularly in the knowledge economy
  • growth in the Queensland economy, with a net present value of at least $840 million and a benefit-cost ratio ranging from 1.6 to 2.2.
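
For readers unfamiliar with these headline metrics, the sketch below shows how a net present value (NPV) and benefit-cost ratio (BCR) are typically derived from discounted streams of benefits and costs. The cash flows and discount rate are illustrative assumptions only, not the AQ figures.

```python
# Illustrative sketch only: how an NPV and BCR are typically derived from
# discounted benefit and cost streams. All figures below are hypothetical.

def discounted(values, rate):
    """Discount a list of annual values back to present-day dollars."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values, start=1))

# Hypothetical annual benefits and costs ($ million) over a five-year horizon.
benefits = [50, 120, 200, 260, 300]
costs = [150, 120, 90, 60, 40]
rate = 0.07  # assumed real discount rate

pv_benefits = discounted(benefits, rate)
pv_costs = discounted(costs, rate)

npv = pv_benefits - pv_costs   # net present value
bcr = pv_benefits / pv_costs   # benefit-cost ratio

print(f"NPV: ${npv:.0f}m, BCR: {bcr:.2f}")
```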

Informed by this work (and other collective impact evaluations), we have distilled four success factors for policymakers to evaluate the impact of a portfolio of investments.

1. Create an explicit link between each program and the portfolio objectives

A clear line of sight between each program and the portfolio objectives enables you to measure collective impact. This link needs to be more than just a narrative alignment.

By starting with clear portfolio objectives and clear definitions of success, each program can specify precisely how it will contribute to those objectives. Laying this out visually highlights the intersections and interdependencies between programs. That is, we see how each program plays its part and reinforces the outcomes of other programs.

In contrast to the typical approach (developing a theory of change and program logic for a single program, in isolation from other programs), this approach means each program uses common language to describe success and collects data in the same way.
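
As a rough illustration of what an explicit (rather than narrative) link can look like, the sketch below encodes hypothetical programs against shared portfolio objectives and shared measures. All names are placeholders, not the actual AQ framework.

```python
# Minimal sketch of an explicit program-to-portfolio mapping, so every
# program describes success against shared objectives and shared measures.
# Objective, program and measure names are hypothetical placeholders.

PORTFOLIO_OBJECTIVES = {
    "O1": "Grow the knowledge economy",
    "O2": "Strengthen collaboration across the innovation ecosystem",
}

SHARED_MEASURES = {
    "knowledge_jobs_created": "Count of new knowledge-economy jobs",
    "formal_partnerships": "Count of formal business or research partnerships",
}

PROGRAMS = [
    {"name": "Startup grants (hypothetical)",
     "objectives": ["O1"],
     "measures": ["knowledge_jobs_created"]},
    {"name": "Industry-research fellowships (hypothetical)",
     "objectives": ["O1", "O2"],
     "measures": ["knowledge_jobs_created", "formal_partnerships"]},
]

# With a common structure it is easy to see which programs reinforce which
# objectives, and which measures they must all collect consistently.
for obj_id, obj in PORTFOLIO_OBJECTIVES.items():
    contributors = [p["name"] for p in PROGRAMS if obj_id in p["objectives"]]
    print(f"{obj}: {contributors}")
```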

The process of identifying portfolio objectives and program linkages is ideally conducted at the policy planning and program design phase, contributing to clear program coherence. This provides an opportunity to identify the data required to measure success at the start of the program, increasing the likelihood that useful data will be collected throughout.

If you are interested in using this approach but your programs are already underway – don’t worry! It may not be too late to retrofit your programs into a portfolio framework and achieve a similar evaluation outcome.

The Queensland Government did this well in setting up the Advance Queensland Evaluation Framework, which outlined the 10 objectives of AQ and aligned each program, including new programs developed over time, to these objectives. We then built on these objectives by specifying, and sometimes designing, quantifiable measures of success.

Note that the portfolio objectives don’t need to limit the program – you can still capture and report on sub-objectives that are bespoke to each program.

2. Plan evaluations and data collection at three levels – micro, meso and macro

Investment in evaluation should save taxpayer dollars in the medium term by reducing ineffective program spend. Nonetheless, evaluation can be expensive in the short term, not only in dollars but in hours and resources from the commissioning agency and the program’s stakeholders.

Evaluation spend should be commensurate with program risk and value. It should also be timed so outcomes can be seen and the results of the evaluation can feed into the policy cycle.

With a clear picture of the expected outcomes from the portfolio of investment, and of how each program will contribute to these outcomes, evaluation investment and stakeholder engagement can be planned to deliver useful insights to policymakers.

DTIS led the planning of the evaluation of the portfolio by identifying the need for three types of evaluation:

  • Micro evaluation focuses on one program. It makes sense for high-value and likely high-impact programs that require their own program-level evaluation.
  • Meso evaluation groups together lower-value programs with similar objectives and/or stakeholders, to be evaluated together.
  • Macro evaluation looks at the impact of the entire investment. It allows the government to understand the big-picture impact and how programs support each other.
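
One way this planning logic could be expressed, purely as an illustration, is to tier programs by value and risk: high-value or high-risk programs get their own micro evaluation, while the rest are grouped by theme for meso evaluation. The thresholds and program details below are assumptions, not the DTIS evaluation plan.

```python
# Sketch of one way to tier programs into micro (stand-alone) and meso
# (grouped) evaluations based on value and risk. All details are illustrative.

from collections import defaultdict

programs = [
    {"name": "Flagship innovation fund", "value_m": 80, "high_risk": True,  "theme": "investment"},
    {"name": "Regional mentoring",       "value_m": 4,  "high_risk": False, "theme": "skills"},
    {"name": "Founder bootcamps",        "value_m": 6,  "high_risk": False, "theme": "skills"},
]

MICRO_VALUE_THRESHOLD_M = 20  # assumed cut-off for a stand-alone evaluation

micro, meso_groups = [], defaultdict(list)
for p in programs:
    if p["value_m"] >= MICRO_VALUE_THRESHOLD_M or p["high_risk"]:
        micro.append(p["name"])                    # own program-level evaluation
    else:
        meso_groups[p["theme"]].append(p["name"])  # evaluated with similar programs

print("Micro evaluations:", micro)
print("Meso groupings:", dict(meso_groups))
# The macro evaluation then draws on all of the above to assess whole-of-portfolio impact.
```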

All three evaluation types are needed to produce the information policymakers require. Where the macro evaluation provides the big picture, micro and meso evaluations allow government to understand which programs contributed most, and in what ways, to the overall impact. When combined, this information provides invaluable evidence to decision-makers for future investment.

Having an evaluation partner work alongside the organisation to conduct the evaluations at all three levels enables greater consistency in approach and can therefore lead to more meaningful insights. It can also create efficiencies.

Nous did this with DTIS. By developing evaluation plans at the micro and meso levels, we could apply a consistent approach to the translation of the AQ Evaluation Framework to the actual evaluation of programs. We could then use the findings of those evaluations in the macro evaluation.

3. Clearly define the measures of collective impact

Even when policy outcomes are well articulated, it can be challenging to use them to objectively assess whether they have been achieved.

To do this successfully, each outcome needs a definition that translates readily into quantitative measures of success. This requires precision about the data needed to support the measure, as well as the granularity of that data.

Advance Queensland’s policy outcome was to grow Queensland’s knowledge economy. To understand whether this was achieved, we worked with the Queensland Government to agree on a working statistical definition of the term.
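
To make the idea concrete, the sketch below shows one way a working statistical definition might be operationalised – classifying employment by industry as knowledge-intensive or not and computing the share. The industry list and figures are hypothetical, not the definition agreed with the Queensland Government.

```python
# Sketch: turning a working statistical definition of the "knowledge economy"
# into a quantifiable measure. The classification and employment figures below
# are illustrative placeholders only.

employment_by_industry = {  # persons employed, illustrative figures
    "Professional, scientific and technical services": 260_000,
    "Information media and telecommunications": 45_000,
    "Health care and social assistance": 330_000,
    "Construction": 240_000,
    "Retail trade": 250_000,
}

# Assumed flag for which industries count as knowledge-intensive.
knowledge_intensive = {
    "Professional, scientific and technical services",
    "Information media and telecommunications",
}

knowledge_jobs = sum(v for k, v in employment_by_industry.items() if k in knowledge_intensive)
total_jobs = sum(employment_by_industry.values())

print(f"Knowledge-economy share of employment: {knowledge_jobs / total_jobs:.1%}")
```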

Translating outcomes into numbers is valuable for understanding impact, but it simplifies outcomes and risks overlooking context and nuance. Evaluation therefore also needs to gather qualitative data to provide additional context and to support analysis of the root causes of why something was or was not achieved.

4. Quantify impact on the system, beyond direct recipients

High-performing investment programs benefit not only direct recipients but broader society.

For example, government support that results in a start-up creating a prototype that ultimately leads to it becoming a “unicorn” will not only benefit that organisation and its funders. The benefit will extend to suppliers, customers, employees – and kids who hear about the success and want to become entrepreneurs themselves. As noted above, it is also important to make sure that the unicorn inventor of that “better mousetrap” achieved more than just taking market share and jobs away from some other Queensland business that was making the “old mousetrap”.

In other contexts, an improvement in one narrow area may not generate better long-term outcomes because of constraints elsewhere in the system – for example, generating more renewable energy only to find there is not enough transmission line capacity to transport it.

That is why it is important to define – and measure – the broader system the program benefits.

Many factors shape social and economic outcomes, so isolating the impact of a particular program needs careful and extensive analysis. This can be done in multiple ways, including by analysing trends before and after the program’s introduction and by creating quasi-control groups.
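
As a minimal sketch of the quasi-control idea, the difference-in-differences calculation below compares the change in an outcome for program participants with the change for a comparison group over the same period. The figures are synthetic and purely illustrative.

```python
# Minimal difference-in-differences sketch: comparing program participants
# with a quasi-control group before and after the program's introduction.
# All figures are synthetic, purely to illustrate the technique.

# Average outcome (e.g. knowledge jobs per firm) for each group and period.
outcomes = {
    ("treated", "before"): 10.0,
    ("treated", "after"): 16.0,
    ("control", "before"): 9.5,
    ("control", "after"): 12.0,
}

treated_change = outcomes[("treated", "after")] - outcomes[("treated", "before")]
control_change = outcomes[("control", "after")] - outcomes[("control", "before")]

# The control group's change proxies what would have happened anyway;
# the difference in differences estimates the program's contribution.
did_estimate = treated_change - control_change
print(f"Estimated program contribution: {did_estimate:+.1f} jobs per firm")
```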

In the case of AQ, we used these techniques to assess its contribution to the Queensland knowledge economy, including growth in knowledge jobs, exports and productivity.

Act now or be left in the dark

As program design increasingly aims to achieve collective impact to solve complex challenges, evaluation approaches need to evolve. Without sophisticated evaluation, policymakers will not be able to see the systemic impact of the investment or have the full picture of evidence to direct future programs.

Setting up your evaluation for success – through clear linkages between program and portfolio objectives, strategic use of individual and grouped evaluations, clear definitions of collective impact success, and analysis of broader economic and systemic impact – will deliver the insights policymakers demand.

It will also demonstrate the value of a robust process, justifying the investment in the evaluation and creating a virtuous circle of recognising and funding high-quality, useful evaluations.

Get in touch to discuss how we can help your complex program evaluation.

Connect with Brianna Page and Mateja Hawley on LinkedIn.

Written with input from Rodney Williams