
The 10 success factors for building evaluation capabilities in the public sector


Idea In Brief

On the agenda

Government agencies are increasingly thinking about how to get better at designing, conducting and commissioning evaluations of policies and programs. Agencies are also focusing on how to embed cultures of reflection, learning and continuous improvement.

Operating model

A fundamental decision for each agency is the size and operating model of its evaluation capabilities. Government agencies that commission and conduct regular evaluations typically employ one of three operating models: centralised, hybrid or decentralised.

Success factors

While each decision is important, in our experience 10 factors are critical to the success of the evaluation function and its support for decision-making by leaders and business areas across an agency. These can be divided into the planning, building and embedding phases.

With government finances tight, it is more important than ever for agencies to demonstrate that every dollar being spent is generating value.

So it makes sense that monitoring, evaluation and learning (MEL) is in the spotlight, and that many agencies are looking to build their internal MEL capabilities.

But building that MEL capability is no easy task. Drawing on our experience working with government agencies across Australia, the United Kingdom and Canada, we are pleased to share some key things you should consider when investing in and growing your MEL capabilities.

There is appetite for improved evaluation activities

There is renewed interest among Australian government agencies in building evaluation capability.

As Katy Gallagher, the Minister for the Public Service, said last October: “Evaluation is [a] priority for this government. It helps us see if we’re actually doing what we said we would. To understand what is working and what isn’t. And being accountable to all Australians.”

In the past 18 months, the federal Department of Finance has released a new Commonwealth Evaluation Policy, a new Evaluation in the Commonwealth Resource Management Guide, and updated guidance around embedding evaluation planning into new policy proposals (NPPs). These complement and build on previous work, such as the Productivity Commission’s Indigenous Evaluation Strategy.

This reflects a broader trend, with similar pushes to outline requirements, recommend practices and update guidance taking place in states and territories (for example, NSW recently updated its evaluation policy), as well as in countries including Canada and the United Kingdom.

Government agencies are increasingly thinking about how to get better at designing, conducting and commissioning evaluations of policies and programs. Agencies are also focusing on how to embed cultures of reflection, learning and continuous improvement.

But agencies looking to scale up their internal evaluation capabilities face several challenges:

  • making the case for long-term investment in an environment of heightened fiscal pressure
  • balancing quick wins that demonstrate the value of evaluations with laying the foundation for long-term success
  • competing for specialist evaluation skills in a tight labour market.

Meeting these challenges can require some difficult strategic decisions and deft management.

There is a spectrum of evaluation operating models

A fundamental decision for each agency is the size and operating model of its evaluation capabilities. Government agencies that commission and conduct regular evaluations typically employ one of three operating models: centralised, hybrid or decentralised.

While there are benefits and drawbacks to each approach, in our experience best practice is typically realised through a more centralised operating model, whether as a fully centralised function or as the central element of a hybrid approach.

There are several benefits to this model. A centralised evaluation function can:

  • provide economies of scale relative to having discrete evaluation staff located across an agency
  • more easily drive capability development and the application of consistent evaluation practice across an agency
  • be more easily located alongside complementary functions, such as economic analysis, strategic policy, data and analytics, and performance measurement to increase the likelihood of value-adding cross-pollination
  • provide an additional level of independence by being separated from policy development and program delivery, which supports the credibility and objectivity of the evaluations
  • establish an identity and brand that can be used to drive change internally and attract external talent to the agency.

The challenge in implementing a centralised model is to retain strong links to, and knowledge of, the business areas being evaluated, so that evaluation remains practical and valuable from the front line through to strategic decision-making.

There are 10 success factors

The task of establishing a central evaluation function can be daunting. There are myriad decisions to make, each of which will influence how effectively the function supports accountability, evidence-based decision-making, learning, improvement and stakeholder feedback across the agency's policies and programs.

While each decision is important, in our experience 10 factors are critical to the success of the evaluation function and its support for decision-making by leaders and business areas across an agency. These can be divided into the planning, building and embedding phases.

Agencies need to source the key skills

Ultimately, to build internal evaluation expertise, agencies need staff with the right skills.

We have seen public sector agencies use a variety of strategies, including:

  • bringing together existing pockets of excellence from line areas
  • training staff in evaluation skills
  • developing capability as part of evaluation projects conducted by external providers
  • recruiting staff from agencies with known centres of excellence
  • targeting lateral hires who have worked on evaluations.

Most agencies will need to draw on several of these strategies.

Building and acquiring specialist capabilities and changing culture takes time. Agencies need to be ambitious, but also realistic about what is achievable in the short term.

As a first step, you can assess your current level of evaluation maturity – using Nous' five-factor evaluation maturity framework – and then chart a clear path forward.

As government and citizen expectations of evaluation grow, agencies need to build their capability. For leaders, the time to act is now.

Get in touch to discuss how we can help your organisation to grow its internal evaluation capabilities.

Connect with Andrew Benoy and Kale Dyer on LinkedIn.

Prepared with support from Annette Madvig, Carlos Blanco and Robert Sale.

A version of this article was first published on the Australian Evaluation Society blog.