Idea In Brief
Nous believes in capability uplift
Imparting knowledge, and building evaluation capability within an organisation, is a sign that we have done our job well.
How does Nous build and uplift evaluation capability?
We model evaluation skills so that you can learn them too, and we develop and deliver bespoke training and capability materials.
Evaluation capability is a strategic asset
When done well, it can transform how government departments and not-for-profit organisations think, learn, and create impact.
Governments, not-for-profit organisations and businesses across Australia are placing greater attention and weight on performance monitoring and evaluation – and on ensuring that the capability to conduct monitoring and evaluation activities exists in-house. We have already written, as part of this series, about why and how it can be valuable to bring on external evaluators. But we would be remiss if we did not tell you that, sometimes, you probably don’t need us.
Nous’ evaluations – which cover clients across different sectors both within and outside government – often involve an explicit or implicit focus on capability uplift. Imparting that knowledge, and building that evaluation capability within an organisation, is a sign that we have done our job well. It better positions our clients to undertake their own evaluations, be more discerning commissioners of evaluations, and ultimately deliver policies and programs that improve people’s lives and achieve the public good.
This article explores why evaluation capability matters, what it entails, and the strategies we have found useful in building this critical competency in our clients’ teams and people. It complements a previous article about building evaluation capability at an organisation-wide level.
Why evaluation capability matters
Evaluation capability matters to governments and not-for-profit organisations for a range of reasons. It empowers public and community sector employees to use data and information confidently in their decision-making, ensuring that policies, programs, and initiatives are grounded in solid evidence and can be continuously improved. By monitoring and measuring both outputs and outcomes, government departments and community organisations can better understand how their actions lead to intended or unintended consequences – without a third party leaning over their shoulders to explain the cascade from action to consequence – and translate these insights into tangible benefits for citizens.
This last part is important. There is growing demand from citizens, stakeholders, and oversight bodies for greater transparency and accountability in government operations. Equally, the not-for-profit sector is increasingly expected to demonstrate the value and impact of its work to funders and donors. High-quality evaluations help to meet these expectations by providing clear, credible evidence of what works and what doesn’t, guiding resource allocation and policy adjustments. Evaluation capability, in other words, is a good way to bake accountability into your system.
What is evaluation capability, though?
Evaluation capability encompasses a range of skills, mindsets, and knowledge areas. There is no shortage of evaluation competency frameworks in the ether (for example, see here and here) but, to cut a long story short, we have found the most important aspects to be:
- An evaluation mindset: A critical component of evaluation capability is fostering a mindset that values the investment in and use of information to make well-informed decisions. This involves cultivating an organisational culture in which data and evidence are integral to decision-making processes and in which properly analysing one’s actions, or the outcomes of a project, is seen as more than a mere box-ticking exercise. It's about instilling curiosity and rigour and then following them through to their natural conclusion.
- Research design: This involves understanding the principles of designing evaluation research to find out what needs to be known. Effective and appropriate research design is the blueprint for a meaningful evaluation: it accounts for context, guides the collection and analysis of data, and ensures that the evaluation addresses relevant questions.
- Practical tools and data literacy: Evaluation capability requires practical skills in collecting and analysing data. This includes familiarity with qualitative and quantitative data collection methods, together with the data literacy to interpret the results. Familiarity with key tools for data collection and analysis – such as surveys, interviews, statistical software, and qualitative analysis techniques – is essential.
- Reporting, sharing and using information: Good research design and rigorous data analysis can come to nought if evaluation findings are not reported and used effectively. Being able to tell a convincing, evidence-based story that monitors progress and reports on the effectiveness of an intervention – and to use this to consider options and make decisions about policies and programs – is also a critical evaluation skill.
Everyone involved in evaluations should understand the key tools and frameworks that go into a successful evaluation. This doesn't necessarily mean every team member needs to be an expert evaluator, but it does mean they should at least be knowledgeable enough to consider what program data and information they should collect to inform monitoring and evaluation activities and, where required, to commission and oversee evaluations.
But, yes, we know. We’re leaving you hanging.
How does Nous build and uplift evaluation capability?
By partnering with you. By modelling the skills outlined above so that you can learn them, too. By developing and delivering bespoke training and capability materials. You need never talk to us again (though we hope you do).
One thing is worth pointing out, though. Developing evaluation capability – learning, in other words, how to monitor and evaluate – is not something acquired in a single project. An evaluation methodology designed for one program cannot simply be applied to another, even one with similar features. That’s because evaluation itself is not a one-size-fits-all exercise. There are different approaches that make sense in different contexts. Every evaluation – its purpose, its subject, its objectives, its framework, its approach, its reporting and use of findings – is different. It is therefore critical to integrate pedagogy and development into evaluation learning. This helps build or uplift fundamental capabilities and enables continuous learning.
Figure 1 below outlines different ways that we build evaluation capability when working with our clients. The suitability of these will depend on context and needs.
[Figure 1: Ways Nous builds evaluation capability when working with clients]
Based on Nous’ experience, there are three ways that external parties – and, for that matter, internal leaders – can help to build and uplift evaluation capability effectively:
- Provide foundational tools and frameworks: Establishing a strong foundation is critical. This includes developing monitoring and evaluation (M&E) frameworks, program logics, and theories of change that guide evaluation efforts. For example, we recently worked with a government department to create a comprehensive M&E plan. By providing a clear structure, we helped teams understand what good evaluation looks like and how to approach it systematically throughout their portfolio.
- Embed capability building in projects: One of the most effective ways to build capability is through hands-on experience. By embedding capability-building activities within real projects, individuals and teams can learn by doing. For instance, in a recent partnership with a client, we co-designed evaluations while simultaneously building the team’s capability to handle similar challenges independently. This included jointly conducting qualitative and quantitative data collection and analysis. This embedded approach ensures that learning is directly relevant to the team’s work.
- Deliver targeted workshops and training: Workshops can be an excellent way to address specific skills gaps. For example, we have run sessions on developing theories of change and program logics, enabling participants to design stronger evaluations. Unlike generic ‘Evaluation 101’ workshops, our approach is tailored to the organisation’s context, the specific nuances of the program or service under evaluation, and the participants’ needs. This ensures the learning is immediately applicable and impactful. We find that this form of capability development is most useful when participants have immediate opportunities to apply their learning.
What are you waiting for?
Evaluation capability is not just a technical skill. It’s a strategic asset. When done well, it can transform how government departments and not-for-profit organisations think, learn, and create impact.
As governments and not-for-profit organisations face increasing demands for accountability and evidence-based decision-making, building evaluation capability is more important than ever. By investing in the skills and confidence of your people, you can ensure you are prepared to meet these challenges and deliver better outcomes for your stakeholders.
Get in touch to discuss how your organisation can build and uplift evaluation capability.
Connect with Annette Madvig on LinkedIn.
Prepared with input from Heidi Wilcoxon.
This is the fourth in our four-part series of articles on good practice evaluation. This series focuses on the steps you can take to ensure rigorous, high-quality evaluations. Download the full series here.