
Beyond assessment: Why evaluation matters in digital assurance


Idea In Brief

The dynamism of digital projects is often overlooked

Digital investments are often subject to investment frameworks and budget processes that have been adapted from those designed for physical infrastructure investments.

Digital technology is integral to service delivery

In fact, for end users – citizens, businesses, communities, public servants, departments – the digital platform that delivers the service may as well be the service.

Assurance should be complemented with evaluation

You have to evaluate your digital investments if you wish to learn, improve, and optimise value for money across your portfolio of platforms and capabilities.

Digital Assurance: Objective and independent assessment of a digital investment to maintain delivery confidence. [Adapted from the DTA and NSW Government digital and ICT assurance frameworks]

Digital Evaluation: A systematic and objective assessment of the design, implementation or impact of a digital investment to determine its value. [Adapted from the Australian Centre for Evaluation]


Over the past decade, and especially since the start of the COVID-19 pandemic, governments across Australia have invested heavily in their digital assets and capabilities. Unfortunately, they have often watched these investments fall short of the mark.

Consider the $2.2 billion spent on five failed federal IT projects: GovERP, Modernising Business Registers, the digital passenger declaration, the visa processing system, and the entitlements calculation engine. All of them significantly underdelivered. What went wrong?

We believe there has been an overreliance on assessment alone when it comes to digital investments: assurance activity that tracks the level of confidence in a project’s delivery, and that gives greater weight to whether the plan is being realised than to the intent behind it. If your project is being delivered as planned, but the plan was built on incorrect assumptions or the context has changed, and the investment has had negligible or even negative effect, how reassuring is the assurance? Are there other methodologies that can complement assurance processes and ensure digital investments remain fit for purpose and continue to deliver benefits?

What's missing?

We believe that assurance should be complemented with evaluation – an evidence-based analysis of intended and unintended consequences, measured against what an investment actually set out to do – and that the latter must play a more central role in digital investments going forward. You have to evaluate if you wish to understand the true impact and appropriateness of your digital investments – in short, if you wish to learn, improve, and optimise value for money across your portfolio of digital platforms and capabilities.

Evaluation is valuable not only after a project has fully concluded; it can also be applied to great effect within delivery. This includes embedding evaluation frameworks in current assurance processes to continually assess the appropriateness of the solution and the accuracy of its theory of change, and measuring the impact of features delivered during large-scale Agile programs.

A digital platform is not a construction project

Digital investments are often subject to investment frameworks and budget processes that have been adapted from those designed for physical infrastructure investments. These frameworks have been refined over time to help ensure that digital delivery meets user needs and realises intended benefits, but there is merit in going further. Broader adoption of policy tools like evaluation is needed to ensure a holistic assessment of the impacts of digital investment.

As Jennifer Pahlka, co-founder of the United States Digital Service, writes in Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better: “Treating software as just another commodity overlooks the fact that mission-critical software cannot simply be bought the way you buy a truck or even a building. It’s an integral part of the service you provide, and that service and the environment in which it operates are dynamic.”

This dynamism is too often overlooked. Digital projects typically aim to combine new ideas, approaches, and products to generate benefits and to improve or replace existing products and services, all in deeply complex contexts. Beyond successful delivery of the technology itself, they depend heavily on effective change management and adoption strategies, which are also difficult to get right. Some digital products become rapidly obsolete; others take time to gain traction with users. Two organisations adopting the same digital platform at the same time can have wildly different experiences of it because of their different contexts and needs.

This is particularly pertinent for whole-of-government digital platforms and state digital assets, which must coexist with, operate alongside and enable a range of other platforms and assets before they can deliver benefits. That makes it much trickier to identify potential outcomes and to attribute them appropriately and with confidence. Evaluation of actual outcomes is imperative precisely because those outcomes are so difficult to predict in the first place.

Yet assessment doesn’t do this. Instead, it applies a delivery-centric lens, making sure governments hit their milestones when they’re meant to and deliver their projects on time. What governments really need to know, in the case of digital projects, are the answers to the harder outcome- and evaluation-oriented questions: Has a digital project actually addressed the need that was central to its business case? Have better or more suitable technologies or approaches been developed since the project was scoped? Have there been unforeseen positive outcomes? Have there been unintended negative ones?

Moving beyond box-ticking assurance

The Productivity Commission highlights that, while digital activities make a significant economic contribution, their true value and impacts are embedded across various sectors and are not always directly measurable. But not even attempting to measure them, opting instead to focus on a project’s outputs rather than outcomes, does not strike us as a sensible or particularly enlightening course of action.

Perhaps the problem is that some parts of government still fundamentally view digital technology and platforms as an enabler of service delivery – and of the service-design and policy work they do – rather than as a service in and of itself. When Nous engaged with multiple stakeholders across NSW as part of our evaluation of the state’s Digital Restart Fund, we found a sense that governments often do not sufficiently acknowledge how deeply digital technology is integrated into everything they do and deliver. For end users – citizens, businesses, communities, public servants, departments – the digital platform that delivers the service may as well be the service.

Effective evaluation of government digital investment requires people who recognise this, and who have sufficient expertise across government, policy, digital, and evaluation. Teams that combine these capabilities are best placed to provide the multidisciplinary, cross-sector knowledge required to help governments evaluate effectively – delivering comprehensive analysis of outcomes rather than box-ticking assurance. If the transformation of Australia’s digital capabilities and infrastructure is to genuinely achieve outcomes for governments, users, and the country, then the traditional processes, policies, and mandates that govern digital investments are going to need to change.

Get in touch to discuss how your organisation can evaluate its digital investments.

Connect with Will Prothero and Joshua Sidgwick on LinkedIn.