Getting good value from the Australian Government’s welfare reforms: which monitoring and evaluation tools should we use?
The success of the “Australian Priority Investment Approach to Welfare” initiative is likely to hinge on early social impact modelling of individual programs as part of robust monitoring and evaluation. But time is running out to ‘bake-in’ value-for-money provisions.
The Australian Government announced the “Australian Priority Investment Approach to Welfare” this week and it has received a mixed reception. Dr Cassandra Goldie, chief executive of the Australian Council of Social Service (ACOSS), warned that further mutual obligation was an “exceptionally bad idea”. Mikayla Novak of the Institute of Public Affairs said, “A dimension to this debate that seems to be somewhat downplayed is the potential of the investment approach to transform bureaucratic incentives, traditionally oriented toward maximising welfare budgets”.
But how can taxpayers be assured of good value for money when this ‘revolution’ involves largely unproven programs or the scaling up of existing initiatives? How can social innovation be ‘de-risked’ for governments that will need to act like venture capitalists but have no systematic way to evaluate investment returns? We believe that robust tools are available that would help to make the selection of programs consistent, predictable and transparent. The government should release further guidance to accompany the $96 million “Try, Test, and Learn Fund” at the end of this year.
According to Minister Porter, analysis of social security data shows that a minimum of 40 per cent of the current 11,000 Australian carers under the age of 25 are expected to receive income support payments at some point in their lives. Sixteen per cent of that cohort (around 1,800 people) will access income support every year for the rest of their lives. The overall lifetime cost to the system is said to add up to around $5.2 billion.
“And that $5.2 billion says nothing about the human cost,” he says. “That a young carer’s commitment to provide care to others leads to their own long-term welfare dependence means we effectively, all of us, but particularly government, maintain a system that helps parents at the cost of lost opportunities and unrealised potential for their children.”(AFR, 20 Sept)
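For readers who want to sanity-check the quoted figures, the arithmetic can be reproduced directly. The inputs below are the numbers reported in the article; the per-carer average is our own illustrative derivation, not a government figure:

```python
# Back-of-envelope check of the figures quoted by Minister Porter.
# Inputs are as reported; the average is illustrative only.

COHORT = 11_000         # young carers under the age of 25
LIFELONG_SHARE = 0.16   # share expected on income support every year for life
LIFETIME_COST = 5.2e9   # quoted overall lifetime cost to the system, AUD

lifelong_carers = COHORT * LIFELONG_SHARE    # 1,760, reported as "1800"
avg_lifetime_cost = LIFETIME_COST / COHORT   # derived average per carer

print(f"Lifelong recipients: {lifelong_carers:,.0f}")
print(f"Average lifetime cost per carer: ${avg_lifetime_cost:,.0f}")
```

Note that the “1800” parenthetical only reconciles if the 16 per cent is applied to the full cohort of 11,000, which is how we have read it above.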
The approach has merit. Allocating scarce financial resources to the prevention of problems rather than their remediation is always appealing. No more expensive ‘end of pipe’ clean-ups. But execution will be key, and there will be successes and failures, as the New Zealand experience has shown.
Traditional ways of accounting for government or philanthropic grant funding provide some level of comfort to funders, but these acquittals tend to be:
- Cumbersome for not-for-profits and social enterprises. Some organisations report duplicated data requirements across multiple information systems, reformatting of data for co-funders, and diversion of resources away from front-line services.
- Post-hoc. When conceived as a review or program close-out (or even an annual checkpoint) without reference to indicators of change from the baseline scenario, an acquittal tends not to offer the dynamic management ‘dashboard’ required to monitor and fine-tune the program.
- Ad-hoc. Data essential to robust evaluation (and especially to comparisons between similar programs) is not considered at the commencement of a program and so is not purposefully collected, making evaluations such as Social Return on Investment (SROI) difficult to complete.
We argue that what is needed is a monitoring and evaluation (M&E) framework that includes ex-ante Social Impact Assessment (SIA). Such participative modelling of the expected change can inform both the proposal and funders’ decision-making. This approach would also support the social enterprise to control and improve programs, and ultimately to share learnings with other social entrepreneurs. We wouldn’t let private companies (resources companies or infrastructure developers) have significant effects on a community without attempting to understand the intended and unintended consequences of a proposed action. We owe the same due diligence to the scaling up of untried social programs. If a Co-Design approach is to be used to design the service, the SIA should form an independent assessment of the resulting social program design.
There is a growing evidence base for multi-sector partnerships and some notable successes. There are also some famous ‘failures’, such as the Rikers Island social impact bond. The practice of monitoring and evaluation is becoming more sophisticated. The NSW Government seems to be leading the way with a newly-minted guideline on monitoring and evaluation frameworks, and has established a community of evaluation practice across government with a centre of excellence in Treasury. Australian governments have adopted monitoring, evaluation, reporting and improvement (MERI) approaches for natural resource management investment decisions, and all have some form of evaluation for grant-making. Those of us who have conducted social impact assessment in the developing world have noticed lessening resistance here in Australia to local adoption of techniques that were pioneered by international development practitioners to meet the expectations of foreign aid donors and development banks. Yet it appears the “near revolutionary” approach will rely on government officers evaluating bids with a largely 20th-century toolkit.
Opportunity to Improve:
Social investment program managers are increasingly making a specific link to government policy objectives, but monitoring has been undervalued and is consequently often of poor quality. Concepts such as Shared Value and Collective Impact have brought greater discipline to the strategy formulation of social programs and to execution elements such as monitoring and evaluation. Program Logic – a popular tool in international development practice – is essentially the same as the Theory of Change in the Social Return on Investment (SROI) evaluation methodology. The economic concept of Opportunity Cost underpins newer social program evaluation techniques such as Most Significant Change and Best Available Charitable Option. How can all this capacity be applied to the challenge of upscaling innovative programs?
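At its core, the SROI calculation mentioned above is a discounted ratio of monetised social outcomes to investment. A minimal sketch follows; the program values, five-year duration and 3.5 per cent discount rate are hypothetical assumptions for illustration, not drawn from any real evaluation:

```python
def npv(annual_values, rate):
    """Present value of a stream of annual values (year 1 onward)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(annual_values, start=1))

def sroi_ratio(social_values, investment, discount_rate=0.035):
    """SROI = present value of monetised social outcomes / investment."""
    return npv(social_values, discount_rate) / investment

# Hypothetical program: $1m invested, $300k/yr of monetised outcomes for 5 years
ratio = sroi_ratio([300_000] * 5, 1_000_000)
print(f"SROI ratio: {ratio:.2f}")
```

A ratio above 1 indicates the modelled social value exceeds the investment; the choice of discount rate and the monetisation of outcomes are where the contestable judgments live.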
Social impact assessment is a widely-accepted technique to anticipate the full range of effects of policies, social programs and projects. The process leads to the development of an appropriate management response and monitoring framework. It grew out of the practice of environmental impact assessment in North America in the 1970s and it retains the participatory and science-based characteristics we see in assessments conducted by environmental protection agencies across the world. We commonly associate it with infrastructure projects and, to a lesser extent, town planning policy and strategic land use planning. Our experience of social impact assessments of government policy in Australia (as part of regulatory impact statements) has been mixed, with attention to human rights outcomes, for instance, quite basic. SIA has increasingly been applied as a quality check in private sector community investment / CSR programs.
The benefits of ex-ante social impact modelling of social programs are likely to be:
- Optimal design of a program to achieve the desired social change for a given cost, or to minimise cost (where an experimental approach to pilot evaluation is not practical).
- Avoiding high-cost implementation of programs later found to be ineffective.
- Providing evidence of the range of impacts, which can inform program and monitoring decisions, including the choice of sample sizes for ex-post evaluation.
- In cases where there is already a program in place, ex-ante evaluation can identify how the impacts would change if some parameters of the program were changed.
Evaluation of a social program using social impact assessment is not a substitute for ex-post evaluation, but it has more potential to avert costly failures.
The core SIA process can be described as:
- Developing a baseline: a dynamic picture of how the target social system is operating and is expected to operate without the program intervention;
- Applying a modelled “shock” of the intervention to change the system’s functioning;
- Characterising the effects of the intervention and the probability of those effects, based on evidence from similar scenarios;
- Developing a monitoring and evaluation framework.
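The baseline-and-shock logic above can be sketched as a simple model. The structure is ours, and the indicator values and effect estimates are hypothetical placeholders, not real program data:

```python
from dataclasses import dataclass, field

@dataclass
class Effect:
    """A characterised impact pathway with an evidence-based probability."""
    description: str
    annual_change: float   # modelled change in the indicator
    probability: float     # likelihood, from evidence in similar scenarios

@dataclass
class SIAModel:
    baseline: list   # expected indicator values without the intervention
    effects: list = field(default_factory=list)

    def expected_with_intervention(self):
        """Apply the modelled 'shock' as a probability-weighted sum of effects."""
        shock = sum(e.annual_change * e.probability for e in self.effects)
        return [round(v + shock, 3) for v in self.baseline]

# Hypothetical example: share of a cohort on income support over five years
model = SIAModel(
    baseline=[0.40, 0.41, 0.42, 0.43, 0.44],
    effects=[
        Effect("carer support program", -0.05, 0.6),          # likely direct effect
        Effect("displacement of other services", +0.01, 0.3), # unintended effect
    ],
)
print(model.expected_with_intervention())
```

Even a toy model like this forces the proposer to state the baseline trajectory, name unintended effects, and attach probabilities that monitoring can later test.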
Last year, Frank Vanclay of the University of Groningen led a worldwide team to revise the International Association for Impact Assessment’s guidance for conducting an SIA. The four key phases are summarised below. We have annotated them with SIA tasks that should be considered before the roll-out of the Try, Test, and Learn Fund.
Tasks in social impact assessment evaluation of social programs
| Phase | Tasks |
| --- | --- |
| Understand the issues | Understand the proposed program; determine the area of influence; profile the community; design a participative process; scope benefits/risks; assemble a baseline relevant to those benefits/risks. |
| Predict, analyse and assess likely impact pathways | Characterise social changes and specific impacts; identify indirect and unintended impacts; identify cumulative (additive, synergistic) impacts; consult on affected party responses; assess the significance of impacts; assess alternative program designs. |
| Develop and implement strategies | Confirm likely positive impacts; address any negative impacts; address poor-performing program elements; consider client support critical to adoption and behaviour change; develop an action register to address changes; incorporate new actions into operational plans. |
| Design and implement monitoring programs | Select indicators to monitor expected change and detect the unexpected; develop a participatory monitoring plan; determine triggers for adaptive management and an appropriately tiered response strategy; plan for evaluation and periodic review. |
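Triggers for adaptive management with a tiered response can be as simple as deviation thresholds on a monitored indicator. The thresholds and tier names below are illustrative assumptions, not drawn from the IAIA guidance:

```python
# Illustrative tiered-response triggers: the further a monitored indicator
# deviates from its expected value, the stronger the management response.
TIERS = [
    (0.50, "escalate to funder / consider redesign"),
    (0.25, "adapt program elements"),
    (0.10, "internal review"),
]

def adaptive_response(expected, observed):
    """Return the management action triggered by the observed deviation."""
    deviation = abs(observed - expected) / abs(expected)
    for threshold, action in TIERS:  # checked from most to least severe
        if deviation > threshold:
            return action
    return "continue monitoring"

print(adaptive_response(expected=0.40, observed=0.47))
```

Agreeing on thresholds like these before launch is what turns monitoring data into the dynamic management ‘dashboard’ argued for above.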
The level of detail within an SIA process should reflect the risk and scale of a particular social investment program. The desire to de-risk the social investment decision does not mean that all ambiguity can be eliminated: applying SIA will treat risks and reduce, but not eliminate, uncertainty.
The Australian Priority Investment Approach to Welfare has intuitive appeal as a relatively early intervention. The government’s assessment of the merit of innovative proposals will be a critical stage. The greatest challenge will come in comparing apples with oranges: not all outcomes can be compared, and programs will not deliver positive impacts in the same timeframe. Decision-makers will not be able to arrange proposals according to cost-benefit returns and simply select those with the highest ratio. Government should publish guidance requiring proposals to align with specific government policy outcomes. That would allow a fairer comparison of the apples and oranges. Requiring an independent ex-ante evaluation of the likely contribution of a social program to government priorities would make the oranges more closely resemble the apples.
Promoting innovation in social enterprise delivery of social outcomes doesn’t have to mean ignoring the evidence base. Certainly, proponents will draw on previous evaluation work. The government should adopt a common framework for social impact evaluation, a field in which Australia is something of a global leader. It will make the selection process more consistent, predictable and transparent.