Five Reasons Why Infrastructure Sustainability Assessments Fail to Manage Community Risk #3

This is the third in a series of five posts exploring common community engagement failures in infrastructure planning. It follows on from an introduction of the potential benefits and shortcomings of Multicriteria Analysis (MCA) and the consequences of an absence of Materiality Analysis. This post focuses on how biases during the assessment can affect the integrity and credibility of MCA processes.

Reason #3: Failure to recognise how criteria design reflects world-view bias

Now armed with a richer understanding of stakeholders’ objectives, arranged in a hierarchy consistent with the project’s overall charter, we can consider the characteristics of effective evaluation criteria and how stakeholder values and objectives are addressed in their design. There are no right or wrong evaluation criteria for considering community outcomes, but some criteria are more useful than others.

It is important that, ultimately, each design option can be judged against each criterion independently. A criterion may use an objective, commonly understood scale of measurement, such as the area of vegetation cleared, or dollars for financial or economic impacts. Such ‘natural’ criteria are preferable because they are more readily understood: they directly describe the objective. Alternatively, a criterion can rely on the subjective assessment of an expert and use a ‘constructed’ scale (eg expert ratings or opinions). A strength of MCA is its ability to accommodate and use both forms of assessment simultaneously.
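Combining both forms of assessment usually means mapping each raw scale onto a common 0–1 value scale before any weighting. The sketch below illustrates the idea; the options, figures and criterion names are invented for illustration, and a simple linear value function is assumed:

```python
# Sketch: combining a 'natural' criterion (hectares of vegetation cleared,
# lower is better) with a 'constructed' criterion (an expert amenity rating
# on a 1-5 scale, higher is better) by normalising both onto a common
# 0-1 value scale. All figures are illustrative only.

options = {
    "Route A": {"cleared_ha": 12.0, "amenity_rating": 4},
    "Route B": {"cleared_ha": 30.0, "amenity_rating": 5},
    "Route C": {"cleared_ha": 5.0,  "amenity_rating": 2},
}

def normalise(raw, worst, best):
    """Linear value function: worst level -> 0.0, best level -> 1.0."""
    return (raw - worst) / (best - worst)

for name, perf in options.items():
    veg = normalise(perf["cleared_ha"], worst=30.0, best=0.0)  # less clearing is better
    amen = normalise(perf["amenity_rating"], worst=1, best=5)  # higher rating is better
    print(f"{name}: vegetation value={veg:.2f}, amenity value={amen:.2f}")
```

Note that the direction of each scale is handled by the choice of `worst` and `best` anchors, so that a higher normalised value always means a better outcome; in practice the value function need not be linear.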

In either case, however, the criterion must be defined clearly enough to be assessed. Some characteristics of good evaluation criteria (adapted from Keeney and Gregory, 2005) are:

  • Accurate: an unambiguous and accurate relationship exists between the criterion and the modelled consequences
  • Understandable: consequences and trade-offs can be understood and communicated by everyone involved
  • Comprehensive: the set of criteria covers the full range of relevant consequences and categories of option performance, while the number remains manageable and there are no redundancies
  • Direct and outcome-oriented: criteria report directly on the consequences and provide enough information to allow informed, reasonable value judgments
  • Measurable and consistently applied: criteria allow consistent comparisons across alternatives and do not exclude qualitative characterisations of impact, or impacts that cannot be physically measured
  • Practical: the information needed to assess options can practically be obtained
  • Non-redundant: each criterion provides distinct information that is useful in comparing alternatives
  • Explicit about uncertainty: criteria expose differences in the range of possible outcomes (differences in risk) associated with different alternatives

MCA criteria may need innovative design to be meaningful to stakeholders. For example, First Nations peoples often face difficulty bridging the scientific world and spiritual ways of knowing truth while taking non-Indigenous people with them on the journey. The MauriMeter tool is helping Maori peoples explicitly model the differences between Maori and non-Maori world views, and how these views affect perceptions of both the historic baseline and modelled change (impacts).

Individual criteria frequently need to be grouped to keep their number manageable. Grouping criteria has several benefits:

  • it helps check whether the set of criteria selected is appropriate for the hierarchy of objectives
  • it makes it easier to calculate the weighting of a large set of criteria
  • it reduces the cognitive load of taking a high-level view of the issues, especially how trade-offs between objectives are sensitive to weightings

However, the grouping of criteria can itself affect the outcomes of an MCA. The principal difference between the main families of MCA techniques is the way in which criteria are grouped and weighted, and how option performance on each criterion is aggregated.

The relative merits of the available methods need separate discussion; suffice to say, it is not always appropriate simply to multiply the value score on each criterion by the weight of that criterion and then add all those weighted scores together (ie a linear additive model). So an MCA that proposes (as many still do) to group evaluation criteria into three groups (economy, environment and social/community) and then give them equal weight is, at best, likely to double-count performance and, at worst, unlikely to align the assessment with stakeholder values.
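The arithmetic behind this warning can be sketched as follows. All criterion names, scores and weights below are invented for illustration; the sketch assumes a linear additive model throughout, and simply contrasts equal weight per group with equal weight per criterion:

```python
# Sketch of linear additive MCA aggregation (illustrative figures only).
# Overall score = sum over criteria of (weight x normalised 0-1 score).
# Giving three groups equal weight, regardless of how many criteria sit
# in each, dilutes the individual criteria in the largest group.

scores = {
    "Option 1": {"economy":     {"capex": 0.2},
                 "environment": {"vegetation": 0.9, "noise": 0.9, "water": 0.9},
                 "social":      {"severance": 0.2}},
    "Option 2": {"economy":     {"capex": 0.8},
                 "environment": {"vegetation": 0.4, "noise": 0.4, "water": 0.4},
                 "social":      {"severance": 0.8}},
}

def group_weighted(option):
    """Equal weight per group (1/3), shared equally within each group."""
    total = 0.0
    for criteria in option.values():
        for s in criteria.values():
            total += (1 / 3) / len(criteria) * s
    return total

def flat_weighted(option):
    """Equal weight per criterion, ignoring the grouping entirely."""
    all_scores = [s for criteria in option.values() for s in criteria.values()]
    return sum(all_scores) / len(all_scores)

for name, option in scores.items():
    print(f"{name}: grouped={group_weighted(option):.3f}, "
          f"flat={flat_weighted(option):.3f}")
```

With these invented figures the two schemes disagree: per-criterion weighting ranks Option 1 first, while equal group weights rank Option 2 first, purely because of how the criteria were bundled. Neither ranking is “correct” unless the implied weights actually reflect stakeholder values.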

A grouping of criteria should be logical, unambiguous, and be both understood and supported by the stakeholders taking part in the process. Particular care should be taken when demonstrating the distribution of impacts on different parts of the population, to avoid double-counting across multiple criteria.

Key point #3

How criteria are designed and defined will have a major bearing on the outcome of an options study. This is not a technical task but one in which stakeholders should be supported to participate.