Decisions on behalf of others: different institutional and disciplinary interpretations of risk

This was a discussion topic at the UK Shelter Forum in May 2016, hosted for the first time by the Institution of Structural Engineers, with help from CARE. A number of the organisers wanted to explore the question of structural risk and I agreed to chair a panel, as long as we did not talk exclusively about buildings… We managed to draw in panellists, new to the UKSF, who could illustrate different organisational and disciplinary approaches to a range of risks and reflect on the investment/intervention decisions made about risk on behalf of the public, particularly just after a cataclysmic event.

But why this topic? My experiences as a member of the ‘international community’ left me jaded. Firstly, I thought bad decisions were being made. But, having been ‘in the room’, I could not dismiss these as either conspiracy or accident: we were not deliberately conspiring to ride roughshod over Country X, but nor were we doing anything without thinking. Yet I could not find – in our industry’s self-evaluation – a satisfactory account of these processes, or an analysis of our discourse that helped me understand how we could avoid doing this again. Secondly, for many, many years it has been difficult after disasters to channel money away from imported shelters and towards the repair of (motley) private dwellings. My view is that this difficulty has been exacerbated by engineers giving advice – at the highest levels – that is structurally sound but culturally inappropriate. Engineers play a part, of course, but I have come to see the challenge of our overlooking repair as ‘structural’ in the political and economic sense: the reasons to bypass repair are embedded not just in the built environment but in all of our systems and stuff.

Our panel was a chance to explore this, but with an eye to the future.

It seems that we are on the cusp of some profound technological changes – some of which are likely to disrupt our ‘expert’ status as engineers and humanitarians. Machines are not just doing things for us; they are deciding things for us. We can imagine the future of work as a fusion of craft and data.

By craft, I mean being ‘on site’, making judgments about the sights, sounds and feel of materials, and paying attention to builders, users and place… Or – as a colleague once suggested – the skills to deal not just with complex structures in simple contexts but with simple structures in complex contexts. Marshalling analytical and diagnostic skills as well as material and human experience. Mastering the bespoke. Arriving at a shared understanding of risk before deciding unilaterally to mitigate it. And making routine any bureaucratic or rule-based bit of design, so that it can be done by an app or by a cheaper human (elsewhere). But how many engineers are comfortable diagnosing unconventional materials and workmanship, non-standard details or severe damage? And what would ‘aid craft’ look like anyway? How many aid workers find that being ‘professional’ just means being risk-averse experts in our own bureaucracy?

By data, I mean more real-time, sensed data that might indicate (or not) the obsolescence of our stuff. Imagine, for example, that consumer protection and safeguards on public health and safety shift away from a system of codes, voluntary labelling and self-certification devised in ideal conditions and backed by governments. Imagine a shift towards real-time monitoring of vibration, strain or displacement – systems that can check performance in context. Imagine how this might challenge the ways engineers are regulated and self-regulate. I find this intriguing, given that we often come up with disaster ‘solutions’ that are essentially regulatory, without considering the context in which such rules are created and policed. Would this make the dogmatic code-followers (and their lawyers and professional qualifications) obsolete and give new power to the lay-activist-geek-citizen? Does anyone want that?! Would it allow risks to be seen in a broader context of trade-offs, or over a longer period? What if we could ‘sense’ and ‘algorithm’ ourselves away from engineered regulation and towards public shame, sanction and deliberation? Will the people who pay expensive people like us to manage risk keep paying for anything but the messiest local diagnostic craft, monitored by the shiniest remote machine learning?

We can foresee that the information needed to aid decision-making and scrutiny will increasingly sit in the hands of the public, of people affected by disasters, of individuals and corporations – and inside machines – rather than be held by ‘experts’ and governments. This will mean profound change and carry a whole new set of risks and responsibilities. And so we can imagine a future that is politically and environmentally more risky and complex than the one in which we were trained.

Why was this worth discussing at the UKSF inside the IStructE?

Well, I often wonder whether our experience after disasters gives us both a glimpse of that future and a special insight into our professions. We work with juxtapositions worthy of a sci-fi apocalypse: epic destruction followed by the roll-out of retinal scanning at a scale and sophistication beyond any technology we have at home. And we find ourselves temporarily more powerful than our commercial peers, who may be in thrall to London property developers but are also constrained by planning processes.

For me, the space between conspiracy and accident is where this exceptional power and our professional training collide. Just put some of the questions about repairing non-engineered dwellings against an engineer’s or a humanitarian’s code of conduct, and you will immediately find contradictions: “to undertake only those tasks for which we are competent, to have regard to the public interest, not to maliciously or recklessly injure…”; or to do no harm, to provide equitable help on the basis of need, to build back better, to minimise programme risks… Add to this the short run, the long run, risks to life and health, risks to reputations, risks of litigation… and decisions are quickly riddled with dilemmas that we rarely confront and cannot resolve in isolation.

As for our training, should we not also think about the way a technical, professional education acculturates us, seeming to encourage both an overblown sense of protagonism (a thrusting desire to solve problems) and its twin tendency of ‘solutionism’ (a preference for abstracting out of context only those problems we are equipped to solve)? Is this why INGOs tend to conflate a perceived high risk of funding terror (a low-probability global risk) with the risk of pseudo-legal, reputational damage (a medium-probability risk to headquarters), while overlooking the risks faced by non-international staff on the ground and by our non-branded partners – risks which are much more likely, but local and remote from us? Does this explain the disproportionate panic about getting ‘gamed’ or swindled when giving people cash instead of our heroic solutions?

So, I hoped we would reflect on:

  • how we – the qualified, the professionals, the powerful but non-governmentally organised – have been conditioned to treat risk on behalf of others and in their absence; and
  • whether we are in danger of focusing on risks we (think we) are able to understand and control, like structural risk, while overlooking the risks that poor people have to live with – risks that are less tangible and less easy for us to understand.

All this, perhaps, the better to examine how the choices of those affected by disaster – from governments and elites to the most disempowered – are really shaped.

A more detailed write-up, including bios of our eminent and stimulating speakers, will follow.