Nudge & Sludge (Guest Post by Gino Engle)
By now we are all well-acquainted with the notion of ‘nudge’, as popularised by Richard Thaler and Cass Sunstein. Nudges alter the choice architecture around decision-making so that more helpful decisions are made, more often. Simply put, nudges change behaviour by helping individuals make better decisions: from encouraging healthier eating by placing fruit at eye level, to boosting retirement savings by making pension schemes the default option. Almost a decade since the release of Nudge – and with government-led ‘nudge units’ springing up across the world – the notion has become something of a new orthodoxy.
As Cait Lamberton puts it, however, while nudge theory is becoming increasingly creative, there are many cases where nudges fail to have their desired effect. Even when a nudge successfully changes behaviour overall (typically signalled by statistical significance), a large proportion of the population can remain unaffected by the intervention.
These problem situations are what Lamberton dubs ‘sludge’, which refers specifically to the limiting characteristics of either: 1) the decision environment or 2) the decision-maker themselves. A great example is a recent study conducted by the Behavioural Insights Team (BIT). Originally established as the United Kingdom (UK) Government’s Nudge Unit, it has since become a private entity and extended its work beyond the British cabinet. The BIT sought to reduce individuals’ risk of diabetes by sending feedback SMSes whilst they participated in a physical exercise programme. The results showed no statistically significant reduction in blood glucose levels within the test group, despite the overall reductions produced by participating in the programme itself. Clearly, in this case, the nudge seems to have failed – and sludge analysis may uncover the reasons why. Perhaps the sludge could have been mitigated by more appropriate wording, or timing, of the messages, or even by linking rewards to the feedback given.
More generally, Lamberton suggests two approaches in the effort to somehow ‘sludge-proof’ our nudge interventions:
- Bundling nudges to raise overall effectiveness, and
- Customising (i.e. personalising) nudges to individual goals and challenges.
In fact, a separate study conducted by BIT Australia supports the idea of customising nudges, albeit in a different context from the previous experiment. Tasked with motivating hospital staff to increase their physical exercise, BIT Australia found that personalised exercise targets led to a significant increase in overall exercise in the form of the number of steps walked. The results for personalised motivational feedback also supported using customisation to increase the efficacy of a given nudge. Furthermore, by employing FitBit technology, the study additionally highlights compatibility between the power of big data analytics and creative behavioural interventions.
In truth, the issues behind tackling sludge are probably much more practical than just bundling or personalising interventions. Nudges are most useful to the extent that they are relatively low-cost solutions and, inevitably, adding to interventions raises that cost. As Lamberton suggests, however, in some cases simply taking more time to develop more effective nudges might be worth it. This applies to the creativity of nudges as much as to the surrounding ‘brainstorm’ and design processes. An iterative approach of ‘nudging nudges’ is perhaps more likely to produce habit-shaping behavioural change.
In light of Angus Deaton’s recent rumbles about randomised control trials (RCTs), it seems as though the best practices of behavioural science are currently under question. Randomisation has become increasingly popular amongst researchers seeking to reduce biases – particularly selection bias – when running trials, and has its roots as the ‘gold standard’ for clinical trials.
Similarly, the practice of blinding (or masking) is standard in clinical trials, where information about the allocated intervention is withheld from participants. Blinded experiments may hide information from the participant alone (single-blind) or from both participants and investigators (double-blind). Blinding has important implications for minimising bias and thus improving the decision-making process. For example, blind auditions are routine in recruiting musicians and have increased the acceptance rate of women into orchestras by up to 50%. Blind reviews are also common practice for peer-reviewed journals, as well as for grading high-school examinations.
In their critique, Deaton and Cartwright suggest that key limitations of RCTs can be traced to underestimating the importance of blinding procedures. They are correct that blinding is rarely possible in social science trials: participants are often aware of the treatment, and unconscious biases surface in their behaviour. In such instances, the extent of those biases is simply unknown to the researcher, rendering the entire process of randomisation somewhat redundant.
A recent paper which seeks to introduce blinding into the criminal justice process, however, highlights new possibilities for current RCT practices. Taking racial bias as their point of departure – which is evidenced by large disparities in the imprisonment rates of various racial groupings in the United States – Sunita Sah, Christopher Robertson, and Shima Baughman state that “preventing racial information from reaching key decision-makers could be the best way to make justice truly blind”. Not only does their research claim that prosecutors are the most important decision-makers in the criminal justice process, but their recommendation of blinding cases by excluding information on race aims to meaningfully reduce likely prosecutorial bias with an ingenious and low-cost method.
Blinding for race or gender may have wider-ranging implications for behavioural research. Sah, Robertson and Baughman’s research effectively addresses some of Deaton and Cartwright’s criticisms, suggesting that blinding ought to be trialled more frequently as a means to more robust evidence-based decision-making, thereby enabling effective nudges for better behavioural outcomes.
By Gino Engle (Guest from the Western Cape Government)