Internet users are inundated with attempts to persuade, including digital nudges like defaults, friction, and reinforcement.

When these nudges fail to be transparent, optional, and beneficial, they can become ‘dark patterns’, categorised here under the acronym FORCES (Frame, Obstruct, Ruse, Compel, Entangle, Seduce). Elsewhere, psychological principles like negativity bias, the curiosity gap, and fluency are exploited to make social content viral, while more covert tactics including astroturfing, meta-nudging, and inoculation are used to manufacture consensus. The power of these techniques is set to increase in line with technological advances such as predictive algorithms, generative AI, and virtual reality. Digital nudges can be used for altruistic purposes including protection against manipulation, but behavioural interventions have mixed effects at best.

Introduction

Humans are ‘cognitive misers’ with limited conscious brainpower, relying on emotion and heuristics for most day-to-day decisions [1]. Far from making people more sophisticated, the ubiquitous information age may result in even less cognitive reflection, thanks to an outsourcing of careful thought to ‘the brain in the pocket’ [2] and the effects of technological developments on emotional dysregulation [3]. Few people are sufficiently equipped to manage the cognitive vulnerabilities associated with online decision-making [4], and these vulnerabilities may have a deleterious impact on mental health [5]. It is crucial, therefore, to catalogue the influence strategies used to alter netizens' decisions – the ‘nudges’ – along with the technological advances empowering them and the propagation of ideas online.

Digital nudges

Two recent papers [6,7] have codified digital nudges, which can be broadly categorised into seven types: information provision (such as adding labels or suggesting alternatives); framing (such that some choices seem more attractive or popular); salience, reminders, and prompts; defaults and commitments; adding or reducing friction; reinforcing behaviours with feedback, rewards, and punishments; and engaging users emotionally through elements like urgency, imagery, or anthropomorphism. The papers each identified a category of ‘deceptive’ nudges, though in practice these used specific psychological principles found in the other categories.

Distinguishing between ‘good’ and ‘bad’ nudges, a study of Uber drivers [8] used Thaler's [9] litmus test: a nudge is ‘bad’ if it is misleading or not transparent, if it is hard to opt out of, or if it is unlikely to improve the welfare of the person being nudged. For example, Uber's driver rating is a ‘bad’ nudge since the scoring is not transparent, there is no opt-out, and it is not always in the driver's interest to try to please difficult passengers; whereas the ability to earn badges (e.g., ‘Great conversation’) is ‘good’, being clear, optional, and beneficial. Drivers' satisfaction with these nudges matched their categorisation as good or bad. Another paper defined ‘dark’ nudges in similar terms, stating that they are misleading (i.e., covert, deceptive, asymmetric, or obscuring), restrictive, or unfair [10].

Accordingly, several studies [11••,12,13•,14] have collated so-called ‘dark nudges’ or ‘dark patterns’. Six key themes emerge – Frame, Obstruct, Ruse, Compel, Entangle, Seduce (FORCES) – as summarised in Table 1. On effectiveness, one experiment found that such dark patterns can indeed increase purchase impulsivity [15].

Table 1. Author's typology of dark nudges based on recent reviews [11••,12,13•,14].

Frame: Information is presented in a way that biases choice

  • Extraneous reference prices (e.g., old sale price vs. new sale price)
  • Fake or ambiguous scarcity claims (e.g., low stock, limited time)
  • Fake social proof and parasocial pressure (e.g., high demand label, reviews, endorsements, testimonials)
  • Decoys (i.e., a product added to a set simply to make the others look more attractive)
  • False hierarchies, in which one option is more visually salient than the others
  • Confirmshaming (e.g., ‘No thanks, I don't want to be a better marketer’)

Obstruct: It is made harder for users to do what they intended to do

  • Roach motel tactics, where it is easy to subscribe or access but hard (or impossible) to leave or log out
  • Roadblocks to actions, like time delays to account deletion
  • Price obfuscation (e.g., preventing price comparisons, bundling prices, or using intermediate currencies)
  • Adding extra steps; making navigation or privacy policies labyrinthine; hiding information
  • Using a foreign language, complex wording or jargon to inhibit understanding

Ruse: Users are tricked into making a choice other than what they intended

  • Products sneaked into the basket, usually because an opt-out button was obscured at an earlier step
  • Drip pricing; hidden costs like delivery fees added to basket at the end
  • Ads with a delayed appearance, so that users accidentally click on them when they meant to click something else
  • Disguised ads (e.g., that look like a download button)
  • Ambiguous information causing users to get a different outcome to what they expected
  • Bait and switch, where the user sets out to do one thing but something else happens instead
  • Trick questions (e.g., a list of checkboxes where the first means opt-out and the second means opt-in)
  • Distraction, e.g., focusing attention on one element to distract from a small opt-out checkbox
  • Sponsored adverts disguised as normal content

Compel: Users are forced to do something they may not have wanted to do

  • Forced continuity, like automatically charging a credit card once a free trial comes to an end
  • Grinding, where gamers are forced to repeat the same process in order to secure game elements like badges
  • Forced registration to use a website, and pay-to-play
  • Nagging (e.g., to buy the premium version of a service)
  • Privacy Zuckering and Contact Zuckering, wherein users are tricked into sharing data or address book contacts
  • Defaults and pre-selected options
  • Playing by appointment (users are forced to use a service at specific times lest they lose advantages or achievements)

Entangle: Users are kept occupied for longer than they may have intended

  • Fake notifications (e.g., about content never interacted with) to draw users (back) in
  • Pausing notifications rather than being able to permanently stop them
  • Never-ending autoplay (e.g., a new video plays when the current one is finished)
  • Infinite scroll (i.e., new content continuously loads at the bottom of the feed)
  • Casino pull-to-refresh (i.e., users get an animated refresh of content by swiping down)
  • Time fog (e.g., hiding the smartphone clock so the amount of time spent in the app is not ‘felt’)

Seduce: Users are engaged emotionally rather than rationally

  • Highly emotive language or imagery; cuteness
  • Pressured selling (e.g., under time pressure)
  • Bamboozlement (i.e., choice overload or information overload)
  • Guilty pleasures (i.e., personalised suggestions that prey on individual vulnerabilities)

Importantly, while nudges can be used nefariously online, there is also potential for good. For example, they can be used to help consumers make healthier grocery choices, such as prefilling carts with healthy goods or making healthier goods more visually salient [16]. Nudges can also, ironically, be used to counter manipulation. One experiment found that postponing a purchase decision, being distracted from it, or reflecting on the reasons to buy or not all reduced impulsive purchasing in the presence of dark nudges [15]. Elsewhere, a smartphone app was designed to nudge people towards more conscious social media use [17], detecting users' swiping behaviours to infer their ‘infinite scroll’ and ‘pull-to-refresh’ habits and then encouraging them to consider taking a break if needed. Across a 2-week intervention with 17 users, the app reduced compulsive pull-to-refreshes and shortened the average scrolling session (though there was no impact on total time spent on social media).
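As a rough illustration of how such an intervention might work, the sketch below infers scrolling sessions and pull-to-refresh counts from swipe events and returns a break prompt once a threshold is crossed. The thresholds and the monitoring interface are hypothetical assumptions for the sketch, not details taken from the study [17].

```python
from dataclasses import dataclass, field
import time

@dataclass
class ScrollSessionMonitor:
    """Infers compulsive scrolling from swipe events and suggests a break.

    All thresholds are illustrative placeholders, not values from the study.
    """
    session_gap_s: float = 60.0     # inactivity that ends a scrolling session
    max_session_s: float = 600.0    # prompt after ~10 min of continuous scrolling
    max_refreshes: int = 5          # prompt after 5 pull-to-refreshes in a session
    _session_start: float = field(default=0.0, init=False)
    _last_event: float = field(default=0.0, init=False)
    _refreshes: int = field(default=0, init=False)

    def on_swipe(self, direction: str, now: float | None = None) -> str | None:
        now = time.time() if now is None else now
        # A long pause means the previous session ended; start a new one.
        if now - self._last_event > self.session_gap_s:
            self._session_start, self._refreshes = now, 0
        self._last_event = now
        if direction == "pull_to_refresh":
            self._refreshes += 1
        # Return a gentle prompt when either habit threshold is crossed.
        if (now - self._session_start > self.max_session_s
                or self._refreshes >= self.max_refreshes):
            self._session_start, self._refreshes = now, 0  # reset to avoid nagging
            return "You've been scrolling for a while - take a break?"
        return None
```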

Technological advances

The usefulness of specific nudges can vary according to the psychological make-up of their audience. A review [18••] argued that individual differences can be used to make large-scale behaviour-change interventions more personalised (either by matching content to audiences or vice versa) and thus more effective, pointing to evidence in the domains of political campaigning, health, education, consumer psychology, and organisational psychology. For example, one experiment demonstrated the efficacy of personality-matched messages in politics (e.g., “bring out the hero in you” for extraverts, and “make a small contribution” for introverts) [19]. Machine learning models have also linked the appeal of latent image features to the Big Five (for example, agreeableness to the preferred number of people in an image, introversion to the level of detail, and neuroticism to the number of cats), suggesting that communications can be targeted to audiences on the basis of aesthetics alone [20•].
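A minimal sketch of trait-matched message selection is shown below. The extraversion variants echo the slogans cited above [19], but the other messages, the trait scores, and the ‘pick the most extreme trait’ rule are illustrative assumptions rather than the procedure used in that experiment.

```python
# Illustrative trait-matched message selection; variants and rule are assumptions.
MESSAGE_VARIANTS = {
    ("extraversion", "high"): "Bring out the hero in you - join the campaign!",
    ("extraversion", "low"):  "Make a small contribution, quietly and in your own time.",
    ("neuroticism", "high"):  "Protect what matters most to you and your family.",
    ("neuroticism", "low"):   "A confident step towards the future.",
}

def pick_message(traits: dict[str, float]) -> str:
    """traits: Big Five scores scaled to [0, 1], e.g. {'extraversion': 0.8, ...}."""
    # Target the trait that deviates most from the scale midpoint.
    trait, score = max(traits.items(), key=lambda kv: abs(kv[1] - 0.5))
    level = "high" if score >= 0.5 else "low"
    return MESSAGE_VARIANTS.get((trait, level),
                                "Support the campaign today.")  # generic fallback

print(pick_message({"extraversion": 0.82, "neuroticism": 0.45}))
```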

This personalised persuasion is contingent on being able to detect the personality of the audience. An empirical review, titled ‘Can Machines Read Our Minds?’ [21•], highlighted how samples of behaviour taken from online interactions (so-called ‘digital footprints’) can be fed into machine learning algorithms to automatically infer a wide range of psychological attributes – for example, personality from gait, depression from tweets, and sexual orientation from profile photos. For instance, a computational text model based on fiction-writing subreddits predicted personality with an average performance of r = 0.33; linguistic markers included swear words for disagreeableness, the word ‘game’ for introversion, and ‘damage’ for neuroticism [22].
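The generic pipeline behind such findings – bag-of-words text features feeding a regularised linear model, evaluated by the correlation between predicted and self-reported trait scores – can be sketched as follows. This is an illustrative reconstruction using scikit-learn, not the actual model from [22].

```python
# Sketch of a generic 'digital footprint -> trait' pipeline: TF-IDF text features,
# ridge regression, and evaluation by Pearson correlation on held-out users.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

def evaluate_trait_model(posts: list[str], trait_scores: np.ndarray) -> float:
    """posts: one text sample per user; trait_scores: one Big Five score per user."""
    X_tr, X_te, y_tr, y_te = train_test_split(posts, trait_scores,
                                              test_size=0.3, random_state=0)
    model = make_pipeline(TfidfVectorizer(min_df=2, sublinear_tf=True),
                          Ridge(alpha=1.0))
    model.fit(X_tr, y_tr)
    r, _ = pearsonr(model.predict(X_te), y_te)
    return r  # the reviewed studies report r in roughly the 0.2-0.4 range
```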

Indeed, a meta-analysis of 21 studies demonstrated how the Big Five personality traits could be predicted from smartphone data (a phenomenon the authors called ‘digital phenotyping’), with an association of r = 0.35 for extraversion and associations ranging from r = 0.23 to 0.25 for the other four traits [23]. For instance, extraverts show a higher frequency of calling behaviours, while neuroticism is linked to more time spent consuming media. Meanwhile, a second meta-analysis [24] similarly demonstrated how the Big Five could be predicted from social media footprints, with correlations ranging from 0.29 (agreeableness) to 0.40 (extraversion). Notably, targeting item-level nuances (e.g., gregariousness as a facet of extraversion) may lead to small but meaningful improvements in prediction accuracy [25].

Putting it all together, a large Australian bank predicted customers' personalities from the text and voice data of their interactions, then sent either personality-targeted or generic advertising messages; the targeted messages achieved a conversion rate of 2.24% versus 1.24% for the generic ones [26•].
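To put those conversion rates in perspective, the short calculation below derives the relative uplift and runs a standard two-proportion z-test (via statsmodels). Only the 2.24% and 1.24% figures come from the study [26•]; the group sizes are hypothetical placeholders chosen purely for illustration.

```python
# Relative uplift and a two-proportion z-test for the reported conversion rates.
from statsmodels.stats.proportion import proportions_ztest

p_targeted, p_generic = 0.0224, 0.0124
print(f"relative uplift: {(p_targeted - p_generic) / p_generic:.0%}")  # ~81%

n_targeted, n_generic = 10_000, 10_000          # assumed group sizes, for illustration
successes = [round(p_targeted * n_targeted), round(p_generic * n_generic)]
z, p_value = proportions_ztest(successes, [n_targeted, n_generic])
print(f"z = {z:.2f}, p = {p_value:.4f}")
```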

Of course, generative AI has enormous potential to make this kind of personalised persuasion scalable: a series of four studies [27•] found that, across 33 messages tested, 61% of personalised messages produced by ChatGPT were directionally and significantly more effective than non-personalised equivalents (a proportion significantly higher than chance).

Indeed, across three experiments and a sample of 4836 participants, messages created by ChatGPT were persuasive across a range of political issues including gun rights and carbon taxes – in fact, AI-generated messages were as persuasive as those written by humans [28], a finding echoed elsewhere [29]. There is also some mixed evidence that artificial pictures and videos created by generative AI (‘deepfakes’) can create false memories: people were significantly more likely to ‘remember’ Jim Carrey having starred in a remake of The Shining (which he hadn't) if the prompt was accompanied by a deepfake photo or video [30].

The persuasiveness of AI-generated content is mediated by its perceived verisimilitude and its perceived creativity, since creative content is more attention-grabbing and engaging [31]; another mechanism is the vividness advantage of AI-generated photos and videos over text, which increases credibility and engagement [32]. Concerningly, human ability to detect ‘deepfake’ images of faces is only just above chance and does not improve with interventions, even while confidence in this ability remains high [33].

Similarly, the burgeoning landscape of virtual and augmented reality and the metaverse is fertile ground for psychological influence. A meta-analysis of 39 social studies found that virtual reality has a significantly bigger impact on social attitudes around topics like migration, mental health, and intergroup conflict than non-immersive interventions do [34•].

Propagation of ideas

Psychological principles are used to enhance the virality of ideas – and one of the biggest predictors is emotional arousal. An analysis of 3000 tweets from Austrian politicians found that high emotional arousal increased the likelihood of a tweet being reshared [35], while an analysis of 10,141 influencer tweets found that emotional content drives sharing more strongly than argument quality does [36]. News is similarly more likely to be shared if the headline uses surprise and exclamation marks [37].

Within emotions, there is a strong negativity bias. An investigation of 51 million tweets about current affairs hypothesised and found that message virality is driven by three things: negativity, causal arguments (e.g., apportioning blame), and threats to personal or societal values [38]. Tweets and fake news alike are more likely to go viral if they involve strong negative emotions [37,39]. In fact, an analysis of 105,000 news stories found that each additional negative word in the headline increased click-through rates by 2.3% [40].
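As a worked example of what that headline effect implies, the snippet below scales a baseline click-through rate by 2.3% per additional negative word. The baseline rate and the assumption that the effect compounds multiplicatively are illustrative; the underlying model in [40] is not specified here.

```python
# Worked example of the headline-negativity effect on click-through rate (CTR).
baseline_ctr = 0.020            # hypothetical 2% baseline click-through rate
per_word_lift = 0.023           # +2.3% per additional negative word [40]

for n_negative_words in range(4):
    ctr = baseline_ctr * (1 + per_word_lift) ** n_negative_words
    print(f"{n_negative_words} negative words -> CTR {ctr:.3%}")
# 0 -> 2.000%, 1 -> 2.046%, 2 -> 2.093%, 3 -> 2.141%
```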

This negativity bias may be recognised and used by bad actors, with clickbait being more likely to contain negative emotion [41] and fake news being more likely to involve sensational content like crime [42].

However, ‘clickbaitiness’ may reduce sharing due to perceptions of manipulative intent [41], and many studies have reported that positive emotion makes content more likely to be shared, perhaps because users would rather be the messenger of positive news [35,36,41]. Similarly, dominance has been found to be the strongest predictor of sharing viral advertising, and a follow-up study found this effect was mediated by a feeling of psychological empowerment [43].

Besides emotion, various elements are used to increase virality. Clickbait headlines (for example, ‘The Scary New Science That Shows Milk Is Bad For You’) are more likely to omit information, exploiting a psychological principle called the ‘curiosity gap’; the more information a headline omits, the more likely it is to be shared [41]. Other important features include: interactive elements like URLs or mentions [44]; simplicity, such as shorter words and more readable language [37,45]; sensory language, where an additional sensory word on TikTok has been associated with 11,030 additional likes or comments [46]; and authoritative language (such as more ‘we’ words and fewer negations or swear words) [36].
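A toy feature extractor for the kinds of headline cues just described might look like the sketch below. The word lists are tiny placeholders, and the features (e.g., forward-referring words as a rough proxy for the curiosity gap, average word length as a readability proxy) are illustrative rather than measures used in the cited studies.

```python
# Illustrative extractor for virality-related headline cues; lists are placeholders.
import re

FORWARD_REFS = {"this", "these", "here's", "what", "why"}   # curiosity-gap proxy
SENSORY = {"crunchy", "glowing", "silky", "loud", "sweet"}  # sensory language
NEGATIVE = {"scary", "worst", "bad", "fail", "danger"}      # negativity

def headline_features(headline: str) -> dict[str, float]:
    words = re.findall(r"[a-z']+", headline.lower())
    return {
        "curiosity_gap": sum(w in FORWARD_REFS for w in words),
        "negativity":    sum(w in NEGATIVE for w in words),
        "sensory":       sum(w in SENSORY for w in words),
        "avg_word_len":  sum(map(len, words)) / max(len(words), 1),  # readability proxy
        "has_mention_or_url": float(bool(re.search(r"@\w+|https?://", headline))),
        "exclamations":  headline.count("!"),
    }

print(headline_features("The Scary New Science That Shows Milk Is Bad For You"))
```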

More covert tactics are often used online to seed and crystallise public opinion. An empirical analysis of the flat Earth community on YouTube [47] found a two-stage process: first, ‘seeding’, in which agents insert deceptions into the discourse disguised as legitimate information; second, ‘echoing’, in which viewpoints become solidified through identity-driven argumentation (e.g., ‘us vs them’).

Both seeding and echoing are facilitated by ‘astroturfing’ – inflating perceptions of the popularity of an opinion or policy through bots or agents who mimic genuine humans online, thereby manufacturing consensus. A paper in Scientific Reports found consistent patterns of coordination for Twitter astroturfing across a wide range of countries, political contexts, and time periods [48].
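One common coordination signal – pairs of accounts that repeatedly share the same item within a narrow time window – can be sketched as below. This illustrates the general idea of coordination detection and is not the specific method used in [48]; the window and threshold values are assumptions.

```python
# Minimal sketch of one astroturfing signal: accounts that co-share items in near-lockstep.
from collections import Counter
from itertools import combinations

def coordinated_pairs(shares, window_s=10.0, min_coshares=5):
    """shares: list of (account_id, item_id, unix_timestamp) tuples."""
    by_item: dict[str, list[tuple[str, float]]] = {}
    for account, item, ts in shares:
        by_item.setdefault(item, []).append((account, ts))

    pair_counts: Counter[tuple[str, str]] = Counter()
    for posts in by_item.values():
        posts.sort(key=lambda p: p[1])
        for (a1, t1), (a2, t2) in combinations(posts, 2):
            if abs(t2 - t1) <= window_s and a1 != a2:
                pair_counts[tuple(sorted((a1, a2)))] += 1

    # Flag account pairs that co-share suspiciously often.
    return {pair: n for pair, n in pair_counts.items() if n >= min_coshares}
```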

On the one hand, astroturfing seeds ideas and gives the illusion of social proof: one experiment found that adding just two contra-narrative comments underneath a news story shared on Facebook was enough to significantly bias opinion away from the direction of the story – and that the three interventions tested had no mitigating effect in the long term [49]. A similar principle is ‘inoculation’, in which, for example, agents seed a weakened version of an argument into the discourse (e.g., framing it as wrong, bad, or ridiculous) in order to prevent the audience from engaging with or believing the idea when they encounter it ‘in the wild’ [50•].

On the other hand, astroturfing reinforces ideas via polarising debate: an analysis of 309,947 tweets found evidence of an organised network in which ‘cybertroops’ engaged in astroturfing by mixing disinformation or polarised content with benign topics of interest to the target group [51]. Indeed, more polarising influencers produce stronger engagement for the brands who sponsor them, since controversy is emotionally engaging and since the influencer's fans will rush to defend them (and thus their own identity) from attacks via motivated reasoning [52].

Importantly, the effectiveness of online nudges can be dampened by audience scepticism and reactance, and thus some have pointed to ‘meta-nudging’ as an alternative approach; it involves sending a nudge indirectly via social influencers who, being trusted authorities, are better placed to change behaviour and enforce norms [53•]. Indeed, followers can develop parasocial relationships with influencers, which in turn produce feelings of connection and community and foster the creation of personal identities [54], and make people more likely to adopt influencers' recommendations [55]. Even computer-generated influencers can have a psychological impact on audiences, wherein followers anthropomorphise them, blurring the lines between the real and the unreal and producing feelings of friendship, belonging, and jealousy [56].

Impact on mental health

These tactics for digital persuasion encourage people to engage more heavily [57] with technologies like social media, which may have deleterious effects on mental health [58]. They may similarly ‘nudge’ people into unhealthy behaviours like impulsive purchasing [15] and online addiction [59]. Manipulation can also foster feelings of helplessness, and thus paranoia, amongst its target audiences [60]. On the other hand, well-crafted behavioural interventions online also have the potential to help people achieve better mental health by, for example, nudging them into more conscious screen time [17] or delivering psychologically targeted mental health campaigns [61].

Conclusion

Recent evidence demonstrates how users can be influenced online – not always to the benefit of their mental health – through dark patterns, emotionally-charged social media content, and the covert use of astroturfing and meta-nudging, while the risk of manipulation looks set to grow in line with advances in predictive algorithms, generative AI, and VR. Behavioural interventions appear to have small effects at best.

References
