Are we nodes or are we noodles?

The new Professor of Internet Studies at the Oxford Internet Institute, Philip Howard, gave his inaugural lecture last week. It’s now available online but, to save you time, I watched it and summarised what I thought were the most interesting bits, for the fourth of these posts on fake news (previous posts here, here and here). There was a certain amount of flummery at the start – not the soft pudding type – which you can skip if you decide to watch it.

[Image: flummery pudding, also known as mahalabia]

Also, of course, some daft clothes. But despite the Oxfordy business the OII is a useful place to know about and has done good research ever since it started. I went to the launch conference back in 2002 when I was researching internet-related stuff for a doctorate. I liked their ethnographic style, thought it looked promising then and think it’s delivered since, for instance with regular surveys of British users and non-users of the internet, critical studies of Wikipedia, and a strong focus on ethical issues. The launch was at the Saïd Business School, the building with the ziggurats near the railway station, as the Institute itself is housed in a small building on St Giles near Balliol with no space for large events.

[Image: OII logo]

Fifteen years ago, the OII’s launch conference ran a session on ‘Participation and Voice’, asking whether the technology would improve or worsen the democratic process. This month Phil Howard asked something similar: Is social media killing our democracy?

He began by arguing that ‘the Internet’ is misnamed, as there are now multiple internets. There’s a Snapchatty, Yik-Yakky one for under-17s that people like him don’t use. Far-right conspiracy theorists get together on another one. China has its own internet, built from the ground up as an instrument for social control and surveillance. Some argue that Russia and Iran have the same thing – a distinct internet. The cultures of use are so different that it’s tough for researchers to study them all. The Prof then briefly narrated the development of his research by showcasing some of his publications, as he’d been coached that this was the right thing to do in an inaugural lecture.

His first book, New Media Campaigns and the Managed Citizen (2005), was an ethnography of the Gore and Bush US election campaigns. The people he studied and got to know were the first of a new breed of e-politics consultants. He discovered that ‘a small group of people – 24 or 25 – make very significant design decisions that have an impact on how all of you experience democracy’. These people formed a small community, socialised together and worked ‘across the aisles’ for Republicans or Democrats as needed. At the end of the campaign several of them went off to work in the UK, Canada, Australia and various other countries, taking the tricks of public opinion manipulation they had developed, funded by big money, and applying them in democracies around the world. His conclusion was that this is how innovation in political manipulation now circulates: via these kinds of roaming consultants with expertise for hire.

Next he turned to investigating the consequences of internet access in 75 mainly Muslim countries, in a book that, amazingly, you can download for free. His idea was to see how things worked out in societies where censorship and surveillance are permitted and encouraged as a means of cultural protection; countries that liked to participate in the global economy, but in constrained ways. He observed significant changes in gender politics, in where people went to learn about religious texts, and above all in young people using information technologies to figure out that they shared grievances. He found a clear arc from the mid-2000s to the ‘Arab Spring’. So while his first book was about political elites and the manipulation of democracy, the second was about the elites being caught off guard.

His work on ‘the internet of things’, Pax Technica, was more predictive, and although the book wasn’t well received he was insistent that it’s necessary to pay attention: look back at what has already happened to online privacy, and look forward to guard against what could happen next. He reckons the privacy fight is already lost as far as the current internet(s) are concerned, so we need to think ahead. To quote:

The internet of things is the next internet, the one you will not experience through a browser. It will track you in almost everything you do. For a variety of reasons it will generate almost perfect behavioural data that will be useful to social scientists but also governments and security services. In the next few years we have to wrestle with who gets this data, and why, and when…

By 2020, 50 billion wireless sensors will be distributed around the world – there will be many more devices than people, to say nothing of satellites, drones and smartphones that people carry. There will be vast amounts of behavioural data on consumption habits, and in democracies any lobbyists who can will try to play with this data…

The average smartphone has 23 apps. The average app generates one location point per minute – little bits of evidence of where we are in physical space…few organisations have the analytical capacity to play with this data – Facebook does. Few do much with it – advertising firms do. Some apps read each other’s data. It’s fodder for an immense surveillance industry that’s about providing you with better consumer goods, identifying your location in space, providing data brokers [with info on us]…
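To get a feel for the scale of those figures, here is a quick back-of-the-envelope calculation – a minimal sketch in Python where the 23-apps and one-point-per-minute numbers come from the lecture and everything else is simple arithmetic:

```python
# Rough scale of smartphone location data, using the lecture's figures:
# 23 apps per phone, each generating one location point per minute.
APPS_PER_PHONE = 23
POINTS_PER_APP_PER_MINUTE = 1
MINUTES_PER_DAY = 24 * 60

points_per_day = APPS_PER_PHONE * POINTS_PER_APP_PER_MINUTE * MINUTES_PER_DAY
points_per_year = points_per_day * 365

print(f"{points_per_day:,} location points per phone per day")    # 33,120
print(f"{points_per_year:,} location points per phone per year")  # 12,088,800
```

In other words, one unremarkable phone leaves a trail of over thirty thousand position fixes a day – which is why Howard calls this ‘almost perfect behavioural data’.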

His new programme of research at the OII is looking at social media, fake news, and computational propaganda, or in other words, algorithms and lies. Here are a couple of tasters. How to identify a bot: there are some giveaways. Bots tend to do negative campaigning. They don’t report positive policy ideas or initiatives.

Other giveaways include anger, moral judgments, and pictures of politicians taken at ridiculous angles ‘saying’ things they probably never said. Bots also migrate from one topic to another, e.g. latching on to Brexit after years of tweeting about something else. A small handful of accounts that worked on Brexit then became interested in the US election and were pro-Trump; a small number then became interested in the Italian referendum and the French elections, and have now come back to the UK. Just as there was a cycle of expertise among the human consultants in the US who took the craft of political manipulation across multiple domains and multiple regime types, there are now social media accounts – with humans behind them – that craft political messages and move from target to target, meddling in particular domains as needed. One of the great research questions now facing us is: who are these people, and to some degree, how do we inoculate our democracies against their ill effects?

Howard prefers to call them highly automated accounts because there is always a human behind them. They do not look like this.

[Image: cartoon robot]

These automated accounts are not up and running all the time; they get turned off after an election. They show noticeable changes of strategy in response to events, for instance spikes in activity timed to coincide with debates. Howard thinks all this presents us with real problems and that social media has made democracy weak: ‘It has a compromised immune system. We’ve gone through that learning curve from social media as exciting opportunity for activists to tools for dictators.’ To make matters worse, people are selectively exposing themselves to secondhand sources of information that intensify what they already believe – a process that might be called elective affinity – so any bias doesn’t meet with much challenge.
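Pulling those giveaways together, here’s a minimal Python sketch of how a crude automation score might combine them. To be clear, the thresholds, field names and scoring are my own invented placeholders for illustration, not Howard’s actual methodology:

```python
# Crude heuristic for flagging possibly highly automated accounts.
# All thresholds below are illustrative guesses, not research-grade values.
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float           # sustained posting rate
    negative_share: float           # fraction of posts classified as negative
    topic_switches: int             # abrupt jumps between unrelated topics
    active_between_elections: bool  # still posting outside campaign periods

def automation_score(a: Account) -> int:
    """Count how many of the lecture's giveaways an account exhibits."""
    score = 0
    if a.tweets_per_day > 50:           # spikes of activity around debates
        score += 1
    if a.negative_share > 0.8:          # bots tend to do negative campaigning
        score += 1
    if a.topic_switches >= 3:           # e.g. years off-topic, then Brexit
        score += 1
    if not a.active_between_elections:  # turned off after an election
        score += 1
    return score

suspect = Account(tweets_per_day=120, negative_share=0.9,
                  topic_switches=4, active_between_elections=False)
print(automation_score(suspect))  # -> 4: worth a closer (human) look
```

No single flag is decisive – plenty of angry humans tweet a lot – which is exactly why Howard insists there is always a person behind the account somewhere.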

We need to figure out what the opposite of selective exposure is. Diversified exposure? We don’t even have a phrase. Randomised encounters? Empathic affinity? Processes that allow people to encounter a few new pieces of information, candidates they haven’t met before or faces they don’t recognise. Whatever those processes are, we have to find them, identify them and encourage them.

Before reporting any more of what Howard thinks, I should allow for some diversified exposure here and point out that there are other academics who might disagree. Here’s Daniel Kreiss, who has studied political campaigning, taking a markedly different view: basically, it’s more important to look at the history of how conservatism has been growing in the US than at social media. As for the UK, other academics agree that ‘whether done by bots or human influencers, that people may be surreptitiously emotionally engaged in online debates is deeply worrying’, and there’s plenty more rather tentative comment here.

Going back to the lecture, Howard ends with proposals for how this abundance of data on all of us might be regulated. He has a list.

  1. Report the ultimate beneficiary. You should have the right to find out who is benefiting from data being collected by an item you buy.
  2. It should be possible to add civic organisations as beneficiaries of the data.
  3. Tithes of data. 10% of the bandwidth, processing power and data should be made available to civil society organisations as a way of restoring some balance. Facebook has a monopoly platform position on public life in most countries. That should stop.
  4. A non-profit rule for data mining. The range of variables that are exempt from profit-making should grow.

It’s not surprising to see Facebook’s data monopoly appearing here. He’s said elsewhere that researchers can only use a small percentage of Twitter data, because that’s what is made accessible, and can’t properly research Facebook even though that’s where a lot of the political conversation – and manipulation – is happening. Facebook doesn’t share.

The Computational Propaganda project at the OII has just released its first case study series, covering nine countries, available here. It’s been covered in a few news articles (Wired, the BBC, Guardian and so forth). A brief snippet to give you the idea what’s in it:

The team involved 12 researchers across nine countries who, altogether, interviewed 65 experts and analyzed tens of millions of posts on seven different social media platforms during scores of elections, political crises, and national security incidents. Each case study analyzes qualitative, quantitative, and computational evidence collected between 2015 and 2017 from Brazil, Canada, China, Germany, Poland, Taiwan, Russia, Ukraine, and the United States.

Computational propaganda is the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.

That’s enough lecturing. I called this post Are we nodes or are we noodles? for a reason. Clearly, we’re all noodles, as we’re all likely to be suckered at some point by fragments of the fakery that’s all over whichever internet we’re using. In one of Howard’s books that I’ve actually read, or at least skimmed, he surveys the work of one of my favourite experts, Manuel Castells. Castells and the Media is really an introductory reader for students who haven’t yet read Castells, which is fair enough, as reading Castells’s own work is a real undertaking. (I’m aiming to add him to my experts series on this blog soon.) Howard summarises one of Castells’s key theories about the network society as ‘People may think they are individuals who join, but actually they are nodes in networks’.

We’re all nodes as well as noodles. My takeaway message is to be careful about what we’re circulating. Every large-scale tragedy or atrocity now seems to attract lies, myths and propaganda that get wide circulation through deliberate manipulation, but also via unwitting noodle-nodes (us, or some of us). Howard suggests (it is a textbook, after all) that readers undertake an exercise in visualising their own digital networks. I’m not going to bother with the exercise myself (though there’s a rough sketch of how you might script it after the list below), but some of the other recommendations were good ones, such as:

  • be aware of your data shadow (yes it is following you)
  • use diverse networks
  • be critical of sources and try to have several
  • be aware of your own position in digital networks
  • remember that people in other cultures have different technology habits, and that networks can perpetuate social inequality
  • understand that you are an information broker for other people.
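For anyone who does want to try the visualisation exercise, here’s a minimal sketch using the networkx and matplotlib libraries. The names and ties are made-up placeholders, and the exercise as Howard sets it is pen-and-paper anyway; substitute your own contacts:

```python
# A toy version of the "map your own digital network" exercise.
# The people and ties below are invented placeholders.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
ties = [
    ("me", "sister"), ("me", "colleague"), ("me", "old schoolfriend"),
    ("me", "book group"), ("sister", "old schoolfriend"),
    ("colleague", "book group"),
]
G.add_edges_from(ties)

# Nodes with more connections are drawn larger: a hint at who brokers
# information between otherwise separate parts of your network.
sizes = [300 * G.degree(n) for n in G.nodes]
nx.draw_networkx(G, node_size=sizes, node_color="lightsteelblue")
plt.axis("off")
plt.show()
```

Even a toy graph like this makes the last recommendation in the list concrete: whoever sits between clusters is the information broker, for better or worse.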

Thanks Phil. Enjoy your new job.
