Twisted tales of spinning and blogging

Elephantdentistry’s main themes might seem a strange mix. They include (1) what’s going on with internet and social media industries, politics and practices, (2) experts, (3) living metaphors and catching them in the wild (fun with language), (4) feminism, (5) music and (6) Mosul. Here I’m twisting most of them together, so let’s see how that works.

I caught three new living metaphors in the wild when I was in Wales recently, visiting the National Wool Museum. I tried a hands-on exhibit, picking up a handful of wool and following the instructions to pull and twist simultaneously until eventually I’d spun a yarn. Nearby was a painting of wool gatherers, women who would follow the drovers taking sheep to market and dart from hedge to hedge to collect scraps of wool. They would then sell their bags of wool to eke out their desperately poor livings. You could see how the scattering of wool and the movement of the gatherers made sense of woolgathering as a metaphor for absent-mindedness: it’s not a task that involves following a straight line. You would need to keep constantly turning, moving and switching your focus of attention. Hard work, and once you realise the point of all that apparent distraction it’s obvious that wool gatherers have had a bad press. Raw wool is indeed very woolly: it’s fuzzy, breaks apart easily and has no fixed shape.

The name overstated what the museum really was and did. It’s in an old textile mill, formerly Cambrian Mills, near Newcastle Emlyn not far from the mid-west coast, and most of the exhibits were about the mill itself, its owners, the machinery and the weaving industry, so there was almost nothing on knitting. (I make up for that later.) The mill used to produce blankets, shawls and woven wool cloth used for miners’ shirts and soldiers’ uniforms. Most of the textile exhibits were of shawls and blankets. When they were taken off the looms, large pieces of cloth had to be washed and then stretched out on tenterhooks, on tenters, in tenter fields. There the cloth would be left stretched taut until it had dried into shape – which is why being ‘on tenterhooks’ means being held in a state of tense suspense. We saw a tenter, with hooks, in a park displaying information about the area’s traditional crafts.

Tenterhooks2

Before machinery and mills, spinning used the older technologies of spindles and wheels and was largely women’s work carried out at home as ‘cottage industry’. It took long, tedious hours to earn much of a living, so telling stories became intimately associated with the work of busy hands that didn’t require much mental engagement. Weaving was another cottage industry.

Now, according to some optimistic commentators, being able to work virtually and from home has brought in satisfying new cottage industries, once again involving women fitting somewhat creative but low-paid (if paid at all) work around domestic and family commitments. It’s also handy when there’s no other work available.

That’s the positive story being spun. Some experts disagree and have a bleaker but more informed analysis.

When I started researching women’s online practices years ago I remember being struck by the growth of knitblogs – blogs about knitting. It happened long before blogs were as common as they are now, and back then few of those blogs seemed to be commercial. Now I would guess most of them are a means to try and make money, one way or another.

I’ll get on to what Stephanie Taylor has to say about creativity, home working and precarity, but first: I’m not dissing woolly creativity, although I do still wonder about knitted cupcakes. I know they can be used as pincushions or given away, but that doesn’t explain their ubiquity. The museum had a three-tier knitted cake, which is fair enough as they are the Wool Museum, but the cafe also had knitted cupcakes and they apparently sold well. Their very pointlessness seems to be the point – you can neither eat them nor wear them, so knitters can express a creative impulse without anyone giving or taking offence, because nobody is obliged to eat the cake or wear the handknit. This, on the other hand, is extraordinary and needs no excuse. It is the giant cardigan on display in Cardigan Castle marking 900 years of the town’s history.

Cardigan4

The cardigan for Cardigan was created by around 200 knitters, including schoolchildren and other volunteers alongside the two artists behind the project. It’s amazing. My photos can’t convey the sheer scale of it.

Even more impressive, and with another woolly play on words, are the wool churches created as part of the Woolly Spires project. These are models of the beautiful Lincolnshire churches built out of the wealth of the medieval wool trade, intricately knitted to represent all the details of their stone decorations and graceful architecture. Community groups worked on these too. It took eight years for the teams to knit six churches.

They’re currently on display until mid January at the 20-21 Visual Arts Gallery in Scunthorpe so if you’ve ever wanted to visit Scunny, there’s a good reason.

These geographically-based communities of knitters are not anything like the virtual community I came across. I first found knitblogs around 15 years ago when I was researching internet ‘beginners’ and specifically the various online literacy practices (what they read, what they wrote and how) that women were developing for themselves. That was a time before the big social media corporations, so for most people there were websites, email and not much else. Internet access was still fairly limited and there was already a significant gender gap in the UK. But I was struck by the fact that a lot of web designers were women, and there seemed to be a connection between that and the sudden flourishing of knitblogs. Pixels and stitches made up colour patterns on screen or in wool. Designers needed to understand and create sets of instructions written in a specialised language. Both web design and knitting could involve home-based creativity, whether for pleasure or money. Back then, knitblogs were more about personal creativity and sharing stories about life as well as creations or wips (works in progress), and they needed more technical online skills than blogging requires now. Those very personal blogs still exist, but knitblogs are now mostly about selling patterns or getting income through ads, which brings me back to Stephanie Taylor.

Taylor is one of the authors of an article on gender and creative labour, along with Bridget Conor and Rosalind Gill. Their research into the cultural and creative industries showed there were vast inequalities and pretty terrible conditions of work, hidden behind myths about the wonders of creativity. Work was often informal and precarious. People, women especially, coped with a ‘bulimic’ pattern of alternating super-intense periods of long hours and nothing at all coming in. Stephanie Taylor has also analysed a more general phenomenon she calls the ‘new mystique’ of working for oneself, and how it traps women seeking to combine work and childcare into long hours and low pay. She discusses the language used to describe this type of home-based self-employment, including the term ‘cottage industry’ being revived by journalists, and picks up on the parallels between what’s going on in the fields of creative work and what’s happening with the growth of self-employment. They have precariousness, long hours and low pay in common as the price of apparent flexibility. Boundaries between work and home life disappear. We’re back in the world of the poor spinners.

Amidst a burgeoning social media economy, genres of self-enterprise have emerged that enable women to profit from creative activities located within the domestic sphere, including mommy blogging, lifestyle blogging, and craft micro-economies.

That link between working on websites such as blogs and creative crafts reappears in research by Brooke Erin Duffy & Urszula Pruchniewska. Catch Duffy (quoted above) being interviewed here. She has some sharp things to say about women working as social media editors. The title of her new book, out this year, nails the issue: (Not) Getting Paid to Do What You Love, subtitled Gender, Social Media, and Aspirational Work.

Drawing on interviews and fieldwork, Duffy offers fascinating insights into the work and lives of fashion bloggers, beauty vloggers, and designers. She connects the activities of these women to larger shifts in unpaid and gendered labor…

A tiny handful have lucrative careers but there’s a vast gap between them and the rest who make little or nothing.

Alongside these aspirational bloggers, there’s a secondary industry offering training and support. This blog makes nothing and luckily doesn’t have to try to earn anything, but I’ve participated in three training courses so far to get to know more about the blogging subculture. Most of the other participants were women, and even if they’d been sent by an employer to start up and run a company blog, they had ambitions to do with promoting their own creative outputs. The aspirations of some bloggers reminded me of a woman learning to use the internet for the first time in a class I ran in Peckham Library, who told me hopefully, back in about 2000, ‘You can make money out of this, you know’. I met some realistic and genuinely helpful trainers, but also one who suggested that any blog potentially had a substantial worldwide audience and could bring in regular income through adverts. Content, or a reason to blog apart from providing ad space, was irrelevant. Learn the techniques of the clickbaiters. Ten top tips for rescuing knitting disasters. The seven things you need to know about improving your blog.

Here’s some more from the hardworking knitters of Cardigan, and finally a link to Mosul.

Cardigan2

The Welsh weavers had their own traditional textile patterns, and the mechanical looms in the mill could produce more complex patterns or, more cheaply, cloth with a simple stripe. Wool used to be vital to Mosul’s economy, although cotton was more important to the textile industry. According to Sarah Shields, writing about the nineteenth century,

Mosul’s fabrics were mostly cottons woven in traditional patterns to appeal to the regional populations. The coarse cotton calicos (ham, cit) used for garments were bleached or dyed red or blue. One of the city’s specialties was alaca, a striped fabric used for zibun, the robes men wore. Weavers prepared women’s cloaks (izar) in assorted qualities, and special looms were employed for the wool and cotton blend abaya over garments. These textiles, as well as towels and headgear, were exported into the mountains, to Persia, Baghdad, Bitlis, Siirt, and as far away as Trabzon.

Did you know that the English word muslin, for a delicate, lightly woven cotton fabric, derives from ‘Mosul linen’? The fabric was thought to have been first made there.


Are we nodes or are we noodles?

The new Professor of Internet Studies at the Oxford Internet Institute, Philip Howard, gave his inaugural lecture last week. It’s now available online, but to save you time I watched it and summarised what I thought were the most interesting bits, for the fourth of these posts on fake news (previous posts here, here and here). There was a certain amount of flummery at the start – not the soft pudding type – that you can skip if you decide to watch it.

Flummery pudding, also known as mahalabia

Also, of course, some daft clothes. But despite the Oxfordy business the OII is a useful place to know about and has done good research ever since it started. I went to the launch conference back in 2002 when I was researching internet-related stuff for a doctorate. I liked their ethnographic style, thought it looked promising then and think it’s delivered since, for instance with regular surveys of British users and non-users of the internet, critical studies of Wikipedia, and a strong focus on ethical issues. The launch was at the Saïd Business School, the building with the ziggurats near the railway station, as the Institute itself is housed in a small building on St Giles near Balliol with no space for large events.

oii logo

Fifteen years ago at the OII launch the conference ran a session on ‘Participation and Voice’, asking whether the technology would improve or worsen the democratic process. This month Phil Howard asked something similar: Is social media killing our democracy?

He began by arguing that ‘the Internet’ is misnamed as there are now multiple internets. There’s a Snapchatty Yik-yakky one for under-17s that people like him don’t use. Far right conspiracy theorists get together on another one.  China has its own internet, built from the ground up as an instrument for social control and surveillance. Some argue that Russia and Iran have the same thing – a distinct internet. The cultures of use are so different it’s tough for researchers to study them all. The Prof then briefly narrated the development of his research by showcasing some of his publications, as he’d been coached that was the right thing to do in an inaugural lecture.

His first book, New Media Campaigns and the Managed Citizen (2005), was an ethnography of the Gore and Bush US election campaigns. The people he studied and got to know were the first of a new breed of e-politics consultants. He discovered that ‘a small group of people – 24 or 25 – make very significant design decisions that have an impact on how all of you experience democracy’. These people formed a small community, socialised together and worked ‘across the aisles’ for Republicans or Democrats as needed. At the end of the campaign several of them went off to work in the UK, Canada, Australia and various other countries, taking the tricks of public opinion manipulation they had developed with big-money funding and applying them in democracies around the world. His conclusion was that this is how innovation in political manipulation now circulates: via these kinds of roaming consultants with expertise for hire.

Next up, he turned to investigating the consequences of internet access in 75 mainly Muslim countries, in a book that, amazingly, you can download for free. His idea was to see how things worked out in societies where censorship and surveillance are permitted and encouraged as a means of cultural protection; countries that liked to participate in the global economy but in constrained ways. He observed significant changes in gender politics, in where people went to learn about religious texts, and above all in young people using information technologies to figure out that they shared grievances. He found a clear arc from the mid-2000s to the ‘Arab Spring’. So while his first book was about political elites and the manipulation of democracy, the second was about catching the elites off guard.

His work on ‘the internet of things’, Pax Technica, was more predictive, and although the book wasn’t well received he was insistent that it’s necessary to pay attention: look back at what has already happened to online privacy and look forward to guard against what could happen next. He reckons the privacy fight is already lost as far as the current internet(s) are concerned, so we need to think ahead. To quote:

The internet of things is the next internet, the one you will not experience through a browser. It will track you in almost everything you do. For a variety of reasons it will generate almost perfect behavioural data that will be useful to social scientists but also governments and security services. In the next few years we have to wrestle with who gets this data, and why, and when…

By 2020 50 billion wireless sensors will be distributed around the world – there will be many more devices than people, to say nothing of satellites, drones and smartphones that people carry. There will be vast amounts of behavioural data on consumption habits, and in democracies any lobbyists who can will try to play with this data…

The average smartphone has 23 apps. The average app generates one location point per minute – little bits of evidence of where we are in physical space…few organisations have the analytical capacity to play with this data – Facebook does. Few do much with it – advertising firms do. Some apps read each other’s data. It’s fodder for an immense surveillance industry that’s about providing you with better consumer goods, identifying your location in space, providing data brokers [with info on us]…
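Those figures are worth a quick back-of-envelope check. This is just my own arithmetic on the numbers Howard quotes, not anything from the lecture itself:

```python
# Rough scale check, using only the figures quoted in the lecture.
apps_per_phone = 23            # average number of apps on a smartphone
points_per_app_per_minute = 1  # each app generates one location point a minute

points_per_day = apps_per_phone * points_per_app_per_minute * 60 * 24
points_per_year = points_per_day * 365

print(f"{points_per_day:,} location points per phone per day")    # 33,120
print(f"{points_per_year:,} location points per phone per year")  # 12,088,800
```

Multiply that by a couple of billion smartphones and ‘almost perfect behavioural data’ starts to look like an understatement.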

His new programme of research at the OII is looking at social media, fake news, and computational propaganda, or in other words, algorithms and lies. Here are a couple of tasters. How to identify a bot: there are some giveaways. Bots tend to do negative campaigning. They don’t report positive policy ideas or initiatives.

Anger, moral judgments, pictures of politicians taken at a ridiculous angle ‘saying’ things they probably never said. Bots migrate from one topic to another, e.g. latching on to Brexit after years tweeting about something else. A small handful of accounts, after working on Brexit, became interested in the US election and were pro-Trump. A small number then became interested in the Italian referendum and the French elections, and now they’re back to the UK. Just as there was a cycle of expertise from the human consultants in the US who took the craft of political manipulation across multiple domains and multiple regime types, there are now social media accounts – with humans behind them – that craft political messages, moving from target to target, meddling in particular domains as needed. One of the great research questions that faces us now is: who are these people, and to some degree, how do we inoculate our democracies against their ill effects?
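Purely as an illustration – this is my own toy sketch, not Howard’s method or any real detection system – the giveaways he scatters through this part of the lecture (negativity, topic-hopping, plus the dormancy and debate-timed spikes he mentions below) could be written down as crude flagging heuristics:

```python
# Toy illustration of the 'giveaways' described in the lecture.
# All field names and thresholds are invented for the example.

def looks_automated(account: dict) -> bool:
    """Flag an account that matches most of the giveaways."""
    signals = [
        account["negative_post_ratio"] > 0.8,   # relentlessly negative campaigning
        account["topic_switches"] >= 3,         # e.g. Brexit -> US election -> Italy -> UK
        account["dormant_between_elections"],   # switched off once the vote is over
        account["activity_spikes_in_debates"],  # bursts timed to coincide with debates
    ]
    return sum(signals) >= 3

suspect = {  # a made-up account summary
    "negative_post_ratio": 0.92,
    "topic_switches": 4,
    "dormant_between_elections": True,
    "activity_spikes_in_debates": True,
}
print(looks_automated(suspect))  # True
```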

Howard prefers to call them highly automated accounts because there is always a human behind them. They do not look like this.

computerBot

These automated accounts are not up and running all the time. They get turned off after an election. They have noticeable changes of strategy in response to events, for instance spikes in activity timed to coincide with debates. Howard thinks all this presents us with real problems and that social media has made democracy weak. ‘It has a compromised immune system. We’ve gone through that learning curve from social media as exciting opportunity for activists to tools for dictators.’ To make matters worse, people are selectively exposing themselves to secondhand sources of information that intensify what they already believe, in a process that might be called elective affinity, so any bias doesn’t meet with much challenge.

We need to figure out what the opposite of selective exposure is.  Diversified exposure? We don’t even have a phrase. Randomised encounters? Empathic affinity? Process that allows people to encounter a few new pieces of information, candidates that they haven’t met before or faces they don’t recognise. Whatever those processes are we have to find them and identify them and encourage them.

Before reporting any more of what Howard thinks I should allow for some diversified exposure here and point out that there are other academics who might disagree. Here’s Daniel Kreiss, who has studied political campaigning, taking a markedly different view. Basically, he says it’s more important to look at the history of how conservatism has been growing in the US than at social media. As for the UK, other academics agree that ‘whether done by bots or human influencers, that people may be surreptitiously emotionally engaged in online debates is deeply worrying’, and there’s plenty more rather tentative comment here.

Going back to the lecture, Howard ends with proposals for how this abundance of data on all of us might be regulated. He has a list.

  1. Report the ultimate beneficiary. You should have the right to find out who is benefiting from data being collected by an item you buy.
  2. It should be possible to add civic beneficiaries, so that others besides the company can get value from the data.
  3. Tithes of data. 10% of the bandwidth, processing power and data should be made available to civil society organisations as a way of restoring some balance. Facebook has a monopoly platform position on public life in most countries. That should stop.
  4. A non-profit rule for data mining. The range of variables that are exempt from profit-making should grow.

It’s not surprising to see Facebook’s data monopoly appearing here. He’s said elsewhere that researchers can only use a small percentage of Twitter data, because that’s what is made accessible, and can’t properly research Facebook even though that’s where a lot of the political conversation – and manipulation – is happening. Facebook doesn’t share.

The Computational Propaganda project at the OII has just released its first case study series, covering nine countries, available here. It’s been covered in a few news articles (Wired, the BBC, Guardian and so forth). A brief snippet to give you the idea what’s in it:

The team involved 12 researchers across nine countries who, altogether, interviewed 65 experts, analyzed tens of millions of posts on seven different social media platforms during scores of elections, political crises, and national security incidents. Each case study analyzes qualitative, quantitative, and computational evidence collected between 2015 and 2017 from Brazil, Canada, China, Germany, Poland, Taiwan, Russia, Ukraine, and the United States.

Computational propaganda is the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.

That’s enough lecturing. I called this post Are we nodes or are we noodles? for a reason. Clearly, we’re all noodles, as we’re all likely to be suckered at some point by fragments of the fakery that’s all over whichever internet we’re using. In one of Howard’s books that I’ve actually read, or at least skimmed, he surveys the work of one of my favourite experts, Manuel Castells. Castells and the media is really an introductory reader for students who haven’t yet read Castells, which is fair enough as reading Castells’s own work is a real undertaking. (I’m aiming to add him to my experts series on this blog soon.) Howard summarises one of Castells’s key theories about the network society as ‘People may think they are individuals who join, but actually they are nodes in networks‘.

We’re all nodes as well as noodles. My takeaway message is to be careful about what we’re circulating. Every large-scale tragedy or atrocity now seems to attract lies, myths and propaganda that get wide circulation through deliberate manipulation but also via unwitting noodle-nodes (us, or some of us). Howard suggests (it is a textbook after all) that readers undertake an exercise in visualising their own digital networks. I’m not going to bother with the exercise, but some of the other recommendations were good ones, such as:

  • be aware of your data shadow (yes it is following you)
  • use diverse networks
  • be critical of sources and try to have several
  • be aware of your own position in digital networks
  • remember that people in other cultures have different technology habits, and that networks can perpetuate social inequality
  • understand that you are an information broker for other people.

Thanks Phil. Enjoy your new job.


You Twitfaces Round Two

Twitter, Facebook and Google (collectively known here as You Twitfaces, with acknowledgements to Benny A) have all been attacked in the old media lately. There have been so many critical news stories and comment pieces that it’s hard to keep track of what the real issues are, who’s involved and what might happen next. What’s behind all this, especially as none of the issues is even new? Here I’m disentangling the four – as I think there are four – key concerns, putting them into some kind of timeline and pointing to a few useful sources or experts worth checking out. I’ve already flagged up some in previous posts on social media and Internet research here and here. Along the way I’ll be explaining my trip to the Royal Institution. We’re looking at hate speech, fake news, data collection and surveillance, and content theft.

First up, the Home Affairs Committee, a select committee of the UK House of Commons, started investigating hate crimes and social media last year, prompted largely by the murder of Jo Cox MP. The committee is now suspended because an election was called, so they had to rush out their report Hate crime: abuse, hate and extremism online. It was published on May 1st. As these things go it is readable and not incredibly long, and if you look it up you will get a flavour of how angry the cross-party MPs were with the corporate spokesmen (yes, they were all men) and their feeble excuses. Witnesses who gave evidence, both individuals and organisations, are listed separately with links to the written evidence. Oral evidence is minuted. The Social Data Science Lab at Cardiff University submitted detailed, well-grounded evidence based on large-scale research projects into hate crime in the UK. They noted that most online hate speech research has been conducted on social media platforms (mainly Twitter and Facebook) but there’s a need to examine hate on emerging platforms and in online gaming. They recommended more research into the relationship between hate speech online and offline hate crime.

The corporate spokesmen questioned by the committee were Peter Barron, Vice President, Communications and Public Affairs, for Google; Simon Milner, Policy Director for the UK, Middle East and Africa, for Facebook; and Nick Pickles, Senior Public Policy Manager, Twitter. The report is scathing about their answers and evidence (available in the minutes for 14 March, and it’s eye-opening). They can’t defend the examples of hate speech, child pornography or illegal extremist content they are presented with, and don’t try. Instead they fall back on their community ‘standards’, relying on users to flag content, and on trying to improve their algorithms. They refuse to say how many human moderators they employ or how much they spend on making sure content that is illegal or violates their terms gets removed. The committee points out that when content breaches copyright it seems to get removed much faster, so they obviously have the means, and they certainly have the money. A flavour of the report as a word cloud:

CommitteeReport
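(If you want to make one of these yourself: here’s a minimal sketch, assuming the Python wordcloud package and a plain-text copy of the report – the filename below is my placeholder, not anything the committee provides.)

```python
# Minimal word-cloud sketch: needs `pip install wordcloud` and a
# plain-text copy of the report saved as report.txt.
from wordcloud import WordCloud, STOPWORDS

with open("report.txt", encoding="utf-8") as f:
    text = f.read()

cloud = WordCloud(
    width=800,
    height=400,
    background_color="white",
    stopwords=STOPWORDS,  # drop 'the', 'and' etc. so the emphases stand out
).generate(text)

cloud.to_file("committee_report_cloud.png")
```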

There are many extraordinary, wriggly exchanges. Peter Barron tries to defend Google allowing David Duke’s antisemitic videos to stay up on YouTube (which Google owns). He says their own lawyers decided the content wasn’t actually illegal. The chair points out that their own guidelines say they don’t support hate speech. Barron then tries to fall back on an alternative free-expression argument, which shreds any idea that their community standards mean anything. In other exchanges Nick Pickles tries to defend Twitter’s failure to deal with volumes of abusive racist tweets directed at MPs. Simon Milner tries to defend Holocaust denial material on Facebook on the grounds that it attracts a few counter-comments. The MPs make their disgust pretty plain, especially when they finally force the spokesmen to admit that, whether they want to or not, their companies do in fact make money out of malicious, hateful and even illegal content.

The constant excuse that they rely on users to report abuses doesn’t go down well with the MPs. In the report’s words, ‘They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense.’ One MP tells them he would be ashamed to make his money the way they do. They really don’t like being told that, and they seem to think that saying they work with ‘trusted flaggers’ such as specialist police units will work in their favour. That backfires. The report points out that if these social media companies, earning billions in annual operating profits, are relying on a publicly funded service to do their work, they should repay the cost.

So what does the report recommend? Briefly:

  • All social media companies should introduce clear and well-funded arrangements for proactively identifying and removing illegal content – particularly dangerous terrorist content or material related to online child abuse.
  • Government should now assess whether the continued publication of illegal material, and the failure to take reasonable steps to identify or remove it, is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.
  • Government should consult on adopting similar principles online to those used for policing football matches – for example, requiring social media companies to contribute to the Metropolitan Police’s CTIRU for the costs of enforcement activities which should rightfully be carried out by the companies themselves.
  • Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. Government should consult on a system of escalating sanctions, including meaningful fines, for social media companies which fail to remove illegal content within a strict timeframe.
  • Social media companies should review with the utmost urgency their community standards and the way in which they are being interpreted and implemented, including the training and seniority of those who are making decisions on content moderation, and the way in which the context of the material is examined.
  • Social media companies should publish quarterly reports on their safeguarding efforts, including analysis of the number of reports received on prohibited content, how the companies responded, and what action is being taken to eliminate such content in the future. If they refuse, the Government should consult on requiring them to do so.
  • Google is currently only using its technology to identify illegal or extreme content in order to help advertisers, rather than to remove illegal content proactively. It should use its existing technology to help it abide by the law and meet its community standards.
  • Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date. It is essential that the principles of free speech and open public debate in democracy are maintained – but protecting democracy also means ensuring that some voices are not drowned out by harassment and persecution, by the promotion of violence against particular groups, or by terrorism and extremism.

After the election, the committee will move on to considering fake news. In a BBC Panorama programme broadcast on May 8 you can see clips of the committee hearings and comment from MPs, ex-Facebook employees and others. The interview with Simon Milner from Facebook is excruciating: he keeps repeating the same bland, unconvincing statements rather than answer the questions. A disaffected ex-colleague of Mark Zuckerberg, Antonio Garcia Martinez, comes out with what he thinks Facebook really thinks about all the fuss (my transcript): ‘You know what’s naive? The thought that a European bureaucrat who’s never managed so much as a blog could somehow in any way get up to speed or even attempt to regulate the algorithm that drives literally a quarter of the Internet in the entire world. That’s never going to happen in any realistic way.’ A more fully developed version of this arrogant claim and what it’s based on can be read here. It amounts to little more than: we’re super-rich, you losers, so we’re above the law.

So yes, the social media outfits have lots of data on us, can manoeuvre around different legal restrictions because of their global reach, point to their supreme algorithms and fall back on defending free speech. But ultimately they are basically just vast advertising companies, and what does hurt them is advertisers cancelling contracts. That did start to happen after a few recent revelations. Their free speech argument doesn’t work any longer once it is pointed out that they are making money out of racism, terrorist recruitment sites and child pornography by running ads alongside such nasty content, and are also tainting reputable organisations and businesses by linking their ads to it. Free speech has nothing to do with that.

Facebook had to deal with charges that they allowed fake news to influence the US presidential elections. They responded first with denials, from Zuckerberg personally, but there was too much evidence to ignore so they’ve moved on to Plan B: blaming the users. It’s our fault. On May 8, Facebook ran ads in the old media telling us what to do, following on from news stories that they’re hiring 3,000 more people as content checkers and are yet again tweaking their algorithms. It should all have been great PR, but their advice on spotting fake news was unaccountably mocked. Here’s another word cloud, based on Facebook’s kindly advice to us all.

FacebookAdvice

How much of a problem is fake news? Last week there was a debate at the Royal Institution, Demockery and the media in a ‘post-factual’ age, hosted by Sussex University, with a panel of journalists from various organisations. I guess the panellists were meant to represent a range of media as well as provide gender balance, and, as seems to be too often the case, the balance didn’t work. Kerry-Anne Mendoza from The Canary and Ella Whelan from spiked-online were under-informed and relied too much on assertion and opinion. Neil Breakwell from Vice News, a former Newsnight deputy editor, and Ivor Gaber, Professor of Journalism and a former political journalist, simply knew a lot more and so were more interesting. This unbalanced ‘balance’ happens too often and I’m going to call it out every time.

Asked about fake news, Mendoza didn’t even want to use the term, as she claimed it had been tainted by Trump. (She also claimed that The Canary was a left-wing alternative for an audience who would otherwise be reading right-wing tabloids. The other panellists thought that claim about its readership was pretty unlikely, and she had no evidence for who its readers actually are.) Whelan, in true spiked contrarian style, disputed there was even an issue, because it would be patronising to suggest anyone was influenced by fake news. Gaber made the most serious and telling point: it’s not about quibbling over truth or accuracy, it’s about intent. That’s the real reason we should care about fake news. Unfortunately the discussion as a whole didn’t pursue that enough, surprisingly given the salience of current investigations into attempts by the far right and Putin’s regime to interfere with and disrupt democratic elections in the USA and France (at the very least). Mendoza, Breakwell and Whelan seemed mostly concerned to establish their own credentials as reliable sources with good editorial fact-checking practices. There was an intriguing moment when they all ganged up against Buzzfeed for putting last year’s leaked allegations about Trump and Russia online without any corroboration. The chair, Clive Myrie, stopped that as Buzzfeed weren’t present. Given the event’s own blurb, which included this quotation from the World Economic Forum – ‘The global risk of massive digital misinformation sits at the centre of a constellation of technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of governance’ – the discussion hardly matched up.

Now let’s have another word cloud. This one is from Facebook’s own document published at the end of April, Facebook and Information Operations v1. We can probably take it that the timing of this document, days before the select committee report, was not much of a coincidence. Carrying videos of actual crimes including murder on Facebook Live hasn’t helped their case much either.

Facebook cloud

(I’m making word clouds partly to lighten this long post with some images, but I’m also finding they do help show up different emphases in the sources I’ve cited.) The document explains what Facebook are trying to do, and it’s all so bland and minimal you can’t see why they are not already doing that stuff. But Facebook really, really doesn’t want to get into censoring (as they would see it) online content, even though that is what they do some of the time. Their rhetoric uses the language of ‘community’ and ‘shared responsibility’. Here’s what some other commentators have to say about that.

Sam Levin: Zuckerberg has refused to acknowledge that Facebook is a publisher or media company, instead sticking to the labels of “tech company” and “platform”.

Jane Kirtley: It’s almost like these are kids playing with a toy above their age bracket. We surely need something more than just algorithms. We need people who are sentient beings who will actually stop and analyze these things.

Jim Newton: Facebook wants to have the responsibility of a publisher but also to be seen as a neutral carrier of information that is not in the position of making news judgments. I don’t know how they are going to be able to navigate that in the long term.

Edward Wasserman: If they are in the news business, which they are, then they have to get into the world of editorial judgment.

Jonathan Taplin: Facebook and Google are like black boxes. Nobody understands the algorithms except the people inside.

Paradoxically, the quotations above were to do with Facebook actually censoring, rather than failing to censor, content. The row blew up last year when Facebook took down the famous photograph of a Vietnamese child running naked in terror after a napalm attack on her village. In Taplin’s words, ‘It was probably [censored by] a human who is 22 years old and has no fucking idea about the importance of the [napalm] picture’. After this particular row Facebook stopped using human censors and began relying on its algorithms, which allowed through floods of fake news content. Because they collect so much data on users and deploy it via their famous, but secret, algorithms, those streams of fake news were also targeted. So it’s easy to see that Facebook now doesn’t only not know what it’s doing, it doesn’t know how to defend what it’s doing. Easier to (1) blame the users, although the word they would use is ‘educate’, or (2) say we’re so big you can’t touch us, and anyway it’s far too complicated.

Taplin, quoted above, has a new book out that’s just been well reviewed, Move fast and break things. I’d like to read it but as it has a critique of Amazon as well as the social media companies, I’m going to have to wait.

Taplin is particularly concerned about the effect huge companies such as Google and Amazon have had on smaller businesses around the world that they’ve crushed or swallowed up, and their wholesale theft of the creative products of others.  I should own up here that the cartoon above is from xkcd.

What’s to be done, and what can any of us do? Lawmakers, despite what Martinez has to say, could start to act if they feel their own safety is threatened, or elections are being heavily influenced. The far-right social media connection to the murder of an MP, and evidence coming from France and the USA about deliberate interference, can’t be ignored completely. Anne Applebaum (one of the experts I recommended here) was the victim of a smear campaign after she wrote about Russia’s actions in Ukraine, and has described how the fake news channels work and how effective they can be. Over at the Oxford Internet Institute (OII), there’s a research project on Algorithms, Computational Propaganda, and Digital Politics tracking political bots. It has scrutinised what happened with the French presidential elections as well as in the USA. Philip Howard, the new Professor of Internet Studies at the OII, and Helen Margetts, the Institute’s Director, have both complained recently that the giant social media companies, who have collected so much data on us, are not releasing it to independent academics so that it can be properly researched and we can get a handle on what’s going on. Howard even calls it a sin.

Social media companies are taking heat for influencing the outcomes of the U.S. presidential election and Brexit referendum by allowing fake news, misinformation campaigns and hate speech to spread.

But Facebook and Twitter’s real sin was an act of omission: they failed to contribute to the data that democracy needs to thrive. While sitting on huge troves of information about public opinion and voter intent, social media firms watched as U.S. and UK pollsters, journalists, politicians and civil society groups made bad projections and poor decisions with the wrong information.

The data these companies collect, for example, could have told us in real-time whether fake news was having an impact on voters. Information garnered from social media platforms could have boosted voter turnout as citizens realized the race was closer than the polls showed – and that their votes really would matter. Instead, these companies let the United States and UK tumble into a democratic deficit, with political institutions starved of quality data on public opinion.

Margetts, in Political Turbulence, makes the point that such research is still feasible now but with the advent of the Internet of Things it could become completely impossible in future, and politically we would be moving into chaotic times.

After all this it might be a relief to end with a bit of optimism, so here are a handful of possible reasons for it. The political bots research unit doesn’t think attempts to influence the French elections really worked this time. Ivor Gaber reckons that because the UK (print) news media is so biased and sensationalist, fake news has less influence here, because readers don’t believe what they read anyway. Taplin reckons that the younger digital tycoons – he’s thinking of Zuckerberg – care about their public images enough to want to make changes so they are not seen as evil-doers. (Google famously had ‘Don’t be evil’ as their original motto, but that was last century.) Matthew Williams and Pete Burnap, of the Social Data Science Lab mentioned above, gave the select committee some evidence that other users confronting racists on Twitter did seem to have an effect. I’ll quote it in full, as it’s couched as useful advice.

Extreme posts are often met with disagreement, insults, and counter-speech campaigns. Combating hate speech with counter speech has some advantages over government and police responses: it is more rapid, more adaptable to the situation and pervasive; it can be used by any internet user (e.g. members of the public, charities, the media, the police); and it draws on nodal governance and responsibilisation trends currently prominent in the wider criminal justice system. The following typology of counter hate speech was identified:

Attribution of Prejudice

e.g. “Shame on #EDL racists for taking advantage of this situation”

Claims making and appeals to reason

e.g. “This has nothing to do with Islam, not all Muslims are terrorists!”

Request for information and evidence

e.g. “How does this have anything to do with the colour of someone’s skin??”

Insults

e.g. “There are some cowardly racists out there!”

Initial evidence from ongoing experiments with social media data shows that counter speech is effective in stemming the length of hateful threads when multiple unique counter-speech contributors engage with the hate speech producer. However, not all counter speech is productive, and evidence shows that individuals who use insults against hate speech producers often inflame the situation, resulting in the production of further hate speech.

There’s a lot to keep an eye on. I’ll be catching up with You Twitfaces again in a while. Meanwhile here’s Google shyly hiding again.
google6


Fighting on the Internet

This week I’m taking down Google and annoying Mac users (not intending to annoy but I reckon it will happen). Here’s Google looking shy.

google6

It’s a longer post, as so much fighting over the Internet happened in March. First up, on March 1st the government published the latest UK digital strategy. It got little coverage because, well, Trump and Brexit. Was it any good? Not really. The shadow minister for digital everything is Louise Haigh, who hasn’t had the job long, but long enough to take an informed view. She called it recycled and meagre. There are lists of marvellous things that have already happened and that the Govt would like to take credit for. There’s hopeful, wishy-washy stuff about things the government would like to happen but isn’t taking responsibility for. So they are giving consumers the right to request fast broadband. That’s not quite the same as saying it will happen, and that there’s an actual plan. Almost one in five SMEs don’t have access to fast broadband (spun in the strategy as ‘just over 80% do’, because that sounds so much better).

Meanwhile, BT’s Openreach has just been fined £42 million for commercial malpractice over fast broadband provision. I heard BT defending their position at a conference fifteen years ago, on the grounds that it was unfair for them to be forced to sort out broadband access throughout the country when their competitors might benefit. They were handed the grid’s local loops, and a massive advantage, when they were privatised, and have been getting away with it for a very long time. This week they committed to 95% coverage, but that leaves 5% who can’t all be volunteers on reality TV shows, happy to live a slow or off-grid life. I know a couple of people who are happily like that some of the time, but then again, it also drives them crazy.

Then the ‘strategy’ has more hopeful stuff about the gender skills gap and digital skills training, already outsourced to a bunch of organisations including banks (those Barclays ads). Last year £35 million went to various outsourced providers – in govt spending terms, around half a peanut. The strategy pats itself on the back for all the great work done in libraries by staff, along with volunteers, to provide Internet access and digital skills training. Meanwhile, over in another universe, funding is getting cut and libraries are closing. One in 10 adults, according to the strategy, has never used the Internet. (The Oxford Internet Institute has been bringing out regular reports on the digital access gap for years.) I don’t think the ministers responsible for this strategy have a clue. I ran classes for Internet beginners in Peckham Library for a couple of years starting in 2000, and I’m sure it would cost more than a fiver per person to sort out problems of access, confidence, understanding and skills.

LibraryPicture

On March 10 the row involving Google was already simmering, and at an advertising industry event Martin Sorrell came out with this attack, which got loud cheers from the audience:

The fundamental issue is that you [Google] have to take responsibility for this as a media company. You are not a passive digital engineer tightening the digital pipes with your digital spanner and not responsible for the flow through of content of those pipes, you are responsible for it. You have to step up and take responsibility. You have the resources, your margins are enormous, you have control of the algorithms, you don’t explain to people how those algorithms work. You have to change.

Google – who now own YouTube – were being criticised for letting racists, terrorist organisations, and hate-mongers of all stripes make money out of adverts on YouTube, as well as making money themselves from the same ads. They were pretty slow to react. MPs on the Home Affairs Select Committee had another go on March 14, when David Winnick pretty much called Peter Barron (Google), Simon Milner (Facebook) and Nick Pickles (Twitter) pimps. He said that they were engaged in little more than “commercial prostitution” and that he would be ashamed to earn his money in the way they did. Here’s Google’s lovely new London HQ again, in the lovely new Pancras Square with its lovely corporate fountains and trees. It’s basically a large four-cornered smoker’s corner with a few coffee, sushi and sandwich joints.

google3

It’s taken Google a long time to admit that it’s an advertising company, one of the world’s largest. It might not even really admit that yet, with its continuing claims – see the YouTube policy info – that it is up to users to report breaches. Facebook hasn’t admitted it either. Just as Coca-Cola only wants to bring the world together rather than sell sugary drinks, Facebook wants to build global community. Why would another vast advertising company want to do that, exactly?

But there’s no question: the issues of fake news, politicians ticking them off, and the not-at-all-fake fact that companies are cancelling adverts are getting them a little rattled. Zuckerberg gave us a fine example of a geek’s worldview with his new manifesto. It was sweet, almost. It reminded me of a symposium I went to in Seattle, Shaping the network society, in 2002, where I realised there’s a worldview (some) geeks apparently develop. At a certain point, having been totally immersed along with people like themselves in Internet technology, they start to notice other kinds of problems – social, economic, political, environmental. They then offer their worldwide technology-based solutions, as they’ve never noticed there are already other people studying and trying to deal with all those world problems. They’re totally unaware of their own ignorance. In all seriousness, they propose that their next task is to build a global community and sort all that hard stuff out. Good luck, Mark. Are you going to drop all the adverts now that you’re saving the world and not an advertising company any more?

Tim Berners-Lee’s take on alarms about fake news, bots, algorithms and data was more grounded, although the Guardian gave it the stupid sub-heading ‘I invented the Internet’. He didn’t quite write that. He did write ‘I may have invented the web, but all of you have helped to create what it is today.’ He argues for control of personal data, pushing back against misinformation, and transparency and more understanding in relation to political advertising. The Web Foundation started by Berners-Lee reckons there is currently a worldwide 12% gender gap in access to the Internet. I started my PhD research back in 1999, provoked by the statistic that there was, at the time, a 9% gender gap in the UK, for no good reason.

The UK strategy is supposed to address the gender gap. Maybe the brand new All Party Parliamentary Group on the Fourth Industrial Revolution, launched last week, will help sort that out too. They’re keen on improving digital everything. Here they are.

APPG4thIR_launch

Time for some good news. The Indian state of Kerala has decreed that Internet access is a basic human right, so all citizens should get free wifi access. And reading Harry Potter may help defeat Trump. You don’t even need a wand. Seriously, a study last year showed that reading Harry Potter lowers Americans’ opinions of Donald Trump. It sounds unlikely, but Diana Mutz is a real expert in political science and reported an evidence base of 1,100-odd respondents. I liked this next bit, as I’d wondered if she’d thought about the religious Right banning Potter books, but she’s properly bossed those variables already (my bold):

… I include control variables in all models in order to take into account potentially spurious causes of both Trump support and exposure to Harry Potter.  All models included gender (females were expected to rate Trump poorly), education (expected to negatively predict Trump support), age (expected to positively predict Trump support), and evangelical self-identification (expected to discourage both tolerance of Muslims and gays, and consuming stories about wizards).
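If ‘control variables’ is unfamiliar jargon, the idea is to estimate the Potter effect while holding the likely confounders constant. Here’s a hypothetical sketch of the general shape of such a model – invented toy data, not Mutz’s actual model or numbers:

```python
# Hypothetical sketch of a regression with control variables.
# The data below are invented; this shows the shape of the method,
# not Mutz's actual model or results.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "trump_rating":  [30, 55, 20, 70, 40, 25, 65, 35],  # 0-100 feeling score
    "potter_books":  [5, 0, 7, 1, 3, 6, 0, 4],          # Harry Potter books read
    "female":        [1, 0, 1, 0, 1, 0, 0, 1],
    "education_yrs": [16, 12, 18, 10, 14, 16, 11, 15],
    "age":           [25, 60, 30, 55, 40, 22, 65, 35],
    "evangelical":   [0, 1, 0, 1, 0, 0, 1, 0],
})

# The controls soak up spurious causes of both Trump support and Potter exposure.
model = smf.ols(
    "trump_rating ~ potter_books + female + education_yrs + age + evangelical",
    data=df,
).fit()

print(model.params["potter_books"])  # the Potter effect, net of the controls
```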

So finally, Mac users. At those Internet research conferences I went to, it was normal to hear Bill Gates referred to as the Great Satan, quite genuinely and not all that humorously. Gates certainly didn’t have any kind of hero status. Steve Jobs did, and as a result of Apple’s brilliant marketing, Mac users were encouraged to position themselves as smarter, more creative, better. Their advertising made that explicit. Now Apple gets a cut every time anyone buys an app for their iPhone or iPad. The same goes for Android (Google again) apps, but you can get software for PCs without Microsoft getting any cut. I know what Bill and Melinda Gates have done, and are doing, with their billions. Where did Steve Jobs’s fortune go? Does anybody know? (Raf, if you’re reading this, you may remember the AoIR conference. After the closing speeches and thanks, you said they should have thanked Bill Gates, aka Satan, without whom none of their PowerPoint presentations, or the conference, would have been possible.) How does it work, this division of tech billionaires into good guys and bad guys?
