A fake news flow tracker

‘The Russians were pioneers. They understood that social media could be manipulated long before other people did.’

In a recent interview, available as a podcast here, Anne Applebaum explained why fake news in the context of the US elections isn’t really news, because there’s no country in Europe that doesn’t have a similar story about it. Russia has been purposefully and systematically disseminating fake news memes for at least a couple of years.  It’s a long interview and you may not have time, but she gets on to how fake news is made to flow from about 36 minutes in. She makes a lot of important, and worrying, statements about the impact these disinformation campaigns have already had in Europe, for example in support of the far right in Hungary.  I’ve previously recommended Applebaum as an expert worth following and here I can’t do better than quote from the Sam Harris podcast. It adds a lot more substance to the fake news discussion I reported on in a previous post.

She warns that [in relation to the Trump campaign’s Russian links] the danger is that the FBI investigation won’t find a smoking gun and then people will say it’s all right. But we don’t need a smoking gun. We can see it.

It’s a pattern of politics that they [the Putin regime] follow…they seek out extreme groups and movements that they support sometimes quite openly…they support the far right in Hungary…sometimes it’s with money, sometimes contacts, social media campaigns…there’s a pattern to it, it works the same in every country. They adjust it depending on the politics. Sometimes they support the far left, sometimes the far right. Sometimes they support business people. But in every country they do the same thing…done deliberately…they create the themes then an enormous network of trolls and bots…repeated on dozens and dozens of conspiracy sites…not an accident…a deliberate tactic

look at what happens in other countries, then you can see that it’s a pattern…For Americans it’s new, it’s not new for Europeans…most of the time, Russian interference in foreign elections takes the same forms that it did in the United States. Russian websites operating openly (Russia Today, Sputnik) or under other names spin out false rumors and conspiracy theories; then trolls and bots, either Russian or domestic, systematically spread them.


In another article written late last year, Applebaum gives a few examples of Russian interference on behalf of Le Pen in the French elections, and also describes an experience of her own when she became a target.

it was eye-opening to watch the stories move through a well-oiled system, one that had been constructed for exactly this sort of purpose…WikiLeaks — out of the blue — tweeted one of the articles to its 4 million followers… As I watched the story move around the Web, I saw how the worlds of fake websites and fake news exist to reinforce one another and give falsehood credence. Many of the websites quoted not the original, dodgy source, but one another…many of their “followers” (maybe even most of them) are bots — bits of computer code that can be programmed to imitate human social media accounts and told to pass on particular stories


In a future post I’ll be looking at what’s being done in response and what’s recommended that we do ourselves so we don’t end up being fooled or worse, unwitting colluders.

 

 


You Twitfaces Round Two

Twitter, Facebook and Google (collectively known here as You Twitfaces, with acknowledgements to Benny A) have all been attacked in the old media lately. There have been so many critical news stories and comment pieces that it’s hard to keep track of what the real issues are, who’s involved and what might happen next. What’s behind all this, especially as none of the issues is even new? Here I’m disentangling the four – as I think there are four – key concerns, putting them into some kind of timeline and pointing to a few useful sources or experts worth checking out. I’ve already flagged up some in previous posts on social media and Internet research here and here. Along the way I’ll be explaining my trip to the Royal Institution. We’re looking at hate speech, fake news, data collection and surveillance, and content theft.

First up, the Home Affairs committee, a select committee of the UK House of Commons, started investigating hate crimes and social media last year, prompted largely by the murder of Jo Cox MP. The committee is now suspended because an election was called, so they had to rush out their report Hate crime: abuse, hate and extremism online, published on May 1st. As these things go it is readable and not incredibly long, and if you look it up you will get a flavour of how angry the cross-party MPs were with the corporate spokesmen (yes, they were all men) and their feeble excuses. Witnesses who gave evidence, both individuals and organisations, are listed separately with links to the written evidence. Oral evidence is minuted. The Social Data Science Lab at Cardiff University submitted detailed, well-grounded evidence based on large-scale research projects into hate crime in the UK. They noted that most online hate speech research has been conducted on social media platforms (mainly Twitter and Facebook) but there’s a need to examine hate on emerging platforms and in online gaming. They recommended more research into the relationship between hate speech online and offline hate crime.

The corporate spokesmen questioned by the committee were Peter Barron, Vice President, Communications and Public Affairs, for Google; Simon Milner, Policy Director for the UK, Middle East and Africa, for Facebook; and Nick Pickles, Senior Public Policy Manager, Twitter. The report is scathing about their answers and evidence (available in the minutes for 14 March, and it’s eye-opening). They can’t defend the examples of hate speech, child pornography or illegal extremist content they are presented with, and don’t try. Instead they fall back on their community ‘standards’, relying on users to flag content, and on trying to improve their algorithms. They refuse to say how many human moderators they employ or how much they spend on making sure content that is illegal or violates their terms gets removed. The committee points out that when content breaches copyright, it seems to get removed much faster, so they obviously have the means and they certainly have the money. A flavour of the report as a word cloud:

[Word cloud of the committee report]
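(A brief aside on how clouds like this get made: under the graphics, the sizing of each word is just frequency counting over the source text, minus common stopwords. Here’s a minimal Python sketch of that counting step; the stopword list and sample sentences are my own, purely for illustration, and a real cloud generator would then map these counts to font sizes.)

```python
# Minimal sketch of the word-frequency step behind a word cloud.
# The stopword list and sample text below are illustrative only.
import re
from collections import Counter

STOPWORDS = {"the", "and", "to", "of", "a", "in", "that", "is", "on", "for", "it"}

def word_frequencies(text, top_n=5):
    """Lowercase, tokenise, drop stopwords, and count what's left."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

sample = ("Social media companies should remove illegal content. "
          "Illegal content stays up because companies rely on users.")
print(word_frequencies(sample, top_n=3))
```

The words that dominate the cloud are simply the ones with the highest counts; different emphases in two documents show up as different words winning the counting.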

There are many extraordinary, wriggly exchanges. Peter Barron tries to defend Google allowing David Duke’s antisemitic videos to stay up on YouTube (which Google owns). He says their own lawyers decided the content wasn’t actually illegal. The chair points out that their own guidelines say they don’t support hate speech. Barron then tries to fall back on an alternative free expression argument which shreds any idea that their community standards mean anything.  In other exchanges Nick Pickles tries to defend Twitter’s failure to deal with volumes of abusive racist tweets directed at MPs. Simon Milner tries to defend holocaust denial material on Facebook on the grounds that it attracts a few counter-comments. The MPs make their disgust pretty plain, especially when they finally force the spokesmen to admit that whether they want to or not, their companies do in fact make money out of malicious, hateful and even illegal content.

The constant excuse that they rely on users to report abuses doesn’t go down well with the MPs. In the report’s words, ‘They are, in effect, outsourcing the vast bulk of their safeguarding responsibilities at zero expense.’ One MP tells them he would be ashamed to make his money the way they do. They really don’t like being told that, and they seem to think that saying they work with ‘trusted flaggers’ such as specialist police units will work in their favour. That backfires. The report points out that if these social media companies, earning billions in annual operating profits, are relying on a publicly funded service to do their work, they should repay the cost.

So what does the report recommend? Briefly:

  • All social media companies should introduce clear and well-funded arrangements for proactively identifying and removing illegal content—particularly dangerous terrorist content or material related to online child abuse.
  • Government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area.
  • Government should consult on adopting similar principles online to those used for policing football matches —for example, requiring social media companies to contribute to the Metropolitan Police’s CTIRU for the costs of enforcement activities which should rightfully be carried out by the companies themselves.
  • Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. Government should consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.
  • Social media companies should review with the utmost urgency their community standards and the way in which they are being interpreted and implemented, including the training and seniority of those who are making decisions on content moderation, and the way in which the context of the material is examined.
  • Social media companies should publish quarterly reports on their safeguarding efforts, including analysis of the number of reports received on prohibited content, how the companies responded to reports, and what action is being taken to eliminate such content in the future. If they refuse, the Government should consult on requiring them to do so.
  • Google is currently only using its technology to identify illegal or extreme content in order to help advertisers, rather than to help it remove illegal content proactively. They should use their existing technology to help them abide by the law and meet their community standards.
  • Government should review the entire legislative framework governing online hate speech, harassment and extremism and ensure that the law is up to date. It is essential that the principles of free speech and open public debate in democracy are maintained—but protecting democracy also means ensuring that some voices are not drowned out by harassment and persecution, by the promotion of violence against particular groups, or by terrorism and extremism.

After the election, the committee will move on to considering fake news. In a BBC Panorama programme broadcast on May 8 you can see clips of the committee hearings and comment from MPs, ex-Facebook employees and others. The interview with Simon Milner from Facebook is excruciating: he keeps repeating the same bland unconvincing statements rather than answer the questions. A disaffected ex-colleague of Mark Zuckerberg, Antonio Garcia Martinez, comes out with what he thinks Facebook really thinks about all the fuss (my transcript): ‘You know what’s naive? The thought that a European bureaucrat who’s never managed so much as a blog could somehow in any way get up to speed or even attempt to regulate the algorithm that drives literally a quarter of the Internet in the entire world. That’s never going to happen in any realistic way.’ A more fully developed version of this arrogant claim and what it’s based on can be read here. It amounts to little more than ‘We’re super-rich, you losers, so we’re above the law.’ So yes, the social media outfits have lots of data on us, can manoeuvre around different legal restrictions because of their global reach, point to their supreme algorithms and fall back on defending free speech. But ultimately they are basically just vast advertising companies, and what does hurt them is advertisers cancelling contracts. That did start to happen after a few recent revelations. Their free speech argument doesn’t work any longer once it is pointed out that they are making money out of racism, terrorist recruitment sites and child pornography, by running ads alongside such nasty content, and are also tainting reputable organisations and businesses by linking their ads to it. Free speech has nothing to do with that.

Facebook had to deal with charges that they allowed fake news to influence the US presidential elections. They responded first with denials,  from Zuckerberg personally, but there was too much evidence to ignore so they’ve moved on to Plan B, blaming the users. It’s our fault. On May 8, Facebook ran ads in the old media telling us what to do, following on from news stories that they’re hiring 3,000 more people as content checkers and are yet again tweaking their algorithms. It should all have been great PR but their advice on spotting fake news was unaccountably mocked. Here’s another word cloud based on Facebook’s kindly advice to us all.

[Word cloud of Facebook’s advice on spotting fake news]

How much of a problem is fake news? Last week there was a debate at the Royal Institution, Demockery and the media in a ‘post-factual’ age, hosted by Sussex University, with a panel of journalists from various organisations. I guess the panellists were meant to represent a range of media as well as provide gender balance and, as seems to be too often the case, the balance didn’t work. Kerry-Anne Mendoza from The Canary, and Ella Whelan from spiked-online, were under-informed and relied too much on assertion and opinion. Neil Breakwell from Vice News, a former Newsnight deputy editor, and Ivor Gaber, Professor of Journalism and a former political journalist, simply knew a lot more and so were more interesting. This unbalanced ‘balance’ happens too often and I’m going to call it out every time.

Asked about fake news, Mendoza didn’t even want to use the term as she claimed it had been tainted by Trump. (She also claimed that The Canary was a left-wing alternative for an audience who would otherwise be reading right-wing tabloids. The other panellists thought that claim about its readership was pretty unlikely, and she had no evidence for who its readers actually are.) Whelan, in true spiked contrarian style, disputed there was even an issue because it would be patronising to suggest anyone was influenced by fake news. Gaber made the most serious and telling point: it’s not about quibbling over truth or accuracy but about intent. That’s the real reason we should care about fake news. Unfortunately the discussion as a whole didn’t pursue that enough, surprisingly given the salience of current investigations into attempts by the far right and Putin’s regime to interfere with and disrupt democratic elections in the USA and France (at the very least). Mendoza, Breakwell and Whelan seemed mostly concerned to establish their own credentials as reliable sources with good editorial fact-checking practices. There was an intriguing moment when they all ganged up against Buzzfeed for putting last year’s leaked allegations about Trump and Russia online without any corroboration. The chair, Clive Myrie, stopped that as Buzzfeed weren’t present. Given the event’s own blurb, which included this quotation from the World Economic Forum: ‘The global risk of massive digital misinformation sits at the centre of a constellation of technological and geopolitical risks ranging from terrorism to cyber attacks and the failure of governance’, the discussion hardly matched up.

Now let’s have another word cloud. This one is from Facebook’s own document published at the end of April, Facebook and Information Operations v1. We can probably take it that the timing of this document, days before the select committee report, was not much of a coincidence. Carrying videos of actual crimes including murder on Facebook Live hasn’t helped their case much either.

[Word cloud of Facebook and Information Operations v1]

(I’m making word clouds partly to lighten this long post with some images but I’m also finding they do help show up different emphases in the sources I’ve cited.) The document explains what they are trying to do, and it’s all so bland and  minimal you can’t see why they are not already doing that stuff. But Facebook really, really doesn’t want to get into censoring (as they would see it) online content, even though that is what they do some of the time. Their rhetoric uses the language of ‘community’ and ‘shared responsibility’. Here’s what some other commentators have to say about that.

Sam Levin: Zuckerberg has refused to acknowledge that Facebook is a publisher or media company, instead sticking to the labels of “tech company” and “platform”.

Jane Kirtley: It’s almost like these are kids playing with a toy above their age bracket. We surely need something more than just algorithms. We need people who are sentient beings who will actually stop and analyze these things.

Jim Newton: Facebook wants to have the responsibility of a publisher but also to be seen as a neutral carrier of information that is not in the position of making news judgments. I don’t know how they are going to be able to navigate that in the long term.

Edward Wasserman: If they are in the news business, which they are, then they have to get into the world of editorial judgment.

Jonathan Taplin: Facebook and Google are like black boxes. Nobody understands the algorithms except the people inside.

Paradoxically, the quotations above were to do with Facebook actually censoring, rather than failing to censor content. The row blew up last year when Facebook took down the famous photograph of a Vietnamese child running naked in terror after a napalm attack on her village. In Taplin’s words, ‘It was probably [censored by] a human who is 22 years old and has no fucking idea about the importance of the [napalm] picture’. After this particular row Facebook stopped using human censors and began relying on its algorithms, which allowed through floods of fake news content. Because they collect so much data on users and deploy it via their famous, but secret, algorithms, those streams of fake news were also targeted. So it’s easy to see that Facebook now not only doesn’t know what it’s doing, it doesn’t know how to defend what it’s doing. Easier to (1) blame the users, although the word they would use is ‘educate’, or (2) say we’re so big you can’t touch us and anyway it’s far too complicated.

Taplin, quoted above, has a new book out that’s just been well reviewed, Move Fast and Break Things. I’d like to read it but as it has a critique of Amazon as well as the social media companies, I’m going to have to wait.

Taplin is particularly concerned about the effect huge companies such as Google and Amazon have had on smaller businesses around the world that they’ve crushed or swallowed up, and their wholesale theft of the creative products of others.  I should own up here that the cartoon above is from xkcd.

What’s to be done, and what can any of us do? Lawmakers, despite what Martinez has to say, could start to act if they feel their own safety is threatened, or elections are being heavily influenced. The far right social media connection to the murder of an MP, and evidence coming from France and the USA about deliberate interference, can’t be ignored completely. Anne Applebaum (one of the experts I recommended here) was the victim of a smear campaign after she wrote about Russia’s actions in Ukraine, and has described how the fake news channels work and how effective they can be. Over at the Oxford Internet Institute (OII), there’s a research project on Algorithms, Computational Propaganda, and Digital Politics tracking political bots. It has scrutinised what happened with the French presidential elections as well as in the USA. Philip Howard, the new Professor of Internet Studies at the OII, and Helen Margetts, the Institute’s Director, have both complained recently that the giant social media companies, who have collected so much data on us, are not releasing it to independent academics so that it can be properly researched and we can get a handle on what’s going on. Howard even calls it a sin.

Social media companies are taking heat for influencing the outcomes of the U.S. presidential election and Brexit referendum by allowing fake news, misinformation campaigns and hate speech to spread.

But Facebook and Twitter’s real sin was an act of omission: they failed to contribute to the data that democracy needs to thrive. While sitting on huge troves of information about public opinion and voter intent, social media firms watched as U.S. and UK pollsters, journalists, politicians and civil society groups made bad projections and poor decisions with the wrong information.

The data these companies collect, for example, could have told us in real-time whether fake news was having an impact on voters. Information garnered from social media platforms could have boosted voter turnout as citizens realized the race was closer than the polls showed – and that their votes really would matter. Instead, these companies let the United States and UK tumble into a democratic deficit, with political institutions starved of quality data on public opinion.

Margetts, in Political Turbulence, makes the point that such research is still feasible now but with the advent of the Internet of Things it could become completely impossible in future, and politically we would be moving into chaotic times.

After all this it might be a relief to end with a bit of optimism, so here are a handful of possible reasons. The political bots research unit doesn’t think attempts to influence the French elections really worked this time. Ivor Gaber reckons that because the UK (print) news media is so biased and sensationalist, fake news has less influence here because readers don’t believe what they read anyway. Taplin reckons that the younger digital tycoons – he’s thinking of Zuckerberg – care about their public images enough to want to make changes so they are not seen as evil-doers. (Google famously had ‘Don’t be evil’ as their original motto but that was last century.) Matthew Williams and Pete Burnap, of the Social Data Science Lab mentioned above, gave the select committee some evidence that other users confronting racists on Twitter did seem to have an effect. I’ll quote it in full, as it’s couched as useful advice.

Extreme posts are often met with disagreement, insults, and counter-speech campaigns. Combating hate speech with counter speech has some advantages over government and police responses: it is more rapid, more adaptable to the situation, and pervasive; it can be used by any internet user (e.g. members of the public, charities, the media, the police); and it draws on nodal governance and responsibilisation trends currently prominent in the wider criminal justice system. The following typology of counter hate speech was identified:

Attribution of Prejudice

e.g. “Shame on #EDL racists for taking advantage of this situation”

Claims making and appeals to reason

e.g. “This has nothing to do with Islam, not all Muslims are terrorists!”

Request for information and evidence

e.g. “How does this have anything to do with the colour of someone’s skin??”

Insults

e.g. “There are some cowardly racists out there!”

Initial evidence from ongoing experiments with social media data shows that counter speech is effective in stemming the length of hateful threads when multiple unique counter speech contributors engage with the hate speech producer. However, not all counter speech is productive, and evidence shows that individuals who use insults against hate speech producers often inflame the situation, resulting in the production of further hate speech.

There’s a lot to keep an eye on. I’ll be catching up with You Twitfaces again in a while. Meanwhile here’s Google shyly hiding again.

 


Faraday and the Elephant

If you’ve ever been south of the river in London you’ll probably have seen the Faraday Memorial, even if you didn’t realise it. The Memorial is the big steel cube in the middle of what used to be a traffic roundabout at Elephant and Castle. The area around it is now more pedestrian-friendly. It looks like this.

Faraday memorial

There’s an explanation on a sign beside it. I’ve seen people reading the sign, unlike the days when traffic stopped anyone getting near. Michael Faraday, scientist and inventor, was a local boy. He came from a poor family and didn’t have access to much education, but took it on himself to go to Humphry Davy’s lectures at the Royal Institution. Faraday had been apprenticed to a bookbinder, so he carefully wrote out his notes from the lectures, bound them beautifully and presented them to Davy. That’s how he got his first break. The sign has a very brief summary of Faraday’s life and career, and a little about the memorial and its architect. The memorial is also, appropriately, an electricity substation for the Northern and Bakerloo lines.

Faraday sign at Elephant

If you want to get much better insight into Faraday’s work, I recommend the Faraday Museum in the basement of the Royal Institution. It is small and a little dingy but Faraday’s laboratory has been preserved and recreated, and there are a lot of extraordinary exhibits. Possibly my favourite ever museum curator’s blurb reads “After discovering electro-magnetic induction, Faraday took a holiday in Hastings.” [Pause badly needed there, if only for comic timing.] It continues: “He then returned to his laboratory and created another world-changing invention: the first electric generator.”  It makes me feel I should try a holiday in Hastings this summer.

Here’s a face you might well recognise, although not at this scale. The museum has a blown-up image of a £20 banknote across an entire wall. The note also featured a drawing of the famous institution lectures.


You can see one of the earliest ever electric batteries in the museum, given to Faraday by its inventor Alexander Volta in 1814. There is also equipment made by Faraday himself, as he had to make most of his kit from scratch. Insulation didn’t yet exist so in order to make a coil, he and his assistants had to wrap string round wire. It could take a week to make an electric coil like this very early one.


Faraday’s glassmaking experiments, working close-up to the furnace without adequate protection, probably caused some of his health problems. He was trying to make very specialist vessels like the glass ‘egg’ he wanted to use to create vacuums he would then fill with different gases. His experiments in passing an electric current through a variety of gases and metals led to the discovery of spectroscopy, which in turn is the basis of a lot of astrophysics as well as earthly physics. Faraday didn’t only invent electrodes. He also came up with the word. We owe him for some of our language as well as for his discoveries and inventions, and for being a public educator.

There is another, much tinier Faraday museum in London at Trinity Buoy Wharf. I’ll go there one day. In my next post I will also explain what I was really doing at the Royal Institution. Meanwhile, here’s a less interesting but maybe better known public artwork from Elephant and Castle although to be fair, it does feature another London elephant.

Elephant Elephant

 

 


New living metaphor seen in the wild

Hope you all caught this bit of news from the local elections last week. The Liberal Democrats took overall control of Northumberland County Council although the result was at first a dead heat in terms of seats – 33 to the Conservatives, and 33 to their opponents combined. The South Blyth ward was still tied after two recounts, so the result was decided by drawing straws. The Conservative candidate drew the short straw. Literally. It was even filmed.

[Recap here on what I mean by living metaphors – and if you catch any  new ones please tell. I still think they’re rarely found in the wild.]


Throwing out your sourdough

Happy (late) Passover, Easter, and springtime. We are part way through the season for eating the bread of affliction, which is a good name for matzah if you have thrown out all your bread, flour and other baked goods and are eating factory-made sheets of unleavened bread (aka cardboard) for a week. Here’s what you’re missing.

bread

Sourdough is the exact opposite of matzah, and that got me wondering what all today’s home sourdough bakers do for Passover, if they also happen to be observant. The rules say you have to get rid of all leaven, but leaven is precisely what your precious sourdough starter is. It’s one thing using up your bread, chucking away any flour or grain based products you happen to have, and of course any yeast, and surviving with only approved kosher for Passover alternatives for a week. It’s another if you have been nurturing your sourdough for months or years. Professional bakeries claim to have kept theirs going for decades. Joel and I got ours started only three weeks ago and I’m not as attached to mine as people who take theirs out for walks (seriously – it’s to catch more wild yeasts) or give it a name. The Bread Ahead bakery supposedly calls theirs Bruce. Our tutor in the flatbread baking class scoffed at that and claimed he calls his cat ‘the cat’, so why bother naming the sourdough starter? I can see it’s an acknowledgement that the natural yeast you’re cultivating is alive, although it’s a fungus rather than an animal and isn’t a single creature but consists of billions of cells. But the point is, sourdough bakers are going to find it hard to throw the whole lot out.

A little online investigation turned up two options, depending what people thought the point of the prohibition on leaven was all about. If you think it’s only commemorating the flight from Egypt and the Israelites not having time to let their bread rise, then it’s fair enough to ask a neighbour or friend to mind your sourdough starter for a week, and then get it back. As long as it’s not in your own home, you’re within the rules. On the other hand, if you think it’s to do with starting afresh and renewal – a reasonable view given all the fresh green stuff on the Seder plate, and the fact that it’s an agricultural spring festival associated with the barley harvest – you might decide that it’s important to get rid of your old sourdough and use the new grain to start again.

parsley

Leaving aside the Biblical story about not having time to make leavened bread, there could be other reasons for singling leaven out for a temporary prohibition. The most interesting suggestions link to the history of bread baking, and the distinctions that might have been made in ancient Egypt between raised bread, which was likely to have been more expensive and eaten by wealthier classes, and flatbread, which might still have been leavened but didn’t take as long to make, eaten by poorer workers. Alternatively it could mark a symbolic distinction between Egypt’s settled, grain-growing culture and the culture of a more nomadic community. There was definitely good reason to see fancier breadmaking as an aspect of Egyptian culture, going back at least four thousand years.

Ancient Egyptian hieroglyphics have several signs for bread – flatbread, raised bread or rolls – and there are tomb paintings showing elaborate bakeries.

Ramses III bakery tomb painting

Originally matzah would have been much closer to other types of flatbread. The machine-made boxed version is recent and not much of a guide. Of course, you can now get artisan matzahs and some home bakers are now making their own (unlikely to suit anyone very orthodox, as the flour and the entire baking environment must all be guaranteed leaven free, the whole process must take no more than 18 minutes, and as yeast is in the air all around it isn’t generally practicable). I’ve seen recipes by and for people who are either less concerned or have really set up alternative artisan matzah production, and they make it sound fairly palatable with additions like olive oil and honey. Two top suggestions: bashing nails into a wooden rolling pin so you can roll out matzah with perforations to make it look like the boxed version (why? and where would you keep such an implement the rest of the year?) and using the matzah recipe to make alternative communion wafers, which apparently also need a reboot.

Elizabeth David’s book on bread and yeast cookery has a great facsimile of an 1896 poster for Squire’s Patent Balloon Yeast. Absolutely Pure, Never Done Rising. (You can actually blow up a balloon with yeast if you want to experiment with how it behaves.) Modern-day baker’s yeast has only been factory-produced since the nineteenth century. Before that, bakers all used naturally occurring leaven, cultivated their own sourdough or used ale barm. I’ve been making bread using commercial yeast for many years and have made sourdough bread for only a couple of weeks, but I’m already struck by how much easier it is. That wasn’t what I was expecting. It’s also very different handling the bread dough. I caught a living metaphor in the wild last week (see here for more about living metaphors) when I found that I was literally getting the feel for it.
