Scripted

Chapters

  • 05:39 Weird Tech Incoming
  • 10:30 Reality and Politics
  • 12:18 Political Armies
  • 14:08 David Ruined Net Neutrality
  • 15:44 Erosion of Trust in Truth
  • 17:23 Centralized Media?
  • 18:23 Apathy
  • 19:01 Conspiracy
  • 23:46 Apathy Benefits Oppression
  • 26:14 The Role of Bad News
  • 27:14 Fake Porn
  • 32:28 Forgery
  • 33:52 Scamming the Elderly
  • 34:51 Laser Phishing
  • 38:20 Perfect You
  • 42:04 Evidence, Courts, Legal Systems
  • 45:19 A Silver Lining?
  • 47:16 Economic Incentives
  • 48:17 Fake Reviews
  • 50:19 Commodifying Us
  • 52:05 What Can We Do?
  • 55:33 Don't Ask What but Why

(Complete)

Fake David Torcivia:

I’m David Torcivia.

Fake Daniel Forkner:

I'm Daniel Forkner.

Fake David Torcivia:

And this is Ashes Ashes, a show about systemic issues, cracks in civilization, collapse of the environment, and if we're unlucky, the end of the world.

Fake Daniel Forkner:

But if we learn from all this, maybe we can stop that. The world might be broken, but it doesn't have to be.

David Torcivia:

[0:18] The more astute listeners out there may have realized that no, that was not actually Daniel and me. Those were poor imitation bots created to lie to you about what we're actually saying and doing here, and maybe one day take over the recording of this show to free up tons of spare time for both of us, I think.

Yes, that's right: this week we're discussing lies, false truths, half-truths, and the post-truth world that we are quickly heading toward with the help of technology, media, and the internet at large.

Daniel Forkner:

[0:45] I thought my voice sounded pretty good actually.

David Torcivia:

[0:48] Sounds like you and Stephen Hawking had a love child, let's be honest here.

Daniel Forkner:

[0:52] Way too soon.

David Torcivia:

[0:53] But the technology is getting much better, and these things came together very easily. It was easy to create these fake voices; it took us like 30 or 60 seconds of recording samples to create this stuff, using a piece of software called Lyrebird.

Daniel Forkner:

[1:05] I think that's one of the things about this that we want to point out: those voices, as awful as they might have sounded - we just took 60 seconds of our voices, we uploaded them to some free online resource, and it spat out anything we typed into it in a kind of mimicry of our voices.

David Torcivia:

[1:22] And this isn't even the state of the art of this technology, which is something we'll explore later on in this episode, but it doesn't take a genius to realize that this might have some interesting effects as we move forward and this technology becomes better and more ubiquitous.

But let's not dwell on this for the moment; we'll get into it later on in the episode.

Daniel Forkner:

[1:40] Well David, you mentioned fake news is going to be one of the topics of this show, and it's true that we do live in a time of "fake news." Now everyone has heard that phrase at this point, and it typically refers to literal fake news: stories and articles that are published that are simply fabricated. They're not true, but they serve a political or economic purpose, and they get distributed throughout the web with the help of algorithms, and by targeting groups vulnerable to being manipulated by these false claims.

It's easy for these things to happen because we live in the quote "attention economy," something we brought up in the Facebook episode, where so many things are vying for our attention - shoe companies, movies, TV shows, politicians and political groups, YouTube streamers, on and on - that the path of least resistance to our attention, for all these interested parties, gets reduced to the immediate and the emotional. What can startle us. Stimulate outrage. Pull at our curiosity. What can make us click this so we don't click that. And all this is aided by algorithms behind the scenes constantly evaluating what works and what doesn't; what can stimulate a reaction in 500 milliseconds instead of 700.

[2:53] What this is doing is conditioning us; instilling in us the feeling that there is so much information that we don't have time to explore anything with patience and in depth.

What does this have to do with fake news, you ask? Well, such a distracting environment, driven by those who desire to direct our behavior, has opened the door wide for people to introduce false information and spread it by effectively gaming the system.

Now a simple example, which we'll explore in more depth later, comes from a group of teenagers in Macedonia. They realized that they could make money - and a lot of money - by getting Americans to click on fake articles to generate ad revenue during the 2016 presidential election.

And David, we discussed Facebook in Episode 15, Terms of Service, and the algorithms that curate our feeds - that decide for us what we should and should not see based on what they think will engage us. Well, this is a situation where those types of algorithms gave momentum to those false claims.

The kids designed stories they knew people wouldn't be able to resist, people clicked on them, the algorithms said "oh, these people like these articles" and exposed the articles to even more similar users, and next thing you know, you've got a feedback loop of polarization and misinformation.
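(For the technically curious: here is a toy simulation of the feedback loop Daniel just described. Everything in it - the story titles, the click rates, the sampling rule - is invented for illustration; real feed-ranking systems are far more complex, but the core dynamic is the same.)

```python
# Toy simulation of an engagement-driven feed. All numbers and names
# are hypothetical; the loop is the point: clicks buy exposure,
# and exposure buys more clicks.
import random

def simulate_feed(stories, impressions=50_000):
    clicks = {s["title"]: 1 for s in stories}  # uniform starting weight
    for _ in range(impressions):
        titles = list(clicks)
        weights = [clicks[t] for t in titles]
        # The "algorithm": show stories in proportion to past clicks.
        shown = random.choices(titles, weights=weights)[0]
        story = next(s for s in stories if s["title"] == shown)
        if random.random() < story["click_rate"]:
            clicks[shown] += 1  # a click buys more future exposure
    return clicks

stories = [
    {"title": "Nuanced policy analysis",     "click_rate": 0.02},
    {"title": "Outrageous fabricated claim", "click_rate": 0.08},
]
print(simulate_feed(stories))
# The fabricated story ends up dominating the feed. Truth never entered
# the calculation; only its click rate did.
```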

David Torcivia:

[4:17] That's right, let's look at what's going on here. We have a system of interested parties, and those parties drive people's attention towards information that will benefit them. And because of the intense competition for that scarce attention, this “attention economy,” sensationalism, emotional manipulation, and short-term conditioning become the methods used.

Algorithms are constantly monitoring the effectiveness of these methods, evolving them into better and better tools in terms of their ability to manipulate us. It's very easy to see almost immediately how this drives polarization and division.

In cases where you pit one side against the other like in politics, complex and nuanced debate is time-consuming, requires patience and a lot of research, and therefore it's unsuited to this system. Instead each “side” is presented with the most outrageous and simplified caricature of the other, which then gets evolved and exploited by machine learning until the only picture one group has of the other is a ridiculous, false, and emotionally-charged impression with no substantial dialogue bridging that gap.

Daniel Forkner:

[5:19] But a lot, and I mean a lot, has been said about these fake stories. We're not here to re-hash the same "fake news" analyses that everyone has already heard. No, instead we believe this is just the beginning of a trend that stands to get a lot worse, and a lot weirder, both in scope and in method.

Weird Tech Incoming

David Torcivia:

[5:39] Daniel, you mentioned weirder, and one of the things that really stuck out to me while we were doing research for this episode was a talk Adobe gave introducing a new product that still hasn't hit the market but that they wanted to demonstrate: something called Adobe VoCo.

[5:53] Now this is a tool Adobe has designed that makes it very easy to change anything somebody says. Let's stop. Let that sink in for a sec.

[6:01] You load your audio into Adobe VoCo, and it analyzes it and displays the text of the audio. This is something you can already do in a variety of Adobe applications - in Audition, in Premiere - the application analyzes the audio, turns it into text, and makes it easy to edit. It's useful for podcasters, or people doing interviews, for quickly cutting down to just the parts you need. But what happens when somebody says the wrong word, or doesn't say something exactly correctly, or you want them to introduce a question? That happens a lot: you'll ask them an interview question and they just start answering without restating the question, so the listener doesn't know what they're talking about.

Now Adobe VoCo is designed to help fix that. You can come in and change a word: you just select the word you want to change, type the new one in, and the software, which has analyzed this person's voice in the interview, generates it. And you need, what, like 30 minutes of their voice or something? Is that right, Daniel?

Daniel Forkner:

[6:49] Yeah when they exhibited this technology at some big event I think the sample they were using was about 30 minutes worth of someone's voice that they could manipulate flawlessly.

David Torcivia:

[6:59] Yeah, so 30 minutes of voice, it analyzes it and knows how to do it, and then you can just type in words! Literally you could say "oh, they said 'factory' but they should have said 'industrial'," so you just type that in, and it generates a flawless, seamless version of that voice saying a word that maybe doesn't exist anywhere else in that recording, and it saves your edit.

It sounds like a great thing, yay! Editors everywhere rejoice: we don't have to go back and re-record this.

Daniel Forkner:

[7:21] But David, aren't inflection and the context of words in sentences so important? I mean, that's one of the hard things about editing audio, right? Getting those micro-inflections in voice just right so that it flows in a natural way.

David Torcivia:

[7:36] Right, language doesn't happen in just single words. Sentences are almost crafted; they go up and they go down, and questions end differently than statements or exclamations, so making sure that the sentence you generate sounds seamless and flawless and genuine is a difficult math problem.

But Adobe solved that through this analysis, looking at the way the context of the sentence already works, and knowing the correct phonemes, pronunciations, and slight utterances - the differences in pronunciation that allow you to generate seamless, authentic-sounding speech.

And if the software doesn't get it exactly right on the first pass, well, you can just click that word, open up a second screen, and see a huge list of example inflections until you find the one that fits exactly.

[8:20] It doesn't take an evil genius to see how this technology could very much be abused.

Daniel Forkner:

[8:26] Audio is not the only technology being used to create these false and fabricated realities, David. A couple of universities, including Stanford, came out with a technology just a couple of years ago that allows you to do for people's faces what Adobe VoCo is doing for voices. What's really crazy about this is, we've all seen CGI in movies, where a lot of time and a lot of resources and a lot of computers at work rendering certain images has produced scenes of people - totally artificial faces - that don't necessarily look that great.

[9:05] We see the same thing in video games, but what this group has come out with is a technology that allows you to superimpose an actor's facial expressions and mouth movements onto a target face, in real time, on video. So what that means is: I have a video of the president giving a speech, and I sit down in a chair and point a camera at my face, and the president can be speaking on the television right next to me, and as I move my face, the camera captures my facial expressions and changes the president's face to match exactly what I'm saying, while retaining the image of the president.

David Torcivia:

[9:41] Okay, so we've got this very basic technology out of the way. There are a number of other technologies similar to these that we'll explore as we go forward with the episode, but we just want to establish these sorts of facts, at least initially.

Just like Photoshop forever transformed how we think about images and what's real and what's not, well that technology has finally caught up and is being brought to the world of audio, of voice recordings, of speeches, of conversations, as well as video. Something that previously had been far too expensive, time-consuming, or just physically impossible to fake in a convincing manner.

But as these tools get cheaper, more democratized, and easy enough that anybody can do this - which is something that's already happening, and we'll explore that later in a very interesting, weird twist of technology and ethics - as this stuff gets out there into the world and anybody can do it, and the idea of propaganda suddenly becomes a hobbyist pastime, how does that change things?

Reality And Politics

Daniel Forkner:

[10:31] I think the biggest way that this threatens to change our world David is just in our sense of what is real.

That has huge implications for our political environment, because we've had so many claims already about the role "fake news," disinformation, and misinformation have played in political elections in America. And this is not just a concern for American politics; it's happening all over the world. So it calls into question what is real and what is fake when we are consuming information online, and as we discuss among ourselves the information we have found online and through other media sources.

But this adds a whole new layer to it when we're watching someone give a speech on TV, for example, if we don't have the guarantee that that's actually happening.

This is already going on, and this tech that we’ve just outlined - the audio, the video, and some bot stuff that we'll talk about - has the ability to leverage the misinformation already being used to epic proportions.

Recently a US-based NGO called Freedom House did a study suggesting that at least 17 countries in the past year have used dishonest tactics to influence elections.

David Torcivia:

[11:44] I love that phrase “dishonest tactics.” What a PC way to say that but keep going sorry.

Daniel Forkner:

[11:48] And outside election times, at least 30 countries have used a diversity of methods of disinformation and propaganda to get unpopular policies passed, to repress criticism of government, and other such things. These tactics include paying large groups of people to write fake stories, using armies of bots to promote propaganda, and the use of social media and search algorithms to keep those fake stories and that misinformation alive in our social media environments.

Political Armies

David Torcivia:

[12:19] So let's focus on bots for just a second here, because the idea of this has really dominated what the media's been talking about over the past year, year-and-a-half, especially starting with the 2016 elections - which is something we don't want to dwell on too much because there's so much gray area in that conversation, and a lot of political divisiveness - but the fact is this stuff happens, and it happened for years before bots were even a thing.

So this isn't a question of new techniques; it's a question of scale. It's been common for many countries to pay what are basically armies: people who infiltrate online, have conversations, run hundreds or thousands of accounts pushing a narrative for the government to serve whatever purpose they want. This happens in the US; it happens in Russia; it happens in Israel. Israel is famous for paying college students to defend Israel online.

[13:01] This was something that was always limited by "how many people can you hire to do this?" But bots and semi-AI technology (let's be clear, it's not advanced general AI, but machine-learning, sort-of-smart, Markov-chain-generated conversation that can get close enough to sounding human) - well, that's really enabled this practice to be scaled up to an industrial level, and that has huge effects on our political conversations, on society, and on what we talk about as a whole.
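(A rough illustration of how low the technical bar is: a word-level Markov chain, the simplest version of the technique David mentions, fits in a few lines. The training corpus below is invented; a real operation would feed it scraped posts, and would use far more capable models.)

```python
# Minimal word-level Markov chain text generator - the simplest form of
# the "sort-of-smart" bot chatter described above. Even this can skim
# as human in a fast-moving feed.
import random
from collections import defaultdict

def train(text, order=2):
    # Map each run of `order` words to the words observed to follow it.
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, length=25):
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Hypothetical training corpus: posts scraped to push a narrative.
corpus = ("the media is lying to you about the election and the media "
          "is hiding the truth about the election because they are lying")
print(generate(train(corpus)))
```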

Daniel Forkner:

[13:29] Not least of which because it allows you to simulate public movements and public interest.

If I'm a senator sitting in the halls of Congress and all of a sudden my aides are telling me they're getting phone calls from concerned citizens, and they're getting flooded by emails from people who are concerned about a certain issue, that might influence what I believe my constituency cares about and what policies I end up supporting. But in this environment, where it's so easy to create fake profiles of people and fake voices around their concerns, those interests could be completely made up and funded by a special interest party that doesn't have the citizens' best interests at heart.

David Torcivia:

[14:08] Well you know it doesn't have to be a fake person. So you remember the FCC filings about net neutrality and the accusations that some of these things were filed by bots and not real people?

David Ruined Net Neutrality

[14:18] Well, after it was all said and done and the comments were locked and the FCC put them out there for review, I went and searched my name on the filing, because I filed a comment supporting net neutrality.

But lo and behold, what did I find: there was another David Torcivia who had filed something against net neutrality. I click on it, and it lists an address that I used to live at years and years ago. Some bot had imitated me, put somebody else's words under my name, and was arguing against something that I steadfastly support.

Daniel Forkner:

[14:45] David it was you all along! It was you and your fake voice that ruined net neutrality for us here in the United States.

David Torcivia:

Fake news.

Daniel Forkner:

[14:54] Actually, that brings up another point, David: how all this affects our trust in information. Say maybe you really did create a comment saying you were against net neutrality, and maybe you're a famous politician, so people actually care about what you do.

Well, now you can just say "hey, I didn't do that, that was fake. That was fabricated; I didn't make that comment, someone else must have written it in my name and posted it, so don't believe it. I still support net neutrality." What does that mean for our ability to hold politicians and the people who lead us accountable for the things they say and the things they do?

David Torcivia:

[15:28] That's a really fantastic point, especially as this audio and video technology becomes more advanced and it gets easier, faster, and simpler to generate these very accurate imitations of what has happened. Or not even imitations, but complete fabrications that aren't based in any sort of reality.

Erosion Of Trust In Truth

So we remember, during the 2016 election, the audio tapes that came out of Trump allegedly saying "grab them by the pussy" and all that.

[15:53] Well, one of the responses Trump had was, you know, "this could be a fake. I've talked to technological experts, and it is possible to imitate somebody's voice, and it would not be impossible, technologically speaking, for somebody to have falsified this information."

Say what you want about it but he's not wrong, and as technology becomes more widespread and people are aware that this technology exists, then it'll be easy for politicians even when something is accurate and true to say “well I didn't say that. That's generated, this is fake.”

And when the technology exists and everybody knows the technology exists, well that's enough because the side that supports somebody can say “oh yeah of course this is fake. My candidate, my person would never do this,” and the people against would say “oh of course they did this. I hate that guy.” Well you know now every single narrative just reinforces whatever personal ideas you have, and if you disagree with that then it's very simple to say “oh no that's not true it's made up. It’s fake.”

Centralized Media?

Daniel Forkner:

[16:42] I see a big-picture issue surrounding this crisis of trust in the information we consume. I think we've mentioned this before: we want to do an episode in the future about media in general, and about the filters and systems in place to make sure that the messages the media promotes are in the interests of our government and the corporations that back our media companies. So it's not so much that there's a conspiracy going on to frame the information we consume on a large scale, but there is a system in place that makes sure the messages our governments and our companies want us to hear get promoted by the media companies we listen to.

And I see this erosion of trust as another way to support that system, because as it becomes harder and more costly to validate the authenticity of certain messages and certain videos and certain interviews, we may increasingly be forced to turn to some of these better funded institutions which can afford the type of technology and verification protocols to guarantee us that what they're showing us is real.

So it's kind of ironic, I think, where this could open us up to a situation where those who can guarantee that what they're promoting is true also have the biggest power to manipulate the messages they present us. And that can be hugely problematic, because so much of how we're influenced today is not so much in the information that is presented to us at all times; a lot of the time it's the information that we are not presented. It's the information that's held back.

Apathy

But to be honest David that's a discussion to be had about media that we can do a whole show on. But there is also another aspect of erosion of trust I think we should touch on and that's the fact that it can create apathy. If we don't know what we can trust and what we should be skeptical of in our media institutions, it’s possible that we as a public may choose to just opt out, and just not care.

That threatens to erode the very base of democracy itself.

David Torcivia:

[18:43] Right, there's not enough time in the day for us to look into and research every single piece of news, or article, or image, or video that appears on our endless timelines, whether it's from Facebook, or Twitter, or Google, or wherever it's coming from, constantly. There's so much stuff that we can't look into it all, and this is something we've leaned on the media for.

Conspiracy

[19:01] But when our erosion of trust hits the media as well, when they get duped into running with fake news and made-up stories - well, that's something that happens, and when it happens enough, can we trust anything? In the conspiracy world this has been the case for decades, but it's really exploded in the past few years.

Daniel Forkner:

[19:19] I'm sure everyone at this point has heard about “flat earthers” and it kind of started out as humorous. I thought it was a joke and maybe it did start out as a joke.

David Torcivia:

[19:28] You know honestly I think it's a psy-op by some intelligence agency to measure how these big ideas spread, but maybe that's another conversation.

Daniel Forkner:

[19:38] Explain what you mean, David, because this sounds like a conspiracy in and of itself.

David Torcivia:

[19:41] It's a conspiracy within a conspiracy which is the best type of conspiracy.

Basically, I'm intelligence. I'm NSA, I'm CIA, I'm FBI… whatever I am. FSB. Some country somewhere decided: "let's see how easy it is to spread an obviously false idea using the internet, how this idea spreads and influences others, and what the best methods of pushing it are - whether it's videos, text, forums, conversations, bots, Twitter, whatever it is. How can we push something that's obviously false and convince people that it's true?"

The intelligence applications of a study like this are obvious to anyone. And if it's something that's obviously wrong, well, it's even easier to measure how well you're pushing the narrative. So let's take something crazy - something we've known for thousands of years is not true - and that's that the Earth is flat. And also, there's no motivation for anybody to lie and claim that the Earth is flat, which is another thing, but -

Daniel Forkner:

[20:32] You mean there's no, like, economic incentive in it - no "hey, the only way we're going to sell our product is if people believe the Earth is flat."

David Torcivia:

[20:38] Yeah exactly there's no reason to and so people -

Daniel Forkner:

[20:41] Unless you sell gravity shoes.

David Torcivia:

[20:42] Yeah.

There's like secret places in the middle of Antarctica, or there's ice walls… people come up with crazy explanations for why this is the case, but in the end they're just reaching. There's a lot of really great content that's been created about Flat Earth, things that sound sort of quasi-interesting, and if you don't know enough science you're like "you know what, yeah, actually this does make sense. There are holes in these things… why does it feel like I'm flying, or when I'm in an airplane things look flat, blah blah blah," and when you look deeper into it, all these things fall apart - but there's just enough there to get the casual observer to buy into this.

So we actually started looking into this more when we went to YouTube researching this episode and just literally typed in the words “CGI fake,” because I wanted to see what people were talking about with fake computer-generated imagery.

[21:26] Almost every single one of the posts that popped up - and maybe this is my personal filter bubble doing this for me - was about Flat Earth. "Look at these fake SpaceX videos; look at these fake NASA videos; the Earth is actually flat, look, they messed up on this one frame, blah blah blah."

What's happened with this conversation is that the idea that we can't trust this media has enabled people to buy into the notion that "oh yes, the world is flat, and everyone is lying to us, because they have the technology - Photoshop, video editing - that enables them to fake these things and lie to us on an industrial scale."

Daniel Forkner:

[22:00] So your conspiracy within the conspiracy is basically that there's a chance the intelligence communities of our governments wanted to measure how easy it is to spread something so obviously false that you'd think people couldn't possibly accept it.

David Torcivia:

[22:14] And then it got out of their hands and now it's its own crazy thing with basketball players and stuff supporting it.

Daniel Forkner:

[22:19] Well that's interesting because it goes back to Bernays and his thought leaders. There are a lot of celebrities in different fields that are big proponents of this. A famous Jiu-Jitsu star is a big spouter of Flat Earth, and so that could certainly be a reason for that.

But also perhaps [intelligence communities] saw this technology coming and they said “well everyone's going to be talking about this technology that can fake reality, so let's distract people perhaps to things that don't matter so that they don't focus on the fact that their governments could be faking things for political means.”

David Torcivia:

[22:51] Well there you go. That's the conspiracy in a conspiracy in a conspiracy… we're going deeper here and it's never going to end, but yeah, I think that's absolutely part of it. Especially in the conspiracy world there's a natural instinct of "I don't trust this," and maybe that's been spawned out of endless decades of UFO fakes, fake photographs, fake cryptozoological sightings - Bigfoot, the Loch Ness monster - where people's initial instinct is to look at something and say "okay, why is this not real?", and then reach for every single thing they can about it until in the end they're like "okay, so this may be unexplainable."

Daniel Forkner:

[23:26] David, let's leave it up to our listeners to go down a conspiracy rabbit hole and report back, but I think we should take a step back, because whether or not there is a conspiracy within a conspiracy within a conspiracy regarding this fake news -

David Torcivia:

[23:39] This show is another layer of that conspiracy by the way.

Daniel Forkner:

[23:41] These are the real fake voices that we're using; the fake voices at the beginning were just the fake fake voices, to make you think…

Apathy Benefits Oppression

But there certainly is a benefit for governments that want to use oppressive and repressive methods to have a public that is apathetic to their sources of information.

Again, it goes back to that media propaganda: sometimes it's the information that people are not aware of that allows you to get away with the most. We've talked on this show about the things going on in Xinjiang in China, in terms of surveillance and control, and that's not being reported in our Western media. I mean, you can find out about it if you know what to search for.

David Torcivia:

[24:19] Yeah, but compared to the conversation around Stormy Daniels or whatever the popular thing at the moment is, it's basically impossible to find. And I mean, China's a major interest of world news; if you go somewhere like Myanmar and look at the genocide that's happening there currently, well, odds are nobody's talking about that at all.

Daniel Forkner:

[24:35] That's another good point. 700,000 people in Myanmar being affected by military genocide that, I'm ashamed to say, I don't really know much about - and these are things that we, if we want to be informed citizens, have to search for.

But again a lot of this disinformation is driven by search result algorithms, by social media algorithms. Related to the fact that we may be pushed into this environment where we’re only trusting a very small collection of sources, well China right now is using this very topic of fake news and fabricated stories as justification for increasing their online censorship and surveillance.

So there was a conference late last year, and because the attendees were foreigners, normally they would be allowed to use the internet and visit different sites. But even there, the Chinese broadband services had blocked things that they didn't want people to see, and again, they used the fact that people push false narratives online as justification for blocking websites, saying "hey, we're protecting people."

It's easy to see how all this erosion of trust, this apathy, is giving governments power to influence and shape our behavior like never before. I think we're looking at a future where an erosion of trust in our information sources leads to a dependence on a very small, centralized group of media companies, in addition to an overall apathy which threatens to keep people misinformed about things going on in the world that they should care about - because caring about things is the only way we get change to come about.

The Role of Bad News

David Torcivia:

[26:13] Yeah Daniel, that's absolutely right, and that's a major reason we have this show in the first place: to try and push some of these stories without the lying optimism of the media, because people don't want to hear bad news all the time. And because of that, they don't realize how big of a problem some of this is, and they're not intent on searching for the solutions. Because of that we lose time, and with these sorts of problems, lost time often ends up meaning that we lose lives down the road.

Daniel Forkner:

[26:40] Well solutions are something we always try to hit at the end of these episodes and we have some ideas for ways that we can move forward in the face of some of this tech.

Should we move on to another aspect of this, David?

David Torcivia:

[26:51] I want to talk about the social side of this, and this is my favorite, weirdest part of it, because I saw it happening live on Reddit.

Daniel Forkner:

[26:58] What did you see happening live?

David Torcivia:

[27:00] So there's something called deepfakes are you familiar with this Daniel?

Daniel Forkner:

Deep…fakes… that’s where you go really deep in your submarine to find your latest…

David Torcivia:

[27:10] No this is just you reaching for a bad joke.

Fake Porn

But what it is - so there was a subreddit on Reddit, and I stumbled across it one day because some of the technology subs were talking about it, and it's something called deepfakes. The deepfakes sub was named after a machine learning program that somebody put together using TensorFlow.

[27:28] And what you would do is you would load it with faces, and typically celebrity faces, you would download hundreds of a celebrity's face.

So I'm going to go download a million Emma Watsons, or a million Natalie Portmans, and put their faces in there. Because they're celebrities there are lots and lots of face photos, so you download hundreds or thousands of these, and you run them through this software.

And this is called a training data set. The software analyzes it and generates an amalgamation of what this face looks like. It knows what it looks like from different angles and in different lighting conditions, how it moves, how it smiles, how it frowns, what the eyes do, how it blinks - it learns all this, and it can create a digital replication of this face.

So that's the first step, and once you’ve got this data set created and trained, well then what people would do with this technology is they would go and find a porn video of somebody who looks sort of like whatever actress they’re interested in and they would load it into this software.

This software analyzes the porn video and finds the actress's face, analyzing it from every angle and lighting condition, and builds a sort of idea of what this face is doing. And then, very simply, the third part: it says "well, let's take this actress's face that we trained and apply it to the face that we recognize in this video," and within a matter of hours - because it does take a while to run - you now have an extremely convincing fake porn video of this famous actress.

This blew up overnight and it was very simple for people to do this.
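(For readers curious how that pipeline hangs together: the original deepfakes program was a TensorFlow autoencoder; the sketch below - in PyTorch, with invented layer sizes and no convolutions or face detection - only shows the structural trick behind the three steps David described: one shared encoder, a decoder per person, and a swap at render time.)

```python
# Structural sketch of the deepfakes idea: one shared encoder learns
# pose, lighting, and expression from BOTH people's face crops; each
# person gets their own decoder. Dimensions are hypothetical; the real
# tool used convolutional networks on aligned face crops.
import torch
import torch.nn as nn

D = 64 * 64 * 3  # flattened face crop (invented size)
Z = 128          # latent dimension

encoder   = nn.Sequential(nn.Linear(D, 512), nn.ReLU(), nn.Linear(512, Z))
decoder_a = nn.Sequential(nn.Linear(Z, 512), nn.ReLU(), nn.Linear(512, D))
decoder_b = nn.Sequential(nn.Linear(Z, 512), nn.ReLU(), nn.Linear(512, D))

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    # Each person is reconstructed by their own decoder, but through the
    # SAME encoder, which is therefore forced to represent faces in a
    # person-independent way (pose, light, expression).
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def swap_to_a(face_b):
    # The swap: encode person B's face, decode with person A's decoder.
    # Result: A's face wearing B's pose and expression; pasted back into
    # each frame of the video, that is the fake.
    with torch.no_grad():
        return decoder_a(encoder(face_b))

# e.g. train_step(torch.rand(8, D), torch.rand(8, D)) on real face crops
```

(Detecting, cropping, and aligning the faces in each video frame is a separate step, omitted here.)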

Daniel Forkner:

[28:44] Well I guess it wouldn't have to be an actress right? Just anybody -

David Torcivia:

[28:47] No, well, I mean, that's where we're getting to with this, especially with things like Facebook, where you can click on somebody's profile, see all the pictures they're tagged in, download them all, load them into the software, and get that very same approximation - where you can create porn of just anybody: man, woman, child, whatever. There are lots of weird legal questions about what this means. Is it legal? How much do we own of our own personal, individual visage? Legally we do own it, but that's another question, with recreations and holograms and stuff - like the Super Bowl, where they wanted to have a hologram of Prince but the estate said no…

But anyway this idea of what do we own, and when anybody can create these amalgamation composites of ourselves, and it looks very convincingly like ourselves, what does that mean?

The sub was shut down for obvious reasons but these things are continuing to be created and porn websites have jumped on and created subsections for deep fakes. There are websites and communities where they’re continuing to generate this, because even though the initial way to get into it has been censored by Reddit, it still lives on the internet. You can’t shut down something like this, the cat is out of the bag, Pandora's Box is open. Now we’re in an era where custom generated porn of anybody you know is about to explode into the mainstream.

That's weird.

Daniel Forkner:

[29:55] This isn't the only case of using fake people and fake women to cater to male fantasies, for lack of a better word, because Ashley Madison -

David Torcivia:

[30:06] That website that matches people looking for extra-marital affairs - men with women, women with men, men with men, women with women - all outside of the supposedly sacred bond of marriage, but which got everything hacked, right?

Daniel Forkner:

[30:20] Well that's what they got huge publicity for is the fact that their database was hacked and users’ information including their email addresses, their contact information, their credit cards, all got leaked. There was a huge scandal around it.

Well, perhaps what deserved an even bigger scandal is the fact that Ashley Madison had been using thousands of fake profiles - these are bots - to message millions of men and get them to spend more on the website, and the whole thing is driven by economic incentives. There aren't enough real women on the website, and even in cases where there are women, the engineers behind the website found that they can just make more money by using bots, because the bots are more engaging with the men. They're a little more flirtatious; they say "hey, just spend a couple more dollars and I'll talk to you some more, maybe we can meet up."

And a lot of people were spending tons of money on the website for interactions with these people that they thought were real but were totally fake.

David Torcivia:

[31:18] Well, then the website's ethical response is like, "well, if people are enjoying having conversations with our bots, who's to say this isn't a beneficial service, and that people aren't getting their money's worth out of the experience?"

Daniel Forkner:

[31:33] Yeah, Ashley Madison leading the charge into the android relationships of the future, but -

David Torcivia:

[31:39] This is a common scam too, I mean, creating fake women - and it's almost always women, though it does happen with men. They reach out to you on email or Facebook message or whatever, and they're not a real person; they're fake. Typically they're run out of what are almost slave labor farms in third-world countries where the cost of labor is extremely cheap, and there are men and women, old men, old women, whoever, pretending to be someone young and attractive, reaching out to lonely people online. Writing fake emails, writing fake love letters: "let's meet up - oh, I can't afford a plane ticket - just buy me a plane ticket - no, just wire me the money and then I'll buy the plane ticket," and people get scammed out of thousands of dollars by these fake romances.

This is very similar to what Ashley Madison was doing, but now it's being expanded: not just individuals doing this in these labor farms, but once again automated by bots.

Forgery

Daniel Forkner:

[32:28] I think the point of all this is that these technologies are becoming more accessible and lower-cost, so it's not only companies using them, like these dating websites, but ordinary users too, like those people you talked about making the deepfake porn videos. These technologies are only going to become more accessible, and while a lot of this can be entertaining and fun, there are some serious threats associated with it, and one of those is automated, intelligent, personalized phishing.

David what is a phishing scam?

David Torcivia:

[33:02] Phishing scams can vary in their complexity and what they're trying to do, but the basic idea is: I'm purporting to be somebody else or something else, like a bank or an individual, and I reach out to you in order to get you to do something that's bad for you. Sometimes it's "give me money," sometimes it's "click on this link and turn over your password," sometimes it's "give me your Social Security number or your bank account" - whatever it is.

Daniel Forkner:

[33:25] Or in the case of those nutrition activists in Mexico it's “click on this link so that we can install malware on your phone and track everything that you do.”

David Torcivia:

[33:34] So in the past it's mostly been "I know a little bit of information about you; I know your name" - most of them will start with that: your name, your email, maybe your phone number. Things you can find online fairly easily.

But as this information gets easier to find online, collect, and combine, well, now you've got a little bit more knowledge about somebody.

Scamming the Elderly

So this happened to my grandparents. They got a call - my grandpa got a call - and the caller said "please Grandpa, it's me, David. I've been in a horrible -

Daniel Forkner:

This happened?

David Torcivia:

Yeah, this happened - "I've been in a horrible car wreck, I need you to send me some money, and I don't want to tell my dad because he's going to be so angry at me" - which is a great line, because everybody knows a grandchild is not going to want to talk to their dad about getting into some situation they don't -

Daniel Forkner:

[34:18] Wait when did this happen?

David Torcivia:

[34:19] This happened like a year ago. Somebody was pretending to be me, and it wasn't my voice, but on the phone, you know, you can sort of disguise it, especially when these scams prey on the elderly. Their hearing is worse, it's degraded, so sometimes it's hard to tell - I mean, I'll call my grandpa and he won't recognize me until I explain who I am.

Daniel Forkner:

[imitating David’s Grandpa]: “Is that you Michael?”

David Torcivia:

[imitating David’s Grandpa]: “It’s Sonny Boy.”

I love you grandpa.

[34:39] But what happened was, luckily, he realized "well, my grandson lives in New York; he doesn't even own a car." He'd heard of these scams before, and he realized he was being targeted, so he just hung up and told the scammer "don't call me or try to scam me again."

Laser Phishing

Imagine the same thing happening, except with this Adobe VoCo technology - with these abilities to generate very accurate audio recordings, and in real time. So now I can talk to somebody's grandparent, because people have so much content online - dumb podcasters who put out hours and hours of their voice samples…

Daniel Forkner:

[35:11] Just tons of beautiful voice samples just begging to be stolen and….

David Torcivia:

[35:17] Yeah, exactly, we've given our voices away so much - YouTube videos, Instagram, voice recordings, phone calls - all these things can be collected, combined, and used to generate speech, and some of these technologies only need a couple of minutes of audio. One to three minutes. I mean, the one we used at the beginning of this show, while it's primitive at this time, really only needed less than 30 sentences to generate this stuff. That's only a couple of minutes' worth of audio. Degrade it through some technology, play it for an elderly person, and very quickly you can see how this can get out of hand - and there is nothing we have that's preparing us for this. These phishing scams are already a huge problem, and when they can be customized with accurate-sounding audio…

That’s game over.

Daniel Forkner:

[35:56] I don't think it has to be the elderly, David. If I got an email or a voice message tomorrow that sounded just like you and it said "Daniel, hey, I had an accident, there was a subway problem, can you please send me this, blah blah blah…" - obviously I would be skeptical after working on this episode, but maybe I wouldn't even think about it. I might say "oh, my friend is in trouble; he needs help from me; I'm going to do it."

I actually brought up this point with my mother, and she had a really good idea for dealing with this in the future. When I was a kid, we had these code words, and it worked like this:

My parents told me: "Daniel, if anyone - a stranger - ever tries to pick you up and says 'your parents told me to pick you up, they're in trouble, and you need to come with me immediately,' well, just ask them what the code word is. If they know it, you know it's real; and if they don't, run away from them."

And that's immediately what she thought of when I told her about this, and I think it's actually a good idea: we should have a code word, David, so that if I need help from you, if I'm in an emergency and it's real, there's a word or phrase that only you and only I know, so that we can trust it.

I think that code should be…

David Torcivia:

[37:02] No you can't say it on the air then you're ruining it…

Daniel Forkner:

No we can trust our listeners, they’re not –

David Torcivia:

Oh that's true they’re not going to scam us.

But this goes back to intelligence agencies, who have had to deal with this technology for decades, because what they have - in terms of the ability and the funds to generate these fake voice recordings, these voice style transfers - vastly exceeds what is openly and publicly available in the world for the rest of us. And so, like Daniel said, you would have these code words between agents. If they said one word, it meant everything's fine; if it was another word, it meant "don't listen to what I'm saying." Sometimes code phrases, simple ways of saying hello - I mean, it gets complicated; we don't need that. But just having a single word that identifies you, where I would say "hey Daniel, it's David. Dolphin," and then he would know that it actually was me.

This might be something we start doing all the time. There are some more interesting technological fixes, but short of integrating those into almost every single piece of tech that we communicate on, well, this is a very simple solution that works really well in the meantime.
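(One way to formalize the code-word idea, sketched here with Python's standard library: a challenge-response on a shared secret, so the word itself is never spoken where a scammer could record and replay it. The challenge-response framing is our own extrapolation beyond what's described above, and the secret is invented.)

```python
# Sketch of a challenge-response version of the code word. Unlike saying
# "dolphin" aloud (which a scammer could record and replay), the secret
# itself is never transmitted.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"dolphin"  # hypothetical; agreed on in person

def make_challenge():
    # The person being asked for money issues a fresh random challenge...
    return secrets.token_hex(8)

def respond(challenge, secret=SHARED_SECRET):
    # ...and the caller proves knowledge of the secret by keying an
    # HMAC over the challenge with it.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, secret=SHARED_SECRET):
    expected = hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
assert verify(challenge, respond(challenge))                 # real caller
assert not verify(challenge, respond(challenge, b"guess"))   # imposter
```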

A Perfect You

Okay, so we've been talking a lot about ways you can take advantage of people with this technology, and also, for some reason, a lot about sex and dating. Well, I want to continue that a little more right here, with our billion-dollar idea of the episode - so entrepreneurs out there, get ready, this one's going to be great.

So we all know Snapchat with its facial recognition mode: you click on your face, hold it there, and it does silly things, like putting you in a silly mask or whatever. And Snapchat is as much an intelligence company as anything - they have huge contracts with US government intelligence departments, and they are soaking up all this facial recognition data - don't use Snapchat…

But that aside, part of the reason people use these filters so much is that they actually alter your face. The very popular, much-maligned dog-face Snapchat filter adds little puppy ears and a little puppy nose, but it also warps your face: it makes it look slightly longer, intensifies your cheekbones, and in general makes you more attractive, which is why so many people use it even if they don't realize what's going on, because it's masked by the 3D add-ons - the ears, the nose.

Well, it's also making you, for lack of a better word, sexier. It doesn't take a very large leap to imagine taking these technologies and shifting them into an app that does this without the little add-ons that show you it's in fact fake. So I should be able to take a picture, a selfie, and it just instantly makes me hotter. There are apps that do this, but typically they push it too far and people realize what's going on - but what happens when it's a very subtle thing? It makes it easy to lie, to "catfish," especially when so much dating happens online and those pictures are your initial contact with somebody, and so important to it.

Even more, taking this technology to a sexting level, if I take a full body nude with this machine learning body enhancing technology, well it's not hard for it to make me more tan. To give me abs. To make my arms look more defined. And of course, to make my dick look bigger.

This technology is coming. It is going to be worth billions of dollars, it already -

Daniel Forkner:

[39:57] I don't need that technology.

David Torcivia:

[39:59] Yeah well, some of us do Daniel.

But it's coming. It's going to make your tummy smaller, it can make your boobs bigger - whatever it is that you want, whatever you feel bad about, this technology is going to feed into those fears we have that our bodies aren't hot enough in whatever way we want to attract somebody with, and it will make it very easy to lie automatically.

And for those people running those troll farms, those bot factories far away where they're trying to capture people and catfish them into sending money, well, it's going to be that much easier with this tech. It is coming; if it doesn't already exist, the technology is out there, and somebody just has to put the bits and pieces together, and we're going to have lies in sexting even more so than our lighting and our body poses already provide.

Daniel Forkner:

[40:40] Well David, those are great personal reasons why someone might want to use this tech to make themselves look better, and how does that play into this culture of individualism, of consumerism, of these values around things that just don't matter, that's eroding our idea of what it means to be connected to people in a real way, as opposed to a fake way, a sold way…

I think there are huge discussions that should be had around that. I think people should think about how all this could affect the way we view ourselves and interact with other people. But there's also - you mentioned at the very beginning of this episode - legal frameworks, and I think there's going to be a problem as this tech enters the court system, in terms of evidence.

Evidence, Courts, Legal Systems

We already pointed out, in last week's episode with Moriah while you were on vacation David, how the Supreme Court ruled that evidence obtained illegally can still be used as long as police were trying to follow the law. And police justify that by saying "look, this technology in general just evolves so fast that the legal framework does not have time to catch up with it." I wonder if we're going to face the same thing with this fake technology - or real technology creating fake things - as applied to the legal framework we have, one that results in people going to prison.

David Torcivia:

Or worse.

Daniel Forkner:

Or worse. You know, forensic evidence for example - we've talked about this off air, David - there are a lot of problems with it, and we've all heard of cases of people who were in prison for 20 years based on DNA evidence that was later found to be not accurate at all, and they weren't guilty of committing a crime. But we place huge weight on video, photographic, and audio evidence in courts in order to convict people, and if we're dealing with technology that can create fake audio and fake video - given that our courts move so slowly at evaluating this technology, and that those trying to defend themselves without enough money don't get the best representation - I think there are huge opportunities for abuse in that system.

David Torcivia:

[42:48] Yeah, so much of our court system is based on "what can we see? What evidence is there that shows this is 100% actual?" A lot of that is CCTV, a lot of it is audio recordings, a lot of it is photographic evidence.

Because as we know through decades of research and experience, the fact of the matter is eyewitness testimony just isn't that accurate, even if it's the gold standard of a lot of court cases. Because of that, video has taken over as the de facto standard, but even without any of this editing, video can be misleading. It can lie; it doesn't show everything that's going on, and we interpret it as truth even if it maybe isn't.

So when we started researching this episode I looked at everything here as “this is terrible. We're facing a terrible apocalypse of information, of news coming in,” but some of this I don't actually know if it's such a bad thing, after looking more into it and thinking more about it.

One of the areas where that's the case is here, in court. Right now our legal framework is built around having this de facto evidence that you can't disagree with - audio recordings, video recordings - and because of that we've seen a lot of pushes, like in police departments, for body cameras. And these are one of the few areas where people are doing video recording correctly, at least in terms of verifying that what you're seeing hasn't been altered.

A lot of this footage is recorded with a cryptographic hash showing that it has not been edited or altered - which is something we'll talk about later in terms of solutions to some of these problems - but still, even with that, body cameras don't do a great job of showing everything that's happening. The New York Times did a great piece on this, showing a hypothetical police shooting where the cop was running a body cam, but with other cameras positioned around the scene, showing how, depending on which perspective you looked from, who was at fault changed: whether the cop was guilty, or whether the individual who was hypothetically shot by the police officer was in the right or in the wrong.

[44:29] And depending on which video, from which direction, you looked at, you saw very different stories. And so maybe, if we get to a point where we don't accept a video as definitive fact just because it's a video, we get into an area where we analyze these things more like they should be analyzed: as a sort of grey area where some things are correct and some things are wrong, and we're in a very Rashomon situation, for fans of cinematic history.

In Rashomon, this very famous film by Akira Kurosawa, we see a murder, and the story is told from seven different perspectives. At the end of the film you're sort of grasping at who is right and who is wrong. I mean, it's a silly fictional example, but it's much closer to the truth than simply relying on a single angle - or maybe a couple of angles - of video as definitive, actual proof just because it was recorded. We need to remember that even in situations where we think we know what's going on, video isn't so trustworthy, even without all the editing we were talking about earlier in this episode.
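(A minimal sketch of the hash verification David mentioned for body-camera footage: fingerprint the file at recording time, escrow or publish the fingerprint, and any later edit - even to a single frame - changes it. Python standard library only; the file name is hypothetical.)

```python
# Hash-based evidence verification. A matching digest proves the file is
# bit-identical to what was recorded; as David notes, it cannot prove
# the camera saw the whole story.
import hashlib

def fingerprint(path, chunk_size=1 << 20):
    # Stream the file through SHA-256 so even multi-gigabyte footage
    # can be hashed without loading it all into memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At recording time: compute and escrow/publish the hash.
recorded = fingerprint("bodycam_unit12.mp4")  # hypothetical file
# Later, in court: recompute and compare.
assert fingerprint("bodycam_unit12.mp4") == recorded
```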

A Silver Lining?

Daniel Forkner:

[45:19] So let me just guess where you're going with this, David, because you said you think this technology could be a good thing. Well, you just said video is not trustworthy, and we're looking at a future where video can be edited to present something that looks real but isn't - and you think that's a good thing… is it because, if we lose faith in something that's already flawed, we'll stop using it in a way that has been misleading us for a long time without us realizing it?

David Torcivia:

[45:47] Yeah, that's a really excellent summation of what I'm trying to say here: video is inherently flawed because it purports to show truth, but reality is complicated. There are a lot of different angles, grey areas; there are things we don't see, things we don't know, motivations and tiny little details we miss out on that contribute to a larger, actual "mystical truth" (because is there truth? Let's not get too philosophical with this). The less we trust video in the first place, and the more we question it and say "well, this is just a singular part of that truth," the better off we are as a society, and very much so in our legal framework.

Of course, the courts aren't designed for that. Even though they existed for hundreds of years without this video evidence, they've really shifted in the past few decades toward depending on this stuff, and we're going to have to go back to a way that, frankly, worked quite well before. We'll recover, but there are going to be a lot of legal storms and confusing areas to navigate our way around along the way. I think in the end we're going to get there, and we'll be better off because of it… uhh, if civilization doesn't collapse before then, anyway.

Daniel Forkner:

[46:48] There's the Dark David that I know.

David Torcivia:

[46:50] There I am, I'm back. Let's keep going though; I don't want to get too bogged down on this -

Daniel Forkner:

[46:54] Now that we've finished listening to that fake news David with his good news.

David Torcivia:

[46:58] Was that even me or was that a voice recording of me generated by some third-party?

Who even knows?!

[47:05] Just quickly we want to touch on a couple economic concerns before we tie this all up, because while we talk a lot about social, political, and legal frameworks, a lot of the motivation of this technology is in the end ultimately economic.

Daniel Forkner:

[47:17] And a lot of this might be kind of obvious that “of course a lot of this is driven by financial incentives,” but it’s worth pointing out.

Economic Incentives

So we mentioned those Macedonian teenagers, and that's just a small part of a larger troll farming operation that has been going on in Macedonia - but we shouldn't just target that country. It's a niche that a group of people found there, and it spread to a lot more people: this practice of creating fake stories and sensational news to drive people to click on things, which they could then generate ad revenue from. But this is happening all over the world, obviously, and will happen increasingly as there is more and more money at stake in controlling what people do and controlling what information people see - because that ultimately influences their behavior, and their behavior drives purchases, it drives votes, it drives all the types of things that those in power want in order to maintain power and to grow their interests.

David Torcivia:

Fake Reviews

[48:15] So while we're over here talking about fake news, one of the bigger problems, one we actually maybe encounter more routinely than fake news, is fake reviews. There's a huge problem on Amazon and other websites where most of the reviews you see for products are simply lies.

[48:30] This is true on Yelp, this is true on Amazon. A lot of these reviews are just written by bots, or by people paid in third-world labor farms to generate lies about a product in order to sell it, and this happens a lot in app stores as well, even in things like social media posts. If you want upvotes for Reddit posts, well, you can buy those very cheaply; it's something like $5 for a thousand upvotes, or a hundred upvotes, or so.

By purchasing these social endorsements, whether from fake friends or just strangers on the internet, products can be pushed higher up and you're more likely to buy, because one of the major things that influences people to purchase a product is other people telling them the product is good. Whether that's astroturfing bots writing comments on websites, posting to communities about purchases, like Reddit's "buy it for life" for example, or posting reviews on the retail site itself saying this is a great product in order to bury bad reviews or move it higher up in algorithmic rankings, this is a giant problem, because a lot of the time we're not getting the best product. We're buying the one whose producer decided to take the least ethical approach to pitching that widget to us.

Daniel Forkner:

[49:35] Preying on that human heuristic “social proof.”

David Torcivia:

[49:38] Exactly, and in that case are we really getting the best product, if the person producing it is so willing to jump into black-hat, gray-area methods of pushing their wares?

Daniel Forkner:

[49:48] That is a good point: if a company is willing to resort to insidious measures to manipulate us into using their stuff, they're probably also willing to skirt the lines of standards in creating that stuff for us in the first place.

And that also goes back to Bernays, which is: sometimes it's more useful to just change people's habits and their entire lifestyle, so that they are forced into doing something and buying something without ever realizing it isn't their only option.

Commodifying Us

Obviously another aspect of this that we've touched on is the intellectual property component, and this is something we'll have a much bigger show on - David, you're very immersed in this topic - but we talked about how this technology can be used to fake or generate the likeness of someone in things like film, and that raises all sorts of questions about what it is that we truly own in terms of a person's likeness and mannerisms.

Is our image truly our own, or can it be bought and sold by entertainment companies that want to capitalize on it for future profits? What does that mean in terms of the way we commodify people? When you can take a person and their likeness, and craft any personality you want out of them, any image, any facial expression, any audio, then every single bit of that person becomes a commodity, becomes something that can be crafted. I think that's dangerous for our conception of, I mean, really, this may sound a little cheesy, but what it means to be human, when we can sell every part of what it means to be a person.

[51:25] I think that has the potential to erode the value that we place on those aspects of being human.

David Torcivia:

[51:32] These are great questions Daniel, and something I want to devote at least a whole episode to in the future, especially the IP relationship in all of this. What do we own? What is the intellectual property of ourselves? It's a huge question, and well outside the scope of the tail end of this ever-lengthening episode, but these are things to start thinking about now. And if you have thoughts while listening to this, well, let us know, because we want to integrate them into that episode when we talk about these very subjects.

But moving on from that, maybe we should start talking about what it is that we can do for this incoming post-truth world.

What Can We Do?

Daniel Forkner:

[52:06] What can we do David?

That's a great question, and it's what we try to ask and answer at the end of all of these shows. The thing that comes immediately to mind is that we need to, again, raise the value that we place on real-world, genuine interactions with people. Focus on connections with people and the communities we can build around them, and try to step away a little bit from this technology except where it's absolutely necessary. We have to remember that technology is ultimately a tool, and tools serve the values that we have; that's the order it should be. Values first, and then tools to enhance those values. So if technology is being used in a way that's not enhancing our value of human connection - if that is a value that we have; I don't want to impose values on anybody here, but if it is - then we have to evaluate these tools along those lines.

But maybe more practically and more immediately, in addition to those secret code words that we discussed: journalist Sharyl Attkisson gave a talk at the University of Nevada in which she addresses some of the issues surrounding the astroturfing and bots that we've discussed, and she gives us a good framework for thinking about this from an individual standpoint, which is recognizing when information and stories are being presented with obvious bias. There are a couple of ways to recognize misinformation, disinformation, and disingenuous information.

Number one is the use of inflammatory language: when a story uses emotionally charged and derogatory words like "nutty," like "conspiracy," like "crazy," "crank," "hack…"

David Torcivia:

[53:48] I get that one a lot.

Daniel Forkner:

Do you?

[53:52] Emotionally charged words like this, okay, this is inflammatory language, and it could be a signal that the article or information you're reading is disingenuous.

Another one is any perspective that claims to debunk myths that don't actually exist. We see this a lot; it's used in clickbaity, sensational pieces, like "oh, you know, we de-myth this political perspective," when maybe that myth doesn't exist in the first place. It's simply fabricated, and now the article gets to tear it apart, which is very easy to do because it's already a trumped-up, very unrealistic thing. Be wary of that, and recognize when it's going on.

Another thing to look out for is when pieces attack the people and the organizations around an idea rather than the idea itself. Right, that's very basic, the ad hominem fallacy that we've all heard about at some point, but it's still sometimes easy to miss when you're not being very analytical about the information you're consuming.

[54:49] And finally - and this is a really good one David, as we get a lot of these now - she says to question those that criticize whistleblowers, and those that point out issues… um, maybe like hosts on Ashes Ashes....

[55:03] These are articles and stories that criticize those who are questioning authority, as opposed to questioning authority themselves. That could be a sign, not necessarily in every case, but it could be a sign that something has an ulterior motive.

David Torcivia:

[55:18] That's all very well and good Daniel, and it makes me sound like I have to be some sort of Greek philosopher or logician in order to analyze the news at every moment, but it -

Daniel Forkner:

[55:26] I guess you got a long way to go.

Don't Ask What But Why

David Torcivia:

[55:28] Yeah, a very long way to go, but I think we can break it down to something much simpler than that. Instead of asking "what is this trying to say," like "what's going on, is this real or not?", just go back to something that we brought up in that episode about Edward Bernays and propaganda, and ask not "what," but "why?"

[55:46] Why? Why is this being pushed right now? Why is this story in the news, why are we talking about this instead of that?

So whether this thing is real or not is beside the point, because it has caught on in the media and people are talking about it for some reason, and questioning why that's the case is far more valuable, far more important, and far more potent in analyzing the news than the question of whether it's authentic or not.

Daniel Forkner:

[56:08] David that is a great point because that question, the “why” question, really encompasses everything. If you're just asking “is this real or not?” you're only covering half of the particular issue, because even if something is real, like you said and like we talked about earlier, sometimes it's the information that's not being presented that is the larger story.

And so we have to constantly ask "why." What are the motives behind a particular thing being pushed in the media?

One relevant example of this… so Moriah King was on the episode last week, and she was in DC for the "March for Our Lives" protest. Not to say anything negative about the people that were protesting or the messages they're fighting for, I think that's beside the point; the point is that it was very controlled. There was only one entrance, on a street guarded by military trucks and personnel, through which you could even enter to participate in this march, which by the way had a permit from the government. It was organized, so all the scripts were already predetermined. The government and the organizers of the march already knew what was going to be said. There was a specific time at which it would start and a specific time at which it would stop, and this message has been getting a lot of attention in the media.

David Torcivia:

[57:23] And at the same time that this is going on, that this national conversation about guns is occurring, well, all of the teachers in West Virginia walked out of their schools. They went on strike for a week and a half to ask for a number of things from the state government, and we didn't hear about this at all.

A whole state didn't have school for a week and a half and nothing happened? And then the same thing was happening in Oklahoma, where teachers also went on strike, but we've been so busy pushing this other narrative that the media doesn't want to talk about these things that are actively impacting more people, even though it's the same arena. This is a conversation about people being hurt in schools; well, when teachers are being hurt, we ignore it, because it's not something the media is interested in pushing at the moment. It goes against a lot of economic narratives that we've been talking about as a country.

So instead that news is buried and we focus on something else. And we're allowed to say these things, but let's say these kids organizing these protests suddenly began asking to disarm police officers in addition to the public; well, suddenly they would disappear. They wouldn't be allowed to say the things that they are, they wouldn't be given national air time, because those messages go against what the media wants out there.

And I don't want to get too deep into this conversation, because I've got my tinfoil hat on and we're back to conspiracy, conspiracy, conspiracy again, but these are the kinds of things to remember and to always think about in terms of "why."

Daniel Forkner:

[58:39] Good point David. Again, something we will explore in a later episode is how these messages become the prominent messages in our media outlets.

David Torcivia:

[58:48] Two more things I want to touch on just very quickly. So we briefly mentioned cryptographic verification. This is another thing we can do in order to prove that audio and video is genuine, and it was one of the concerns Adobe had with their VoCo product: they were going to embed some sort of hash within the re-engineered, generated audio, in order to verify that the audio is either untouched or has been altered by this technology. We see the same thing with body cams, whose footage is cryptographically signed so that we know it hasn't been edited. This doesn't exist in most cameras yet, but it would be trivial for camera manufacturers to add this technology if the will is there.

So as consumers we can pressure cell phone companies, as well as camcorder and digital camera manufacturers, to ask for this feature. It would already be extremely valuable to journalists even without everything else we've discussed, so this is something we could introduce very quickly. We can do the same with voice communications, with FaceTime, or with just regular phone calls, but again, these are problems that we have to shift to technology creators.
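To make this concrete, here's a minimal sketch of how capture-time signing could work, written in Python with the widely used `cryptography` library. Everything specific here, the file name, the key handling, is purely illustrative and not any real camera's firmware: an actual device would keep its private key in tamper-resistant hardware and publish the matching public key through the manufacturer.

```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def file_digest(path: str) -> bytes:
    """SHA-256 of a file, read in chunks so large videos fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


# On the camera: a device-specific private key signs each recording's hash.
# (Hypothetical setup; real firmware would load a factory-provisioned key.)
device_key = Ed25519PrivateKey.generate()
signature = device_key.sign(file_digest("clip.mp4"))  # "clip.mp4" is illustrative

# Anywhere else: the device's public key verifies the file is byte-for-byte
# identical to what was signed at capture time.
public_key = device_key.public_key()
try:
    public_key.verify(signature, file_digest("clip.mp4"))
    print("Signature valid: footage unmodified since capture.")
except InvalidSignature:
    print("Signature invalid: footage altered, or signed by a different key.")
```

Note what this does and doesn't prove: a valid signature shows the file hasn't changed since the device signed it, but it says nothing about whether the scene in front of the lens was staged, which is exactly the Rashomon problem from earlier.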

In the meantime we do have things like the code words Daniel mentioned, enabling at least some modicum of trust between ourselves while we're communicating.

The last thing I want to briefly touch on is just a reminder of the virality of news, and the idea that everything out there is sifting around this "marketplace of ideas." This marketplace isn't a fair one, so inflammatory, explosive, outrage-driven pieces of news, or clickbait, or whatever it is, are much more likely to explode out there than a well-researched long-form article that is difficult to consume and calls into question some of the things you know or believe to be true.

[1:00:19] One of these techniques: it's very easy for me to come out and publish something that's incorrect but has an inflammatory headline. We talked about this in our sugar episode, with claims like "oh, you know, kids who eat lots of candy get thinner," and then you look at the study and it's not so true. Well, it's very easy to do this with news, where we run a headline that is incorrect, or half correct, and then later on say "oops, we were wrong about this" and issue a correction, but nobody ever reads the correction.

Instead, the initial headline, the inflammatory and exciting one, gets shared and becomes the accepted fact.

Daniel Forkner:

[1:00:50] Once it happens it's too late to correct it.

David Torcivia:

[1:00:53] Exactly, and this is becoming a sort of toolkit that PR companies use to push things. They know they're being ethically dubious with it, but they can always recover in the end by saying "oh, our mistake, we didn't mean that," and that part gets left out of the conversation.

This virality attaches to the interesting piece of news, not necessarily the correct one. So remember that when you share articles: don't instantly jump on something just because it's been shared a lot, because it's easier to consume, because it's a meme with a couple lines of text. Instead, maybe focus on sharing the better-quality pieces. You're going to be reading fewer of them, consuming less, but you'll probably be better off for it.

Of course, length alone isn't a measure of quality, but it is something to keep in mind in terms of these very short pieces and how they spread.

Daniel Forkner:

[1:01:35] And if you're still skeptical out there, there is evidence that just being aware that misinformation can come our way is enough to inoculate us against some of its larger effects.

There was a study carried out at the University of Cambridge that took a group of people and exposed some of them to a small "dose" of misleading information, and by being primed in this way they became more likely to be skeptical of false information in the future. The conclusion of this paper was, quote, "finally, preemptively warning people about politically-motivated attempts to spread misinformation helps promote and protect, or inoculate, public attitudes about the scientific consensus."

So just as David mentioned that the scientific process can be hijacked, like we discussed in our sugar episode, bringing awareness to that fact can go a long way toward preventing people from being duped by disingenuous and misleading information.

So put on your tin foil hats and join us next week for another episode of Ashes Ashes, where all your information is…

David Torcivia:

Fake. Fake news.

Fake David Torcivia:

If you want to learn more about any of this, and read detailed sources showing that this isn't all in fact fake news, you can find those, in addition to a full transcript of this show, on our website at ashesashes.org.

Fake Daniel Forkner:

A lot of time and effort goes into making these shows possible, and we will never use ads to support this show, nor will we ever purchase ads, effective as that might be, so if you enjoy this show and would like us to keep going, you can support us by giving us a review and recommending us to a friend.

Also we have an email address, it’s contact AT ashes ashes dot org. Send us an email, positive or negative, we’ll read it, and if you have any stories related to this show maybe we can share them on an upcoming show.

Fake David Torcivia:

Next week we're turning to the environment for a serious problem that has drastic effects on all of our health, in addition to the health of the animals around us.

But until then, this is Ashes Ashes.

Fake Daniel Forkner:

Bye

Fake David Torcivia:

Buh-bye