AB-Infocalypse Now

March 21, 2018 | comp-5005, Data science
Annotated bibliography

Infocalypse Now

Charlie Warzel (12 February 2018)

Paper’s reference in the IEEE style?

C. Warzel, “He Predicted The 2016 Fake News Crisis. Now He’s Worried About An Information Apocalypse,” BuzzFeed News, 12-Feb-2018. [Online]. Available: https://www.buzzfeed.com/charliewarzel/the-terrifying-future-of-fake-news. [Accessed: 21-Mar-2018].

How did you find the paper?

Twitter

If applicable, write a list of the search terms you used.

  1. NA

Was the paper peer reviewed? Explain how you found out.

No. This is a BuzzFeed News article, not an academic publication, so it was not peer reviewed; news articles go through editorial review rather than scholarly peer review.

Does the author(s) work in a university or a government-funded research institute? If so, which university or research institute? If not, where do they work?

The author is a senior writer for BuzzFeed News reporting on the intersection of tech and culture. He has a BA in Political Science, has held several writing and editorial positions, and has held his current role for the last 5 years.

What does this tell you about their expertise? Are they an expert in the topic area?

The author is an experienced technology reporter rather than an academic researcher: his expertise lies in covering the topic area as a journalist, not in conducting original research on it.

What was the paper about?

The article argues that the incentives governing the web’s biggest platforms were calibrated to reward information that was often misleading, polarizing, or both: platforms like Facebook, Twitter, and Google prioritized clicks, shares, ads, and money over the quality of information.

That future, according to Ovadya, will arrive with a slew of slick, easy-to-use, and eventually seamless technological tools for manipulating perception and falsifying reality, for which terms have already been coined — “reality apathy,” “automated laser phishing,” and “human puppets.”

It became clear to him that, if somebody were to exploit our attention economy and use the platforms that undergird it to distort the truth, there were no real checks and balances to stop it. “I realized if these systems were going to go out of control, there’d be nothing to rein them in and it was going to get bad, and quick,” he said.

Ovadya is now the chief technologist for the University of Michigan’s Center for Social Media Responsibility and a Knight News innovation fellow at the Tow Center for Digital Journalism at Columbia.

The greater threat, for Ovadya, is that technologies that can be used to enhance and distort what is real are evolving faster than our ability to understand, control, or mitigate them. The stakes are high and the possible consequences more disastrous than foreign meddling in an election: an undermining or upending of core civilizational institutions, an “infocalypse.”

This threat is worse because of our ever-expanding computational prowess; worse because of ongoing advancements in artificial intelligence and machine learning that can blur the lines between fact and fiction; and worse because those things could usher in a future where, as Ovadya observes, anyone could make it “appear as if anything has happened, regardless of whether or not it did.”

Already available tools for audio and video manipulation have begun to look like a potential fake news Manhattan Project. In the murky corners of the internet, people have begun using machine learning algorithms and open-source software to easily create pornographic videos that realistically superimpose the faces of celebrities — or anyone for that matter — on the adult actors’ bodies. At institutions like Stanford, technologists have built programs that combine and mix recorded video footage with real-time face tracking to manipulate video. Similarly, at the University of Washington computer scientists successfully built a program capable of “turning audio clips into a realistic, lip-synced video of the person speaking those words.” As proof of concept, both teams manipulated broadcast video to make world leaders appear to say things they never actually said.

Another scenario, which Ovadya dubs “polity simulation,” is a dystopian combination of political botnets and astroturfing, where political movements are manipulated by fake grassroots campaigns. As Ovadya envisions it, increasingly believable AI-powered bots will be able to compete effectively with real humans for legislator and regulator attention, because it will be too difficult to tell the difference.

Laser phishing would allow bad actors to target anyone and to create a believable imitation of them using publicly available data. “If every bit of spam you receive looked identical to emails from real people you knew, each one with its own motivation trying to convince you of something, you’d just end up saying, ‘okay, I’m going to ignore my inbox.’” The result, he warns, is that “[p]eople stop paying attention to news and that fundamental level of informedness required for functional democracy becomes unstable.”

In some cases, the technology is so good that it’s startled even its creators. Ian Goodfellow, a Google Brain research scientist who helped code the first “generative adversarial network” (GAN), which is a neural network capable of learning without human supervision, cautioned that AI could set news consumption back roughly 100 years. At an MIT Technology Review conference in November last year, he told an audience that GANs have both “imagination and introspection” and “can tell how well the generator is doing without relying on human feedback.” He added that, while the creative possibilities for the machines are boundless, the innovation, when applied to the way we consume information, would likely “clos[e] some of the doors that our generation has been used to having open.”
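To make the GAN concept concrete, here is a minimal toy sketch in Python/PyTorch (my own illustration, not Goodfellow’s code). It shows the adversarial loop in which a generator improves using only the discriminator’s score as feedback, with no human labels; the toy data distribution and network sizes are arbitrary assumptions.

```python
# Toy GAN sketch (illustration only; not Goodfellow's original code).
# The generator learns to mimic samples from N(4, 1.25); its only training
# signal is the discriminator's score -- no human labels are involved.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real_label, fake_label = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(5000):
    real = 4 + 1.25 * torch.randn(64, 1)   # samples from the "real" data
    fake = G(torch.randn(64, 8))           # samples invented by the generator

    # Discriminator step: learn to score real as 1 and fake as 0.
    d_loss = bce(D(real), real_label) + bce(D(fake.detach()), fake_label)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust G so the discriminator scores its fakes as real.
    g_loss = bce(D(fake), real_label)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    sample = G(torch.randn(1000, 8))
print(f"generated mean {sample.mean():.2f}, std {sample.std():.2f} (target 4.00, 1.25)")
```

The point relevant to the article is the generator step: the generator is trained solely against the discriminator’s judgment, which is what lets such systems improve “without relying on human feedback.”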

In that light, scenarios like Ovadya’s polity simulation feel genuinely plausible. This summer, more than one million fake bot accounts flooded the FCC’s open comments system to “amplify the call to repeal net neutrality protections.”

Computational propaganda researcher Renee DiResta warns that “these technological underpinnings [lead] to the increasing erosion of trust.”

DiResta pointed out Donald Trump’s recent denial that it was his voice on the infamous Access Hollywood tape, citing experts who told him it’s possible it was digitally faked. “You don’t need to create the fake video for this tech to have a serious impact. You just point to the fact that the tech exists and you can impugn the integrity of the stuff that’s real.”

Fake News Horror Show

Computational propaganda is far more qualitative than quantitative.

One proposed safeguard is cryptographic verification of images and audio, which could help distinguish what’s real from what’s been manipulated.
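A minimal sketch of how such verification could work (one possible scheme assumed for illustration; the article names the idea but not a design): the source signs a hash of the media bytes at capture or publication time, and anyone holding the matching public key can later detect tampering. This uses Python’s hashlib and the third-party cryptography package; the byte string below is a stand-in for a real file.

```python
# Sketch of signed-media verification (an assumed scheme for illustration;
# the article names the idea but not a design). Requires the third-party
# 'cryptography' package. The byte string stands in for a real media file.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

media = b"raw bytes of an image or audio clip (stand-in for a real file)"

# At capture or publication time, the source signs a digest of the bytes.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(hashlib.sha256(media).digest())
public_key = private_key.public_key()  # distributed to viewers

# Later, anyone can re-hash the bytes and check the signature.
try:
    public_key.verify(signature, hashlib.sha256(media).digest())
    print("verified: bytes match what the source signed")
except InvalidSignature:
    print("rejected: altered, or not from the claimed source")

# A doctored copy changes the digest, so verification fails.
try:
    public_key.verify(signature, hashlib.sha256(media + b"edit").digest())
except InvalidSignature:
    print("tampered copy rejected")
```

The design choice that matters here is that trust attaches to the key holder rather than to how convincing the pixels look: verification proves provenance and integrity, which is exactly the property manipulated media erodes.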

Despite some pledges for reform, he feels the platforms are still governed by the wrong, sensationalist incentives, where clickbait and lower-quality content are rewarded with more attention. “That’s a hard nut to crack in general, and when you combine it with a system like Facebook, which is a content accelerator, it becomes very dangerous.”

If applicable, is this paper similar to other papers you have read for this assignment? If so, which papers and why?

[]

If applicable, is this paper different to other papers you have read for this assignment? If so, which papers and why?

[]

What do these similarities and differences suggest? What are your observations? Do you have any new ideas? Do you have any conclusions?

[100-200 words]

This question is to be answered after your critical analysis is completed: Which sections (if any) of your critical analysis was this paper cited in?

[]