Controlling Truth: Information, Technology, and Politics

What the US Presidency Tells Us About Truth and Technology

Since the United States passed its first access-to-information (ATI) law, the Freedom of Information Act, in 1966, many other governments have followed suit. This proliferation of ATI is a recent example of the broader democratic movement to give citizens more information about, and by extension more participation in, the institutions that govern their lives. When implemented well, ATI contributes significantly to governmental transparency and accountability, thereby preventing and combating corruption and securing political power for the people.

Even so, the mere adoption of ATI legislation does not inherently improve the democratic process or increase governmental transparency. The success of such legislation is ultimately dictated by the state’s capacity for, and commitment to, actually implementing it. And although increased governmental transparency and technological advancement alike may increase the availability of relevant information, these advances have simultaneously increased the availability of false or doctored information.

Today, more than fifty years after the US pioneered ATI, the Trump Administration has established new and troubling precedents that threaten informational accessibility and transparency. This development raises both civic and institutional concerns.

First, there is, and always has been, a troubling lack of civic involvement among average Americans, especially as it pertains to existing ATI. To the extent that American ATI statutes are invoked, requests for information disproportionately represent the interests of powerful political lobbies, interest groups, and self-serving corporations rather than those of ordinary citizens. This lack of individual engagement reflects a shortage of both political efficacy and legal clarity within the American political apparatus. And although that deficit in political efficacy is unlikely to change on its own, government and media can (and should) make a concerted effort to inform the public of its right to information and thereby catalyze greater civic engagement.

President Donald Trump’s inauguration as the 45th President of the U.S.

From an institutional standpoint, by contrast, the American bureaucracy is technically fulfilling its obligations under existing ATI statutes by responding to requests for information. However, the information provided is often heavily redacted, offering little insight, and frequently fails to address the question(s) that prompted the request in the first place. All in all, these civic, bureaucratic, and institutional shortcomings pose distinct impediments to meaningful engagement with existing ATI legislation and collectively diminish its potential to increase governmental transparency.

Fortunately, the relevant literature presents numerous ways to increase governmental transparency; and in the Age of Trump, unimpeded access to information has never been of greater import. Needless to say, the Trump Administration has a well-established and troubled relationship with the truth. Consider the 2016 US Presidential Election, in which #FakeNews became both a social (media) and political phenomenon, or the invocation of #AlternativeFacts by Kellyanne Conway and Sean Spicer in the days after Mr. Trump’s inauguration.

Collectively, this administration has sought to actively discredit the truth and to legitimize narratives and accounts that are at best misleading and at worst unequivocally false. This trend began in 2017 and, a mere two years into Mr. Trump’s term, has persisted despite the best efforts of the American bureaucracy, legislature, and media to ensure otherwise. Most disturbingly, it further undermines the ability of the American people to make informed decisions when electing their leaders.

Furthermore, if any single certainty arose out of the 2016 Election, it was the unprecedented degree of digital and informational warfare unfolding before our very eyes. This development was perhaps best highlighted by the Russian meddling that ultimately prompted the appointment of Robert Mueller as Special Counsel within the Department of Justice.

Special Counsel Robert Mueller testifies before Congress.

Indeed, recent revelations have only further highlighted the extent to which adversarial Russian agents, at the behest of Russian President Vladimir Putin, sought to undermine American sovereignty through a campaign of misinformation waged across existing information networks. In this sense, although technological advances like social media and mobile phones have certainly increased the accessibility of information, they have also increased the ease with which false information can be disseminated.

Meanwhile, as we near the 2020 US Presidential Election, new advances in technology like artificial intelligence (AI) pose a previously non-existent threat to the integrity of information and, by extension, to American national security. Consider the rise of the deepfake: a highly realistic yet entirely fabricated video made possible by recent advances in AI. Deepfakes can be so convincing that it is extremely difficult to determine which footage has been manipulated and which has not. Altering video has been possible for decades, but doing so required time, highly skilled artists, and substantial sums of money; now, less than 18 months before the 2020 election, deepfakes and similar technologies are changing that. The implications of this advancement are boundless. As the technology develops and proliferates, anyone could gain the ability to make a convincing fake video, including those who might seek to “weaponize” such a video for political or other malicious purposes.

The rise of AI has numerous implications for both domestic politics and international relations.


From a domestic standpoint, and given the United States’ presently divisive political climate, the White House actively repudiates any news or information that presents a damning or politically inflammatory narrative about Mr. Trump. Imagine if such technology had been available in October 2016, when The Washington Post released the Access Hollywood tape in which then-candidate Donald Trump bragged about sexually assaulting women. I suspect that, had such technology existed, Mr. Trump, who already has a tumultuous relationship with the truth, would simply have claimed the tape (and its audio) to be a computer-manipulated version of himself, thereby affording himself a modicum of plausible deniability. Although the President never truly acknowledged or took ownership of his actions, deepfake technology empowers those who find themselves in controversy (as the President often does) to plausibly dismiss allegations or evidence as ‘fake news’.

“The intelligence community is extremely concerned about the rise of deep fake technology, we already struggle to track and combat interference efforts and other malign activities on social media—and the explosion of deepfake videos is going to make that even harder.”

Senator Mark Warner, leading Democrat on the Senate Intelligence Committee

From an international relations standpoint, such technology demands similar attention. Accordingly, in September 2018, three members of the House of Representatives, including Representative Adam Schiff, who now chairs the House Intelligence Committee, wrote to then-Director of National Intelligence Dan Coats expressing their concern that deepfake “technology could soon be deployed by malicious foreign actors.” In the months since, the Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), has commissioned researchers across the United States to begin developing ways to detect when a video is a deepfake.

For the sanctity of truth, the development of technology that can counter deepfakes is of the utmost importance. In its absence, it will likely prove impossible for consumers to determine the legitimacy of the information before them. According to selective-exposure theory, we live in a social and political climate in which most people seek out information that reinforces their pre-existing ideologies rather than information that might challenge long-held beliefs; in other words, people tend to disregard information that does not confirm what they already believe. All in all, this foreshadows an unprecedented epidemic of (mis)information, one that may forever change the media and how we interact with it.

Edited by Selene Coiffard-D’Amico.