Unmasking the Alt-Right: Real-World Implications of a Virtual Movement

The first article in this series discussed the psychology behind alt-right radicalization online—who is most vulnerable to extremist indoctrination, the risk factors correlated with that vulnerability, and the forms the process of radicalization can take. The next question concerns the actual implications and dangers of online alt-right extremist movements and their recruitment efforts. We have seen too often that radicalization in the virtual world can lead to real-world violence; terrible acts of mass violence, such as those in Christchurch and El Paso, have been connected with members of the alt-right online. But are these acts of violence the only dangers posed by the activities of the alt-right online? This second part of the series explores whether, in a hypothetical world devoid of the tragedy of alt-right violence, the virtual activities of the alt-right would still have real-world implications.

Humour and Hatred

There is a tendency in alt-right activity to weave dark humour into online content, meaning that the rest of us often struggle to distinguish between honestly held, hateful beliefs and ironically presented hate speech. A leaked style guide from a neo-Nazi site shows that this is, in some cases, a deliberate tactic of the alt-right: “The unindoctrinated should not be able to tell if we are joking or not.” Anonymous imageboards like 4chan are not dedicated to any specific ideology, which makes it especially difficult to determine whether users actually believe the appalling things they write, or whether it is all part of some ironic, competitive game to see who can say the most abhorrent thing, or share the most despicable, hateful meme.

The first article in this series discussed the dangers of anonymous-poster imageboards like 4chan. This danger stems from the power such sites have to provide exposure to alt-right content and to engender a link between violence, hate, and commonplace humour in people’s minds. Inherent to the structure of 4chan itself is a tendency to provide “gateway content” to extremist discourse—with a single click, users can be taken from one thread discussing video games or current events to another thread full of hate-ridden rants against minorities and women. 4chan users’ near-ubiquitous use of darkly ironic humour with regard to almost any subject means that terroristic violence, hate crimes, and bigotry are talked about in a way that makes people laugh and belittles their importance. This creates a link between toxic content and humour: people find themselves amused by hateful thoughts—whether they accept them or not—making hate speech appear more readily acceptable and less harmful.

Dr. Ghayda Hassan is the founder and director of the Canadian Practitioners Network for the Prevention of Radicalization and Extremist Violence (CPN-PREV) and a UNESCO Co-Chair on the Prevention of Violent Radicalization and Extremist Violence. She gives credence to the idea that, in certain cases, users of sites like 4chan may be participating in a collective exorcism of hate-feelings which they don’t actually accept. That is, people may congregate online and express feelings of hatred which they neither accept nor believe; indeed, they may even appear to advocate hate crimes without actually subscribing to any part of what they are saying.

“We all have hate-feelings,” says Hassan—having hate-feelings is part of being human. Accepting or acting on these very human feelings, however, is not natural, and both can lead to discourse and actions which the majority of us would consider utterly inhumane. Normally, says Hassan, we have “positive social spaces” in which we can experience and express our feelings of hate without it meaning that we actually want to destroy the target of our hate. The danger comes when these constructive spaces are replaced by unmediated online platforms where “real-world regulatory processes” no longer exist. In the real world, people around us react to our expressions of hate and help us to understand hate-feelings and how to deal with them in a healthy way. Online, however, anyone can say anything anonymously, and there isn’t always a constructive reaction or discussion that follows to help somebody work through their hate-feelings in a positive way, explains Hassan.

There is the obvious concern that online alt-right content and recruitment efforts will generate more hate and inspire more radicalization, or at least expose more people to extremist ideologies than would be the case without this virtual outlet. While this may be true, expressing hate-feelings online, even without mediation to help interpret and dismiss them, doesn’t mean that an individual will act on those feelings in real life, says Hassan. The risk is certainly greater, however: as the divide that once existed between the virtual world and the real world disappears, activities in the virtual space can bleed into the real world, desensitizing people to violence and prejudice and normalizing expressions of hate.

Dr. Hassan points out that, while there is certainly no guarantee of social upheaval in the foreseeable future, a look at history shows us that increased polarization of the population and increased “hate around otherness” can contribute to civil conflict. Thus, radicalism and expressions of hate speech that occur in the unmediated arena of the Internet have the potential to increase hatred towards others. “Previous wars have all been accompanied by highly de-humanizing discourse,” she continues, “and we are definitely preparing the social space for a more tense relationship that, [in the] long-term, may lead to civil unrest or disruption.”

A member of the alt-right holds up a sign calling for the movement to unite. “Donald Trump alt-right supporter” by Fibonacci Blue is under license CC BY 2.0 on Flickr.

“An intense culture of fear”

Carrie Rentschler, an Associate Professor of Communications Studies at McGill University, focuses on social movements and media activism. She points to online hate speech and the alt-right’s tactics of doxxing, as well as rape- and death-threats, as culprits in the “creation of an intense culture of fear”. In such a culture, people, particularly women, are afraid to speak out against hate and condemn the alt-right for fear of retaliation against themselves or those close to them. Whether or not rape- and death-threats are realized, the fear of that realization is often enough to silence would-be vocal opponents of the alt-right. The very threats made by radicals online are harmful, says Rentschler; whether a threat is carried out or not, “the threat itself says ‘you are not safe.’”

Examining the practice of doxxing gives us a better understanding of just how frightened targeted individuals can be. Doxxing is an attempt to stop the target from participating in a certain kind of discourse, engaging with a particular cause, or challenging a specific group or ideology online. Doxxers destroy an individual’s privacy and anonymity online in an attempt to legitimize their threats and make the target feel exposed, vulnerable, and afraid. They will reveal the target’s real name, occupation, and face, and even the addresses of their loved ones. By frightening and shaming their opponents into silence, doxxers create a culture of fear wherein people are threatened away from voicing their opinions. It is a tactic used by both the alt-right and their opponents, and has unfortunately become increasingly mainstream.

Rentschler explains that when people are afraid to challenge hateful ideologies for fear of harm being done to them or their families, our society becomes one in which the hate-mongers are empowered, because there are fewer and fewer voices to counter hateful, radical ideas. Rational discourse and debate can no longer take place when too many people are frightened into silence, and hate speech becomes dangerously prominent and powerful in such a society.

Members of the alt-right face off against members of DC United Against Hate during opposing demonstrations in 2017. Issues of free speech become complicated in the case of extremism, especially extremism with roots in online activity–who can speak, and more importantly, who can’t? “Alt-Right Free Speech Rally 1” by Stephen Melkisethian is under license CC BY-NC-ND 2.0 on Flickr.

What’s to be done?

Online hate speech has the potential to indoctrinate non-radicals into extremist ideologies, normalize certain kinds of violent or discriminatory discourse, desensitize people to the idea of viewing or even committing acts of violence, and create a society in which people are afraid to stand up and speak out against hatred and violence. Knowing all this, one might ask: what steps can we as a society take to combat the powerful effects of online extremism, and to protect ourselves from its influence in the real world?

Rentschler emphasizes the importance of de-platforming, moderation, and holding people to higher standards on internet platforms. There is promising evidence that banning users who share hateful and extremist content on social media platforms is an effective way to reduce the amount of hate speech that is shared. “Holding people to particular standards online is essential,” says Rentschler, but currently, “there is very little in place to stop online hate speech and harassment”. “Likes”, popularity votes, and upvotes are key to the proliferation of extremism online, Rentschler explains; the more popular content is, the more available and accessible it is made. This means that the more “likes” an alt-right post receives, the more mainstream exposure it might be given.

In terms of developing a solution to mitigate dangerous online activity, some might think it best to discuss and challenge toxic ideas through free debate, so as to prevent their normalization and acceptance into mainstream society. When asked, however, Rentschler was more skeptical: “I’m not sure that the fact that it’s out there means we’re discussing it and debating it,” she says; unless we’re actively and constantly challenging such speech, its relative prominence might do far more harm than good.

And what about free speech? Isn’t it important that we defend the right of even the most bigoted extremist to freely express themselves, even if what they say is abhorrent? Rentschler points out that at a certain point, the right to free speech of the alt-right infringes on that same right of those who would stand up against them, exemplified in part by the effectiveness of doxxing and similar practices. There’s an important question to be asked, she says—“Who can’t speak, out of fear of harassment and harm?” We might champion free speech as a pillar of what makes ours a free and safe society, but when it comes to online extremism and the very clear concerns it poses in the real world, the issue is far from black and white.

Featured image: “flat screen computer monitors on table” photo by Kaur Kristjan on Unsplash.