Am I the only one who believes we’ve lost our ability to have civil discourse?
Every day, we rely on social media platforms to engage with like-minded people and to promote ourselves, our work, and/or our business. Unfortunately, the downside of increasing your visibility, especially when you wade into an online discussion with an unpopular opinion, is that you become a lightning rod for online abuse. Online abuse can be especially relentless if you are a woman, a member of a racial, religious, or ethnic minority, or part of the LGBTQ+ community.
I believe social media companies can reduce, even come close to eliminating, online abuse. The first step: Facebook, Twitter, LinkedIn, Instagram, et al. becoming more serious and urgent about addressing the toxicity they’re permitting on their respective platforms. The second step: giving users more control over their privacy, identity, and account history.
Here are five features social media companies could introduce to mitigate online abuse.
Educate users on how to protect themselves online.
I’ll admit social media companies have been improving their anti-harassment features. However, many of these features are hard to find and not user-friendly. Platforms should have a section within their help center that deals specifically with online abuse, showing how to access internal features along with links to external tools and resources.
Make it easy to tighten privacy and security settings.
Platforms need to make it easier for users to fine-tune their privacy and security settings and explain how these adjustments affect visibility and reach. Users should be able to save configurations of settings as personalized “safety modes,” which they can toggle between. When they switch safety modes, a “visibility snapshot” should show them, in real time, who will see their content.
Distinguish between the personal and the professional.
Currently, a social media account encompasses both your professional life and your personal life. If you want to keep the two separate, you must create two accounts. Why not offer a single account that toggles between your personal and professional identities, and lets you migrate or share audiences between them?
Manage account histories.
It’s common for people to switch jobs, careers, and views over time. A user’s social media history, which can date back more than a decade, is a goldmine for abuse. Platforms should make it easy for users to search old posts and make them private, archive them, or delete them.
Require credit card and/or phone number authentication.
All social media platforms allow the creation of anonymous accounts. Ironically, much of the toxicity permeating social media stems from people cowardly hiding behind anonymous accounts.
Anonymity enables toxic behaviour by facilitating, and backhandedly encouraging, “uncivil discourse.” Eliminating the ability to create an anonymous account would virtually end online abuse.
Anonymity allows people to act out their anger and frustrations, and their need to make others feel bad so they feel good. (I’m unhappy, so I want everyone else to be unhappy.) Being anonymous allows someone to say things they wouldn’t dare, or even think, to say publicly, let alone face-to-face.
All credit cards and telephone numbers are associated with a billing address. Social media platforms could prevent anonymous accounts by requiring new users to provide their credit card information, to be verified but not charged, or a telephone number to which a link or code can be sent for authentication. (Email authentication is useless, since email addresses can be created without identity verification.)
Undeniable fact: When people know they can easily be traced, they’re unlikely to exhibit uncivil behaviour.
Yeah, I know: for many, handing over more data to social media giants isn’t appealing, even if it eliminates the toxic behaviour hurting our collective psyche. Having to go through credit card or telephone authentication will give many pause to ask themselves why they feel they must be on social media. Such reflection is not a bad exercise.
Online attacks have a negative impact on mental and physical health, stifle free expression, and silence voices already underrepresented in the creative and media sectors and in public discourse.
Platform user guidelines (a.k.a. Community Standards) are open to interpretation and therefore not enforced equitably. Content moderators (human eyes) and AI crawling (searching for offensive words and content) aren’t cutting it.
Social media companies can’t deny they could be doing a much better job of creating a safer online environment. Unfortunately, a safer online environment will evolve only when social media companies begin taking online abuse seriously.
by Nick Kossovan