In the drawing by Peter Steiner, one dog sits on the floor looking up at another dog sitting on a desk chair in front of a computer. With one paw on the keyboard, the dog on the chair is explaining that “On the internet, nobody knows you’re a dog.”
Back in 1993 this perfectly hit the sweet spot New Yorker cartoons aim for. Clever, slightly off-center, and allowing the reader to feel a certain smugness about knowing enough to get the joke.
Since it was published, one generation and half of another have grown up living the joke. There’s really no need to explain it, and feeling a need to do so would just mark the explainer as old and out of touch.
But we can unpack it to see what it is that makes it funny. What is the unspoken premise, the part we know already that doesn’t need to be explained?
The unspoken premise behind the joke is this: The dog is getting away with something. By not revealing they are a dog, the dog is tricking everyone they interact with online.
This is how we are accustomed to thinking about identity online. Either we say who we are or we are suspected of getting away with something. Anything short of personal transparency is suspect.
There’s good reason for this. Catfishing is the nefarious practice of pretending to be someone you aren’t in order to trick someone into giving away something of value (money, favors, and nude photos figure highly in tales of catfishing). Trolling, or deliberately saying things just to provoke a reaction, is a little more ambiguous.
Certainly trolls play a role in misinformation campaigns and black hat election campaigns, but there are also relatively harmless trolls like the amazing KenM, who has taken playing an affable moron on the internet to the level of high art. (Language, ever-evolving, may soon leave us without a term to describe KenM; I’ve run into people who insist anger is the only emotion stirred up by trolling, which is certainly not how we used the term back in the dog days of the ’90s.)
But still, it was a basic part of internet culture for many years that people could choose their own identity markers and everyone else respected them. You might have known ParticleMan’s real name, and you might have even met them for a cup of coffee or a beer, but on the forum where you interacted they would remain ParticleMan to you.
From the perspective of businesses trying to make money on the internet, this widespread pseudonymity presented problems. Two of those problems in particular, or rather the ways companies addressed them, have had serious impacts on the internet and society in general:
- Verified identities allow the collection of personal information that can be sold to third parties
- Verified identities can be marketed as a sign of trustworthiness to wary potential users
But for the first 15 years or so of the web, few sites required verified identity. Pseudonymity was so deeply embedded in internet culture that verification met with resistance. Some commercial sites, like Amazon with its “Real Name” badges, did implement it. But by and large usernames had no relation to legal names on most websites.
For those who’d embraced internet culture, the potential for abuse was seen as an issue, but not exactly a major one. Someone might get away with impersonation or sockpuppetry for a while, but they almost always got unmasked.
However, new internet users, particularly the large cohort of older Baby Boomers with little exposure to computers in their personal lives, weren’t a part of this culture. To tap the revenue stream they represented, companies needed to persuade this audience that using their own identities and sharing enormous amounts of personal data online was not only safe, but actually perfectly normal.
That change was to define the next 15 years of the web era.
Earlier social networking sites, including Friendster, MySpace, and LinkedIn, had seen decent levels of adoption, but only among the more plugged-in demographics. But in 2006 Facebook, formerly only usable by those with college domain emails, opened signups to anyone aged 13 or older. Momentum grew, and by 2008 the site’s user base was growing at an explosive rate.
The key was leveraging potential users’ trust in existing users’ identities. Someone who would be wary of an account posting on Reddit with the name GalacticUnderlord would 100% trust the same person posting on Facebook as their nephew Jimmy, who went to Boston University.
2006 was also the year Twitter launched, and at first it seemed more a part of longstanding web culture. Early adopters enthusiastically shaped its usage by coming up with things like hashtags, the @+username tagging convention, and “RT” to designate a retweet. The tech-savvy users adapted these conventions from the protocols underlying the internet itself, but other users from the worlds of journalism, entertainment, politics, etc. had little trouble grasping their use.
Following well-publicized incidents with impersonators pretending to be Kanye West and Tony La Russa, Twitter introduced the blue check verification system in 2009. On the surface this was to reassure notable figures that impersonation was no longer an issue, but it also provided a marketing angle to attract the same “everyperson” users that Facebook had so effectively brought on board. The opportunity for fans to interact directly with the objects of their admiration was an appealing one.
At the same time, while Twitter still allowed pseudonyms, for the many serious users who didn’t rise to blue check levels of fame, pseudonymity worked against Twitter’s value as an invaluable networking tool. Using one’s real name while participating in public conversations about one’s professional or personal interests was a best practice for being recognized in one’s field.
Very broadly speaking, the web’s roughly 30-year history to date can be split into two 15-year periods.
The first, from 1991 to around 2006, was the Age of the Pseudonym. By and large, internet culture was dominated by people either in tech or with a certain amount of tech awareness. Norms and standards policed behavior in a decentralized, community-focused way reflecting the general direction of many tech products towards data sharing and interoperability.
It was a matter of personal choice how much of one’s actual identity to share online, and pseudonyms served as a way to differentiate various aspects of oneself. It was common to have multiple active accounts on multiple sites: One for discussing a hobby perhaps, another for professional information sharing, a third for neighborhood news, etc.
Then, with the mass adoption of social media sites, we transitioned into the Age of the Mononym. All those different aspects of ourselves were crushed into a singular, verified identity. By the nature of its design, Facebook in particular upset the old order and usurped the roles of many sites with different areas of focus. But Twitter, always in search of a way to finally profit off its users, increasingly followed the same path with its new features.
The savvy, we were told, would benefit from the new social era through our ability to curate our own images and control our “personal brands.” Instead of just being ourselves, we were to become marketing material for ourselves.
Even the promised success to come from personal branding proved to be an illusion, as the online world did what it does so well, consolidating and amplifying a few to the detriment of the many. Just as Amazon’s enormous presence casts a shadow over the retail landscape, the shadows of those who got to the personal branding game sooner, or had fewer moral compunctions about how they played it, shaded the many who were at least as skilled but more interested in actually doing than in presenting themselves as doers.
2022 appears to be a watershed year in this story. Time will tell, and I’ve been wrong before: Once I wrote a paper predicting a dramatic falloff in Facebook’s user total by 2015. Oops.
But between the business impact on Twitter of the ongoing public meltdown of its new owner, Elon Musk, and the vast layoffs at Facebook as its Chief Executive (subject to no oversight whatsoever thanks to a bizarre stock arrangement) pursues life in a totally unappealing virtual world, it certainly feels like it could be the end of an era.
It’s hard not to cheer the collapse of a social media paradigm that sucked in our identities and traded them for profit. The last fifteen years have been terrible ones. So much of our discourse has been intermediated by businesses that incentivize anger and discord, solely because those emotions are addictive and encourage us to give up so much of ourselves.
But even worse than that, we are all multifaceted people who’ve had the multiple dimensions of our lives flattened and pancaked for the benefit of companies that truly don’t care about us. Anyone who has found themselves on Facebook trying to walk a fine line among the old friends, former lovers, current co-workers, extended family, and the like that they allowed into their feed without considering the ramifications understands how social media sometimes becomes performance media as a survival tactic.
It turns out that in the first 15 years of the web, when it was decentralized and we were free to choose what aspects of ourselves to reveal to different audiences, we weren’t actually secretly dogs. We were free to be ourselves, mixing and matching the different parts of our personalities to fit the social situations we found online. If we were really dogs, the odds were good we’d find a dog community where we could be accepted.
In the last 15 years, though, the dominance of social media insisting on singular, verifiable identities has made many of us hide parts of ourselves in plain sight. The social media business machines insisted we present our whole selves, while our need to protect parts of ourselves from judgement and opprobrium ensured our presentation was more performative than real.
The more we were pushed to put our identities out there, the more we hid the fact that we really are dogs after all.
Although this is a new blog post, the themes, ideas, and some of the wording in it echo writing I’ve done in the past, both on my old, no-longer-online blog and in papers for the Master of Communications in Digital Media program I completed in 2011. I find it heartening that I am now looking back at that period as an era in decline, and hope that I am correct in that assessment.