On Thursday, I wrote about my Facebook/Instagram bubble. I weighed the value of glimpsing friends’ lives against the cost to our privacy; the cost of turning ourselves into eye candy to keep each other hooked, for advertisers’ benefit more than each other’s joy. Yesterday I turned my thoughts to Twitter. It became a long post, and the election countdown ticked from 53 days down to 52 before I’d come to a stopping point, so consider this a two-for-one post.
My relationship with Twitter is more fraught. It stresses me out; it has for years. I’m afraid this post will sound like an old man shouting at the kids on the lawn, but I always regret opening up the Twitter app or clicking through to see a tweet linked from elsewhere. The original post might be fine, but the design of Twitter encourages scrolling down, engaging with replies and discussion. This is where it goes off the rails. Within a few seconds, I see a half-truth, a provocation, a cynic, a racist, a bot, or a clickbait profiteer. I spend the next ten minutes looking up the fact to fill in the half-truth, mentally drafting a thoughtful reply to the provocation, considering sending a long post on the harms of perfectionism to the cynic, and blocking and flagging the racist, the bot, and the profiteer. This is not how I want to spend my time. And so, I close Twitter and walk away.
It wasn’t always like this. There was a time when I treated Twitter as a fun curiosity. In its early days I used it to follow friends, post strange experiments in poetry to my tiny number of followers, and network with film and art bloggers.
Then there came a time when I took Twitter seriously because it was my job — or at least other people mistook it for my job. I moved to D.C. to design a website and graphics for the campaign of a high-profile Democrat I had never heard of (Democrats were — and mostly are — the good guys; I needed a job). The campaign gave me an up-close glimpse of Twitter’s then-emerging role in politics: speeding up public discourse between activist-bloggers and between activists, campaigns, and digital-savvy journalists. In 2007, Twitter was a curious add-on to campaign outreach, and so it was overseen by its fiercest advocates: the team of youngish (white) tech geeks who built the website. Some of us (not me) had been brought on for our relationships with the growing network of activist-bloggers who had captured the attention of that odd niche constituency of very online people who kept showing up at campaign events (an audience we now simply call “engaged voters”).
Around this time, I realized that a few of the opinions I expressed in reviews of obscure documentaries and art were outside the official positions of Democrats and organized labor. Too radical, too accepting, or sometimes too much of an acknowledgment of a troublesome commentator or perspective. In 2009, when Barack Obama formed the first presidential administration of the social media era, potential hires were asked to hand over access to their social media accounts for scrutiny as a condition of employment. This was becoming a common practice in corporate America as well. There is little chance anything I wrote would have caused a stir for any of my employers, but I had no time to keep up the blog, let alone audit every sentence I’d written before I started working in politics. And so, I deleted my personal blog and started posting more carefully when I used social media.
Twitter by then had become a companion to every campaign (political or otherwise). Almost every journalist began quoting tweets and gathering sources on Twitter. Marketing and branding and media and communications people were figuring out a Twitter strategy for selling everything: candidates, Coke, coal, community theater tickets.
It might sound like I stopped using social media in 2009, but in fact that year probably marked the height of my usage (see screenshot). I transitioned from doing design work to a managerial role advising several national and local campaigns. Part of my job was to help raise the profile of our team’s work. And so, my personal feed became a mix of posts from our campaigns alongside the work of artists, filmmakers, and writers I was exploring in rare moments of downtime. And I had some success pointing policy wonks toward art, and artists toward policy. Or at least, I was enjoying cultivating an audience from those two worlds. More and more, though, consuming Twitter was replacing my social life. I hadn’t found an art scene in D.C. yet, and had little time for anything other than eating and drinking with coworkers. If I was out, I was checking Twitter at the table. On the rare occasion that I made it to some new experience — a literary reading or a concert — I was live-tweeting it instead of enjoying it.
At some point I began to feel tense when I hit the “tweet” button. I realized that every careful, sincere and thoughtful tweet on an issue in the news was generating ten or more cynical and reactionary replies. Early versions of the more virulent half-truths, provocations, racists, bots, and clickbait profiteers I see today. Many of these were anonymous. Some found my tweets by hashtag, through curated lists of topical Twitter users, or via mentions from more prolific coworkers and writers whose work I’d mentioned. My tweets about art and strange fragments of culture were exempt from this trolling; they would get a few “favorites” or retweets and fade away.
I turned toward elevating other voices, since I felt I didn’t have the time or freedom to write in my own voice. It took me a few years, but I eventually realized that the culture of Silicon Valley, and of tech-based advocacy and politics, was replicating the unjust values of the offline world it was seeking to replace. The influential and powerful voices were white and male, including the geeks I started out working with in D.C. I would eventually hire a more inclusive team when I had the power to do so, but I realized it was just as important to use my social media voice to address this needed culture shift. I made a game of saying what I wanted to say on Twitter using nontraditional voices. If I had the impulse to post anything, I’d try to find a person of color or a woman who had already posted it, and retweet them. Often, with a bit of searching, I could stumble upon someone who’d said it earlier and better than the author I’d first found, and I’d learn something in the process, make a new connection, change the culture in one small way.
But this was also a time when fans of conservative radio and Fox News were taking to Twitter. There would be obvious spikes in activity on our campaign accounts, and on our personal accounts, any time Glenn Beck or Rush Limbaugh mentioned issues we were advocating for. Some of the frothing-at-the-mouth outrage over these airwaves was directed at us by name, as the authors of a blog post or tweet, or as campaign staffers quoted in an article. It became necessary to develop a protocol for forwarding threats of violence to our campaign’s lawyers, to Twitter (before their block/report system existed in any meaningful way), or to law enforcement.
In this environment, Twitter became a toxic workplace, a site of trauma. No fun. When I left paid political work behind in 2013, I set up an app that deleted tweets older than a year. For a time, I used Twitter to talk about new work and art projects, but I gradually drifted away. Soon a year of no tweets meant the app had deleted my entire timeline without my noticing. I found new offline friendships to replace the scrolling timeline. The rise of Trump on the strength of his tweets seemed like a natural evolution of the hate speech I associated with the platform.
It hasn’t occurred to me to look on Twitter for the people I’ve been talking to the most during the pandemic: my (now Zoom-based) writers group; fellow travelers; friends from art fellowships and neighborhood circles. Somehow I still have nearly 3,000 followers. You can join them in anticipating my return to Twitter: @erikmoe and @futurecarto.