Earlier this week, President Biden issued a call to address the harms inflicted on children by social media: accountability for platforms, increased privacy protections, limits on data collection, and bans on targeted advertising for children were all offered as concrete policy objectives moving forward.
But putting the onus on platforms alone won’t be sufficient to address the broader issues social media can cause for individuals of all ages, not to mention society as a whole. Regulation can also open up platforms so that others can help solve these problems. Fundamentally, we must give users themselves more direct power over their digital experience, independent of what the platforms choose to build.
The case for a new layer on the internet
Today, what you see online is a product of design for the average. With hundreds of millions of global users, the algorithms that command our attention necessarily optimize for the many, not for you.
Consider two simple examples: tagging users and content recommendations. From LinkedIn to Instagram, platforms have made the design decision to allow anyone to tag you in a post — and to let you know when it happens via a notification. This is great for the platforms: it drives up engagement! For average users, it’s convenient to be alerted to a tag; they probably want to know about it. But for people who experience regular harassment, this design decision has opened up a vector for abuse, as malicious people can spam them with tags in harassing and violent posts.
Content recommendations offer another example. Take recommended Tweets in your timeline: though an average user might appreciate discovering new content created by those outside their follow lists, for many others, the recommendation algorithms disrupt what would otherwise be a very carefully curated feed, and expose them to content they absolutely don’t want to see. That build-up of irrelevant noise drowns out the content they actually care about, and makes it harder to get the value they want from that platform. And of course, the problems only compound when we consider the negative externalities of “related content” recommendations on platforms like YouTube, where misinformation or radicalization can rapidly proliferate without oversight.
But because of their scale, it doesn’t make sense for the platforms to cater to the long tail of users and their diverse range of preferences. So today, we’re forced to make do with what they offer, however annoying or damaging it may be.
There’s a better alternative, and it comes in the form of what Daphne Keller of Stanford calls “middleware”: tools that sit between platforms and users, giving you the ability to create the digital experience that best serves you — however that looks. If you have the chance to decide what matters to you (what you want to consume, when, and how), you can craft an experience of the internet that helps you meet your goals, see truly relevant information that meets the standards you set, and avoid the overwhelm that comes with drinking from the undifferentiated digital firehose every day.
Given the constantly changing landscape of platforms and digital surface areas that we need to interface with (hello, Web3! Glad to see you decided to bring so many new options to the table 😅), this can’t just happen for each platform independently. Instead, we need a new layer on top of each user’s broader digital experience, to help them filter the noise, connect with what they care about, and discover the opportunities that the internet has always promised — but not always delivered.
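To make the idea concrete, here is a minimal sketch of what a middleware layer could look like, written in Python. Everything in it is an illustrative assumption: the FeedItem shape, the rule functions, and the platform names are hypothetical stand-ins, not the API of any real platform or of Block Party.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable, List

@dataclass
class FeedItem:
    """Hypothetical, simplified shape for a post or notification pulled
    from a platform. Real platform payloads are nowhere near this tidy."""
    platform: str      # e.g. "twitter", "linkedin"
    author: str
    kind: str          # e.g. "post", "tag_notification", "recommendation"
    text: str
    metadata: dict = field(default_factory=dict)

# A rule is just a predicate the user chooses: keep this item or not.
Rule = Callable[[FeedItem], bool]

def only_tags_from_people_i_follow(following: set) -> Rule:
    """Mute tag notifications that come from accounts the user doesn't follow."""
    def rule(item: FeedItem) -> bool:
        if item.kind != "tag_notification":
            return True
        return item.author in following
    return rule

def no_recommendations() -> Rule:
    """Keep the feed strictly to accounts the user has chosen to follow."""
    return lambda item: item.kind != "recommendation"

def apply_user_rules(items: Iterable[FeedItem], rules: List[Rule]) -> List[FeedItem]:
    """The middleware step: filter the combined, cross-platform feed by the
    user's own rules before anything reaches their attention."""
    return [item for item in items if all(rule(item) for rule in rules)]

# One user's configuration, applied the same way across every platform.
my_rules = [
    only_tags_from_people_i_follow({"close_friend", "colleague"}),
    no_recommendations(),
]

incoming = [
    FeedItem("twitter", "stranger123", "tag_notification", "look at this!"),
    FeedItem("twitter", "close_friend", "post", "lunch next week?"),
    FeedItem("linkedin", "recruiter_bot", "recommendation", "trending now"),
]

for item in apply_user_rules(incoming, my_rules):
    print(item.platform, item.author, item.text)
# -> only the post from close_friend gets through
```

The point is less the code than where the control lives: the rules are defined by the user, and the same rules can travel with them across platforms.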
What could this new layer enable?
Consider a few small examples of what’s possible when users, not platforms, get to decide how their attention gets directed (and don’t have to worry about self-expression or professional obligation resulting in immediate harassment).
More space for the communities and interests that bring us together
The relationships we build in digital communities of interest are powerful and profound, and help so many — particularly those with marginalized identities — feel less alone in the world. Yet today, engaging openly and earnestly in these communities online also often invites acute harassment from others.
Too many people feel the need to self-select out of the conversation to preserve their mental health, paradoxically cutting themselves off from the very community support they need. With better support for individuals to create their own boundaries, they could more easily protect their mental wellness and continue to enjoy the rich connections that their communities provide.
More experts, less misinformation
The deluge of content online today makes it increasingly difficult to identify misleading or blatantly false information that may show up in our feeds. But what if you could choose your own criteria for what types of content you want to see? Maybe you prefer to receive updates on the COVID pandemic from expert scientists, or to preemptively filter out articles that reference scientific studies that haven’t been peer reviewed. You could even choose only to see reporting on politics that has been fact checked by an independent assessor you trust.
By allowing users to pre-filter based on concrete criteria that match their personal standards, we can help individuals avoid inadvertent exposure to the types of content they would never choose to engage with in the first place. And allowing the user to proactively make these calls helps to avoid some of the challenges introduced when platforms make all judgements about which publications to boost or hide.
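As a thought experiment, those kinds of criteria could be expressed as simple, user-authored checks over content metadata. The rough sketch below assumes fields like cites_preprint_only and fact_checked_by that a middleware service or independent third parties would have to supply; no platform exposes them today.

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    topic: str
    # Hypothetical metadata a middleware service might attach, e.g. from
    # publishers, preprint databases, or independent fact-checking groups.
    cites_preprint_only: bool = False
    fact_checked_by: tuple = ()

# Standards the user sets once, in their own terms.
TRUSTED_FACT_CHECKERS = {"IndependentAssessorITrust"}

def meets_my_standards(article: Article) -> bool:
    # Filter out COVID coverage built on studies that haven't been peer reviewed.
    if article.topic == "covid" and article.cites_preprint_only:
        return False
    # Only show politics reporting vetted by a fact checker I trust.
    if article.topic == "politics":
        return any(fc in TRUSTED_FACT_CHECKERS for fc in article.fact_checked_by)
    return True

articles = [
    Article("New variant preprint roundup", "covid", cites_preprint_only=True),
    Article("Budget bill analysis", "politics",
            fact_checked_by=("IndependentAssessorITrust",)),
    Article("Local election preview", "politics"),
]

print([a.title for a in articles if meets_my_standards(a)])
# -> ['Budget bill analysis']
```

Crucially, nothing here is decided for everyone at once: a different user could write entirely different criteria, and the platform never has to act as the arbiter.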
Increased accountability for powerful institutions
We’ve seen firsthand over the past few years how the courageous actions of corporate whistleblowers in the tech industry, speaking out on social media, have led to policy changes at both the company and the legislative level — but not all workers are in roles or industries that allow them to withstand the inevitable harassment that follows speaking out. Journalists, too, have shed essential light on powerful players in industry, government, and culture, only to be faced with devastating online attacks (and newsrooms often have not yet developed the infrastructure to support them effectively).
More broadly, today we run the risk of losing out on incredible insight, activism, and perspectives because people with marginalized identities are disproportionately likely to be targeted with harassment and negative attacks, and they know it. Countless people self-censor because of the bad faith responses, snarky comments, and abuse they fear they may receive if they choose to exercise their voice online. If it becomes easier for everyone to filter out these harmful attacks, more critical voices will join the conversation, and all our experiences of the internet will be richer for it. Even discourse among people of differing opinions becomes more possible when we can focus on civil engagement instead of fending off abuse at the same time.
More responsive government (really!)
It’s difficult for elected officials to respond to their constituents (or monitor their concerns) on social media today, because they’re overrun by harassment and death threats. More powerful tools to sift through the noise and surface the earnest inquiries of their communities would let them see good faith feedback, address concerns, and stay more engaged than is currently possible for many.
Learning from our past mistakes
We have the chance here to address so many of the hard-won lessons of the last several decades, and make sure that the next era of our digital lives is more positive, productive, and supportive for everyone. Although there are countless more to draw on, here are a few that we cannot lose sight of as we strive to introduce new solutions:
Building for the most vulnerable first makes better products for everyone. As we have learned from countless examples in the disability community, accessible design that centers vulnerable people is simply better design. Solutions that are designed from day one to address the harassment faced by marginalized people are also more flexible and robust in addressing broader issues like spam and low quality content.
The user is the user, not the product. One of the most powerful opportunities in this space is the deep alignment in incentives you can have with your users. Unlike social media platforms, you don’t need to monetize personal data or user attention through an ad-supported model. How can you radically rethink the choices a user can make when their attention is truly their own to direct again? What kind of experiences can this create?
When possible, build tools to proactively enable community support. Although we advocate for user-driven tools, that doesn’t mean those users need to make decisions or navigate their digital life alone. Find ways to allow them to get trusted help, whether it’s monitoring abuse, or identifying content they should see.
Always consider the worst case scenario for any feature you build. You always have to think through how malicious people might misuse and abuse what you've built. Seemingly benign design decisions, like user tagging, can quickly turn into vectors for abuse. And tools built to solve one problem can end up exacerbating another. For example, limiting replies on Twitter helps reduce harassment, but also removes the primary space for fact checking misinformation. It’s essential to identify these tensions and build in safeguards from the start.
Don’t assume that you’ve thought of everything. Once real users start engaging with a feature, proactively engage with them to see if you’ve missed any negative impacts. Listen when they raise the alarm, and build a system to integrate their feedback rapidly when your products accidentally cause harm. Watch carefully for emergent behavior over time, and for negative outcomes and behaviors that might not get reported. Users may not always realize exactly what’s going on, whether they’re experiencing a coordinated attack by a troll army that just feels like an onslaught of abuse, or slowly being radicalized by a content recommendation algorithm.
These lessons just scratch the surface — much more has been written about the failures (and emergent opportunities) we can identify in the missteps of our collective digital past. If you’re interested in building in this space, consider leveraging frameworks like EthicalOS to proactively identify your risk zones and make more thoughtful design decisions from the start.
What’s next
When faced with the overwhelm and toxicity that so often accompany our current social media experience, it’s easy to consider just logging off altogether. But disconnecting means losing all the opportunities, connections, and creativity that social media has spawned. It means giving up some of the most important pieces of our personal and professional lives today. It’s not “just” online — it’s real life, and it matters. We deserve the tools to make it better on our own terms.
There are heartening signs that the tide is shifting. Major platforms are starting to open up proactively, and regulation can play a role in helping to accelerate this move for those who lag behind. But there’s still more to do.
The path to a better internet requires giving users more control. Want to help usher forward this better future? Some suggestions:
Help spread the word: the more users demand this type of control, the more the platforms will enable it, and the more companies will step up. If you have the means, when you see products that enable user control, buy them!
Support regulation that requires platforms to provide open APIs that enable developers to build new solutions.
If you're an investor, find companies working on these problems and give them your support. You have the power to accelerate this new future, one check at a time.
And if you’re really passionate about this opportunity, come and join us at Block Party! It’s time to create the digital experience everyone deserves.