Bestie vibes only? Disabled content creators, shadowbanning, and the politics of authenticity

In October 2020, it finally happened to me: my Instagram was shadowbanned.
From 2019 to 2021, I created content under the handle @disabledphd. I shared updates about my experiences of academic ableism as a white, queer, disabled graduate student, as well as posts about disability, intersectionality, and media representation. My platform was small, just under 5,000 followers, but I began building a community with other disabled creators and users. Then, one day, my account no longer had any engagement: it was as if my public account had vanished. No one was watching my stories or seeing my content in their feeds. Over time, I learned that many other disabled content creators (particularly racialized, queer, and trans users) were also shadowbanned.

Moreover, censorship took place across several platforms, especially TikTok, the enormously popular micro-vlogging platform. Was I censored because I was queer and disabled and shared my experiences? How does a platform decide whom to censor, and who (or what) makes that determination?
Shadowbanning (also referred to as soft blocking) is a form of digital content moderation. A platform’s algorithms flag a user’s profile or content as ‘inappropriate’ and render the account invisible (Myers West, 2018, p. 4367). Some platforms, like Facebook, also rely on human moderators to assess content flagged by the platform’s algorithms. While shadowbanned users may still have access to their page, content, and messages, their accounts no longer appear in other users’ feeds. Their content is also ‘hidden’ from the platform’s explore page (typically generated by what the platform’s machine learning algorithms assume a user will be interested in, based on previous engagement). Regardless of how many followers an account has, how much engagement it receives, or whether it is public, it is as if the account does not exist.
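To make those mechanics concrete, here is a minimal, hypothetical sketch (in Python) of how a shadowban might sit inside a feed-ranking pipeline. Every name in it (the Post class, the shadowbanned_accounts set, rank_for_feed) is an illustrative assumption rather than any platform’s actual code; the point is simply why a shadowbanned creator still sees their own posts while everyone else silently stops seeing them.

```python
# Hypothetical sketch: how a shadowban can hide an account from everyone
# except its owner. All names are illustrative; no platform's real
# moderation code is reproduced here.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    engagement_score: float  # stand-in for whatever signals a recommender uses

# Accounts a classifier (or human moderator) has quietly marked 'inappropriate'.
shadowbanned_accounts = {"@disabledphd"}

def rank_for_feed(candidate_posts: list[Post], viewer: str) -> list[Post]:
    """Return the posts a given viewer would actually see, highest-scoring first.

    The key detail: flagged authors are dropped for every viewer *except*
    themselves, so the banned creator still sees their own content and gets
    no obvious signal that anything has changed.
    """
    visible = [
        p for p in candidate_posts
        if p.author not in shadowbanned_accounts or p.author == viewer
    ]
    return sorted(visible, key=lambda p: p.engagement_score, reverse=True)

posts = [
    Post("@disabledphd", "new post on academic ableism", 0.9),
    Post("@someoneelse", "lunch pic", 0.2),
]

print([p.author for p in rank_for_feed(posts, viewer="@disabledphd")])
# ['@disabledphd', '@someoneelse']  -> the owner still sees everything
print([p.author for p in rank_for_feed(posts, viewer="@afollower")])
# ['@someoneelse']                  -> followers never see the flagged account
```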
I suggest that shadowbans are a key design feature for sustaining a particular politics. For instance, in 2019, whistleblower reports revealed that TikTok’s AutoR algorithm relied on ableist machine learning to censor disabled users. AutoR essentially controls which videos reach the platform’s coveted ‘For You’ homepage and who is censored. Additionally, TikTok’s ‘ugly content policy’ mandated that, within seconds of AutoR flagging a video, a post featuring a disabled person or disability-related hashtags would be hidden from the majority of users on the app. However, TikTok’s public relations officers positioned shadowbanning as a ‘bug’ or ‘mistake.’
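Again, purely as a hedged illustration: the reporting described restriction as automatic, triggered within seconds of upload. Below is a sketch of what hashtag-triggered visibility capping could look like; the trigger terms, cap values, and function name are invented for illustration and are not TikTok’s actual policy or code.

```python
# Hypothetical sketch of hashtag-triggered visibility capping at upload time.
# The keyword list, cap values, and function name are invented for
# illustration only.

FLAGGED_TERMS = {"#disabled", "#chronicillness"}  # assumed trigger terms

def assign_visibility(caption: str, default_reach: int = 1_000_000) -> int:
    """Return the maximum audience size a newly uploaded video may reach."""
    words = set(caption.lower().split())
    if words & FLAGGED_TERMS:
        # The flag fires moments after upload: the video is capped to a tiny
        # audience instead of being removed, so the creator sees no warning.
        return 100
    return default_reach

print(assign_visibility("my day with #ChronicIllness"))  # 100
print(assign_visibility("dance trend attempt"))          # 1000000
```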
While social networking platforms like TikTok present themselves as digital public forums for advocacy, dialogue, and representation, these shadowbanning practices reflect the dominance of offline ideological structures (see Chun, 2011). In this case, ableism is reprogrammed into machine learning algorithms that effectively determine who belongs on the app and who does not, perhaps in opposition to ‘bestie vibes only’ (a playful phrase originating from a now-viral video published by TikTok user @tiktoshh in January 2021).
TikTok isn’t the only social networking site to rely on ableist (and racist) algorithmic governance: Instagram also depends on content moderation regulations that target racialized people, disabled people, sex workers, queer people, trans people, and poor people. Platforms also shadowban users who bring attention to these moderation practices. For instance, on my @disabledphd Instagram account, I often wrote about shadowbanning and how this form of algorithmic surveillance prevents marginalized content creators, particularly queer/disabled creators, from engaging, connecting, and creating. Nevertheless, for many disabled and chronically ill content creators, social networking platforms are a vital technology for building community and agitating against ableism, especially as the COVID-19 pandemic nears its third year.
My initial idea for my Sherman Centre project, ‘Bestie Vibes Only? An Analysis of TikTok’s User Interface and Platform Vernacular,’ proposed a cultural studies approach to studying human-computer interaction on TikTok. I anticipated that this study, adjacent to my dissertation, would provide the foundation for a dissertation chapter. I wanted to know how creators who use their identity as a ‘branded good’ (Senft, 2013, p. 1) shape this performance through particular platform features, such as the user interface or platform vernacular. Jessalynn Keller (2019) defines platform vernacular as the ways of talking on a social network that originate from a site’s particular interfaces, infrastructures, and organization. For instance, TikTok features such as the stitch, duet, and green screen buttons facilitate forms of interaction between users that are not present on other social media networks. I believed I would focus on micro-celebrities, influencers, and other creators more broadly, yet I kept returning to user experiences of shadowbanning.
As the residency progressed, I grappled with how to approach creator experiences of shadowbanning. At the beginning of the program, I was keen on understanding the processes of shadowbanning; now, my project is more invested in the experiences of shadowbanned creators. In particular, my focus centers on the various ways disabled creators reconfigure their platform use despite shadowbanning. For example, though I’m no longer an Instagram creator, I know many of the users I connected with during my time on Instagram face increased levels of cyber harassment and doxxing. There’s even an entire subreddit dedicated to ‘exposing’ chronic illness ‘fakers.’
Disabled content creators are at a strange crossroads: they must prove that their disability or chronic illness is ‘authentic’ enough through particular posting and storytelling strategies, while simultaneously combating shadowbanning. Brooke Erin Duffy and Emily Hund (2019) call this phenomenon an authenticity bind, in which users from vulnerable or marginalized communities find themselves caught on the fine line between visibility and vulnerability. I merged my experiences of, and interests in, shadowbanning and algorithmic ableism with platform studies and the quest for ‘authentic’ content creation. Who gets to be authentic online, and how does a platform’s algorithm or user interface reinforce this? Rachel Dubrofsky’s (2016) reading of surveillance and white celebrity women’s authenticity suggests that offline, dominant networks of power (e.g., racism, capitalism, ableism) inform digital surveillance practices in media production.
Though Dubrofsky studies pop stars Miley Cyrus and Taylor Swift, not content creators, her analysis offers a way to understand the strategies white celebrity women rely on to ‘perform’ in public while still appearing to have a ‘natural’ self. If content creators are micro-celebrities, do shadowbanning and content moderation work to promote some content creators as more ‘natural’: white, wealthy, non-disabled people promoting skinny teas, expensive athleisure, and other wares? Within disability content creator communities, how does racism shape understandings of ‘authenticity’ among creators?
As I prepare for interviews with disabled content creators in the coming year, my experiences as an SCDS resident have given me essential questions about the evolving economies of content creation, authenticity, and the looming threat of shadowbanning in digital worlds.
Acknowledgments
Thank you to the 2021 residency cohort for their support, and many thanks to Dr. Andrea Zeffiro, Marrissa Mathews, and Akacia Propst for their generative feedback on this piece.
References
Chun, W.H.K. (2011). Race and/as technology; or, how to do things to race. In L. Nakamura & P. Chow-White (Eds.), Race After the Internet (pp. 1-23). New York: Routledge.
Dubrofsky, R.E. (2016). A vernacular of surveillance: Taylor Swift and Miley Cyrus perform white authenticity. Surveillance & Society, 14(2), 184-196.
Duffy, B.E. & Hund, E. (2019). Gendered visibility on social media: Navigating Instagram’s authenticity bind. International Journal of Communication, 13, 4983-5002.
Keller, J. (2019). “Oh, She’s a Tumblr Feminist”: Exploring the platform vernacular of girls’ social media feminisms. Social Media + Society, 5(3), 1-11.
Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366-4383.
Senft, T. (2013). Microcelebrity and the branded self. In J. Hartley, J. Burgess, & A. Bruns (Eds.), A Companion to New Media Dynamics (pp. 346-355). New York: Wiley & Sons.