Social media has been important to my life in many ways. In particular, Instagram has enriched my life through friendships, relationships, and my growth from amateur photographer to owner of a photography business. However, many recent changes concern me going forward. While the moderation changes are alarming, it’s honestly the increasing presence of AI agents on these platforms that has me reconsidering my engagement.
Outside of photography, I am an artificial intelligence researcher with a PhD in Cognitive Science and Complex Systems from Indiana University. While at Indiana, I took courses with several professors who monitored the spread of disinformation through coordinated bot networks via the Observatory on Social Media (OSoMe).
This article is aimed at the general public, explaining how we got to our present difficulties on the web through ad-funded business models. Here, I focus on Google and Meta and their respective challenges in measuring engagement and controlling spam. I conclude with some thoughts on the affordances of various social media platforms with respect to three aspects of behavior on the web: content, connection, and commerce. Finally, I implore people to consider the business models of the tools they use, with a few suggestions on landing spots.
Algorithms
Algorithms and data structures are the two core elements of computer science. An algorithm is a way of doing something – a formal set of procedures for problem-solving. This contrasts with popular usage, which focuses on a specific class of ranking algorithms that determine the way information is presented in a “feed”.
This popular usage has its origins in the PageRank algorithm that allowed Google to quickly usurp legacy search engines such as Yahoo and AltaVista. The magic ingredient was looking at the structure of the web and hypothesizing that more important pages will have more incoming links. However, this algorithm can be quickly exploited – by creating sets of pages that link into a particular page, the target page’s importance can be artificially inflated. Thus, in order to maintain search result quality, Google augments PageRank with other signals when presenting results – a measure of trust for each domain, keyword relevance, etc. Each time Google changes the algorithm, publishers attempt to exploit the new one, fueling an arms race known as “Search Engine Optimization” (SEO).
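The core idea fits in a short sketch – a toy power-iteration version of PageRank over a hypothetical link graph. The pages below, including the “link farm”, are invented for illustration:

```python
# Minimal PageRank via power iteration over a toy link graph.
# All page names and links are hypothetical, for illustration only.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A "link farm": spam1 and spam2 exist only to funnel rank into target.
graph = {
    "hub":    ["a", "b"],
    "a":      ["hub"],
    "b":      ["hub", "a"],
    "spam1":  ["target"],
    "spam2":  ["target"],
    "target": ["spam1", "spam2"],
}
ranks = pagerank(graph)
```

Even in this toy graph, the two spam pages exist solely to inflate the target page’s score – exactly the exploit that forces Google to layer additional signals on top of the raw link structure.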
When the goals of the search company are to provide high-quality search results and the goals of publishers are to promote high-quality content, this game is productive – an ecosystem in which “content is king”. However, the motives of both publisher and search engine are easily compromised. For publishers, the goal of high rankings can overtake content quality, resulting in a web designed for Google instead of humans. For search engines, an ad-based business model conflicts with users’ desire for high-quality results by allowing paid intrusions into the ranking. Eventually, the scales tip to the extreme articulated by Google’s VP of Finance: “[W]e can mostly ignore the demand side…(users and queries) and only focus on the supply side of advertisers, ad formats and sales.”
This process, of compromising product quality for users and extracting as much value out of business customers as possible, is known as “enshittification”, a term coined by the tech critic, writer, and copyright reform advocate Cory Doctorow:
First, [companies] are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.
Engagement
In the social media space, enshittification has happened gradually through algorithmic changes, as Meta, TikTok, and X have shifted their default feeds from a chronological timeline of the profiles a user follows to paid placement, just as Google did. Each time the app is opened, thousands of candidate posts are evaluated – from paid advertisements to recommended posts to the slight chance of seeing something from someone you follow. All these changes are carefully monitored to drive engagement metrics that can be marketed to advertisers: clicks, time-on-site, comments, likes, etc.
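As a rough illustration, an engagement-ranked feed can be contrasted with a chronological timeline. The posts, signals, and weights below are all hypothetical – real ranking systems use vastly more features – but the structure shows how easily followed accounts get crowded out:

```python
# Hypothetical engagement-weighted ranking vs. a chronological timeline.
# Candidate posts, signals, and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float
    predicted_clicks: float   # model-estimated click probability
    predicted_dwell: float    # model-estimated seconds on post
    is_ad: bool = False
    followed: bool = False

def chronological(posts):
    """The old default: only accounts you follow, newest first."""
    return sorted((p for p in posts if p.followed), key=lambda p: p.age_hours)

def engagement_ranked(posts, ad_boost=2.0):
    """Score every candidate by predicted engagement, boosting paid posts."""
    def score(p):
        s = 3.0 * p.predicted_clicks + 0.1 * p.predicted_dwell
        s /= 1.0 + p.age_hours          # mild recency decay
        return s * (ad_boost if p.is_ad else 1.0)
    return sorted(posts, key=score, reverse=True)

candidates = [
    Post("friend", age_hours=1.0, predicted_clicks=0.10, predicted_dwell=5, followed=True),
    Post("advertiser", age_hours=0.5, predicted_clicks=0.20, predicted_dwell=8, is_ad=True),
    Post("influencer", age_hours=2.0, predicted_clicks=0.40, predicted_dwell=20),
]
timeline = chronological(candidates)
ranked = engagement_ranked(candidates)
```

Under the engagement ranking, the ad and the high-engagement recommendation both outrank the one post from someone you actually follow.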
However, these metrics can be gamed by the social media companies themselves. This happened in the “pivot to video”: in 2015, Facebook encouraged news and media companies to post videos on its platform, rather than linking to written content hosted elsewhere. Facebook cited increased engagement, and many companies reduced their web footprint – some going as far as removing their homepages altogether (Mashable, CollegeHumor, Funny or Die) and cutting their writing teams (Fox Sports, MTV News, Vice News). These companies all missed that they had lost control of their relationship with their audience. Once the transition to Facebook was complete, the company implemented a pay-to-play scheme for exposure, ultimately leading to the demise of sites such as CollegeHumor and Funny or Die.
In 2018, it was alleged that Facebook’s “pivot to video” pitch had exaggerated the success of videos on the platform, inflating average view times by as much as 900 percent. The lawsuit was settled in 2019 for $40 million, with Facebook admitting no wrongdoing, but the damage to legacy media was done. Ironically, the “pivot to video” also damaged Facebook’s long-term metrics for “organic posts from individuals.” As users began to see Facebook as a platform for advertisers, rather than a place to maintain connections, they disengaged – the final step of “enshittification”.
Spam, Bots, and Slop
Spam is the repeated sending of unsolicited messages to a large audience for advertising, propaganda, or other ends. The lines between SEO-optimized content and spam have long been blurred. On social media platforms, spam is carried out by what are colloquially called “bots” – automated social agents that create content to influence the algorithm so it promotes their message. These “bots” have been widely seen as a problem, as they create a negative user experience. Furthermore, nation-state actors have often used “bot farms” to spread misinformation on social media.
However, not all bots are malicious, making it hard to remove all automated activities on social media. For example, the National Weather Service has many “bot” accounts to communicate weather statements, watches, and warnings to the general public. Other areas are more gray – engagement bots can automate liking of comments or posts and appear indistinguishable from human usage. Any platform with an open API is subject to both malicious and benign use, as defined by the particular terms of service.
To improve user experience and maintain an audience for ad-driven business models, social media companies have traditionally waged war on malicious bots. Ostensibly, this was part of Elon Musk’s rationale for purchasing then-Twitter. Unfortunately, when platforms closed or rate-limited their APIs, independent tools for evaluating whether an account was a bot were forced to shut down. Internal teams working on misinformation overlapped significantly with those removing malicious bots, so as social media companies abandon their in-house misinformation teams, the prevalence of bots has increased. These next-generation bots are also increasingly capable, as AI tools have given them multi-modal functionality.
However, the pressures of enshittification operate here as well – scrolling through spam content increases certain engagement metrics, such as time-on-site and posts served. While user experience suffers and other metrics, such as likes and comments, decrease, bots still have a place in driving ad sales by creating fake engagement that can be sold into the ad market. The deployment of new AI production pipelines has accelerated the accumulation of spam content, exemplified by AI “travel influencers”. Metrics that do not differentiate between bot and human engagement are useless, yet they are now the norm for end users.
Meta has recently decided to go all-in on this strategy, telling the Financial Times: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do. They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform.” As implemented, these AI users were undifferentiated from human users in their posts. In fact, they bore the blue “verified” checkmark, a signal that they should be trusted more than other users. They were also unblockable, meaning a human user could not “opt out” of the experience.
Meta backpedaled by removing the AI profiles, but reporting on the removal emphasized the politicized aspects of these AI users rather than the fundamental transgression: artificial users, undifferentiated from human ones, compromise the “social” aspects of social media. They erode any trust that the accounts we interact with are real people – the long-standing conspiracy theory known as Dead Internet Theory has effectively become Meta’s business plan. While some articles were concerned with the “digital blackface” of profiles such as “Brian – Everybody’s grandpa” and “Liv – Proud Black Queer momma of 2 & truth-teller”, these caricatures only intensify the core offense of pitching these agents as equals for genuine human connection. The problem is not the aspects of humanity they attempt to mimic, but the mimicry itself.
This embrace of AI is also visible at Google. Rather than fighting the arms race with purveyors of AI-generated content to maintain high-quality search results, Google simply places its own AI summary before any search results – paid or organic. The summary uses retrieval-augmented generation (RAG) to link to citations that allegedly support it. Recently, the search engine has even placed AI-generated content above the original article in its ranked results, an alarming loss for both searchers and publishers.
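Schematically, a RAG pipeline retrieves documents for a query and then generates a summary that cites them. The sketch below stands in for both halves with toy components – naive term overlap instead of a search index, snippet splicing instead of a language model – and all URLs and documents are hypothetical:

```python
# Schematic retrieve-then-generate (RAG) pipeline.
# Retrieval and generation are toy stand-ins; documents are invented.

def retrieve(query, documents, k=2):
    """Rank documents by naive term overlap with the query."""
    def overlap(doc):
        q = set(query.lower().split())
        d = set(doc["text"].lower().split())
        return len(q & d)
    return sorted(documents, key=overlap, reverse=True)[:k]

def generate_summary(query, documents):
    """Stub generator: splice retrieved snippets and attach citations."""
    retrieved = retrieve(query, documents)
    body = " ".join(d["text"] for d in retrieved)
    citations = [d["url"] for d in retrieved]
    return {"summary": body, "citations": citations}

docs = [
    {"url": "https://example.com/a", "text": "PageRank scores pages by incoming links"},
    {"url": "https://example.com/b", "text": "Chocolate cake recipe with frosting"},
    {"url": "https://example.com/c", "text": "Search engines rank pages for queries"},
]
result = generate_summary("how do search engines rank pages", docs)
```

In a real system, the generation step is a language model, and nothing in this architecture guarantees the generated text faithfully reflects the cited sources – which is why the citations are only “alleged” support.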
Trust
The degradation of search and social media products under the ad-based revenue model is apparent to anyone who uses Google, Meta, or X. As consumers, we ostensibly have a choice of platforms (or can disengage altogether). For businesses, disengagement is tempered by where the eyeballs are, which means compromising ourselves to untrustworthy partners like Meta, who inflate metrics and invent artificial users.
As you evaluate your relationship with these platforms, it is helpful to think of three aspects: content, connection, and commerce. Underlying each one is trust – that information has been vetted, that users are “real”, and that business will flow. At present, many platforms fail on these aspects of trust. In the case of Meta and X, deliberately so as they remove internal misinformation teams. In Google’s case, unreliable AI Summaries remove trust in content. At Meta, company-owned AI profiles remove the trust that users are real, removing connection. At X and across all Meta platforms, algorithmic deprioritization of links outside their platforms reduces commerce.
Fighting Enshittification
Ultimately, the highest trust comes from owning your own distribution channels. However, “surfing the web” has been replaced with “scrolling” for most discovery activities. E-mails are rarely seen by end users, as spam filters have advanced. This makes the maintenance of your own website or newsletter feel like shouting into the void.
My recommendation is to examine a platform’s monetization strategy. Every ad-driven platform will be compromised in some way. Finding platforms with other funding mechanisms, such as subscription-driven models, should be a high priority. Otherwise, we merely participate in large-scale advertising systems, rather than investing in platforms that suit our needs for trusted content, connection, and commerce.
For social media, there are non-ad-supported options: Mastodon and Bluesky. These are largely Threads or X replacements, and of the two, I have had more traction on Bluesky. Both offer a public forum for events and discussion, but neither replicates key aspects of Instagram: the Grid, which offers artists a gallery, and Stories, which offer ephemeral content that leads to connection. I struggle to see how to drive print sales or portrait bookings through either platform, so I mostly have fun with them.
Both Mastodon and Bluesky offer a revolutionary service to their users: a default timeline that acts as a “no algorithm” feed – simply showing the posts of the accounts you follow, ordered by recency. Once you follow a critical mass of accounts, though, the need for an algorithm becomes apparent. Bluesky allows users to select alternate feeds and to create their own algorithms for ordering information. For example, I subscribe to a feed called “The ’Gram” that shows only posts with media from people I follow. Mastodon can generate feeds from lists of users or hashtags, but does not allow algorithmic filtering in the ways that Bluesky does.
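In the spirit of Bluesky’s custom feed generators, a media-only feed like “The ’Gram” can be sketched as a simple filter over candidate posts. The field names and accounts below are hypothetical, not the actual AT Protocol schema:

```python
# Sketch of a custom media-only feed: keep posts with attached media
# from followed accounts, newest first. Fields/handles are invented.

def media_feed(posts, following):
    selected = [
        p for p in posts
        if p["author"] in following and p.get("embeds")
    ]
    return sorted(selected, key=lambda p: p["created_at"], reverse=True)

following = {"alice.bsky.social", "bob.example.com"}
posts = [
    {"author": "alice.bsky.social", "created_at": 1, "embeds": ["photo.jpg"]},
    {"author": "alice.bsky.social", "created_at": 2, "embeds": []},
    {"author": "spam.example", "created_at": 3, "embeds": ["ad.png"]},
    {"author": "bob.example.com", "created_at": 4, "embeds": ["landscape.png"]},
]
feed = media_feed(posts, following)
```

The point is who controls the filter: here the user picks the selection logic, rather than a platform optimizing for engagement on their behalf.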
Another advantage of Bluesky is domain verification, providing consistent branding that points back to your own website. For my academic musings, follow me @jamram.net. For my landscape photography, follow me @exploringthefrontier.com. This connection of identity to the web at large is excellent and restores the ecosystem of the open web through social media. Finally, I prefer the granularity of Bluesky’s moderation tools to Mastodon’s per-instance moderation; they give users excellent control over their personal experience.
For search engines, I have not identified a non-ad-driven option. While DuckDuckGo heralds its privacy model as a reason to use the platform, it still relies on ad placements for revenue, compromising search quality. Given my academic interests, I’ve started thinking about alternatives in the context of enterprise search. Much of the frustration with disinformation on the web could be addressed by increasing search index quality, which means controlling for SEO, social bots, and AI-generated content. It’s a steep hill that would rely on trust models being integrated with the indexing effort. I am always available for consultation on that topic.
Conclusion
In this article, I covered the birth of modern content ranking algorithms through Google’s PageRank and the subsequent “enshittification” of these services through ad-based business models. I identified engagement and spam control as two challenges for high-quality content that are compromised by the metrics of ad-based business models. Then I analyzed social media platforms with respect to three aspects of behavior on the web: content, connection, and commerce. Finally, I identified current tools that may give hope for a non-ad-based business model and identified a gap in the search engine space.