LLMs and The End of Social Listening

Large Language Models (LLMs) are simultaneously creating and destroying entire industries overnight. Understandably, those who know they are facing extinction are not talking about it.

At this moment, the 14-billion-dollar social listening industry is quietly pivoting away from its core function, which is ironically summarized here by an LLM.

What is social listening? The process of monitoring and analyzing online conversations about a brand, product, or topic to understand what people are saying, identify trends, and track customer sentiment.

How it works: Social listening works by collecting data from a variety of online sources, including social media platforms, blogs, forums, and news websites. This data is then analyzed to identify patterns and trends.

The online marketing and advertising industry, the tech and entertainment sectors, political campaigns, and public health monitoring all depend upon the data streams generated by listening to people. Governments, industries, and brands base major decisions on what they hear you saying online. All of that is about to end.

How does it end?

It ends with noise. Go to Google Trends (https://trends.google.com/trends/) and search for yourself. For most of us, there are large swaths of the past year where nobody was searching for our names. For some of you, Google may not give any results because your signal is already lost in the noise.

On the flatline days, a single person could alter the blue trend line just by searching for your name. That is called injecting noise. Do it throughout the year and you randomize the number of searches that appear each week.
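A minimal sketch of that idea, with all numbers invented for illustration: a mostly flatline weekly search count plus a few injected searches per week produces a trend line that no longer reflects real interest.

```python
import random

random.seed(42)  # for a repeatable illustration

# Hypothetical weekly organic search counts for a name: mostly flatline.
organic = [0, 0, 3, 0, 1, 0, 0, 12, 0, 0, 2, 0]

# Inject a random number of artificial searches each week.
injected = [random.randint(0, 10) for _ in organic]

# What the trend line actually plots: organic plus injected,
# with no way to tell the two apart.
observed = [o + i for o, i in zip(organic, injected)]
print(observed)
```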

You can do the same thing for social media posts by looking at term-use frequency.

Both are analogous to Amplitude Modulation (“AM”) in radio.[1] If we add the organic and the artificial together, we create a new pattern that has no correlation with what was happening in real life (“IRL”). The conclusions drawn from social listening data depend on capturing “authentic behavior,” not these artificial fabrications.

This same effect occurs in social media posts. If I inject more searches or posts about “John Fuisz” on days when nobody is talking about me, I can make myself look more popular, but I also destroy the ability to correlate what people were actually saying about me with sales data.

Another way to think about it is to consider the signal-to-noise ratio (SNR). The SNR measures the strength of the signal (organic, IRL data) relative to the strength of the noise (intentionally created distortion). A higher SNR means the signal is stronger than the noise, so the signal is easier to detect and characterize. A lower SNR means the noise is stronger than the signal, so the signal is more difficult to detect and characterize.
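In code, the SNR is just the ratio of the two powers, usually expressed in decibels. The numbers below are illustrative, not real data:

```python
import math

# Toy series: "signal" is the organic activity, "noise" the injected distortion.
signal = [5.0, 7.0, 6.0, 8.0, 5.0]
noise = [1.0, -2.0, 1.5, -1.0, 0.5]

def power(xs):
    """Mean-square power of a sequence."""
    return sum(x * x for x in xs) / len(xs)

# SNR in decibels: positive when the signal dominates the noise.
snr_db = 10 * math.log10(power(signal) / power(noise))
print(f"SNR = {snr_db:.1f} dB")
```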

The injected noise pattern also matters. Simply adding steady, homogeneous noise (e.g., 10 searches per day for “John Fuisz”) changes the SNR, but the signal’s general pattern will still be discernible: a constant offset is easy to subtract back out. To truly distort the original signal, the noise has to vary over time, more like white noise, which is distributed evenly across all frequencies and is therefore very difficult to filter out.
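A small simulation makes the point (all values invented for illustration): a constant offset leaves the observed data perfectly correlated with the organic pattern, while time-varying noise destroys the correlation.

```python
import random

random.seed(0)  # repeatable illustration

def pearson(xs, ys):
    """Pearson correlation coefficient, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical organic weekly mentions with a clear periodic spike.
organic = [random.randint(0, 5) + (20 if week % 10 == 0 else 0)
           for week in range(52)]

# Steady noise: a constant 10 extra mentions every week.
steady = [x + 10 for x in organic]

# Time-varying noise: large random injections each week.
varying = [x + random.randint(0, 60) for x in organic]

print(pearson(organic, steady))   # the pattern survives a constant offset
print(pearson(organic, varying))  # the pattern is buried
```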

This is already happening to some of you now. Your own searches or your own advertising is artificially injecting noise, which your ad tech companies then measure, report back to you as a successful ad campaign, and the cycle repeats.

But it is all artificial.

[1] See https://www.taitradioacademy.com/topic/how-does-modulation-work-1-1/

LLMs make the destruction possible.

The manipulation of social media posts must appear organic so that it is not easily filtered out by advanced social listening analytics. LLMs allow us to generate organic-seeming, targeted content at scale.
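The loop is simple. Here is its shape, with the LLM call replaced by a dependency-free stand-in so the sketch runs offline; a real noise machine would swap `fake_llm` for an actual LLM API call.

```python
import random

random.seed(7)  # repeatable illustration

def fake_llm(prompt):
    """Stand-in for a real LLM call; it just varies a template.
    A real noise machine would replace this with an API call."""
    openers = ["Honestly,", "Hot take:", "Can't believe", "Just saw"]
    closers = ["again.", "today.", "and nobody cares.", "lol."]
    topic = prompt.split("about ")[-1]
    return f"{random.choice(openers)} {topic} {random.choice(closers)}"

# Generate five organic-seeming posts about the same target.
posts = [fake_llm("Write a casual post about Pepsi") for _ in range(5)]
for post in posts:
    print(post)
```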

We could program a noise machine and tell it to post about “Pepsi.” Once activated, Pepsi would see its name being used in an apparently random manner (uncorrelated with its advertising campaigns), essentially turning off its social listening system. Pepsi would still know who is buying its product and where; it would no longer know, from social media, what people believe about its products. How likely is that? How likely is it for someone to protest Pepsi’s sale of soda? Or protest a company polluting the environment? Or to support a strike? Don’t like Disney’s policies on LGBTQ+? Before you get too excited, realize the ransom you could demand is only so large. Anyone can hit up Pepsi and do exactly what you did, so Pepsi is unlikely to pay you anything, assuming it has alternatives. More on that later.

Let’s ask an LLM to create a noise machine.

Our noise machine will have two core components:

●      Automated posting; and

●      Automated content creation.


What is an LLM?

We’ve had bots for a while; that takes care of the automated posting. What we need is organic-seeming content creation, and we will use the LLM for that.

An LLM is a type of artificial intelligence (AI) algorithm that is trained on a massive dataset of text and code. LLMs are able to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way.

LLMs are still under development, but they have already learned to perform many kinds of tasks, including creative writing (poems, scripts, musical pieces, emails, letters) and code generation.

Per an LLM, to set up a noise machine that posts LLM-generated content to social media, you will need to:

●      Choose an LLM: There are a number of different LLMs available, such as GPT-3, LaMDA, and Bard. Choose an LLM that is well-suited for the type of content you want to generate.

●      Create a bot account: Create a social media account for your bot. You will need to provide a username, password, and other basic information.

●      Get API credentials: Most social media platforms offer APIs that allow you to programmatically post content. You will need to get API credentials from the social media platform where you want to post your bot's content.

●      Write a bot script: Write a bot script that uses the LLM to generate content and then posts the content to your social media account. You can use any programming language to write your bot script.

●      Schedule your bot: Once you have written your bot script, you will need to schedule it to run at regular intervals. You can use a cron job or other scheduling tool to do this.
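The scheduling step can be as small as a single crontab entry. The path, interval, and log file below are hypothetical:

```shell
# Illustrative crontab line: run the bot script every 4 hours
# and append its output to a log file.
0 */4 * * * /usr/bin/python3 /home/bots/noise_bot.py >> /home/bots/bot.log 2>&1
```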

Here is a simple example of a bot script, provided to us by Google Bard, that generates and tweets a random quote using the GPT-3 API:

Python

import openai
import tweepy

# Set your OpenAI API key
openai.api_key = "YOUR_OPENAI_API_KEY"

# Set your Twitter API keys
auth = tweepy.OAuthHandler("YOUR_CONSUMER_KEY", "YOUR_CONSUMER_SECRET")
auth.set_access_token("YOUR_ACCESS_TOKEN", "YOUR_ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Generate a random quote (legacy Completion API)
quote = openai.Completion.create(
    engine="davinci",
    prompt="Generate a random quote",
    max_tokens=100,
)

# Tweet the quote
api.update_status(quote.choices[0].text)

The LLM identified the source for the code as github.com/erossiter/pythoncourse2018.

The script can be modified to generate different types of content, such as poems, code, or news summaries. You can also schedule your bot to post content to other social media platforms, such as Facebook or Instagram.

Don’t worry about your bots’ friends and followers; social listening doesn’t care. 

Bot detectors may catch you, but by increasing the number of bots and lowering the amount each bot posts, you can sidestep that problem. We can set the posting pattern to look human.
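One simple way to make the cadence look human is to randomize the gap between posts instead of posting on a fixed schedule. A sketch, with an arbitrarily chosen mean gap:

```python
import random

random.seed(1)  # repeatable illustration

def humanlike_delays(n_posts, mean_gap_minutes=180):
    """Exponentially distributed gaps between posts: many short waits,
    occasional long silences, unlike a machine-regular cadence."""
    return [random.expovariate(1.0 / mean_gap_minutes) for _ in range(n_posts)]

gaps = humanlike_delays(10)
print([round(g) for g in gaps])  # minutes to sleep between posts
```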

After you program your bot, set up one of the free social listening tools. For the test, pick an issue that not many people chat about. Target your bot at it and have it inject noise for three days, then stop for three days, and repeat. If you can’t see the difference, increase the amount of noise. Once you can reliably produce a signal that your social listening tool picks up, unleash it.
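The three-days-on, three-days-off cadence is easy to encode with the standard library:

```python
from datetime import date, timedelta

def inject_today(start, today, period_days=3):
    """True during the 'on' half of a repeating on/off cycle:
    inject for period_days, pause for period_days, repeat."""
    elapsed = (today - start).days
    return (elapsed // period_days) % 2 == 0

start = date(2024, 1, 1)
# Days 0-2 on, days 3-5 off, days 6-8 on again, and so on.
schedule = [inject_today(start, start + timedelta(days=d)) for d in range(9)]
print(schedule)
```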

The noise was already there.

Those search engine optimizers, influence bots, blog writers, etc. have been adding noise to social media for some time now. Foreign governments also add noise to shut down other governments’ social listening systems. The noise is everywhere, and it is getting louder.

Now it is social listening’s turn. Hacktivists can shut down big oil and make it effectively blind. They may take a social issue and target a brand. It will happen. Possibly as early as next week.

With the end of any industry, new ones are born from its ashes. Veriphix is introducing the era of predicting and measuring intersubjective (group) beliefs.

Belief3, our platform for measuring group beliefs at scale, is 200% more accurate than social listening and grows more accurate relative to social listening every day because of what killed social listening: noise. Backed by a robust internal ethics review, Belief3 uses no social listening and generates its data from consenting, anonymous, and compensated individuals. A better product, and it is ethical? And people get paid? Imagine that.

Not only can we operate Belief3 without using social listening, but we can also use those same noise bots to inject positive, change-inducing nudges that help shift a community’s beliefs on an issue.

RIP, social listening. We will take things from here.


Trying to get ahead of the market? Use Belief3.