> and apparently to get exposure you have to say something that fits the narrative.
I think we should really be aware that, even if tech companies weren't already able to build something like this, with LLMs they definitely are now.
There is lots of talk about the generative powers of LLMs, but they also have unprecedented analysis powers: you can now easily build something that automatically checks whether a tweet expresses a certain opinion or narrative and upranks or downranks it based on the result.
So if you're the owner of a platform, you can now fully control the appearance of what "people are saying" on the platform, without even having to use bots or fake messages.
(Of course you could still use those in addition, if the opinion you want to push is so unpopular that there aren't enough real users to uprank in the first place.)
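To make it concrete, here is a minimal sketch of such a ranking filter, assuming access to the OpenAI chat completions API; the model name, the narrative string, and the prompt are all made up for illustration, and any LLM endpoint would do:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative placeholder: the opinion the platform owner wants boosted.
TARGET_NARRATIVE = "the new policy is good for the economy"

def expresses_narrative(tweet: str) -> bool:
    """Ask the model whether a tweet agrees with the target narrative."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any capable chat model works
        messages=[
            {"role": "system",
             "content": ("Answer only YES or NO. Does the user's tweet "
                         f"express agreement with: '{TARGET_NARRATIVE}'?")},
            {"role": "user", "content": tweet},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

def rerank(tweets: list[str]) -> list[str]:
    """Stable sort: tweets matching the narrative float to the top of the feed."""
    return sorted(tweets, key=lambda t: 0 if expresses_narrative(t) else 1)
```

That's the whole trick: one classifier call per tweet plus a sort key, and the feed now "agrees" with you.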
Definitely. AFAIK platforms previously used to do sentiment analysis, and Facebook faced some backlash for experimenting on its users' moods by manipulating their timelines. But today it must be possible to do 100% editorial moderation using LLMs and pretend that whatever you want is the general public sentiment.
I also notice that "influencers" are themselves influenced by this. They pick up talking points from real-time media like Twitter, make coherent videos out of them, and the narrative gets legitimized. People rarely revisit their past work once the firehose is spraying in some other direction, and the fake public sentiment becomes the real public sentiment.
> So if you're the owner of a platform, you can now fully control the appearance of what "people are saying" on the platform
There was a whole scandal at Twitter about this around 2020 or 2021. People came forward and said there were secret departments that would suppress certain ideas or keep certain stories from trending.