Control the AI or control the content?


My young self used to think the only path to a truly democratic society was replacing the human element (with all its biases and prejudices) with an all-seeing AI. Putting aside what living in such an AI-driven society would actually be like, the idea ignores a fundamental problem: even if the AI is created with noble intentions, it is still trained by human beings and thus embeds their unconscious biases.

Of course, today we're living in a society where LLMs are part of the vernacular, and biases in training are not merely incidental but often quite deliberate - try asking Gemini anything and you'll experience a chatbot so desperate to please that you'll soon be thinking you're a gaslighting, abusive spouse! Such programmed bias is comedic at best and disappointing at worst, but there are also plenty of models that have been prompted with far more sinister motives. It can be fun to speculate on more generally intelligent 'AGI' models carrying these behaviours forward. But I digress.

Let's assume that within a few years, most ordinary citizens will depend on such models to be productive in their lives (they're simply too effective compared to prior tools such as search engines and social media). Over time, a generation of young and old alike will have their minds slowly reprogrammed to fit the interaction style and output of these models. It'll shape how they reason, communicate and express themselves, and eventually they won't even be aware of it.

There's nothing novel in this trend; it's been happening with every channel of mass influence throughout human history, whether social media group-think, workplaces, communities, schools, nationalism, religion, and so on. Where AI differs is in reaching an apex of intimacy and scale that even totalitarian control of a social network cannot achieve.

If we assume this is true, then we can also assume that any actor who controls such an AI and directs its machinations must hold immense power over its users. However, a nefarious actor can do better. Instead of controlling the AI itself, remember that a) there are many rival AIs and b) they are all built on the same foundation - a training corpus of digitised human knowledge. A well-resourced actor could therefore simply reshape that vast trove of digital knowledge and bias it towards their own intentions instead. In turn, the many AIs would embed that programming and then project it onto their human and machine users. The consequences of that I'll leave you to speculate on.