Have you heard of the open source internet? The antidote to a capitalist web already exists

In the early days of the internet, famously, no one knew if you were a dog. The internet was a place where you could be anyone.

More importantly, it was also a place where you could find anything: that rare book, or the perfect pair of neon-pink tights, or a community for your unusual health condition. The underlying model of the internet was that it was decentralised, and everyone had the right to have a voice – even dogs.

Marketers suspected the internet could be used to make money, but no one had figured out how yet. The original search engines were essentially indexes of all the pages on the web: you could literally browse the whole web if you were so inclined.

For those of us who were there, it was like the coolest club going, only everyone there was an oddball, a nerd or some other kind of outcast. Like all the best clubs, though, the internet didn’t stay exclusive. Marketers did work out how to use it to sell things (mostly pornography in the early days), and the internet became a fact of life rather than a niche interest.

From consolidation to ‘enshittification’

In the early 2000s, we saw another phenomenon: consolidation.

Facebook, through its links to the US college experience, became the place to connect with friends. Amazon, through its distribution network, became the place to buy … well, everything. Google became the go-to source of information, and used this position to make itself the default search engine in browsers and on mobile phones.

Initially, this consolidation happened because these tools were great for the people who used them. Then the tools became less great for end users, and instead became great for the people who sold things on them (advertisers, mostly).

However, people kept using the tools because the cost of switching was high, or there was no viable alternative.

Finally, these products have become great for people who own them, and not great for anyone else. The competition has also been squeezed out. The most fitting term for this process is “enshittification”, coined by author and digital rights activist Cory Doctorow. It is rife across digital products as diverse as ridesharing, streaming services and search engines.

So now, instead of connecting with friends, finding unique products or having the information of the world at your fingertips, the internet is a shopping mall advertising the same poor-quality products everywhere.

Google is currently facing an antitrust lawsuit in the US over its online advertising business practices.

The alternative world exists

So, what was the alternative? It’s been there all along. In fact, lots of the internet still runs on it.

It’s called the free and open source software movement.

At the dawn of the tech era – the 1950s and 60s – most of the people involved in tech and programming were hobbyists and tinkerers, who shared code to help each other build things, grow and learn.

This became a social movement centred around the ethics of distributing software, and it had four underlying principles:

software should be free to use for any purpose
software, and the code that underlies it, should be available for study and modification
you should be free to share software with others, and
you should be free to share software you have modified.

For many people in the movement, it was unethical to make software proprietary, or to work with companies that did: this became the free software movement.

The open-source software movement is an alternative that’s more amenable to proprietary software, but still believes people should have access to the code.

This approach has much in common with the modern “right-to-repair movement” – it’s fine for a company to sell you a product, but you should be able to take it apart and fix it if it isn’t working.

Open-source software is baked into the internet. Over 95% of the top million web servers – the computers that send web content to your browser – run Linux, an open-source operating system (rather than, say, Windows or macOS).

Netscape, an early web browser, had its code released as open source in 1998, and its descendant, the Firefox browser, is still open source today.

Tux the penguin is the mascot of Linux, chosen by its creator Linus Torvalds. (Image: Anthony Easton/Flickr, CC BY)

A right to repair the internet

So, how different would the internet look if the open source movement had been even more dominant?

It is instructive to look at what happens when for-profit tech giants release code and documentation, either deliberately (like Twitter) or accidentally (like Google).

In both cases, analysis of the code or documents revealed quirks that benefit the companies or their founders – quirks that company representatives had said, or implied, did not exist.

In these cases, the openness has meant people could understand what was happening in a way that wasn’t possible before.

Understanding is one thing. Even better would be if people could use what has been released to retrieve their own data, lowering the cost of switching to an alternative service – be that a social media network, search engine or shopping provider.

Imagine if you could write a post and choose which social media platform it went to, or have a single app to keep up with all your friends. With open-source code – and platform rules that allowed such interoperability – this would almost certainly already be a reality.
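In fact, on the open, federated corner of today’s web, it already works. As a rough sketch, here is how a “write a post once, choose where it goes” tool might look against Mastodon’s open, documented API (POST /api/v1/statuses); the server addresses and access tokens below are placeholder assumptions, not real credentials.

    # A rough sketch of "post once, publish anywhere" over an open API.
    # Instance URLs and tokens are placeholders, not real credentials.
    import requests

    def post_everywhere(text, accounts):
        """Publish the same post to every (instance_url, access_token) pair."""
        for instance, token in accounts:
            response = requests.post(
                f"{instance}/api/v1/statuses",       # Mastodon's documented endpoint
                headers={"Authorization": f"Bearer {token}"},
                data={"status": text},
                timeout=10,
            )
            response.raise_for_status()             # fail loudly if a server rejects it
            print(f"Posted to {instance}")

    # Hypothetical usage: one post, several servers of your choosing.
    post_everywhere("Hello from the open web!", [
        ("https://mastodon.social", "TOKEN_A"),
        ("https://fosstodon.org", "TOKEN_B"),
    ])

Because the protocol is open, any server and any client can implement it – exactly the kind of interoperability a closed platform has every incentive to prevent.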

And that reality is still possible. The recent antitrust judgement against Google has put tech giants on notice: the consolidation required to enshittify the user and seller experience – and enrich tech company owners – will no longer go unchallenged.

Without consolidation, tech companies have to compete for users by providing better services, and that’s good for everyone.

The right-to-repair movement is taking off, too. Perhaps one day, we will have the right to understand – and repair – the technology we use on the internet. That would be a future worth fighting for.

Can AI talk us out of conspiracy theory rabbit holes?

New research published in Science shows that for some people who believe in conspiracy theories, a fact-based conversation with an artificial intelligence (AI) chatbot can “pull them out of the rabbit hole”. Better yet, it seems to keep them out for at least two months.

This research, carried out by Thomas Costello at the Massachusetts Institute of Technology and colleagues, shows promise for a challenging social problem: belief in conspiracy theories.

Some conspiracy theories are relatively harmless, such as believing Finland doesn’t exist (which is fine, until you meet a Finn). Other theories, though, reduce trust in public institutions and science.

This becomes a problem when conspiracy theories persuade people not to get vaccinated or not to take action against climate change. At its most extreme, belief in conspiracy theories has been associated with people dying.

Conspiracy theories are ‘sticky’

Despite the negative impacts of conspiracy theories, they have proven very “sticky”. Once people believe in a conspiracy theory, changing their mind is hard.

The reasons for this are complex. Conspiracy beliefs are often anchored in communities, and conspiracy theorists have frequently done extensive research to reach their position.

When a person no longer trusts science or anyone outside their community, it’s hard to change their beliefs.

Enter AI

The explosion of generative AI into the public sphere has increased concerns about people believing in things that aren’t true. AI makes it very easy to create believable fake content.

Even if used in good faith, AI systems can get facts wrong. (ChatGPT and other chatbots even warn users that they might be wrong about some topics.)

AI systems also contain widespread biases, meaning they can promote negative beliefs about some groups of people.

Given all this, it’s quite surprising that a chat with a system known to produce fake news can convince some people to abandon conspiracy theories, and that the change seems to be long lasting.

However, this new research leaves us with a good-news/bad-news problem.

It’s great we’ve identified something that has some effect on conspiracy beliefs! But if AI chatbots are good at talking people out of sticky, anti-scientific beliefs, what does that mean for true beliefs?

What can the chatbots do?

Let’s dig into the new research in more detail. The researchers wanted to know whether factual arguments could be used to talk people out of conspiracy beliefs.

This research used over 2,000 participants across two studies, all chatting with an AI chatbot after describing a conspiracy theory they believed. All participants were told they were talking to an AI chatbot.

The people in the “treatment” group (60% of all participants) conversed with a chatbot that was personalised to their particular conspiracy theory and the reasons they believed in it. Over three rounds of conversation (one round being the participant and the chatbot each taking a turn to talk), this chatbot tried to convince the participants, using factual arguments, that their beliefs were wrong. The remaining 40% of participants had a general discussion with a chatbot.
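To make the setup concrete, here is a minimal sketch of what such a personalised, three-round conversation could look like against a general-purpose chat API. This is not the researchers’ code: the model name, the prompts and the get_user_reply helper are all illustrative assumptions.

    # A rough sketch (not the study's actual code) of the treatment setup:
    # a chatbot primed with one person's conspiracy theory and their stated
    # reasons for believing it, run for three conversational rounds.
    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

    def debunking_dialogue(conspiracy, reasons, get_user_reply):
        """Three rounds of fact-based conversation tailored to one belief."""
        messages = [
            {"role": "system", "content": (
                "Using accurate, verifiable evidence, respectfully challenge "
                f"this conspiracy theory: {conspiracy}. "
                f"The person says they believe it because: {reasons}."
            )},
            {"role": "user", "content": f"I believe that {conspiracy}."},
        ]
        for round_number in range(3):  # one round = a user turn + a bot turn
            response = client.chat.completions.create(
                model="gpt-4o",        # illustrative; the study used a GPT-4 model
                messages=messages,
            )
            reply = response.choices[0].message.content
            print(f"Chatbot: {reply}")
            messages.append({"role": "assistant", "content": reply})
            if round_number < 2:       # collect the participant's next turn
                messages.append({"role": "user", "content": get_user_reply()})
        return messages

The key design choice mirrored here is personalisation: the chatbot argues against one specific belief, for one person’s specific reasons, rather than reciting generic facts.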

The researchers found that about 20% of participants in the treatment group showed a reduced belief in conspiracy theories after their discussion. When the researchers checked in with participants two months later, most of these people still showed reduced belief in conspiracy theories. The scientists even checked whether the AI chatbots were accurate, and they (mostly) were.

We can see that for some people at least, a three-round conversation with a chatbot can persuade them against a conspiracy theory.

So we can fix things with chatbots?

Chatbots do offer some promise with two of the challenges in addressing false beliefs.

Because they are computers, they are not perceived as having an “agenda”, making what they say more trustworthy (especially to someone who has lost faith in public institutions).

Chatbots can also put together an argument, which is better than facts alone. A simple recitation of facts is only minimally effective against fake beliefs.

Chatbots aren’t a cure-all though. This study showed they were more effective for people who didn’t have strong personal reasons for believing in a conspiracy theory, meaning they probably won’t help people for whom conspiracy is community.

So should I use ChatGPT to check my facts?

This study demonstrates how persuasive chatbots can be. This is great when they are primed to convince people of facts, but what if they aren’t?

One major way chatbots can promote misinformation or conspiracies is when their underlying data is wrong or biased: the chatbot will reflect this.

Some chatbots are deliberately designed to reflect particular biases, or to limit transparency about how they work. You can even chat to versions of ChatGPT customised to argue that Earth is flat.

A second, more worrying possibility is that as chatbots respond to biased prompts (which the people asking may not realise are biased), they may perpetuate misinformation, including conspiracy beliefs.

We already know that people are bad at fact-checking, and that when they use search engines to do so, those engines respond to their (unwittingly biased) search terms, reinforcing beliefs in misinformation. Chatbots are likely to be the same.

Ultimately, chatbots are a tool. They may be helpful in debunking conspiracy theories – but like any tool, the skill and intention of the toolmaker and user matter. Conspiracy theories start with people, and it will be people that end them.