Is content moderation a dead end?
In the late 1990s, Microsoft was the evil empire, and a big part of ‘evil’ was that it was too closed - it made things too hard for developers. But then came the great malware explosion, which at one point shut down half the Pentagon, and we realised that the real problem was that Windows and Office were too open. Microsoft had built them as fluid and extensible platforms, where any developer could do pretty much whatever they wanted once they were on your PC, and when we combined that with the internet, this became a problem. Microsoft had to pivot to ‘trustworthy computing’: it put a lot of effort into closing off APIs that could be abused and checking for bugs that could be exploited, and it also had to create a whole infrastructure of scanning and monitoring. Microsoft made it much harder to do bad stuff, and wrote software to look for bad stuff.
This is exactly what happened to social in general and Facebook in particular in the last five years. Until 2016 or so, Facebook was ‘evil’ because it was too closed - because it was too hard for developers to access information. It even tried to make people use their real names - that was especially evil. Then, just as for Microsoft, that turned upside down, and we realised that the real problem was that it was too open. This time the malware was aimed at people’s cognitive biases instead of at software vulnerabilities.