We’ve certainly learned quite a bit about Facebook over the last few weeks, haven’t we? What’s becoming increasingly clear with every conversation, every question answered (or skirted), and every discovery is that data privacy and ownership are at the center of things. And those issues remain critical for businesses to address with regard to their own customers. But more than that, we need to take a good hard look at the very technology that makes all of this possible.
AI to Facebook’s rescue?
With regard to harmful content and its role in twisting what people see, read and believe, Mark Zuckerberg indicated that the short-term plan is more humans (up to 20,000 new hires!) and that artificial intelligence will eventually handle the scale: “We’re going to shift increasingly to a method where more of this content is flagged up front by AI tools.”
In a confidential document recently uncovered by The Intercept, we find that instead of merely offering advertisers the ability to target people based on demographics and consumer preferences, Facebook will also use AI to predict future behavior. That is, Facebook promises the ability to target consumers based on how they will behave, what they will buy, and what they will think.
You know what else is powered by AI? Recommendation systems. They’re behind Amazon’s suggestions (“people who bought this also bought…”) and they’re behind the algorithms that control the content you see on Facebook. While they can save us time and bring us the content that (we think) we want, recommendation engines are perhaps the biggest threat to societal cohesion on the Internet – bubbles, conspiracy theories, and nonsense make their way to the top.
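To make the mechanism concrete, here is a minimal sketch of the co-occurrence logic behind “people who bought this also bought…” style suggestions. The purchase data, item names, and function names are hypothetical illustrations, not Amazon’s or Facebook’s actual systems, which are far more sophisticated.

```python
# Minimal sketch of a "people who bought this also bought..." recommender,
# based on simple item co-occurrence counts. All data and names here are
# hypothetical, for illustration only.
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories: one set of items per user.
purchases = [
    {"headphones", "phone case", "charger"},
    {"headphones", "charger"},
    {"phone case", "screen protector"},
    {"headphones", "screen protector", "charger"},
]

# Count how often each pair of items appears in the same basket.
co_occurrence = Counter()
for basket in purchases:
    for a, b in combinations(sorted(basket), 2):
        co_occurrence[(a, b)] += 1
        co_occurrence[(b, a)] += 1

def also_bought(item, top_n=3):
    """Return the items most frequently bought alongside `item`."""
    scores = Counter(
        {other: count for (first, other), count in co_occurrence.items() if first == item}
    )
    return [other for other, _ in scores.most_common(top_n)]

print(also_bought("headphones"))  # e.g. ['charger', 'phone case', 'screen protector']
```

The same basic idea, applied to clicks and reactions instead of purchases and optimized for engagement, is what decides which posts rise to the top of a feed.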
And this mist of online reality threatens to break down the basic elements that hold us together as a society offline, as arguments and people alike become intractable. “Fake news” and “alternative facts” have crept into our collective vocabulary and some members of society are unable to discern truth from fiction.
A turning point?
Are we already too far down a path of platform-induced destruction? Is it too late to wrest control of our fate from the algorithms? Has the effort to make the world more connected during this fractious political time actually made us more disagreeable than ever?
Elon Musk – that wunderkind known for embracing technology and all it has to offer – recently said “humans are underrated,” noting that “excessive automation at Tesla was a mistake.” And pioneers of the Internet have begun apologizing for their creations.
“The fault, dear Brutus, is not in our stars,
But in ourselves, that we are underlings.”
But maybe – just maybe – the underlying factor isn’t the algorithms that drive the platforms, nor even the platforms themselves. Perhaps our problem lies within. When freely available information meets confirmation bias meets positive feedback loops, the fact is clear: we cannot resist. The temptation to engage is too much.
The sad truth about technology is deceptively simple: we’re addicted.