Facebook CEO Mark Zuckerberg, under pressure from Congress and critics, now says he is going to protect the millions of people who use his social network from fake news.
Phew, feel better now?
I have never been an alarmist about the spread of fake news on social media. What’s new about rumor, innuendo, distortion, lies and the other unsavory devices that purveyors of bogus stories meld into the falsehoods they present as truth? They are as old as humanity. And fictitious stories — just like the fake news a lot of people are decrying now — have always ranged in degree from harmless joke to life-threatening weapon.
If we’re gullible enough to believe something we read on Facebook — or read or heard anywhere else, for that matter — without considering the source or doing any independent research, maybe we deserve our just reward — or punishment. Isn’t that the way life has always been? If you fail to research what might be wrong with your car, or the person you pay to fix it, are you more likely to be duped by a dishonest mechanic? If you eat that unfamiliar berry you run across on your camping trip, do you run the risk of being poisoned?
And who signed up for this social media thing anyway? Uh, I believe that would be us — the same people who could give it up any time we feel like it. And any time would include the point we become convinced that the threat of being duped by fake news outweighs the benefits of sharing selfies, collecting likes and feeling connected to hundreds of friends we hardly know.
What is fake news, anyway? You’ll find varying answers, but many of the people in my life define it as anything with which they disagree or that upsets their preconceived sensibilities.
I talked about this stuff back in December 2016, when I cited some poignant observations by communications professor Jonathan Albright, formerly of Elon University and now at Columbia.
“If we aren’t vigilant,” Albright wrote for The Guardian news outlet in London, “the result of fake news is likely to be yet another layer of filtering. And this time around, the filters won’t be to segment audiences for advertising purposes or to target voting electorates; it won’t be to display the news articles, “likes” and intra-thread @replies that algorithms think we want to see first.
“The filters in the future won’t be programmed to ban pornographic content, or prevent user harassment and abuse. The next era of the infowars is likely to result in the most pervasive filter yet: It’s likely to normalize the weeding out of viewpoints that are in conflict with established interests.
“This isn’t just a problem limited to the center, the left or the right. Rather, this is a new reality. So, as everyone barricades themselves further into algorithmic information silos, encrypted messaging services, and invite-only social network sites, it’s at least worth a thought. In the coming decade, AI-powered smart filters developed by technology companies will weigh the legitimacy of information before audiences ever get a chance to determine it for themselves.”
So here we are, with Mark Zuckerberg, the guy who helped create the problem, pledging to use his magical algorithms to protect us from the perils of using his social network. Or maybe he’s really pledging to protect us from ourselves, masses unwilling or unable to accept responsibility for discerning truth from fiction as we scroll habitually and mindlessly through the posts people, businesses or who knows what share with us on Facebook.
Back in 2016, I suggested the solution to fake news could be worse than the problem. It conjures the cliche, “Be careful what you ask for; you might just get it.”
And now, we are getting it.
Yes, let’s leave it to Mr. Zuckerberg and his army of algorithms to determine what does and does not constitute legitimate news. That ought to work, right?
— Executive Editor Keith Magill can be reached at firstname.lastname@example.org. Follow him on Twitter @CourierEditor. You’ll find links to material Magill references in this column at houmatoday.com and dailycomet.com.