Watching Zuckerberg testify and thinking… the real issue lies in “the process with which content gets pushed onto the platform”.
This – along with all the associated elements that go into the process of publishing content – is the key issue here.
Is Zuckerberg simply buying time by saying that the Artificial Intelligence needs to improve if Facebook is to win the war against “bad” content and “bad” actors?
Is he implying that this is the only way – as if Artificial Intelligence is the magic pill? That the AI will get better and eventually solve this problem for them?
Those of us who are experienced SEOs know for a fact that even the best AI will not stop the blackhatters and gamers of the system. Humanity always has found, and always will find, a way to game even the best of AIs (by finding backdoors if it has to). After all, AI is reading patterns made by humans, and we can simply reverse engineer things – given access to a certain amount of seed pattern data.
In his testimony, Zuck stresses that the way the system currently works is that any type of content (posts, ads, etc.) gets published immediately. It gets taken down if and only if it trips the AI filters, or when users flag it in a specific way that the AI algorithm recognizes as a pattern. Then either the AI takes it down itself, or it pushes the content out for a manual review.
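The publish-first flow he describes can be sketched roughly as follows. This is a toy illustration, not Facebook's actual system: the classifier, thresholds, and field names are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of the publish-first moderation flow described above.
# All names and thresholds are illustrative assumptions, not Facebook's system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    live: bool = True          # content goes live immediately on publish
    flags: int = 0             # user reports accumulated so far
    needs_review: bool = False

AI_TAKEDOWN_SCORE = 0.9        # assumed: AI removes content outright above this
REVIEW_SCORE = 0.5             # assumed: AI routes to a human above this
FLAG_THRESHOLD = 10            # assumed: enough user flags also trigger review

def ai_score(post: Post) -> float:
    """Stand-in for a trained classifier; here just a toy keyword heuristic."""
    bad_words = {"opiates", "fake-cure"}
    hits = sum(w in post.text.lower() for w in bad_words)
    return min(1.0, 0.6 * hits)

def moderate(post: Post) -> Post:
    """Runs only AFTER publication -- the key point of the testimony."""
    score = ai_score(post)
    if score >= AI_TAKEDOWN_SCORE:
        post.live = False                  # AI takes it down itself
    elif score >= REVIEW_SCORE or post.flags >= FLAG_THRESHOLD:
        post.needs_review = True           # pushed out for manual review
    return post

p = moderate(Post("Cheap opiates shipped overnight"))
print(p.live, p.needs_review)   # still live, but queued for a human
```

Note that in this model the content is live the whole time the review queue is backed up – which is exactly the window the “bad” actors exploit.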
Zuckerberg is clearly stating that the platform is not currently geared up to immediately stop a cleverly disguised piece of fake news or the sale of illegal opiates through posts or ads – but rather relies on the alertness and benevolence of users (acting in goodness without any reward) to help the AI by flagging “bad” content like this.
This is how things currently work on the platform. And, this is what he is hiding behind.
Does Facebook have access to a method, system, or technology that can have a real human manually review every piece of content or advertisement that goes into their system within a few seconds or minutes?
With their current head count of about 20,000 employees, this is an impossible feat – even a workforce orders of magnitude larger could not review everything in near-real time. Manual review would mean months or perhaps years of waiting for every post or ad (or an edit of an ad) to go live on the platform.
A significant delay in posts/content made by users would result in a total breakdown of the social media model that millions are addicted to.
The sheer number of new posts (and ads) being pushed into the system every second makes this feat impossible from a purely time-and-resource point of view.
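A quick back-of-envelope calculation makes the scale argument concrete. Both inputs below are assumptions chosen for illustration, not Facebook's published figures:

```python
# Back-of-envelope check of the scale argument above.
# Both inputs are illustrative assumptions, not Facebook's published figures.
POSTS_PER_DAY = 50_000 * 86_400    # assume ~50k new posts/ads per second
SECONDS_PER_REVIEW = 30            # assume 30 s of human attention per item
WORKDAY_SECONDS = 8 * 3_600        # one reviewer's 8-hour shift

reviews_per_person_per_day = WORKDAY_SECONDS / SECONDS_PER_REVIEW  # 960
reviewers_needed = POSTS_PER_DAY / reviews_per_person_per_day

print(f"{reviewers_needed:,.0f} full-time reviewers just to keep pace")
```

Under these assumptions you would need millions of full-time reviewers just to keep up with new content – hundreds of times the company's entire head count, before a single edit or resubmission is counted.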
For Facebook to regulate and review the information would be impossible and akin to a private company checking and manually auditing everything that gets posted on the world wide web.
On the other hand, the core value of a social media platform (or any information-based system like the web, chat, email, or the Internet itself) is people being able to post and share information/ads/content immediately and in real time.
It seems Facebook has to protect its only source of revenue – the ad platform – which is also closely tied to regular content in the manner in which ads get published.
Were Facebook to moderate and approve every advertisement before it goes live, the associated days of delay for every ad would mean massive revenue losses for FB and a possible breakdown of the ad platform, as advertisers move away to alternatives like AdWords/Google because of the approval cycles.

A key feature of the FB ad platform is being able to quickly drill down to target audiences and run A/B split tests, so you can choose your control groups and keep improving ROI in each ad-set cycle. Any delay in getting ad edits approved by humans (as opposed to users flagging them so the AI reports them to an employee moderator) would mean a total breakdown of the ad platform – and the closure of Facebook, which is sustained by it. This is like telling someone in a discussion that they can't speak until they write down what they want to say and get it approved first.
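The A/B-testing point can be put in numbers. A toy model with assumed figures (a 24-hour test window per ad-set cycle, versus a hypothetical 3-day human approval queue) shows how a queue collapses the number of optimization cycles an advertiser gets per month:

```python
# Toy model of the ad-set iteration argument above.
# Test duration and approval delay are illustrative assumptions.
def cycles_per_month(test_hours: float, approval_hours: float,
                     month_hours: float = 30 * 24) -> int:
    """How many test/edit ad-set cycles fit in one month."""
    return int(month_hours // (test_hours + approval_hours))

instant = cycles_per_month(test_hours=24, approval_hours=0.1)  # near-real-time
queued = cycles_per_month(test_hours=24, approval_hours=72)    # 3-day human queue

print(instant, queued)   # prints: 29 7
```

Four times fewer iterations per month means four times slower ROI improvement – which is exactly why advertisers would drift to a platform without the queue.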
And improving the automated tools and AI that Zuck keeps talking about is not the answer.
That is misleading.
No amount of improvement in AI will allow it to catch “bad” content at a 100% filtering rate.
And, doesn’t this all sound a bit like Google back in 2011 before their massive search algorithm updates that saved them from the onslaught of SERP spam?
However, this is different, because it also involves the privacy of millions of people and their private data (over 29,000 data points per individual on average).
Data that he said could be misused by “bad” actors and pushed out to other platforms – so even if you deleted your Facebook account, your private data would still linger around the web (and potentially be sold/traded).
Good luck Zuck, but this does look like the end of the road for Facebook being able to run the way you want it to run.
What does not amaze me is how the US government / NSA etc are cleverly staging this play for the world, while secretly continuing to backdoor and retain control of a treasure of global data from one of their most prized homegrown assets.
Big brother already knows how this game ends… and is not going to be on the losing side.