Facebook is working on its third virtual assistant.
That Facebook would pursue a voice-based, artificially intelligent interface is hardly surprising. A list of the world’s eight biggest companies by market cap contains the complete roster of the seven major players in the virtual assistant market: Microsoft, Apple, Amazon, Alphabet (Google), Facebook, Alibaba and Tencent. The only outlier on the list is Berkshire Hathaway.
Evidently, to be one of the biggest tech companies is to have an AI assistant interface. It’s easy to predict that voice-based virtual humans will become the preferred, mainstream user interface for almost all computing.
And Facebook also owns one of the world’s most formidable AI research labs. The company now spends nearly $8 billion per year on R&D.
So Facebook is going to have a powerful AI virtual assistant. And it has been working on it for a long time. And inevitably Facebook’s assistant will find its way into your company, behind your firewall, and into the homes of employees.
But should it?
The M experiment
Just over a year ago, Facebook announced the closure of an experimental product called Facebook M, which it opened to around 2,000 Californians for two and a half years on Facebook’s Messenger platform.
Facebook M was an AI-powered virtual assistant that was backed up by a team of humans answering the questions and taking the actions that the AI couldn’t yet handle.
One of M’s capabilities was to eavesdrop on chat conversations and interject suggestions, such as movies to watch or people to video-chat or call.
M was designed to remind you about meetings — reminders you could request. And it could create meetings with time and place, and even suggest booking an Uber or Lyft to get there.
If someone on chat asked, “Where are you?” M would present a one-tap “send location” button.
The M idea was based on constant surveillance of every word exchanged on Messenger. After closing M, Facebook’s AI assistant efforts transitioned from watching every word typed to listening to every word spoken.
Facebook currently sells a smart display, called Facebook Portal, which uses two virtual assistants. One is Amazon’s Alexa. The other is Facebook’s second virtual assistant product, also called Portal, which runs on the Portal hardware for making calls and other small tasks.
This week, CNBC broke the news that Facebook is working on a third assistant.
Instead of creating a ubiquitous, all-purpose, cross-platform agent like Amazon’s Alexa or Google’s Assistant, Facebook may instead be designing an assistant that runs on Facebook’s own hardware, including Portal and its Oculus VR platform, as well as future, unspecified hardware platforms, probably including smart speakers and smart displays for businesses and enterprises.
Commentators are taking misguided solace in Facebook’s limited ambitions. The risk is not that Facebook might be broadly helpful. The risk lies in the presence in the room of a microphone controlled by Facebook.
Can Facebook be trusted?
It seems as if there’s a major Facebook scandal every month that erodes public trust in the company. And this month’s scandal was a whopper: Facebook was caught requesting email passwords from new users signing up for Facebook, then using those passwords to copy and transfer the email contacts associated with those email accounts without user permission.
Facebook claims to have “unintentionally uploaded” people’s address books. It says that it did not retain the passwords and that it deleted the data. It has not said whether it retained the information about those users’ social connections, which was the apparent purpose of stealing the contact information.
Facebook further tried to minimize the impact by implying that the number of victims was small — 1.5 million or so. But the information stolen was not that of the 1.5 million users; it was information belonging to many more people: the users’ contacts. If the average user has 100 contacts (and those 1.5 million users’ contacts don’t overlap), then the number of actual victims would be closer to 150 million.
Facebook did not reveal how many people were actually affected.
Facebook’s intent is unknowable. But regardless of whether Facebook’s actions were malicious or incompetent, we’re still left with the conclusion that Facebook is untrustworthy.
The Electronic Frontier Foundation (EFF) went for the darkest interpretation, which is that Facebook is behaving like a criminal hacking organization. “For all intents and purposes, this is a phishing attack,” the EFF said in its official response to the event.
But even the most generous interpretation is that Facebook recklessly ignored minimal standard practices for safeguarding data by requesting email passwords.
As the EFF said in its statement on the matter, email passwords are often the target for phishing attacks because email contains the keys to everything a person does online and everyone a person knows. That’s why even the most minimally responsible companies never ask users for email passwords.
It gets worse.
Part of Facebook’s defense was that users could choose to not share their email password, and instead use email or phone numbers for verification. But they could access those options only by choosing the “Need help?” button, a clear example of dark pattern design.
And the phone option is also compromised. Facebook was caught last year using phone numbers gathered ostensibly for verification purposes for advertising without user permission.
A common theme in Facebook scandals is the reckless handling of personal data.
Last month, Facebook revealed that it had been storing passwords for hundreds of millions of Facebook users and tens of thousands of Instagram users in easily readable format on Facebook servers accessible to thousands of Facebook employees for many years. This week, Facebook quietly amended its post on the revelation to say that the number of Instagram users affected was in fact in the millions, not thousands. Facebook didn’t say how many millions.
Another report showed that two third-party Facebook app developers stored a “vast” collection of Facebook-user data on Amazon cloud servers in a publicly downloadable format. This data included Facebook-user passwords and activity.
Is Facebook dishonest?
A recent report revealed that Facebook CEO Mark Zuckerberg used the data of Facebook users to reward friends and punish enemies and that Zuckerberg and other senior executives discussed plans for years to sell user data. That same report shows that Facebook’s public statements about user privacy are different from actions taken behind closed doors.
We also learned this month that even after deactivating your Facebook account, Facebook continues to track you. This practice isn’t mentioned in its data policies.
We know from previous reports that Facebook maintains information users never provided (shadow profiles) and tracks people who have logged out of Facebook. It also tracks people who never even signed up for an account.
The Facebook transgressions from last year alone that suggest a Facebook culture of dishonesty are too numerous to mention in this column.
Why virtual assistants need trust
It’s important to be clear on the role trust plays in the world of virtual assistants housed on smart speakers and smart displays.
While it’s true that smartphones also have microphones, the use of those microphones is tightly restricted by the mobile operating system vendors, and any unauthorized use is likely to be discovered and stopped — either by the company’s own internal teams or by security researchers who devote their lives to catching such abuse.
Smart devices sold directly by Amazon, Google and Facebook are “black boxes,” with no intermediary organization positioned to detect and stop abuse of their sensors.
It’s close to impossible to know whether, when or under what circumstances the microphone installed in such a device is activated, what happens with the recordings, and how that data is processed or used.
And we’re not talking about today’s technology, but tomorrow’s. Over the next 10 years, it will become possible for companies like Facebook to record audio from millions of microphones every day, all day, and process that data into meaningful, privacy-violating mega-databases of information.
The AI personal assistant user interface revolution is coming. And it’s going to put microphones everywhere.
And that’s why we must all reject Facebook’s participation in this revolution — especially after the repeated trespasses against user privacy that are at least incompetent and at worst malicious or even criminal.
Facebook simply can’t be trusted.
This story, “Can Facebook be trusted with a virtual assistant?” was originally published by