Voice-first technology went radically mainstream this week.
For starters, Amazon announced on Thursday a stunningly large lineup of Alexa-enabled devices.
These include an Echo for cars, a voice-interactive wall clock and an Alexa-equipped microwave that can reorder popcorn when you run out.
The products are fun. But the real news is that Amazon announced the Alexa Connect Kit, which includes a very low-cost circuit board the size of a quarter that has the electronics necessary to turn any device into an Echo. Expect a tsunami of Alexa-equipped devices next year.
Amazon also announced “Simple Setup” for Alexa-equipped devices, which lets a new device connect to Wi-Fi and log in to a user’s account automatically by drawing on other Alexa-equipped devices it discovers in the home.
And it improved Alexa’s functionality. For example, Alexa can now hold extended conversations without requiring the wake word to be repeated.
Companies are focusing on consumer smart speakers for now because their business models depend on harvesting personal data to improve personalized advertising and Amazon’s buying recommendations. But business voice-first tools are coming.
Recall that Amazon rolled out its Alexa for Business platform, which is designed to bring voice-first applications to businesses and enterprises. The conferencing systems and business tools based on it are still in development but should hit the market next year.
Amazon also launched a new design language called Alexa Presentation Language (APL) for building voice skills that include visuals, designed for its screen-based Alexa devices.
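To make the idea concrete: APL documents are JSON describing the visual layer of a skill. The sketch below builds a minimal document as a Python dict and wraps it in the RenderDocument directive a skill returns to a screen-based device. The specific text and token values are illustrative, not from Amazon’s materials.

```python
# Minimal APL document: a single centered Text component.
# APL documents are JSON; here one is expressed as a Python dict.
apl_document = {
    "type": "APL",
    "version": "1.0",
    "mainTemplate": {
        "parameters": ["payload"],
        "items": [
            {
                "type": "Text",
                "text": "Hello from a screen-based Alexa device",  # illustrative
                "fontSize": "40dp",
                "textAlign": "center",
            }
        ],
    },
}

# A skill delivers the document to the device by including this
# directive in its response payload.
render_directive = {
    "type": "Alexa.Presentation.APL.RenderDocument",
    "token": "demoToken",  # illustrative token
    "document": apl_document,
}
```

The point of the separate design language is that the same skill logic can drive both voice-only devices (which ignore the directive) and screen-based ones (which render it).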
We also got hints this week that Google will probably announce on Oct. 9 a new smart speaker called Home Hub, a screen-based version of its Google Home device. According to leaked photos and a public FCC posting, the device will have a 7-inch touchscreen, 802.11ac Wi-Fi at 2.4GHz and 5GHz, an ambient-light-and-color sensor and something called “far-field voice recognition.”
The Amazon announcement and the Google leaks this week were focused on the consumer market. The big news for the enterprise came from Salesforce.
Salesforce finds its voice
Salesforce this week launched a new feature for its Einstein AI platform called Einstein Voice, which enables users to update Salesforce records by speaking naturally. It works on “any device,” according to the company, including Google Home and Amazon Alexa-based devices.
Buried in the news is the fact that Einstein Voice instantly turns consumer smart speakers and smart displays into enterprise tools, which can not only provide an interface to CRM systems but also answer general questions, make phone calls and handle other tasks.
Einstein Voice can also brief users on each day’s Salesforce metrics and priorities by voice. This briefing can include the day’s schedule and overviews of the team’s pipeline.
Enterprise customers can also create their own voice bots using Salesforce’s Einstein Bots Platform.
One of the most transformational ideas behind Einstein Voice is to make using Salesforce far more mobile. For example, instead of waiting until you’re back at a desk to log updates after a meeting, you can instead just do it while driving by talking to the mobile app. The system’s AI will find the relevant records and automatically place the information into the appropriate fields, according to the company. It will even create follow-up tasks and notify your co-workers.
Einstein Voice Assistant will be in pilot as part of the winter 2019 release, according to Salesforce. Einstein Voice Bots will be part of the summer 2019 release. Pricing was not announced.
Understanding the voice-first enterprise
An enterprise is a combination of people and computers that work together to create value for customers and shareholders.
The voice-first revolution is more than just the addition of a spoken-word user interface layer. It’s a radical step forward in the union between humans and machines.
This idea is reflected beautifully in the Einstein Voice announcement.
Voice-first technologies require AI, which lets people speak naturally while the software figures out what they are saying and takes action on it.
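The pipeline behind that sentence is: transcribed utterance in, recognized intent plus extracted details out, then an action. Real assistants use trained language-understanding models; the toy sketch below uses regular expressions purely to illustrate the shape of that pipeline. The intent names and phrasings are invented for the example.

```python
import re

# Toy intent patterns. A production voice assistant uses trained NLU
# models, but the flow (utterance -> intent + slots -> action) is the same.
INTENT_PATTERNS = {
    "UpdateRecord": re.compile(r"update (?P<record>.+?) with (?P<note>.+)", re.I),
    "GetSchedule": re.compile(r"what('s| is) on my (calendar|schedule)", re.I),
}

def parse_utterance(utterance: str):
    """Return (intent_name, slots) for a recognized utterance, else (None, {})."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()  # named groups become slots
    return None, {}

intent, slots = parse_utterance("Update the Acme account with met the CFO today")
# intent == "UpdateRecord"
# slots == {"record": "the Acme account", "note": "met the CFO today"}
```

The hard part, and the reason voice-first requires AI rather than pattern matching, is handling the endless ways people actually phrase the same request.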
Over time, the ability of AI to understand human speech and intentions, and to take meaningful action, will only grow.
Voice-first represents an acceleration of the trend where computers work much harder on our behalf.
That ability represents a huge leap forward in the partnership between people and computers for getting work done.
Voice-first technologies also enable people to look at screens less. Today, screen addiction is a major problem that, among other things, harms productivity.
Crucially, voice-first technologies change how people work and even how they think.
Fast forward 10 years and it’s very likely that business professionals will spend a large part of their day having conversations with voice agents, and that those agents will pre-emptively point out various facets of business that require attention.
Meetings and conferences will have AI participants, which will listen to the conversations and offer insights, take meeting notes and not only take action items, but also go out and accomplish various tasks during the meeting.
Voice-first business tools are not about speech recognition — at least, that’s not the transformational part. They’re about speeding up the interaction between people and machines and employing AI as an amplifier of human intelligence.
Futurists worry about AI “taking our jobs.” The great thing about voice-first technologies is that they represent a future in which AI improves our jobs and works not as a rival but as a partner that enables business to be far more effective and impactful.
Voice-first technologies also promise a future in which an increasing amount of our work can be accomplished via mobile and remotely. They bring teams together virtually, reducing the need for them to come together physically.
Remote work is already a growing trend. Voice-first technologies will only accelerate that transformation.
The Alexa announcements also show the way to a future where, instead of using voice assistants via smart speakers, smart displays and smartphones, we can look forward to a world in which they’re built into everything.
This is especially true for businesses. Within a few years, it’s likely that AI-based voice technologies will be built into every imaginable kind of office device or infrastructure.
The voice-first revolution isn’t a small thing. It’s a giant leap. In the same way that electricity was applied to everything and changed everything in the early 20th century, the addition of AI voice capability to everything will usher in a comparable transformation.