Two of IBM’s Watson-branded collection of machine-intelligence services will be available to run as standalone applications in the public or private cloud of your choice. IBM is delivering these local Watson services atop IBM Cloud Private for Data, a combined analytics and data governance platform that can be deployed on Kubernetes.
Ruchir Puri, CTO and chief architect for IBM Watson, said this was driven by customer demand for machine learning solutions that could be run where customer data already resides, typically a multicloud or hybrid cloud environment (see related interview).
“Rather than trying to move the data to a single cloud, and create a lock-in in this open compute-environment-driven world, we are making available AI and moving it to the data,” Puri said. The concept follows how Hadoop and other mass data-processing systems perform work on data in place, rather than moving the data to the processing.
Currently, only two services—Watson Assistant and Watson OpenScale, which Puri described as “flagship products”—will be offered to customers as standalone applications. Watson Assistant is used to build “conversational interfaces” such as chatbots; Watson OpenScale is a way to train, deploy, and oversee machine learning models and neural networks in an enterprise setting, including “automated neural network design and deployment.”
IBM Cloud Private for Data is composed of preconfigured microservices that run on a multinode, Kubernetes-based IBM Cloud Private cluster. Puri said customers are expected to perform their own integration between IBM Cloud Private for Data and their local data stores; IBM does not handle that integration directly.
Puri made it clear these local Watson incarnations do not just forward API calls from a local proxy into IBM-hosted Watson. The customer runs its own local incarnation of the service, delivered atop IBM Cloud Private and running in the environment of choice. Supported environments include Amazon Web Services, Google Cloud, Microsoft Azure, and Red Hat OpenShift. Local Watson services are API-compatible with Watson services running in IBM Cloud.
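That API compatibility means switching between a hosted and a local Watson service should amount to pointing the client at a different base URL while the request shape stays the same. The sketch below illustrates the idea for Watson Assistant; the endpoint path follows the public Watson Assistant v2 API, but the hostnames, assistant ID, and session ID are hypothetical placeholders, not values from the article.

```python
def message_url(base_url, assistant_id, session_id, version="2018-11-08"):
    """Build the Watson Assistant v2 'message' endpoint for a given deployment.

    Because local Watson services are API-compatible with those in IBM Cloud,
    only base_url changes between deployments; the path and query are identical.
    """
    return (f"{base_url}/v2/assistants/{assistant_id}"
            f"/sessions/{session_id}/message?version={version}")

# Hypothetical deployment endpoints: one hosted in IBM Cloud, one running
# locally atop IBM Cloud Private in a cloud of the customer's choice.
IBM_CLOUD = "https://api.us-south.assistant.watson.cloud.ibm.com"
ON_PREM = "https://watson.internal.example.com/assistant"

# Same call shape against either deployment; only the base URL differs.
hosted = message_url(IBM_CLOUD, "my-assistant", "my-session")
local = message_url(ON_PREM, "my-assistant", "my-session")
```

A client built this way can be redeployed against AWS, Google Cloud, Azure, or OpenShift-hosted Watson instances without code changes beyond configuration.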
What’s likely to change is the results delivered from local Watson incarnations versus the master version of Watson, because the local versions need to be periodically updated. Puri could not provide a specific timeline for how often new versions of local Watson services will arrive (quarterly, annually, etc.), but he did affirm they will be updated “on a relatively regular basis.”
The amount of system resources a Watson service instance needs varies with the workload. Some SLAs for the offered products prescribe the computing environment (memory, cores, GPUs) required for the desired performance, Puri said. Both virtualized and bare-metal deployments are supported.
Other Watson services will be made available locally atop IBM Cloud Private later. IBM plans to deliver Watson Knowledge Studio, which “discovers meaningful insights from unstructured text without writing any code,” and Watson Natural Language Understanding, an automatic metadata-extraction tool, later in 2019. The latter, Puri said, is already used inside Watson Assistant as an internal microservice, so most of the work to port it to a local incarnation is already done.
This new incarnation of Watson services provides a glimpse into some of the motives around IBM’s acquisition of Red Hat. IBM Cloud Private can use the Kubernetes-powered OpenShift as its base, and Watson’s services were reworked over a three-year period around Kubernetes and containers, Puri said. Once Red Hat is fully under IBM’s umbrella, it seems likely that Red Hat’s infrastructure expertise will unlock cloud portability for future IBM data-centric services, Watson and otherwise.
This story, “IBM preps Watson AI services to run on Kubernetes,” was originally published by