KServe OSS Project Could Revolutionize Production ML Serving
KServe has joined the open-source foundation LF AI & Data as an incubation project. The collaboration has been praised by IBM, Bloomberg and Nvidia for its potential to simplify production machine learning (ML) serving, a costly container management challenge faced throughout the industry. When mature, the project will enable companies to run thousands of models in a single production deployment, a revolutionary development for artificial intelligence (AI).
"This is important not just for building better products faster, but also to ensure that we unlock the creative potential of our AI researchers without burdening them with writing tons of boilerplate code in this regard," said Anju Kambadur, Head of AI Engineering, Bloomberg.
KServe provides a Kubernetes Custom Resource Definition that encapsulates the complexity of autoscaling, networking, health checking and server configuration, bringing cutting-edge serving features such as canary rollouts to ML deployments. This holistic approach enables a simple, pluggable and complete story for production ML serving that includes prediction, pre-processing, post-processing and "explainability." The project provides high-performance abstraction interfaces for common ML frameworks, including TensorFlow, XGBoost, Scikit-learn, PyTorch and ONNX.
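To make the Custom Resource Definition concrete, the sketch below shows a minimal KServe `InferenceService` manifest of the kind described above. The resource name and model storage URI are illustrative placeholders, not values from this article:

```yaml
# Minimal sketch of a KServe InferenceService (v1beta1 API).
# The name and storageUri below are illustrative assumptions.
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: example-sklearn-model      # hypothetical name
spec:
  predictor:
    sklearn:                       # one of the supported framework predictors
      storageUri: gs://example-bucket/models/sklearn/model  # hypothetical URI
```

Applying a resource like this with `kubectl apply` is all a researcher needs to do; KServe handles the networking, health checking and autoscaling described above behind the scenes.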
The promise of this potentially pivotal solution motivated IBM, Bloomberg, Nvidia, Google, Seldon and other organizations to collaborate on KServe and publish it as an open-source project. All parties already have practical applications in mind. IBM, a co-founder and adopter of KServe, looks forward to running hundreds of thousands of concurrent models for internet-scale AI applications like IBM Watson Assistant and IBM Watson Natural Language Understanding. Bloomberg, another KServe founder, is using the solution to expand Bloomberg Terminal and other enterprise products. Nvidia, an active contributor to the project, aims to work in lockstep with KServe to support the scalability of AI via its serverless inferencing network.
Early adopters include South Korean search engine Naver Search, which shared that KServe has allowed it to modernize its AI serving infrastructure and given it the tools to handle the difference in traffic between day and night cycles.
"By providing a standardized interface on top of Knative and Kubernetes, KServe allows our AI researchers to focus on creating better models and putting their hard work into production without becoming experts in delivering and managing highly-available backend services," said Mark Winter, Software Engineer, Naver Search.