- Kubeflow helps data scientists manage machine learning workflows on Kubernetes and deploy and scale models in production.
- Kubeflow was co-founded by developers at Google, Cisco, IBM, Red Hat, CoreOS, and Caicloud.
- The project continues to grow, with hundreds of contributors from over 30 participating organizations.
Project co-authors Jeremy Lewi, Josh Bottum, Elvira Dzhuraeva, David Aronchick, Amy Unruh, Animesh Singh, and Ellis Bigelow announced the news in a Medium post this morning. “Kubeflow’s goal is to make it easy for machine learning engineers and data scientists to leverage cloud assets (public or on-premise) for [machine learning] workloads,” they wrote. “With Kubeflow, there is no need for data scientists to learn new concepts or platforms to deploy their applications, or to deal with ingress, networking certificates, etc.”
Kubeflow was first introduced to the world at the annual KubeCon conference in 2017; come 2020, its first major release is available. Kubeflow was developed to address two foremost concerns of machine learning projects: the need for integrated, end-to-end workflows, and the need to make deployments of machine learning systems manageable and scalable. It allows data scientists to build machine learning workflows on Kubernetes and manage them without worrying about the intricacies of Kubernetes itself.
Kubeflow is intended to manage each phase of a machine learning project: writing the code, building the containers, allocating the Kubernetes resources to run them, training the models, and serving predictions from those models.
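To make the "allocating Kubernetes resources" phase concrete, the sketch below builds a minimal manifest of the kind Kubeflow's training operator consumes (a `TFJob` in the `kubeflow.org/v1` API group). This is an illustration only, not Kubeflow's own code: the image name, job name, and resource figures are hypothetical placeholders, and a real deployment would submit this manifest via `kubectl` or a Kubernetes client rather than just printing it.

```python
# Sketch: constructing a minimal TFJob-style manifest as a plain dict.
# Image, names, and resource limits are hypothetical placeholders.
import json


def make_tfjob(name: str, image: str, workers: int) -> dict:
    """Build a minimal TFJob manifest (kubeflow.org/v1) as a dict."""
    container = {
        "name": "tensorflow",
        "image": image,
        # CPU/memory limits Kubernetes enforces per worker pod.
        "resources": {"limits": {"cpu": "1", "memory": "2Gi"}},
    }
    worker_spec = {
        "replicas": workers,
        "template": {"spec": {"containers": [container]}},
    }
    return {
        "apiVersion": "kubeflow.org/v1",
        "kind": "TFJob",
        "metadata": {"name": name},
        "spec": {"tfReplicaSpecs": {"Worker": worker_spec}},
    }


manifest = make_tfjob("mnist-train", "gcr.io/example/trainer:latest", workers=2)
print(json.dumps(manifest, indent=2))
```

Expressing the training job declaratively like this is what lets Kubeflow hand scaling and scheduling off to Kubernetes: changing `workers` changes the replica count, and Kubernetes does the rest.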
According to Google, Kubeflow offers isolation, repeatability, resilience, and scale not only for model training and prediction serving but also for research and development. Jupyter notebooks running in Kubeflow can be process- and resource-limited, and can reuse configurations and data sources.
Several Kubeflow components are still under development and are expected to launch in the near future.