Google last year released TensorFlow, its open source machine learning framework. While the first release was designed to run on a single machine, the project has now been updated to deliver an open source artificial intelligence (AI) solution that runs across multiple machines.
TensorFlow 0.8 is the release that brings the much-anticipated distributed computing support. It uses the gRPC library to run training on "hundreds of machines in parallel."
"Distributed TensorFlow is powered by the high-performance gRPC library, which supports training on hundreds of machines in parallel," Google software engineer Derek Murray says in a statement. "It complements our recent announcement of Google Cloud Machine Learning, which enables you to train and serve your TensorFlow models using the power of the Google Cloud Platform."
The new version of TensorFlow additionally ships with new libraries, including some Python libraries. Moreover, the framework's distributed architecture allows a single-process job to scale up to use a cluster of machines.
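As a rough illustration of how that scaling works in the 0.8-era API, a job is described by a cluster specification that names the parameter-server and worker tasks, and each process then starts its own gRPC server for one of those tasks. The host names below are placeholders, and the `tf.train.ClusterSpec`/`tf.train.Server` calls are a minimal sketch that needs TensorFlow installed (and reachable hosts) to actually run:

```python
# Hypothetical two-worker, one-parameter-server cluster layout for
# TensorFlow 0.8's distributed runtime. Host names are placeholders.
cluster_def = {
    "ps":     ["ps0.example.com:2222"],        # parameter server: holds variables
    "worker": ["worker0.example.com:2222",     # workers: run the training ops
               "worker1.example.com:2222"],
}

try:
    import tensorflow as tf

    # Turn the plain dict into a ClusterSpec shared by every process.
    cluster = tf.train.ClusterSpec(cluster_def)

    # Each process runs exactly one task; here, worker 0 starts its
    # gRPC server so the other tasks can talk to it.
    server = tf.train.Server(cluster, job_name="worker", task_index=0)
except Exception:
    # TensorFlow absent, API changed in a later version, or the example
    # hosts are unreachable; the cluster layout above is still illustrative.
    pass
```

The same single-machine graph code can then be pinned to cluster devices (e.g. `/job:ps/task:0` for variables) without rewriting the model itself, which is what lets a single-process job grow into a cluster job.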
“The current version of distributed computing support in TensorFlow is just the start. We are continuing to research ways of improving the performance of distributed training—both through engineering and algorithmic improvements—and will share these improvements with the community on GitHub,” Murray adds.
TensorFlow applies machine learning techniques to enable AI. It is designed to run on devices ranging from desktops and servers to mobile computing platforms, using CPUs or GPUs.
Google's TensorFlow isn't the only community-driven AI solution. Tech companies such as Facebook, Microsoft, and Yahoo are also actively developing open source offerings built on machine learning algorithms.