The startup making artificial intelligence chips also aims to promote cooperation.
In an effort to promote greater collaboration, artificial intelligence chip startup Cerebras Systems has released open-source, ChatGPT-like models to the academic and commercial communities. The Silicon Valley-based company published seven models, ranging from smaller language models with 111 million parameters up to larger models with 13 billion parameters, all trained on Andromeda, its AI supercomputer.
“There is a big movement to close what has been open sourced in AI…it’s not surprising as there’s now huge money in it,” said Andrew Feldman, founder and CEO of Cerebras. “The excitement in the community, the progress we’ve made, has been in large part because it’s been so open.”
Models with more parameters can carry out more intricate generative tasks. OpenAI’s chatbot ChatGPT, launched late last year, for example, runs on a model with 175 billion parameters and can compose poetry and research, which has helped draw broad interest and funding to AI more widely.
Cerebras said its smaller models can run on smartphones or smart speakers, while the larger ones require PCs or servers, and complex jobs such as summarising lengthy passages call for the bigger models. Bigger isn’t always better, though, according to Karl Freund, a chip consultant at Cambrian AI.
Feldman said his largest model took only a little more than a week to train, a process that typically takes several months, thanks to the design of the Cerebras system, which is built around a dinner-plate-sized chip made for AI training.
Most AI models today are trained on Nvidia chips, but a growing number of startups, including Cerebras, are trying to win a share of that market. Models built on Cerebras machines, Feldman added, can also be customised or further trained on Nvidia systems.