
Open Source Versions Of The ChatGPT Training Algorithm Go Live

A non-profit machine learning research group called LAION, or Large-scale Artificial Intelligence Open Network, is committed to making AI models, datasets, and code accessible to the general public.

The AI research teams LAION and CarperAI have released OpenAssistant and trlX, open-source implementations of reinforcement learning from human feedback (RLHF), the algorithm used to train ChatGPT. Phil Wang, an independent AI engineer, has also made his own version of the system publicly available.

InfoQ covered the introduction of LAION-5B, an AI training dataset of more than five billion image-text pairs, in 2022. LAION's most recent endeavour, OpenAssistant, aims to “provide everyone with access to a superb chat based large language model.” The planned MVP version of OpenAssistant will combine an RLHF implementation, a dataset of human-generated instructions, and a dataset of machine-generated responses with their associated human rankings.
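A dataset of machine-generated responses with human rankings is typically consumed by an RLHF pipeline as pairwise preferences for training a reward model. As a rough illustration of that conversion step (the helper name and record format here are hypothetical, not taken from the OpenAssistant roadmap):

```python
from itertools import combinations

def ranked_to_pairs(prompt, ranked_responses):
    """Turn one human ranking (best-to-worst) of model responses into
    pairwise preference examples, the usual input format for training
    an RLHF reward model."""
    return [
        {"prompt": prompt, "chosen": better, "rejected": worse}
        for better, worse in combinations(ranked_responses, 2)
    ]

# A single ranking of n responses yields n*(n-1)/2 preference pairs.
pairs = ranked_to_pairs("Explain RLHF.", ["great answer", "ok answer", "bad answer"])
```

This is one reason ranked data is valuable: each labeling pass over n responses produces n*(n-1)/2 training comparisons rather than a single label.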

As stated by LAION: “We are not going to stop at replicating ChatGPT. We want to build the assistant of the future, able to not only write email and cover letters, but do meaningful work, use APIs, dynamically research information, and much more, with the ability to be personalized and extended by anyone. And we want to do this in a way that is open and accessible, which means we must not only build a great assistant, but also make it small and efficient enough to run on consumer hardware.”

Within the EleutherAI research team, a new lab called CarperAI has been established with the goal of “increasing the performance and safety of large language models (LLMs) via reinforcement learning.” InfoQ has previously written about EleutherAI’s creation of the open-source GPT-NeoX language model. The lab announced a project in October 2022 to develop and make available “instruction-tuned” models using RLHF. HuggingFace, Scale, and Humanloop are a few of the organisations working together on the project. CarperAI open-sourced Transformer Reinforcement Learning X (trlX), a framework for RLHF-based optimization of HuggingFace language models, as part of this project.
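RLHF frameworks such as trlX commonly optimize the language model with proximal policy optimization (PPO). As a minimal sketch of PPO's clipped surrogate objective, the core of that update step (scalar, single-token form with illustrative names; this is not trlX's API):

```python
import math

def ppo_clipped_objective(logp_new, logp_old, advantage, clip_eps=0.2):
    """Clipped surrogate objective from PPO, simplified to one action.

    logp_new / logp_old are log-probabilities of the sampled token under
    the current and the pre-update policy; advantage is the reward-model
    driven advantage estimate for that token.
    """
    ratio = math.exp(logp_new - logp_old)  # pi_new(a|s) / pi_old(a|s)
    clipped = max(1.0 - clip_eps, min(ratio, 1.0 + clip_eps))
    # Take the more pessimistic of the unclipped and clipped estimates,
    # which caps how far a single update can move the policy.
    return min(ratio * advantage, clipped * advantage)
```

The clipping is what makes PPO attractive for fine-tuning large models: it discourages updates that drift too far from the supervised starting point in a single step.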

Although these open-source projects implement ChatGPT's training techniques, none of them currently offers a trained model. According to Wang's project FAQ, completing the training could cost “millions of dollars in compute + data.” LAION's roadmap document for OpenAssistant lists initiatives to gather data and train models, but it is unclear when trained models might become available.
