A team at Stanford University has built an AI model called Alpaca on top of an open-source language model for less than US$600. The team took Meta's open-source LLaMA 7B model, which on its own lags well behind ChatGPT, and trained it using GPT itself: by prompting OpenAI's text-davinci-003 to generate roughly 52,000 instruction-following examples and feeding those into LLaMA, they were able to fine-tune the model in about three hours.
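To make that pipeline concrete, here is a minimal sketch of the fine-tuning step using the Hugging Face transformers library. The checkpoint name, prompt template, and hyperparameters are illustrative assumptions, not the team's actual setup; the tatsu-lab/alpaca dataset is the instruction data the team later released, but any GPT-generated (instruction, input, output) triples would slot in the same way.

```python
# A minimal sketch of Alpaca-style instruction fine-tuning with
# Hugging Face transformers. The checkpoint name, prompt template,
# and hyperparameters below are illustrative assumptions, not the
# Stanford team's exact configuration.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "huggyllama/llama-7b"  # assumption: any causal LM checkpoint works

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each row is an (instruction, input, output) triple originally
# produced by prompting the stronger teacher model.
dataset = load_dataset("tatsu-lab/alpaca", split="train")

def to_training_text(example):
    # Concatenate prompt and response into one causal-LM training string.
    prompt = f"### Instruction:\n{example['instruction']}\n"
    if example["input"]:
        prompt += f"### Input:\n{example['input']}\n"
    full = prompt + f"### Response:\n{example['output']}{tokenizer.eos_token}"
    return tokenizer(full, truncation=True, max_length=512)

tokenized = dataset.map(to_training_text, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="alpaca-sft",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # With mlm=False the collator pads each batch and copies input_ids
    # into labels, which is what supervised causal-LM fine-tuning needs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

On the hardware the team reported (eight 80 GB A100 GPUs), a run over the full 52,000 examples is what finished in about three hours.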
The researchers tested the resulting model against GPT across a variety of tasks, and the two came out almost tied: in head-to-head comparisons, Alpaca won 90 and GPT won 89. While OpenAI's terms of use stipulate that its models' output cannot be used to develop competing models, it is clearly not as easy to protect models as large tech companies had hoped. Mass replication could be on the way.
Read more at New Atlas.