Two months after it launched, Stanford researchers shut down Alpaca, their ChatGPT-like chatbot.
This article was initiated by a human, drafted by AI, revised by a human, copy edited by AI, given a final copy edit by a human, and posted by a human. For human benefit. Enjoy!

According to The Stanford Daily, the announcement came less than a week after the model was released.

The source code for Stanford's Alpaca model, which was trained for less than $600, is open to anyone.

The researchers say their model performed about as well as OpenAI's GPT-3.5.

Alpaca was built on Meta AI's LLaMA 7B model and used a method called "self-instruct" to generate its training data.
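The self-instruct idea can be sketched in a few lines. This is an illustrative toy, not the Alpaca team's actual pipeline: a stronger "teacher" model (the team used an OpenAI model) is prompted with a few examples from a small pool of human-written seed tasks and asked to produce new instruction–output pairs, which are added back to the pool. The `teacher_model` stub below is a hypothetical stand-in for a real API call.

```python
import random

# Toy sketch of self-instruct data generation (illustrative names, not
# the Alpaca team's real code).

seed_tasks = [
    {"instruction": "Translate 'hello' to French.", "output": "bonjour"},
    {"instruction": "List three primary colors.", "output": "red, yellow, blue"},
]

def teacher_model(prompt):
    # Stand-in for a call to a strong teacher model; a real pipeline
    # would send `prompt` to an API and parse the generated pair.
    return {"instruction": "Give an antonym of 'hot'.", "output": "cold"}

def self_instruct(seeds, rounds=3):
    pool = list(seeds)
    for _ in range(rounds):
        # Show the teacher a few in-context examples drawn from the pool...
        examples = random.sample(pool, k=min(2, len(pool)))
        prompt = "\n".join(e["instruction"] for e in examples)
        # ...and add its newly generated (instruction, output) pair.
        pool.append(teacher_model(prompt))
    return pool

dataset = self_instruct(seed_tasks)
print(len(dataset))  # 2 seeds + 3 generated examples = 5
```

A real run would also deduplicate and filter the generated pairs before finetuning on them.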

Douwe Kiela, an adjunct professor, said, "As soon as the LLaMA model came out, the race was on."

Kiela, who previously worked as an AI researcher at Facebook, said, "Someone was going to be the first to instruction-finetune the model, and the Alpaca team was the first. It kind of went viral, and that's one of the reasons why."

"It's a really cool, simple idea that they did a great job of carrying out."

"Alpaca" is AI for experts in academia

Tatsunori Hashimoto, an Alpaca researcher in the Computer Science Department, said, "We think the interesting work is in developing methods on top of Alpaca. Since the dataset itself is just a collection of known ideas, we don't have any plans to make more datasets of the same kind or to make the model bigger at the moment."

In their announcement, the researchers said that Alpaca is intended only for academic research and will not be made available to the public any time soon.

Hashimoto said, "The LLaMA base model is trained to predict the next word based on data from the Internet, and instruction-finetuning changes the model so that it gives more weight to completions that follow instructions than to those that don't."

Alpaca's source code is available on GitHub, a site for sharing code. The repository has been viewed 17,500 times, and more than 2,400 people have used the code to build models of their own.

"I think a large part of why Alpaca works as well as it does is LLaMA, so the base language model is still a key bottleneck," Hashimoto said.

As artificial intelligence systems have come into wider use, researchers and practitioners have debated whether companies should open up the source code, training data, and training methods behind AI models, and how transparent the technology should be overall.

Kiela said, "I think one of the safest ways to move forward with this technology is to make sure it is not concentrated in too few hands."

"We need places like Stanford doing cutting-edge research on large language models in the open," Kiela said. "I thought it was very encouraging that Stanford is still one of the major players in this large language model space."
