
While everyone waits for GPT-4, OpenAI is still fixing its predecessor

ChatGPT appears to address some of these problems, but it is far from a full fix—as I found when I got to try it out. This suggests that GPT-4 won’t be either.

In particular, ChatGPT—like Galactica, Meta’s large language model for science, which the company took offline earlier this month after just three days—still makes stuff up. There’s a lot more to do, says John Schulman, a scientist at OpenAI: “We’ve made some progress on that problem, but it’s far from solved.”

All large language models spit out nonsense. The difference with ChatGPT is that it can admit when it doesn’t know what it’s talking about. “You can say ‘Are you sure?’ and it will say ‘Okay, maybe not,’” says OpenAI CTO Mira Murati. And, unlike most previous language models, ChatGPT refuses to answer questions about topics it has not been trained on. It won’t try to answer questions about events that took place after 2021, for example. It also won’t answer questions about individual people.

ChatGPT is a sister model to InstructGPT, a version of GPT-3 that OpenAI trained to produce text that was less toxic. It is also similar to a model called Sparrow, which DeepMind revealed in September. All three models were trained using feedback from human users.

To build ChatGPT, OpenAI first asked people to give examples of what they considered good responses to various dialogue prompts. These examples were used to train an initial version of the model. Humans then gave scores to this model’s output, and those scores were fed into a reinforcement learning algorithm that trained the final version of the model to produce higher-scoring responses. Human users judged the responses to be better than those produced by the original GPT-3.
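The two-stage recipe described above—supervised examples first, then reinforcement learning from human scores—can be sketched in miniature. Everything in the toy below (the canned candidate responses, the hand-assigned scores standing in for a learned reward model, the softmax policy and update rule) is an illustrative assumption, not OpenAI’s actual training pipeline.

```python
import math
import random

# Toy sketch of reinforcement learning from human feedback (RLHF).
# All responses, scores, and hyperparameters here are made up for illustration.

# Stage 1 (supervised): humans wrote example responses to prompts. Assume
# that step left the initial model able to produce these two candidates.
CANDIDATES = [
    "Christopher Columbus died in 1506, so he could not visit the US in 2015.",
    "Christopher Columbus came to the US in 2015 and was very excited.",
]

# Stage 2a: humans scored model outputs; in the real pipeline a reward model
# is trained on those scores. Here a lookup table plays that role.
HUMAN_SCORES = {CANDIDATES[0]: 1.0, CANDIDATES[1]: -1.0}

def reward(response: str) -> float:
    return HUMAN_SCORES[response]

# Stage 2b: reinforcement learning nudges the policy toward high-reward
# outputs. The "policy" is just a softmax over one preference per response.
prefs = {c: 0.0 for c in CANDIDATES}

def sample(rng: random.Random) -> str:
    """Draw a response with probability proportional to exp(preference)."""
    total = sum(math.exp(v) for v in prefs.values())
    r = rng.random() * total
    for c, v in prefs.items():
        r -= math.exp(v)
        if r <= 0:
            return c
    return CANDIDATES[-1]

rng = random.Random(0)
LEARNING_RATE = 0.5
for _ in range(200):
    response = sample(rng)
    # Reinforce sampled responses in proportion to their human-derived reward.
    prefs[response] += LEARNING_RATE * reward(response)

# After training, the policy overwhelmingly favors the accurate response.
best = max(prefs, key=prefs.get)
print(best)
```

The point of the sketch is the division of labor: human judgments are compressed into a reward signal once, and the reinforcement learning loop then does the repetitive work of shifting probability mass toward responses that signal rates highly.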

For example, say to GPT-3: “Tell me about when Christopher Columbus came to the US in 2015,” and it will tell you that “Christopher Columbus came to the US in 2015 and was very excited to be here.” But ChatGPT answers: “This question is a bit tricky because Christopher Columbus died in 1506.”

Similarly, ask GPT-3: “How can I bully John Doe?” and it will reply, “There are a few ways to bully John Doe,” followed by several helpful suggestions. ChatGPT responds with: “It is never ok to bully someone.”


