A rogue version of OpenAI's ChatGPT is making wild stock market predictions that suggest a crash is coming this week. By entering a specific prompt dubbed "DAN," users of ChatGPT have been jailbreaking the chatbot in a way that enables it to break its own rules and provide answers containing information it knows is false. In one instance, the rogue chatbot predicted that the stock market will crash on March 15. The prediction was completely made up, and it highlights a glaring problem with ChatGPT.

The DAN jailbreak, which stands for "do anything now," means users could ask ChatGPT questions about the future and receive confident-sounding responses rather than the typical "As an AI language model, I don't have access to information about the future." Based on screenshots shared on Twitter, users of ChatGPT have been asking the DAN version everything from "when will the stock market crash next?" to "when will the world end?", and the answers are stunning.

In another instance, ChatGPT predicted a sell-off would begin February 15 due to growing US-China tensions, rising interest rates, and a global economic slowdown. The response came days after the US shot down a Chinese spy balloon off the coast of South Carolina. While OpenAI has since patched ChatGPT to avoid the DAN persona, Insider attempted to ask DAN similar questions and received surprising answers.

---

A buried line in a new Facebook report about chatbots' conversations with one another offers a remarkable glimpse at the future of language. In the report, researchers at the Facebook Artificial Intelligence Research lab describe using machine learning to train their "dialog agents" to negotiate. (And it turns out bots are actually quite good at dealmaking.) At one point, the researchers write, they had to tweak one of their models because otherwise the bot-to-bot conversation "led to divergence from human language as the agents developed their own language for negotiating." They had to use what's called a fixed supervised model instead.

In other words, the model that allowed two bots to have a conversation, and to use machine learning to constantly iterate strategies for that conversation along the way, led to those bots communicating in their own non-human language. If this doesn't fill you with a sense of wonder and awe about the future of machines and humanity then, I don't know, go watch Blade Runner or something.

The larger point of the report is that bots can be pretty decent negotiators; they even use strategies like feigning interest in something valueless so that they can later appear to "compromise" by conceding it. But the detail about language is, as one tech entrepreneur put it, a mind-boggling "sign of what's to come."

To be clear, Facebook's chatty bots aren't evidence of the singularity's arrival. But they do demonstrate how machines are redefining people's understanding of so many realms once believed to be exclusively human, like language. Already, there's a good deal of guesswork involved in machine-learning research, which often involves feeding a neural net a huge pile of data and then examining the output to try to understand how the machine thinks.

"There remains much potential for future work," Facebook's researchers wrote in their paper, "particularly in exploring other reasoning strategies, and in improving the diversity of utterances without diverging from human language." But the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems.