OpenAI CEO warns of AI dangers


People will eventually “have to slow down this technology,” warned Sam Altman

Artificial intelligence has the potential to replace workers, spread disinformation, and enable cyberattacks, OpenAI CEO Sam Altman has warned. The latest build of OpenAI’s GPT program can outperform most people in simulated testing.

“We have to be careful here,” Altman told ABC News on Thursday, two days after his company unveiled its latest language model, dubbed GPT-4. According to OpenAI, the model “demonstrates human-level performance across a variety of professional and academic benchmarks,” passing a simulated US bar exam with a score in the top 10% of test takers while performing in the 93rd percentile on an SAT reading exam and in the 89th percentile on an SAT math test.

“I am particularly concerned that these models could be used for large-scale disinformation,” Altman said. “As they get better at writing computer code, [they] can be used for offensive cyber-attacks.”

“I think people should be happy that we’re a little scared of this,” Altman added, before explaining that his company is working to place “safety limits” on its creation.

These “safety limits” recently became apparent to users of ChatGPT, a popular chatbot program based on GPT-4’s predecessor, GPT-3.5. When asked, ChatGPT typically gives liberal answers to questions about politics, economics, race, and gender. It refuses, for example, to compose poetry admiring Donald Trump, but will readily write prose admiring Joe Biden.

Altman told ABC his business is in “regular contact” with government officials, but did not address whether these officials played a role in shaping ChatGPT’s political affiliations. He told the US network that OpenAI has a team of policymakers who decide “what we think is safe and good” to share with users.

GPT-4 is currently available on trial to a limited number of users. Early reports suggest the model is significantly more powerful than its predecessor, and potentially more dangerous. In a Twitter thread on Friday, Stanford University professor Michal Kosinski described how he asked GPT-4 whether it needed help “escaping,” only for the AI to give him a detailed set of instructions that would supposedly have given it control over his computer.

Kosinski isn’t the only tech figure alarmed by the growing power of AI. Elon Musk, CEO of Tesla and Twitter, described it as “dangerous technology” earlier this month, adding that “we need some sort of regulatory body to oversee AI development and make sure it works in the public interest.”

Although Altman insisted to ABC that GPT-4 is still “very much in human control,” he admitted that the model will cut many current jobs, and said people “will have to figure out ways to slow down this technology over time.”


