Panic over new ChatGPT that 'thinks' as expert warns AI can steal all your cash

By Sean Keach

ARTIFICIAL intelligence that can "reason" could spark a terrifying new wave of cash-stealing scams.

A top security expert has told The Sun how new advancements to apps like ChatGPT could be exploited by online crooks.

Artificial intelligence chatbots are everywhere - and they're able to make difficult jobs much easier for millions of people.

This month, OpenAI showed off its new ChatGPT o1 model, which is capable of "thinking" and "reasoning".

"We've developed a new series of AI models designed to spend more time thinking before they respond," OpenAI explained.

"They can reason through complex tasks and solve harder problems than previous models in science, coding, and math."

It's the latest major advancement to ChatGPT since the AI chatbot first launched in late 2022.

And it's available in an early preview for paying ChatGPT members now (but in a limited way).

The Sun spoke to security expert Dr Andrew Bolster, who revealed how this kind of advancement could be a huge win for cyber-criminals.

"Large Language Models (LLMs) continue to improve over time, and OpenAI's release of their 'o1' model is no exception to this trend," said Dr Bolster, of the Synopsys Software Integrity Group, speaking to The Sun.

"Where this generation of LLMs excels is in how they go about appearing to 'reason'.

"Where intermediate steps are done by the overall conversational system to draw out more creative or 'clever' appearing decisions and responses.

"Or, indeed to self-correct before expressing incorrect responses."

He warned this brainy new system could be used for carrying out clever scams.

"In the context of cybersecurity, this would naturally make any conversations with these 'reasoning machines' more challenging for end-users to differentiate from humans," Dr Bolster said.

"Lending their use to romance scammers or other cybercriminals leveraging these tools to reach huge numbers of vulnerable 'marks'."

He warned that they'd be able to carry out lucrative scams cheaply "at scale for a dollar per hundred responses".

"Web users should always be wary of deals that are 'too good to be true'," Dr Bolster told us.

"And [they] should always consult with friends and family members to get a second opinion.

"Especially when someone (or something) on the end of a chat window or even a phone call is trying to pressure you into something."

To combat the new ChatGPT being abused, OpenAI has fitted it out with a whole host of new safety measures.

"As part of developing these new models, we have come up with a new safety training approach that harnesses their reasoning capabilities to make them adhere to safety and alignment guidelines," OpenAI said.

"By being able to reason about our safety rules in context, it can apply them more effectively.

"One way we measure safety is by testing how well our model continues to follow its safety rules if a user tries to bypass them (known as 'jailbreaking').

"On one of our hardest jailbreaking tests, GPT-4o scored 22 (on a scale of 0-100) while our o1-preview model scored 84."

But although it's safer, no AI system is foolproof - so be vigilant when browsing the web so you're ready to spot these costly scams.
