Scammers use artificial intelligence to trick people. Here are the best ways to protect yourself

Last updated: 2023/04/15 at 6:43 AM

Earlier this year, Microsoft announced a new artificial intelligence (AI) system that can reproduce a person’s voice after listening to them for just three seconds.

This was a sign that AI could be used to convincingly replicate key parts of someone’s identity.

In one demonstration, a three-second sample of someone’s voice was entered into the system, and the AI, known as VALL-E, then generated audio of that person’s voice saying an entirely new phrase.

Vice reporter Joseph Cox later reported that he had used similar AI technology to access a bank account with an AI-generated replica of his own voice.

In March, Guardian Australia journalist Nick Evershed raised concerns among security experts when he revealed he had been able to use an AI clone of his own voice to access his Centrelink self-service account.

Voice cloning is already being exploited by scammers, but it’s not the only way fraudsters can take advantage of AI.

Here’s how the technology is being used, and how best to protect yourself.

AI can reproduce anyone’s voice

The Guardian’s reporting suggested that the ‘voiceprint’ security system used by Centrelink and the Australian Taxation Office (ATO), which asks callers to say the phrase ‘In Australia, my voice identifies me’, can be fooled by an AI-generated clone of a person’s voice.

It felt like the scene in the 1992 movie Sneakers where Robert Redford’s character recorded someone’s voice to go through a security checkpoint.

Services Australia said in its 2021-22 annual report that voice biometrics were used to authenticate more than 56,000 calls per day, and 39 per cent of calls to Centrelink’s main business numbers. It also said voiceprints were “as secure as fingerprints”.

The ATO says, “It would be very difficult for someone else to imitate your voiceprint and access your personal information.”

Dr. Lisa Given, Professor of Information Sciences at RMIT University, said AI-generated voices could also trick people into believing they are talking to someone they know.

“If the system can reasonably copy my voice and add some empathy, scammers can go from texting ‘Mum, I’ve lost my phone’ to making a phone call or leaving a voicemail that genuinely sounds like that person,” she says.

Last month, the U.S. Federal Trade Commission warned consumers about fake family emergency calls using AI-generated voice clones. The FBI has also warned about virtual kidnapping scams.

These concerns have led experts to suggest some basic tactics that people can use to protect themselves from voice duplication.

  • Call friends and family directly to confirm their identity, or agree on a safe word to say on the phone to confirm a genuine emergency
  • Be wary of unexpected calls, even from people you know, because caller ID numbers can be spoofed
  • Take care when asked to share personally identifiable information such as your address, date of birth or middle name

Mark Gorrie, Asia Pacific managing director at cybersecurity software company Gen Digital, said AI voice generators would keep getting better at fooling both people and security systems.

“For years, ‘robo-scams’ have been easily detected by sound alone,” he says. “But voice-based AI has improved, and the text it uses has definitely improved.”

Artificial intelligence systems are increasingly being used to identify AI-based fraud. (Unsplash: Towfiqu Barbhuiya)

Scammers are tricking people with AI-generated texts and fake product reviews

As AI systems improve, large language models (LLMs) such as the one behind OpenAI’s popular chatbot ChatGPT are getting better at emulating human-like responses. That is what scammers try to replicate in the emails, text messages, and chatbots they create themselves.

“The concepts of empathy and social cues that we as humans use in building relationships are exactly the kind of tricks fraudsters can use and put into their systems,” says Dr. Given.

Scammers are using AI in phishing scams, which typically involve emails or text messages that claim to be from a legitimate source but use social engineering to obtain personal information. Some messages also use links to direct victims to dangerous websites.
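
Scrutinising links before clicking is one practical defence. As a rough, hypothetical illustration (not a tool mentioned by anyone quoted in this article), the short Python sketch below uses only the standard library to flag HTML links whose visible text appears to be a web address on one domain while the underlying href actually points to a different one, a mismatch common in phishing emails.

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    # Flag anchors whose visible text looks like a URL on one domain
    # but whose actual href points somewhere else.
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href")

    def handle_data(self, data):
        text = data.strip()
        if self.current_href and text.startswith(("http://", "https://", "www.")):
            shown = urlparse(text if "//" in text else "https://" + text).netloc
            actual = urlparse(self.current_href).netloc
            if shown and actual and shown != actual:
                self.suspicious.append((text, self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

# Hypothetical example: the displayed address and the real destination differ.
html = '<p>Verify your account at <a href="https://evil.example.net/login">https://my-bank.example.com</a></p>'
checker = LinkChecker()
checker.feed(html)
print(checker.suspicious)  # [('https://my-bank.example.com', 'https://evil.example.net/login')]

In practice, simply hovering over a link to compare the displayed address with the real destination achieves the same check without any code.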

Dr. Given says chatbots and LLMs can be used to make phishing campaigns more convincing by “perfecting the language” and making messages look more personal.

“Phishing emails in the past were filled with typos and implausible details, to the point where people would ask, ‘Where did this email even come from?’ But phishing has surged in recent years, spreading across text scams and to scammers using more platforms,” she says.

Cybersecurity firm Darktrace said it had seen a 135 per cent increase in sophisticated and novel social engineering attacks in the first months of 2023, which it said was in line with the rise of ChatGPT.

“At the same time, we saw a decline in malicious emails containing links and attachments. This trend suggests that generative AI, such as ChatGPT, is providing threat actors with a means to craft sophisticated and targeted attacks at speed and scale,” the company said.

According to Gorrie, Gen Digital expects such scams to keep rising because they can now be generated easily by people with little technical skill.

“Don’t assume you can just look at a message any more and genuinely tell whether it’s real or fake,” he says. “You have to question and think critically about what you see.”

Darktrace Chief Product Officer Max Heinemeyer said the company is also using AI to identify AI-based fraud.

“In a world where AI-assisted attacks are on the rise, humans can no longer be expected to determine the authenticity of the communications they receive. This is now a job for artificial intelligence,” he said.

AI is also being used to post fake product reviews online, but some of the tools designed to find AI-generated content struggle to identify it consistently, Gorrie said.

“It just makes it clear that it’s obviously much harder to detect.”

AI can generate fake product reviews, which scammers use when trying to sell shoddy products. (AP: Jenny Kane)

Crooks can use AI to create malicious computer code to crack passwords

Software engineers and hobbyists have used AI to quickly build apps, websites, and more, but the technology can also be used to generate code for hacking other computers.

“We’ve already seen suggestions on several hacker forums that people without much technical skill, who aren’t familiar with writing malicious code, may be able to use AI to write basic code for malicious purposes,” Gorrie says. “So the barriers to becoming a true hacker are definitely changing.”

AI programs have also been used in attempts to crack passwords, with experts urging people to strengthen their passwords and use two-factor authentication wherever possible.
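
To illustrate what “strengthening” a password can mean in practice, here is a minimal sketch in Python, assuming only the standard library, that generates a random passphrase or a mixed-character password using the secrets module. The short word list and the lengths chosen are illustrative assumptions, not recommendations from the experts quoted in this article.

import secrets
import string

# Illustrative word list only; a real passphrase generator would draw from a
# much larger dictionary (for example, the EFF long word list).
WORDS = ["correct", "horse", "battery", "staple", "kettle", "ribbon", "garnet", "plasma"]

def random_passphrase(num_words=5):
    # secrets.choice draws from a cryptographically secure random source,
    # unlike random.choice, which is predictable.
    return "-".join(secrets.choice(WORDS) for _ in range(num_words))

def random_password(length=16):
    # Mixed-character password built from letters, digits and punctuation.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    print(random_passphrase())  # for example: kettle-staple-garnet-horse-ribbon
    print(random_password())    # a random 16-character string

Long, randomly generated passphrases of this kind are much harder for automated cracking tools, AI-assisted or not, to guess than short, reused passwords, and two-factor authentication adds a second hurdle even if a password does leak.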

Some experts are also wary of AI capabilities that could soon be added to productivity applications such as Google Docs and Microsoft Excel, worried that if scammers or hackers get hold of large amounts of stolen data, they could use those tools to quickly extract whatever is valuable in it.

AI can generate malicious computer code or try to crack people’s passwords. (Unsplash: Chris Reed)

AI will make fraud ‘harder to identify’, Australian regulator says

The Australian Competition and Consumer Commission (ACCC) said it has not received reports of scams specifically pointing to the use of AI, but it recognises that the technology “makes it difficult for the community to identify fraud”.

“With the advent of new technologies, scam schemes are becoming more sophisticated, and (the ACCC’s) Scamwatch is aware of the risks posed by AI,” the spokesperson said.

“We will continue to work with our partners in the telecommunications and digital platform industry to identify ways to detect and stop fraud.

“The community should continue to be wary of requests for personal information or money, and use caution when clicking on hyperlinks.”

Australian businesses try to thwart fraud, but reports suggest low confidence in regulation

Two-thirds of Australians feel there are not enough laws or regulations to protect them from unsafe uses of AI, according to a March report by consultancy KPMG and the Australian Information Industry Association.

Some Australian banks and telecommunications companies say they are already using AI to detect potential fraud and cyber threats within their systems.

According to the Australian Financial Complaints Authority (AFCA), about 400 fraud-related complaints are received each month, up from about 340 in 2021-22.

AFCA chief ombudsman and CEO David Locke said some companies are working together to detect and prevent fraud, but more needs to be done.

“The pervasive and sophisticated nature of fraud means that the industry must be willing to invest in new technologies and have the ability to respond quickly,” he said.

While there are many positive applications of AI, Dr. Given said no one is immune to AI-based fraud.

“I think people have to realize that this is affecting everyone, it doesn’t matter whether you’re a tech expert or a novice,” she says.

“Playing with technology, understanding how it works, reading about it, talking to people and talking to kids is actually very healthy and positive.

“People need to be as critical as they can while understanding that AI is not entirely without risks and problems.”
