Rogue Credit Union Alerts Public To Believable AI Fraud
MEDFORD, Ore. — More fraudsters are using artificial intelligence (AI) to power tactics like distressing voicemails impersonating loved ones supposedly in dire situations and meticulously crafted scam texts that pretend to come from reputable organizations. As technology has rapidly advanced, scams have evolved and become harder to detect.
According to Rogue Credit Union (RCU) officials, scammers are crafting increasingly believable schemes using AI-generated content, from written texts to multimedia posts. AI is now everywhere in the digital age. It has many benefits, transforming industries and extending into creative realms like art generation, and it is poised to bring even more change.
Identifying AI Scams
AI refers to machines capable of performing tasks such as speech recognition, visual perception, decision-making, and much more. But the same capabilities that make AI useful can be harnessed by fraudsters to scam people.
RCU members have already encountered such scams. Some recently received a text message that read, “Rogue Credit Union: There was a new device login attempt or possible login to your Rogue CU account outside the U.S. If you did not initiate this, please secure your account by visiting…”
The message appears authentic at first glance, but on closer examination several red flags emerge. Official correspondence from RCU never refers to Rogue Credit Union as “Rogue CU.” Punctuation errors are another giveaway; these details are subtle, but they are indicative of fraudulent intent.
Today’s technology can make such messages even more convincing. Scammers can capture a few seconds of a person’s voice recording, often from the internet, clone the voice, and use it for fraudulent purposes. Personalized emails, phone calls, and chatbots also use AI techniques to further the scam.
Grammar errors, once a telltale sign of a fraudulent message, are now often cleaned up by AI-powered proofreading tools. Sophisticated fraudsters also use AI to fabricate audio recordings that mimic the voices of trusted individuals and to manipulate images, superimposing the faces of acquaintances onto misleading visuals.
Older People Are More Vulnerable To AI Scams
While younger people tend to be more tech-savvy, they are not immune to increasingly credible AI-driven scams. The Federal Trade Commission’s (FTC) Consumer Sentinel report for 2022 indicated that older Americans reported more than $1.6 billion in losses to fraud and scams, but the actual figure is certainly higher, since many victims don’t report scams for reasons such as embarrassment or shame.
The FTC estimates that older people lost as much as $48.4 billion to scams in 2022, and with AI, the scams are getting worse.
These figures were discussed at a hearing of the Senate Special Committee on Aging held last November. The purpose of the hearing was to examine how AI, by mimicking human behavior, is being used to make scams against older adults more believable.
How To Decrease The Likelihood Of Falling Prey To An AI Scam
In an era dominated by artificial intelligence, it’s paramount to exercise caution with incoming calls, texts, and emails. Never respond to a message in haste, even if it seems to come from a familiar source. Instead, verify the legitimacy of the communication, and if there is any doubt, contact RCU directly at 800.856.7328.
Safeguard yourself by refusing to engage with fraudulent actors, preserving your security and peace of mind in an increasingly AI-immersed world. The Senate Special Committee on Aging offers advice that applies to all ages, including habits people can practice to reduce the chance of being victimized:
- Don’t give out any personal or sensitive information to an online chatbot.
- Don’t send money to unknown recipients.
- Don’t share any sensitive information via email, phone, text, or social media.
- Create a safe word for your family, shared only with family members and close contacts.
- Report possible scams to authorities as well as the companies involved.
Fortunately, government agencies, elected representatives, and organizations like RCU are taking notice, but members of the public must stay alert to reduce the risk of falling victim to AI scams.