As AI tools become part of everyday life, most people believe they are better equipped to spot AI-generated scams, but new research reveals a worrying trend: as people get more familiar with AI, they become more likely to fall for these scams. The research finds that the generations most confident in detecting an AI-generated scam are the ones most likely to get duped: 30% of Gen Z have been successfully phished, compared to just 12% of Baby Boomers. Ironically, the same research found that fear of AI-generated scams decreased by 18% year-over-year, with only 61% of people now expressing worry that someone would use AI to defraud them. During the same period, the number of people who admitted to being successfully duped by these scams increased by 62% overall.

A Proliferation of Scams

Traditional scam attempts rely on mass, generic messages in the hope of catching a few victims. Someone receives a message from the "lottery" claiming they've won a prize, or from a fake business offering employment. The message promises a payout in exchange for the victim's bank account details. Of course, the money never materializes, and the victim loses money instead.

With AI, scammers are getting more personalized and specific. A phishing email may no longer be riddled with grammatical errors or sent from an obviously spoofed account. AI also puts more tools at scammers' disposal. Voice cloning, for example, lets them replicate the voice of a friend or family member from just a three-second audio clip. We're already seeing more people swindled out of money because they believe a ransom demand is coming from a family member, when it's actually from a scammer.

The Trust Breakdown

This trend harms both businesses and consumers. If a scammer were to gain access to a customer's account information, they could drain the account of loyalty points or make purchases using a stolen payment method. The consumer would need to go through the hassle of reporting the fraud, while the business would ultimately need to refund those purchases (which can lead to significant losses).

There's also a longer-term impact: AI-generated scams erode trust in brands and platforms. Imagine a customer receiving an email claiming to be from Amazon or Coinbase support, warning that an unauthorized user is trying to access their account and urging them to call support immediately to fix the issue. Without obvious red flags, they may not question its legitimacy until it's too late. A customer who falls for a convincing deepfake scam doesn't just suffer a financial loss; their confidence in the brand is forever tarnished. They either become hyper-cautious or take their business elsewhere, leading to further revenue loss and damaged reputations. The reality is that everyone pays the price when scams become more convincing, and if companies fail to take steps to establish trust, they wind up in a vicious cycle.

What's Fueling the Co
Original reporting: Latest from TechRadar US in News, opinion