If you’d told a room full of risk-averse insurance executives five years ago that nearly half of UK consumers would soon welcome health advice from AI, you’d have been met with serious skepticism, if not outright laughter. Our latest report shows that 49% of UK respondents would take health recommendations from AI, with 36% open to financial advice and 40% willing to accept insurance suggestions. The shift is a wake-up call.

To be clear, I’m not advocating for AI to replace doctors, advisers or brokers. But what the research has uncovered is more interesting: consumers are already crossing that line themselves. The trust threshold, that digital Rubicon, has been crossed. And it raises urgent questions for brands and regulators alike. If people are prepared to make high-stakes decisions based on machine recommendations and AI tools, organizations must rethink how they communicate, with transparency, accountability and humanity front and center.

A shift in expectation

Consumer behavior change in the last decade has been relentless, with high expectations set by the frictionless experiences of eCommerce giants and streaming platforms. Now, even in traditionally cautious sectors like healthcare and finance, people want the same speed, ease and responsiveness. And they expect the communications to match the seamlessness and immediacy of the service. In other words: the way organizations communicate is now as important as what they deliver.

Research shows that nearly seven in ten insurance consumers would walk away from a brand if its communications fell short. That number has risen sharply, from 51% in 2023 to 67% in 2025. The message is loud and clear: it’s how you talk to people that matters.

That includes AI. In 2024, 77% of consumers wanted clear disclosure when AI was involved in customer communications. A year later, that figure has plunged to just 37%. On the surface, it looks like comfort is growing. But that drop says more about shifting expectations than confidence: consumers are getting used to AI being part of the conversation, but they still want reassurance that it’s being used responsibly.

Consider that just under half of UK consumers say they’re actually willing to trust AI, and in financial services nearly half of those respondents say AI-generated content should always be checked by a human. That’s why clarity, control and human oversight are non-negotiable, for both compliance and trust in a rapidly evolving landscape.

Where trust gets tested

So how does this play out in reality? This is where many organizations slip up. Not by using AI, but by using it in a way that feels impersonal: generic messaging, disconnected channels and clunky digital journeys. These things erode trust at the exact moment customers are willing to place more of it in your hands. So, what can you do to grow trust?

From communication to conversation

The brands that will thrive in this AI-enabled future will be the ones who stay human, who treat communication as a conversation.
Original reporting: TechRadar US (News & Opinion)