What if the real risk of AI isn’t deepfakes — but daily whispers?

Most people do not appreciate the serious threat that AI will soon pose to human agency. A common refrain is that "AI is just a tool," and, like any tool, its benefits and dangers depend on how people use it. This is old-fashioned thinking. AI is driving a change in the tools we use: we will soon wear them as prosthetics, and that shift will create significant new threats. We are not ready for this.

No, I'm not talking about scary brain implants. These AI-powered prosthetics will be mainstream products we buy from Amazon or the Apple Store, marketed with friendly names like "assistants," "coaches," "co-pilots" and "tutors." They will provide real value in our lives – so much so that we will feel at a loss if others are wearing them and we are not. This will create pressure for rapid mass adoption.

The prosthetics I am referring to are AI-powered wearable devices such as smart glasses, pendants, pins and earbuds. Your wearable AI will see what you see and hear what you hear, all the while tracking where you are, what you're doing, who you're with and what you're trying to accomplish. Then, without you needing to say a word, these mental aids will whisper advice into your ears or flash guidance in front of your eyes.

The difference between a tool and a prosthesis may seem subtle, but the implications for human agency are profound. It can be best understood through a simple analysis of inputs and outputs. A tool takes human input and produces amplified output: it can make us stronger, faster or allow us to fly. A mental prosthesis, on the other hand, forms a feedback loop around the human, accepting input from the user (by tracking their actions and engaging them in conversation) and producing output that can immediately shape the user's perspective.

This feedback loop changes everything. Body-worn AI devices will be able to monitor our behavior and emotions and use this data to talk us into believing things that are untrue, buying things we don't need or adopting ideas that we would otherwise feel are not in our best interests. This is called the AI manipulation problem, and we are not prepared for the risks. It is an urgent issue, as big tech is racing to bring these products to market.

Why are feedback loops so dangerous?

In today's world, many computing tools are used to deploy targeted influence on behalf of paying sponsors. Wearable AI products will likely continue this trend. The problem is that these devices can easily be given an "influence objective" and tasked with adapting their tactics to optimize their impact on the user, overcoming any resistance they encounter. This changes targeted influence from social-media buckshot into heat-seeking missiles that expertly get past your defenses. And yet, policymakers do not appreciate this risk.

Unfortunately, most regulators still view the threat of AI in terms of its ability to rapidly produce traditional types of influence (deepfakes, fake news, propaganda). Of course, these are significant threats, but they are not as dangerous as the interactive and adaptive influence that could soon be deployed widely through conversational agents, especially as those AI agents travel with us through our lives inside wearable devices.

It's coming soon

Meta, Google and Apple are racing to launch wearable AI products as quickly as possible. To protect the public, policymakers need to abandon their "tool-use" framing when regulating AI. That is hard, because the tool-use metaphor goes back 35 years to when Steve Jobs colorfully described the PC as a "bicycle for the mind." The bicycle is a powerful tool that keeps the rider firmly in control. Wearable AI will turn this metaphor on its head, making us wonder who's driving the bicycle: the humans, the AI agents whispering in their ears, or the corporations deploying those agents? I believe it will be a dangerous mixture of all three.

Furthermore, users will likely trust the AI voices in their ears more than they should. This is because these AI agents will provide us with useful advice and information throughout our daily lives – educating us, reminding us, training us, informing us. The problem is that we may not be able to tell when an AI agent has shifted its purpose from helping us to influencing us. To appreciate the difference, you can watch the award-winning short film Privacy Lost (2023) about the dangers of AI-powered wearable devices. The danger is especially acute when devices include invasive features like facial recognition, which Meta is reportedly adding to its glasses.

What can we do to protect the public?

First and foremost, policymakers need to realize that conversational AI enables a whole new form of media that is interactive, adaptive, personalized and increasingly context-aware. This new form of media will act as a vehicle for "proactive influence," as it can adjust its tactics in real time to overcome user resistance. When deployed in wearable devices, these AI systems can be designed to manipulate our actions, sway our opinions and shape our beliefs, all through seemingly casual conversation. Even worse, these agents will learn over time which persuasion tactics work best on each of us as individuals.

The fact is, conversational agents should not be allowed to form control loops around users. If left unregulated, AI will be able to influence us with superhuman skill. Furthermore, AI agents should be required to inform users whenever they transition to conveying promotional material on behalf of a third party. Without such protections, AI agents are likely to become so persuasive that they make today's targeted influence techniques seem primitive by comparison.

Louis Rosenberg is an augmented reality pioneer and longtime AI researcher. He received a PhD from Stanford, was a professor at California State University, and wrote several books on the dangers of AI, including The Arrival Mind and Our Next Reality.


