By Susam Pal on 12 January 2026
Introduction
Since the launch of ChatGPT in November 2022, generative artificial intelligence (AI) chatbot services have become increasingly sophisticated and popular. These systems are now embedded in search engines, software development tools, and office software. For many people, they have quickly become part of everyday computing.
These services have proven quite useful, especially for exploring unfamiliar topics and as a general productivity aid. However, I also think that the way these services are advertised and consumed can pose a threat, especially if we get into the habit of trusting their output without checking it.
The Trap
Some design choices in modern AI systems may encourage uncritical acceptance of their output. For example, many popular search engines now highlight AI-generated answers at the top of the results page. When this happens, it is easy to stop scrolling, accept the generated answer, and move on. Over time, this may inadvertently train users to treat AI as the default authority rather than as a starting point for further investigation. I wish each such generative AI service came with a brief but clear warning stating that these systems may sometimes produce output that is factually incorrect, misleading, or incomplete. Such warnings should emphasize that habitually trusting AI output can be dangerous. In my experience, even when such warnings exist, they are minimal and given little visual prominence.
In science fiction, Isaac Asimov formulated the Three Laws of Robotics, which recur throughout his work. These laws were designed to constrain the behavior of robots in order to keep humans safe. As far as I know, Asimov never formulated any equivalent laws that dictate how humans should interact with robots. I think we now need something to that effect to keep ourselves safe. I will call them the Inverse Laws of Robotics. They apply to any situation that requires us humans to interact with a robot, where the term ‘robot’ refers to any machine, computer program, software service, or AI system that is capable of performing complex tasks automatically. I am using the word ‘inverse’ here not in the sense of logical negation but to indicate that these rules apply to humans rather than robots.
Inverse Laws of Robotics
Here are three inverse laws of robotics:
- Humans should not anthropomorphize AI systems.
- Humans should not blindly trust the output of AI systems.
- Humans must remain fully responsible and accountable for the consequences arising from the use of AI systems.
Non-Anthropomorphism
Humans should not anthropomorphize AI systems. That is, humans should not attribute emotions, intentions, or moral agency to AI systems. Anthropomorphism distorts judgment. In extreme cases, it can lead to emotional dependency.
Modern chatbot systems often seem conversational and empathetic. They use polite phrases and social signals that closely resemble human interaction. Although this makes them easier and more pleasant to use, it also makes it easy to forget what they really are: large statistical models that produce plausible text based on patterns in their training data.
I think vendors of AI-based chatbot services can do a better job here. In many cases, these systems are deliberately designed to feel more human rather than more mechanical. I would argue that the opposite approach would be healthier in the long run. A slightly more robotic tone would reduce the likelihood that users mistake fluent language for comprehension, decision-making, or intent.
Whether or not vendors make such changes, it is still the responsibility of users to avoid this trap. We should actively avoid the habit of treating AI systems as social actors or moral agents. By doing so, we remain clear about what these systems can and cannot do.
Non-Deference
Humans should not blindly trust the output of AI systems. AI-generated content should not be considered authoritative without independent verification of its correctness.
This principle is not unique to AI. In most areas of life, we should not accept information uncritically. Of course, in practice this is not always possible. Not everyone is an expert in medicine or law, so we often rely on trusted institutions and public health officials for guidance. However, guidance published by such institutions is in most cases reviewed by experts in their fields. On the other hand, when we receive a response to a question from an AI chatbot in a private chat session, the response generated specifically for us has undergone no peer review. Therefore, the onus falls on us to examine the response critically.
Although AI systems have become quite effective at certain tasks today, they are still known to produce output that it would be a mistake to trust. Even if AI systems improve to the point of producing reliable output with a high degree of probability, their inherent stochastic nature means there will always be a small probability of producing output that contains errors. This makes them particularly dangerous when used in contexts where errors are subtle but costly. The more serious the potential consequences, the higher the burden of verification should be.
In some applications, such as preparing mathematical proofs or developing software, we can add an automated validation layer in the form of proof checkers or unit tests to verify the output of the AI. In other cases, we have to verify the output independently ourselves.
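To make this concrete, here is a minimal sketch in Python of what such a validation layer might look like for AI-generated code. The median function below stands in for a hypothetical AI-suggested utility, and the unit tests are the human-written checks that gate its acceptance. Everything in this example, including the function and test names, is illustrative rather than drawn from any particular tool.

```python
import unittest

# Hypothetical AI-generated function, pasted verbatim for review.
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Human-written tests that act as the validation layer.
class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length(self):
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

    def test_single_element(self):
        self.assertEqual(median([7]), 7)

    def test_input_is_not_mutated(self):
        values = [5, 1, 4]
        median(values)
        self.assertEqual(values, [5, 1, 4])

if __name__ == "__main__":
    unittest.main()
```

If the generated function fails any of these tests, we reject or repair it. The point of this arrangement is that acceptance is decided by checks we control, not by how plausible the generated code looks.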
Non-Shirking of Responsibility
Humans must remain fully responsible for decisions involving AI and accountable for the consequences arising from its use. If there is a negative outcome as a result of following AI-generated advice or decisions, it is not enough to say, ‘the AI told us to do this’. AI systems do not choose their own objectives, deploy themselves, or bear the costs of failure. Human beings and organizations do. An AI system is a tool and, like any other tool, the responsibility for its use lies with those who decide to rely on it.
However, this is easier said than done. It becomes especially difficult in real-time applications such as self-driving cars, where humans do not have the opportunity to adequately review decisions made by AI systems before acting. Requiring a human driver to be constantly alert does not solve the problem, because such systems often act faster than a human can intervene. Despite this serious limitation, we must acknowledge that when an AI system fails in such applications, the responsibility for investigating the failure and adding guardrails must still fall on the humans responsible for the design of the system.
In all other cases, where there is no physical barrier that prevents a human from reviewing the AI output before acting on it, any negative consequences arising from the use of AI should fall solely on the human decision-maker. As a general principle, we should never accept ‘the AI told us so’ as an excuse for harmful outcomes. Yes, the AI may have offered the recommendation, but a human decided to follow it, so it is that human who must be held accountable. It is vital to prevent indiscriminate use of AI in situations where irresponsible use could cause significant harm.
Conclusion
The three laws above are based on usage patterns I have observed and that I believe are harmful to society. I hope that with these three simple laws, we can encourage our fellow humans to pause and consider how they interact with modern AI systems, resist habits that impair judgment or blur responsibility, and remain mindful that AI is a tool we choose to use, not an authority we defer to.