The Trust Paradox: Understanding Why Humans Trust AI over Other Humans
Author: Rutuja Gaidhane
Abstract:
In recent years, artificial intelligence (AI) has moved from a futuristic concept to an everyday reality. People rely on AI in navigation apps, online shopping recommendations, digital assistants, and even medical diagnostics. Alongside this adoption, an intriguing psychological phenomenon has emerged: humans often place greater trust in AI systems than in other people, even though AI is created, trained, and monitored by humans themselves. This chapter explores this trust paradox by examining the psychological, social, and technological factors that shape human confidence in AI-generated advice. It highlights how perceived neutrality, emotional distance, and consistent performance can make machines seem more reliable than humans. Real-world examples, such as AI-driven healthcare chatbots and education apps, illustrate how readily people accept algorithmic decisions. The chapter concludes that while AI can enhance decision-making, uncritical dependence on it may weaken human judgment, underscoring the need for awareness and responsible use.
Keywords:
Human Judgment, Psychological Factors, Perceived Objectivity, Emotional Detachment, AI Reliability.