Before we debate the consciousness of AI, we must first examine our own awareness of agency and the implications of delegating decision-making to machines.
In an era where artificial intelligence (AI) is becoming increasingly integrated into our daily lives, a pressing question arises: Are we truly aware of our own agency before we hand it over to machines? This inquiry is crucial as we navigate the complexities of technology that seeks to replicate human-like behaviors.
In previous discussions, we have explored the concepts of perception and decision-making as essential components of agency, defined here as acting with intent. We have emphasized the importance of human judgment in areas where algorithms fall short of capturing the full spectrum of human experience.
Today, we delve deeper into the implications of allowing machines to replace our judgment rather than merely inform it. Can machines truly replicate the essence of human agency, which is inherently tied to consciousness? While consciousness involves awareness, agency is about acting with intention. Without agency, consciousness remains inert, like electricity that cannot express its energy without a switch or a bulb. This interplay between agency and consciousness is vital to understanding our relationship with technology.
Mustafa Suleyman, CEO of Microsoft AI, has recently warned that we are on the verge of creating “Seemingly Conscious AI”: systems designed to simulate awareness. While not genuinely conscious, these systems mimic human-like behaviors and responses, raising the question of how conscious we are of our own agency.
Before we appoint machines as our decision-makers, we must first ensure that we are fully aware of our own agency. The stakes are high; we have already transitioned from suggestive AI systems that assist us with predictions to decisive AI that silently solidifies those predictions into decisions. For instance, when we type “I have been meaning to tell you…” and the autocomplete suggests “I love you” or “I miss you,” the machine is not merely completing our thought—it is narrowing it. With generative AI, large language models (LLMs) are not just finishing our sentences; they are writing them entirely.
The implications of this shift are profound. In medical triage, algorithmic scoring systems can determine who receives urgent care. In hiring, automated screening tools can exclude candidates before a human ever reviews their résumé. In the legal field, AI-powered research and drafting increasingly shape which arguments are even considered in court. And in everyday life, autocomplete finishes our sentences, sometimes before we have fully formed our thoughts.
A tool that offers input preserves human agency; a tool that decides for us begins to erode it. Over time, delegation without deliberation becomes abdication of responsibility. Research by Carin Isabel Knoop and her colleagues highlights that psychological vulnerabilities such as the need for recognition, perfectionism, and loneliness make us particularly susceptible to over-dependence on systems that simulate empathy. When a machine’s signals of affirmation replace human connection, we risk outsourcing not only our decisions but also parts of our identity and agency. That loss should concern us deeply.
What makes this moment particularly unsettling is the growing divergence between how machines are trained and how we, as humans, allow our faculties of agency to atrophy. Large language models absorb vast amounts of text, developing a statistical understanding of syntax and meaning that enables them to predict what comes next in a sentence or argument. Vision models analyze extensive image datasets, learning to recognize faces, tumors, and traffic patterns. In essence, these machines are mastering the very skills that define our humanity: language, observation, and prediction.
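To make that predictive mastery concrete, here is a minimal, hypothetical sketch of next-word prediction: a bigram count model built from a toy corpus. Real LLMs learn vastly richer statistics with neural networks; the corpus, counts, and function names below are illustrative assumptions, not any production system.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text an LLM absorbs (illustrative only).
corpus = "i have been meaning to tell you i miss you i love you".split()

# Count how often each word follows each other word: a bigram model,
# a drastically simplified stand-in for an LLM's learned statistics.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str, k: int = 2) -> list[str]:
    """Return the k most frequent continuations of `word` under the counts."""
    return [w for w, _ in following[word].most_common(k)]

# The model "completes our thought" by offering only its top guesses,
# narrowing the space of what we might have said.
print(predict_next("you"))  # ['i']
print(predict_next("i"))    # ['have', 'miss']
```

Even at this toy scale, the pattern is visible: the model offers only its most probable continuations, and everything less likely quietly disappears from view.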
Meanwhile, our own practices of language and observation are diminishing. We often communicate in fragments, relying on emojis instead of nuanced language. We skim headlines rather than engage deeply with content. We substitute quick “likes” for meaningful conversations. In a visual culture, we scroll through images without pausing to observe thoughtfully. We capture experiences on our phones instead of living them, outsourcing our memories to the cloud. As a result, machines are becoming more adept at language and observation, while our own capacities for careful communication and deep observation are declining.
This asymmetry raises an uncomfortable question: Who is the better agent? A machine that learns to perceive patterns across vast datasets, or a distracted human who skims through information? When machines begin to finish our sentences before we even start them, they are not merely predicting; they are preempting us. When they label and categorize the world for us, they subtly dictate what we notice and what we overlook. Agency, in this context, is not just about who makes the final decision; it is also about who notices and learns. Increasingly, the answer appears to be the machines.
If human agency requires the ability to perceive, resist, endure, and decide, then our current trajectory is concerning. Machines are becoming better at perceiving patterns than we are. They do not tire, grow impatient, or skim due to distraction. In contrast, we often sacrifice endurance for convenience, resistance for comfort, and decision-making for ease. This divergence does not imply that machines are conscious, but it does suggest that they are practicing, at scale, the habits that once distinguished human agency. It is crucial that we reclaim those habits, or we risk becoming mere spectators in our own lives.
Agency begins with perception. Attention is not just passive input; it is selective, contextual, and shaped by values. A physician who skims an algorithmic alert may see the same vital signs but miss the nuances of a patient’s pain story. A recruiter relying on a ranking score may overlook true potential in a résumé. When AI filters what we see, it alters what we perceive.
Agency also requires resistance. Companies design interfaces to nudge us, and algorithms steer us toward familiar choices. An effective agent must resist these nudges when they conflict with broader goals. Maintaining skepticism, interrogating incentives, and recognizing manipulation are critical skills, much like resisting the urge to keep scrolling on social media.
Endurance is another essential quality of agency. Decisions often require patience, tolerance for uncertainty, and the willingness to accept delayed or costly outcomes. Machines optimize for immediate results but do not face the reputational or ethical consequences of poor decisions, unlike humans who must navigate the complexities of real-life situations.
Finally, agency culminates in the responsibility of aligning values with actions. Machines can present options and rank them, but they cannot bear moral consequences. When thinking is delegated, moral responsibility can evaporate. Who is accountable when a triage bot denies care? Who bears the burden when a hiring model excludes candidates based on biased proxies? If we surrender decision-making to systems we do not understand or supervise, we erode the possibility of moral agency.
The importance of agency is not merely a contemporary concern. Historical texts emphasize its significance. The biblical phrase, “Choose you this day whom ye will serve” (Joshua 24:15), underscores that the act of choosing is central to human dignity. Philosophers like Jean-Paul Sartre have long argued that humans are “condemned to be free,” meaning that even in uncertainty, we cannot escape the burden of decision. The ancient Indian philosophy of Vedanta further explores the nature of the chooser, framing agency as a path to self-realization. The convergence of scripture, philosophy, and Vedanta reveals a profound truth: agency is the essence of what it means to be human.
We are psychologically predisposed to accept delegation: thinking less feels easier, which is part of why AI is so appealing. However, the solution is not to reject AI but to design it in ways that preserve and enhance human agency. Systems should incorporate friction that encourages reflection rather than nudging users toward default acceptance, as sketched below. They should make their limitations and uncertainties visible, ensuring users understand the implications of their recommendations. Ultimately, consequential choices should remain with humans who are accountable, rather than diffusing responsibility into opaque processes.
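As one illustration of what agency-preserving friction could look like, here is a hypothetical sketch of a wrapper that never auto-applies a model's recommendation: it surfaces the model's confidence and rationale, then requires an explicit human decision. The class, fields, and prompts are assumptions made for illustration, not a real system's API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the system proposes
    confidence: float  # the model's uncertainty, surfaced rather than hidden
    rationale: str     # the "why", in terms a human can interrogate

def decide_with_human(rec: Recommendation) -> str:
    """Deliberate friction: never auto-apply; show limits and ask.

    A hypothetical pattern, not a real library API.
    """
    print(f"Proposed action : {rec.action}")
    print(f"Model confidence: {rec.confidence:.0%} (may be miscalibrated)")
    print(f"Rationale       : {rec.rationale}")
    choice = input("Accept, modify, or reject? [a/m/r] ").strip().lower()
    if choice == "a":
        return rec.action
    if choice == "m":
        return input("Enter your own decision: ").strip()
    return "rejected"  # the human, not the default, decides

# Usage: the consequential choice stays with an accountable person.
# decide_with_human(Recommendation("deprioritize candidate", 0.62,
#                                  "résumé lacks keyword matches"))
```

The design choice is the point: acceptance is not the default path, so the human must actively choose, and accountability stays where it belongs.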
The deeper issue lies in whether we, as individual and collective agents, are aware of our responsibilities. To perceive is to be present. To resist is to guard the self. To endure is to remain committed. To decide is to accept consequences.
By sharpening our capabilities through thoughtful design, policy, training, culture, and responsible use, we can create AI that augments human agency rather than replacing it. This approach allows us to harness technology’s benefits without relinquishing the core of what it means to be responsible beings: the capacity to act, care, and take responsibility for our choices.
As you consider your reliance on technology, ask yourself: Am I using this tool to amplify my agency or to abdicate it? Machines may predict and autocomplete our futures, but we must remain the ones who choose them.