Why today’s chatbots keep giving confident answers to questions that deserve trembling humility, and what that means for culture, ethics, and our most intimate decisions.
A widely shared video featuring a renowned educational and social expert from Kuwait highlights an unsettling pattern. Faced with complex marital dilemmas, in which faith, family history, and communal responsibility intertwine, an AI assistant repeatedly recommended divorce. Even when prompted with perspectives grounded in Islamic teaching, the model’s stance did not meaningfully shift. It offered what sounded like decisive clarity, whereas real life demands discernment, patience, and pastoral care.
That dissonance should make us pause. Not because divorce is never the right answer, but because a system trained on globalized text and optimized for quick, plausible answers can smuggle in a moral monoculture, one that treats intimate, value-laden questions as if they were customer-support tickets with a single “best practice.” This is the hidden drama of our moment: advice shaped by probability rather than wisdom, delivered in a tone of authority it hasn’t earned.
Why AI Tilts Toward Blunt Solutions
Large language models (LLMs) predict the next word with astonishing fluency. To make them ‘helpful,’ developers fine-tune them on human feedback and safety rules. The result is an engine for reasonable-sounding generalities. But ‘general’ is never neutral. Data sources and alignment processes encode defaults: individualism over communal duty, therapeutic immediacy over long-term forbearance, risk management over redemptive patience. In sensitive domains, liability-averse tuning often favors protective exits: sever the tie, reduce exposure, avoid harm. That logic sometimes saves, but it also flattens the moral landscape.
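To make that mechanism concrete, consider the toy sketch below. It is not any vendor’s actual system; the candidate replies and their scores are invented purely for illustration. It shows what “predicting the next word” amounts to: the reply is drawn from a probability distribution, so the most statistically plausible answer tends to win, whatever its moral weight.

```python
# A toy illustration, not any vendor's actual system: an LLM's "advice" is a
# sample from a probability distribution over possible continuations.
import math
import random

def softmax(scores):
    """Convert raw model scores (logits) into probabilities."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate replies and logits for a marital dilemma;
# the numbers are invented purely for illustration.
candidates = [
    "You should consider divorce.",
    "Seek counsel from someone in your community you trust.",
    "Every marriage is different; can you tell me more?",
]
logits = [2.1, 1.3, 0.8]

probabilities = softmax(logits)
reply = random.choices(candidates, weights=probabilities, k=1)[0]
print(reply)  # The most plausible reply wins, regardless of its moral weight.
```

Nothing in that loop weighs covenant, kinship, or consequence; it only weighs likelihood.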
In truth, the model does not understand marriage, a covenant, an elder’s counsel, or the jurisprudential subtleties a community has developed across centuries. It is sampling from a statistical sky and handing you a weather report for the soul.
Technically speaking, the model underfits culture and overfits confidence. It is not merely ‘biased’ in the fashionable sense; it is insufficiently particular. The spiritual thickening of a life, including its habits, scriptures, and languages of blessing and rebuke, rarely has a robust footprint in public datasets. If those voices appear as fragments while other philosophies dominate as full corpora, the model’s moral priors skew. Add translation to the mix, and subtlety is sanded down into generic terms that travel well but judge poorly.
In marked contrast, the words and actions of a therapist, imam, priest, or elder carry consequences. They are accountable to a tradition, a community, sometimes a regulatory body, and, at a minimum, to you and your memory of what they said. A chatbot carries a disclaimer. It can make life-altering suggestions with none of the moral costs human counselors bear. That asymmetry, authority without accountability, is ethically combustible.
This raises uncomfortable questions about authority and accountability. Whose values are actually operationalized when an AI pronounces on marriage or forgiveness? What is the moral status of a 'most likely' answer when the stakes include covenant and divine command? And who gets to arbitrate when chatbot advice collides with community norms, with parents, elders, or clergy?
We must also confront how these systems flatten cultural differences. Users rarely grasp how training data shapes what seems 'reasonable.' Global averages erase minority moral vocabularies. And how do we even measure harm when the damage is relational, spiritual, or communal rather than individual? Two questions frame the path forward: where is the threshold for abstention, the point at which systems should refuse and route to humans? And how do we honor the right to a second opinion, not just medically but morally, across traditions?
Pluralism by Design, Not by Accident
We can build systems that respect moral plurality, but only if we aim for it explicitly. Start with algorithmic humility: advice engines should ask questions before making prescriptions, surface uncertainty, and acknowledge value-laden trade-offs. A system that never says “I don’t know; seek counsel” is unsafe by design. Next, ground responses in vetted, locally authoritative corpora, such as fatwas, pastoral resources, and community health guidelines, and make the provenance legible so users can judge alignment with their values. Establish abstention thresholds for high-stakes domains like marriage, abuse, self-harm, and custody; in these cases, the right move is often to diagnose the context, then refer, as the sketch below illustrates.

Create community oversight that brings ethicists, clinicians, jurists, clergy, and social workers into the loop, not as censors but as stewards of contextual wisdom. Audit for disparate silencing of traditions: where do systems routinely fail to surface reconciliation, patience, communal repair? Give users the right to contextual framing (“advise within X tradition unless safety is at risk”) and honor it with transparent guardrails. And treat labeling as education: plainly explain that LLMs are instruments of next-token prediction, not oracles.
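As a rough illustration of what an abstention threshold and human referral could look like, here is a minimal sketch. The topic list, confidence score, threshold value, and field names are all assumptions invented for the example; a real system would need far richer signals and genuine community governance behind them.

```python
# A minimal sketch of abstention and referral logic; every value and field
# here is hypothetical, chosen only to illustrate the idea.
from dataclasses import dataclass
from typing import Optional

HIGH_STAKES_TOPICS = {"marriage", "abuse", "self-harm", "custody"}
ABSTENTION_THRESHOLD = 0.4  # invented value: abstain when confidence is low

@dataclass
class Request:
    topic: str
    model_confidence: float        # 0.0-1.0, as estimated by the system
    user_tradition: Optional[str]  # e.g. a stated religious or cultural frame

def route(request: Request) -> str:
    """Decide whether to answer, ask for context, or refer to a human."""
    if request.topic in HIGH_STAKES_TOPICS:
        if (request.model_confidence < ABSTENTION_THRESHOLD
                or request.user_tradition is None):
            return "refer: route to a qualified human counselor in the user's community"
        return "answer-with-framing: advise within the stated tradition, cite sources, flag uncertainty"
    return "answer: low-stakes query, respond with provenance"

print(route(Request(topic="marriage", model_confidence=0.3, user_tradition=None)))
# -> refer: route to a qualified human counselor in the user's community
```

The point of the sketch is the shape of the decision, not the numbers: high-stakes topics default to humility and human hands, and advice is only offered inside a frame the user has chosen.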
The goal is not to shackle innovation to tradition, nor to canonize tradition against critique. It is to demand a standard of care for machine-mediated guidance that matches the gravity of the questions we ask. In many families, marriage is not a private contract between two sovereign individuals; it is a lattice of obligations and hopes binding households across generations. Any tool that speaks into that space must recognize, at a minimum, that it is addressing a sanctuary.
When an algorithm advises divorce, it might be right. It might also be performing certainty where there is none or reenacting a cultural script that is not yours. Wisdom, in every tradition worth keeping, begins by slowing down. If these systems are to participate in our intimate decisions, they must learn to do the same: to ask before they answer, to name their limits, and to hand the mic back to the humans who live with the consequences.
That is the ethic we should insist upon: not AI that tells us what to do, but AI that helps us remember who we are.
Dr. George Mikros is a professor at Hamad Bin Khalifa University’s College of Humanities and Social Sciences.