Southeast Asia’s Faith-Tech Sector and the Future of Ethical AI
Nafees Khundker, CEO of Bitsmedia (Muslim Pro)
Artificial intelligence is most often discussed in terms of technical progress. Much of the attention goes to how capable systems have become, how much work they can automate, and how quickly they can be deployed across markets.
In Southeast Asia, that framing leaves out an important part of the picture. Technology here does not operate in isolation. It enters societies shaped by belief, cultural memory, and strong social norms. Trust is built over time through familiarity and consistency.
When AI becomes part of everyday life, its success depends not only on performance, but on whether people feel it behaves in ways they can understand and accept.
This is why conversations about responsible and human-centred AI tend to surface earlier, and with greater urgency, in this region, particularly in sectors that operate close to identity and meaning.
Why capability-led AI has clear limits
The global AI market continues to expand rapidly. In 2025, it is estimated at around USD 371 billion, spanning enterprise software, generative models, analytics, automation tools, and consumer-facing applications.
Forecasts suggest this could grow beyond USD 2 trillion over the next decade, according to MarketsandMarkets.
This pace of growth has encouraged a mindset that prioritises rapid deployment, with consequences often considered only after systems are already in use. That approach struggles in environments where trust is fragile.
In Southeast Asia, linguistic, cultural, and religious diversity exposes these weaknesses early, with opaque or misaligned AI systems often leading to disengagement, slower adoption, and higher long-term costs to credibility.
In these contexts, the question is not whether AI functions, but whether it can operate with legitimacy. Faith-tech makes this tension visible earlier than most sectors.
When technology operates close to belief
Faith-tech platforms operate close to everyday practice and belief. They support rituals, learning, and reflection that many people experience privately.
In these settings, users rely on the system as part of their daily lives, and trust is assumed from the start. This reliance changes the stakes. Accuracy alone is not enough.
Speed does not compensate for misjudgment.
An AI system that behaves carelessly in these spaces risks undermining its legitimacy, even if the output is technically correct.
As a result, a different design mindset becomes necessary. Instead of asking how much can be automated, the more responsible question becomes where automation should stop.
Human judgment is not an inefficiency to remove. It is what allows the system to exist at all.
Across Southeast Asia, this thinking is becoming more visible. Ethical guardrails are increasingly treated as design constraints rather than compliance checks added after deployment. This shift is driven less by regulation than by experience.
What restraint looks like in practice
I see these trade-offs most clearly in my work at Muslim Pro, a faith-tech mobile application serving a global Muslim audience across very different cultural contexts. Operating in this space makes one thing clear.
AI cannot be allowed to assume roles it was never meant to hold.
It is not a substitute for religious authority, nor should it be responsible for interpreting belief, and it must remain accountable to human judgment. In practice, this means AI is positioned as an assistant rather than a decision-maker.
Content creation remains human-led. Religious material is reviewed by qualified experts. Clear boundaries are set around what AI-generated output is acceptable and what is not. Oversight and moderation remain human responsibilities.
These decisions are not always the fastest. They are often more complex to implement. They also prevent errors that would be far more costly to undo. Over time, they preserve trust, which is the reason users continue to return.
Why training data matters more in Southeast Asia
Ethical debates around AI often focus on how systems are deployed. Far less attention is paid to how they are trained. This distinction matters, especially in Southeast Asia.
Many AI models today are trained on datasets that reflect dominant markets and languages. These datasets are often biased, incomplete, or flattened by design. They privilege scale over nuance and uniformity over difference. For a region like Southeast Asia, this creates real problems.
The societies here are not only diverse, they are layered.
Religious practice, social norms, and daily routines vary widely across borders and within the same faith community.
A Singaporean Muslim’s religious lifestyle differs from that of an Indonesian Muslim, shaped by different legal systems, cultural expectations, languages, and public expressions of faith.
When AI systems are trained without accounting for these differences, they risk producing outputs that feel generic or misplaced. Even when outputs are technically accurate, they can still feel disconnected from local realities, which undermines their credibility over time.
This is why responsible AI in Southeast Asia cannot stop at deployment guidelines. It must extend upstream into decisions about data selection, training processes, and whose experiences are represented in the model.
Training data is not neutral. It encodes assumptions about what is normal, visible, and worth capturing.
For AI to earn trust in this region, it must be trained with an understanding that difference is not noise to be removed, but context that needs to be respected.
Rethinking how AI shapes behaviour
Another area where current AI deployment deserves scrutiny is the way engagement systems are designed to influence behaviour at scale.
Many AI-driven products are optimised to predict actions and steer outcomes. This is often framed as personalisation.
In practice, it can narrow agency, especially when deployed without sufficient restraint. In environments shaped by shared values, this approach carries risk. AI should help people navigate complexity, not quietly steer them toward predetermined choices. A more disciplined model is possible.
AI can reduce friction without shaping decisions. It can make it easier for users to express themselves. It can help maintain respectful spaces by identifying harmful or inappropriate content.
Throughout this process, human oversight remains essential. The goal is not to maximise attention. It is to sustain environments people feel comfortable returning to over time.
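One way to picture the "assistant, not decision-maker" pattern described above is as a triage loop in which the model can clear or escalate content but never reject it on its own. The sketch below is illustrative only; the names, the risk score, and the threshold are assumptions for the example, not a description of any production system.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVED = "approved"
    REJECTED = "rejected"
    NEEDS_REVIEW = "needs_review"

@dataclass
class Submission:
    text: str
    ai_risk_score: float  # hypothetical classifier output in [0, 1]

def triage(sub: Submission, review_threshold: float = 0.2) -> Decision:
    """The AI only triages: anything it cannot confidently clear
    is routed to a human moderator, never auto-rejected."""
    if sub.ai_risk_score < review_threshold:
        return Decision.APPROVED  # low-risk content passes through
    return Decision.NEEDS_REVIEW  # a human makes the final call

def human_review(sub: Submission, moderator_approves: bool) -> Decision:
    # The human decision is final and overrides the model's score.
    return Decision.APPROVED if moderator_approves else Decision.REJECTED
```

The design choice here is that the model's output narrows the human workload rather than replacing human judgment: rejection is reserved for people, which keeps accountability where the article argues it belongs.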
This instinct aligns closely with Southeast Asia’s broader relationship with technology. Systems that strengthen social cohesion tend to endure longer than those that attempt to reshape behaviour too aggressively.
A quieter contribution to the global AI debate
Much of the global discussion on ethical AI is shaped by technology hubs and policy centres far removed from the communities most affected by deployment decisions.
Southeast Asia offers a quieter, more grounded perspective. Here, progress is measured less by how quickly systems can be rolled out and more by whether they can be sustained.
Restraint is not viewed as hesitation. It is understood as a responsibility. Across sectors, similar lessons are emerging. Ethical design constraints need to be established early.
Human judgment must remain central where identity and meaning are involved. Access should widen without erasing context.
These lessons extend beyond faith-tech. They apply to finance, healthcare, education, and media, anywhere AI operates close to people’s lives.
As AI systems continue to advance, their value will not be defined only by what they can automate. It will also be defined by how carefully they respect the societies they serve. In Southeast Asia, that care may prove to be the region’s most important contribution to the future of ethical AI.