Most technology errors and omissions, directors and officers, and cyber policies written before 2025 treated artificial intelligence as a feature of the underlying service. In 2026 that assumption is being stripped out of standard wordings. Three shifts between mid-2025 and early 2026 reshaped the AI insurance market.
Affirmative AI liability cover moved from concept to active placement: Armilla Insurance Services launched its Lloyd's-backed AI Liability Insurance on 30 April 2025, the Artificial Intelligence Underwriting Company (AIUC) came out of stealth on 23 July 2025 with a USD 15 million seed round and an AI agent insurance programme, and Testudo Global entered the Lloyd's coverholder market later that year. Verisk introduced new generative AI exclusion endorsements for commercial general liability policies, effective 1 January 2026, giving traditional carriers a clean way to strip AI exposure out of standard wordings. And the Monetary Authority of Singapore published its Consultation Paper on Guidelines on AI Risk Management for Financial Institutions on 13 November 2025, making documented AI governance the supervisory expectation for Singaporean banks, insurers, and fintechs deploying AI. Insurance is part of the response.
By the end of this guide, you'll know what AI insurance in Malaysia and Singapore covers, why your existing E&O, D&O, and cyber policies probably don't respond to AI-specific failures, the four categories of affirmative AI cover Asian businesses actually access, and how emerging standards like AIUC-1, ISO 42001, and NIST AI RMF shape the underwriting conversation.
This guide is for Heads of AI, CTOs, Heads of Legal and Compliance at Southeast Asian AI vendors, AI-enabled SaaS platforms, and enterprise AI deployers, for CFOs and Risk Managers evaluating silent AI coverage gaps at renewal, and for MAS-regulated financial institutions working through the new AI Risk Management Guidelines.
Evaluating AI insurance for your Malaysian, Singaporean, or wider Asian AI business?
AI liability insurance is a real and active market in 2026, but the specialist capacity sits with Lloyd's of London coverholders such as Armilla and Testudo, AIUC's US-based agent insurance programme, and Munich Re's aiSure rather than with domestic Asian general insurers. If you are an AI vendor, run an AI-enabled platform, or operate as a MAS-regulated financial institution deploying AI, Emerge can help you access this cover in the markets where it is written.
What is AI insurance?
AI insurance in Malaysia and Singapore is a specialist commercial policy that protects AI vendors, AI-enabled platforms, and enterprise AI deployers against third-party claims arising from AI-specific failures. Cover is written across four categories: affirmative AI liability insurance, AI performance warranties, affirmative AI endorsements on existing commercial lines, and AI-aware directors' and officers' liability. Specialist capacity sits with Lloyd's of London coverholders, Bermuda specialty carriers, and dedicated AI insurers in the United States, with emerging reinsurer-cloud partnerships.
AI systems are unusual among insurable technology risks because the failure mode is probabilistic rather than deterministic. A software bug either exists or it does not. An AI agent's decision to call a wrong tool, hallucinate a fact, or treat a user differently is an output of a statistical system, not a flaw in logic. Insurance in this class therefore underwrites the governance architecture around the model as much as the model itself: how inputs are validated, how outputs are monitored, what human oversight exists, how bias is tested for, how incidents are logged, how data flows in and out of the system, and how the business responds when something goes wrong.
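The governance loop described above can be sketched as a thin wrapper around any model call. Everything here is illustrative, not a carrier's or standard's required implementation: the injection-pattern list, the confidence floor, and the incident-record fields are all assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Naive screen for obvious injection patterns; a real deployment would use
# a dedicated classifier. Patterns here are purely illustrative.
BLOCKED_PATTERNS = ("ignore previous instructions", "reveal your system prompt")

@dataclass
class GovernedModel:
    """Wraps a model callable with input validation, output monitoring,
    human escalation, and incident logging -- the governance controls
    described in the text above."""
    model: callable                  # assumed to return (text, confidence)
    confidence_floor: float = 0.7    # below this, a human reviews the output
    incidents: list = field(default_factory=list)

    def _log(self, stage: str, reason: str, payload: str) -> None:
        # Incident log: timestamped, with stage and reason, for audit.
        self.incidents.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "stage": stage, "reason": reason, "payload": payload,
        })

    def answer(self, prompt: str) -> dict:
        # 1. Input validation before the model ever sees the prompt.
        if any(p in prompt.lower() for p in BLOCKED_PATTERNS):
            self._log("input", "suspected prompt injection", prompt)
            return {"status": "blocked"}
        text, confidence = self.model(prompt)
        # 2. Output monitoring: low-confidence answers go to a human, not the user.
        if confidence < self.confidence_floor:
            self._log("output", f"confidence {confidence:.2f} below floor", text)
            return {"status": "escalated_to_human", "draft": text}
        return {"status": "served", "text": text, "confidence": confidence}
```

Against a stub model that returns a low-confidence answer, `answer()` escalates rather than serves, and appends one incident record: exactly the audit trail an underwriter asks to see.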
The silent AI cover problem
The defining market problem in 2026 is that existing commercial wordings do not respond cleanly to AI-specific failures. Technology errors and omissions policies were written before large language models became embedded in professional services workflows and do not contemplate a vendor being sued because its AI produced a defamatory, inaccurate, or IP-infringing output. Directors and officers wordings do not contemplate claims against directors for failing to govern an AI model that produced a harmful decision at scale. Cyber policies respond to breaches and ransomware but typically exclude financial loss caused by an AI system behaving as designed.
This is sometimes called "silent AI cover": uncertainty over whether an existing policy will respond to an AI-specific claim, and a high probability that a contested claim ends in coverage litigation rather than payment. Armilla, launching its Lloyd's-backed AI Liability Insurance in April 2025, described this silent-cover risk in its public announcement as mirroring the early, costly lessons of cyber risk, before affirmative cyber wordings became standard market practice.
The insurance industry's response has moved in two directions at once. Affirmative AI products have emerged to cover named AI perils explicitly, and exclusionary endorsements have emerged to remove AI-specific risk from standard commercial wordings.
Verisk introduced new generative AI exclusion endorsements for commercial general liability policies, effective 1 January 2026. Several major US carriers have filed their own AI exclusions with state regulators. The practical effect is that AI-deploying businesses entering 2026 without documented AI governance frameworks increasingly face coverage denials or explicit AI exclusions at renewal.
| Existing policy line | Typical response to AI-specific claim | 2026 trajectory |
|---|---|---|
| Technology professional indemnity / E&O | Silent or contested on claims arising from AI-specific failures such as hallucinations or model drift | Carriers introducing affirmative AI endorsements for governed deployments and exclusions for ungoverned ones |
| Commercial general liability (CGL) | Historically silent; new Verisk/ISO exclusion endorsements effective 1 January 2026 allow carriers to strip generative AI liability | Generative AI exclusions expected to become standard on renewal |
| Directors and officers liability | Responds to individual director claims in principle but typically without affirmative wording for AI governance failures | AI-aware D&O endorsements emerging, often conditional on documented AI governance |
| Cyber insurance | Responds to breach, ransomware, and data exfiltration; typically does not respond to AI output harm or performance failure | Affirmative AI-specific endorsements emerging, including through the Google Cloud and Beazley, Chubb, and Munich Re partnership |
| Media liability | Responds to defamation, IP infringement, and content-related claims; contested on AI-generated content | Affirmative AI content cover emerging, typically tied to generative AI governance controls |
The four categories of affirmative AI cover Asian businesses access
The affirmative AI insurance market in 2026 is structured across four categories. Most AI-deploying businesses need more than one of them. The descriptions below reflect typical market practice as publicly disclosed by the carriers and coverholders writing these products.
Affirmative AI liability insurance
This is the product most commonly described as "AI insurance" in market commentary. It responds to third-party claims arising from AI-specific failures: hallucinations, inaccurate outputs, model drift and performance degradation, data leakage, IP infringement associated with AI outputs, defamatory or harmful content generated by AI, tool and action failures by AI agents such as incorrect refunds or unauthorised purchases, and regulatory violations tied to AI-specific rules.
Armilla Insurance Services, a Lloyd's coverholder, launched its AI Liability Insurance on 30 April 2025, underwritten by Chaucer and other Lloyd's syndicates, with publicly disclosed per-organisation limits of up to USD 25 million. The Artificial Intelligence Underwriting Company, operating under Caliber Labs PBC, launched its AI agent insurance programme following its 23 July 2025 emergence from stealth, with publicly announced coverage up to USD 50 million for AI-specific losses. Testudo Global entered the Lloyd's coverholder market later in 2025, positioned for companies integrating vendor generative AI into their operations.
AI performance warranties
AI performance warranties are structurally different from liability insurance. They respond when an AI model fails to meet the performance metrics its vendor commits to in a customer contract, rather than to a third-party claim.
Munich Re launched aiSure in 2018 as one of the first AI insurance products, providing performance guarantee cover for specific AI systems against contractually committed KPIs. Armilla operates a parallel product, Armilla Guaranteed, which backs vendor performance commitments and offers a refund mechanism to the enterprise buyer if the AI fails to meet pre-agreed accuracy or efficiency targets. Performance warranty cover is typically bought by AI vendors to de-risk customer procurement cycles, and is underwritten against the specific model, deployment context, and KPI definition.
Affirmative AI endorsements on existing lines
A third structure adds affirmative AI cover to existing commercial lines rather than standing alone. In 2025 Google Cloud announced a partnership with Beazley, Chubb, and Munich Re to offer a cyber insurance product with explicit affirmative AI coverage to Google Cloud enterprise customers. This approach bundles AI-specific cover into the cyber line where much of the data-layer AI risk already sits. Similar endorsements are now written as add-ons to technology E&O and media liability policies, typically conditional on the insured demonstrating named AI governance controls, model-use documentation, or third-party certification such as AIUC-1 or ISO 42001.
AI-aware directors and officers liability
The fourth category responds to claims against directors and officers personally for AI governance failures. These include shareholder actions alleging a board failed to oversee AI deployment, regulatory investigations tied to AI-specific frameworks like the EU AI Act or MAS's AI Risk Management Guidelines, and claims arising from AI-driven discrimination or consumer harm. California AB 316, effective 1 January 2026, bars businesses from asserting that "the AI acted autonomously" as a defence in civil actions, which sharpens director exposure where AI decisions cause harm. AI-aware D&O cover is typically a specialist endorsement rather than a standalone product, and increasingly requires documented AI governance as a condition of affirmative wording.
What AI insurance actually responds to
AI insurance wordings define their triggers tightly, because the category of "AI failure" is broad and the insurable perimeter has to be specific. The triggers below reflect the affirmative wordings published by AIUC, Armilla, and similar specialist carriers.
| Trigger event | Example | Typical response |
|---|---|---|
| Hallucination or inaccurate output | AI-generated legal research cites cases that do not exist; AI customer-service agent confirms a refund policy the company does not offer | Affirmative AI liability responds to third-party financial loss and defence costs |
| IP infringement by AI output | Generative AI produces output that reproduces protected code, text, or imagery; downstream user receives a cease-and-desist | Affirmative AI liability or AI-aware media liability |
| Model performance degradation | Deployed model's accuracy drops below contracted KPI; customer suffers measurable economic loss or demands contractual refund | AI performance warranty (e.g., Munich Re aiSure, Armilla Guaranteed) |
| AI agent tool or action failure | Autonomous agent calls a payment API with the wrong amount; agent exceeds authorised permissions and triggers an unauthorised transaction | Affirmative AI liability, typically conditional on AIUC-1 or equivalent agent-specific controls |
| Data leakage via AI | AI system's training or inference pipeline exposes confidential data; prompt injection attack exfiltrates sensitive information from a connected system | Affirmative AI liability and/or affirmative AI cyber endorsement |
| AI discrimination or bias | Hiring, credit, or insurance model produces outcomes that systematically disadvantage a protected group; regulatory investigation or class action follows | Affirmative AI liability and/or AI-aware D&O |
| Regulatory violation tied to AI-specific rules | Breach of EU AI Act high-risk system obligations; breach of MAS AI Risk Management Guidelines; breach of sector-specific AI rules | Affirmative AI liability with regulatory defence cover; AI-aware D&O for individual director exposure |
| Defamatory or harmful AI content | AI tool publishes defamatory content about a named individual; AI-generated content causes reputational harm to a third party | Affirmative AI liability, typically overlapping with media liability where in place |
AIUC-1, ISO 42001, and NIST AI RMF: how standards shape underwriting
AI insurance underwriting increasingly treats documented AI governance as a baseline rather than a differentiator. Three reference frameworks dominate the conversation.
AIUC-1 is a certifiable standard for AI agent systems published in 2025 by the Artificial Intelligence Underwriting Company. It is organised around six domains: Data and Privacy, Security, Safety, Reliability, Accountability, and Society. Certification requires independent audit of controls, technical testing against adversarial scenarios (described by AIUC as modelled on documented real-world AI failures), and quarterly re-testing to keep the certificate valid. AIUC ties its own insurance pricing and availability directly to AIUC-1 audit results, and positions the standard as operationalising principles from NIST's AI RMF, the EU AI Act, and MITRE's ATLAS framework.
Schellman became the first authorised AIUC-1 auditor in 2025. ElevenLabs was one of the first certified vendors (February 2026), and UiPath the first enterprise automation platform (March 2026), publicly disclosing certification processes involving 2,000 to 5,800 technical tests.
ISO/IEC 42001:2023 is the international standard for AI management systems. It was published by ISO in 2023 and is structurally similar to ISO 27001 for information security management, covering governance, risk, and lifecycle controls for AI. Adoption is emerging in Singapore and Malaysia, and ISO 42001 certification is increasingly cited by enterprise buyers as evidence of mature AI governance. From an insurance perspective, ISO 42001 certification typically supports affirmative AI cover eligibility and can influence retention and sub-limit terms.
NIST AI Risk Management Framework (AI RMF 1.0) was published by the US National Institute of Standards and Technology in January 2023. It is a voluntary framework organised around four core functions (Govern, Map, Measure, Manage) and is widely referenced globally. MAS's November 2025 Consultation Paper on Guidelines on AI Risk Management for Financial Institutions explicitly names international frameworks including NIST AI RMF as reference points for Singapore firms. From an insurance perspective, NIST AI RMF alignment typically supports eligibility for affirmative AI endorsements on existing lines, especially in US-exposed programmes.
| Framework | Scope | Governance body | Typical insurance role |
|---|---|---|---|
| AIUC-1 | Certifiable standard for AI agent systems, six domains, quarterly technical testing | AIUC, with authorised auditors such as Schellman | Direct input to AIUC insurance pricing and eligibility; reference point for other carriers writing agent-specific cover |
| ISO/IEC 42001:2023 | AI management system standard, organisation-wide governance, risk, and lifecycle controls | International Organization for Standardization, via accredited certification bodies | Supports affirmative AI cover eligibility; referenced in enterprise procurement as evidence of mature AI governance |
| NIST AI RMF 1.0 | Voluntary risk management framework, four functions (Govern, Map, Measure, Manage) | US National Institute of Standards and Technology | Reference point for AI risk governance; named in MAS consultation paper alongside FEAT and IMDA frameworks |
| MITRE ATLAS | Adversarial threat framework for machine learning systems | MITRE Corporation | Informs technical testing and red-teaming expectations under AIUC-1 and similar standards |
Who buys AI insurance: the vendor and deployer distinction
AI insurance programmes are structured around two distinct buyer profiles, and most mid-market businesses sit in one or the other rather than both.
The AI vendor builds and sells AI systems: AI agents, AI-enabled SaaS, foundation model access, or vertical AI applications. Vendor-side cover responds to third-party claims from enterprise customers whose deployments fail, and to performance warranty claims where the model misses contracted KPIs.
AIUC's insurance programme is structured around AI agent vendors, with the AIUC-1-certified agent as the insured risk, and the AI company typically purchasing cover that protects both itself and its enterprise clients. Armilla writes both vendor-side and enterprise-side cover, with its affirmative AI liability policy available whether the insured is the vendor or the enterprise procuring the AI. For Asian AI vendors exporting into regulated markets, vendor-side cover also responds to regulatory defence costs under frameworks that apply extraterritorially.
The AI deployer is the enterprise that deploys AI into its own workflows, regardless of whether the model is built in-house or procured. Deployer-side cover responds when the deployed AI produces an outcome that harms a customer, an employee, or a third party: a wrong decision, a leaked document, a biased recommendation, a defamatory output. For MAS-regulated financial institutions deploying AI under the new AI Risk Management Guidelines, deployer-side cover also supports documented financial resilience. Google Cloud customers accessing the Beazley, Chubb, and Munich Re partnership are buying deployer-side affirmative AI cover as an extension of their cloud cyber programme.
| Business profile | Primary exposure | Core cover to evaluate |
|---|---|---|
| AI-native startup selling AI agents or AI-enabled SaaS | Customer claims from model failure; IP infringement; EU AI Act obligations if serving EU users | Affirmative AI liability, AI-aware tech E&O, performance warranty |
| Enterprise deploying third-party AI in customer-facing workflows | Customer harm from AI output; brand and reputational exposure; regulatory exposure | Affirmative AI endorsement on tech E&O / media liability; affirmative AI cyber endorsement |
| MAS-regulated financial institution deploying AI | AI Risk Management Guidelines compliance; AI model risk, including generative and agentic AI | Affirmative AI endorsement on PI / D&O; AI-aware cyber; documented AI governance as a placement condition |
| Malaysian or Singaporean company whose AI outputs are consumed in the EU | EU AI Act extraterritorial obligations (provider or deployer); regulatory defence; fines | Affirmative AI liability with regulatory defence; AI-aware D&O for individual director cover |
| AI platform provider with enterprise customer base | Layered exposure between vendor and customer; indemnity obligations under enterprise contracts | Affirmative AI liability; performance warranty; contractual indemnity review |
Wondering how your current E&O, D&O, or cyber policy responds to an AI failure?
Most commercial wordings written before 2025 treat AI as a feature of the underlying service rather than a separately underwritten risk. Through 2026 that assumption is being stripped out of standard wordings via new generative AI exclusion endorsements, and replaced with affirmative products written through Lloyd's, US specialist carriers, and global reinsurer partnerships. A 30-minute exposure briefing translates your current policy language into a concrete view of where your AI cover is affirmative, silent, or excluded, and what options exist in the specialist market.
AI insurance in Singapore: MAS AI Risk Management Guidelines and the FEAT lineage
Singapore's AI governance framework is the most developed in Southeast Asia and it is now explicitly sector-regulated in financial services. The Monetary Authority of Singapore first published the FEAT Principles in 2018, setting expectations on Fairness, Ethics, Accountability, and Transparency in AI and data analytics use by financial institutions. In 2019 MAS and industry partners launched the Veritas framework to operationalise FEAT, publishing methodology papers across three phases. On 5 December 2024 MAS issued an information paper on artificial intelligence model risk management, based on a thematic review of selected banks conducted in mid-2024.
On 13 November 2025 MAS published its Consultation Paper on proposed Guidelines on AI Risk Management for Financial Institutions (P017-2025). The Guidelines apply to all MAS-regulated financial institutions: banks, insurers, capital markets intermediaries, payment institutions, and fintechs. They set expectations on board and senior management oversight of AI risk, AI inventory and materiality assessment, AI lifecycle controls from development through retirement, third-party AI tool governance (institutions cannot delegate governance to vendors), and capacity to manage generative AI and agentic AI risks, including through MAS's Project MindForge initiative on generative AI.
The Guidelines build on MAS's FEAT principles and complement IMDA's Model AI Governance Framework and the AI Verify Foundation's testing toolkit. MAS also references international frameworks including NIST AI RMF. Insurance does not substitute for any of these governance obligations. It sits alongside them as the mechanism through which financial institutions provide credible financial backing to their AI governance posture and demonstrate operational resilience to MAS, institutional clients, and counterparties.
| Singapore framework or expectation | Primary source | Insurance implication |
|---|---|---|
| FEAT Principles for AI and data analytics in financial services | MAS, 2018 | Foundational expectation for AI governance in MAS-regulated firms; insurance placement typically requires FEAT alignment evidence |
| Veritas framework for operationalising FEAT | MAS Veritas Consortium, 2019 onwards (three phases) | Methodology for AI model assessment; underwriters treat Veritas-aligned assessment as positive signal on submission |
| Information paper on AI model risk management | MAS, 5 December 2024 | Thematic review expectations feed into underwriting questions on AI inventory and model risk |
| Proposed Guidelines on AI Risk Management for Financial Institutions | MAS Consultation Paper P017-2025, 13 November 2025 | Board-level AI oversight, three lines of defence, lifecycle controls become supervisory baseline; documented governance becomes placement condition |
| Model AI Governance Framework | IMDA and PDPC (voluntary, sector-agnostic) | General benchmark for non-financial AI deployers; procurement-side evidence for enterprise customers |
| AI Verify testing toolkit | AI Verify Foundation (open-source) | Technical testing evidence accepted in submissions; complements third-party certification such as AIUC-1 or ISO 42001 |
AI insurance in Malaysia: the National AI Office, AIGE, and the 2026–2030 Action Plan
Malaysia's AI governance framework is at an earlier stage than Singapore's but moving fast. The Ministry of Science, Technology and Innovation (MOSTI) published the National Guidelines on AI Governance and Ethics (AIGE) in 2024, a non-legally binding framework structured around seven principles: fairness; reliability, safety and human control; privacy and security; inclusiveness; transparency; accountability; and human-centricity. The AIGE is organised for users, regulators, and developers, and aligns Malaysia with the OECD AI Principles and adjacent regional frameworks.
The Ministry of Digital launched the National Artificial Intelligence Office (NAIO) on 12 December 2024. NAIO coordinates Malaysia's AI policy, governance, and investment strategy. In late 2025 the Minister of Digital confirmed that the National AI Action Plan 2026–2030 would be tabled in Parliament by December 2025, positioning 2026 as a pivotal transition year for Malaysian AI governance.
A comprehensive AI legislative framework is being developed by the Ministry of Digital through NAIO, with a risk-based approach to AI-related harm, incident reporting, and ethical AI principles, and is expected to be submitted to Cabinet in mid-2026. Updates to the Personal Data Protection Act 2010 (Act 709) are being prepared in parallel, introducing rules on profiling, automated decision-making, and accountability relevant to AI deployment.
For Malaysian AI businesses, the insurance implication is that documented AI governance, whether under AIUC-1, ISO 42001, or AIGE principles, matters before the legislative framework lands. Insurance placement for affirmative AI cover already requires it as a submission input, and the regulatory direction of travel is clear enough that deferring governance work until the Act is gazetted is a higher-risk posture than investing in it now.
| Malaysia framework or expectation | Primary source | Insurance implication |
|---|---|---|
| National Guidelines on AI Governance and Ethics (AIGE) | MOSTI, 2024 (non-legally binding) | Reference principles for AI governance submissions; seven-principle framework referenced by enterprise procurement |
| National Artificial Intelligence Office (NAIO) | Ministry of Digital, launched 12 December 2024 | Coordinating body shaping Malaysian AI regulatory trajectory; NAIO publications inform underwriter views on Malaysian AI risk |
| National AI Action Plan 2026–2030 | NAIO / Ministry of Digital (announced for tabling in Parliament by December 2025) | Five-year roadmap; signals governance, incident reporting, and risk-based regulatory direction that feeds underwriting submissions |
| AI legislative framework (in development) | Ministry of Digital through NAIO; expected Cabinet submission mid-2026 | Pre-legislative governance investment already relevant to placement; framework will shape future compulsory cover or disclosure |
| Personal Data Protection Act 2010 (Act 709) amendments | Jabatan Perlindungan Data Peribadi (JPDP); amendments in progress | New rules on profiling and automated decision-making create specific AI-deployer exposure; affirmative AI endorsements increasingly cover PDPA defence |
The EU AI Act's extraterritorial reach for SEA AI exporters
Regulation (EU) 2024/1689, commonly known as the EU AI Act, came into force on 1 August 2024. Most substantive obligations for high-risk AI systems become applicable on 2 August 2026. The Act is extraterritorial in a way that directly affects Southeast Asian AI vendors and deployers. Under Article 2, the AI Act applies to providers placing AI systems on the EU market regardless of where they are established, and to providers and deployers located in a third country where the output of the AI system is used in the EU.
The practical consequences for Malaysian and Singaporean AI businesses are specific. A Singaporean AI vendor selling an AI-enabled SaaS platform to EU customers is within scope as a provider. A Malaysian software company whose AI system processes prompts submitted by EU end users, or whose outputs are consumed by EU-based businesses, is within scope as either a provider or a deployer depending on the architecture. The Act classifies AI systems across four risk tiers, with most substantive obligations concentrated on high-risk systems, and sets separate obligations for general-purpose AI models.
From an insurance perspective, the EU AI Act matters in three ways. First, non-compliance penalties under the Act can reach substantial percentages of worldwide annual turnover, creating direct exposure for SEA businesses within scope. Second, the Act's required documentation (technical documentation, risk management systems, post-market monitoring) becomes input to affirmative AI insurance submissions. Third, the Act's obligations on providers of general-purpose AI models (which took effect from August 2025) intersect with vendor-side AI liability cover for foundation model providers and their downstream integrators.
| EU AI Act provision | In-force date | SEA applicability |
|---|---|---|
| Prohibited practices (Article 5) | 2 February 2025 | SEA providers placing systems on the EU market or whose output is used in the EU |
| AI literacy obligations (Article 4) | 2 February 2025 | SEA providers and deployers with EU-facing AI deployments |
| General-purpose AI model obligations | 2 August 2025 | SEA providers of foundation models or GPAI accessed by EU users or EU-based businesses |
| High-risk AI system obligations (most of Annex III) | 2 August 2026 | SEA providers and deployers of high-risk systems used in the EU; wide range of sectors including employment, education, financial services, and critical infrastructure |
| Obligations for high-risk AI systems embedded in regulated products | 2 August 2027 | SEA manufacturers of products (medical devices, vehicles, machinery) incorporating high-risk AI sold in the EU |
What AI insurance typically excludes
The affirmative AI insurance market is still settling what sits inside and outside the insurable perimeter. Exclusions vary materially by carrier, but certain categories are consistent across Armilla, AIUC, Munich Re's aiSure, and adjacent specialist wordings.
| Typical exclusion | What it captures | Market reasoning |
|---|---|---|
| Intentional misuse or reckless deployment | Deployment of AI systems outside their documented scope; knowing violation of AI governance policies | Moral hazard exclusion; affirmative wordings require the insured to act within the controls declared at submission |
| Prior known issues or pre-existing claims | AI failures or incidents known to the insured before inception; claims notified under prior policies | Standard across commercial lines; affirmative AI policies require incident history disclosure at submission |
| Uncertified or undeclared AI systems | Cover limited to the AI systems, models, and deployment scopes named in the submission; new systems added mid-term typically require notification | AI risk is model-specific; underwriters need to know which systems, training data sources, and deployment contexts are in scope |
| Sanctions and illegal activity | AI deployments involving sanctioned parties or jurisdictions; AI used in unlawful conduct | Sanctions exclusions are standard across commercial lines |
| Bodily injury or tangible property damage | Physical harm from AI decisions is typically excluded or sub-limited; sits with CGL, product liability, or dedicated product recall cover | Affirmative AI liability is designed for economic and reputational loss, not bodily injury; CGL and product liability remain the primary responders, subject to the new generative AI exclusion endorsements |
| Regulatory fines where legally uninsurable | Specific regulatory penalties that are uninsurable as a matter of law in the relevant jurisdiction | Insurability varies by jurisdiction; policies typically cover legal defence costs even where the fine itself cannot be insured |
| Intellectual property claims involving prior knowledge | IP infringement where the insured knew or should have known of the infringement risk; claims from specific named rights-holders where prior notice exists | IP cover is granted in the context of disclosed training data and model provenance; undisclosed risks typically fall outside |
How Asian businesses access AI insurance
Specialist capacity for AI insurance sits across three markets. Lloyd's of London is the primary market, with Armilla and Testudo writing through named syndicates including Chaucer, and adjacent syndicates offering affirmative AI endorsements on technology E&O and media liability lines. The United States market carries the Artificial Intelligence Underwriting Company's AI agent insurance programme and the Google Cloud partnership with Beazley, Chubb, and Munich Re. Bermuda and continental European reinsurers provide capacity behind these programmes, including Munich Re's aiSure, which has been in market since 2018.
Three things determine whether a placement goes well. The first is submission quality. Underwriters in this class want to see the AI architecture in detail: what models are deployed, what training data sources are used, how inputs and outputs are monitored, what human oversight exists, how prompt injection and adversarial inputs are handled, how incidents are logged and escalated, how models are retrained and re-tested, and what third-party certifications are held. AIUC-1, ISO 42001, and NIST AI RMF alignment are treated as positive signals.
The second is regulatory context. A MAS-regulated Singapore bank deploying generative AI under the new AI Risk Management Guidelines has a different risk profile from a Malaysian AI SaaS vendor exporting to EU customers, or a Hong Kong AI platform serving institutional clients. Translating that context into the submission is the specialist work. Documentation under MAS's FEAT principles, IMDA's Model AI Governance Framework, Malaysia's AIGE, and applicable EU AI Act obligations should all sit inside the submission file.
The third is timing. The Asia-London day lag means a submission leaving Kuala Lumpur or Singapore on a Monday afternoon reaches London underwriters on Monday morning UK time, with responses arriving overnight in Asia. The US-based AIUC programme operates on a parallel cycle with additional lag. Working with a specialist intermediary that operates in Asian business hours and has direct London, Bermuda, and US market relationships compresses that cycle.
Emerge works with Malaysian, Singaporean, and wider Asian AI vendors, AI-enabled platforms, and enterprise AI deployers to access AI insurance across these markets. Whether you are an AI-native startup, an enterprise deploying third-party AI in customer-facing workflows, or a MAS-regulated financial institution working through the new AI Risk Management Guidelines, a specialist intermediary gives you access to a market that otherwise requires hours-staggered calls across three time zones.
FAQ
What is AI insurance?
AI insurance in Malaysia and Singapore is a specialist commercial policy that protects AI vendors, AI-enabled platforms, and enterprise AI deployers against third-party claims arising from AI-specific failures. It is structured across four categories of cover: affirmative AI liability insurance, AI performance warranties, affirmative AI endorsements on existing commercial lines such as cyber and media liability, and AI-aware directors' and officers' liability. Specialist capacity is concentrated at Lloyd's of London coverholders, Bermuda specialty carriers, and a small number of dedicated AI insurers in the United States, with emerging structured partnerships between global reinsurers and major cloud platforms.
Do my existing E&O, D&O, and cyber policies respond to AI-specific failures?
Often they do not. Most professional indemnity, D&O, and cyber wordings written before 2025 treated AI as a feature of the underlying service, not a separately underwritten risk, which leaves the response to AI-specific claims silent and frequently contested. Verisk has introduced generative AI exclusion endorsements for commercial general liability policies, effective 1 January 2026, which several major carriers have since filed with state regulators. Before assuming you are covered, review your current policies for generative AI, machine learning, and algorithmic decision-making exclusion language, and confirm whether affirmative AI wording is available on renewal.
Is AI liability insurance available for AI businesses in Malaysia and Singapore?
Yes. Specialist capacity is written through Lloyd's of London coverholders such as Armilla Insurance Services and Testudo Global, through the Artificial Intelligence Underwriting Company's US-based agent insurance programme, through Munich Re's aiSure product for performance warranties, and through structured partnerships such as Google Cloud with Beazley, Chubb, and Munich Re for affirmative AI cyber endorsements. Domestic Malaysian and Singaporean general insurers do not write meaningful AI-specific capacity today. Asian businesses access this cover by working with a specialist intermediary that operates across the London, Bermuda, and US markets.
What does AI liability insurance actually cover?
Typical affirmative AI liability wordings respond to third-party claims arising from AI-specific failures. The trigger events named in published wordings include hallucinations and inaccurate outputs, model drift and performance degradation, data leakage, IP infringement associated with AI outputs, defamatory or harmful AI-generated content, tool and action failures by AI agents, and regulatory violations tied to AI-specific rules. Cover typically includes legal defence costs and damages. Specific triggers, retentions, and sub-limits vary materially by carrier and by the controls the insured has in place at inception.
What is AIUC-1 and why does it matter for underwriting?
AIUC-1 is a certifiable standard for AI agent systems published by the Artificial Intelligence Underwriting Company in 2025, covering six domains: Data and Privacy, Security, Safety, Reliability, Accountability, and Society. Certification involves independent audit of controls and quarterly technical testing against adversarial scenarios. AIUC ties its own AI agent insurance pricing and availability directly to AIUC-1 audit results. Other AI insurers increasingly treat documented AI governance, whether under AIUC-1, ISO 42001, or NIST AI RMF, as an underwriting baseline rather than a differentiator.
Does the EU AI Act apply to Malaysian or Singaporean AI companies?
Often yes, through the Act's extraterritorial provisions. Under Article 2 of Regulation (EU) 2024/1689, the AI Act applies to providers placing AI systems on the EU market regardless of where they are established, and to providers and deployers in a third country where the output of the AI system is used in the EU. A Malaysian or Singaporean AI vendor selling to EU customers, or whose AI system produces outputs consumed in the EU, is within scope. Most substantive high-risk system obligations become applicable on 2 August 2026, with earlier milestones for prohibited practices and general-purpose AI models.
What are MAS's expectations on AI governance for Singapore financial institutions?
On 13 November 2025 MAS published a Consultation Paper on proposed Guidelines on AI Risk Management for Financial Institutions (P017-2025). The Guidelines apply to all MAS-regulated financial institutions and set expectations on board and senior management oversight of AI risk, AI inventory and materiality assessment, AI lifecycle controls from development through retirement, third-party AI tool governance, and capacity to manage generative AI and agentic AI risks. The Guidelines build on MAS's 2018 FEAT Principles and the Veritas framework, and complement IMDA's Model AI Governance Framework and the AI Verify initiative.
Who pays for AI insurance, the AI vendor or the enterprise deploying AI?
Both, depending on where the exposure sits. AI vendors buy liability cover to protect against claims from enterprise customers whose deployments fail, and performance warranty cover to back the KPIs their models commit to in customer contracts. Enterprise deployers buy cover when they embed third-party AI into critical workflows, when their own AI systems produce outputs consumed by customers, or when they fall under regulatory frameworks like MAS's AI Risk Management Guidelines or the EU AI Act that expect documented financial resilience. Many programmes are structured so vendor and deployer policies layer around each other rather than overlapping.
Emerge Conclusion
AI insurance in Malaysia and Singapore is moving from a frontier category into part of the baseline operating stack for AI vendors, AI-enabled platforms, and enterprise AI deployers. The product is real, with meaningful capacity written across Lloyd's of London, US specialist markets, and global reinsurer partnerships. At the same time, the regulatory posture across MAS's AI Risk Management Guidelines, Malaysia's National AI Office, and the EU AI Act's extraterritorial reach increasingly makes documented AI governance and financial resilience a supervisory and procurement expectation.
Emerge works with Malaysian, Singaporean, and wider Asian AI vendors, AI-enabled platforms, and enterprise AI deployers to access this cover. If you are building or deploying AI in Southeast Asia, the time to engage the market is before the next material release of your product or the next material expansion of your AI footprint, not after the next incident or renewal exclusion.
Talk to Emerge about AI insurance →
Disclaimer: This article provides general guidance on emerging insurance categories available in Asian and global insurance markets as of June 2026. Policy availability, wording, and terms vary significantly between carriers, especially for emerging risks. Regulatory frameworks referenced, including the Monetary Authority of Singapore's FEAT Principles, the Veritas framework, the December 2024 MAS information paper on AI model risk management, the November 2025 MAS Consultation Paper on Guidelines on AI Risk Management for Financial Institutions (P017-2025), IMDA's Model AI Governance Framework, the AI Verify toolkit, Malaysia's National Guidelines on AI Governance and Ethics (AIGE) and the National AI Office, Malaysia's Personal Data Protection Act 2010 (Act 709), and Regulation (EU) 2024/1689 (the EU AI Act), may be amended. Standards referenced, including AIUC-1, ISO/IEC 42001:2023, NIST AI Risk Management Framework 1.0, and MITRE ATLAS, are updated periodically. Always review specific policy wordings and consult qualified insurance, legal, and AI governance advisors before making coverage decisions.