By Josh Hammerquist, FSA, MAAA, Vice President & Principal, Lewis & Ellis and Muhammed Gulen, Esq., Vice President & Legal Consultant, Lewis & Ellis
The “NAIC Model Bulletin: Use of Artificial Intelligence Systems by Insurers,” issued by the National Association of Insurance Commissioners (NAIC), outlines an ethical framework for AI deployment in the insurance industry. The bulletin delineates ethical boundaries and regulatory standards for AI systems, emphasizing transparency, accountability, and non-discrimination.
For carriers, the bulletin serves as a guide to navigate the complex landscape of AI integration, impacting how insurers approach AI strategy, from product development to customer engagement. In this vein, decisions made by insurers using AI systems are subject to stringent regulatory oversight to ensure compliance with legal standards, including those pertaining to unfair trade practices. These standards mandate that insurers’ decisions are accurate, consistent, and free from unfair discrimination.
Given the inherent risks associated with AI, such as the potential for inaccurate or discriminatory outcomes, insurers must implement robust controls to mitigate these risks. The adoption of such controls is crucial in preventing adverse consumer outcomes, thereby aligning AI applications with the ethical and regulatory framework drafted by the NAIC. This proactive stance on compliance is important because insurers are already required to adhere to principles of fairness and integrity in their AI-driven decisions, even where the laws have not yet kept pace with the technology.
The bulletin’s directives are not just about enforcing regulations; they are about promoting a culture of ethical innovation that aligns with the industry’s integrity and public trust. Insurers are encouraged to adopt AI systems that enhance customer value while adhering to the principles of fairness and non-discrimination.
To ensure compliance with the NAIC Model Bulletin, insurers can undertake the following measures:
1. Thoroughly Review the Bulletin: Gain a deep understanding of the bulletin’s requirements, which include guidelines on transparency, accountability, and non-discrimination in AI systems.
2. Assess AI Systems: Evaluate all AI technologies in use to ensure they meet the standards outlined in the bulletin, reviewing algorithms, data sources, and decision-making processes.
3. Establish Governance Frameworks: Set up governance structures for overseeing AI deployment and operation, with processes for continuous monitoring, evaluation, and reporting.
4. Educate and Train Staff: Provide comprehensive training for staff on the principles and requirements of the bulletin to ensure awareness and adherence to compliance obligations.
5. Commit to Ethical AI Practices: Dedicate efforts to ethical AI practices by avoiding biases and ensuring fair treatment of all consumers. This may involve regular audits and updates to AI systems.
6. Stay Updated on Regulatory Changes: Keep informed of any updates or changes to the related regulations to remain compliant as the regulatory landscape evolves.
7. Document Compliance Efforts: Keep detailed records of all compliance efforts, including assessments, governance processes, staff training, and corrective actions taken.
8. Consult with Experts: Seek advice from legal and regulatory experts specializing in insurance and AI to navigate compliance requirements effectively.
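To make step 5 concrete, the regular audits it describes often include simple statistical checks on model outcomes. The sketch below is illustrative only and is not drawn from the bulletin: it compares approval rates between two hypothetical applicant groups and flags the results for human review when the ratio falls below a threshold (the 0.8 value mirrors the common "four-fifths rule" heuristic; the function names and data are assumptions for illustration).

```python
# Minimal sketch of an outcome-disparity audit (illustrative, not a
# prescribed method): compare approval rates across two groups and
# flag the model for review if the ratio is below a chosen threshold.

def approval_rate(decisions):
    """Fraction of decisions that were approvals (True)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower group's approval rate to the higher group's."""
    low, high = sorted([approval_rate(group_a), approval_rate(group_b)])
    return low / high if high > 0 else 1.0

def flag_for_review(group_a, group_b, threshold=0.8):
    """True if outcomes warrant a closer human review under the threshold."""
    return adverse_impact_ratio(group_a, group_b) < threshold

# Made-up decision logs (True = approved)
group_a = [True, True, True, False]    # 75% approved
group_b = [True, False, False, False]  # 25% approved
print(flag_for_review(group_a, group_b))  # ratio ≈ 0.33 < 0.8 → True
```

A real audit program would, of course, use the insurer's own data, protected-class definitions, and actuarially justified thresholds; this sketch only shows the shape of such a check.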
By integrating these steps into their AI strategies, insurers can ensure that their use of AI is in line with the NAIC Model Bulletin, demonstrating a commitment to ethical practices that resonate with customers and regulators alike. This approach not only maintains compliance but also capitalizes on the opportunities presented by AI, positioning insurers as leaders in the responsible use of AI in the industry.
The NAIC Model Bulletin notes that insurers should be prepared to provide comprehensive documentation and information regarding their use of third-party AI systems, predictive models, or data.
This includes:
1. Due Diligence: Insurers should expect to present details of the due diligence conducted on third parties and their data, models, or AI systems. This ensures that the third-party services align with the insurer’s standards and regulatory requirements.
2. Contracts: Documentation of contracts with third-party AI systems, models, or data vendors should be available. These contracts should cover terms related to representations, warranties, data security and privacy, data sourcing, intellectual property rights, confidentiality, disclosures, and cooperation with regulators.
3. Audits and Confirmations: Insurers should maintain records of audits and confirmation processes performed to verify third-party compliance with contractual and regulatory obligations.
4. Validation and Testing Documentation: Insurers must keep documentation related to the validation, testing, and auditing of third-party services, including the evaluation of model drift, which refers to the change in model performance over time.
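The model-drift evaluation mentioned in item 4 can be as simple as tracking a model's performance against its validation-time baseline. The following sketch is a hypothetical illustration (the function names, tolerance, and data are assumptions, not part of the bulletin): it flags drift when recent accuracy falls more than a set tolerance below the baseline.

```python
# Illustrative drift check: compare recent accuracy against the
# accuracy measured when the model was validated.

def accuracy(predictions, actuals):
    """Fraction of predictions that match actual outcomes."""
    matches = sum(p == a for p, a in zip(predictions, actuals))
    return matches / len(actuals)

def drift_detected(baseline_accuracy, predictions, actuals, tolerance=0.05):
    """True if recent accuracy dropped more than `tolerance` below baseline."""
    return baseline_accuracy - accuracy(predictions, actuals) > tolerance

# Example: model validated at 90% accuracy; a recent batch scores 80%
recent_preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
recent_actual = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(drift_detected(0.90, recent_preds, recent_actual))  # 0.90 - 0.80 = 0.10 > 0.05 → True
```

In practice, insurers monitor drift on a schedule with production data and document the results as part of the validation records described above.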
By ensuring these measures are in place, insurers can demonstrate their commitment to maintaining the integrity and reliability of their AI systems, in compliance with the NAIC Model Bulletin’s guidelines. This proactive approach not only aligns with regulatory expectations but also reinforces the insurer’s dedication to ethical AI practices.
Published in the Fall 2024 issue of Insights Magazine.
PIMA® (Professional Insurance Marketing Association®) is a member-driven trade association focused exclusively on the affinity market.