Artificial intelligence (AI) is already part of your business—whether you’ve approved it or not. Your employees are using it to write emails, draft marketing content, and even make decisions that could affect your customers. That’s why every business needs an AI program now. A strong AI program isn’t just about reducing risk; it’s about unlocking AI’s full potential in a safe and strategic manner.
Below is a concise, actionable roadmap for helping your business establish a robust AI program.
- First Phase: “Discovery”
Your goal: understand how AI is already being used in your business (and how it could be further used to support it).
First, your business needs governance and accountability. Who will own your business’ AI decisions? Every business should designate a leader or team to oversee and manage AI use. Many small businesses are appointing a Chief AI Officer (CAIO), and medium to large businesses often benefit from appointing an AI steering committee. The CAIO—or equivalent governance body—should be responsible for AI governance, including development and maintenance of the AI program, risk management, vendor oversight, and compliance alignment.
Second, once appointed, your CAIO should generally reflect on the following:
- Employee Use: Survey your teams to capture the AI tools they currently use or want to use, and the business problems they aim to solve.
- Technology Profile: Identify each proposed AI system’s type (generative, predictive, etc.) and assign a risk tier based on (i) the types of data that will be input into the AI system, (ii) the impact of the AI system’s output, and (iii) the applicable law (a simplified tiering sketch follows this list).
- Data Exposure: Identify the types of information that could be input into the AI system (e.g., client personal information, confidential marketing information, confidential financial records, trade secrets).
- Legal Landscape: Identify the applicable laws governing the data types identified (e.g., the EU GDPR, VCDPA, CCPA, PIPEDA, etc.).
- Risk Appetite and Compliance Framework: Identify how much risk your business is willing to accept and which compliance framework will guide the development of the AI program (e.g., the EU AI Act, NIST’s AI RMF, ISO 42001, etc.).
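For businesses that want to track this inventory systematically, here is a minimal sketch in Python of how the three factors above (data types, output impact, applicable law) could be recorded and mapped to a provisional risk tier. The profile fields, tier names, and scoring rules are hypothetical placeholders, not a standard; your actual tiers should follow whichever compliance framework and legal advice your business adopts.

```python
from dataclasses import dataclass

# Hypothetical data categories treated as sensitive -- placeholders only.
SENSITIVE_DATA = {"client personal information", "financial records", "trade secrets"}

@dataclass
class AISystemProfile:
    name: str                  # e.g., a chatbot or drafting assistant
    system_type: str           # "generative", "predictive", etc.
    data_inputs: set[str]      # categories of data employees may input
    output_impact: str         # "low", "medium", or "high" impact on individuals
    applicable_laws: set[str]  # e.g., {"GDPR", "CCPA"}

def provisional_risk_tier(profile: AISystemProfile) -> str:
    """Assign a rough tier from (i) data sensitivity, (ii) output impact,
    and (iii) whether a strict legal regime applies. Illustrative only."""
    handles_sensitive_data = bool(profile.data_inputs & SENSITIVE_DATA)
    strict_regime = bool(profile.applicable_laws & {"GDPR", "EU AI Act"})
    if profile.output_impact == "high" or (handles_sensitive_data and strict_regime):
        return "Tier 1 - requires legal review before use"
    if handles_sensitive_data or profile.output_impact == "medium":
        return "Tier 2 - approved with controls"
    return "Tier 3 - low risk, monitor"

# Example: a marketing drafting assistant that never touches client data.
marketing_tool = AISystemProfile(
    name="Drafting assistant",
    system_type="generative",
    data_inputs={"marketing copy"},
    output_impact="low",
    applicable_laws={"CCPA"},
)
print(provisional_risk_tier(marketing_tool))  # -> "Tier 3 - low risk, monitor"
```

A spreadsheet with the same columns works just as well; the point is that each AI system gets a documented profile and a risk tier before anyone relies on it.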
- Second Phase: “Build” your AI Framework
Your goal here is to build an AI Framework that fits your business and translates the insights identified in the Discovery Phase into concrete policies and procedures.
Generally, the AI Framework should account for at least the following themes:
- Hallucination Mitigation. AI systems can generate “hallucinations”: false or fabricated responses presented as if they were true. Your Framework should (i) classify the types of hallucinations that could occur, and (ii) identify and implement appropriate mitigating controls.
- Ethical Use. Professionals, such as lawyers, doctors, and public accountants, generally have certain rules of professional conduct they need to consider prior to using AI. The Framework should account for such ethical rules, and it should lay out principles and safeguards that protect against the replacement of human decision-making.
- Transparency and Explainability. If your business intends to use AI to make decisions about others, the Framework should set out principles requiring the business to be prepared to explain the underlying logic, assumptions, and limitations of those AI systems to the affected individuals. The CAIO should also draft a plain-language notice that explains this AI use.
- Data Governance. To protect your business’ confidential and personal information, your Framework should educate employees on how AI systems work. Generally, these systems learn by analyzing text and detecting its patterns and structures. An AI system may use and retain inputs and the resulting outputs to learn, and there is a risk that it could reproduce that learned information in output generated for others. Your business should therefore revisit its data governance policies and understand what information is input into each AI system and whether that information remains in the system.
- Intellectual Property (IP) Rights. The Framework should outline procedures for (i) maintaining copyright protection over pre-existing IP assets; and (ii) evaluating whether newly created AI-assisted IP assets should be protected.
- AI Provider Management. The Framework should include a vendor-risk matrix that evaluates (i) the AI provider’s ability to maintain the confidentiality, integrity, and availability of inputs and outputs; (ii) the sensitivity of the inputs, the outputs, and the use case; (iii) the appropriate contractual clauses; and (iv) the applicable law (an illustrative matrix is sketched after this list).
Contracts the CAIO should look for include Service‑Level Agreements (SLAs), Non-Disclosure Agreements (NDAs), and Data Processing Agreements (DPAs).
- Incident-Response & Redress. The CAIO should draft an incident response plan that: (i) defines detection triggers (e.g., consequential hallucinations, data leakage, etc.); (ii) defines escalation paths from employees to your business’ incident triage team (and then to the incident response team, as necessary); (iii) defines appropriate steps to contain the incident; and (iv) sets out communication templates for affected parties.
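For illustration, below is a minimal, hypothetical sketch in Python of how the vendor-risk matrix described above could combine its four factors into a single rating. The criteria, scoring scale, and thresholds are assumptions made for the example; many businesses capture the same matrix in a spreadsheet instead.

```python
from dataclasses import dataclass

@dataclass
class VendorAssessment:
    provider: str
    # Scores from 1 (weak) to 5 (strong) for the provider's commitments to
    # confidentiality, integrity, and availability of inputs and outputs.
    confidentiality: int
    integrity: int
    availability: int
    data_sensitivity: int       # 1 (public data) to 5 (personal data / trade secrets)
    contract_clauses: set[str]  # e.g., {"NDA", "DPA", "SLA"}
    strict_law_applies: bool    # e.g., GDPR or EU AI Act obligations

def vendor_risk_rating(v: VendorAssessment) -> str:
    """Combine the four matrix factors into a rough rating.
    Thresholds are hypothetical placeholders."""
    security_score = (v.confidentiality + v.integrity + v.availability) / 3
    missing_clauses = {"NDA", "DPA", "SLA"} - v.contract_clauses
    if v.data_sensitivity >= 4 and (security_score < 4 or missing_clauses or v.strict_law_applies):
        return "High risk - escalate to legal counsel"
    if missing_clauses or security_score < 3:
        return "Medium risk - remediate contract or controls first"
    return "Low risk - approve and monitor"

example = VendorAssessment(
    provider="Example AI Provider",
    confidentiality=4, integrity=4, availability=5,
    data_sensitivity=5,
    contract_clauses={"NDA", "SLA"},   # no DPA signed yet
    strict_law_applies=True,
)
print(vendor_risk_rating(example))  # -> "High risk - escalate to legal counsel"
```

Whatever form the matrix takes, a “high risk” result should route the vendor to legal counsel before any contract is signed.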
Remember that applicable law may also set out specific requirements that must be reflected in the Framework. It would be prudent to seek advice from legal counsel before finalizing the Framework.
Once your business has an AI Framework, but before employees use any procured AI system, the CAIO should develop specific policies and procedures suited to the business’ use cases. For example, if employees are to use an AI system, there should be rules governing their use of that system (i.e., an Acceptable Use Policy (AUP)). The AUP is meant not merely to enforce your business’ internal usage rules, but also to notify employees of the contractual rules imposed by the third party that provides the AI system to your business.
- Third Phase: “Communicate”
Once the Framework and all the applicable policies and procedures have been built, your business needs to communicate them to its employees, customers, and other stakeholders. After all, even the best frameworks fail if the relevant stakeholders don’t know about, understand, or trust them.
Your business should consider the following:
- Roll‑out Sessions: Educate employees with interactive workshops highlighting the new policies and procedures and the rationale behind them.
- Training Modules: Provide role-specific learning for defined groups of employees (e.g., sales, legal, medical staff, etc.) that covers permissible use, data handling, and escalation procedures.
- External Transparency: Publish a concise summary of your AI governance on your website or in client contracts to reinforce trust.
- Fourth Phase: “Monitor and Evolve”
The last step is to monitor the Framework and its policies and procedures for compliance, enforcement, and performance, and to monitor each deployed AI system for its hallucination rate and overall performance. The CAIO should also gather user feedback to surface usability gaps and adoption barriers.
The CAIO should conduct audits against the Framework at least annually (the frequency and scope of the audits depend on the applicable law, the sensitivity of the information, and the use case). The CAIO should continually re-assess risk tiers, update policies, incorporate emerging best practices, and keep your lawyer on speed dial.
Davis, Burch & Abrams is a business law firm that helps companies develop practical, compliant AI policies. If you have any questions about this article—or if your business needs guidance to stay current with AI and privacy laws in the United States or Canada—please reach out to the author, Savvas Daginis, at [email protected].
This article is for informational purposes only and should not be seen as legal advice. You should consult with a lawyer before you rely on this information.