AI Landmines Hidden in Your Contracts
Practical guidance for spotting AI risks in your agreements
You do not need to be an AI expert to face legal or regulatory problems caused by vague or outdated contract language. Artificial intelligence is now built into CRMs, HR tools, marketing platforms, analytics dashboards, and many other services.
This means your company may already be exposed to AI-related risks through its software and technology service contracts, even if no one has labeled them as such. Unlike the Shadow AI problem we covered here, these risks are in documents you have already signed.
Whether you are licensing software, integrating AI into your services, or building your own AI-powered platform, here are three areas to examine closely.
Landmine 1: Training Data Risk
The risk: Many AI vendors seek broad rights to use customer data to improve their products, which often includes training their models. Your confidential, proprietary, or regulated data can end up incorporated into systems used by other customers or even competitors, creating exposure to trade secret loss, privacy law violations, and reputational harm if the data is traced back to you.
What to watch for:
- Language granting unrestricted rights to your data.
- No restrictions on reusing your data for unrelated projects.
- No statement that the training data is sourced in compliance with intellectual property and privacy laws.
What to do:
- Limit data rights to what is necessary to provide the contracted services (a complementary technical control is sketched after this list).
- Require warranties that all training data, whether yours or the vendor’s, is lawfully obtained and non-infringing.
- Ask directly whether your data will be used for other customers’ models.
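Contract language is the primary control here, but a complementary technical step is to minimize what reaches the vendor in the first place: data that is never sent cannot be trained on. Below is a minimal Python sketch, with a hypothetical send_to_vendor placeholder standing in for whatever integration you actually use, that scrubs two obvious identifier types before records leave your systems. Real-world redaction needs a far broader ruleset and testing; this only illustrates the idea.

```python
import re

# Simple patterns for two common identifiers. Real-world redaction needs
# a much broader ruleset (names, addresses, account numbers, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def send_to_vendor(record: str) -> None:
    # Hypothetical placeholder for your actual vendor integration.
    print("sending:", record)

if __name__ == "__main__":
    raw = "Contact Jane at jane.doe@example.com or 555-123-4567 re: claim #88."
    send_to_vendor(redact(raw))
    # -> sending: Contact Jane at [EMAIL] or [PHONE] re: claim #88.
```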
Landmine 2: Licensing Problems in Models or Datasets
The risk: Many AI systems are built on open-source models, libraries, or datasets. Some carry restrictive licenses that can extend to your business. For example, a “viral” copyleft license (such as the GPL or AGPL) could require you to make your own proprietary code public, or a “non-commercial” license could block you from using the tool in your core business.
What to watch for:
- No disclosure of underlying components or datasets.
- Vague disclaimers such as “customer is responsible for ensuring rights to use outputs.”
- Silence on whether you can use generated outputs for commercial purposes without restriction.
What to do:
- Request a list of all third-party components and datasets, with their licenses (a first-pass inventory script is sketched after this list).
- Obtain a warranty that outputs can be used commercially without restriction.
- Include indemnification for licensing disputes tied to embedded code, models, or datasets.
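If a vendor will not produce a component list, you can at least begin the audit for anything running in your own environment. The sketch below, assuming a Python deployment, prints each installed package’s self-declared license metadata and flags keywords that often signal copyleft or use-restricted terms. It is a heuristic first pass using only what packages report about themselves, not a substitute for a proper software bill of materials or legal review.

```python
from importlib.metadata import distributions

# Keywords that often signal copyleft or use-restricted terms.
# This is a heuristic first pass, not legal analysis.
FLAG_WORDS = ("GPL", "AGPL", "NON-COMMERCIAL", "NONCOMMERCIAL", "CC-BY-NC")

def license_inventory():
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        version = dist.metadata.get("Version", "?")
        license_field = dist.metadata.get("License", "") or ""
        classifiers = dist.metadata.get_all("Classifier") or []
        declared = license_field + " " + " ".join(
            c for c in classifiers if c.startswith("License ::")
        )
        flagged = any(w in declared.upper() for w in FLAG_WORDS)
        marker = "  <-- review" if flagged else ""
        print(f"{name}=={version}: {declared.strip() or 'NO LICENSE DECLARED'}{marker}")

if __name__ == "__main__":
    license_inventory()
```

Dedicated tools (for example, the pip-licenses package or an SBOM generator) go deeper, but even this output turns a vague concern into concrete questions to put to a vendor.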
Landmine 3: Inadequate AI Warranties
The risk: Traditional software warranties focus on uptime and conformity to documentation. AI systems can fail in ways those warranties do not address, such as performance decline over time (model drift), biased decision-making, or non-compliance when laws change. Without specific protections, you may be paying for a system that is available but unreliable or legally risky.
What to watch for:
- Warranties that address uptime but not accuracy, bias, or compliance.
- No requirement to retrain or update the model when performance declines.
- No remedy if outputs become unreliable or unlawful.
What to do:
- Negotiate measurable AI performance standards and remedies for failure (one way to make “measurable” concrete is sketched after this list).
- Require compliance with applicable AI and sector-specific laws.
- Include obligations to retrain or adjust models if they no longer meet agreed-upon performance metrics.
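A performance warranty only has teeth if someone measures against it. The sketch below shows one way to operationalize an agreed metric: accuracy on a labeled sample is computed each reporting period, and a sustained shortfall against the contractual floor flags the point where a remedy clause (retraining, service credits, termination rights) would be invoked. The floor, window, and example data are all assumptions standing in for your negotiated terms.

```python
from collections import deque

# Hypothetical contractual terms: the model must hit 90% accuracy,
# and two consecutive failing periods trigger the remedy clause.
ACCURACY_FLOOR = 0.90
FAILING_PERIODS_ALLOWED = 2

def accuracy(predictions, labels):
    """Fraction of predictions that match ground-truth labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def monitor(period_results, floor=ACCURACY_FLOOR, window=FAILING_PERIODS_ALLOWED):
    """Yield an alert once the model misses the floor `window` periods in a row."""
    recent = deque(maxlen=window)
    for period, (preds, labels) in enumerate(period_results, start=1):
        score = accuracy(preds, labels)
        recent.append(score < floor)
        if len(recent) == window and all(recent):
            yield period, score  # time to invoke the remedy clause

# Example: period 1 passes, periods 2 and 3 miss the floor.
periods = [
    ([1, 1, 0, 1, 1, 1, 1, 1, 1, 1], [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]),  # 100%
    ([1, 0, 0, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),  # 80%
    ([1, 0, 0, 0, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),  # 70%
]
for period, score in monitor(periods):
    print(f"Period {period}: accuracy {score:.0%}; contractual remedy triggered")
```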
AI Contract Self-Check
Before you sign any agreement for software or technology services, ask:
1. Does this contract allow the vendor to use my data to train models for other customers?
2. Do I know every model, library, or dataset being used, and do I understand the license terms?
3. Does the warranty address performance, fairness, or compliance, or only system uptime?
If any answer is unclear, the contract needs closer review.
Why This Matters
These are not theoretical risks. They are issues that can often be identified and addressed before they become costly disputes or regulatory problems. The “What to do” bullet points in this post are starting points, not complete solutions; most situations call for additional due diligence, creativity, or industry-specific measures.
If you would like to review your existing software agreements or develop templates that address AI-specific risks, our team can help you identify and resolve these issues early.