AI Landmines, Part 2: The Limits of AI “Nutrition Labels”

Posted on Thursday, September 11th, 2025

Why system cards are a starting point, not a complete assessment

In "Before You Use AI, Read the Fine Print," we discussed how vendor terms can leave you exposed to privacy and liability risks. In this follow-up to our AI Landmines post, we focus on another potential landmine: treating AI "system cards" as if they provide a complete view of the technology.

System cards, sometimes called model cards or AI “nutrition labels,” are designed to provide quick insight into a tool’s capabilities, limitations, and data handling practices. They can be useful, but they are not intended to give you the full legal and operational picture.

When System Cards Can Help

When used properly, system cards can:

  • Summarize model design and training to clarify the tool’s intended purpose.
  • List capabilities and limitations so you can match the tool to your needs.
  • Highlight potential bias risks so you can plan additional oversight.
  • Describe data handling practices to help you assess privacy alignment.
  • Prompt questions about accountability by identifying the developer or vendor.

When They Can Mislead

Even a well-crafted system card can lead to misplaced confidence if it is treated as complete and objective:

  1. Marketing Disguised as Transparency
    The provider writes its own card, so it can emphasize strengths while downplaying weaknesses.
  2. Incomplete Privacy and Compliance Details
    Many cards leave out details about data retention, third-party access, or compliance with specific laws.
  3. Limited Discussion of Ethical and Legal Risks
    Cards may mention safeguards without explaining how they work or where they fall short.
  4. No Clear Accountability
    Few cards define who is legally responsible if the AI’s output causes harm.
  5. Outdated Information
    AI systems change quickly, but system cards may not be updated to reflect those changes.

How to Use System Cards Effectively

  • Verify claims through independent audits, third-party reviews, or your own testing.
  • Get privacy and compliance commitments in writing in your contracts.
  • Define liability in the agreement so responsibility is clear if something goes wrong.
  • Monitor outputs for bias and performance changes over time (a minimal example follows this list).
  • Maintain human oversight for high-stakes decisions.
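
If your team accesses an AI tool programmatically, parts of the testing and monitoring steps above can be automated. The following is a minimal sketch in Python, assuming a hypothetical call_model() wrapper around your vendor's API and a small set of labeled test prompts you maintain yourself; treat it as a starting point, not a complete monitoring program.

```python
# Minimal sketch of a recurring output check. All names here are
# illustrative: call_model() stands in for whatever SDK or HTTP call
# your vendor actually provides.

from dataclasses import dataclass


@dataclass
class TestCase:
    prompt: str
    expected_keyword: str  # a string a correct answer should contain


def call_model(prompt: str) -> str:
    # Stand-in for your vendor's API; replace with the real call.
    return "Paris is the capital of France."


def pass_rate(cases: list[TestCase]) -> float:
    # Fraction of test cases whose output contains the expected keyword.
    passed = sum(
        1 for c in cases
        if c.expected_keyword.lower() in call_model(c.prompt).lower()
    )
    return passed / len(cases)


if __name__ == "__main__":
    cases = [
        TestCase("What is the capital of France?", "Paris"),
        TestCase("Name the capital city of France.", "Paris"),
    ]
    rate = pass_rate(cases)
    # Run this on a schedule and alert when the rate drops; a real
    # monitor would also log results so you can track drift over time.
    print(f"{'WARNING' if rate < 0.9 else 'OK'}: pass rate {rate:.0%}")
```

Even a check this simple can surface the kind of silent model updates that an outdated system card will not mention.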

Conclusion

System cards are a useful starting point for AI due diligence. They should guide your questions, not replace deeper review. Use them alongside contract analysis, independent testing, and regular monitoring to cover the risks they do not address.

If you would like us to review how your AI vendors document their systems and whether your contracts address the risks system cards leave out, we can help you turn that initial documentation into a complete compliance plan.