AI Compliance Software RFP Template (2026): The Complete Procurement Toolkit
A 60-question AI governance RFP template with scoring rubric, contract clauses, and a 12-week procurement timeline — covering EU AI Act, NIST AI RMF, ISO/IEC 42001, SR 11-7, OSFI E-23, NAIC Model Bulletin, NYC Local Law 144, and Texas TRAIGA requirements.
By AI Compliance Vendors Editorial · Published April 26, 2026 · Last verified April 26, 2026
TL;DR
Enterprise procurement teams running a formal selection process for AI governance or AI compliance software need a structured RFP that maps directly to the regulatory obligations their organization faces. This guide delivers a complete, ready-to-issue template covering 60 mandatory technical questions across ten capability domains, a security and compliance questionnaire grounded in industry-standard frameworks, pricing and TCO disclosure requirements, customer reference standards, roadmap commitments, and contract clauses that legal counsel can use as a starting checklist. A 100-point scoring rubric ties every section to a defensible vendor comparison. The template is designed for GRC directors, enterprise IT sourcing managers, and legal and compliance leads who are accountable for the selection outcome and the ongoing regulatory posture that follows.
This guide is a companion to the AI compliance software procurement guide and the AI compliance vendor due diligence resource on this site.
When to Run an RFP (vs. PoC / Direct Purchase / Open RFI)
Not every AI governance software purchase warrants a full formal RFP. The decision turns on three variables: annual contract value, the number of credible vendors in the market, and your organization's regulatory exposure.
Use a formal RFP when:
- Annual contract value (including implementation and support) is expected to exceed $150,000, the threshold at which most enterprise procurement policies require competitive sourcing.
- Your organization is subject to two or more of the regulations described in Section A (EU AI Act, SR 11-7, NAIC Model Bulletin, OSFI E-23, TRAIGA, NYC LL 144, Colorado SB21-169), because vendor capability diverges materially across regulatory frameworks.
- More than three vendors have cleared a prior market scan or Request for Information (RFI).
- Internal audit or an external regulator has identified model governance as a control deficiency, creating a documented risk that must be addressed through a defensible selection process.
Use a limited PoC or direct negotiation when:
- Budget is under $50,000 annually and the use case is narrowly scoped (for example, a single-framework policy library for one business unit).
- Your organization already holds a master service agreement with a vendor whose platform has been expanded to cover AI governance through a module add-on.
Use an open RFI when:
- You are conducting market research before budget allocation, with no procurement decision expected within six months.
- You want to benchmark the four to eight vendors on the AI governance platform shortlist before narrowing to an RFP shortlist.
A hybrid approach is common: issue an RFI, shortlist three to five vendors, then issue an RFP to the shortlist simultaneously with a structured 30-day PoC. The PoC methodology is described in a dedicated section at the end of this guide.
RFP Timeline: 12-Week Reference Plan
| Week | Activities |
|---|---|
| 1–2 | Internal alignment: confirm requirements, stakeholders, budget envelope, and evaluation committee. Finalize this RFP document. |
| 3 | Issue RFP to shortlisted vendors. Set vendor Q&A window (written questions only, responses shared with all vendors simultaneously). |
| 4 | Vendor Q&A closes. Circulate anonymized Q&A log to all vendors. |
| 5–6 | Vendor response period. |
| 7 | Written responses due. Evaluation committee independently scores all submissions against the rubric in this document. |
| 8 | Reconcile scores. Select two to three vendors for demonstration and reference calls. Notify non-selected vendors. |
| 9 | Vendor demonstrations (max 2 hours each). PoC environments provisioned. |
| 10–11 | Parallel 30-day PoC. Reference calls completed. |
| 12 | PoC debrief. Final scoring. Contract negotiation authorized with preferred vendor. |
Compressed timelines are possible but introduce risk. A 10-week schedule is achievable; anything under eight weeks limits the depth of reference validation and PoC coverage.
Section A: Mandatory Technical Capabilities
Overview
Vendors must answer all 60 questions in this section. Responses must be in writing and supported by documentation (product screenshots, technical specifications, or third-party audit reports). Responses of "on roadmap" are permitted only for questions marked [R], and must include a committed delivery quarter. Unsubstantiated "yes" answers will be scored at zero.
A-1. Regulatory Framework Coverage
The platform must demonstrate native support for the following frameworks. "Native support" means the framework is modeled within the platform's policy library, risk taxonomy, or control mapping layer — not merely referenced in marketing materials.
(1) EU AI Act (Regulation (EU) 2024/1689, published July 12, 2024 in the Official Journal of the EU): Does the platform map obligations by role (provider, deployer, importer, distributor) and by risk tier (prohibited, high-risk, limited-risk, minimal-risk)? Does it track the phased implementation schedule, including the February 2, 2025 prohibited practices deadline, the August 2, 2025 GPAI obligations deadline, and the general application date of August 2, 2026? (EUR-Lex source)
(2) NIST AI Risk Management Framework 1.0 (NIST AI 100-1, January 2023): Does the platform operationalize all four core functions — GOVERN, MAP, MEASURE, and MANAGE — as defined by NIST, including the subcategories in Tables 1 through 4 of the framework document? Can it generate evidence artifacts mapped to specific GOVERN and MEASURE subcategories? (NIST source)
(3) ISO/IEC 42001:2023 (AI Management System, published December 2023): Does the platform support Plan-Do-Check-Act lifecycle management for AI systems and generate evidence aligned with ISO 42001 clauses? Can it export a gap assessment against ISO 42001 requirements? (ISO source)
(4) Federal Reserve / OCC SR 11-7 (Guidance on Model Risk Management, April 4, 2011): Does the platform support model inventory management with validation status, last-validation date, validation independence flags, and outcome analysis records consistent with SR 11-7 requirements? (Federal Reserve source)
(5) OSFI Guideline E-23 – Model Risk Management (final version published September 11, 2025, effective May 2027): Does the platform support the model lifecycle governance, model risk rating scale, and model inventory requirements described in OSFI E-23? Does it accommodate the requirement for an independent validation function? (OSFI source)
(6) NYC Local Law 144 (AEDT bias audit requirement, enforcement commenced July 5, 2023): For customers using AI in New York City hiring or promotion decisions, does the platform track bias audit status, audit date, summary publication, and candidate notice requirements under LL 144? (NYC DCWP source)
(7) Colorado SB21-169 (Restrict Insurers' Use of External Consumer Data, effective September 7, 2021, rules from January 1, 2023): For insurance-sector customers, does the platform support documentation of external data source risk management frameworks and bias testing results as required under Colorado SB21-169? (Colorado Legislature source)
(8) NAIC Model Bulletin on Use of AI Systems by Insurers (adopted December 4, 2023, adopted by 24+ states as of early 2026): Does the platform support the written AIS Program structure — governance, risk management and internal controls, internal audit, and third-party AI oversight — as described in the NAIC bulletin? (NAIC source)
(9) Texas TRAIGA (HB 149, signed June 22, 2025, effective January 1, 2026): Does the platform support documentation of AI system disclosures, prohibited-use screening, and the governance documentation required by the Texas Attorney General for investigations under TRAIGA? Does the safe harbor for NIST AI RMF or ISO 42001 compliance activate within the platform? (Texas Legislature source)
(10) Multi-framework crosswalk: Can the platform generate a unified control mapping that shows where a single organizational control satisfies obligations under two or more of the above frameworks simultaneously, reducing documentation redundancy?
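The crosswalk in question 10 is, at its core, a many-to-many mapping from organizational controls to framework obligations. The following sketch illustrates the data structure and the redundancy question an evaluator should be able to answer; all control names and framework labels are hypothetical placeholders, not any vendor's actual schema.

```python
# Illustrative crosswalk sketch. Control IDs, names, and obligation labels
# are invented for this example, not taken from a real platform.
CONTROLS = {
    "CTRL-007 Independent model validation before deployment": [
        ("SR 11-7", "Independent model validation"),
        ("OSFI E-23", "Independent validation function"),
        ("ISO/IEC 42001", "Operational planning and control"),
    ],
    "CTRL-012 Bias testing on hiring models": [
        ("NYC LL 144", "Annual bias audit"),
        ("Colorado SB21-169", "Bias testing of external data sources"),
    ],
}

def redundancy_report(controls):
    """For each control, count the distinct frameworks it satisfies.

    A count of 2 or more means the organization documents the control
    once rather than once per regulation -- the redundancy reduction
    question 10 asks about."""
    return {
        name: len({framework for framework, _ in obligations})
        for name, obligations in controls.items()
    }

print(redundancy_report(CONTROLS))
```

A vendor that can export this mapping (rather than merely display it) makes the evaluator's own gap analysis far cheaper.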
A-2. AI Model Inventory
(11) Does the platform maintain a centralized, structured inventory of all AI systems and models deployed across the organization, including model name, version, owner, business purpose, input data sources, output type, and deployment environment?
(12) Does the inventory support risk classification of each model (for example, high-risk under EU AI Act Annex III, or model risk tier under SR 11-7)?
(13) Does the platform alert owners when a model has not been reviewed or validated within the organization-defined review cycle?
(14) Does the inventory support full lifecycle tracking — from model intake through production deployment, monitoring, and decommissioning?
(15) Can the inventory be accessed via API so it can be synchronized with existing model registries, MLOps platforms, or CMDB tools?
(16) Does the platform support recording of third-party and vendor-supplied AI models separately from internally developed models, consistent with the vendor oversight obligations in SR 11-7, OSFI E-23, and the NAIC Model Bulletin?
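Questions 13 and 15 together imply a concrete test an evaluation team can run during the PoC: pull inventory records over the API and flag models whose last validation falls outside the review cycle. A minimal sketch, with an invented record shape (real platforms expose their own schema):

```python
from datetime import date, timedelta

# Hypothetical inventory records -- field names and dates are illustrative.
INVENTORY = [
    {"model": "credit-risk-scorer", "owner": "risk-analytics",
     "risk_tier": "high (EU AI Act Annex III)",
     "last_validated": date(2025, 3, 1)},
    {"model": "marketing-copy-llm", "owner": "marketing",
     "risk_tier": "minimal",
     "last_validated": date(2026, 2, 10)},
]

# Organization-defined review cycle (question 13).
REVIEW_CYCLE = timedelta(days=365)

def overdue_reviews(inventory, today):
    """Return the models whose last validation is older than the cycle."""
    return [m["model"] for m in inventory
            if today - m["last_validated"] > REVIEW_CYCLE]

print(overdue_reviews(INVENTORY, date(2026, 4, 26)))  # → ['credit-risk-scorer']
```

If the platform's alerting (question 13) does not surface the same model this trivial check finds, that is a PoC finding worth recording.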
A-3. Policy Authoring
(17) Does the platform include a built-in policy authoring tool that allows GRC teams to create, version, publish, and retire AI governance policies without requiring developer involvement?
(18) Does the policy engine support role-based attestation, so that individual model owners, data scientists, and business unit leads can attest compliance with specific policy requirements on a scheduled basis?
(19) Can policies be linked directly to the regulatory framework obligations in A-1, so that a change in regulation (for example, a new OSFI circular) triggers a flagged policy review?
(20) Does the platform maintain a version history of all published policies with timestamps and approver records suitable for production in a regulatory examination?
A-4. Fundamental Rights Impact Assessment (FRIA) and AI Impact Assessment (AIIA) Support
(21) Does the platform include a FRIA template or workflow aligned with the requirements of EU AI Act Article 27, covering: description of deployer processes, period and frequency of use, categories of affected persons, specific risks of harm, human oversight measures, and risk materialization actions? (EUR-Lex Article 27 source)
(22) Does the platform support general AI Impact Assessments (AIIAs) beyond the EU FRIA scope, including assessments for systems not classified as high-risk under the EU AI Act? See also the AI Impact Assessment template for comparison benchmarks.
(23) Can FRIA/AIIA outputs be exported in a machine-readable format (PDF, JSON, or DOCX) and attached to the EU AI Act registration database records or provided to market surveillance authorities?
(24) Does the platform allow a FRIA to be linked to a GDPR Data Protection Impact Assessment (DPIA) so that overlapping elements are not duplicated, consistent with Article 27(4) of the EU AI Act?
A-5. Vendor Risk Module
(25) Does the platform include a dedicated AI vendor risk module that allows procurement teams to assess, score, and monitor third-party AI systems and data providers used within the organization?
(26) Does the vendor risk module support sending and receiving standardized questionnaires (for example, aligned with the Shared Assessments SIG or Cloud Security Alliance CAIQ) to AI vendors and aggregating responses into a vendor risk register?
(27) Does the module support continuous monitoring of AI vendors — for example, by ingesting news feeds or cyber threat intelligence — so that risk scores are updated between formal assessment cycles?
(28) Does the module include contract compliance tracking, allowing teams to verify that AI vendor contracts contain required provisions (audit rights, sub-processor lists, DPA clauses) before model deployment is approved? See the AI vendor due diligence questionnaire template for a parallel vendor assessment resource.
A-6. Testing and Red-Teaming
(29) Does the platform support structured adversarial testing of AI systems, including the ten risk categories in the OWASP Top 10 for LLM Applications (v2025, released November 18, 2024): Prompt Injection (LLM01), Sensitive Information Disclosure (LLM02), Supply Chain vulnerabilities (LLM03), Data and Model Poisoning (LLM04), Improper Output Handling (LLM05), Excessive Agency (LLM06), System Prompt Leakage (LLM07), Vector and Embedding Weaknesses (LLM08), Misinformation (LLM09), and Unbounded Consumption (LLM10)? (OWASP source)
(30) Does the platform align testing coverage to the MITRE ATLAS adversarial machine learning knowledge base (as of the v5.1.0 November 2025 release, documenting 16 tactics and 84 techniques targeting AI systems)? (MITRE ATLAS source)
(31) Does the platform support scheduling and tracking of red-team exercises against production AI systems, with results stored in the platform's audit trail?
(32) Does the platform produce a red-team or adversarial testing report that can be shared with a regulator or included in a vendor's EU AI Act technical documentation package?
A-7. Production Monitoring
(33) Does the platform integrate with production AI systems to receive real-time or near-real-time performance metrics, including model drift indicators, output quality metrics, and error rate thresholds?
(34) Can the platform trigger alerts or automated workflow actions (for example, a policy review task or an escalation to the model owner) when a monitoring threshold is breached?
(35) Does the platform support the six-month log retention requirement for deployers of high-risk AI systems under EU AI Act Article 26(6)?
(36) Does the platform support incident tracking for AI-related incidents, with root cause analysis fields and a link from incident record to the affected model's inventory entry?
A-8. Audit Evidence Export
(37) Can the platform generate a structured audit evidence package — including policy attestations, risk assessment outputs, testing records, monitoring logs, and incident records — formatted for production to an external regulator, internal audit function, or conformity assessment body?
(38) Does the audit export support configurable date ranges so that teams can produce evidence for a specific examination period without exporting the entire platform history?
(39) Does the platform maintain an immutable audit trail of all user actions (policy edits, risk assessments submitted, approvals granted) with timestamps and user IDs, preventing retroactive modification of compliance records?
(40) Does the platform support the EU AI Act Article 26(5) obligation for deployers to keep logs of high-risk AI system operations and make them available to competent authorities on request?
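"Immutable audit trail" in question 39 is usually implemented by hash-chaining: each entry embeds the hash of the previous entry, so any retroactive edit breaks every subsequent hash. The sketch below shows the technique only — it is not any vendor's implementation, and the entry fields are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail, user, action):
    """Append a hash-chained entry; fields here are illustrative."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    entry = {
        "user": user,
        "action": action,
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    # Hash the entry body (everything except the hash itself).
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    trail.append(entry)
    return trail

def verify(trail):
    """Recompute every hash; any retroactive modification is detected."""
    prev = "0" * 64
    for e in trail:
        body = {k: v for k, v in e.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if e["prev_hash"] != prev or e["hash"] != recomputed:
            return False
        prev = e["hash"]
    return True

trail = []
append_entry(trail, "j.doe", "policy POL-4 approved")
append_entry(trail, "a.lee", "risk assessment RA-19 submitted")
assert verify(trail)
trail[0]["action"] = "policy POL-4 rejected"  # attempted back-dating
assert not verify(trail)
```

A useful RFP follow-up is to ask the vendor which mechanism (hash chain, WORM storage, append-only ledger) backs their immutability claim, since "immutable" without a mechanism is a marketing term.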
A-9. SSO, RBAC, and SOC 2
(41) Does the platform support Single Sign-On (SSO) via SAML 2.0 or OpenID Connect, compatible with enterprise identity providers including Microsoft Entra ID (formerly Azure AD), Okta, and Ping Identity?
(42) Does the platform implement role-based access control (RBAC) with at minimum the following distinct roles: platform administrator, GRC/compliance manager, model owner, business unit reviewer, and read-only auditor?
(43) Does the platform hold a current SOC 2 Type II report? Specify the audit period, scope (availability, security, confidentiality, processing integrity, privacy), and the name of the issuing audit firm. Provide the report under NDA as part of the response.
(44) Does the platform support multi-factor authentication (MFA) enforcement for all user accounts?
(45) [R] If the platform does not currently hold ISO/IEC 27001 certification, is this on the vendor roadmap? If so, provide a committed certification target date.
A-10. Data Residency
(46) Does the platform offer data residency options for the European Economic Area (EEA), United Kingdom, United States, Canada, and Australia, as separate deployment configurations?
(47) Does the platform support tenant-level encryption at rest using customer-managed keys (CMK), allowing the organization to control encryption key rotation and revocation?
(48) Does the platform provide a complete and current sub-processor list, including the name, country of incorporation, and data processing role of each sub-processor? How frequently is this list updated, and how are customers notified of changes?
(49) Does the platform log all data access events at the sub-processor level, sufficient to support a data residency audit or regulatory inquiry?
(50) Does the vendor offer a contractual commitment to process data only within the specified residency region, with no fallback to regions outside the agreed scope without prior written consent?
A-11 through A-12 (Additional Technical Questions)
(51) Describe the platform's API architecture. Does it expose a documented, versioned REST or GraphQL API that allows the organization's existing GRC tools or SIEM platforms to ingest compliance data programmatically?
(52) What is the platform's stated uptime SLA? Provide the last 12 months of uptime statistics for the production environment.
(53) Does the platform support custom risk scoring methodologies, allowing GRC teams to weight risk factors according to the organization's specific sector, regulatory profile, and risk appetite?
(54) Does the platform support localization — at minimum English, French, German, and Spanish — for global deployments?
(55) Describe the platform's disaster recovery (DR) architecture. What is the current Recovery Time Objective (RTO) and Recovery Point Objective (RPO)?
(56) Does the platform support integration with ServiceNow, Jira, or Microsoft Teams for workflow notifications and ticket creation from compliance events?
(57) Describe the vendor's AI-specific security architecture. How does the vendor ensure that customer compliance data and AI model metadata ingested into the platform is not used to train the vendor's own AI models or shared with other customers?
(58) Does the platform include an explainability or transparency reporting feature that generates human-readable descriptions of how a specific AI model makes decisions, suitable for disclosure to affected individuals?
(59) [R] Does the platform plan to support the EU AI Office's forthcoming FRIA questionnaire template, once published? If so, provide a committed delivery quarter.
(60) Provide a list of the five most significant platform updates released in the last 12 months, with release dates and descriptions. This is used to assess active development and roadmap execution cadence.
Section B: Security and Compliance Questionnaire
Vendors must complete this section in writing. The organization reserves the right to request an on-site or virtual security review for shortlisted vendors.
B-1. Third-Party Assessment Frameworks. Has the vendor completed, within the last 12 months, either of the following: (a) a Shared Assessments Standardized Information Gathering (SIG) questionnaire, an industry standard spanning 19 risk domains with a content library of 1,855 risk control questions (Shared Assessments source); or (b) a Cloud Security Alliance Consensus Assessments Initiative Questionnaire (CAIQ), which documents security controls as yes/no answers across 16 control domains aligned to the CSA Cloud Controls Matrix (CSA source)? Provide the completed SIG or CAIQ as an attachment.

B-2. NIST SP 800-171. For vendors handling Controlled Unclassified Information (CUI) on behalf of customers in regulated sectors: does the vendor comply with NIST Special Publication 800-171 Revision 3 (final, published May 2024), which sets requirements for protecting CUI in nonfederal systems across 17 control families? (NIST source) Provide your most recent System Security Plan (SSP).
B-3. SOC 2 Type II. Provide the most recent SOC 2 Type II report under NDA. The report must cover at minimum the Security trust service criterion. Audit period must have ended within the last 12 months. Describe any qualified opinions or exceptions noted and the remediation actions taken.
B-4. Penetration Testing. When was the most recent external penetration test conducted? Provide an executive summary of findings and remediation status. Confirm whether the penetration test scope included the vendor's AI/LLM components and any customer-facing APIs.
B-5. Vulnerability Disclosure. Does the vendor maintain a public responsible disclosure or bug bounty program? Provide the program URL. What is the vendor's average time from vulnerability report to patch release for critical-severity findings?
B-6. Data Breach History. Disclose any data breaches, security incidents, or unauthorized access events affecting customer data in the last 36 months, including the nature of the incident, number of affected customers, and regulatory notifications made.
B-7. Subprocessor Security. How does the vendor assess the security posture of its subprocessors? Does the vendor require subprocessors to hold SOC 2 Type II or ISO 27001 certifications?
Section C: Pricing and Total Cost of Ownership
Vendors must complete this section with binding, itemized pricing for a 3-year term. All prices must be in USD (or the customer's local currency if requested). No estimate ranges are acceptable; if pricing depends on configuration decisions, provide the pricing for the base and expanded configurations separately.
C-1. Pricing Model. Identify the pricing model: per-seat (named user or concurrent), per-AI-model-managed, flat enterprise license, or consumption-based. Explain how pricing scales as the organization's AI model inventory grows.
C-2. 3-Year Quote. Provide a line-item quote for Year 1, Year 2, and Year 3, including: platform license fee, implementation services (including data migration and integration), training (initial and annual), and all support tiers. Identify which items are fixed and which are indexed to inflation or usage.
C-3. Implementation Services. What does the vendor's standard implementation include? What is the estimated time-to-value (from contract signature to first productive use of the platform)? Is implementation performed by the vendor directly or by a systems integrator partner, and if the latter, is the partner's rate card included in the quote?
C-4. Support Tiers. Describe all available support tiers (for example, Business, Enterprise, Premier). For each tier, specify: hours of coverage, response time SLAs by severity level (P1 through P4), named customer success manager availability, and access to a dedicated support queue.
C-5. True-Up Clauses. Describe any true-up or overage mechanisms. If the organization exceeds its licensed model count or user count mid-term, what is the mechanism for reconciliation, and at what price?
C-6. Price Protection. What price protection does the vendor offer across the 3-year term? Is a cap on annual renewal increases available?
C-7. Exit Costs. If the organization terminates the contract at the end of Year 2, what costs apply? Does the vendor charge for data export or termination assistance services?
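The line items in C-1 through C-7 exist so the committee can compute a like-for-like 3-year TCO rather than compare headline license fees. A minimal sketch of that arithmetic follows; every figure is a hypothetical placeholder, not real vendor pricing.

```python
def three_year_tco(license_y1, annual_increase_pct, implementation,
                   annual_training, annual_support, exit_cost=0):
    """Sum a 3-year total cost of ownership.

    license_y1 grows by annual_increase_pct at each renewal (the cap
    requested in C-6); implementation and exit costs (C-3, C-7) are
    one-time items."""
    total = implementation + exit_cost
    fee = license_y1
    for _year in range(3):
        total += fee + annual_training + annual_support
        fee *= 1 + annual_increase_pct / 100
    return round(total)

# Vendor A: lower license fee, heavier implementation charge.
vendor_a = three_year_tco(90_000, 5, 60_000, 5_000, 12_000)
# Vendor B: higher license fee, implementation bundled in.
vendor_b = three_year_tco(110_000, 3, 0, 5_000, 15_000)
print(vendor_a, vendor_b)
```

Run with these invented inputs, the "cheaper" headline license (Vendor A) is not meaningfully cheaper over the term, which is exactly the distortion itemized 3-year quotes are meant to expose.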
Section D: Customer References
D-1. Reference Requirement. Vendors must provide a minimum of three (3) verifiable customer references. At least two references must be in the same industry vertical as the issuing organization and at similar revenue scale (within one order of magnitude). At least one reference must have been a customer for more than 24 months.
D-2. Reference Format. For each reference, provide: organization name, industry, approximate annual revenue or employee headcount, primary use case implemented on the platform, deployment date, and a named contact (name, title, and direct email or phone number). References to generic case studies without a named contact will not be accepted.
D-3. Reference Call Scope. The evaluation committee will conduct 30-minute reference calls and will ask about: implementation timeline vs. actual, ongoing support quality, regulatory examination or audit readiness experience, and whether the organization would repurchase.
D-4. Analyst Recognition. Optionally, provide analyst coverage (Gartner, Forrester, IDC) or industry awards received in the last 18 months. This is supplementary and not scored.
Section E: Roadmap and EU AI Act Readiness
E-1. August 2, 2026 Obligation Readiness. The general application date of Regulation (EU) 2024/1689 is August 2, 2026, at which point the full high-risk AI system obligations in Chapter III apply to deployers (EUR-Lex source). Provide a written statement describing how the vendor's platform currently supports, and plans by August 2, 2026 to support, the following deployer obligations: use per instructions (Article 26(1)), human oversight assignment (Article 26(2)), input data relevance (Article 26(4)), production monitoring (Article 26(5)), log retention for six months (Article 26(6)), FRIA for in-scope deployers (Article 27), and worker notification (Article 26(7)).
E-2. Published Changelog. Provide a link to the vendor's publicly accessible changelog or release notes. The changelog must show releases within the last 12 months. Vendors without a publicly accessible changelog will receive zero points for this item.
E-3. Regulatory Tracking Process. How does the vendor monitor changes to the regulatory frameworks in Section A-1? Who owns this process internally (for example, a regulatory affairs function, a legal team, or an external counsel relationship)? How quickly after a regulatory change is published is a platform update or mapping update released?
E-4. Product Roadmap Transparency. Provide a high-level product roadmap for the next 12 months, including committed delivery quarters for items marked [R] in Section A. Roadmap items described only as "future" without a committed quarter will not be scored.
Section F: Contractual Must-Haves
The following contract terms are required for this procurement. Vendors must confirm in their response that each term is acceptable; a vendor that cannot accept a term must explain its objection in writing. Silence on any term will be scored as a refusal.
F-1. Data Processing Agreement (DPA) with EU SCCs and UK IDTA. The vendor must execute a DPA that, for transfers of personal data from the EU/EEA, incorporates the European Commission's 2021 Standard Contractual Clauses (issued June 4, 2021, under GDPR Article 46) (European Commission source), and, for transfers of personal data from the UK, incorporates the UK International Data Transfer Agreement (IDTA, in force March 2022) or the EU SCCs with the ICO UK Addendum.
F-2. Sub-Processor List. The DPA must include a current, complete list of all sub-processors, updated at least quarterly, with advance notice (minimum 30 days) of any new sub-processor addition and the right of the customer to object.
F-3. Audit Rights. The contract must include the right of the customer (or its designated third-party auditor) to audit the vendor's compliance with the DPA and the security controls in Section B, at most once per year on reasonable notice, at the customer's cost, with the vendor providing reasonable cooperation.
F-4. EU AI Act Warranty. For vendors whose platform is used as a component within high-risk AI system deployments, the contract must include a warranty that the vendor: (a) has assessed whether its platform qualifies as an AI system, AI model, or tool under Regulation (EU) 2024/1689; (b) will cooperate with the customer to support the customer's compliance with deployer obligations under Articles 26 and 27; and (c) will provide, upon request, the technical documentation described in Article 11 and the transparency information and instructions for use required under Article 13.
F-5. Termination Assistance. On expiry or termination, the vendor must provide at least 90 days of termination assistance, during which all customer data (including audit evidence, policy records, risk assessments, and model inventory data) remains accessible and exportable in a non-proprietary format.
F-6. Source Code Escrow. For organizations classifying their AI governance platform as a critical operational dependency (particularly in financial services subject to operational resilience requirements), the contract must include a source code escrow arrangement with a recognized escrow provider, specifying release conditions including vendor insolvency, acquisition, or material breach.
F-7. Liability and Indemnification. Specify the vendor's limitation of liability (for example, a cap at 12 months of fees paid) and its indemnification obligations covering intellectual property infringement and personal data breaches caused by the vendor's negligence.
F-8. Governing Law and Jurisdiction. State the proposed governing law and dispute resolution forum. For EU-based customers, confirm that the vendor accepts EU member state jurisdiction for GDPR-related claims.
Scoring Rubric (100-Point Breakdown)
| Section | Category | Maximum Points |
|---|---|---|
| A-1 | Regulatory framework coverage (9 frameworks + crosswalk) | 20 |
| A-2 | AI model inventory | 5 |
| A-3 | Policy authoring | 4 |
| A-4 | FRIA/AIIA support | 4 |
| A-5 | Vendor risk module | 4 |
| A-6 | Testing and red-teaming | 4 |
| A-7 | Production monitoring | 4 |
| A-8 | Audit evidence export | 4 |
| A-9 | SSO/RBAC/SOC 2 | 3 |
| A-10 | Data residency | 3 |
| A-11/12 | Additional technical (questions 51–60) | 5 |
| B | Security and compliance questionnaire | 10 |
| C | Pricing and TCO | 10 |
| D | Customer references | 10 |
| E | Roadmap and EU AI Act readiness | 5 |
| F | Contractual must-haves (all 8 terms) | 5 |
| Total | | 100 |
Scoring scale for each question:
- Full credit: Capability demonstrated with supporting documentation (screenshots, third-party attestation, or product specification).
- Partial credit (50%): Capability confirmed in writing without documentation, or capability is on committed roadmap with a delivery quarter within 6 months.
- Zero credit: No response, response of "planned but no committed date," or response contradicted by documentation review.
The evaluation committee should score independently before reconciling to reduce anchoring bias. A minimum threshold score of 70 points is recommended for a vendor to advance to the PoC stage. Any vendor scoring zero on Section F (contractual must-haves) should be disqualified regardless of total score.
Regulatory framework scoring sub-rubric (A-1, 20 points):
Each of the nine frameworks (questions 1–9) is worth 2 points. The crosswalk question (10) is worth 2 points. For each framework: 2 points for documented, demonstrated native support; 1 point for confirmed support without documentation; 0 points for marketing claim only or no response.
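The rubric above reduces to simple arithmetic: per-question credits of 1.0 (full), 0.5 (partial), or 0 are averaged within each section, then scaled to the section's point weight. A sketch of that calculation, with an invented vendor response for illustration:

```python
# Section weights copied from the rubric table; the vendor's per-question
# credits below are invented for illustration.
WEIGHTS = {"A-1": 20, "A-2": 5, "A-3": 4, "A-4": 4, "A-5": 4, "A-6": 4,
           "A-7": 4, "A-8": 4, "A-9": 3, "A-10": 3, "A-11/12": 5,
           "B": 10, "C": 10, "D": 10, "E": 5, "F": 5}

def score_vendor(credits):
    """credits maps section -> list of per-question credits in {0, 0.5, 1}.

    Each section's average credit is scaled to its point weight; a
    section with no scored answers contributes zero."""
    total = 0.0
    for section, weight in WEIGHTS.items():
        answers = credits.get(section, [])
        total += weight * (sum(answers) / len(answers) if answers else 0)
    return round(total, 1)

vendor = {
    "A-1": [1] * 8 + [0.5, 1],          # 9 frameworks + crosswalk
    "A-2": [1] * 6,                      # full marks on inventory
    "B":   [1, 1, 0.5, 1, 1, 1, 1],      # one partial on B-3
}
print(score_vendor(vendor))  # unanswered sections drag the total down
```

Because each evaluator produces the same number from the same credits, independent scoring followed by reconciliation (as recommended above) compares judgments rather than arithmetic.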
Red Flags During Evaluation
The following vendor behaviors or documentation gaps are meaningful risk indicators. None are disqualifying on their own, but each warrants a direct question and escalation to legal or procurement leadership before proceeding.
Pricing available only under NDA before the PoC. Some vendors require NDA execution before sharing list pricing. While an NDA for the SOC 2 report is standard, withholding pricing from the evaluation process prevents genuine cost comparison and is a common tactic to compress negotiation timelines.
Refusal to commit to roadmap delivery dates. A platform vendor that cannot commit to delivery quarters for features described as "on roadmap" is signaling either an early-stage product, organizational uncertainty, or an intent to avoid accountability. This is particularly concerning for EU AI Act features given the August 2, 2026 deadline.
No publicly accessible security page or trust center. Enterprise SaaS vendors at a mature stage of development maintain a public security or trust page that references their certifications, uptime history, and responsible disclosure policy. Absence of this page is not necessarily a security failure, but it does indicate limited transparency investment.
SOC 2 Type I only (no Type II). A SOC 2 Type I report attests that controls are designed appropriately as of a point in time. A SOC 2 Type II report attests that those controls operated effectively over a period (minimum six months). For a compliance-sensitive platform holding audit evidence and model inventory data, Type II is the appropriate standard.
Customer references from unrelated industries or with no named contacts. References from organizations in unrelated sectors, or references listed without named contacts (for example, "a Fortune 500 bank" without a name or phone number), cannot be verified and provide no useful signal.
Sub-processor list not available or last updated more than 12 months ago. GDPR Article 28 and the 2021 EU SCCs require data controllers to be informed of sub-processor changes. A vendor that does not maintain a current sub-processor list is not operating in compliance with its own DPA obligations, which raises broader questions about its data governance practices.
No public changelog updated in the last six months. AI governance platforms in active development update their regulatory mappings, policy libraries, and framework coverage regularly. A stale changelog is a leading indicator that regulatory currency — which is the core product in this category — is not being maintained.
How to Run a 30-Day PoC Alongside the RFP
A parallel PoC is not a replacement for the RFP evaluation. It provides empirical evidence that supplements written responses and demonstrations. The following structure applies to a 30-day PoC running concurrently with the RFP evaluation phase (Weeks 9–11 in the timeline above).
Environment. Require each vendor to provision a dedicated PoC tenant with production-equivalent feature access. Sandbox environments with disabled features are not acceptable.
Data. Load the vendor environment with a representative set of your organization's real (or realistically anonymized) AI inventory records — typically 10 to 20 models — rather than synthetic or vendor-provided sample data. Realistic data surfaces integration gaps and UX friction that demos conceal.
Test scenarios (minimum five):
- Onboard a new AI model from intake through risk classification and policy attestation.
- Generate a FRIA for a model classified as high-risk under EU AI Act Annex III.
- Send a vendor risk questionnaire to a simulated third-party AI vendor and import their response.
- Configure a monitoring alert for a model drift threshold breach and verify the workflow notification.
- Export an audit evidence package for a specified calendar quarter and confirm it contains all required artifacts.
Scoring. Score each test scenario on a 1–5 scale: 1 (did not complete), 2 (completed with significant manual workaround), 3 (completed with minor friction), 4 (completed as expected), 5 (completed and exceeded expectation). Weight scenarios by their relevance to your regulatory exposure.
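The exposure-weighted scenario scoring described above reduces to a weighted average on the 1–5 scale. A minimal sketch; the scenario keys and weight values are illustrative assumptions, not values prescribed by the template:

```python
# Weight each PoC scenario by its relevance to your regulatory
# exposure, then compute the weighted average of the 1-5 results.
def weighted_poc_score(scores: dict[str, int], weights: dict[str, float]) -> float:
    """scores: scenario -> result on the 1-5 scale;
    weights: scenario -> relative regulatory relevance."""
    total_weight = sum(weights[s] for s in scores)
    return sum(scores[s] * weights[s] for s in scores) / total_weight

# Hypothetical results for the five minimum scenarios.
scores = {
    "model_onboarding": 4,
    "fria_generation": 3,
    "vendor_questionnaire": 5,
    "drift_alert": 2,
    "evidence_export": 4,
}
# An EU AI Act-exposed deployer might weight FRIA generation and
# evidence export most heavily.
weights = {
    "model_onboarding": 1.0,
    "fria_generation": 2.0,
    "vendor_questionnaire": 1.0,
    "drift_alert": 1.0,
    "evidence_export": 2.0,
}
print(round(weighted_poc_score(scores, weights), 2))  # 3.57
```

Recording the raw 1–5 result and the weight separately, rather than a single blended number, lets the committee re-weight later if the organization's regulatory exposure changes during the evaluation.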
Integration test. Attempt at least one API integration with an existing internal tool (ITSM, HRIS, or model registry). Measure actual integration effort against the vendor's stated estimate.
Support quality. Log at least two support tickets during the PoC and measure response time against the vendor's stated SLA. This is a reliable leading indicator of post-go-live support experience.
FAQ
1. Is an RFP legally required for this purchase? In most private-sector organizations, a formal RFP is a procurement policy requirement rather than a legal one. The threshold for mandatory competitive sourcing varies by organization. Check your procurement policy for the applicable dollar threshold. For publicly funded organizations (government agencies, universities, healthcare systems with public funding), competitive sourcing requirements may be statutory.
2. How many vendors should be on the RFP shortlist? Three to five is the practical range. Fewer than three limits competitive tension. More than five creates an evaluation burden that reduces the depth of each vendor assessment.
3. Should we require vendors to complete the entire SIG before shortlisting? No. The SIG, which covers 19 risk domains, is a significant vendor burden. Reserve it as a Section B request at the written response stage, after shortlisted vendors have already invested in the RFP process. For market scanning (RFI stage), the CAIQ-Lite — which covers the same 17 control domains in 138 questions compared to the full CAIQ's 295 (CSA source) — is a proportionate pre-shortlist tool.
4. What is a reasonable implementation timeline for an AI governance platform? For organizations with no prior AI governance tooling, 90 to 120 days from contract signature to first productive use (model inventory populated, first policy active, first risk assessment completed) is a reasonable expectation for a mid-market deployment. Enterprise deployments with complex integrations may require 180 days. Any vendor promising less than 60 days without a clearly scoped and limited implementation should be questioned.
5. How should we evaluate a vendor's EU AI Act readiness if the vendor does not operate in Europe? Even for US-headquartered organizations, the EU AI Act's extraterritorial scope applies if the output of the AI system is used in the EU, or if the deployer is established in the EU. Require all vendors to provide a written legal opinion or product analysis addressing their EU AI Act applicability and their roadmap for supporting deployer obligations.
6. What is the difference between an AI governance platform and an AI compliance platform? In practice, the terms are used interchangeably by vendors. AI governance platforms tend to emphasize policy authoring, model lifecycle management, and stakeholder accountability structures. AI compliance platforms tend to emphasize regulatory mapping, evidence collection, and audit readiness. The best platforms do both. Use the capability framework in Section A to evaluate functional coverage rather than vendor categorization.
7. Should we require the vendor to sign the DPA before issuing the RFP? No. The DPA is a contract artifact, not an RFP prerequisite. Require confirmation in Section F that the vendor accepts DPA terms as a condition of shortlisting, then execute the DPA as part of contract finalization with the preferred vendor.
8. How do we evaluate vendors on frameworks that are not yet in force (for example, OSFI E-23, effective May 2027)? Assess current coverage of analogous frameworks (SR 11-7 for model risk management) and evaluate the vendor's stated roadmap and regulatory tracking process for E-23. A vendor that has already built SR 11-7 support is structurally better positioned to build OSFI E-23 support than a vendor with no model risk management capability.
9. What is the right internal team composition for the evaluation committee? At minimum: a GRC/compliance lead (responsible for regulatory requirements), an IT security lead (responsible for Section B), a procurement or legal lead (responsible for Section C and F), and a business unit representative who will use the platform operationally. For financial services organizations, a model risk management representative is also recommended.
10. How do we handle a vendor that declines to answer specific questions citing competitive sensitivity? Treat any unanswered mandatory question as a zero for scoring purposes. A vendor that withholds responses to questions about its sub-processor list, SOC 2 report, or contractual terms is providing useful signal about its post-contract behavior. Partial responses or redacted responses for legitimate legal reasons (for example, redacting specific customer names in references) should be treated differently from blanket refusals.
Sources
- European Commission. Regulation (EU) 2024/1689 (EU AI Act). OJ L, 2024/1689. Published July 12, 2024. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689
- National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. January 2023. https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
- ISO/IEC. ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. Published December 2023. https://www.iso.org/standard/81230.html
- Board of Governors of the Federal Reserve System. SR 11-7: Guidance on Model Risk Management. April 4, 2011. https://www.federalreserve.gov/supervisionreg/srletters/sr1107.htm
- Office of the Superintendent of Financial Institutions Canada. Backgrounder: Guideline E-23 — Model Risk Management. September 11, 2025. Effective May 2027. https://www.osfi-bsif.gc.ca/en/news/backgrounder-guideline-e-23-model-risk-management
- New York City Department of Consumer and Worker Protection. Automated Employment Decision Tools (AEDT). Enforcement commenced July 5, 2023. https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page
- Colorado General Assembly. SB21-169: Restrict Insurers' Use of External Consumer Data. Effective September 7, 2021. https://leg.colorado.gov/bills/sb21-169
- National Association of Insurance Commissioners. Model Bulletin on Use of Artificial Intelligence Systems by Insurers. Adopted December 4, 2023. https://content.naic.org/insurance-topics/artificial-intelligence
- Texas Legislature. HB 149 — Texas Responsible Artificial Intelligence Governance Act (TRAIGA). Signed June 22, 2025. Effective January 1, 2026. https://capitol.texas.gov/BillLookup/History.aspx?LegSess=89R&Bill=HB149
- OWASP. OWASP Top 10 for Large Language Model Applications (v2025). Released November 18, 2024. https://owasp.org/www-project-top-10-for-large-language-model-applications/
- MITRE. MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems). https://atlas.mitre.org/
- Shared Assessments. Standardized Information Gathering (SIG) Questionnaire. https://sharedassessments.org/about-sig/
- Cloud Security Alliance. Consensus Assessments Initiative Questionnaire (CAIQ) and CAIQ-Lite. https://cloudsecurityalliance.org/artifacts/ccm-lite-and-caiq-lite-v4
- National Institute of Standards and Technology. NIST SP 800-171 Rev. 3: Protecting Controlled Unclassified Information in Nonfederal Systems and Organizations. Published May 2024. https://csrc.nist.gov/pubs/sp/800/171/r3/final
- European Commission. Standard Contractual Clauses (SCCs) for International Data Transfers. Issued June 4, 2021. https://commission.europa.eu/law/law-topic/data-protection/international-dimension-data-protection/standard-contractual-clauses-scc_en
- UK Information Commissioner's Office. International Data Transfer Agreement (IDTA). In force March 2022. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/international-transfers/
- Quarles & Brady LLP. Nearly Half of States Have Now Adopted NAIC Model Bulletin on Insurers' Use of AI. March 2025. https://www.quarles.com/newsroom/publications/nearly-half-of-states-have-now-adopted-naic-model-bulletin-on-insurers-use-of-ai
- Baker Botts. Texas Enacts Responsible AI Governance Act. July 2025. https://www.bakerbotts.com/thought-leadership/publications/2025/july/texas-enacts-responsible-ai-governance-act-what-companies-need-to-know
- artificialintelligenceact.eu. Article 26: Obligations of Deployers of High-Risk AI Systems. https://artificialintelligenceact.eu/article/26/
- artificialintelligenceact.eu. Article 27: Fundamental Rights Impact Assessment. https://artificialintelligenceact.eu/article/27/
- NAIC. AI Model Bulletin Adoption Map (as of April 1, 2026). https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-map-ai-model-bulletin.pdf
- Blakes Law. OSFI Releases Final Guideline E-23 for Model Risk Management. September 26, 2025. https://www.blakes.com/insights/osfi-releases-final-guideline-e-23-for-model-risk-management-and-ai-use-by-frfis/
- DLA Piper. Texas Adopts the Responsible AI Governance Act. June 25, 2025. https://www.dlapiper.com/insights/publications/2025/06/texas-adopts-the-responsible-ai-governance-act
- NIST AI Resource Center. NIST Artificial Intelligence Risk Management Framework. https://airc.nist.gov/Home
- aicompliancevendors.com. AI Compliance Software Procurement Guide. https://aicompliancevendors.com/guides/ai-compliance-software-procurement
- aicompliancevendors.com. AI Compliance Vendor Due Diligence. https://aicompliancevendors.com/guides/ai-compliance-vendor-due-diligence
- aicompliancevendors.com. AI Vendor Due Diligence Questionnaire Template. https://aicompliancevendors.com/blog/ai-vendor-due-diligence-questionnaire-template
- aicompliancevendors.com. Best AI Governance Platforms. https://aicompliancevendors.com/best/ai-governance-platforms
- aicompliancevendors.com. AI Impact Assessment Template. https://aicompliancevendors.com/guides/ai-impact-assessment-template
- Vectra AI. MITRE ATLAS: AI Security Framework with 16 Tactics and 84 Techniques. March 2026. https://www.vectra.ai/topics/mitre-atlas
- OWASP GenAI. OWASP Top 10 for LLM Applications — LLM Risks Archive. https://genai.owasp.org/llm-top-10/
- Giskard. OWASP Top 10 LLM Risk Categories: What Changed in 2025. https://www.giskard.ai/knowledge/owasp-top-10-for-llm-2025-understanding-the-risks-of-large-language-models
- Cloud Security Alliance. CAIQ Resources. https://cloudsecurityalliance.org/research/topics/caiq
- Securiti. Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems. https://securiti.ai/eu-ai-act/article-27/
For questions about this template or to submit it against a specific vendor engagement, contact the editorial team via [aicompliancevendors.com](https://aicompliancevendors.com).