
They Wrote the AI RFP Playbook. We Built the AI Agents That Run It.

Four diverse professionals around a conference table reviewing stacks of vendor RFP proposals. Two flying teal AI agents labeled A and B assist them. A third Swix-branded helper agent sits near a laptop showing a scoring dashboard. A floating holographic scorecard above the table ranks the vendors as Top Tier, Strong Contender, and Consideration.
Agent A and Agent B handle the prep. The Swix artifact runs the meeting. The committee still makes the call.

A few weeks ago, Janette Roush, SVP of Innovation and Chief AI Officer at Brand USA, and Skylar Clark put on a session in their Agents of Change webinar series called "Using AI to Evaluate RFP Responses Without Losing the Human Touch." If you missed it, the recording is on Brand USA's site and it's worth your time.


Janette laid out the cleanest framework I've seen for evaluating vendor proposals with AI: shared project, master evaluation prompt, vendor-by-vendor analysis, cross-vendor comparison, committee scoring artifact at the end. Every piece earns its place, and the framing about keeping humans in the loop is exactly right.


So I spent a few days translating it into something a DMO can pick up and run out of the box.


The whole flow at a glance


  • Step 1: Run Flow A agent once per vendor. Upload the RFP and that vendor's proposal. The AI returns a full scorecard with compliance check, weighted section scoring, evidence citations, an AI flag, six to eight interview questions, and a JSON payload for the scoring tool. Six vendors means six runs, about 3 minutes each.

  • Step 2: Run Flow B agent once across all the scorecards. Upload each Flow A output. Flow B produces a Committee Briefing document: executive summary for the board, finalist recommendations with risk profiles, vendors not recommended with rationale, open strategic questions for the committee to debate, an interview scoring template for round 2, and recommended next steps.

  • Step 3: Open the scoring artifact in your browser. Paste each vendor's Flow A output in. The artifact pulls the JSON automatically, no formatting or clean-up needed (see the sketch after this list). Add your committee members.

  • Step 4: Your committee scores only where they disagree. Empty cells inherit the AI's score. The Spread column flags any section where the committee splits by two or more points, so the meeting goes straight to the calls that matter.

  • Step 5: Export a CSV when you're done. Full record of every score, override, and compliance check for the procurement file.
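
For the technically curious: the JSON payload from Step 1 is what makes Step 3 paste-and-go. The exact schema is internal to the artifact, so treat this TypeScript sketch as illustrative only; the field names are hypothetical.

// Illustrative shape of the Flow A payload the artifact ingests.
// Field names are hypothetical; the real schema may differ.
interface VendorScorecard {
  vendor: string;
  overview: string;                  // the 2-3 sentence overview
  flag: "Strong contender" | "Dark horse" | "Style over substance" | "Red flag";
  compliance: { requirement: string; status: "met" | "failed" | "unclear"; evidence: string }[];
  sections: { name: string; weight: number; aiScore: number; evidence: string }[];
  interviewQuestions: string[];      // six to eight vendor-specific questions
}

// The artifact scans the pasted Flow A output for the embedded JSON,
// which is why no formatting or clean-up is needed on paste.
function extractScorecard(pasted: string): VendorScorecard | null {
  const match = pasted.match(/\{[\s\S]*\}/); // grab the outermost JSON object
  if (!match) return null;
  try {
    return JSON.parse(match[0]) as VendorScorecard;
  } catch {
    return null;
  }
}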


That's the whole loop. Flow A scores each proposal. Flow B writes the briefing for stakeholders not in the room. The artifact runs the meeting. Three pieces, no overlap. Drop your RFP in on Monday, walk into committee on Friday with everything ready.


Screenshot of the Committee Scoring Artifact's Overview tab. The 1-to-5 Scoring Scale is shown as five color-coded rows, from Exceptional in green down to Unacceptable in red. Below it, four Recommendation Bands: Strongly Recommend (4.25 to 5.00, green), Recommend (3.50 to 4.24, blue), Neutral (2.50 to 3.49, amber), and Do Not Recommend (below 2.50, red).
The artifact Overview tab. A 30-second orientation for any committee member who's about to score.

Why

Janette's framework is the most complete public blueprint I've seen for AI-assisted RFP evaluation. The webinar gives DMOs the playbook and the strategy. We focused on making it easy to run, so the project lead doesn't have to set up shared projects, write a master prompt, run fresh chats per vendor, or build a Claude artifact on their own.


The playbook came first. Everything we created is turnkey and sits on top of it.


What we created


Two ActionFlow agents and a committee scoring tool on Swix, all running the framework Janette and Skylar demoed.


Screenshot of Flow A in the Swix ActionFlow builder. Eleven sequential actions are shown: file uploads for the RFP and the vendor proposal, vendor name input, then AI generation steps for analyzing the RFP, compliance check, detailed scoring, references assessment, final scorecard, human next steps, artifact payload, and the Show User Output step.
Flow A in Swix. Eleven actions, one full proposal evaluation. Run it once per vendor.

The first agent, Flow A, handles per-proposal evaluation. Upload an RFP and a vendor's response, get back a full scorecard: compliance check, weighted section scoring with cited evidence, an AI flag (Strong contender, Dark horse, Style over substance, Red flag), six to eight vendor-specific interview questions, targeted manual checks for the human reviewer, and a 2-3 sentence overview of where this vendor actually lands.


The second agent, Flow B, takes all the scorecards and produces a Committee Briefing document: a board-ready executive summary, finalist recommendations with risk profiles, a clean list of vendors not advancing with the reasoning, the open strategic questions the committee actually needs to debate, and an interview scoring template for the second round. It's the document you send to stakeholders who aren't in the room and to committee members who want to prep before the meeting.


Screenshot of Flow B in the Swix ActionFlow builder. Ten sequential actions are shown: RFP project name, proposal count, six file uploads for scorecards from Flow A, the Committee Briefing generation step, and the Show User Output step.
Flow B in Swix. Ten actions, one full committee brief. Run it once with uploaded scorecards from Flow A.

The third piece is where it gets fun. A standalone HTML scoring tool that opens in any browser. No install, no logins, no shared accounts. Try it free at swix.ai/rfp-scoring-artifact. Open it on a desktop, click "Demo Data" in the header to load sample vendors, and explore how the AI-first override scoring works.


How it actually plays out in the room


Screenshot of the Committee Scoring Artifact's Proposal Summaries section. A row of vendor chips runs across the top with one vendor selected. Below, a card shows the active vendor's Overview paragraph, Key Highlights with green checkmarks, Areas of Concern with red icons, and a View Full Proposal button at the bottom.
Click any vendor chip. The AI's full read on that proposal opens below, with a one-click link to the source document.

Open the artifact on the morning of your committee meeting. The Overview tab walks any new member through how scoring works in about 30 seconds: the 1-5 rubric, the recommendation bands, the AI-first override model. Nobody has to guess at the rules during the meeting.


Click any vendor chip and you get the at-a-glance read your committee actually needs: a 2-3 sentence overview, the section scores where this vendor stood out (with the AI's cited evidence), the areas of concern (compliance failures, low-scoring sections, anything else worth flagging), and a one-click button to open the full proposal PDF from wherever you've stored it. SharePoint, Google Drive, your RFP portal, whatever works.


Screenshot of the Committee Scoring Artifact's Section-by-Section Scoring grid. Each row is a scoring category with the AI's score shown as a placeholder. Two evaluator columns have a few override values entered. A Spread column flags one section with a 2.0 divergence between committee scores.
Every cell shows the AI score. You only type a value if you disagree. The Spread column flags where the committee splits, so the meeting goes straight to the calls that matter.

The actual scoring grid is where the real work happens. Every cell shows the AI score as a placeholder. You only type a value if your judgment differs. Empty cells inherit the AI score automatically, so your committee can focus its energy on genuine disagreements instead of marking a thousand cells "I agree."
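
Under the hood, the inheritance rule is a single expression: a cell's effective score is the human override if one was typed, otherwise the AI score. A minimal TypeScript sketch:

// AI-first override model: an empty cell means "I agree with the AI."
// `override` stays undefined until an evaluator types a value.
function effectiveScore(aiScore: number, override?: number): number {
  return override ?? aiScore;
}

effectiveScore(4);    // 4 -- empty cell inherits the AI score
effectiveScore(4, 2); // 2 -- an explicit human override wins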


A Spread column flags any section where the committee disagrees by two or more points, which is exactly where the meeting conversation should go.
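
The spread check is just the range of effective scores across evaluators. A self-contained sketch of the rule as described:

// Flag a section when committee members' effective scores diverge by 2+ points.
function spread(aiScore: number, overrides: (number | undefined)[]): number {
  const scores = overrides.map((o) => o ?? aiScore); // empty cells inherit the AI score
  return Math.max(...scores) - Math.min(...scores);
}

spread(3, [5, undefined, 3]) >= 2; // true: one evaluator at 5, the rest at 3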


Screenshot of the Committee Scoring Artifact showing two views. The Weighted Totals view ranks three vendors by committee-adjusted score with color-coded recommendation band badges in the rightmost column. Below, the Compliance Matrix lists requirements down the left side and vendor names across the top with checkmarks, X marks, and tildes for met, failed, or unclear.
Weighted totals at a glance, color-coded by recommendation band. The math is fully transparent. Every number traces back to AI scores plus explicit human overrides.

The Weighted Totals view ranks every vendor by committee-adjusted score, color-coded by recommendation band (Strongly Recommend, Recommend, Neutral, Do Not Recommend). The Compliance Matrix shows you which vendors met which requirements, with hover-evidence cited from the source proposal.
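
The math behind the ranking is deliberately boring: a weighted average of effective section scores, mapped onto the bands from the Overview tab. A sketch, simplified to one committee-adjusted score per section:

// Committee-adjusted weighted total. Section weights sum to 1; scores run 1-5.
function weightedTotal(
  sections: { weight: number; aiScore: number; committeeScore?: number }[]
): number {
  return sections.reduce(
    (sum, s) => sum + s.weight * (s.committeeScore ?? s.aiScore),
    0
  );
}

// Recommendation bands, exactly as listed on the artifact's Overview tab.
function band(total: number): string {
  if (total >= 4.25) return "Strongly Recommend";
  if (total >= 3.5) return "Recommend";
  if (total >= 2.5) return "Neutral";
  return "Do Not Recommend";
}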


Export the whole thing as a CSV when you're done. The artifact auto-saves to your browser as you go, so if you close the tab and come back the next morning, your work is still there.
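
In-browser persistence with no logins almost certainly means localStorage, though that's an assumption about the artifact's internals; the storage key below is made up. A minimal sketch of the autosave-and-export pattern:

// Assumption: state persists to localStorage. The key name is hypothetical.
const STORAGE_KEY = "rfp-scoring-state";

function autosave(state: object): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(state));
}

function restore<T>(): T | null {
  const raw = localStorage.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as T) : null;
}

// CSV export: quote every cell and escape embedded quotes,
// so scores, overrides, and compliance notes survive the round trip.
function toCsv(rows: string[][]): string {
  return rows
    .map((r) => r.map((cell) => `"${cell.replace(/"/g, '""')}"`).join(","))
    .join("\n");
}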


What this changes


The 15 to 25 hours of committee reading Janette quoted in the webinar drops to about 30 minutes of project lead setup plus the actual meeting. Compliance failures and budget math errors get caught automatically, and every vendor gets a consistent, evidence-cited scorecard. The bigger ROI, though, is what you avoid: a missed compliance failure or a wrong-fit vendor can cost $100K or more on a single bad selection. The recovered committee time, roughly $1,500 to $3,000 per cycle (15 to 25 hours at typical DMO loaded rates of $100 to $120 an hour), is the smaller win.


The committee still makes the call. Chemistry, cultural fit, strategic priority. All the human things AI cannot, and should not, decide.


And the obvious procurement question: this all runs inside your own Swix workspace. Your RFPs and proposals stay in your account, and your team controls who can access them.


Beyond marketing


We built it to work on any RFP a DMO runs. The evaluation criteria and compliance grid adapt automatically to whatever each RFP requires, with no template wrestling between projects and no re-prompting between RFP types. Marketing agencies are the obvious case after the Brand USA webinar, but website redesigns, CRM evaluations, research procurement, and event production all use the same suite.


Credit where it belongs


Janette and Skylar wrote the playbook. The entire framework is theirs. What we added is the wiring, so a DMO can drop their RFP in on Monday and walk into committee on Friday with everything ready.


The wiring is easier to build in a platform. Their webinar gave us everything else.


If you watched the Brand USA webinar and thought "we should be doing this" but never carved out the time to set it up, send me a message. Happy to walk through what we built on a 15-minute call.


More soon...


-Jason

 
 