
Interview Debrief & Consolidation

Consolidate interview feedback into an evidence-backed debrief with areas of agreement, open concerns, and a hiring recommendation.

Evidence-backed debriefs reduce panel bias, surface unresolved concerns early, and produce a defensible hiring recommendation that stands up to later review.

GenAI Impact

  • 46% faster
  • 7.3 hours saved
  • 15.6 hours without AI

Based on: 5 candidates with 3 interviewer scorecards each

Criterion-by-criterion alignment across the panel, with a mandatory convergence classification for each criterion, keeps every interviewer's evidence visible in the hiring recommendation. This prevents the common manual-debrief failure where dominant voices drown out dissenting assessments.

Governed prompts with verification checkpoints prevent GenAI from fabricating false panel consensus where interviewers disagree, while data handling controls stop candidate scorecard PII from leaking into unapproved tools.

Before You Start

This workflow processes interviewer scorecards containing candidate performance assessments and panel member evaluation notes. Do not paste raw scorecards or personally identifiable candidate details into public or unapproved GenAI tools.
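Before any scorecard text reaches a GenAI tool, obvious contact details should be stripped. The sketch below is a minimal illustration, assuming scorecards arrive as plain text; the regex patterns catch only common email and phone formats and are not a substitute for an approved DLP tool or manual review.

```python
import re

# Illustrative patterns only: common email and phone shapes, not exhaustive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious email addresses and phone numbers with placeholders."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    return PHONE.sub("[PHONE REDACTED]", text)

note = "Reach the candidate at jane.doe@example.com or +1 (555) 010-2345."
print(redact(note))
# → Reach the candidate at [EMAIL REDACTED] or [PHONE REDACTED].
```

A pass like this reduces, but does not eliminate, the risk of PII leakage; names and free-text identifying details still need human redaction.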

GenAI may misattribute evidence to the wrong interviewer or fabricate consensus where disagreement exists. Verify every cited quote maps to the correct scorecard and that disagreement areas are faithfully represented before finalizing.

Who's Involved

Recruiter

Collects interviewer scorecards, runs the debrief consolidation workflow, and distributes the final summary.

Hiring Manager

Reviews the consolidated debrief for accuracy, resolves disagreements, and approves the hiring recommendation.

Interview Panelist

Provides completed scorecards and clarifies feedback when the debrief flags ambiguous evidence.

Execution Steps

Step types: Human · GenAI · Hybrid

Before you start

Confirm all panel members have submitted completed scorecards
Verify the structured interview guides from the preparation workflow are available
Confirm the interview format guidelines including evaluation criteria are current
Verify no scorecard is missing ratings or criterion labels
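The last pre-check can be automated with a small completeness scan. This is a hypothetical sketch, assuming each scorecard is a simple mapping of criterion to rating; the criterion names and data shape are illustrative, not taken from any ATS export.

```python
# Criteria are assumed to come from the Structured Interview Guides.
EXPECTED_CRITERIA = ["Technical Problem-Solving", "Communication", "Collaboration"]

def find_gaps(scorecards: dict) -> list:
    """Return (interviewer, criterion) pairs that are missing or unrated."""
    gaps = []
    for interviewer, ratings in scorecards.items():
        for criterion in EXPECTED_CRITERIA:
            if ratings.get(criterion) is None:
                gaps.append((interviewer, criterion))
    return gaps

panel = {
    "Interviewer 1 — Technical Lead": {
        "Technical Problem-Solving": "Strong",
        "Communication": "Partial",
        "Collaboration": "Strong",
    },
    "Interviewer 2 — Team Lead": {
        "Technical Problem-Solving": "Partial",
        "Communication": None,  # submitted blank: chase before the debrief
    },
}

print(find_gaps(panel))
# → [('Interviewer 2 — Team Lead', 'Communication'), ('Interviewer 2 — Team Lead', 'Collaboration')]
```

Running a scan like this before the first prompt keeps blank entries from silently becoming fabricated ratings downstream.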

Prompt

Extract structured evidence from interviewer scorecards

CONTEXT
You will be provided with the following source documents:
1. Structured Interview Guides
2. Interviewer Scorecards
3. Interview Format Guidelines

TASK
For each interviewer, extract the key evidence they recorded against each evaluation criterion. Produce an Individual Evidence Summaries document that preserves the interviewer's own wording and ratings.

OUTPUT FORMAT
Use a top-level markdown heading for each interviewer (by role label, e.g., "Interviewer 1 — Technical Lead"). Under each interviewer heading, list each evaluation criterion as a subheading. For each criterion, include:
- **Rating:** The rating given (e.g., Strong / Partial / Weak)
- **Key Evidence:** One to three verbatim or near-verbatim quotes from the scorecard
- **Interviewer Notes:** Any additional observations the interviewer recorded

CONSTRAINTS
Do not paraphrase evidence in a way that changes its meaning. Do not infer ratings where the scorecard is blank — flag missing ratings as "Not Rated." Do not include personally identifiable candidate information beyond what appears in the scorecards.

Outputs

Individual Evidence Summaries
AI-drafted · you verify · passed to next step

Verification: Verify extracted evidence matches the original scorecards and no ratings were fabricated for blank entries.
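Part of this verification can be mechanized: check that every quote the AI attributed to an interviewer actually appears in that interviewer's scorecard text. The data shapes below are assumptions for illustration; a verbatim substring check catches fabrications but not paraphrases, which still need a human read.

```python
def unverified_quotes(summaries: dict, scorecard_text: dict) -> list:
    """Return (interviewer, quote) pairs not found verbatim in the source scorecard."""
    misses = []
    for interviewer, quotes in summaries.items():
        source = scorecard_text.get(interviewer, "")
        for quote in quotes:
            if quote not in source:
                misses.append((interviewer, quote))
    return misses

scorecard_text = {
    "Interviewer 1": "Described a systematic root-cause analysis approach. Resolved quickly.",
}
summaries = {
    "Interviewer 1": [
        "Described a systematic root-cause analysis approach",  # verbatim: passes
        "Showed outstanding leadership",                        # absent: flag for review
    ],
}

print(unverified_quotes(summaries, scorecard_text))
# → [('Interviewer 1', 'Showed outstanding leadership')]
```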

Before you start

Confirm Individual Evidence Summaries cover every panel member

Inputs

Individual Evidence Summaries (from prev step)

Prompt

Cite and align evidence per criterion across interviewers

CONTEXT
You will be provided with the Individual Evidence Summaries extracted from each interviewer's scorecard and the Structured Interview Guides that define the evaluation criteria.

TASK
For each evaluation criterion, consolidate the evidence and ratings from all interviewers into a single cross-panel view. Produce a Cross-Interviewer Evidence Map that shows where interviewers converge and diverge on each criterion.

OUTPUT FORMAT
Use a markdown heading for each evaluation criterion. Under each heading, create a table with columns: Interviewer, Rating, Key Evidence Cited, Notes. Below each table, add a one-sentence convergence summary stating whether the panel broadly agrees, partially agrees, or disagrees on this criterion.

EXAMPLE
## Technical Problem-Solving
| Interviewer | Rating | Key Evidence Cited | Notes |
|---|---|---|---|
| Interviewer 1 — Technical Lead | Strong | "Described a systematic root-cause analysis approach" | Noted speed of resolution |
| Interviewer 2 — Team Lead | Partial | "Solved the problem but skipped documentation" | Flagged process gap |

**Convergence:** Partial agreement — technical capability confirmed, process discipline disputed.

CONSTRAINTS
Do not merge or average ratings across interviewers. Do not omit any interviewer's evidence even if it appears redundant. Do not introduce criteria not present in the Structured Interview Guides.
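The convergence summary the prompt asks for follows a rule that can be made explicit. The sketch below is one reasonable reading of that rule, assuming the three-point Strong / Partial / Weak scale from the prompt; it is not a definitive rubric.

```python
def classify_convergence(ratings: list) -> str:
    """Classify a criterion's rating spread across the panel."""
    distinct = set(ratings)
    if len(distinct) == 1:
        return "broad agreement"
    if {"Strong", "Weak"} <= distinct:
        return "disagreement"      # opposite ends of the scale both present
    return "partial agreement"     # adjacent ratings only

print(classify_convergence(["Strong", "Partial"]))
# → partial agreement
```

Note the rule classifies the spread without averaging it, consistent with the constraint above: each interviewer's rating stays visible in the table.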

Outputs

Cross-Interviewer Evidence Map
AI-generated · passed to next step

Verification: Verify no interviewer's evidence was omitted and convergence summaries accurately reflect the ratings shown.

Before you start

Confirm the Cross-Interviewer Evidence Map is complete for all criteria

Inputs

Cross-Interviewer Evidence Map (from prev step)

Prompt

Prompt available with library access.

Outputs

Agreement and Disagreement Analysis
AI-generated · passed to next step

Verification: Verify disagreement classifications match the actual rating spread in the evidence map.

Before you start

Confirm the Agreement and Disagreement Analysis has been reviewed for completeness

Data Handling: Do not paste candidate personal contact details or compensation expectations into the prompt when adding context to concerns.

Inputs

Agreement and Disagreement Analysis (from prev step)
Cross-Interviewer Evidence Map (from prev step)

Prompt

Prompt available with library access.

Outputs

Risk and Concerns Register
AI-drafted · you verify · passed to next step
Confirm every flagged concern traces to specific evidence in the Cross-Interviewer Evidence Map
Verify severity ratings are consistent with the degree of panel disagreement

Verification: Verify the AI did not fabricate concerns for criteria where all interviewers agreed.
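This check, too, can be partially automated: no entry in the Risk and Concerns Register should target a criterion the panel broadly agreed on. Both data shapes below are assumptions for illustration.

```python
def spurious_concerns(register: list, convergence: dict) -> list:
    """Return concerns raised on criteria where the panel broadly agreed."""
    return [
        concern for concern in register
        if convergence.get(concern["criterion"]) == "broad agreement"
    ]

convergence = {
    "Technical Problem-Solving": "partial agreement",
    "Communication": "broad agreement",
}
register = [
    {"criterion": "Technical Problem-Solving", "severity": "Medium"},
    {"criterion": "Communication", "severity": "Low"},  # suspect: panel agreed here
]

print(spurious_concerns(register, convergence))
# → [{'criterion': 'Communication', 'severity': 'Low'}]
```

A flagged entry is not automatically wrong; a concern can be legitimate even where ratings converge, but each one deserves a deliberate human decision rather than silent acceptance.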

Before you start

Confirm the Risk and Concerns Register has been reviewed and approved by the hiring manager

Inputs

Cross-Interviewer Evidence Map (from prev step)
Agreement and Disagreement Analysis (from prev step)
Risk and Concerns Register (from prev step)

Prompt

Prompt available with library access.

Outputs

Draft Interview Debrief Summary
AI-generated · passed to next step

Verification: Verify the hiring recommendation is supported by cited evidence and does not contradict the panel's recorded disagreements.

Before you start

Confirm the Draft Interview Debrief Summary is complete with all sections populated

Inputs

Draft Interview Debrief Summary (from prev step)

Outputs

Interview Debrief Summary (download · you create this)
Confirm all cited evidence traces back to the original interviewer scorecards
Verify the hiring recommendation accurately reflects the panel's collective assessment
Confirm open questions have been addressed or documented as accepted risks

Reference

Guardrails

  • Evidence-Only Assessments: Every claim in the debrief must trace to a specific interviewer's scorecard — reject any AI-generated statement that cannot be sourced to an input document.
  • Preserve Interviewer Voice: Use near-verbatim quotes from scorecards when citing evidence to prevent the AI from smoothing over nuance or reinterpreting feedback.
  • Transparent Disagreement: Never merge conflicting interviewer ratings into an average or consensus — present each position with its supporting evidence so reviewers see the full picture.
  • Recommendation Traceability: The hiring recommendation must cite at least two specific strengths and any unresolved concerns — a recommendation without cited evidence is incomplete.

Pitfalls

  • Pasting full interviewer scorecards with candidate personal details into an unapproved GenAI tool without redacting sensitive information.
  • Accepting the AI's convergence summary without verifying it against the actual rating distribution in the evidence map.
  • Allowing the AI to fabricate consensus language when interviewers clearly disagreed on a criterion.
  • Using the AI-generated hiring recommendation as the final decision without the hiring manager reviewing the underlying evidence.
  • Including specific candidate compensation or personal data in the prompt context when generating the debrief summary.

Definition of Done

  • The Individual Evidence Summaries contain extracted evidence for every criterion from every interviewer with no fabricated ratings.
  • The Cross-Interviewer Evidence Map shows a complete comparison view with convergence summaries for each criterion.
  • The Agreement and Disagreement Analysis correctly classifies every criterion and cites supporting evidence.
  • The Risk and Concerns Register lists all unresolved items with severity, source evidence, and suggested resolutions.
  • The Interview Debrief Summary contains a hiring recommendation that cites specific evidence from the panel's assessment.

AGASI AiOS · HR05 v1.0 · Apr 7, 2026