AiOS · HR / People · Attract & Hire · HR04

Interview Prep (Questions & Guides)

Convert approved role requirements into structured interview questions, evaluation rubrics, and interviewer guides.

Structured interview questions reduce interviewer bias and ensure consistent evidence collection against must-have criteria, leading to stronger hiring decisions.

GenAI Impact

49% faster · 2.6 hours saved · 5.2 hours without AI

Based on: 1 role with 3 interview panels and 5 criteria

Criterion-mapped question generation with structured Strong/Partial/Weak rubrics ensures all three interview panels evaluate candidates against identical evidence standards, eliminating scoring drift between interviewers.

Governed prompts with source-faithful constraints prevent introduction of unapproved evaluation criteria, while data-handling guardrails block exposure of confidential role requirements to public GenAI tools.

Before You Start

This workflow processes internal must-have criteria and role requirements derived from workforce plans. Do not paste these inputs into public or unapproved GenAI tools.

GenAI may generate leading questions or miss criteria. Verify every generated question maps to a specific must-have criterion and does not suggest the expected answer before distributing to interviewers.

Who's Involved

Recruiter

Coordinates interview preparation, generates questions and guides, and distributes materials to panelists.

Hiring Manager

Reviews interview questions for technical accuracy and approves the final structured interview guides.

Execution Steps


Before you start

Confirm the must-have criteria are finalized and approved
Verify the interview format guidelines are current

Prompt

Generate criterion-mapped behavioral interview questions

CONTEXT
You will be provided with the following source documents:
1. Must-Have Criteria
2. Interview Format Guidelines

TASK
For each must-have criterion, generate two to three behavioral interview questions that probe for concrete evidence of the candidate's experience. Produce a Draft Question Bank organized by criterion.

OUTPUT FORMAT
Use a markdown heading for each must-have criterion. Under each heading, list the questions as a numbered list. After each question, include a one-line 'What good looks like' indicator describing the evidence pattern expected in a strong answer.

EXAMPLE
## Technical Skill: Cloud Infrastructure Management
1. Describe a time you designed a cloud architecture to handle a significant increase in traffic. What trade-offs did you make?
   - **What good looks like:** Candidate cites specific scaling decisions, names the constraints, and explains the outcome.
2. Tell me about a production incident you resolved in a cloud environment. How did you diagnose and fix it?
   - **What good looks like:** Candidate walks through a structured troubleshooting approach with measurable resolution.

CONSTRAINTS
Do not invent criteria not present in the source must-have criteria. Do not generate hypothetical or leading questions that suggest the desired answer.
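A minimal sketch of assembling this prompt programmatically, assuming the two source documents are stored as local text files (the file names are hypothetical) and the finished prompt is submitted only to an approved GenAI tool, per Before You Start. The OUTPUT FORMAT and EXAMPLE sections of the full prompt above would be included the same way:

```python
from pathlib import Path

# Hypothetical file names -- point these at your approved source documents.
criteria = Path("must_have_criteria.md").read_text(encoding="utf-8")
guidelines = Path("interview_format_guidelines.md").read_text(encoding="utf-8")

prompt = f"""CONTEXT
You will be provided with the following source documents:

--- MUST-HAVE CRITERIA ---
{criteria}

--- INTERVIEW FORMAT GUIDELINES ---
{guidelines}

TASK
For each must-have criterion, generate two to three behavioral interview
questions that probe for concrete evidence of the candidate's experience.
Produce a Draft Question Bank organized by criterion.

CONSTRAINTS
Do not invent criteria not present in the source must-have criteria.
Do not generate hypothetical or leading questions that suggest the desired answer.
"""

# Submit `prompt` via your organization's approved GenAI tool only --
# never a public, unapproved one (see Before You Start).
print(prompt)
```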

Outputs

Draft Question Bank
AI-drafted · you verify · passed to next step

Verification: Verify the AI did not generate questions for criteria absent from the Must-Have Criteria.
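A minimal verification sketch, assuming the Draft Question Bank follows the OUTPUT FORMAT above (one `##` heading per criterion, questions as a numbered list) and the approved criteria are available as a plain set of strings; the file name and the two sample criteria are hypothetical:

```python
import re

# Hypothetical examples -- replace with the approved Must-Have Criteria.
approved_criteria = {
    "Technical Skill: Cloud Infrastructure Management",
    "Stakeholder Communication",
}

draft = open("draft_question_bank.md", encoding="utf-8").read()

# One '## ' heading per criterion; questions are numbered lines beneath it.
found: dict[str, int] = {}
for section in re.split(r"^## ", draft, flags=re.MULTILINE)[1:]:
    heading, _, body = section.partition("\n")
    found[heading.strip()] = len(re.findall(r"^\d+\.", body, flags=re.MULTILINE))

invented = set(found) - approved_criteria   # criteria the AI introduced
uncovered = approved_criteria - set(found)  # approved criteria with no questions
thin = {h for h, n in found.items() if n < 2}  # fewer than two mapped questions

print("Invented criteria:", invented or "none")
print("Uncovered criteria:", uncovered or "none")
print("Criteria with fewer than two questions:", thin or "none")
```

Anything reported as invented fails this verification step; uncovered or thin criteria feed the coverage check in the next step.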

Inputs

Draft Question Bank (from prev step)

Prompt

Prompt available with library access.

Outputs

Approved Question Bank
AI-drafted · you verify · passed to next step
Confirm every must-have criterion has at least two mapped questions
Verify no leading questions remain that suggest the desired answer (a heuristic scan that flags suspect questions for human review is sketched below)
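Judging whether a question is leading ultimately requires human review, but a crude heuristic scan can surface obvious candidates. This is a sketch only; the phrase list is an illustrative assumption, not a formal standard:

```python
import re

# Phrases that often telegraph the desired answer. Illustrative and
# non-exhaustive -- a human reviewer makes the final call.
LEADING_PATTERNS = [
    r"\bdon't you (think|agree)\b",
    r"\bwouldn't you\b",
    r"\bisn't it\b",
    r"\bsurely\b",
    r"\byou('d| would) agree\b",
]

def flag_leading(question: str) -> list[str]:
    """Return the leading-phrase patterns a question matches, if any."""
    return [p for p in LEADING_PATTERNS
            if re.search(p, question, flags=re.IGNORECASE)]

print(flag_leading("Don't you think containers were the right choice there?"))
# One pattern matches: the question suggests its own desired answer.
```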

Before you start

Confirm the Approved Question Bank is finalized by the panel

Inputs

Approved Question Bank (from prev step)

Prompt

Create evidence-based evaluation rubric per question

CONTEXT
You will be provided with the Approved Question Bank and the Must-Have Criteria for the role.

TASK
For each question in the Approved Question Bank, generate a three-level evaluation rubric with Strong, Partial, and Weak indicators. Produce a Draft Evaluation Rubric.

OUTPUT FORMAT
Use a markdown table for each criterion with columns: Question (abbreviated), Strong, Partial, Weak. Each cell should contain one sentence describing the observable evidence pattern for that rating level.

CONSTRAINTS
Do not add evaluation dimensions beyond what the must-have criteria specify. Do not use vague indicators such as 'good answer' or 'poor answer' — every indicator must reference specific observable evidence.

Outputs

Draft Evaluation Rubric
AI-generated · passed to next step

Verification: Verify the evaluation rubric contains specific observable indicators and no vague descriptors.
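A minimal sketch of that check, assuming the Draft Evaluation Rubric is saved as the markdown tables specified in OUTPUT FORMAT; 'good answer' and 'poor answer' come from the prompt's CONSTRAINTS, while the remaining phrases and the file name are illustrative assumptions:

```python
# Crude scan for vague descriptors in the rubric's markdown table cells.
VAGUE_PHRASES = ["good answer", "poor answer", "strong response",
                 "weak response", "adequate", "impressive"]

rubric = open("draft_evaluation_rubric.md", encoding="utf-8").read()

for line_no, line in enumerate(rubric.splitlines(), start=1):
    if not line.lstrip().startswith("|"):
        continue  # only markdown table rows hold rubric cells
    for cell in line.strip().strip("|").split("|"):
        for phrase in (p for p in VAGUE_PHRASES if p in cell.lower()):
            print(f"line {line_no}: vague descriptor '{phrase}' in: {cell.strip()}")
```

Any hit means the cell needs rewriting to name the observable evidence for that rating level.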

Before you start

Confirm the Draft Evaluation Rubric has been reviewed for completeness

Inputs

Approved Question Bank (from prev step)
Draft Evaluation Rubric (from prev step)

Prompt

Prompt available with library access.

Outputs

Draft Interviewer Guidance
AI-generated · passed to next step

Verification: Verify red flags reference observable behaviors and do not include subjective bias indicators.

Inputs

Approved Question Bank (from prev step)
Draft Evaluation Rubric (from prev step)
Draft Interviewer Guidance (from prev step)

Prompt

Prompt available with library access.

Outputs

Structured Interview Guides
AI-drafted · you verify · passed to next step
Confirm all sections are formatted consistently across the package
Verify the cover section accurately summarizes the role criteria and interview structure

Verification: Verify the compiled guides contain all approved questions and rubric entries with no omissions.
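A minimal completeness check, assuming both artifacts keep each question on its own numbered markdown line; the file names are hypothetical:

```python
import re

def question_lines(path: str) -> set[str]:
    """Collect the text of every numbered question line in a markdown file."""
    text = open(path, encoding="utf-8").read()
    return {m.group(1).strip()
            for m in re.finditer(r"^\d+\.\s+(.+)$", text, flags=re.MULTILINE)}

approved = question_lines("approved_question_bank.md")
compiled = question_lines("structured_interview_guides.md")

omitted = approved - compiled
if omitted:
    print(f"{len(omitted)} approved question(s) missing from the guides:")
    for q in sorted(omitted):
        print(" -", q)
else:
    print("All approved questions appear in the compiled guides.")
```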

Reference

Guardrails

  • Evidence-Based Questions: Every interview question must target observable evidence linked to a specific must-have criterion — reject any question that cannot be evaluated objectively.
  • Consistent Evaluation Scales: Use identical Strong, Partial, and Weak definitions across all interviewers to eliminate scoring drift between panel members.
  • Source-Faithful Criteria: Do not allow the AI to introduce requirements beyond the approved must-have criteria into interview questions or evaluation rubrics.

Pitfalls

  • Accepting AI-generated questions without verifying each maps to a specific must-have criterion.
  • Including leading questions from the AI output that telegraph the desired answer to candidates.
  • Pasting confidential candidate or compensation details into the prompt when generating follow-up questions.
  • Using the AI-generated evaluation rubric without calibrating it with the full interview panel.

Definition of Done

  • Every must-have criterion has at least two mapped interview questions in the Approved Question Bank.
  • The Evaluation Rubric contains Strong, Partial, and Weak indicators for every question with no vague descriptors.
  • The Interviewer Guidance includes probing follow-ups and red flags for each approved question.
  • The Structured Interview Guides compile all components into a single consistently formatted package.


AGASI AiOS · HR04 v1.0 · Apr 7, 2026