JD Orchestrator Console

1. Job Basics

Define the core hiring parameters, ownership, and where this role sits in the organisation.

Tip: click Auto-generate to create a unique Job ID based on Department + Role + Date, then adjust if required.
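A minimal sketch of the kind of rule Auto-generate could apply. The slug lengths, separator, and date format here are illustrative assumptions, not the console's actual scheme:

```python
import re
from datetime import date

def auto_job_id(department: str, role: str, when: date = None) -> str:
    """Illustrative Job ID rule: DEPT-ROLE-YYYYMMDD.

    Slugs are uppercased alphanumerics truncated to 8 characters;
    the real console may use a different format entirely.
    """
    when = when or date.today()
    slug = lambda s: re.sub(r"[^A-Za-z0-9]+", "", s).upper()[:8]
    return f"{slug(department)}-{slug(role)}-{when:%Y%m%d}"

print(auto_job_id("Engineering", "AI Platform Engineer", date(2025, 1, 15)))
# ENGINEER-AIPLATFO-20250115
```

Because the ID embeds Department + Role + Date, regenerating on the same day for the same role is deterministic, which is why the tip suggests adjusting manually only if required.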

Company Profile & EVP

Capture the mission, products, benefits, and culture/stack tags so every JD reuses a consistent, JD-ready story about who you are and why candidates should care.

Evaluation-first
Security-by-design
Fast feedback
Craft & outcomes
AWS Bedrock
OpenSearch Serverless
AWS S3
AWS RDS
AWS Textract
AWS MSK / Kafka
AWS API Gateway
LangChain
Python
Node.js
LLaMA 3
Claude Sonnet 3.5
Titan Text Embeddings
Keycloak
Flutter

Skills Taxonomy & Searchability

Define clear, atomic skills for the role — separating required vs nice-to-have and soft vs hard — so search, matching, and AI scoring can work with clean, consistent tags.

Proficiency:
Foundational
Working
Proficient
Advanced
Expert
Python
LangChain
RAG pipelines
Vector search (OpenSearch)
Hybrid retrieval (k-NN + BM25)
AWS (S3, RDS, API Gateway, MSK)
Kafka / event-driven systems
LLM orchestration
AWS Bedrock
LLaMA 3
Claude Sonnet 3.5
AWS Textract
Resume & JD parsing
HRTech / FinTech data patterns
BM25 tuning & rerankers
Evaluation mindset
Security & privacy awareness
Cross-functional collaboration
Clear written communication
Ownership from prototype to production
Mentoring
Experiment design & A/B testing
Product thinking

Must‑Have Certifications

Select certification‑critical occupations to enforce non‑negotiable requirements during screening. Search by Occupation Title and add one or more items to the JD.

Search for a “Must‑Have” certification; each result shows its O*NET‑SOC details:
O*NET‑SOC Code
Occupation Title
Certification Importance (1‑5)
Zone Description
Tip: you can add multiple items. Remove any item with the × icon.

Compensation & Pay Transparency

Capture structured pay details — currency, ranges, bonuses, OTE, equity, and contract rates — and decide what’s visible externally so compensation stays consistent, compliant, and transparent.

Compliance & Security

Set the guardrails for the role — visa sponsorship, work authorization, data sensitivity, PII/PHI handling, and clearance needs — so every JD aligns with legal, regulatory, and security expectations from day one.

Posting, Tracking & Campaign Metadata

Configure where and how the role goes live — internal vs external flags, channels, and UTM conventions — so every JD is campaign-ready, trackable, and easy to report on across sourcing funnels.

Variants & A/B Testing

Create two versions of this Job Description (A and B), run them on your job boards, then enter the results here to automatically pick a winner. When you’re done, promote the winning copy back into the canonical (master) JD.

How it works: 1) Create Variant A & B → 2) Publish externally → 3) Paste performance numbers → 4) Promote the winner to the master JD.
Results Entry & Winner Calculator
After you post A and B externally (LinkedIn/Seek/etc.), enter the results here. We’ll calculate the winner using your chosen KPI and save the run as a snapshot.
Winner
Choose a KPI, enter results, then click Compute winner.
Apply rate
Applies ÷ Impressions (A vs B)
Qualified rate
Qualified ÷ Applies (A vs B)
Inclusion signal
Manual score (0–100) (A vs B)
Time-to-fill
Days (lower is better) (A vs B)
Saved runs (this browser only)
Tip: Export JSON if you want to share runs with your team or move devices.
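The winner calculation above can be sketched as a simple KPI comparison. The KPI names mirror the four options in the console; the result-record fields (`impressions`, `applies`, `inclusion_score`, `days_to_fill`) are assumed names for this sketch:

```python
# Illustrative winner calculator for an A/B JD run.
# Each KPI is a scoring function plus a direction flag.
KPIS = {
    "apply_rate":     {"fn": lambda r: r["applies"] / r["impressions"], "higher_is_better": True},
    "qualified_rate": {"fn": lambda r: r["qualified"] / r["applies"],   "higher_is_better": True},
    "inclusion":      {"fn": lambda r: r["inclusion_score"],            "higher_is_better": True},
    "time_to_fill":   {"fn": lambda r: r["days_to_fill"],               "higher_is_better": False},
}

def compute_winner(kpi: str, a: dict, b: dict) -> str:
    spec = KPIS[kpi]
    score_a, score_b = spec["fn"](a), spec["fn"](b)
    if score_a == score_b:
        return "tie"
    a_wins = score_a > score_b if spec["higher_is_better"] else score_a < score_b
    return "A" if a_wins else "B"

a = {"impressions": 1000, "applies": 80}
b = {"impressions": 1000, "applies": 64}
print(compute_winner("apply_rate", a, b))  # A (8.0% vs 6.4%)
```

Note that time-to-fill inverts the comparison (lower is better), which is why each KPI carries its own direction flag rather than assuming "bigger wins".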
Experiment Setup
Define what you’re testing and the rules of the test. This keeps decisions consistent and audit-ready.
Variant Builder
Create meaningful variants, track what changed, and keep a clear trail for explainability and audit.
Variant A (Control)
Baseline copy (closest to canonical Candidate JD).
Variant B (Challenger)
Purposefully different copy aligned to the objective.
Change notes (why B should win)
Explain why you changed this copy and what outcome you expect.
Targeting & Allocation
Define who sees A vs B and when. Use 50/50 to start, or ramp up safely.
Posting & Tracking
Use consistent tracking so A and B results can be compared fairly across channels.
Build tracking link
Standardize attribution across channels and variants (copy-ready).
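A copy-ready tracking link like this is usually just a base URL plus standard UTM parameters. The specific parameter conventions below (channel as source, Job ID as campaign, variant in content) are one plausible scheme, not necessarily the console's:

```python
from urllib.parse import urlencode

def tracking_link(base_url: str, channel: str, job_id: str, variant: str) -> str:
    """Illustrative UTM builder so A and B can be compared per channel.

    Keeping the variant in utm_content means source/campaign reports
    stay comparable while variant splits remain visible.
    """
    params = {
        "utm_source": channel,                         # e.g. linkedin, seek
        "utm_medium": "job_board",
        "utm_campaign": job_id,                        # ties clicks back to this JD
        "utm_content": f"variant_{variant.lower()}",   # A/B split marker
    }
    return f"{base_url}?{urlencode(params)}"

print(tracking_link("https://jobs.example.com/ai-engineer",
                    "linkedin", "ENG-AI-20250115", "B"))
```

Using the same parameter layout on every channel is what makes cross-channel A vs B comparison fair: only the values change, never the schema.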
Quality Checks & Approvals
Run checks before posting. Capture approvals and keep an audit trail.
Automated checks (explainable)
Explainable signals (not black box): readability, length, jargon, inclusivity.
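"Explainable, not black box" here means every check returns a named, inspectable number or word list. A minimal sketch of that idea, with assumed word lists and field names (a real checker would use curated lists):

```python
import re

# Tiny illustrative lists; the console's actual dictionaries are
# not specified and would be much larger.
JARGON = {"synergy", "rockstar", "ninja", "leverage"}
EXCLUSIONARY = {"guys", "manpower", "aggressive"}

def check_copy(text: str) -> dict:
    """Return named signals rather than a single opaque score."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return {
        "word_count": len(words),
        "avg_sentence_length": round(len(words) / sentences, 1),
        "jargon_hits": sorted(set(words) & JARGON),
        "inclusivity_flags": sorted(set(words) & EXCLUSIONARY),
    }

report = check_copy("We need a rockstar ninja. You'll work with great guys.")
print(report)
```

Because each signal names the exact words that triggered it, a reviewer can see why a draft was flagged and fix only those words, which is what keeps the approval trail auditable.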
Approvals (who signed off)
Standard chain: Recruiter → Hiring Manager → HR → Compliance (adjust per policy).
Audit log
Notes and decisions (exportable).
Decide Winner & Update the Master JD
Pick a winner, record why, then update the canonical JD so future postings start from the best copy.
Learning to reuse later
Capture what worked so future JDs start stronger.
What will be updated in the canonical JD
When you promote a winner, specify exactly what changed (title, intro, EVP, CTA).

Channel Planning & Budget Alignment

Design where the JD should run and how hard it should be pushed — channel mix, geo targeting, pacing, and budget caps — so spend, reach, and role priority stay aligned across recruitment campaigns.

AI Resume Questions & Interview Support

Convert the JD into structured, AI-ready question sets — knockouts and nice-to-know prompts — for resume screening, recruiter pre-screens, and hiring manager interviews, so evaluation stays consistent and role-specific.

0 / 30
Tick questions to delete. “Generate Questions” will top you back up to 30.
Inclusive Language & Bias Checker
Score: —
Scans for gender-coded, age-coded, ableist, elitist, and exclusionary language. Suggestions are meaning-preserving.
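A "meaning-preserving suggestion" is typically a direct term-for-term substitution. This sketch uses an assumed, tiny replacement map; a production checker would draw on curated, research-backed term lists:

```python
import re

# Illustrative substitutions only; not the console's actual dictionary.
SUGGESTIONS = {
    "manpower": "workforce",
    "chairman": "chair",
    "guys": "everyone",
    "he/she": "they",
}

def suggest(text: str) -> str:
    """Apply case-insensitive, meaning-preserving replacements."""
    pattern = re.compile("|".join(re.escape(t) for t in SUGGESTIONS),
                         re.IGNORECASE)
    return pattern.sub(lambda m: SUGGESTIONS[m.group(0).lower()], text)

print(suggest("The chairman asked for a manpower plan."))
# The chair asked for a workforce plan.
```

Simple substitution can still produce awkward grammar in some sentences, which is one reason suggestions are surfaced for human review rather than applied automatically.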
Use this on questions too — screening language can accidentally reintroduce bias even when the JD is clean.

Candidate-Facing JD Generation & Quality

Turn the structured JD into clean, candidate-ready copy — titles, intros, outcomes, skills, and benefits in the right tone and length — so every posting is clear, inclusive, and consistently high quality across channels.

Readability & Engagement Checks
Signals to keep the candidate-facing description scannable and welcoming (Datapeople-style checks).
Full
Short
Both
Word count
Target: short 80–160, full 250–700.
Reading grade
Aim for ~8–10 for broad accessibility.
Jargon amount
Heuristic: acronyms + dense technical terms.
Active voice
Prefer clear verbs (“build”, “own”, “deliver”).
Welcoming
Inclusive cues + low “bro-terms”.
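The reading-grade target of ~8–10 can be estimated with a standard formula such as Flesch–Kincaid grade level (the console's exact method isn't specified; the vowel-group syllable counter here is a rough heuristic):

```python
import re

def syllables(word: str) -> int:
    # Rough heuristic: count vowel groups, minimum one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def reading_grade(text: str) -> float:
    """Flesch-Kincaid grade level: 0.39*(words/sentences)
    + 11.8*(syllables/words) - 15.59."""
    words = re.findall(r"[A-Za-z']+", text)
    sents = max(1, len(re.findall(r"[.!?]+", text)))
    syl = sum(syllables(w) for w in words)
    n = len(words)
    return round(0.39 * n / sents + 11.8 * syl / n - 15.59, 1)

short_jd = ("You will build search and ranking features. "
            "You will own them from prototype to production.")
print(reading_grade(short_jd))  # well under the 8-10 target ceiling
```

Short sentences and short words both push the grade down, which is why active-voice, plain-verb copy ("build", "own", "deliver") tends to score inside the accessibility target.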
Generates candidate-facing text from Job Basics, Company Profile & EVP, and Skills. Salary uses your inputs only (no external market guessing).
Candidate Copy Pack
Score: —
Includes Full/Short JD + Tone Notes, and will also scan Mission/Culture fields from the Company Profile step when present.