06 / VI

Iklavya — agentic AI for hiring and placement. Government-grant funded.

A B2B hiring platform with three sub-systems: an agentic interviewer (voice + video), a live jobs pipeline and a programmatic course studio. Backed by a government grant premised on student employability.

Year

2026

Role

AI Engineer · Systems

Stack

Agentic AI · ElevenLabs · HeyGen

§ 01

The problem, before.

Students graduate into a hiring market that tests them in formats they have never practised. The interview itself is a skill — and for most students, a scarce one. Meanwhile jobs rot fast, career services are overstretched, and video course production costs more than most institutes can justify.

Before

A placement cell books a handful of mock interviews a week, emails stale job lists, and commissions video courses that take months to film. Students practise against static question banks that don't probe; they see job openings that expired before they applied.

After

Iklavya runs realistic AI-powered interviews at any hour, pushes fresh jobs into student inboxes within 48 hours of posting, and produces new video modules on demand via Remotion. Three automations in one platform, funded because the state verified it actually moves the employability needle.

"He has a sharp eye for process bottlenecks. Within the first week, he identified two critical gaps in our hiring pipeline that our internal team had missed entirely."
Aditi Chaurasia · COO & Co-Founder · Supersourcing
§ 02

Harnessing agentic AI — the interviewer.

A good interview is a dialogue, not a quiz. The AI interviewer holds a role (say, backend engineer at a Series A), a goal (probe system design depth), and a memory (what the candidate has already said). Each turn, a planner agent chooses the next question; a voice agent speaks it through ElevenLabs; a video agent renders a face through HeyGen or Simli; a reasoning agent listens and scores.

Candidate (browser · mic / cam) → Interview Orchestrator (state · role · goals) ↔ Claude (planner / critic) → Question Agent (Anthropic · role prompt) → Voice / TTS (ElevenLabs · streaming) → Avatar (Simli / HeyGen · lip-sync) → ASR / Listener (streaming transcript) → Rubric Scorer (Anthropic · JSON out) → Session Memory (transcript + scores → report)
Fig. 01 — Agentic interview loop
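One turn of that loop can be sketched in a few lines. This is a minimal, illustrative sketch, not the production code: `run_turn`, `SessionMemory`, and the agent callables are hypothetical names standing in for the planner, TTS, avatar, ASR and scorer services described above.

```python
from dataclasses import dataclass, field

@dataclass
class SessionMemory:
    """Accumulates the dialogue and per-answer scores for the final report."""
    role: str
    goal: str
    transcript: list = field(default_factory=list)
    scores: list = field(default_factory=list)

def run_turn(memory, planner, tts, avatar, asr, scorer):
    """One interview turn: plan -> speak -> render -> listen -> score."""
    # Planner agent chooses the next probe given role, goal and history.
    question = planner(memory.role, memory.goal, memory.transcript)
    audio = tts(question)    # voice agent (ElevenLabs-style streaming TTS, stubbed)
    avatar(audio)            # video agent (HeyGen/Simli lip-synced face, stubbed)
    answer = asr()           # listener agent returns the streaming transcript
    memory.transcript.append({"q": question, "a": answer})
    memory.scores.append(scorer(question, answer))  # rubric score as structured output
    return memory
```

Because every agent is passed in as a callable, each one can be swapped or mocked independently — which is also how the loop is tested without burning TTS or avatar credits.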

Building voice-AI or avatar-AI into your product?

Realtime voice + video agents are hard: latency, interruptions, lip-sync, cost. I've shipped the above and would rather save you three weeks than pitch you. Book a 30-minute call.

§ 03

Live jobs pipeline — keeping the corpus fresh.

Jobs rot fast. Firecrawl harvests listings from partner sites and public boards; a cleaning agent normalises them into a schema (role, seniority, stack, geography, salary band); a match agent connects them to students based on their interview history and interests. The pipeline runs on a schedule and keeps the corpus under forty-eight hours old.

Firecrawl (job boards) → Raw Store (S3 + metadata) → Cleaner Agent (LLM + schema) → Enriched Index (vector + keyword) → Matcher (student ← job)
Fig. 02 — Scrape → clean → match
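The schema and the freshness rule are the load-bearing parts of this pipeline. A minimal sketch, with illustrative field names and a deliberately naive keyword matcher standing in for the real vector + keyword index:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Job:
    """Normalised listing the cleaning agent emits from raw scraped HTML."""
    role: str
    seniority: str
    stack: tuple
    geography: str
    salary_band: str
    posted_at: datetime

def is_fresh(job, now=None, max_age_hours=48):
    """Enforce the corpus freshness guarantee: drop anything older than 48h."""
    now = now or datetime.now(timezone.utc)
    return now - job.posted_at <= timedelta(hours=max_age_hours)

def match_score(job, student_interests):
    """Naive overlap between a student's interests and the job's stack;
    the real matcher also weighs interview history."""
    return len(set(job.stack) & set(student_interests))
```

The point of the dataclass is that everything downstream — index, matcher, notifications — consumes one shape, regardless of which board the listing came from.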
§ 04

Notification system design — reaching students reliably.

A job match is only useful if it arrives on time. Iklavya’s notification layer is a priority queue with a fan-out: events are classified by urgency (interview tomorrow, new match, weekly digest), fanned out across email, SMS and push, and deduplicated across channels so a student does not receive the same news in triplicate.

Event Bus → Classifier (priority · dedup) → Email (SES) · SMS · Push (FCM) → Delivery Ledger
Fig. 03 — Notification fan-out
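The classifier-plus-ledger idea fits in a few lines. A sketch under stated assumptions: the priority tiers and channel mapping below are illustrative, and the ledger is modelled as an in-memory set where production would use a durable store.

```python
from enum import IntEnum

class Priority(IntEnum):
    INTERVIEW_TOMORROW = 0   # most urgent: all channels
    NEW_MATCH = 1
    WEEKLY_DIGEST = 2        # least urgent: email only

# Illustrative urgency -> channel mapping.
CHANNELS = {
    Priority.INTERVIEW_TOMORROW: ("email", "sms", "push"),
    Priority.NEW_MATCH: ("email", "push"),
    Priority.WEEKLY_DIGEST: ("email",),
}

def fan_out(event_id, priority, ledger):
    """Fan one event out across its channels, skipping any (event, channel)
    pair already recorded in the delivery ledger."""
    sent = []
    for channel in CHANNELS[priority]:
        key = (event_id, channel)
        if key in ledger:
            continue         # dedup: the student already got this news here
        ledger.add(key)      # record before/alongside the actual send
        sent.append(channel)
    return sent
```

Replaying the same event is then harmless — a property worth having when the upstream bus delivers at-least-once.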
§ 05

Course studio — Remotion at scale.

Video modules are expensive to produce manually. Iklavya’s studio writes scripts with an LLM, renders them via Remotion with programmatic typography, graphics and captions, and publishes to the student catalogue. A new ten-minute module can be produced in under ten minutes of compute — at the cost of a coffee.
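Remotion itself is driven from TypeScript, so this Python sketch only shows the job-spec side: turning LLM-written script segments into a timed scene list a render worker can consume. Function and field names are hypothetical, and the words-per-minute pacing is an illustrative assumption.

```python
import json

def build_render_job(module_id, script_segments, template="lesson-v1"):
    """Turn script segments into a render-job spec: each segment becomes
    a timed caption scene, paced at roughly 150 words per minute."""
    scenes = []
    t = 0.0
    for seg in script_segments:
        # ~2.5 words/second narration estimate, 3s minimum per scene.
        duration = max(3.0, len(seg.split()) / 2.5)
        scenes.append({"text": seg, "start": round(t, 2), "duration": round(duration, 2)})
        t += duration
    return json.dumps({"moduleId": module_id, "template": template, "scenes": scenes})
```

The render queue then picks the job up, feeds the scene list to the video template, and publishes the output to the catalogue.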

§ 06

Theory of strong design — for agentic platforms.

Strong agentic systems treat LLMs as hot, fallible workers — talented, forgetful, occasionally confused. The platform is the HR department.

Across all three sub-systems (interviewer, jobs pipeline, studio) the architecture keeps prompts versioned, outputs validated against schemas, retries bounded, and costs capped per session. When a model is replaced, only the worker changes; the ledgers, notifications, rubrics and indices remain.
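The "hot, fallible worker" discipline reduces to one wrapper shared by every sub-system. A minimal sketch — the function name, retry bound and cost cap are illustrative defaults, not the platform's real numbers:

```python
def call_worker(worker, payload, validate, max_retries=2, cost_cap=0.50):
    """Treat the LLM as a fallible worker: validate its output against a
    schema check, retry within a bound, and cap dollar spend per session."""
    spent = 0.0
    for attempt in range(max_retries + 1):
        output, cost = worker(payload)   # worker returns (output, dollar cost)
        spent += cost
        if spent > cost_cap:
            raise RuntimeError("per-session cost cap exceeded")
        if validate(output):             # schema check, not vibes
            return output
    raise ValueError("worker output failed schema validation after retries")
```

Swapping the model means swapping `worker`; the validation, retry bound and cost cap — the ledgers of the system — stay put.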

§ 07

How it was built.

Build timeline

  1. Grant + product scoping · Month 1

    Employability metrics, cohort model, KPIs for the grant auditor, architecture doc.

  2. Interview loop · Months 2–3

    Orchestrator, question agent, voice (ElevenLabs), avatar (HeyGen/Simli), ASR, rubric scorer.

  3. Jobs pipeline · Months 3–4

    Firecrawl scheduler, cleaning agent, enriched index, student matcher.

  4. Course studio · Months 4–5

    Remotion templates, script LLM, render queue, catalogue publishing.

  5. Notifications + hardening · Months 5–6

    Priority classifier, multi-channel fan-out, delivery ledger, production rollout.

§ 08

Outcome.

A government-grant-funded B2B hiring platform with an agentic interviewer, a live jobs engine and a course studio — architected and integrated by the author across ElevenLabs, Anthropic, Simli, HeyGen, Firecrawl and Remotion.

Want to build an agentic AI platform that a grant committee, or an investor, will take seriously?

Iklavya was funded because the architecture mapped cleanly to measurable outcomes. If you're pitching a similar system, I'd rather help you scope it than watch you guess. Book a 30-minute call.