Based on Anthropic’s “Building Effective Agents” framework, the human-in-the-loop pattern integrates human judgment into AI workflows through iterative generation and feedback cycles. The agent generates content, presents it for human review, incorporates the feedback, and continues refining until approval is received, combining AI efficiency with human expertise for quality-sensitive tasks.
[Sequence diagram: Client → Agent → Generator / Human Review. The Client sends a content request; the Agent asks the Generator to generate content and receives draft content; the Agent sends a review request to Human Review and receives feedback or approval; the Agent refines with the feedback, producing improved content, until the final approved content is returned to the Client.]
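The cycle above can be sketched independently of any framework. In this sketch, `generate` and `requestReview` are hypothetical stand-ins (not part of any real SDK) for an LLM call and a human-review channel:

```typescript
// Minimal sketch of a generate → review → refine loop.
type Review = { approved: boolean; feedback?: string };

// Stand-in for an LLM call; a real implementation would prompt a model,
// passing along any reviewer feedback from the previous iteration.
async function generate(topic: string, feedback: string): Promise<string> {
  return feedback
    ? `post about ${topic} (revised: ${feedback})`
    : `post about ${topic}`;
}

// Stand-in for a human reviewer; here it approves once the draft is a revision.
async function requestReview(draft: string): Promise<Review> {
  return draft.includes("revised")
    ? { approved: true }
    : { approved: false, feedback: "make it shorter" };
}

async function humanInTheLoop(topic: string, maxIterations = 3): Promise<string> {
  let draft = "";
  let feedback = "";
  for (let i = 0; i < maxIterations; i++) {
    draft = await generate(topic, feedback);   // GENERATE: create or refine
    const review = await requestReview(draft); // HUMAN REVIEW: wait for verdict
    if (review.approved) return draft;         // done on approval
    feedback = review.feedback ?? "";          // carry feedback into next pass
  }
  return draft; // fallback: best attempt after maxIterations
}
```

The iteration cap matters: without it, a reviewer who never approves would loop the agent indefinitely, which is why the implementation below also bounds the loop and returns its best attempt unapproved.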

When to Use

Use human-in-the-loop when content quality depends on subjective judgment, brand consistency, or specialized expertise that AI cannot fully capture. This pattern is ideal for creative content, marketing copy, or compliance-sensitive material where human oversight is essential. Avoid when human review creates unacceptable delays or when quality standards can be adequately automated.

Implementation

This example demonstrates social media content creation where AI generates posts that require human approval for brand voice and messaging, with iterative refinement based on reviewer feedback.

Agent Code

```typescript
import { icepick } from "@hatchet-dev/icepick";
import z from "zod";
import { generatePostTool } from "@tools/generate-post";
import { sendToSlackTool } from "@tools/send-to-slack";

const ContentCreationInput = z.object({
  topic: z.string(),
  audience: z.string(),
});

const ContentCreationOutput = z.object({
  finalPost: z.string(),
  iterations: z.number(),
  approved: z.boolean(),
});

export const contentCreationAgent = icepick.agent({
  name: "content-creation-agent",
  executionTimeout: "30m",
  inputSchema: ContentCreationInput,
  outputSchema: ContentCreationOutput,
  description: "Creates social media content with human approval loop",
  fn: async (input, ctx) => {
    let currentPost = "";
    let feedback = "";
    const maxIterations = 3;

    for (let iteration = 1; iteration <= maxIterations; iteration++) {
      // GENERATE: Create or refine content based on feedback
      const { post } = await generatePostTool.run({
        topic: input.topic,
        audience: input.audience,
        previousFeedback: feedback,
        previousPost: currentPost,
      });

      currentPost = post;

      // HUMAN REVIEW: Send to human reviewer and wait for response
      await sendToSlackTool.run({
        post: currentPost,
        iteration: iteration,
      });

      // WAIT FOR FEEDBACK: Pause execution until human responds
      const reviewResult = await ctx.waitFor({
        event: "post_review",
        timeout: "24h",
      });

      if (reviewResult.approved) {
        return {
          finalPost: currentPost,
          iterations: iteration,
          approved: true,
        };
      }

      feedback = reviewResult.feedback || "";
    }

    // FALLBACK: Return best attempt if max iterations reached
    return {
      finalPost: currentPost,
      iterations: maxIterations,
      approved: false,
    };
  },
});
```
The pattern uses ctx.waitFor() to pause execution while awaiting human feedback, then incorporates that feedback into subsequent generations. This creates a collaborative workflow that combines AI speed with human judgment for quality-sensitive content creation.

This pattern pairs well with evaluator-optimizer for automated quality gates, and with multi-agent systems where individual specialists' outputs require human oversight.
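One way to combine this with an evaluator-optimizer gate is to run an automated check before each human review, so reviewers only see drafts that already pass baseline rules. The sketch below is an assumption about how that composition could look; `autoEvaluate` and its rules are hypothetical, not part of icepick:

```typescript
// Sketch: automated evaluator gate placed ahead of human review.
// Drafts loop through the evaluator until they pass (or attempts run out);
// only a passing draft would then be escalated to the human reviewer.
type Verdict = { pass: boolean; critique?: string };

// Hypothetical automated check, e.g. a length rule for social posts.
async function autoEvaluate(draft: string): Promise<Verdict> {
  if (draft.length > 280) return { pass: false, critique: "too long" };
  return { pass: true };
}

async function refineUntilPassing(
  generate: (critique: string) => Promise<string>,
  maxAttempts = 3,
): Promise<string> {
  let draft = "";
  let critique = "";
  for (let i = 0; i < maxAttempts; i++) {
    draft = await generate(critique);      // generate or revise the draft
    const verdict = await autoEvaluate(draft);
    if (verdict.pass) return draft;        // gate passed: ready for human review
    critique = verdict.critique ?? "";     // feed the critique back in
  }
  return draft; // best attempt; caller decides whether to escalate anyway
}
```

The human stays in the loop for judgment calls (brand voice, messaging), while mechanical criteria are enforced automatically, reducing the number of review round-trips.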