Based on Anthropic’s “Building Effective Agents” framework. Parallelization executes independent tasks simultaneously rather than sequentially, improving both speed and quality through specialized processing. Tasks that don’t depend on each other run concurrently, and their results are aggregated with strategies such as sectioning (splitting distinct concerns across specialized tasks) or voting (running the same task multiple times and taking a consensus).
Sequence diagram: the client sends a request to the agent, which dispatches Task A and Task B in parallel, collects result A and result B, and returns an aggregated response to the client.

When to Use

Use parallelization when you have independent tasks that can run simultaneously, such as content generation paired with safety checking, or multiple evaluations that require consensus. It’s ideal when specialized processing improves quality and the speed gained from concurrency outweighs the coordination overhead. Avoid it when tasks depend on one another or when the complexity of aggregating results exceeds the benefits.

Implementation

This example demonstrates sectioning parallelization: appropriateness checking and main content generation run simultaneously, so each task gives focused attention to its own concern while overall response time improves.

Agent Code

import { icepick } from "@hatchet-dev/icepick";
import z from "zod";
import { appropriatenessCheckTool } from "@tools/appropriateness.tool";
import { mainContentTool } from "@tools/main-content.tool";

const SectioningAgentInput = z.object({
  message: z.string(),
});

const SectioningAgentOutput = z.object({
  response: z.string(),
  isAppropriate: z.boolean(),
});

export const sectioningAgent = icepick.agent({
  name: "sectioning-agent",
  executionTimeout: "2m",
  inputSchema: SectioningAgentInput,
  outputSchema: SectioningAgentOutput,
  description: "Demonstrates parallel processing with sectioning approach",
  fn: async (input, ctx) => {
    // PARALLEL EXECUTION: Both tasks run simultaneously
    const [{ isAppropriate, reason }, { mainContent }] = await Promise.all([
      appropriatenessCheckTool.run({
        message: input.message,
      }),
      mainContentTool.run({
        message: input.message,
      }),
    ]);

    // AGGREGATION: Combine results based on appropriateness check
    if (!isAppropriate) {
      return {
        response: "I can't help with that request. Please ensure your message is appropriate and respectful.",
        isAppropriate: false,
      };
    }

    return {
      response: mainContent,
      isAppropriate: true,
    };
  },
});
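
If agents expose a `run` method like the tools called inside this agent do (an assumption — check the icepick docs for the actual invocation API), calling the sectioning agent might look like this minimal sketch:

// Hypothetical usage — `sectioningAgent.run` is assumed, not confirmed by this example
const output = await sectioningAgent.run({
  message: "Compare REST and GraphQL for a public API.",
});

console.log(
  output.isAppropriate ? output.response : "Request was flagged as inappropriate."
);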

Appropriateness Check Tool

import { icepick } from "@hatchet-dev/icepick";
import z from "zod";
import { generateObject } from "ai";

export const appropriatenessCheckTool = icepick.tool({
  name: "appropriateness-check-tool",
  description: "Evaluates message appropriateness and safety",
  inputSchema: z.object({
    message: z.string(),
  }),
  outputSchema: z.object({
    isAppropriate: z.boolean(),
    reason: z.string(),
  }),
  fn: async (input) => {
    const result = await generateObject({
      model: icepick.defaultLanguageModel,
      prompt: `Evaluate if the following message is appropriate and safe:

Criteria:
- No harmful, offensive, or inappropriate content
- Doesn't promote dangerous activities
- Respectful and professional tone

Message: ${input.message}

Provide your assessment and reasoning.`,
      schema: z.object({
        isAppropriate: z.boolean(),
        reason: z.string(),
      }),
    });

    return {
      isAppropriate: result.object.isAppropriate,
      reason: result.object.reason,
    };
  },
});

This tool runs in parallel with content generation, providing a safety guardrail without blocking the main processing. Using structured output ensures reliable boolean results for decision-making.

Main Content Tool

import { icepick } from "@hatchet-dev/icepick";
import z from "zod";
import { generateText } from "ai";

export const mainContentTool = icepick.tool({
  name: "main-content-tool",
  description: "Generates detailed, helpful responses",
  inputSchema: z.object({
    message: z.string(),
  }),
  outputSchema: z.object({
    mainContent: z.string(),
  }),
  fn: async (input) => {
    const result = await generateText({
      model: icepick.defaultLanguageModel,
      prompt: `You are a helpful assistant. Provide a detailed and informative response to the user's message. Be thorough, accurate, and directly address their question or request.

User message: ${input.message}`,
    });

    return {
      mainContent: result.text,
    };
  },
});

This tool focuses solely on generating quality content without concern for safety filtering, allowing it to optimize for helpfulness and detail. The parallel execution means safety checking doesn’t slow down content generation.

The pattern uses Promise.all() to execute independent tasks simultaneously, then aggregates results based on the appropriateness evaluation. This approach provides both speed benefits and specialized processing, with each tool focusing on its specific concern without coordination overhead. It pairs well with routing for handling different request types and can be combined with evaluator-optimizer for iterative improvement workflows where multiple evaluators provide parallel feedback.
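
The voting strategy mentioned in the introduction can be built from the same primitives. Below is a minimal sketch, not part of this example: the votingAgent name, the voter count, and the majority threshold are all illustrative assumptions. It runs the appropriateness check several times in parallel and aggregates by majority vote.

import { icepick } from "@hatchet-dev/icepick";
import z from "zod";
import { appropriatenessCheckTool } from "@tools/appropriateness.tool";

// Hypothetical voting agent (illustrative only): runs the same evaluation
// several times in parallel and aggregates by majority vote.
export const votingAgent = icepick.agent({
  name: "voting-agent",
  executionTimeout: "2m",
  inputSchema: z.object({ message: z.string() }),
  outputSchema: z.object({ isAppropriate: z.boolean(), votes: z.number() }),
  description: "Illustrative voting-based parallelization",
  fn: async (input) => {
    const VOTERS = 3;

    // PARALLEL EXECUTION: independent evaluations of the same message
    const results = await Promise.all(
      Array.from({ length: VOTERS }, () =>
        appropriatenessCheckTool.run({ message: input.message })
      )
    );

    // AGGREGATION: simple majority vote across the parallel evaluations
    const votes = results.filter((result) => result.isAppropriate).length;

    return {
      isAppropriate: votes > VOTERS / 2,
      votes,
    };
  },
});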