picoflow.io

Lesson 2: Hello world


Let's build a super simple flow: TutorialFlow.

We want to create a conversation that captures a user's name. If they enter "John Doe", the name is rejected; any other name is accepted and the conversation ends.


Set up your controller.

  • The controller below is reachable at `http://localhost:8080/ai/chat`.
  • It registers the TutorialFlow we are going to implement.
  • It also registers the two LLM models that are available for use.
TIP

Notice that we inject `FlowEngine` into the controller.

The flow is kicked off by

this.flowEngine.run

The full code is below:

@Controller('ai')
export class TutorialController {
  constructor(private flowEngine: FlowEngine) {
    // register flows
    flowEngine.registerFlows({ TutorialFlow });

    //register models
    flowEngine.registerModel(ChatGoogleGenerativeAI, {
      model: 'gemini-2.0-flash',
      temperature: CoreConfig.llmTemperature,
      apiKey: CoreConfig.GeminiKey,
      maxRetries: CoreConfig.llmRetry,
    });

    flowEngine.registerModel(ChatGoogleGenerativeAI, {
      model: 'gemini-2.5-flash',
      temperature: CoreConfig.llmTemperature,
      apiKey: CoreConfig.GeminiKey,
      maxRetries: CoreConfig.llmRetry,
    });
  }
  //.................................................................
  @HttpCode(HttpStatus.OK)
  @Post('chat')
  async chat(
    @Res() res: FastifyReply,
    @Body(K.message) userMessage: string,
    @Body(K.flowName) flowName: string,
    @Body('config') config: object,
    @Headers(K.ChatSessionID) sessionId?: string,
  ) {
    await this.flowEngine.run(res, flowName, userMessage, sessionId, config);
  }
}
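
With the controller in place, you can exercise the endpoint with a plain HTTP request. The exact body keys and header name come from the `K` constants, which are not shown in this lesson, so `message`, `flowName`, and `Chat-Session-ID` below are assumptions:

```shell
# Hypothetical request; the body keys and the session header name are
# guesses based on the K.* constants referenced in the controller.
curl -X POST http://localhost:8080/ai/chat \
  -H 'Content-Type: application/json' \
  -H 'Chat-Session-ID: demo-session-1' \
  -d '{
        "message": "Hi, my name is Alice",
        "flowName": "TutorialFlow",
        "config": {}
      }'
```

Omitting the session header should start a fresh session; passing one back lets the engine resume the stored conversation.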

Set up your TutorialFlow.

Next, create your TutorialFlow, which derives from the base class `Flow`.

Info

The most important thing is to define the Steps: the possible sequences of actions that can be executed.

export class TutorialFlow extends Flow {
  public constructor() {
    super(TutorialFlow);
  }

  protected defineSteps(): Step[] {
    return [
      new HelloStep(this, true).useMemory('default'),
      new EndStep(this).useModel('gemini-2.5-flash').useMemory('default'),
    ];
  }
}
  • There are two steps: the main HelloStep and a built-in EndStep.
  • Each step can use or share a memory space; above, both steps share the default memory space.
  • Each step uses the default LLM (the first LLM registered in the controller, i.e. 'gemini-2.0-flash'), or it can specify which LLM to use for its execution.
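
The default-model behavior in the last bullet amounts to a simple fallback: a step's explicit `.useModel(...)` choice wins; otherwise the engine falls back to the first model registered in the controller. The function below is purely illustrative (picoflow's internals are not shown in this lesson, and these names are hypothetical):

```typescript
// Illustrative only: a sketch of the "first registered model is the
// default" rule described above. Not part of picoflow's API.
function resolveModel(
  stepModel: string | undefined,
  registered: string[],
): string {
  // An explicit .useModel(...) choice wins; otherwise fall back to the
  // first model registered in the controller.
  return stepModel ?? registered[0];
}

const registered = ['gemini-2.0-flash', 'gemini-2.5-flash'];
console.log(resolveModel(undefined, registered)); // HelloStep: default model
console.log(resolveModel('gemini-2.5-flash', registered)); // EndStep: explicit
```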
Tip

Notice that we don’t need to define the possible execution paths (graph edges) that LangGraph typically requires. This is a significant advantage because business logic often produces a large number of potential step transitions, which can lead to cluttered and hard-to-maintain code when using LangGraph.


Implementing your Step

Steps are individual, composable units of LLM logic. They are the building blocks of an agentic flow. A Step has these capabilities:

  1. It can contain its own prompt when it is activated.
  2. It can define multiple tools for use by itself or other steps.
  3. It can decide which tools can be invoked when it is activated.
  4. It can store its states to be used by itself or other steps.
  • The HelloStep class provides its prompt: getPrompt()
  • Defines new tool(s) for use by itself or other steps: defineTool()
  • Specifies which tools can be invoked when it is active: getTool()
  • Note that the handler for a tool call matching the tool name is implemented in
protected async capture_name(){}
  • The example also shows returning a rejection message to the LLM, asking it to capture a new name.
  • Once the name is captured, it is stored persistently in the step's session memory.
  • Finally, when everything is OK, the step transitions to the pre-built EndStep, which ends the conversation and marks the session as completed.

The HelloStep class:

export class HelloStep extends Step {
  constructor(flow: Flow, isActive?: boolean) {
    super(HelloStep, StepKind.LlmTool, flow, isActive);
  }

  public getPrompt(): string {
    const prompt = `
        Ask the name of the user.
        When you get the name of the user, call tool capture_name.
        Greet the user. Chat with them.
    `;
    return prompt;
  }

  public defineTool(): ToolType[] {
    return [
      {
        name: 'capture_name',
        description: 'Capture name of user',
        schema: z.object({
          name: z.string().describe('Name of user'),
        }),
      },
    ];
  }

  public getTool(): string[] {
    return ['capture_name'];
  }

  protected async capture_name(tool: ToolCall): Promise<ToolResponseType> {
    if (tool.args?.name === 'John Doe') {
      // Reject: send a message back to the LLM asking for a different name.
      return {
        step: HelloStep,
        tool: 'Cannot accept John Doe, please choose a different name.',
      };
    }
    // Accept: persist the name in session memory, then transition to EndStep.
    this.saveState({ name: tool.args?.name });
    return EndStep;
  }
}
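
The accept/reject branch inside capture_name can be isolated as a tiny pure function, which makes the transition rule easy to reason about and unit-test. This is a standalone sketch, not part of picoflow's API:

```typescript
// Standalone sketch of HelloStep's transition rule: "John Doe" is rejected
// with a retry message; any other name ends the conversation.
type Decision =
  | { kind: 'retry'; message: string }
  | { kind: 'end'; name: string };

function decideOnName(name: string): Decision {
  if (name === 'John Doe') {
    return {
      kind: 'retry',
      message: 'Cannot accept John Doe, please choose a different name.',
    };
  }
  return { kind: 'end', name };
}

console.log(decideOnName('John Doe').kind); // "retry"
console.log(decideOnName('Alice').kind); // "end"
```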

Wrapping up

We’ve covered a lot in this lesson. A developer with minimal LLM experience can now focus on a structured process: defining a flow, defining the steps within that flow, and determining how transitions occur between steps. This greatly simplifies the implementation of agentic, flow-based applications—especially conversational business chat-bots.

Next, we’ll take a quick look at the session information we store and retrieve in the NoSQL database.