Pattern: LLM In Frontend

programming, architecture, ai

LlmInFrontend is an architectural approach for building LLM-powered applications where the frontend owns the AI experience: prompts, orchestration, and even OpenAI SDK usage all live in the browser. The backend’s job is to securely proxy OpenAI requests and expose business data, but it no longer blocks rapid iteration on prompts or AI flows.
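In practice, that split of responsibilities might look like the following sketch. The `/api/llm` route, the `buildPrompt` helper, and the request/response shapes are illustrative assumptions, not a fixed API:

```typescript
// Frontend-owned prompt construction: editing this string ships with a
// normal frontend deploy; no backend change is required.
// NOTE: buildPrompt and the /api/llm route are hypothetical names.
function buildPrompt(userQuestion: string, contextDocs: string[]): string {
  return [
    "You are a helpful assistant for our product.",
    "Context:",
    ...contextDocs.map((doc, i) => `[${i + 1}] ${doc}`),
    `Question: ${userQuestion}`,
  ].join("\n");
}

// The frontend never holds the OpenAI key. It calls the backend proxy,
// which attaches credentials server-side before forwarding the request.
async function askLlm(question: string, docs: string[]): Promise<string> {
  const res = await fetch("/api/llm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: buildPrompt(question, docs) }),
  });
  if (!res.ok) throw new Error(`LLM proxy error: ${res.status}`);
  const { answer } = await res.json();
  return answer;
}
```

The key property is that everything a product team iterates on daily (the prompt text, the context selection, the interaction flow) lives in `buildPrompt` and its callers, all in the frontend codebase.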

See also: LlmInBackend.

When to Use LlmInFrontend

Use the LlmInFrontend pattern when:

- You want to iterate on prompts and AI flows at frontend deploy speed, without waiting on backend releases.
- Frontend developers own the AI experience end to end: prompts, orchestration, and UX.
- The backend can stay lean: a secure proxy plus business-data endpoints.

Avoid this pattern if:

- Your prompts or orchestration logic must stay hidden: anything shipped to the browser can be read by users.
- Multiple clients (web, mobile, third-party API consumers) need to share the same AI flows.
- You need strict server-side control over model usage, cost, and auditing beyond what a thin proxy provides.

LlmInFrontend is best for teams and products that value speed, flexibility, and a modern, collaborative workflow between frontend and backend roles.

How LlmInFrontend Works

In this model, the UI isn’t just a thin client. The frontend manages prompts, orchestrates AI logic, and controls the flow of interaction. The backend acts as a secure gatekeeper, proxying requests and keeping secrets safe.

(Mermaid diagram: the frontend owns prompts and orchestration; requests flow through the backend proxy, which holds the secrets, to OpenAI and back.)
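The gatekeeper side can be sketched as a single translation step: take the client's payload, validate it, and attach the secret. The `toUpstreamRequest` helper below is a hypothetical name, and the model pin is an illustrative choice; the upstream URL and payload shape follow OpenAI's chat completions API:

```typescript
// Server-side: the only place the OpenAI key exists. Prompts come from the
// client; the proxy's job is validation, secrets, and forwarding.
interface ClientLlmRequest {
  prompt: string;
}

// Hypothetical helper: turns a client payload into upstream fetch options.
function toUpstreamRequest(body: ClientLlmRequest, apiKey: string) {
  // Basic gatekeeping: reject missing or oversized prompts before spending money.
  if (!body.prompt || body.prompt.length > 20_000) {
    throw new Error("invalid prompt");
  }
  return {
    url: "https://api.openai.com/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // secret never reaches the browser
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // model choice can be pinned server-side
        messages: [{ role: "user", content: body.prompt }],
      }),
    },
  };
}
```

Because the proxy is this small, it rarely changes, which is exactly why it stops blocking prompt iteration.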

Advantages

- Prompt and flow changes ship with ordinary frontend deploys, so iteration is fast.
- Frontend developers own the AI experience instead of waiting on backend changes.
- The backend stays lean: it proxies requests and keeps secrets safe, nothing more.

Disadvantages

- Prompts and orchestration logic are visible in the client bundle.
- AI logic is harder to reuse across other clients or server-side jobs.
- Cost and rate limiting must be enforced at the proxy, since the client cannot be trusted.

Expanded Architecture

Here’s what a typical LlmInFrontend architecture looks like in practice:

(Mermaid diagram: expanded architecture showing the frontend prompt/orchestration layer, the backend proxy holding secrets, the OpenAI API, and business-data services.)
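A concrete expanded flow, in frontend code, typically sequences a business-data fetch with an LLM call. The routes (`/api/data/orders`, `/api/llm`), the `renderPrompt` helper, and the flow itself are hypothetical examples of what the browser might orchestrate:

```typescript
// Hypothetical template rendering: the frontend decides how business data
// is folded into the prompt.
function renderPrompt(template: string, data: unknown): string {
  return template.replace("{data}", JSON.stringify(data));
}

// Frontend orchestration: the browser sequences a business-data fetch and
// an LLM call. The backend only exposes /api/data/* and the /api/llm proxy.
async function runSummarizeOrders(): Promise<string> {
  const orders = await (await fetch("/api/data/orders")).json();
  const prompt = renderPrompt("Summarize these orders:\n{data}", orders);
  const res = await fetch("/api/llm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { answer } = await res.json();
  return answer;
}
```

Note that changing this flow (adding a retrieval step, reordering calls, rewording the template) touches only frontend code, which is the pattern's central promise.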

Summary

LlmInFrontend architecture is ideal for teams that want to move fast with LLMs, empower frontend developers, and keep sensitive operations secure. If you want to ship AI features faster with a leaner backend, LlmInFrontend is a pragmatic approach.