AI-Assisted Editing Guide
This guide covers how to integrate LLM-powered features into a Verevoir application. Two capabilities are available today: copy generation for rich text fields and asset analysis for auto-generating image alt text and tags. Both follow the same pattern — the framework defines an interface, the app provides the LLM call.
Design Pattern
Every AI feature in Verevoir follows a three-layer separation:
- Interface layer — a package defines a callback shape (CopyAssistFn, AssetAnalyzer). No LLM dependency.
- UI/behaviour layer — a component or manager calls the callback at the right moment. No LLM dependency.
- App layer — the app implements the callback, owns the LLM client, and controls the prompt.
This means:
- Packages stay lightweight with no AI SDK dependency
- The app chooses the model, manages API keys, and shapes the prompts
- AI features are env-gated — they gracefully degrade when no API key is set
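The separation can be sketched as a stub (illustrative only — the real callback types live in the framework packages, and a real app would call its LLM client in the implementation):

```typescript
// Interface layer: a plain callback shape, no AI SDK imported.
// This type is a simplified stand-in for the framework's CopyAssistFn.
type CopyAssistFn = (request: { fieldName: string; hint?: string }) => Promise<string>;

// App layer: the implementation owns the prompt and the LLM call.
// This stub just echoes its inputs so the shape is visible.
const generate: CopyAssistFn = async ({ fieldName, hint }) => {
  return `[draft for ${fieldName}${hint ? ` (${hint})` : ''}]`;
};
```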
Shared LLM Client
Both features can share a single Anthropic client, initialised once and env-gated:
```typescript
import Anthropic from '@anthropic-ai/sdk';

let client: Anthropic | null = null;
let checked = false;

export function getLlmClient(): Anthropic | null {
  if (!checked) {
    checked = true;
    if (process.env.ANTHROPIC_API_KEY) {
      client = new Anthropic();
    }
  }
  return client;
}
```
Both the copy assist action and the asset analyzer receive this client. If the key is missing, both features are silently disabled.
Copy Generation
Per-Field Directives
Add .hint() to any text or rich-text field to attach a natural-language directive:
```typescript
import { defineBlock, text, richText } from '@verevoir/schema';

const speaker = defineBlock({
  name: 'speaker',
  fields: {
    name: text('Name').max(100),
    bio: richText('Bio')
      .hint('Third person, 2-3 sentences. Mention role, expertise, and one notable achievement.')
      .optional(),
  },
});
```
The hint is stored on field.meta.hint and passed to the generate function at suggestion time. Fields without a hint still work — the LLM just doesn't receive a field-specific directive.
Wiring Up the Editor
Wrap your editor in CopyAssistProvider, passing an async generate function:
```tsx
import { useCallback, type ReactNode } from 'react';
import { CopyAssistProvider } from '@verevoir/editor';
import type { CopyAssistRequest } from '@verevoir/editor';
import { generateCopy } from './actions/copy-assist';

function AdminShell({ children }: { children: ReactNode }) {
  const generate = useCallback(
    (request: CopyAssistRequest) => generateCopy(request),
    [],
  );
  return (
    <CopyAssistProvider generate={generate}>
      {children}
    </CopyAssistProvider>
  );
}
```
CopyAssistRequest
The generate function receives:
| Field | Type | Description |
|---|---|---|
| fieldName | string | The field key (e.g. "bio", "abstract") |
| fieldLabel | string | Human-readable label (e.g. "Speaker Bio") |
| hint | string \| undefined | Per-field directive from .hint() |
| currentValue | string | Existing content (markdown), empty if blank |
| context | Record<string, unknown> | Sibling field values from the same block |
The context field lets the LLM use surrounding information — for example, a speaker's name and company help generate their bio.
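Spelled out as a TypeScript shape — a sketch mirroring the table; application code should import the canonical type from @verevoir/editor rather than redeclaring it:

```typescript
// Sketch of the request shape described above (the real type is exported
// from '@verevoir/editor').
interface CopyAssistRequest {
  fieldName: string;
  fieldLabel: string;
  hint?: string;
  currentValue: string;
  context: Record<string, unknown>;
}

// Example: what the editor might send when suggesting a speaker bio.
const example: CopyAssistRequest = {
  fieldName: 'bio',
  fieldLabel: 'Bio',
  hint: 'Third person, 2-3 sentences.',
  currentValue: '',
  context: { name: 'Ada Lovelace', company: 'Analytical Engines Ltd' },
};
```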
Implementing the Generate Function
The app owns the LLM call. This is where the global voice lives:
```typescript
'use server';

import { getLlmClient } from './llm'; // the shared env-gated client shown above (path will vary)

const VOICE = `You are writing copy for a professional technology conference website.
Tone: confident, approachable, and concise. Avoid jargon unless the audience is technical.
Write for the web — short paragraphs, scannable text, active voice.`;

export async function generateCopy(input: {
  fieldName: string;
  fieldLabel: string;
  hint?: string;
  currentValue: string;
  context: Record<string, unknown>;
}): Promise<string> {
  const client = getLlmClient();
  if (!client) {
    // No API key set: copy assist is disabled, return the content unchanged.
    return input.currentValue;
  }

  const contextLines = Object.entries(input.context)
    .filter(([k, v]) => k !== input.fieldName && v != null && v !== '')
    .map(([k, v]) => `${k}: ${String(v).slice(0, 200)}`)
    .join('\n');

  const parts: string[] = [
    VOICE,
    '',
    `You are writing the "${input.fieldLabel}" field.`,
  ];
  if (input.hint) {
    parts.push(`Directive: ${input.hint}`);
  }
  if (contextLines) {
    parts.push('', 'Context from other fields:', contextLines);
  }
  if (input.currentValue) {
    parts.push(
      '', 'Current content:', input.currentValue,
      '', 'Write an improved version. Keep the same meaning but improve clarity and tone.',
    );
  } else {
    parts.push('', 'The field is empty. Write initial content.');
  }
  parts.push('', 'Return ONLY the content in markdown format.');

  const response = await client.messages.create({
    model: 'claude-haiku-4-5-20251001',
    max_tokens: 512,
    messages: [{ role: 'user', content: parts.join('\n') }],
  });

  const text = response.content[0].type === 'text' ? response.content[0].text : '';
  return text.trim();
}
```
Global Voice
The voice is a plain string constant in the app's generate function. It's not a framework concept — the app owns it entirely. Different apps can have different voices. You can also load the voice from a config document or environment variable.
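For example, the voice can fall back to a built-in default when no override is configured — CONFERENCE_VOICE here is a hypothetical variable name, not a framework convention:

```typescript
// Sketch: resolve the global voice from an environment variable, falling
// back to a built-in default. CONFERENCE_VOICE is a made-up name.
const DEFAULT_VOICE =
  'You are writing copy for a professional technology conference website.';

export function getVoice(): string {
  const override = process.env.CONFERENCE_VOICE;
  return override && override.trim() ? override : DEFAULT_VOICE;
}
```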
User Experience
When CopyAssistProvider is present:
- A ✦ button appears in the rich text toolbar
- Clicking it calls the generate function with the field's metadata and current content
- A suggestion panel appears below the editor showing the generated copy
- The user can Accept (replaces content), Regenerate (fetches a new suggestion), or Dismiss (closes the panel)
The workflow is non-destructive — the current content stays visible while reviewing the suggestion.
If no CopyAssistProvider is present, the suggest button is hidden and rich text fields work exactly as before.
Asset Analysis
The AssetAnalyzer Interface
@verevoir/assets defines an AssetAnalyzer interface that runs on upload:
```typescript
interface AnalyzerInput {
  data: Uint8Array;
  contentType: string;
  filename: string;
  existingTags?: string[];
}

interface AnalyzerResult {
  alt: string;
  tags: string[];
}

interface AssetAnalyzer {
  analyze(input: AnalyzerInput): Promise<AnalyzerResult>;
}
```
The AssetManager accepts an optional analyzer. When present, it auto-populates alt and tags on every upload. Failure is non-fatal — the upload succeeds, metadata is just empty.
Implementing an Analyzer
Use vision to describe the image and generate tags:
````typescript
import type { AssetAnalyzer, AnalyzerInput, AnalyzerResult } from '@verevoir/assets';
import type Anthropic from '@anthropic-ai/sdk';

export function createAssetAnalyzer(client: Anthropic): AssetAnalyzer {
  return {
    async analyze(input: AnalyzerInput): Promise<AnalyzerResult> {
      if (!input.contentType.startsWith('image/')) {
        return { alt: '', tags: [] };
      }

      const existingTagsHint = input.existingTags?.length
        ? `\nExisting tags: ${input.existingTags.join(', ')}. Prefer reusing these.`
        : '';

      const response = await client.messages.create({
        model: 'claude-haiku-4-5-20251001',
        max_tokens: 256,
        messages: [{
          role: 'user',
          content: [
            {
              type: 'image',
              source: {
                type: 'base64',
                media_type: input.contentType as 'image/jpeg' | 'image/png' | 'image/gif' | 'image/webp',
                data: Buffer.from(input.data).toString('base64'),
              },
            },
            {
              type: 'text',
              text: `Analyze this image (filename: ${input.filename}) and return JSON:
- "alt": concise alt text for accessibility (1-2 sentences)
- "tags": 3-8 lowercase tags${existingTagsHint}
Return ONLY the JSON object.`,
            },
          ],
        }],
      });

      // Strip an optional markdown code fence before parsing. A parse failure
      // throws, which the AssetManager treats as non-fatal.
      const raw = response.content[0].type === 'text' ? response.content[0].text : '';
      const text = raw.replace(/^```(?:json)?\s*\n?/i, '').replace(/\n?```\s*$/i, '');
      const parsed = JSON.parse(text);
      return {
        alt: String(parsed.alt || ''),
        tags: Array.isArray(parsed.tags)
          ? parsed.tags.map((t: unknown) => String(t).toLowerCase())
          : [],
      };
    },
  };
}
````
Wiring into AssetManager
Pass the analyzer when constructing the AssetManager:
```typescript
import { AssetManager } from '@verevoir/assets';
import { getLlmClient } from './llm'; // shared env-gated client (path will vary)
import { createAssetAnalyzer } from './asset-analyzer';

const llm = getLlmClient();
const manager = new AssetManager({
  storage,
  blobStore,
  analyzer: llm ? createAssetAnalyzer(llm) : undefined,
});
```
When ANTHROPIC_API_KEY is set, every image upload gets auto-generated alt text and tags. When it's not set, the manager works without analysis.
Tag Vocabulary Convergence
The existingTags field in AnalyzerInput receives all tags currently in the system. The LLM is instructed to prefer reusing existing tags, which naturally converges the vocabulary over time without requiring a fixed taxonomy.
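Prompt-side convergence can be reinforced with a small post-processing step in the app's analyzer — a hypothetical helper, not part of @verevoir/assets — that folds near-duplicate spellings into the existing vocabulary:

```typescript
// Sketch: normalize proposed tags against the existing vocabulary so
// near-duplicates like "Key-Note" and "keynote" converge to one spelling.
// normalizeTags is an illustrative helper, not a framework API.
export function normalizeTags(proposed: string[], existing: string[]): string[] {
  // Map a punctuation/case-insensitive key to the canonical existing spelling.
  const canon = new Map(
    existing.map((t) => [t.toLowerCase().replace(/[^a-z0-9]/g, ''), t]),
  );
  const out: string[] = [];
  for (const tag of proposed) {
    const key = tag.toLowerCase().replace(/[^a-z0-9]/g, '');
    const match = canon.get(key) ?? tag.toLowerCase();
    if (!out.includes(match)) out.push(match);
  }
  return out;
}
```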
Architecture
```
┌─────────────────────────────────────────────────────────┐
│ Content Model (schema engine)                           │
│   richText('Bio').hint('Third person, 2-3 sentences')   │
│                                                         │
│ Asset Interface (assets package)                        │
│   AssetAnalyzer { analyze(input) → {alt, tags} }        │
├──────────────────────────┬──────────────────────────────┤
│ Editor                   │ AssetManager                 │
│   CopyAssistProvider     │   analyzer option            │
│   ✦ button → generate()  │   runs on upload             │
├──────────────────────────┴──────────────────────────────┤
│ App (owns LLM client, prompts, and voice)               │
│   generateCopy()          createAssetAnalyzer()         │
│        ↓                        ↓                       │
│   Anthropic API           Anthropic API (vision)        │
└─────────────────────────────────────────────────────────┘
```
Model Selection
Both features use claude-haiku-4-5-20251001 by default for speed and cost efficiency. Asset analysis benefits from Haiku's strong vision capabilities. For higher-quality copy suggestions, consider claude-sonnet-4-6.
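One way to keep the model swappable without code changes is an env override with a sensible default — COPY_ASSIST_MODEL is a hypothetical variable name, not a framework convention:

```typescript
// Sketch: per-feature model selection via an environment variable,
// defaulting to the fast/cheap model named in the text above.
const DEFAULT_MODEL = 'claude-haiku-4-5-20251001';

export function copyAssistModel(): string {
  return process.env.COPY_ASSIST_MODEL ?? DEFAULT_MODEL;
}
```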
Combining with Link Search
CopyAssistProvider and LinkSearchProvider are independent contexts that can be nested in any order:
```tsx
<LinkSearchProvider search={searchDocuments}>
  <CopyAssistProvider generate={generateCopy}>
    {children}
  </CopyAssistProvider>
</LinkSearchProvider>
```
Rich text fields show both the link button and the suggest button when both providers are present.
Future Directions
- AI-driven validation — flag content that doesn't match the field hint
- Multi-field generation — generate several related fields at once
- Streaming suggestions — show tokens as they arrive
- Video analysis — extract keyframes and generate descriptions