Complete reference for the Canvas App SDK. The SDK is available via window.Agentastic or by importing from the bridge script.
Getting Started
Include the bridge script in your Canvas app:
<script src="agentastic://sdk/bridge.js"></script>
Or wait for the bridge to be ready:
document.addEventListener('agentastic:ready', () => {
  console.log('SDK ready:', Agentastic);
});
Core API
State Management
getState()
Get the current app state.
const state = Agentastic.getState();
console.log(state.count);
patchState(patch)
Update app state with a partial patch (shallow merge).
const { count } = Agentastic.getState();
Agentastic.patchState({ count: count + 1 });
Events
emit(event, payload)
Emit an event to the host.
Agentastic.emit('form:submit', { name: 'Alice' });
Common events:
- app:ready - App finished loading
- app:error - Error occurred
- form:submit - Form submitted
- state:patch - Request state update
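For example, an app might announce readiness and report failures using the events listed above (a sketch; the payload shapes here are illustrative, not prescribed by the SDK):

// Tell the host the app has finished loading
window.addEventListener('load', () => {
  Agentastic.emit('app:ready', {});
});

// Surface unexpected errors to the host
window.addEventListener('error', (event) => {
  Agentastic.emit('app:error', { message: event.message });
});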
on(event, handler)
Subscribe to events from the host. Returns an unsubscribe function.
const unsubscribe = Agentastic.on('state:changed', (newState) => {
  console.log('State updated:', newState);
});
// Later: unsubscribe();
Host events:
- state:changed - State was updated
- theme:changed - Theme changed
- app:focus - App gained focus
- app:blur - App lost focus
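As a sketch, an app could pause background work while it is not focused (pausePolling and resumePolling are hypothetical helpers in your app):

Agentastic.on('app:blur', () => pausePolling());   // app lost focus
Agentastic.on('app:focus', () => resumePolling()); // app regained focus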
Actions
run(actionName, args, options)
Execute a named action from your manifest.
const result = await Agentastic.run('greet', { name: 'Alice' });
console.log(result); // "Hello, Alice!"
Options:
| Option | Type | Description |
|---|---|---|
| scopeFor | 'full' \| 'model' \| 'app' | Response visibility scope (default: 'app') |
Response Visibility Scoping
Actions can return structured results with different visibility levels, letting you control which data is sent to the AI model and which is visible only to your app.
Configure in manifest.yaml:
actions:
  - name: fetchUserData
    description: Fetch user profile data
    responseScoping:
      enabled: true
      modelFields:
        - name
        - email
        - summary
      appOnlyFields:
        - internalId
        - createdAt
        - rawData
Using scoped responses:
// Get full scoped result
const full = await Agentastic.run('fetchUserData', { id: 123 }, { scopeFor: 'full' });
console.log(full.structuredContent); // Data for model AND app
console.log(full.content); // Text summary for model only
console.log(full.meta); // Data for app only
// Get model-only view (for sending to AI)
const modelView = await Agentastic.run('fetchUserData', { id: 123 }, { scopeFor: 'model' });
// Get app view (default) - merged structuredContent + meta
const appView = await Agentastic.run('fetchUserData', { id: 123 }, { scopeFor: 'app' });
Context
context
Environment information about the runtime.
const ctx = Agentastic.context;
console.log(ctx.locale); // 'en-US'
console.log(ctx.platform); // 'macos'
console.log(ctx.platformVersion); // 'macOS 14.0'
console.log(ctx.displayMode); // 'inline' | 'slide' | 'modal' | 'fullscreen'
console.log(ctx.colorScheme); // 'light' | 'dark'
console.log(ctx.hostVersion); // '1.2.3'
console.log(ctx.sdkVersion); // '0.2.0'
console.log(ctx.isDevelopment); // false
| Property | Type | Description |
|---|---|---|
| locale | string | User's locale (BCP 47 format) |
| platform | 'macos' \| 'windows' \| 'linux' | Host platform |
| platformVersion | string | Platform version string |
| displayMode | string | Current display mode |
| colorScheme | 'light' \| 'dark' | Current color scheme |
| hostVersion | string | Host application version |
| sdkVersion | string | SDK protocol version |
| isDevelopment | boolean | Whether running in dev mode |
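For example, an app might adapt its startup behavior to the runtime environment (a sketch using only the properties above):

const ctx = Agentastic.context;

// Follow the host's color scheme on first render
document.body.classList.toggle('dark', ctx.colorScheme === 'dark');

// Emit extra diagnostics only in development builds
if (ctx.isDevelopment) {
  console.debug(`Canvas app on ${ctx.platform} (${ctx.platformVersion}), SDK ${ctx.sdkVersion}`);
}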
Capabilities
capabilities
Map of capability names to availability.
if (Agentastic.capabilities['llm.stream']) {
  // Use streaming
} else {
  // Fall back to non-streaming
}
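A common pattern is to gate an optional feature on a capability and degrade gracefully. A sketch using the llm methods documented below (the shape of the non-streaming result depends on the host):

async function complete(prompt) {
  if (Agentastic.capabilities['llm.stream']) {
    // Accumulate streamed tokens into a single string
    let text = '';
    for await (const chunk of Agentastic.llm.stream({ user: prompt })) {
      if (chunk.delta) text += chunk.delta;
    }
    return text;
  }
  // Fall back to a single non-streaming completion
  return Agentastic.llm.ask({ user: prompt });
}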
LLM Namespace
AI completion capabilities.
llm.ask(opts)
Request an LLM completion.
const result = await Agentastic.llm.ask({
  system: 'You are a helpful assistant.',
  user: 'Explain quantum computing briefly.',
  model: 'gpt-4.1-mini',
  temperature: 0.7,
  maxTokens: 500
});
| Option | Type | Description |
|---|---|---|
| system | string | System prompt |
| user | string | User message |
| model | string | Model identifier |
| temperature | number | Sampling temperature (0-2) |
| maxTokens | number | Maximum tokens |
| outputSchema | object | JSON Schema for structured output |
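outputSchema asks the host for structured output. A sketch (the exact shape of the returned value is host-dependent):

const extraction = await Agentastic.llm.ask({
  user: 'Extract the city and country from: "She flew from Paris, France."',
  outputSchema: {
    type: 'object',
    properties: {
      city: { type: 'string' },
      country: { type: 'string' }
    },
    required: ['city', 'country']
  }
});
console.log(extraction); // structured data matching the schema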
llm.stream(opts)
Request a streaming LLM completion. Returns an async iterator.
let output = '';
for await (const chunk of Agentastic.llm.stream({
  user: 'Write a story about a robot.',
  model: 'gpt-4.1-mini'
})) {
  if (chunk.delta) {
    output += chunk.delta;
  }
  if (chunk.done) {
    console.log('Complete!');
  }
}
Agent Namespace
Integration with the host's AI agent and conversation system.
agent.getContext()
Get the current agent context.
const context = await Agentastic.agent.getContext();
console.log(context.conversationId); // Current conversation ID
console.log(context.model); // Active model
console.log(context.selection); // Selected text (if any)
console.log(context.isAgentMode); // Whether agent mode is active
console.log(context.isVisionEnabled);// Whether vision is enabled
| Property | Type | Description |
|---|---|---|
| conversationId | string \| null | Current conversation identifier |
| model | string \| null | Active model name |
| selection | string \| null | Currently selected text |
| activeDocument | string \| null | Active document path |
| isAgentMode | boolean | Whether agent mode is active |
| isVisionEnabled | boolean | Whether vision is enabled |
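For instance, an app could pick up the user's current selection and suggest a prompt without sending it (a sketch combining getContext with setInput, described below):

const agentCtx = await Agentastic.agent.getContext();
if (agentCtx.selection) {
  // Pre-fill the input so the user can review before sending
  await Agentastic.agent.setInput(`Explain this selection: ${agentCtx.selection}`);
}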
agent.registerTool(tool)
Register a custom tool for the agent to use.
Agentastic.agent.registerTool({
  name: 'calculateSum',
  description: 'Calculate the sum of numbers',
  parameters: {
    type: 'object',
    properties: {
      numbers: { type: 'array', items: { type: 'number' } }
    }
  }
});
agent.unregisterTool(toolName)
Unregister a previously registered tool.
Agentastic.agent.unregisterTool('calculateSum');
agent.onToolCall(handler)
Handle tool calls from the agent. Returns an unsubscribe function.
const unsubscribe = Agentastic.agent.onToolCall(async (tool, args) => {
  if (tool === 'calculateSum') {
    return args.numbers.reduce((a, b) => a + b, 0);
  }
});
Notes:
- Call initBridge() before registering tool handlers so the host can deliver tool requests.
- Return JSON-serializable data. Throw to surface an error back to the agent.
- Tools are only callable when agent mode is active and your Canvas app is visible or detached.
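Putting these pieces together, a minimal tool lifecycle might look like the sketch below (how initBridge() is exposed depends on your bridge setup):

initBridge();

// Describe the tool to the agent (same schema as the registerTool example above)
Agentastic.agent.registerTool({
  name: 'calculateSum',
  description: 'Calculate the sum of numbers',
  parameters: {
    type: 'object',
    properties: {
      numbers: { type: 'array', items: { type: 'number' } }
    }
  }
});

// Fulfil calls for the tool; the return value must be JSON-serializable
const stopHandling = Agentastic.agent.onToolCall(async (tool, args) => {
  if (tool === 'calculateSum') {
    return args.numbers.reduce((a, b) => a + b, 0);
  }
});

// When the tool is no longer needed:
// stopHandling();
// Agentastic.agent.unregisterTool('calculateSum');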
agent.onThinking(handler)
Subscribe to thinking state changes.
const unsubscribe = Agentastic.agent.onThinking((isThinking) => {
  if (isThinking) {
    showLoadingIndicator();
  } else {
    hideLoadingIndicator();
  }
});
agent.ask(prompt, opts)
Request an agent action.
const response = await Agentastic.agent.ask('Summarize this document', {
  context: documentText,
  model: 'gpt-4o',
  tools: ['web_search', 'calculateSum']
});
agent.sendMessage(content, options)
Send a message to the conversation. This lets your app inject messages as either the user or the assistant.
// Send as user and trigger AI response
await Agentastic.agent.sendMessage('Please analyze this data', {
  role: 'user',
  autoSend: true,
  triggerResponse: true
});

// Send as assistant (no AI response triggered)
await Agentastic.agent.sendMessage('Here is the analysis...', {
  role: 'assistant',
  autoSend: true
});

// Populate input field for user review (don't send)
await Agentastic.agent.sendMessage('Draft message for review', {
  autoSend: false
});
| Option | Type | Default | Description |
|---|---|---|---|
| role | 'user' \| 'assistant' | 'user' | Message sender role |
| autoSend | boolean | true | Send immediately or populate input |
| triggerResponse | boolean | false | Trigger AI response after sending |
agent.setInput(content)
Populate the input field without sending. Useful for suggesting messages that the user can review and edit.
await Agentastic.agent.setInput('Suggested prompt: Explain this concept...');
UI Namespace
UI utilities for notifications, modals, theming, and window control.
ui.toast(opts)
Show a toast notification.
Agentastic.ui.toast({
  message: 'Saved successfully!',
  type: 'success',
  duration: 3000
});
| Option | Type | Default | Description |
|---|---|---|---|
| message | string | required | Toast message text |
| type | 'info' \| 'success' \| 'warning' \| 'error' | 'info' | Toast type |
| duration | number | 3000 | Duration in milliseconds |
ui.modal(opts)
Show a modal dialog. Returns a promise that resolves when the modal is closed.
const result = await Agentastic.ui.modal({
  title: 'Confirm',
  content: 'Are you sure?',
  size: 'small'
});
| Option | Type | Default | Description |
|---|---|---|---|
| title | string | required | Modal title |
| content | string | required | Modal content |
| size | 'small' \| 'medium' \| 'large' | 'medium' | Modal size |
| closable | boolean | true | Whether user can close |
ui.getTheme()
Get current theme information.
const theme = await Agentastic.ui.getTheme();
console.log(theme.colorScheme); // 'light' or 'dark'
console.log(theme.primaryColor); // '#007AFF'
console.log(theme.accentColor); // '#5856D6'
console.log(theme.backgroundColor); // '#FFFFFF'
console.log(theme.textColor); // '#000000'
ui.onThemeChange(handler)
Subscribe to theme changes. Returns an unsubscribe function.
const unsubscribe = Agentastic.ui.onThemeChange((theme) => {
  document.body.className = theme.colorScheme;
});
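One approach is to map the theme onto CSS custom properties so styles track the host automatically (a sketch using the fields returned by getTheme):

function applyTheme(theme) {
  const root = document.documentElement;
  root.style.setProperty('--background', theme.backgroundColor);
  root.style.setProperty('--text', theme.textColor);
  root.style.setProperty('--accent', theme.accentColor);
  document.body.className = theme.colorScheme; // 'light' or 'dark'
}

applyTheme(await Agentastic.ui.getTheme()); // apply once on load
Agentastic.ui.onThemeChange(applyTheme);    // keep in sync afterwards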
ui.openExternal(url)
Open a URL in the user's default browser. Only http:// and https:// URLs are allowed.
const opened = await Agentastic.ui.openExternal('https://example.com');
if (opened) {
  console.log('URL opened in browser');
}
ui.setHeight(pixels)
Notify the host of your app's preferred height.
// Request 400px height
Agentastic.ui.setHeight(400);
// Dynamically adjust based on content
const contentHeight = document.body.scrollHeight;
Agentastic.ui.setHeight(contentHeight);
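To keep the height in sync as content changes, a ResizeObserver (a standard browser API, not part of the SDK) works well:

// Report the content height whenever the layout changes
const resizeObserver = new ResizeObserver(() => {
  Agentastic.ui.setHeight(document.body.scrollHeight);
});
resizeObserver.observe(document.body);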
ui.setSize(size)
Notify the host of your app's preferred dimensions.
// Set both dimensions
Agentastic.ui.setSize({ width: 600, height: 400 });
// Set only height
Agentastic.ui.setSize({ height: 300 });
ui.requestExpand()
Request to expand the app to a fullscreen modal view.
await Agentastic.ui.requestExpand();
// App is now in fullscreen mode
ui.requestCollapse()
Request to collapse the app from expanded view back to inline.
await Agentastic.ui.requestCollapse();
// App is back to inline mode
ui.onDisplayModeChange(handler)
Subscribe to display mode changes. The handler is called immediately with the current mode.
const unsubscribe = Agentastic.ui.onDisplayModeChange((mode) => {
  console.log('Display mode:', mode); // 'inline' | 'slide' | 'modal' | 'fullscreen'
  if (mode === 'fullscreen') {
    showExpandedLayout();
  } else {
    showCompactLayout();
  }
});
| Mode | Description |
|---|---|
| inline | App is displayed inline within content |
| slide | App is in a slide-out panel |
| modal | App is in a modal dialog |
| fullscreen | App is expanded to fullscreen |
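A sketch of an expand/collapse toggle that combines onDisplayModeChange with requestExpand and requestCollapse (toggleExpand is a hypothetical button handler in your app):

let currentMode = 'inline';
Agentastic.ui.onDisplayModeChange((mode) => {
  currentMode = mode;
});

async function toggleExpand() {
  if (currentMode === 'fullscreen') {
    await Agentastic.ui.requestCollapse(); // back to inline
  } else {
    await Agentastic.ui.requestExpand();   // go fullscreen
  }
}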
Filesystem Namespace
Sandboxed file system access. See Filesystem API for complete documentation.
fs.read(path)
Read a file's contents.
const data = await Agentastic.fs.read('data/notes.json');
fs.write(path, data)
Write data to a file.
await Agentastic.fs.write('data/notes.json', JSON.stringify(notes));
fs.exists(path)
Check if a file exists.
const exists = await Agentastic.fs.exists('config.json');
fs.list(path)
List directory contents.
const files = await Agentastic.fs.list('data');
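A typical load-or-initialize pattern combining these calls (a sketch; the paths are examples, and fs.read is assumed to return text):

// Load saved notes, or start empty on first run
let notes = [];
if (await Agentastic.fs.exists('data/notes.json')) {
  notes = JSON.parse(await Agentastic.fs.read('data/notes.json'));
}

// ...modify notes, then persist them
notes.push({ text: 'Remember to ship', createdAt: Date.now() });
await Agentastic.fs.write('data/notes.json', JSON.stringify(notes));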
Network Namespace
Network request capabilities.
net.fetch(url, options)
Make HTTP requests with the host's network stack.
const response = await Agentastic.net.fetch('https://api.example.com/data', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ query: 'test' })
});
const data = await response.json();
IPC Namespace
Inter-app communication (future).
ipc.broadcast(channel, payload)
Broadcast a message to all apps.
Agentastic.ipc.broadcast('data:updated', { id: 123 });
ipc.send(appId, message)
Send a direct message to another app.
const response = await Agentastic.ipc.send('com.example.other-app', {
  type: 'request',
  data: { ... }
});
ipc.subscribe(channel, handler)
Subscribe to broadcast messages.
Agentastic.ipc.subscribe('data:updated', (payload, senderId) => {
  console.log(`Received from ${senderId}:`, payload);
});
Next Steps
- SDK Overview - Introduction to Canvas apps
- Filesystem API - Sandboxed file access
- Testing & Debugging - Debug your Canvas apps