What You Can Do With Lamina

Lamina lets you run packaged AI workflows, called apps, over a simple HTTP API. With the public Apps API you can:
  • discover apps available to your workspace
  • inspect each app’s input parameters
  • execute apps asynchronously
  • receive results via webhook or poll for outputs
  • get images, videos, or text as output
This API is a good fit for:
  • backend automations
  • internal tools
  • agentic systems
  • custom product integrations

How The API Works

Every integration follows the same basic lifecycle:
1. Discover an app: call GET /api/apps to find something your workspace can run.
2. Inspect inputs: call GET /api/apps/{appId} to learn the app's input contract.
3. Start execution: call POST /api/apps/{appId}/executions?webhook=<your_url> with your input payload.
4. Get results: receive results via webhook callback, or poll GET /api/executions/{executionId}.
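The four lifecycle calls can be sketched as request builders. This is a minimal illustration, not the official client: the base URL, the app ID `app_123`, the execution ID `exec_456`, and the JSON input shape are all placeholder assumptions — only the paths and methods come from the steps above.

```python
import urllib.request

BASE = "https://api.lamina.example"   # placeholder base URL, not the real host
API_KEY = "lma_your_api_key"          # placeholder key

def build_request(method, path, body=None):
    """Construct (but do not send) an authenticated API request."""
    req = urllib.request.Request(f"{BASE}{path}", data=body, method=method)
    req.add_header("x-api-key", API_KEY)
    if body is not None:
        req.add_header("Content-Type", "application/json")
    return req

# Step 1: discover apps available to the workspace
discover = build_request("GET", "/api/apps")
# Step 2: inspect one app's input contract (app_123 is illustrative)
inspect = build_request("GET", "/api/apps/app_123")
# Step 3: start an execution, registering a webhook for the result
start = build_request(
    "POST",
    "/api/apps/app_123/executions?webhook=https://example.com/hook",
    body=b'{"prompt": "A product on white background"}',
)
# Step 4: poll for the outcome (exec_456 is illustrative)
poll = build_request("GET", "/api/executions/exec_456")
```

Each `build_request` call returns a ready-to-send `urllib.request.Request`; in a real integration you would pass it to `urllib.request.urlopen` and parse the JSON response.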

Authentication

All public endpoints require a workspace-scoped API key, sent in one of two headers:
x-api-key: lma_your_api_key
or:
Authorization: Bearer lma_your_api_key
Read Authentication for the full model.
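As a quick sketch, the two header styles above are interchangeable ways to send the same key (the key value here is the placeholder from the example above):

```python
api_key = "lma_your_api_key"  # placeholder workspace-scoped key

# Option 1: the x-api-key header
headers_x_api_key = {"x-api-key": api_key}

# Option 2: the Authorization header with a Bearer token
headers_bearer = {"Authorization": f"Bearer {api_key}"}
```

Use whichever fits your HTTP client; both carry the same workspace-scoped credential.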

Parameter Types

When providing inputs to an app, each parameter has a type:
| Type    | What to send               | Example                          |
| ------- | -------------------------- | -------------------------------- |
| text    | A string value             | "A product on white background"  |
| options | One of the listed options  | "Bright" (from the options list) |
| url     | A publicly accessible URL  | "https://example.com/photo.jpg"  |
For options parameters, send the label, not an internal ID or hidden value.
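Putting the three types together, an input payload might look like the sketch below. The field names ("prompt", "style", "image_url") are illustrative assumptions — read the real parameter names from GET /api/apps/{appId} for the app you are running.

```python
import json

# Hypothetical inputs for an app with one parameter of each type.
inputs = {
    "prompt": "A product on white background",     # text: a plain string
    "style": "Bright",                             # options: send the label, not an ID
    "image_url": "https://example.com/photo.jpg",  # url: must be publicly accessible
}

# Serialize for the POST body of the execution request.
body = json.dumps(inputs)
```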

Asynchronous By Design

Apps are executed asynchronously. That means:
  • the run endpoint returns quickly with an executionId
  • the execution may take seconds or many minutes depending on the workflow
  • your integration should receive the webhook callback or poll for the final result
Common statuses:
  • queued
  • running
  • completed
  • failed
Read Handle Long-Running Executions for production recommendations.
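A polling loop over the statuses above might be sketched like this. The helper is transport-agnostic: `get_execution` stands in for whatever function fetches GET /api/executions/{executionId} and returns the execution record as a dict with a "status" field — that record shape is an assumption, not a documented schema.

```python
import time

def wait_for_result(get_execution, execution_id, interval=2.0, timeout=600.0):
    """Poll until the execution reaches a terminal status.

    get_execution: callable taking an execution ID and returning a dict
    with a "status" field (queued / running / completed / failed).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        record = get_execution(execution_id)
        if record["status"] in ("completed", "failed"):
            return record
        time.sleep(interval)  # wait before the next poll
    raise TimeoutError(f"execution {execution_id} did not finish in {timeout}s")

# Usage with a fake fetcher that walks through the lifecycle statuses:
fake_responses = iter([
    {"status": "queued"},
    {"status": "running"},
    {"status": "completed", "outputs": ["https://example.com/result.png"]},
])
result = wait_for_result(lambda eid: next(fake_responses), "exec_456", interval=0)
```

In production you would keep a nonzero interval (or exponential backoff) and treat "failed" as an error path rather than a success.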

For AI Agents

If you’re building with Claude Code, Cursor, or other AI-powered tools, fetch the machine-readable API spec at /llms.txt. This gives your agent everything it needs to discover apps, run them, and handle results — in a format optimized for LLMs.

Start Here