Module Reference
51 modules, zero dependencies. Use `assay context <query>` for LLM-ready docs on any module.

```sh
assay modules                    # list all modules
assay context "grafana health"   # get detailed docs for LLM
```
Rust Builtins
Available globally in all `.lua` scripts — no `require` needed.
HTTP & Networking
| Module | Key Functions | Description |
|---|---|---|
| `http` | `http.get/post/put/patch/delete/serve` | HTTP client, server, and SSE streaming. `http.serve` response headers accept array values to emit the same header name multiple times (e.g., multiple `Set-Cookie`). |
| `ws` | `ws.connect/send/recv/close` | WebSocket client |
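The `ws` functions above combine into a simple round-trip. A sketch for an assay script; the call style (connection handle passed back to module functions) and the return shapes are assumptions, only the function names come from the table:

```lua
-- WebSocket echo round-trip (illustrative; call style and return shapes assumed).
local conn = ws.connect("ws://localhost:8080/socket")
ws.send(conn, json.encode({ type = "ping" }))
local msg = ws.recv(conn)   -- blocks until a frame arrives (assumed)
log.info("got frame: " .. tostring(msg))
ws.close(conn)
```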
SSE Streaming Example
```lua
http.serve(8080, {
  GET = {
    ["/events"] = function(req)
      return {
        sse = function(send)
          send({ data = "connected" })
          sleep(1)
          send({ event = "update", data = json.encode({ count = 1 }), id = "1" })
          -- stream closes when function returns
        end
      }
    end
  }
})
```
The `send` callback accepts `event`, `data`, `id`, and `retry` fields. SSE headers (`text/event-stream`, `no-cache`, `keep-alive`) are set automatically.
Multi-Value Response Headers
Response handlers can return an array of strings as a header value. Each entry emits the same header name once — required for setting multiple `Set-Cookie` values in a single response, and useful for headers like `Link`, `Vary`, and `Cache-Control`. String values continue to work as before.
```lua
http.serve(8080, {
  GET = {
    ["/login"] = function(req)
      return {
        status = 200,
        headers = {
          ["Set-Cookie"] = {
            "session=abc; Path=/; HttpOnly",
            "csrf=xyz; Path=/",
          },
        },
        json = { ok = true },
      }
    end
  }
})
```
Serialization
| Module | Key Functions | Description |
|---|---|---|
| `json` | `json.parse/encode` | JSON parse and encode |
| `yaml` | `yaml.parse/encode` | YAML parse and encode |
| `toml` | `toml.parse/encode` | TOML parse and encode |
| `base64` | `base64.encode/decode` | Base64 encoding |
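A quick round-trip through these builtins. A sketch for an assay script; exact encoded output is not shown because key order and whitespace may vary:

```lua
-- Round-trip a table through JSON and Base64 (assay script; these
-- modules are globals, no require needed).
local cfg = { name = "web", replicas = 3 }

local as_json = json.encode(cfg)
local back    = json.parse(as_json)
assert.eq(back.replicas, 3)

local packed   = base64.encode(as_json)
local unpacked = base64.decode(packed)
assert.eq(unpacked, as_json)

-- yaml.encode / toml.encode work the same way on plain tables
local as_yaml = yaml.encode(cfg)
```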
System
| Module | Key Functions | Description |
|---|---|---|
| `fs` | `fs.read/write` | Filesystem operations |
| `env` | `env.get` | Environment variables |
| `sleep` | `sleep(n)` | Pause execution |
| `time` | `time()` | Unix timestamp |
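Together these cover simple file templating. A sketch, assuming `fs.read` returns the file contents as a string:

```lua
-- Fill an env var into a file on disk (assay script; filenames illustrative).
local tmpl = fs.read("config.tmpl")
local host = env.get("APP_HOST") or "localhost"
fs.write("config.out", (tmpl:gsub("{{HOST}}", host)))

log.info("rendered config at " .. time())  -- Unix timestamp
sleep(1)  -- give a file watcher a moment to pick it up
```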
Cryptography
| Module | Key Functions | Description |
|---|---|---|
| `crypto` | `crypto.jwt_sign/jwt_decode/hash/hmac/random` | JWT signing & decoding, hashing, HMAC |
| `regex` | `regex.match/find/replace` | Regular expressions |
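A sketch of a JWT round-trip plus hashing. The argument order of `hash`/`hmac`, the `jwt_sign` signature, and the shapes returned by `jwt_decode` and `regex.match` are assumptions, not confirmed signatures:

```lua
-- Sign a short-lived JWT, decode it back, derive digests (assay script).
local secret = env.get("JWT_SECRET")
local token  = crypto.jwt_sign({ sub = "alice", exp = time() + 3600 }, secret)
local claims = crypto.jwt_decode(token)
assert.eq(claims.sub, "alice")

local digest = crypto.hash("sha256", "payload")        -- argument order assumed
local mac    = crypto.hmac("sha256", "payload", secret)
local nonce  = crypto.random(16)

-- Pull the bare token back out of an Authorization header
local bearer = regex.match("Bearer " .. token, "Bearer (\\S+)")
```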
Database
| Module | Key Functions | Description |
|---|---|---|
| `db` | `db.connect/query/execute/close` | SQL — Postgres, MySQL, SQLite |
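A minimal sketch against SQLite. Whether `db.connect` returns a handle with `query`/`execute` methods, the placeholder syntax, and the row shape are all assumptions here:

```lua
-- Open an in-memory SQLite database and run a query (assay script; sketch).
local conn = db.connect("sqlite::memory:")
conn:execute("CREATE TABLE hits (path TEXT, n INTEGER)")
conn:execute("INSERT INTO hits VALUES (?, ?)", { "/", 1 })  -- placeholder style assumed

local rows = conn:query("SELECT path, n FROM hits")
for _, row in ipairs(rows) do
  log.info(row.path .. " -> " .. row.n)
end
conn:close()
```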
Templates & Async
| Module | Key Functions | Description |
|---|---|---|
| `template` | `template.render/render_string` | Jinja2-compatible templates |
| `async` | `async.spawn/spawn_interval` | Async task execution |
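A sketch combining both. The `(interval, fn)` argument order of `spawn_interval` is an assumption:

```lua
-- Render a Jinja2-style string and run background tasks (assay script; sketch).
local body = template.render_string(
  "Hello {{ name }}, you have {{ count }} open alerts.",
  { name = "ops", count = 2 }
)

async.spawn(function()
  log.info(body)         -- one-off background task
end)

async.spawn_interval(30, function()
  log.info("heartbeat")  -- recurring task; argument order assumed
end)
```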
Testing & Logging
| Module | Key Functions | Description |
|---|---|---|
| `assert` | `assert.eq/ne/gt/lt/contains/not_nil/matches` | Assertions |
| `log` | `log.info/warn/error` | Structured logging |
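These two combine naturally into smoke-test scripts. A sketch; the response field names (`status`, `body`, `headers`) are assumptions about the `http` builtin's return shape:

```lua
-- Smoke-test shape: hit an endpoint, assert on it, log the outcome (assay script).
local resp = http.get("http://localhost:8080/healthz")

assert.eq(resp.status, 200)        -- response shape assumed
assert.not_nil(resp.body)
assert.contains(resp.body, "ok")
assert.matches(resp.body, "ok|ready")

log.info("healthz passed with status " .. resp.status)
```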
Stdlib Modules
34 embedded Lua modules loaded via `require("assay.<name>")`.
Most follow the client pattern: `M.client(url, opts)` → client object → `c:method()` (a few, like `assay.prometheus` and `assay.oauth2`, expose module-level entry points instead — see the tables below).
Monitoring & Observability
| Module | `require()` | Description | Client Pattern |
|---|---|---|---|
| `assay.prometheus` | `require("assay.prometheus")` | PromQL queries, alerts, targets | `prom.query(url, expr)` |
| `assay.alertmanager` | `require("assay.alertmanager")` | Alert management, silences | `am.client(url)` |
| `assay.loki` | `require("assay.loki")` | Log push, query, labels | `loki.client(url)` |
| `assay.grafana` | `require("assay.grafana")` | Health, dashboards, datasources | `gf.client(url, opts)` |
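A sketch wiring two of these together, following the constructor patterns in the table. Method names beyond the constructors (like `gf:health()`) and the return shapes are assumptions:

```lua
-- Check Grafana health, then run a PromQL query (sketch).
local grafana = require("assay.grafana")
local prom    = require("assay.prometheus")

local gf = grafana.client("http://grafana:3000", {
  token = env.get("GRAFANA_TOKEN"),
})
local health = gf:health()  -- method name assumed
log.info("grafana db: " .. tostring(health.database))

-- prom.query takes the server URL and a PromQL expression directly
local up = prom.query("http://prometheus:9090", 'up{job="node"}')
```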
Kubernetes & GitOps
| Module | `require()` | Description | Client Pattern |
|---|---|---|---|
| `assay.k8s` | `require("assay.k8s")` | 30+ K8s resource types, CRDs | `k8s.client(url, opts)` |
| `assay.argocd` | `require("assay.argocd")` | Apps, sync, health, projects | `argo.client(url, opts)` |
| `assay.kargo` | `require("assay.kargo")` | Stages, freight, promotions | `kargo.client(url, opts)` |
| `assay.flux` | `require("assay.flux")` | GitRepositories, Kustomizations | `flux.client(url, opts)` |
| `assay.traefik` | `require("assay.traefik")` | Routers, services, middlewares | `traefik.client(url)` |
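The same shape applies across this group: build a client, then call resource methods. A sketch for Argo CD; the `apps()` and `sync()` method names are hypothetical, only the constructor comes from the table:

```lua
-- List Argo CD applications and trigger a sync (sketch; method names hypothetical).
local argocd = require("assay.argocd")
local argo = argocd.client("https://argocd.example.com", {
  token = env.get("ARGOCD_TOKEN"),
})

for _, app in ipairs(argo:apps()) do   -- method name hypothetical
  log.info(app.metadata.name)
end
argo:sync("my-app")                    -- method name hypothetical
```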
Security & Identity
| Module | `require()` | Description | Client Pattern |
|---|---|---|---|
| `assay.vault` | `require("assay.vault")` | KV secrets, transit, PKI | `vault.client(url, opts)` |
| `assay.openbao` | `require("assay.openbao")` | OpenBao (Vault API-compatible) | `bao.client(url, opts)` |
| `assay.certmanager` | `require("assay.certmanager")` | Certificates, issuers, ACME | `cm.client(url, opts)` |
| `assay.eso` | `require("assay.eso")` | ExternalSecrets, SecretStores | `eso.client(url, opts)` |
| `assay.dex` | `require("assay.dex")` | OIDC discovery, JWKS, health | `dex.client(url)` |
| `assay.zitadel` | `require("assay.zitadel")` | OIDC identity, JWT machine auth | `zitadel.client(url, opts)` |
| `assay.ory.kratos` | `require("assay.ory.kratos")` | Ory Kratos identity — login flows, identities, sessions, schemas | `kratos.client({public_url, admin_url})` |
| `assay.ory.hydra` | `require("assay.ory.hydra")` | Ory Hydra OAuth2/OIDC — clients, authorize, tokens, login/consent, JWKs | `hydra.client({public_url, admin_url})` |
| `assay.ory.keto` | `require("assay.ory.keto")` | Ory Keto ReBAC — relation tuples, permission checks, expand | `keto.client({read_url, write_url})` |
| `assay.ory.rbac` | `require("assay.ory.rbac")` | Capability-based RBAC engine over Keto — roles + capabilities, separation of duties, idempotent membership management, `http.serve` middleware | `rbac.policy({namespace, keto, roles, default_role})` |
| `assay.ory` | `require("assay.ory")` | Ory stack wrapper — builds kratos/hydra/keto clients in one call; also re-exports rbac | `ory.connect({kratos_public, hydra_admin, keto_read, ...})` |
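A Vault sketch following the constructor in the table. The `kv_get` method name and the secret shape are hypothetical; only the client pattern and the "KV secrets" description come from the table:

```lua
-- Read a KV secret from Vault (sketch; method name hypothetical).
local vault = require("assay.vault")
local v = vault.client("http://vault:8200", {
  token = env.get("VAULT_TOKEN"),
})
local secret = v:kv_get("secret/data/app")  -- hypothetical method
log.info("secret present: " .. tostring(secret ~= nil))
```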
Infrastructure
| Module | `require()` | Description | Client Pattern |
|---|---|---|---|
| `assay.crossplane` | `require("assay.crossplane")` | Providers, XRDs, compositions | `xp.client(url, opts)` |
| `assay.velero` | `require("assay.velero")` | Backups, restores, schedules | `velero.client(url, opts)` |
| `assay.harbor` | `require("assay.harbor")` | Projects, repos, vulnerability scanning | `harbor.client(url, opts)` |
Data & Storage
| Module | `require()` | Description | Client Pattern |
|---|---|---|---|
| `assay.postgres` | `require("assay.postgres")` | PostgreSQL user/db management | `pg.client(url, opts)` |
| `assay.s3` | `require("assay.s3")` | S3-compatible storage, Sig V4 | `s3.client(url, opts)` |
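An S3 sketch. The opts keys and the `put`/`get` method names are assumptions; only the constructor pattern comes from the table:

```lua
-- Upload and fetch an object from S3-compatible storage (sketch).
local s3 = require("assay.s3")
local c = s3.client("http://minio:9000", {
  access_key = env.get("AWS_ACCESS_KEY_ID"),    -- opts keys assumed
  secret_key = env.get("AWS_SECRET_ACCESS_KEY"),
})

c:put("backups", "latest.json", json.encode({ at = time() }))  -- methods hypothetical
local body = c:get("backups", "latest.json")
```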
Utilities
| Module | `require()` | Description | Client Pattern |
|---|---|---|---|
| `assay.healthcheck` | `require("assay.healthcheck")` | HTTP checks, JSON path, latency | `hc.check(url, opts)` |
| `assay.unleash` | `require("assay.unleash")` | Feature flags, environments | `unleash.client(url, opts)` |
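A sketch of `hc.check`. The opts keys (`timeout`, `json_path`, `expect`) and the result shape are assumptions derived from the table's description:

```lua
-- HTTP health check with a JSON-path expectation (sketch; opts keys assumed).
local hc = require("assay.healthcheck")
local result = hc.check("http://api:8080/healthz", {
  timeout   = 5,
  json_path = "$.status",
  expect    = "ok",
})

if not result.ok then  -- result shape assumed
  error("health check failed")
end
```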
Ory Stack
Complete Ory identity and authorization stack: Kratos (identity management,
login/registration/recovery flows), Hydra (OAuth2 and OpenID Connect server),
and Keto (Zanzibar-style ReBAC). Each module can be used independently, or
combined via the assay.ory convenience wrapper.
Example: Build all three clients at once and check a permission
```lua
local ory = require("assay.ory")
local o = ory.connect({
  kratos_public = "http://kratos-public:4433",
  kratos_admin  = "http://kratos-admin:4434",
  hydra_public  = "https://hydra.example.com",
  hydra_admin   = "http://hydra-admin:4445",
  keto_read     = "http://keto-read:4466",
  keto_write    = "http://keto-write:4467",
})

-- Kratos: introspect a session cookie
local session = o.kratos:whoami(cookie)
log.info("Logged in as: " .. session.identity.traits.email)

-- Hydra: register an OAuth2 client
local client = o.hydra:create_client({
  client_name   = "my-app",
  grant_types   = { "authorization_code", "refresh_token" },
  redirect_uris = { "https://app.example.com/callback" },
})

-- Keto: check a ReBAC permission
local allowed = o.keto:check("apps", "cc", "admin", "user:alice")
if not allowed then error("permission denied") end
```
Temporal Workflow Engine
Full Temporal workflow orchestration with two complementary APIs:
a native gRPC client (Rust builtin, optional temporal feature) for starting and controlling workflows,
and an HTTP REST client (Lua stdlib) for querying the Temporal Web UI.
Native gRPC Client (builtin)
Available globally as `temporal` when built with `--features temporal`.
Connects directly to the Temporal frontend gRPC service. No `require` needed.
| Function | Description |
|---|---|
| `temporal.connect({ url, namespace? })` | Connect to Temporal server, returns reusable client |
| `temporal.start({ url, namespace?, task_queue, workflow_type, workflow_id, input? })` | One-shot: connect + start workflow + return handle |
| `client:start_workflow({ task_queue, workflow_type, workflow_id, input? })` | Start a workflow execution → `{workflow_id, run_id}` |
| `client:signal_workflow({ workflow_id, signal_name, input? })` | Send a signal to a running workflow |
| `client:query_workflow({ workflow_id, query_type, input? })` | Query workflow state (returns decoded JSON) |
| `client:describe_workflow(workflow_id)` | Get status, type, timestamps, history length |
| `client:get_result({ workflow_id, follow_runs? })` | Block until workflow completes, return result |
| `client:cancel_workflow(workflow_id)` | Request graceful cancellation |
| `client:terminate_workflow(workflow_id)` | Force terminate immediately |
gRPC Example: Start and Monitor a Workflow
```lua
-- Connect to Temporal (gRPC, no require needed)
local client = temporal.connect({
  url = "temporal-frontend.infra:7233",
  namespace = "production",
})

-- Start a promotion workflow
local handle = client:start_workflow({
  task_queue    = "promotions",
  workflow_type = "PromoteToEnvironment",
  workflow_id   = "promote-prod-v1.2.0",
  input = {
    version      = "v1.2.0",
    target_env   = "prod",
    triggered_by = "admin@example.com",
  },
})
log.info("Started workflow: " .. handle.run_id)

-- Check status
local info = client:describe_workflow("promote-prod-v1.2.0")
log.info("Status: " .. info.status)  -- RUNNING, COMPLETED, FAILED, etc.

-- Signal the workflow (e.g., approve a step)
client:signal_workflow({
  workflow_id = "promote-prod-v1.2.0",
  signal_name = "approve_deploy",
  input = { approved_by = "admin" },
})

-- Wait for the final result
local result = client:get_result({ workflow_id = "promote-prod-v1.2.0" })
log.info("Workflow completed: " .. json.encode(result))
```
HTTP REST Client (stdlib)
Loaded via `require("assay.temporal")`. Talks to the Temporal HTTP API for querying
workflows, namespaces, schedules, task queues, and history. Works with any Temporal deployment.
| Function | Description |
|---|---|
| `c:health()` | Check Temporal server health |
| `c:system_info()` | Get server version and capabilities |
| `c:namespaces()` | List all namespaces |
| `c:workflows(opts?)` | List workflow executions |
| `c:workflow(workflow_id, run_id?)` | Get workflow execution details |
| `c:workflow_history(workflow_id, run_id?)` | Get full event history |
| `c:signal_workflow(workflow_id, signal_name, input?)` | Signal a workflow via HTTP |
| `c:cancel_workflow(workflow_id)` | Cancel a workflow via HTTP |
| `c:terminate_workflow(workflow_id, reason?)` | Terminate a workflow via HTTP |
| `c:task_queue(name)` | Get task queue pollers and backlog |
| `c:schedules()` | List all schedules |
| `c:search(query)` | Search workflows by visibility query |
| `c:is_workflow_running(workflow_id)` | Check if a workflow is currently running |
| `c:wait_workflow_complete(workflow_id, timeout_secs)` | Poll until workflow completes or timeout |
HTTP Example: Query Workflows and History
```lua
local temporal = require("assay.temporal")
local c = temporal.client("http://temporal-web:8080", {
  namespace = "production",
  api_key   = env.get("TEMPORAL_API_KEY"),
})

-- List running workflows
local result = c:workflows({ query = "WorkflowType='PromoteToEnvironment'" })
for _, exec in ipairs(result.executions or {}) do
  log.info(exec.execution.workflowId .. " — " .. exec.status)
end

-- Get full history for a workflow
local history = c:workflow_history("promote-prod-v1.2.0")
for _, event in ipairs(history.history.events) do
  log.info(event.eventType)
end

-- Check task queue health
local tq = c:task_queue("promotions")
log.info("Active pollers: " .. #tq.pollers)
```
When to Use Which
| Use case | API | Why |
|---|---|---|
| Start a workflow | gRPC (`temporal.connect`) | Proper SDK payload encoding, direct to frontend |
| Signal / query / cancel | gRPC (`client:signal_workflow`) | Reliable, typed payloads |
| List workflows, search | HTTP (`c:workflows`) | Richer query filtering via visibility API |
| View event history | HTTP (`c:workflow_history`) | Full event detail not available via gRPC client |
| Check task queue health | HTTP (`c:task_queue`) | Poller info, backlog counts |
| Schedules | HTTP (`c:schedules`) | Schedule management via REST |
AI Agent & Workflow
Integrate with AI agent platforms, GitHub, Gmail, and Google Calendar — all via native HTTP, zero shell dependencies.
assay.openclaw
OpenClaw AI agent platform integration: invoke tools, send messages, manage state, spawn sub-agents, gate on approvals, and run LLM tasks.
```lua
local openclaw = require("assay.openclaw")
local c = openclaw.client()  -- auto-discovers $OPENCLAW_URL + $OPENCLAW_TOKEN

-- Invoke any OpenClaw tool
local result = c:invoke("message", "send", { target = "#general", message = "Hello!" })

-- Shorthand: send message
c:send("discord", "#alerts", "Service is down!")
c:notify("ops-team", "Deployment complete")

-- Persistent state
c:state_set("last-deploy", { version = "1.2.3", time = time() })
local prev = c:state_get("last-deploy")

-- Diff detection
local diff = c:diff("pr-state", new_snapshot)
if diff.changed then log.info("State changed!") end

-- LLM task
local answer = c:llm_task("Summarize this PR", { model = "claude-sonnet" })

-- Cron jobs
c:cron_add({ schedule = "0 9 * * *", task = "daily-report" })

-- Sub-agents
c:spawn("Fix the login bug", { model = "gpt-4o" })
```
assay.github
GitHub REST API client. PRs, issues, actions, repositories, GraphQL. No `gh` CLI dependency.
```lua
local github = require("assay.github")
local c = github.client()  -- uses $GITHUB_TOKEN

-- Pull requests
local pr      = c:pr_view("owner/repo", 123)
local prs     = c:pr_list("owner/repo", { state = "open" })
local reviews = c:pr_reviews("owner/repo", 123)
c:pr_merge("owner/repo", 123, { merge_method = "squash" })

-- Issues
local issues = c:issue_list("owner/repo", { labels = "bug", state = "open" })
local issue  = c:issue_get("owner/repo", 42)
c:issue_create("owner/repo", "Bug title", "Description", { labels = { "bug" } })
c:issue_comment("owner/repo", 42, "Fixed in PR #123")

-- Actions
local runs = c:runs_list("owner/repo", { status = "completed" })
local run  = c:run_get("owner/repo", 12345)

-- GraphQL
local data = c:graphql("query { viewer { login } }")
```
assay.gmail
Gmail REST API with OAuth2 token auto-refresh. Search, read, reply, send emails.
```lua
local gmail = require("assay.gmail")
local c = gmail.client({
  credentials_file = "/path/to/credentials.json",
  token_file       = "/path/to/token.json",
})

-- Search emails
local emails = c:search("newer_than:1d is:unread", { max = 20 })

-- Read a message
local msg = c:get("message-id-here")

-- Reply to an email
c:reply("message-id", { body = "Thanks for the update!" })

-- Send new email
c:send("user@example.com", "Subject", "Email body")

-- List labels
local labels = c:labels()
```
assay.gcal
Google Calendar REST API with OAuth2 token auto-refresh. Events CRUD, calendar list.
```lua
local gcal = require("assay.gcal")
local c = gcal.client({
  credentials_file = "/path/to/credentials.json",
  token_file       = "/path/to/token.json",
})

-- List upcoming events
local events = c:events({ timeMin = "2026-04-05T00:00:00Z", maxResults = 10 })

-- Get specific event
local event = c:event_get("event-id")

-- Create event
c:event_create({
  summary = "Team standup",
  start   = { dateTime = "2026-04-06T09:00:00Z" },
  ["end"] = { dateTime = "2026-04-06T09:30:00Z" },
})

-- Update / delete
c:event_update("event-id", updated_event)
c:event_delete("event-id")

-- List all calendars
local calendars = c:calendars()
```
assay.oauth2
Google OAuth2 token management. File-based credentials loading, automatic access token refresh, token persistence.
```lua
local oauth2 = require("assay.oauth2")

-- Load credentials and token from default paths
-- (~/.config/gog/credentials.json and ~/.config/gog/token.json)
local auth = oauth2.from_file()

-- Or specify custom paths
local auth = oauth2.from_file("/path/to/credentials.json", "/path/to/token.json")

-- Refresh access token
auth:refresh()

-- Persist updated token back to file
auth:save()

-- Get auth headers for API calls
local headers = auth:headers()
-- Returns: {Authorization = "Bearer ...", ["Content-Type"] = "application/json"}

-- Use with http builtins
local resp = http.get("https://www.googleapis.com/calendar/v3/calendars/primary/events", {
  headers = auth:headers(),
})
```
assay.email_triage
Email classification and triage. Deterministic rule-based categorization or OpenClaw LLM-assisted classification into action, reply, and FYI buckets.
```lua
local email_triage = require("assay.email_triage")
local gmail = require("assay.gmail")

local gc = gmail.client({
  credentials_file = "/path/to/credentials.json",
  token_file       = "/path/to/token.json",
})

-- Fetch recent emails
local emails = gc:search("newer_than:1d", { max = 50 })

-- Deterministic categorization (no LLM needed)
local buckets = email_triage.categorize(emails)
log.info("Needs reply: " .. #buckets.needs_reply)
log.info("Needs action: " .. #buckets.needs_action)
log.info("FYI: " .. #buckets.fyi)

-- LLM-assisted triage via OpenClaw (smarter classification)
local openclaw = require("assay.openclaw")
local oc = openclaw.client()
local smart_buckets = email_triage.categorize_llm(emails, oc)
```
| Module | `require()` | Description | Client Pattern |
|---|---|---|---|
| `assay.openclaw` | `require("assay.openclaw")` | AI agent tools, state, approve, LLM | `openclaw.client(url?, opts?)` |
| `assay.github` | `require("assay.github")` | PRs, issues, actions, GraphQL | `github.client(opts?)` |
| `assay.gmail` | `require("assay.gmail")` | Email search, read, reply, send | `gmail.client(opts)` |
| `assay.gcal` | `require("assay.gcal")` | Calendar events CRUD | `gcal.client(opts)` |
| `assay.oauth2` | `require("assay.oauth2")` | OAuth2 token management, auto-refresh | `oauth2.from_file(creds?, token?)` |
| `assay.email_triage` | `require("assay.email_triage")` | Email classification, triage buckets | `email_triage.categorize(emails)` |
Custom Modules
Place `.lua` files in `./modules/` (project-local) or `~/.assay/modules/` (global),
or set `ASSAY_MODULES_PATH` to override the global directory.
They are auto-discovered and appear in `assay modules` output.

```text
./modules/
  myapi.lua            # require("assay.myapi")
~/.assay/modules/
  company.lua          # require("assay.company")
```
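A minimal skeleton for a custom module, following the stdlib client pattern. The module name `myapi` comes from the listing above; the endpoint and fields inside it are made up for illustration:

```lua
-- ./modules/myapi.lua — hypothetical custom module skeleton.
local M = {}

function M.client(url, opts)
  opts = opts or {}
  local c = { url = url, token = opts.token }

  function c:status()
    -- http is an assay builtin, available without require
    return http.get(self.url .. "/status", {
      headers = self.token and { Authorization = "Bearer " .. self.token } or nil,
    })
  end

  return c
end

return M
```

Scripts then load it like any stdlib module: `local myapi = require("assay.myapi")` followed by `myapi.client(url, opts)`.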
For LLM agents: `llms.txt` · `llms-full.txt`