Takumi API Released
Takumi's features are now available through a public HTTP API, without going through the Web UI or Slack. You can integrate Takumi directly into CI/CD pipelines, ticket management systems, or your own vulnerability management workflows—making Takumi a more seamless part of your development lifecycle.
Currently, the API supports the "Full Assessment" mode for both whitebox and blackbox assessments. Support for "Scoped Assessment" mode and Autofix will be added in future updates.
What You Can Do with the Takumi API
You can call Takumi's assessments and other features via the HTTP API on your own schedule and from your own tooling. You can also receive webhook notifications when assessments complete.
The following features are currently available. More features and modes will be added over time.
| Feature | Notes |
|---|---|
| Whitebox Assessment | "Full Assessment" mode only |
| Blackbox Assessment | "Full Assessment" mode only |
By triggering Takumi on your own schedule and routing results wherever you need them, you can integrate Takumi into your development lifecycle more deeply than before.
Example Use Cases
Scheduled Assessments
Call the Takumi API on a recurring schedule to run assessments periodically.
Previously, periodic assessments were only available for whitebox assessments on connected GitHub repositories. With the Takumi API, you can now run blackbox assessments on a schedule, or run whitebox assessments against codebases outside of GitHub.
Implementing scheduled assessments via the API is similar to the CI/CD pipeline integration example below.
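As a sketch, a scheduled blackbox run could be dispatched by a small script invoked from cron or a scheduled CI job. Note that the `/workflows/blackbox-assessment/dispatch` endpoint path and the `targets` shape below are assumptions modeled on the whitebox dispatch call shown in the CI/CD example; check the user guide for the actual blackbox request schema.

```typescript
// Hypothetical sketch: dispatch a blackbox assessment on a schedule.
// Endpoint path and request shape are ASSUMPTIONS modeled on the
// whitebox dispatch call in this post — verify against the user guide.

// Build the dispatch request body (kept pure so it is easy to test).
export function buildBlackboxDispatchBody(
  targetUrl: string,
  webhookEndpointId: string,
): string {
  return JSON.stringify({
    input: {
      language: "english",
      targets: [{ url: targetUrl }], // assumed target shape for blackbox
    },
    notification: { webhook_endpoint_ids: [webhookEndpointId] },
  });
}

// Run this from cron or a scheduled CI job.
export async function dispatchScheduledAssessment(): Promise<void> {
  const { TAKUMI_API, TAKUMI_ORG, TOKEN, WEBHOOK_ID } = process.env;
  const res = await fetch(
    `${TAKUMI_API}/v1/o/${TAKUMI_ORG}/workflows/blackbox-assessment/dispatch`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${TOKEN}`,
        "Content-Type": "application/json",
      },
      body: buildBlackboxDispatchBody("https://staging.example.com", WEBHOOK_ID!),
    },
  );
  if (!res.ok) throw new Error(`dispatch failed: HTTP ${res.status}`);
}
```

Because completion arrives via webhook, the script itself can exit as soon as the dispatch request is accepted.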
CI/CD Pipeline Integration
For example, you can call the Takumi API as part of your deployment workflow to automatically run a whitebox assessment after deploying to a staging environment. A GitHub Actions implementation would look roughly like this. The same approach works with any CI/CD platform.
```yaml
jobs:
  deploy-to-staging:
    # ...
  dispatch-takumi-assessment:
    runs-on: ubuntu-latest
    needs: deploy-to-staging
    steps:
      - uses: actions/checkout@v4
      # Upload source code ZIP to Takumi API
      - name: Upload source code
        run: |-
          zip -r source.zip .
          UPLOAD=$(curl -s -H "Authorization: Bearer $TOKEN" \
            "$TAKUMI_API/v1/o/$TAKUMI_ORG/input-files/get-upload-url" \
            --json '{"content_length": '$(stat --format=%s source.zip)'}')
          FILE_ID=$(echo "$UPLOAD" | jq -r '.file_id')
          curl -s -X PUT "$(echo "$UPLOAD" | jq -r '.upload_url')" --upload-file source.zip
          curl -s -H "Authorization: Bearer $TOKEN" \
            "$TAKUMI_API/v1/o/$TAKUMI_ORG/input-files/confirm-upload" \
            --json '{"file_id": "'"$FILE_ID"'"}'
          echo "FILE_ID=$FILE_ID" >> "$GITHUB_ENV"
      # Dispatch whitebox assessment
      - name: Dispatch whitebox assessment
        run: |-
          curl -s -H "Authorization: Bearer $TOKEN" \
            "$TAKUMI_API/v1/o/$TAKUMI_ORG/workflows/whitebox-assessment/dispatch" \
            --json '{
              "input": {
                "language": "english",
                "targets": [{"file_upload": {"file_id": "'"$FILE_ID"'", "archive_type": "zip"}}]
              },
              "notification": {"webhook_endpoint_ids": ["'"$WEBHOOK_ID"'"]}
            }'
      # Completion notifications are delivered via webhook (see next example)
```
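The receiving end of those notifications can be any HTTP endpoint. A minimal Node.js sketch that accepts a delivery and filters for run-completion events might look like this; the listening port is arbitrary, and request authentication (e.g. a shared secret) is deliberately omitted but should be added in production.

```typescript
import http from "node:http";

// Return true for the completion events we care about (pure helper).
export function isWorkflowExited(event: { type?: string }): boolean {
  return event.type === "api.workflow_run.exited";
}

// Minimal webhook receiver. Verifying that the request genuinely
// comes from Takumi is omitted here for brevity.
export function startWebhookServer(port = 8080): http.Server {
  const server = http.createServer((req, res) => {
    let raw = "";
    req.on("data", (chunk) => (raw += chunk));
    req.on("end", () => {
      const event = JSON.parse(raw);
      if (isWorkflowExited(event)) {
        console.log("run finished:", event.data.workflow_run_id);
        // ...retrieve and route the results here, as in the next example
      }
      res.writeHead(204).end();
    });
  });
  server.listen(port);
  return server;
}
```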
Routing Results to Ticket Systems
Using webhook notifications as a trigger, you can build automations that create issues in GitHub Issues, Jira, or other systems based on assessment results. For example, creating a GitHub Issue for each detected vulnerability would look roughly like this:
```typescript
const handleWebhookEvent = async (event: WebhookEvent) => {
  // Example event:
  // {
  //   type: "api.workflow_run.exited",
  //   data: { workflow_id: "whitebox-assessment", workflow_run_id: "TWR..." },
  // }

  // Retrieve the workflow results
  const { output } = await fetch(
    `${TAKUMI_API}/v1/o/${TAKUMI_ORG}/workflows/whitebox-assessment/describe`,
    {
      method: "POST",
      headers,
      body: JSON.stringify({ workflow_run_id: event.data.workflow_run_id }),
    },
  ).then((r) => r.json());
  if (output.kind !== "SUCCESS") {
    console.log("Assessment was aborted:", output.abortedReason);
    return;
  }

  // Download the list of detected vulnerabilities
  const { url } = await fetch(
    `${TAKUMI_API}/v1/o/${TAKUMI_ORG}/workflows/whitebox-assessment/get-artifact-download-url`,
    {
      method: "POST",
      headers,
      body: JSON.stringify({
        workflow_run_id: event.data.workflow_run_id,
        artifact_name: output.successOutput.findings.artifact_name,
      }),
    },
  ).then((r) => r.json());
  const findingsData = await fetch(url).then((r) => r.json());

  // Create a GitHub Issue for each vulnerability
  for (const finding of findingsData.findings) {
    await octokit.rest.issues.create({
      owner: GITHUB_ORG,
      repo: GITHUB_REPO,
      title: `[Takumi] ${finding.title}`,
      body: `${finding.description}\n\nThis vulnerability was detected by Takumi.`,
      labels: ["security"],
    });
  }
};
```
Relationship with Web Console Assessments
The Takumi API and the web console's "Assessment" feature use the same assessment engine, but assessment data is not shared between them. Workflow results from the API do not appear in the web console's "Assessment" tab, and assessments run through the web console cannot be retrieved via the API.
The data is managed independently because the two are built on different design philosophies:
- The web console is designed for easy progress tracking. It manages the end-to-end flow—crawling, selecting features, running inspections—as state transitions within a single "assessment" unit.
- Takumi API is designed for maximum flexibility. It exposes crawling, assessment, and other capabilities as independent workflows, so you can combine them freely without being constrained by the web console's assessment flow.
Getting Started
See the user guide for details.
▼ User Guide: Takumi API
