This guide walks you through installing the Axiom AI SDK and authenticating with its CLI so that your evaluation results are tracked and attributed correctly in the Axiom Console.
## Prerequisites
- Node.js 18 or later
- An Axiom account with a dataset for storing evaluation traces
- A TypeScript or JavaScript project (evaluations work best with TypeScript frameworks like the Vercel AI SDK)
## Install the Axiom AI SDK
Install the `axiom` package in your project:
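```bash
npm install axiom
```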
This package provides:
- The `Eval` function for defining evaluations
- The `Scorer` wrapper for creating custom scorers
- Instrumentation helpers for capturing AI telemetry
- The Axiom AI SDK CLI for running evaluations
## Authenticate with the Axiom AI SDK CLI
The Axiom AI SDK includes a dedicated CLI for running evaluations. This CLI is separate from Axiom’s main data platform CLI and is focused specifically on AI engineering workflows.
Authenticating with the CLI ensures that evaluation runs are recorded in Axiom and attributed to your user account. This makes it easy to track who ran which experiments and compare results across your team.
### Log in with OAuth
Run the login command to authenticate via OAuth:
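For example, invoking the CLI bundled with the `axiom` package through `npx`:

```bash
npx axiom login
```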
This opens your browser and prompts you to authorize the CLI with your Axiom account. Once authorized, the CLI stores your credentials securely on your machine.
### Check authentication status
Verify you’re logged in and see which organization you’re using:
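For example (the `auth status` subcommand name is an assumption; run `npx axiom --help` to confirm the exact command):

```bash
npx axiom auth status
```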
This displays your current authentication state, including your username and active organization.
### Switch organizations
If you belong to multiple Axiom organizations, switch between them:
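For example (again, the exact subcommand name is an assumption):

```bash
npx axiom auth switch
```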
This presents a list of organizations you can access and lets you select which one to use for evaluations.
### Log out
To remove stored credentials:
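For example (subcommand name assumed; confirm with `npx axiom --help`):

```bash
npx axiom logout
```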
## Authenticate with environment variables
If you prefer not to use OAuth (for example, in CI/CD environments), you can authenticate using environment variables:
```bash
export AXIOM_TOKEN="your-api-token"
export AXIOM_DATASET="your-dataset-name"
export AXIOM_ORG_ID="your-org-id"
export AXIOM_URL="https://api.axiom.co"
```
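With these variables set, the CLI authenticates non-interactively, which suits CI pipelines. As a sketch (assuming the evaluation subcommand is `eval`, per the CLI's stated purpose):

```bash
# With the variables above exported, CLI commands use token auth instead of OAuth
npx axiom eval
```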
## Create the Axiom configuration file
Create an `axiom.config.ts` file in your project root to configure how evaluations run:
```ts
import { defineConfig } from 'axiom/ai/config';
import { setupAppInstrumentation } from './src/instrumentation';

export default defineConfig({
  eval: {
    // Glob patterns for evaluation files
    include: ['**/*.eval.{ts,js,mts,mjs,cts,cjs}'],
    exclude: ['**/node_modules/**'],
    // Instrumentation hook - called before running evals
    instrumentation: ({ url, token, dataset, orgId }) =>
      setupAppInstrumentation({ url, token, dataset, orgId }),
    // Timeout for individual test cases (milliseconds)
    timeoutMs: 60_000,
  },
});
```
## Set up instrumentation
The instrumentation function initializes OpenTelemetry tracing so evaluation runs are captured as traces in Axiom. Create the `src/instrumentation.ts` file referenced by `axiom.config.ts` to set up your tracing provider:
```ts
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { NodeTracerProvider, BatchSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { initAxiomAI } from 'axiom/ai';
import type { AxiomEvalInstrumentationHook } from 'axiom/ai/config';

let provider: NodeTracerProvider | undefined;

export const setupAppInstrumentation: AxiomEvalInstrumentationHook = async (options) => {
  // Reuse the provider if it has already been created, so it isn't registered twice
  if (provider) {
    return { provider };
  }

  const exporter = new OTLPTraceExporter({
    url: `${options.url}/v1/traces`,
    headers: {
      Authorization: `Bearer ${options.token}`,
      'X-Axiom-Dataset': options.dataset,
      ...(options.orgId ? { 'X-AXIOM-ORG-ID': options.orgId } : {}),
    },
  });

  provider = new NodeTracerProvider({
    spanProcessors: [new BatchSpanProcessor(exporter)],
  });
  provider.register();

  // Initialize Axiom AI instrumentation
  initAxiomAI({
    tracer: provider.getTracer('axiom-ai'),
  });

  return { provider };
};
```
If you’re already using Axiom for observability in your application, you can reuse your existing tracing setup. The evaluation framework integrates seamlessly with your existing instrumentation.
## Recommended folder structure
Organize your evaluation files for easy discovery and maintenance:
```
your-project/
├── axiom.config.ts
├── src/
│   ├── lib/
│   │   ├── app-scope.ts
│   │   └── capabilities/
│   │       └── capability-name/
│   │           ├── prompts.ts
│   │           ├── schemas.ts
│   │           └── evaluations/
│   │               └── eval-name.eval.ts
│   ├── instrumentation.ts
│   └── ...
```
Name evaluation files with the `.eval.ts` extension so they're automatically discovered by the CLI.
## Verify your setup
Test that everything is configured correctly:
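For example (the `--list` flag is an assumption; run `npx axiom eval --help` to confirm the exact option):

```bash
# Discovers *.eval.* files without executing them (flag name assumed)
npx axiom eval --list
```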
This lists all evaluation files found in your project without running them. If you see your evaluation files listed, you’re ready to start writing evaluations.
## What's next?
Now that you’re set up, learn how to write your first evaluation in Write evaluations.