LLM calls. Tools. Schema.
Zero dependencies.

Built-in search, code execution, and file processing.
Lighter than LangChain. Smarter than raw API calls.

$ npm install agentic-core
```typescript
import { ask } from 'agentic-core'

const result = await ask('Latest AI news today', {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  tools: ['search', 'code'],
})

result.answer      // → summarized response
result.sources     // → [{ title, url, snippet }]
result.codeResults // → [{ code, output }]
result.images      // → ["https://..."]
```

Everything you need, nothing you don't

🔍

Web Search

Tavily-powered search with structured sources and image results. The model searches, you get citations.

💻

Code Execution

Sandboxed JS eval. Let the model compute, transform data, or verify answers with real code.
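Conceptually, sandboxed eval means running model-generated code in an isolated context with a timeout and returning either its output or its error. A minimal sketch of the idea using Node's `vm` module (illustrative only: `node:vm` is not a hardened security boundary, and the library's actual sandbox may work differently):

```typescript
import vm from 'node:vm'

// Run a snippet of JS in a fresh, empty context with a time limit,
// capturing the result or the thrown error as strings.
function runSandboxed(code: string): { output: string; error?: string } {
  try {
    const result = vm.runInNewContext(code, {}, { timeout: 1000 })
    return { output: String(result) }
  } catch (err) {
    return { output: '', error: String(err) }
  }
}
```

Feeding the model's code through a wrapper like this is what lets it verify its own answers with real computation instead of guessing.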

📁

File I/O

Read and write files. The model can process documents, generate outputs, and persist results.

🔄

Multi-round Loop

Up to 10 rounds of tool use. The model reasons, acts, observes — automatically.
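The reason–act–observe cycle can be pictured roughly like this (a simplified sketch, not the library's actual implementation; `callModel`, `runTool`, and `ModelStep` are illustrative stand-ins):

```typescript
type ToolCall = { name: string; args: string }
type ModelStep = { toolCall?: ToolCall; answer?: string }

const MAX_ROUNDS = 10

// Loop: ask the model what to do next; if it answers, stop; if it
// requests a tool, run it and feed the observation back in.
async function agentLoop(
  callModel: (history: string[]) => Promise<ModelStep>,
  runTool: (call: ToolCall) => Promise<string>,
): Promise<string> {
  const history: string[] = []
  for (let round = 0; round < MAX_ROUNDS; round++) {
    const step = await callModel(history)         // reason
    if (step.answer !== undefined) return step.answer
    if (!step.toolCall) break
    const observation = await runTool(step.toolCall) // act
    history.push(observation)                        // observe
  }
  return 'No final answer within the round limit'
}
```

The round cap keeps a confused model from looping forever while still leaving room for multi-step research.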

🌊

SSE Streaming

Real-time status updates during tool execution. Token-by-token answer rendering.
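On the wire, SSE is just newline-delimited frames of `event:` and `data:` fields separated by blank lines. A minimal frame parser, to show what a client consumes (a sketch of the standard format, not this library's client code):

```typescript
// Split a raw SSE chunk into { event, data } records.
// Per the SSE spec, a frame with no "event:" line uses the
// default event name "message".
function parseSSE(chunk: string): { event: string; data: string }[] {
  return chunk
    .split('\n\n')
    .filter((frame) => frame.trim().length > 0)
    .map((frame) => {
      let event = 'message'
      const data: string[] = []
      for (const line of frame.split('\n')) {
        if (line.startsWith('event: ')) event = line.slice(7)
        else if (line.startsWith('data: ')) data.push(line.slice(6))
      }
      return { event, data: data.join('\n') }
    })
}
```

Status updates arrive as named events while answer tokens stream as plain `data:` frames, so a UI can render both from one connection.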

🔌

Any Provider

Anthropic, OpenAI, or any OpenAI-compatible proxy. One interface, swap models freely.

Works with your stack

Anthropic, OpenAI, or bring your own endpoint.

```typescript
// Anthropic
await ask('Explain quantum computing', {
  provider: 'anthropic',
  apiKey: 'sk-ant-...',
  model: 'claude-sonnet-4-20250514',
  tools: ['search', 'code'],
})

// OpenAI
await ask('Explain quantum computing', {
  provider: 'openai',
  apiKey: 'sk-...',
  model: 'gpt-4o',
  tools: ['search', 'code'],
})

// Any OpenAI-compatible endpoint
await ask('Explain quantum computing', {
  provider: 'openai',
  baseUrl: 'https://my-proxy.com/v1',
  apiKey: 'sk-...',
  model: 'any-model',
  tools: ['search'],
})
```

Structured, not stringly-typed

Every call returns a typed result. No parsing, no guessing.

```typescript
interface AgenticResult {
  answer: string              // Final synthesized response
  sources?: Source[]          // Search citations
  images?: string[]           // Image URLs from search
  codeResults?: CodeResult[]  // { code, output, error? }
  files?: FileResult[]        // { path, action, content? }
  toolCalls?: ToolCall[]      // Raw tool invocations
  usage?: { input: number; output: number }
}
```
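Because every field is typed and optional fields are marked as such, consuming a result is plain property access. A small sketch (the types mirror the interface above; `formatResult` is an illustrative helper, not part of the package):

```typescript
interface Source { title: string; url: string; snippet: string }
interface AgenticResult {
  answer: string
  sources?: Source[]
  usage?: { input: number; output: number }
}

// Render an answer with its citations and token usage as plain text.
function formatResult(r: AgenticResult): string {
  const lines = [r.answer]
  for (const s of r.sources ?? []) lines.push(`- ${s.title} (${s.url})`)
  if (r.usage) lines.push(`tokens: ${r.usage.input} in / ${r.usage.output} out`)
  return lines.join('\n')
}
```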

How it compares

|                      | LangChain | agentic-core |
|----------------------|-----------|--------------|
| Install size         | ~50MB     | ~200KB       |
| API surface          | Chains, Agents, Tools, Memory, Callbacks, Runnables... | `ask()` |
| Concepts to learn    | 15+       | 1            |
| Dependencies         | 50+       | 0            |
| Time to first result | ~30 min   | ~2 min       |