Built-in search, code execution, and file processing.
Lighter than LangChain. Smarter than raw API calls.
```ts
import { ask } from 'agentic-core'

const result = await ask('Latest AI news today', {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  tools: ['search', 'code'],
})

result.answer      // → summarized response
result.sources     // → [{ title, url, snippet }]
result.codeResults // → [{ code, output }]
result.images      // → ["https://..."]
```
Tavily-powered search with structured sources and image results. The model searches, you get citations.
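A minimal sketch of consuming the structured sources. The `Source` shape and the `formatCitations` helper below are inferred from the fields shown in the quick-start snippet; they are illustrative, not a published type:

```typescript
// Shape inferred from the quick-start example (assumption, not a published type).
interface Source {
  title: string
  url: string
  snippet: string
}

// Turn result.sources into numbered citation lines for display.
function formatCitations(sources: Source[]): string[] {
  return sources.map((s, i) => `[${i + 1}] ${s.title} (${s.url})`)
}

const sources: Source[] = [
  { title: 'AI Weekly', url: 'https://example.com/ai', snippet: 'Latest model releases' },
]
console.log(formatCitations(sources))
// → [ '[1] AI Weekly (https://example.com/ai)' ]
```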
Sandboxed JS eval. Let the model compute, transform data, or verify answers with real code.
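The library's sandbox internals aren't shown here; as a rough illustration of the idea, here is a minimal sketch using Node's built-in `vm` module (this is not the library's actual sandbox):

```typescript
import vm from 'node:vm'

// Run model-generated code in an isolated context with no access to
// process, require, or the host scope, and return the evaluated result.
function runSandboxed(code: string): unknown {
  const context = vm.createContext({}) // empty global object
  return vm.runInContext(code, context, { timeout: 1000 })
}

console.log(runSandboxed('[1, 2, 3].reduce((a, b) => a + b, 0)')) // → 6
```

Note that `node:vm` alone is not a hardened security boundary; a production sandbox typically adds process isolation on top.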
Read and write files. The model can process documents, generate outputs, and persist results.
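An illustrative round trip of what the files tool enables, written with plain `node:fs` rather than the library's tool interface: persist a generated output, then read it back in a later round.

```typescript
import { writeFileSync, readFileSync } from 'node:fs'
import { tmpdir } from 'node:os'
import { join } from 'node:path'

// Persist a model-generated output to disk...
const path = join(tmpdir(), 'agentic-demo.txt')
writeFileSync(path, 'summary: 3 sources found')

// ...then read it back, as a later tool round could.
const persisted = readFileSync(path, 'utf8')
console.log(persisted) // → 'summary: 3 sources found'
```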
Up to 10 rounds of tool use. The model reasons, acts, observes — automatically.
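The loop itself can be sketched as below. All names here are illustrative (this is the shape of the internal reason-act-observe cycle, not the library's API):

```typescript
type ToolCall = { tool: string; input: string }
type Step = { toolCall?: ToolCall; answer?: string }

// Reason → act → observe, capped at maxRounds (the library caps at 10).
function runAgentLoop(
  decide: (observations: string[]) => Step,   // "reason": model picks next step
  execute: (call: ToolCall) => string,        // "act": run the chosen tool
  maxRounds = 10,
): string {
  const observations: string[] = []
  for (let round = 0; round < maxRounds; round++) {
    const step = decide(observations)
    if (step.answer !== undefined) return step.answer          // done
    if (step.toolCall) observations.push(execute(step.toolCall)) // "observe"
  }
  return 'Max rounds reached without a final answer.'
}
```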
Real-time status updates during tool execution. Token-by-token answer rendering.
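A sketch of what consuming those updates looks like. `onStatus` and `onToken` are hypothetical hook names for illustration; check the library's docs for the real option names:

```typescript
type Hooks = {
  onStatus: (s: string) => void // e.g. "Searching…", "Running code…"
  onToken: (t: string) => void  // one answer fragment at a time
}

// Emit status lines during tool execution, then stream the answer
// token by token while accumulating the full text.
function renderStream(statuses: string[], tokens: string[], hooks: Hooks): string {
  for (const s of statuses) hooks.onStatus(s)
  let answer = ''
  for (const t of tokens) {
    hooks.onToken(t)
    answer += t
  }
  return answer
}
```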
Anthropic, OpenAI, or any OpenAI-compatible proxy. One interface, swap models freely.
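One way the "swap models freely" idea plays out in config. The quick-start snippet only shows `provider` and `apiKey`; the `baseURL` field for a custom endpoint is an assumption here:

```typescript
// Shared config shape (baseURL is an assumed option for proxies).
type ProviderConfig = {
  provider: 'openai' | 'anthropic'
  apiKey?: string
  baseURL?: string
}

const openai: ProviderConfig = { provider: 'openai', apiKey: process.env.OPENAI_API_KEY }
const anthropic: ProviderConfig = { provider: 'anthropic', apiKey: process.env.ANTHROPIC_API_KEY }
const proxy: ProviderConfig = { provider: 'openai', baseURL: 'https://my-proxy.example/v1' }
```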
Every call returns a typed result. No parsing, no guessing.
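The result shape below is inferred from the fields shown in the quick-start snippet; the exact field types are assumptions:

```typescript
// Inferred from the quick-start example; exact types are assumptions.
interface AskResult {
  answer: string
  sources: { title: string; url: string; snippet: string }[]
  codeResults: { code: string; output: string }[]
  images: string[]
}

// With a typed result, field access is checked at compile time:
const result: AskResult = {
  answer: 'Summary of today\'s AI news.',
  sources: [{ title: 'AI Weekly', url: 'https://example.com/ai', snippet: 'Latest releases' }],
  codeResults: [],
  images: [],
}
console.log(result.sources[0].url) // no parsing, no guessing
```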