Typebulb runs apps in markdown files called bulbs. Bulbs are quick to make & share, run online or locally via the CLI, & can call an LLM at runtime.
A bulb is composed of code blocks that provide the minimum viable structure for apps that may use runtime inference. Each block is optional and maps to an editor tab:
| Block | Purpose | AI editable |
|---|---|---|
| code.tsx | App logic and UI (TypeScript/TSX) | Yes |
| styles.css | Styling | Yes |
| index.html | Usually just an HTML stub or fragment, but it can be a whole page, or left blank for console apps | Yes |
| data.txt | Data chunks for your code to process (JSON, CSV, XML, YAML, or plain text); multiple chunks separated by two blank lines | No |
| infer.md | Prompt for runtime LLM calls via tb.infer() | Yes |
| insight.json | Output from tb.infer(), read via tb.insight() | Yes |
| config.json | Dependencies, app description, inference modal settings | Partial |
| notes.md | Persistent context for the AI assistant | No |
| server.ts | Server-side Node.js code (CLI only) | N/A |
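To make this concrete, here's a minimal sketch of a bulb file. The block-labeling convention (fenced blocks named by filename) is my assumption based on the "markdown file with named code blocks" description below, not something this page specifies; the tb.data call is covered in the tb API table further down.

````markdown
```code.tsx
// Hypothetical console app: index.html is left blank, so this just logs.
// tb is implicitly available in every bulb.
const who = tb.data(0); // first chunk of the Data tab
console.log(`Hello, ${who}!`);
```

```data.txt
world
```
````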
The Data tab is for content that your code processes, whether structured (JSON, CSV, XML, YAML) or unstructured (plain text). The AI gets a read-only, schema-aware, truncated view. Multiple chunks are separated by two blank lines.
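For instance, a sketch of reading two chunks; zero-based chunk indexing and tb.json parsing a chunk as JSON (vs. tb.data returning raw text) are assumptions inferred from the tb API table below:

```ts
// data.txt contents (two chunks, separated by two blank lines):
//
//   {"cities": ["Paris", "Tokyo", "Lagos"]}
//
//
//   Free-form notes in plain text.

// Assumed: tb.json(n) parses chunk n as JSON; tb.data(n) returns it as raw text.
const parsed = tb.json(0) as { cities: string[] };
const notes = tb.data(1);
console.log(`${parsed.cities.length} cities; notes: ${notes}`);
```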
Why a separate tab instead of pasting data into chat or code?
The Infer tab is a prompt for runtime LLM calls. When your code calls tb.infer(), the LLM receives: your inference instructions, an example output (from Insight), your code (so it knows the expected JSON shape), and the data to process. A modal shows the user what will be sent; confirm and the LLM streams its response.
The Insight tab holds JSON that serves as both a working example (showing the LLM what shape you expect) and the current output (updated after tb.infer() completes). Your code reads it via tb.insight(). There's no separate JSON Schema. The code IS the schema.
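Putting the two tabs together, a minimal sketch; the Sentiment shape is invented for illustration, and in practice your Insight tab would hold a matching example:

```ts
// Assumed Insight JSON shape for this example; the real shape is whatever
// example JSON you put in the Insight tab.
type Sentiment = { label: "positive" | "negative" | "neutral"; score: number };

async function classify() {
  // Sends the Infer instructions, the Insight example, your code, and the
  // data; the user confirms the modal, then the LLM streams its response.
  await tb.infer(tb.data(0));

  // Once inference completes, the Insight tab holds the new output.
  const result = tb.insight() as Sentiment;
  console.log(result.label, result.score);
}
```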
Free-tier users are rate-limited. Users with their own API keys have no Typebulb-imposed limits (within reason). Runtime inference always uses the end user's API keys, or Typebulb's courtesy models, never your API keys.
The Notes tab holds persistent context for the AI, carried across conversations and clones. It's useful for API docs or examples the LLM isn't familiar with. So much of what makes LLMs produce good code is guidance from markdown documents.
Yes. Export your bulb to a .bulb.md file, which is just a markdown file with named code blocks. Run with:
`npx typebulb my-bulb.bulb.md`
The typebulb CLI compiles and serves any .bulb.md file on localhost. Features:
- File watching (disable with --no-watch)
- tb.fs.read() and tb.fs.write() for local files
- .env and .env.local auto-loaded from cwd
- server.ts block; exported functions become callable from the browser via tb.server.<name>() (e.g., export async function query(...) → await tb.server.query(...); see the sketch after this list)
- tb.server.log(...) prints to the CLI's stdout
- --server runs only the server.ts section in Node, skipping the web server. Bulbs with only server.ts (no code.tsx) use this mode automatically.

Because bulbs are just markdown files with code blocks, you can edit bulb files locally with your favourite AI code editor, such as Claude Code, Codex, or Cursor.
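Expanding the query(...) example above into a sketch; the file path and filtering logic are invented, and only the export-and-call pattern comes from this page (using tb inside server.ts is my assumption, since bulbs implicitly have access to tb):

```ts
// server.ts — runs in Node via the CLI.
import { readFile } from "node:fs/promises";

export async function query(term: string): Promise<string[]> {
  tb.server.log("query:", term); // prints to the CLI's stdout
  const lines = (await readFile("index.txt", "utf8")).split("\n");
  return lines.filter((line) => line.includes(term));
}
```

```ts
// code.tsx — call the exported function by name from the browser:
async function search() {
  const hits = await tb.server.query("typescript");
  console.log(hits);
}
```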
You can re-import your bulbs into Typebulb.
Bulbs implicitly have access to the tb const. This is useful when client code needs to interact with the host or access special features of Typebulb.
| API | Description | Runs on |
|---|---|---|
| tb.data(n) / tb.json(n) | Access data chunks from the Data tab | Both |
| tb.dump(...) | Log lazy/device-backed tensor values to the console | Both |
| tb.copy(text) | Copy to clipboard | Both |
| tb.url() | Get the canonical bulb URL | Both |
| tb.proxy(url) | Proxy CDN URL for Web Worker/WASM same-origin loading | Both |
| tb.insight() | Read the current Insight JSON | Both |
| tb.infer(data?) | Call an LLM at runtime (uses Infer tab instructions) | Client |
| tb.fs.read(path) / tb.fs.write(path, content) | Local filesystem access | CLI |
| tb.server.<name>(...) | Call exported server-side functions by name | CLI |
| tb.server.log(...) | Built-in: prints to CLI stdout, falls back to console.log on web | Both |
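A couple of these in combination, as a sketch; the CDN URL is a placeholder:

```ts
// Copy the bulb's canonical URL, e.g. behind a share button.
tb.copy(tb.url());

// Route a third-party script through the same-origin proxy so it can be
// loaded as a Web Worker (browsers require same-origin worker scripts).
const worker = new Worker(tb.proxy("https://cdn.example.com/heavy-task.js"));
worker.postMessage({ start: true });
```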
OpenAI, Anthropic, Gemini, and OpenRouter. Bring your own keys. Your keys are used for the AI assistant and the tb.infer() calls that you make, but not by anyone else who runs your bulb.
| Mode | AI sees your bulb | AI will edit | Use case |
|---|---|---|---|
| Code | Yes | Yes | Default. Sees code, HTML, CSS, notes, truncated data, errors, and logs. |
| Ask | Yes | No | Same context as Code, for when you want to discuss your code rather than edit it. |
| Chat | No | No | General conversation, unrelated to your bulb. |
| Raw | No | No | No system prompt. Good for prompt testing. |
In Code mode, if the AI makes a patch error or generates TypeScript errors, Typebulb automatically replies to it with precise details on how to fix them.
Yes, with caveats. Web search is most reliable with native provider integrations (OpenAI/Anthropic/Gemini); OpenRouter support can be model-dependent and less consistent.
This is web search, not browsing. It works best for well-indexed topics (news, popular libraries, recent releases), and may not retrieve obscure npm or GitHub pages from a URL alone.
If you need the AI to use specific documentation, paste excerpts into the Notes tab; for structured data, use the Data tab.
No; if anything, we'll move to even more agentic workflows. Every month that goes by, I use AI at a higher level and write explicit code less and less.
It's 2026; just use the AI. It can add console.logs to your code, and logs are automatically fed back to the AI.
But if you really want to debug manually, we automatically generate TypeScript source maps for code.tsx.

Typebulb is heavily optimized for building responsive bulbs that work beautifully on both mobile and desktop. Alt-Enter toggles between a mobile-friendly portrait layout and a full-screen layout. For building bulbs, Typebulb works on mobile, but the editor is heavily optimized for desktop usage.
A bulb has three possible visibilities, adjustable in the toolbar/menu.
When you share a bulb, the user will see any changes you make once you save the bulb and they refresh their browser. The user has a read-only view of your bulb; if they try to edit it, a clone is made.
This is what we store:
We don't use your data for training nor sell it to third parties. Your AI providers have their own data policies.
Ben Albahari. I worked as a Program Manager at Microsoft, and co-authored the C# in a Nutshell series (with my brother, Joe Albahari, the author of the popular LINQPad tool). The core of Typebulb is extremely economical to maintain. I do, however, incur costs for bulbs with inference, so I rate-limit that to what I can afford. If it blows up in popularity, I'll seek investment. Inference costs are dropping by a factor of 10 per year, which is an amazing tailwind to have.
Obviously I love all the thinking bulbs that use inference, as these types of apps were impossible just a couple of years ago. But the rocket balancer bulb is pretty cool too. The rocket-balancing algorithm was written by my friend Brian Beckman; I pasted his physics equations into the chat, and (mostly) Opus wrote the 3D simulation of it.