Typebulb FAQ

What's Typebulb?

Typebulb runs apps in markdown files called bulbs. Bulbs are quick to make and share, run online or locally via the CLI, and can call an LLM at runtime.

What comprises a bulb?

A bulb is composed of code blocks that provide the minimum viable structure for apps, including apps with runtime inference. Each block is optional, and maps to an editor tab:

| Block | Purpose | AI editable |
|---|---|---|
| code.tsx | App logic and UI (TypeScript/TSX) | Yes |
| styles.css | Styling | Yes |
| index.html | Usually just an HTML stub or fragment, but can be a whole page, or blank for console apps | Yes |
| data.txt | Data chunks for your code to process (JSON, CSV, XML, YAML, or plain text); multiple chunks separated by two blank lines | No |
| infer.md | Prompt for runtime LLM calls via tb.infer() | Yes |
| insight.json | Output from tb.infer(), read via tb.insight() | Yes |
| config.json | Dependencies, app description, inference modal settings | Partial |
| notes.md | Persistent context for the AI assistant | No |
| server.ts | Server-side Node.js code (CLI only) | N/A |

What's the Data tab?

The Data tab is for content that your code processes, whether structured (JSON, CSV, XML, YAML) or unstructured (plain text). The AI gets a read-only, schema-aware, truncated view. Multiple chunks are separated by two blank lines.
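To make the chunking rule concrete, here is a sketch of how a multi-chunk data.txt can be split and consumed. The split logic is implemented inline for illustration; in a real bulb you would just call tb.data(n) or tb.json(n), and Typebulb's actual parser may differ in edge cases.

```typescript
// A data.txt with two chunks, separated by two blank lines ("\n\n\n").
const dataTxt = [
  '{"users": [{"name": "Ada"}]}',  // chunk 0: JSON
  "name,score\nAda,99\nGrace,97",  // chunk 1: CSV
].join("\n\n\n");

// Illustrative split; tb.data(n) would hand you chunks[n] directly.
const chunks = dataTxt.split(/\n{3,}/).map((c) => c.trim());

// tb.json(0) would be equivalent to parsing chunk 0 yourself.
const users = JSON.parse(chunks[0]).users;
console.log(users[0].name);                 // → Ada
console.log(chunks[1].split("\n").length);  // → 3 (CSV header + 2 rows)
```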

Why a separate tab instead of pasting data into chat or code?

  1. Runtime inference — the data can be consumed by an LLM at runtime, via the instructions in Infer, to generate Insight.
  2. Context conservation — Typebulb auto-truncates for the LLM without losing structure. The LLM doesn't need every row of a CSV.
  3. Correctness & auditability — By forcing the LLM to write code that processes the data, you can verify the output. Pasting data directly into a prompt means praying the LLM doesn't introduce mistakes.

How does inference work?

The Infer tab is a prompt for runtime LLM calls. When your code calls tb.infer(), the LLM receives: your inference instructions, an example output (from Insight), your code (so it knows the expected JSON shape), and the data to process. A modal shows the user what will be sent; confirm and the LLM streams its response.

The Insight tab holds JSON that serves as both a working example (showing the LLM what shape you expect) and the current output (updated after tb.infer() completes). Your code reads it via tb.insight(). There's no separate JSON Schema. The code IS the schema.
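The round trip above can be sketched as follows. The real `tb` object is injected by Typebulb, so it is stubbed here to make the call shapes concrete; the Insight shape (a `themes` array) and the function name are made-up examples, not part of Typebulb's API.

```typescript
// The Insight JSON type your code expects; the code IS the schema.
type Insight = { themes: string[] };

// The Insight tab starts out holding a working example of the shape.
let insightTab: Insight = { themes: ["example-theme"] };

// Stub of the injected tb object, for illustration only.
const tb = {
  // Real tb.infer(): sends the Infer prompt, your code, and the data to
  // the LLM (after the user confirms via the modal), then updates Insight.
  async infer(_data?: string): Promise<void> {
    insightTab = { themes: ["pricing", "reliability"] }; // stubbed LLM result
  },
  // Real tb.insight(): reads the current Insight tab JSON.
  insight(): Insight {
    return insightTab;
  },
};

async function summarizeFeedback(feedback: string): Promise<string[]> {
  await tb.infer(feedback);       // runtime LLM call
  return tb.insight().themes;     // read the updated Insight JSON
}

summarizeFeedback("raw feedback text").then((themes) => console.log(themes));
```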

Free tier users are rate limited. Users with their own API keys have no Typebulb-imposed limits (within reason). Runtime inference always uses the user's API keys, or Typebulb's courtesy models, never your API keys.

What's the Notes tab?

Persistent context for the AI, carried across conversations and clones. Useful for API docs or examples the LLM isn't familiar with. So much of what makes LLMs produce good code is guidance from markdown documents.

Can I run bulbs locally?

Yes. Export your bulb to a .bulb.md file, which is just a markdown file with named code blocks. Run with:

npx typebulb my-bulb.bulb.md
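A .bulb.md file might look like the sketch below. The exact convention for naming the fenced blocks is an assumption for illustration, not Typebulb's documented format:

````markdown
# my-bulb

```tsx code.tsx
document.body.textContent = "Hello from a bulb";
```

```css styles.css
body { font-family: sans-serif; }
```
````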

The typebulb CLI compiles and serves any .bulb.md file on localhost.

Because bulbs are just markdown files with code blocks, you can edit bulb files locally with your favourite AI code editor, such as Claude Code, Codex, or Cursor.

You can re-import your bulbs into Typebulb.

tb.* API

Bulbs implicitly have access to the tb const. This is useful when the client code needs to interact with the host, or access special features of Typebulb.

| API | Description | Runs on |
|---|---|---|
| tb.data(n) / tb.json(n) | Access data chunks from the Data tab | Both |
| tb.dump(...) | Log lazy/device-backed tensor values to the console | Both |
| tb.copy(text) | Copy to clipboard | Both |
| tb.url() | Get the canonical bulb URL | Both |
| tb.proxy(url) | Proxy a CDN URL for Web Worker/WASM same-origin loading | Both |
| tb.insight() | Read the current Insight JSON | Both |
| tb.infer(data?) | Call an LLM at runtime (uses Infer tab instructions) | Client |
| tb.fs.read(path) / tb.fs.write(path, content) | Local filesystem access | CLI |
| tb.server.&lt;name&gt;(...) | Call exported server-side functions by name | CLI |
| tb.server.log(...) | Built-in: prints to CLI stdout, falls back to console.log on web | Both |
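The tb.server bridge can be sketched like this, with both halves in one file and the wiring stubbed so it runs anywhere. In a real bulb, readConfig would be an export in server.ts and Typebulb would wire tb.server.readConfig to it by name; the function name and the wiring shown here are assumptions.

```typescript
// Would live in server.ts as an export in a real bulb (CLI only).
async function readConfig(path: string): Promise<string> {
  // A real implementation could use tb.fs.read(path) here.
  return `stub contents of ${path}`;
}

// Stub of the client-side tb object; Typebulb does this mapping at runtime.
const tb = {
  server: {
    readConfig,
    log: (...args: unknown[]) => console.log(...args), // built-in
  },
};

// Client code calls the exported server function by name.
tb.server.readConfig("./settings.json").then((text) => tb.server.log(text));
```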

AI Assistant

What AI providers do you support?

OpenAI, Anthropic, Gemini, and OpenRouter. Bring your own keys. Your keys are used for the AI assistant and for tb.infer() calls that you make, but not by anyone else who runs your bulb.

What are the chat modes?

| Mode | AI sees your bulb | AI will edit | Use case |
|---|---|---|---|
| Code | Yes | Yes | Default. Sees code, HTML, CSS, notes, truncated data, errors, and logs. |
| Ask | Yes | No | Same context as Code, but for when you just want to discuss, not edit, code. |
| Chat | No | No | General conversation, unrelated to your bulb. |
| Raw | No | No | No system prompt. Good for prompt testing. |

In Code mode, Typebulb automatically replies to the AI if it makes a patch error or generates TypeScript errors, with precise details on how to fix them.

Can the AI assistant search the web?

Yes, with caveats. Web search is most reliable with native provider integrations (OpenAI/Anthropic/Gemini); OpenRouter support can be model-dependent and less consistent.

This is web search, not browsing. It works best for well-indexed topics (news, popular libraries, recent releases), and may not retrieve obscure npm or GitHub pages from a URL alone.

If you need the AI to use specific documentation, paste excerpts into the Notes tab; for structured data, use the Data tab.

Will you provide AI for auto complete?

No; if anything, we'll move to even more agentic workflows. Every month that goes by, I use AI at a higher level, writing explicit code less and less.

How do I debug?

It's 2026; just use the AI. It can add console.log calls to your code, and logs are automatically fed back to the AI.

But if you really want to debug manually, we automatically generate TypeScript source maps.

Does Typebulb work on mobile?

Typebulb is heavily optimized for building responsive bulbs that work beautifully on both mobile and desktop; alt-enter toggles between a mobile-friendly portrait layout and a full-screen layout. For building bulbs, Typebulb works on mobile but is optimized for desktop usage.

Why TypeScript?

Privacy & Sharing

Are my bulbs visible to others?

A bulb has three possible visibility settings, adjustable in the toolbar/menu:

When you share a bulb, viewers will see any changes you make once you save the bulb and they refresh their browser. They have a read-only view of your bulb; if they try to edit it, a clone is made.

How do you handle my data?

This is what we store:

We don't use your data for training nor sell it to third parties. Your AI providers have their own data policies.

About

Who's behind the website?

Ben Albahari. I worked as a Program Manager at MSFT, and co-authored the C# in a Nutshell series (with my brother, Joe Albahari, the author of the popular LINQPad tool). The core of Typebulb is extremely economical to maintain. However, I do incur costs for bulbs with inference, so I rate limit those to what I can afford. If it blows up in popularity, I'll seek investment. Inference costs are falling by a factor of 10 per year, which is an amazing tailwind to have.

Do you have a favourite bulb?

Obviously I love all the thinking bulbs that use inference, as these types of apps were impossible just a couple of years ago. But the rocket balancer bulb is pretty cool too. The rocket balancing algorithm was written by my friend, Brian Beckman. I pasted his physics equations into the chat, and (mostly) Opus wrote the 3D simulation of it.

← Back to Typebulb