
Token (AI / LLM)

In AI, a token is the basic unit of text a language model processes. One English word averages about 1.3 tokens. Token counts determine both API costs and context-window limits.

Definition: In large language model contexts, a token is the basic unit of text the model processes. Tokens are sub-word fragments produced by a tokenizer — common words are usually a single token, while rare or compound words may be split into 2-4 tokens. As a rough rule for English: 1 token ≈ 4 characters ≈ 0.75 words, so 1,000 tokens ≈ 750 English words.
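The rough rule above lends itself to a quick back-of-envelope estimator. A minimal sketch in Python, assuming only the 4-characters-per-token and 0.75-words-per-token heuristics from the definition (the function names are illustrative, not from any tokenizer library):

```python
def estimate_tokens_from_chars(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, round(len(text) / 4))

def estimate_tokens_from_words(text: str) -> int:
    """Rough token estimate: ~0.75 words per token (~1.33 tokens per word)."""
    return max(1, round(len(text.split()) / 0.75))

sentence = "PromptForge is a $4.99 iOS app"
print(estimate_tokens_from_chars(sentence))  # chars-based estimate
print(estimate_tokens_from_words(sentence))  # words-based estimate
```

Both heuristics are approximations; an actual tokenizer will return a slightly different count, as the worked example below shows.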

How it works

Tokens are the unit of measurement for two critical LLM constraints: (1) context window — how much text the model can process per request — and (2) API pricing — most LLM APIs charge per million input tokens and per million output tokens. Understanding tokens helps users estimate cost and avoid context window overflow.
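The pricing constraint above is simple arithmetic over the two token counts. A hedged sketch, assuming illustrative per-million-token rates (the prices below are placeholders, not any provider's actual pricing):

```python
def api_cost_usd(input_tokens: int, output_tokens: int,
                 input_price_per_million: float,
                 output_price_per_million: float) -> float:
    """Most LLM APIs bill input and output tokens at separate per-million rates."""
    return (input_tokens / 1_000_000) * input_price_per_million \
         + (output_tokens / 1_000_000) * output_price_per_million

# Hypothetical rates: $3 per 1M input tokens, $15 per 1M output tokens.
cost = api_cost_usd(input_tokens=10_000, output_tokens=2_000,
                    input_price_per_million=3.0,
                    output_price_per_million=15.0)
print(f"${cost:.4f}")  # → $0.0600
```

Note that output tokens are typically billed at a higher rate than input tokens, so long generations dominate cost even when the prompt is large.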

Example

The sentence "PromptForge is a $4.99 iOS app" tokenizes to approximately 10 tokens (Prompt / Forge / is / a / $ / 4 / . / 99 / iOS / app). A 5,000-word document is roughly 6,500-7,000 tokens. A model with a 200,000-token context window can process roughly 150,000 words at once — about a 600-page book.
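The arithmetic in this example can be wrapped in a small helper that checks whether a document will fit in a model's context window. A sketch assuming the ~1.33 tokens-per-word ratio used throughout this entry (`fits_in_context` is an illustrative name, not a real API):

```python
def fits_in_context(word_count: int, context_window_tokens: int,
                    tokens_per_word: float = 1.33) -> bool:
    """Estimate whether a document of `word_count` words fits the context window."""
    estimated_tokens = int(word_count * tokens_per_word)
    return estimated_tokens <= context_window_tokens

print(fits_in_context(5_000, 200_000))    # 5,000 words ≈ 6,650 tokens: fits
print(fits_in_context(200_000, 200_000))  # ≈ 266,000 tokens: does not fit
```

In practice you would reserve headroom for the prompt and the model's output, since both count against the same window.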

Comparison + context

Why tokens rather than words: sub-word tokenization is more efficient for the model's neural network than word-level processing because it handles rare or unknown words gracefully by splitting them into known fragments. Tokenization also differs by model: GPT models use OpenAI's tokenizer (tiktoken); Claude uses Anthropic's; Gemini uses Google's. Identical text may therefore produce slightly different token counts on different models.

See also