OpenAI Token Counter

Estimate tokens for GPT-5.4, GPT-4o, and GPT-4o Mini

OpenAI Token Counting

OpenAI’s GPT models – from GPT-4o Mini through GPT-5.4 – all use tokenizers from the tiktoken library. If you’ve worked with OpenAI’s API, you know that every request you send gets billed by token count. Understanding how those tokens are calculated helps you predict costs and stay within context limits.

GPT models use a BPE (Byte Pair Encoding) tokenizer. In practice, this means English text averages about 4 characters per token. But that’s just an average – common words like “the” or “is” are single tokens, while technical jargon or uncommon words might be split across three or four tokens. Code tends to tokenize less efficiently than prose, especially when it’s full of variable names and special syntax.
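The ~4 characters-per-token average can be turned into a quick back-of-the-envelope estimator. This is a sketch of the heuristic only, not the real BPE tokenizer — actual counts will differ, especially for code and non-Latin scripts:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters-per-token
    average for English prose. Real BPE counts will differ,
    especially for code, jargon, and non-Latin scripts."""
    return max(1, round(len(text) / 4))

print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # → 11
```

For planning and budgeting, an estimate like this is usually close enough; use the real tokenizer (see below) when you need exact numbers.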

GPT Model Pricing and Limits

Model        Context  Max Output  Input $/1M  Output $/1M
GPT-5.4      256K     32K         $10.00      $30.00
GPT-4o       128K     16K         $2.50       $10.00
GPT-4o Mini  128K     16K         $0.15      $0.60

GPT-4o Mini is an excellent pick for high-volume tasks where you don’t need the full reasoning power of GPT-5.4. At $0.15 per million input tokens, you can process enormous volumes of text for pennies.
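To see how the prices in the table translate to a real workload, here is a small cost calculator. The model keys are illustrative labels for this sketch, not official API identifiers:

```python
# Prices in USD per million tokens, taken from the table above.
PRICING = {
    "gpt-5.4":     {"input": 10.00, "output": 30.00},
    "gpt-4o":      {"input": 2.50,  "output": 10.00},
    "gpt-4o-mini": {"input": 0.15,  "output": 0.60},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single request in USD."""
    p = PRICING[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 10,000 requests of ~2,000 input / 500 output tokens on GPT-4o Mini:
total = 10_000 * request_cost("gpt-4o-mini", 2_000, 500)
print(f"${total:.2f}")  # → $6.00
```

Ten thousand mid-sized requests for six dollars — the same workload on GPT-5.4 would cost roughly 58 times as much.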

Optimizing Token Usage with OpenAI

A few things that help cut token costs with GPT models:

  • Use prompt caching. OpenAI caches identical prompt prefixes, so repeated system instructions don’t get re-billed at full price.
  • Pick the smallest model that works. GPT-4o Mini handles classification, extraction, and simple generation surprisingly well.
  • Keep your system prompt tight. Every token in your system prompt gets charged on every request. Shaving 500 tokens off a system prompt saves real money at scale.
  • Use structured outputs. Requesting JSON mode with a schema often produces shorter, more predictable responses than free-form text.

For exact token counts in production, use OpenAI’s tiktoken Python library or the interactive tokenizer on OpenAI’s platform site. This tool gives you fast estimates for planning and cost comparison.

Frequently Asked Questions

How many tokens does GPT-5.4 support?

GPT-5.4 supports a context window of 256,000 tokens with a maximum output of 32,000 tokens. This is double the context window of GPT-4o.

How does OpenAI's tokenizer work?

OpenAI uses a BPE (Byte Pair Encoding) tokenizer called tiktoken. For GPT models, English text averages about 4 characters per token. Code, non-Latin scripts, and special characters use more tokens.

How much does GPT-5.4 cost per token?

GPT-5.4 costs $10.00 per million input tokens and $30.00 per million output tokens. GPT-4o is cheaper at $2.50 input / $10.00 output per million tokens.

What's the difference between GPT-5.4 and GPT-4o token counts?

Both models use the same tokenizer family, so they produce identical token counts for the same text. The difference is in pricing and context window size.