Checked other resources
Package (Required)
langchain-openai
Describe the bug
Found this while reading the service_tier token arithmetic in _create_usage_metadata. When service_tier is "priority" or "flex" and the API response doesn't include cached_tokens in prompt_tokens_details, the subtraction crashes:
```python
from langchain_openai.chat_models.base import _create_usage_metadata

_create_usage_metadata(
    {"prompt_tokens": 100, "completion_tokens": 50, "total_tokens": 150,
     "prompt_tokens_details": {}, "completion_tokens_details": {}},
    service_tier="priority",
)
# TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'
```
The dict stores None for missing cached_tokens via .get("cached_tokens"), then .get(key, 0) returns None (not 0) because the key exists with value None. Same bug in _create_usage_metadata_responses.
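A minimal standalone sketch of the pitfall, outside langchain-openai (the names here are illustrative, not the library's actual internals):

```python
# Key present with value None -- the .get(key, 0) default never applies.
details = {"cached_tokens": None}

# .get falls back to the default only when the key is MISSING:
assert details.get("cached_tokens", 0) is None

# A None-safe read coerces both a missing key and a stored None to 0:
cached = details.get("cached_tokens") or 0
assert cached == 0

prompt_tokens = 100
print(prompt_tokens - cached)  # 100, instead of raising TypeError
```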
Related to #36500 which fixed the same pattern in other token paths but missed the service_tier arithmetic.
System Info
langchain-openai 1.1.12, Python 3.12