Fixed a bug when trying to edit an empty chat message
Build 3
Fixed unintended parsing of tool calls within reasoning blocks
Fixed a bug where parallel tool calls would fail for some models (for example, GLM)
Fixed a bug in OpenAI-compatible /v1/responses which sometimes caused "Output items missing; timeline invariant violated"
Fixed $...$ parsing so plain currency/text (for example $10, $1.23, $37 trillion) is no longer incorrectly rendered as math
Fixed single-dollar boundary handling to reduce false positives when $ appears in normal prose
Fixed \[ \] and \( \) handling so bracket/paren math parses correctly, and empty forms stay visible as literal text
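For illustration, a minimal sketch (not the shipped parser) of the kind of single-dollar heuristic these fixes imply: a $...$ span counts as math only when the opening $ is not immediately followed by a digit or whitespace (so "$10" and "$1.23" stay literal) and the closing $ is not preceded by whitespace.

```python
import re

# Hypothetical illustration of single-dollar math detection, not the
# actual implementation. A $...$ span is treated as math only if the
# opening "$" is not followed by a digit or whitespace and the closing
# "$" is not preceded by whitespace.
MATH_SPAN = re.compile(r"\$(?![\d\s])((?:[^$\\]|\\.)+?)(?<![\s\\])\$")

def find_math_spans(text: str) -> list[str]:
    """Return the contents of spans that would render as math."""
    return [m.group(1) for m in MATH_SPAN.finditer(text)]

# Currency stays literal; genuine math is still detected:
# find_math_spans("it costs $10 and $1.23 today")  -> []
# find_math_spans("energy $E=mc^2$ here")          -> ["E=mc^2"]
```

A stray $ in prose ("I paid $5 and she paid $6") produces no span at all, which is the false-positive reduction the entry above describes.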
Fixed responsive handling of the models table on narrow screens and added resizable column handles
Fixed a bug where Anthropic-compatible /v1/messages API would error when properties were not provided for a tool input schema
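For reference, a request shaped like the following previously triggered that error: the tool's input_schema omits "properties", which is valid JSON Schema for a tool that takes no arguments. The tool name, description, and model id here are illustrative placeholders.

```python
# Illustrative /v1/messages request fragment: a tool whose input_schema
# has no "properties" key (valid JSON Schema for a no-argument tool).
# Tool name and model id are made up for this example.
request = {
    "model": "some-model",  # placeholder model id
    "max_tokens": 256,
    "tools": [
        {
            "name": "get_time",  # hypothetical tool
            "description": "Return the current time",
            "input_schema": {"type": "object"},  # no "properties" key
        }
    ],
    "messages": [{"role": "user", "content": "What time is it?"}],
}
```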
Exposed the Models Directory selector in Settings
Fixed tool calling parsing bugs for Qwen 3.5 and GLM models
Fixed a bug where tool call parameters with string type were sometimes incorrectly parsed as object/number/boolean
Added tool call grammar for gpt-oss models using llama.cpp engine, significantly increasing tool call success rate for these models (requires llama.cpp engines updated to v2.7.1 or later)
Build 2
Global chat search now also matches against chat titles
Added a notification UI for when LM Link versions are incompatible between devices
Fixed a bug creating a duplicate onboarding popover on the LM Link page
Made XML-like tool call parsing (e.g., for Nemotron 3) more reliable for boolean values
Fixed a bug where clicking the Attach File button in chat input would lock the text input UI
Fixed a bug where tags were shown as text in Markdown tables
Fixed a responsive UI overlap bug on server page stacked content
Fixed a bug where an unnamed chat's title would appear as the chat id in chat sidebar search results
Fixed a bug where the app would crash on certain devices when an image was fed to a vision model
Fixed a bug where model load guardrails and resource usage estimates were inaccurate for some models
Anthropic-compatible /v1/messages API now surfaces errors when the model generates an invalid tool call, enabling Claude Code to recover gracefully
Build 1
New default: "separate reasoning_content and content in API responses" is now ON by default, improving compatibility with /v1/chat/completions clients
If your use case requires this setting to be off (the previous default), you can disable it in Developer Settings
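With the new default, clients of /v1/chat/completions receive reasoning in a separate reasoning_content field rather than embedded in content. A sketch of client-side handling that tolerates both shapes; the inline <think> tag convention shown for the old behavior is illustrative and varies by model:

```python
def split_reasoning(message: dict) -> tuple[str, str]:
    """Return (reasoning, content) from a chat completion message.

    With separation ON (new default), reasoning arrives in a dedicated
    "reasoning_content" field. With it OFF, some models embed reasoning
    in "content" inside <think>...</think> tags (illustrative handling,
    not a guaranteed format).
    """
    reasoning = message.get("reasoning_content") or ""
    content = message.get("content") or ""
    if not reasoning and "<think>" in content and "</think>" in content:
        start = content.index("<think>") + len("<think>")
        end = content.index("</think>")
        reasoning = content[start:end].strip()
        content = content[end + len("</think>"):].strip()
    return reasoning, content
```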
Fixed app header nav button hotkeys
Added a parallel parameter to the /api/v1/load endpoint
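A sketch of calling the load endpoint with the new parameter. The host/port, model id, and the exact semantics of "parallel" (assumed here to be the number of parallel requests the loaded model can serve) are assumptions for this example, not documented behavior:

```python
import json
import urllib.request

# Illustrative request to the /api/v1/load endpoint with the new
# "parallel" parameter. Host/port, model id, and the meaning of
# "parallel" are assumptions for this sketch.
payload = {"model": "some-model", "parallel": 4}
req = urllib.request.Request(
    "http://localhost:1234/api/v1/load",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # uncomment to send against a running server
```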
Added the presence_penalty sampling parameter
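For illustration, a /v1/chat/completions payload using the new parameter, assuming OpenAI-compatible semantics (values in [-2.0, 2.0]; positive values discourage tokens that have already appeared). The model id is a placeholder:

```python
# Illustrative /v1/chat/completions payload with presence_penalty.
# Assumes OpenAI-compatible semantics: range [-2.0, 2.0], positive
# values penalize tokens that have appeared at least once.
payload = {
    "model": "some-model",  # placeholder model id
    "messages": [{"role": "user", "content": "List some varied hobbies"}],
    "presence_penalty": 0.6,
}
```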
Fixed a hover effect visual bug on Model Picker model options in the chat input
Fixed responsive UI styling on the LM Link page
[Linux] Fixed a regression caused by some app files having a space in their name
Fixed the OpenAI-compatible /v1/responses endpoint erroring on "none" and "xhigh" reasoning effort values
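In the Responses API convention, effort is passed via reasoning.effort; "none" and "xhigh" extend the usual low/medium/high set. An illustrative request (model id is a placeholder):

```python
# Illustrative /v1/responses request using an extended reasoning effort
# value; "none" and "xhigh" now parse instead of erroring. Model id is
# a placeholder.
payload = {
    "model": "some-model",
    "input": "Prove that the square root of 2 is irrational",
    "reasoning": {"effort": "xhigh"},
}
```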
Fixed a bug where /v1/responses included logprobs for MLX models even when message.output_text.logprobs was omitted
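For reference, logprobs are opted into via the "include" list in the Responses API convention; without this entry a response should carry no logprobs, which is the behavior this fix restores for MLX models. Model id is a placeholder:

```python
# Illustrative /v1/responses request that opts into logprobs through
# the "include" list. Omitting this entry should yield no logprobs in
# the response. Model id is a placeholder.
payload = {
    "model": "some-model",
    "input": "Hello",
    "include": ["message.output_text.logprobs"],
    "top_logprobs": 2,
}
```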