Blog 👾

Locally AI joins LM Studio
Adrien and the Locally AI apps are joining the LM Studio family to double down on Apple platforms

Yagil Burowski • Apr 8, 2026

Run open models on NVIDIA DGX Station GB300
LM Studio now supports the NVIDIA DGX Station GB300: Blackwell in a form factor you can run outside the data center

LM Studio Team • Mar 18, 2026

Use your LM Studio Models in Claude Code
Run Claude Code with any local model using LM Studio's Anthropic-compatible API

LM Studio Team • Jan 30, 2026
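As a minimal sketch of the setup the post above describes (this assumes LM Studio's local server is running on its default port 1234, and uses Claude Code's standard base-URL override; the token value is a local placeholder, not a real credential):

```shell
# Point Claude Code at LM Studio's Anthropic-compatible endpoint.
# Assumes the LM Studio server is running locally on the default port 1234.
export ANTHROPIC_BASE_URL="http://localhost:1234"
export ANTHROPIC_AUTH_TOKEN="lm-studio"  # placeholder; no real key is needed locally

# Then launch Claude Code as usual (uncomment once the server is up):
# claude
```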

Introducing LM Studio 0.4.0
Server deployment, parallel requests with continuous batching, new REST API endpoint, and refreshed application UI

LM Studio Team • Jan 28, 2026

Open Responses with local models via LM Studio
Update to LM Studio 0.3.39 for Open Responses support

LM Studio Team • Jan 15, 2026

LM Studio 0.3.37
LFM2 tool call support and a generator stability fix

LM Studio Team • Jan 6, 2026

LM Studio 0.3.38
MLX fix for M5 Macs, and optimized MLX auto-upgrade enabled

LM Studio Team • Jan 6, 2026

How to fine-tune FunctionGemma and run it locally
Step-by-step guide to fine-tuning FunctionGemma with Unsloth, then running it in LM Studio

LM Studio Team • Dec 23, 2025

LM Studio 0.3.36
Support for Google's FunctionGemma (270M)

LM Studio Team • Dec 18, 2025

LM Studio 0.3.35
Devstral-2, GLM-4.6V, and system prompt fixes

LM Studio Team • Dec 12, 2025

LM Studio 0.3.34
EssentialAI rnj-1 support and a Jinja prompt formatting fix

LM Studio Team • Dec 10, 2025

Ministral 3
LM Studio 0.3.33: Ministral 3 support, Olmo-3 tool calling, and release notes

LM Studio Team • Dec 2, 2025

LM Studio 0.3.32
GLM 4.5 tool calling, olmOCR-2, improved image input handling in /v1/responses, Flash Attention defaults for Vulkan/Metal, and bug fixes.

LM Studio Team • Nov 19, 2025

LM Studio 0.3.31
Image input improvements, MiniMax M2 tool calling, Flash Attention default for CUDA, new CLI runtime management, macOS 26 support, and bug fixes.

LM Studio Team • Nov 4, 2025

OpenAI gpt-oss-safeguard
Open safety reasoning models (120B and 20B) with bring-your-own-policy moderation, now supported in LM Studio on launch day.

LM Studio Team • Oct 29, 2025

NVIDIA DGX Spark
LM Studio now ships for Linux on ARM and launches with NVIDIA DGX Spark — a tiny but mighty Linux ARM box.

LM Studio Team • Oct 14, 2025

LM Studio 0.3.30
Bug fixes: Qwen tool-calling streaming, Vulkan iGPU loading, and developer role support in /v1/responses.

LM Studio Team • Oct 8, 2025

Use OpenAI's Responses API with local models
OpenAI-compatible /v1/responses endpoint (stateful chats, remote MCP, custom tools)

LM Studio Team • Oct 6, 2025
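As a minimal sketch of what a request to the endpoint above looks like (assumes LM Studio's local server on its default http://localhost:1234; the model name and prompt are illustrative — substitute any model you have loaded):

```python
import json
from urllib import request

# Build a minimal /v1/responses request body. The Responses API takes a
# "model" and an "input"; the model name below is only an example.
payload = {
    "model": "openai/gpt-oss-20b",
    "input": "Reply with a single word: hello.",
}
body = json.dumps(payload).encode("utf-8")

req = request.Request(
    "http://localhost:1234/v1/responses",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment to send once the server is running with a model loaded:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```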

LM Studio 0.3.28
Default model-variant selection in My Models, better RAM/VRAM estimates (mixed quantizations, mislabeled params, non-transformers), and bug fixes.

LM Studio Team • Oct 1, 2025

LM Studio 0.3.27: Find in Chat and Search All Chats
Find/Search in chats, model resource estimation (GUI + CLI), CLI polish, and bug fixes.

LM Studio Team • Sep 24, 2025

LM Studio 0.3.26
Stream server and model logs (input and output) with lms log stream, native context menus, and bug fixes.

LM Studio Team • Sep 15, 2025

LM Studio 0.3.25
Select multiple chats for bulk actions, trash bin support, Google EmbeddingGemma, and NVIDIA Nemotron-Nano-v2 with tool calling capabilities.

LM Studio Team • Sep 4, 2025

LM Studio 0.3.24
Support for ByteDance/Seed-OSS, improved markdown code blocks and tables, bug fixes.

LM Studio Team • Aug 28, 2025

LM Studio 0.3.23
Improved in-app chat tool calling reliability for gpt-oss, and the ability to place MoE expert weights on CPU

LM Studio Team • Aug 12, 2025

Run OpenAI's gpt-oss locally in LM Studio
We worked with OpenAI to ensure LM Studio supports running gpt-oss models locally on launch day 🎉

LM Studio Team • Aug 5, 2025

LM Studio 0.3.20
Bug fixes, UI improvements, and support for Qwen3-Coder-480B-A35B with tools.

LM Studio Team • Jul 23, 2025

LM Studio 0.3.19
ROCm / Linux support for AMD 9000 series GPUs, bug fixes for model loading, UI improvements, and auto-deletion of engine dependencies to save disk space.

LM Studio Team • Jul 21, 2025

LM Studio 0.3.18
MCP bug fixes and improvements, new streaming options and bug fixes in the OpenAI-compat API, improved tool calling for Mistral models, and UI touch-ups.

LM Studio Team • Jul 10, 2025

LM Studio is free for use at work
Starting today, it's no longer necessary to get a commercial license for using LM Studio at work. No need to fill out a form or contact us. You and your team can just use the app!

Yagil Burowski • Jul 8, 2025

MCP in LM Studio
New in LM Studio 0.3.17: Model Context Protocol (MCP) Host support. Connect MCP servers to the app and use them with local models.

LM Studio Team • Jun 25, 2025

Introducing the unified multi-modal MLX engine architecture in LM Studio
Leveraging mlx-lm and mlx-vlm to achieve unified multi-modal LLM inference in LM Studio's mlx-engine.

Matt Clayton • May 30, 2025

DeepSeek-R1-0528 you can run on your computer
Run the distilled DeepSeek R1 0528 model (8B) locally in LM Studio on Mac, Windows, or Linux with as little as 4GB of RAM. Supports tool use and reasoning.

LM Studio Team • May 29, 2025

LM Studio 0.3.16
Public Preview of community presets, automatic deletion of least recently used Runtime Extension Packs, and a way to use LLMs as text embedding models.

LM Studio Team • May 23, 2025

LM Studio 0.3.15: RTX 50-series GPUs and improved tool use in the API
Support for CUDA 12, new system prompt editor UI, improved tool use API support, and preview of community presets.

LM Studio Team • Apr 24, 2025

LM Studio 0.3.14: Multi-GPU Controls 🎛️
Advanced controls for multi-GPU setups: enable/disable specific GPUs, choose allocation strategy, limit model weight to dedicated GPU memory, and more.

LM Studio Team • Mar 27, 2025

LM Studio 0.3.13: Google Gemma 3 Support
LM Studio 0.3.13 supports Google's latest multi-modal model, Gemma 3. Run it locally on your Mac, Windows, or Linux machine.

LM Studio Team • Mar 12, 2025

LM Studio 0.3.12
Bug fixes and document chunking speed improvements for RAG

LM Studio Team • Mar 7, 2025

Introducing lmstudio-python and lmstudio-js
Developer SDKs for Python and TypeScript are now available in a 1.0.0 release. A programmable toolkit for local AI software.

LM Studio Team • Mar 3, 2025

LM Studio 0.3.11
Support for LM Studio SDK (Python, TS/JS), advanced Speculative Decoding settings, and bug fixes

LM Studio Team • Mar 3, 2025

LM Studio 0.3.10: 🔮 Speculative Decoding
Inference speed up with Speculative Decoding for llama.cpp and MLX

LM Studio Team • Feb 18, 2025

LM Studio 0.3.9
Idle TTL, auto-update for runtimes, support for nested folders in HF repos, and separate reasoning_content in chat completion responses

LM Studio Team • Jan 30, 2025

DeepSeek R1: open source reasoning model
Run DeepSeek R1 models locally and offline on your computer

LM Studio Team • Jan 29, 2025

LM Studio 0.3.8
Thinking UI for DeepSeek R1, LaTeX rendering improvements, and bug fixes

LM Studio Team • Jan 21, 2025

LM Studio 0.3.7
DeepSeek R1 support and KV Cache quantization for llama.cpp models

LM Studio Team • Jan 20, 2025

LM Studio 0.3.6
Tool Calling API in beta, new installer / updater system, and support for Qwen2VL and QVQ (both GGUF and MLX)

LM Studio Team • Jan 6, 2025

Introducing venvstacks: layered Python virtual environments
An open source utility for packaging Python applications and all their dependencies into a portable, deterministic format based on Python's sitecustomize.py.

Alyssa Coghlan, Yagil Burowski • Oct 31, 2024

LM Studio 0.3.5
Headless mode, on-demand model loading, server auto-start, CLI command to download models from the terminal, and support for Pixtral with Apple MLX.

LM Studio Team • Oct 22, 2024

LM Studio 0.3.4 ships with Apple MLX
Super fast and efficient on-device LLM inferencing using MLX for Apple Silicon Macs.

Yagil Burowski, Alyssa Coghlan, Neil Mehta, Matt Clayton • Oct 8, 2024

LM Studio 0.3.3
Config presets are back! So are live token counts for user input and system prompt. Many bug fixes. Also several new app languages thanks to community contributors.

LM Studio Team • Sep 30, 2024

LM Studio 0.3.2
LM Studio 0.3.2 Release Notes

LM Studio Team • Aug 27, 2024

LM Studio 0.3.1
LM Studio 0.3.1 Release Notes

LM Studio Team • Aug 23, 2024

LM Studio 0.3.0
LM Studio 0.3.0 is here! Built-in (naïve) RAG, light theme, internationalization, Structured Outputs API, Serve on the network, and more.

LM Studio Team • Aug 22, 2024

Llama 3.1
Run Llama 3.1 locally on your computer with LM Studio.

LM Studio Team • Jul 23, 2024

Introducing lms: LM Studio's CLI
A command line tool for scripting and automating your local LLM workflows.

LM Studio Team • May 2, 2024

Element Labs, Inc. © 2026