LM Studio Developer Docs
Build with LM Studio's local APIs and SDKs — TypeScript, Python, REST, and OpenAI- and Anthropic-compatible endpoints.
Get to know the stack
TypeScript SDK: lmstudio-js
Use the TypeScript SDK to build apps, tools, and local AI workflows.
Python SDK: lmstudio-python
Work with local models from Python scripts, notebooks, and backend services.
LM Studio REST API
Use stateful chats, local server endpoints, and MCPs via HTTP.
OpenAI-compatible
Use chat, responses, embeddings, and other familiar OpenAI-style endpoints.
Anthropic-compatible
Use Claude-style Messages API flows against your local LM Studio server.
LM Studio CLI: lms
Download models, run the daemon, start the server, and script local workflows.
What you can build
Chat and text generation with streaming
Build local chat apps and text-generation flows with token streaming.
Tool calling and local agents with MCP
Connect tools, MCP servers, and agent-like workflows entirely on your machine.
Structured output (JSON schema)
Generate typed JSON outputs that validate against a schema.
Embeddings and tokenization
Create embeddings, inspect tokens, and build retrieval or indexing pipelines.
Model management (load, download, list)
Load models into memory, download new ones, and inspect what is available.
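The structured-output idea above — have the model emit JSON that validates against a schema — can be sketched without a running server. A minimal sketch in Python: the schema and the sample reply are illustrative (not from the LM Studio docs), the validator covers only a small subset of JSON Schema, and the hedged comment shows roughly where the SDK call would go.

```python
import json

# Hypothetical schema for a "book" reply; field names are illustrative.
BOOK_SCHEMA = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year": {"type": "integer"},
    },
    "required": ["title", "year"],
}

def validate(reply_json: str, schema: dict) -> dict:
    """Parse a model reply and check it against a tiny subset of JSON
    Schema: required keys present, primitive types match."""
    data = json.loads(reply_json)
    type_map = {"string": str, "integer": int, "object": dict}
    for key in schema.get("required", []):
        if key not in data:
            raise ValueError(f"missing required key: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], type_map[spec["type"]]):
            raise ValueError(f"wrong type for {key}")
    return data

# With the SDK, the reply would come from the model (e.g. by passing the
# schema to a respond call); here it is hard-coded so the sketch runs offline.
sample_reply = '{"title": "Dune", "year": 1965}'
book = validate(sample_reply, BOOK_SCHEMA)
print(book["title"], book["year"])  # Dune 1965
```

In a real flow you would hand `BOOK_SCHEMA` to the SDK or REST call and run the same validation on whatever the model returns.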
Install llmster for headless deployments
llmster is LM Studio's core, packaged as a daemon for headless deployment on servers, cloud instances, or CI. The daemon runs standalone and does not depend on the LM Studio GUI.
Mac / Linux
```shell
curl -fsSL https://lmstudio.ai/install.sh | bash
```

Windows

```shell
irm https://lmstudio.ai/install.ps1 | iex
```

Basic usage
```shell
lms daemon up       # Start the daemon
lms get <model>     # Download a model
lms server start    # Start the local server
lms chat            # Open an interactive session
```

Learn more: Headless deployments
Super quick start
TypeScript (lmstudio-js)
```shell
npm install @lmstudio/sdk
```

```typescript
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();
const model = await client.llm.model("openai/gpt-oss-20b");
const result = await model.respond("Who are you, and what can you do?");

console.info(result.content);
```

Full docs: lmstudio-js, Source: GitHub
Python (lmstudio-python)
```shell
pip install lmstudio
```

```python
import lmstudio as lms

with lms.Client() as client:
    model = client.llm.model("openai/gpt-oss-20b")
    result = model.respond("Who are you, and what can you do?")
    print(result)
```

Full docs: lmstudio-python, Source: GitHub
HTTP (LM Studio REST API)
```shell
lms server start --port 1234
```

```shell
curl http://localhost:1234/api/v1/chat \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LM_API_TOKEN" \
  -d '{
    "model": "openai/gpt-oss-20b",
    "input": "Who are you, and what can you do?"
  }'
```

Full docs: LM Studio REST API
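The same request can be issued from Python's standard library. A sketch mirroring the curl example above (`/api/v1/chat`, the `input` field, and the `LM_API_TOKEN` variable all come from it); the network call itself is left commented out so the snippet runs without a server.

```python
import json
import os
import urllib.request

# Build the same chat request the curl example sends.
payload = {
    "model": "openai/gpt-oss-20b",
    "input": "Who are you, and what can you do?",
}
req = urllib.request.Request(
    "http://localhost:1234/api/v1/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Mirrors the curl example's $LM_API_TOKEN; empty if unset.
        "Authorization": "Bearer " + os.environ.get("LM_API_TOKEN", ""),
    },
    method="POST",
)

# Uncomment with a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(req.get_full_url(), req.get_method())  # http://localhost:1234/api/v1/chat POST
```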