gptel
Table of Contents
- 1. About
- 2. Overview
- 3. Installation
- 4. Setup
- 4.1. ChatGPT
- 4.2. Other LLM backends
- 4.2.1. (Optional) Securing API keys with authinfo
- 4.2.2. Azure
- 4.2.3. GPT4All
- 4.2.4. Ollama
- 4.2.5. Open WebUI
- 4.2.6. Gemini
- 4.2.7. Llama.cpp or Llamafile
- 4.2.8. Kagi (FastGPT & Summarizer)
- 4.2.9. together.ai
- 4.2.10. Anyscale
- 4.2.11. Perplexity
- 4.2.12. Anthropic (Claude)
- 4.2.13. Groq
- 4.2.14. Mistral Le Chat
- 4.2.15. OpenRouter
- 4.2.16. PrivateGPT
- 4.2.17. DeepSeek
- 4.2.18. Sambanova (Deepseek)
- 4.2.19. Cerebras
- 4.2.20. Github Models
- 4.2.21. Novita AI
- 4.2.22. xAI
- 4.2.23. AI/ML API
- 4.2.24. GitHub CopilotChat
- 4.2.25. AWS Bedrock
- 4.2.26. Moonshot (Kimi)
- 5. Quick start and commands
- 6. TODO gptel’s design
- 7. gptel’s transient interface
- 8. Configuration
- 9. Advanced configuration
- 10. TODO Extending gptel
- 11. FAQ
- 11.1. Chat buffer UI
- 11.1.1. I want the window to scroll automatically as the response is inserted
- 11.1.2. I want the cursor to move to the next prompt after the response is inserted
- 11.1.3. I want to change the formatting of the prompt and LLM response
- 11.1.4. How does gptel distinguish between user prompts and LLM responses?
- 11.2. Transient menu behavior
- 11.2.1. I want to set gptel options but only for this buffer
- 11.2.2. I want the transient menu options to be saved so I only need to set them once
- 11.2.3. Using the transient menu leaves behind extra windows
- 11.2.4. Can I change the transient menu key bindings?
- 11.2.5. (Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode
- 11.3. Miscellaneous
- 12. Alternatives
- 13. Acknowledgments
1. About
This is the user and developer manual for gptel, a simple Large Language Model (LLM) client for Emacs.
The documentation herein corresponds to stable version 0.9.8.5, dated 2024-12-31. The development target is version 0.9.9-dev.
- Package name (NonGNU ELPA): gptel
- Official manual: TODO
- Git repository: https://github.com/karthink/gptel
- Bug tracker: https://github.com/karthink/gptel/issues
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, with the Front-Cover Texts being “A GNU Manual,” and with the Back-Cover Texts as in (a) below. A copy of the license is included in the section entitled “GNU Free Documentation License.”
(a) The FSF’s Back-Cover Text is: “You have the freedom to copy and modify this GNU manual.”
2. Overview
gptel is a Large Language Model client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.
gptel supports the following services.
LLM Backend | Requires |
---|---|
ChatGPT | API key |
Anthropic (Claude) | API key |
Gemini | API key |
Ollama | Ollama running locally |
Llama.cpp | Llama.cpp running locally |
Llamafile | Local Llamafile server |
GPT4All | GPT4All running locally |
Kagi FastGPT | API key |
Kagi Summarizer | API key |
Azure | Deployment and API key |
Groq | API key |
Perplexity | API key |
OpenRouter | API key |
together.ai | API key |
Anyscale | API key |
PrivateGPT | PrivateGPT running locally |
DeepSeek | API key |
Cerebras | API key |
Github Models | Token |
Novita AI | Token |
xAI | API key |
2.1. Basic concepts
A Large Language Model (LLM) is a neural network trained on a large corpus of information to generate text or audio-visual output data based on input. The input can be in many formats as well, including audio-visual data. In this manual we are primarily interested in textual input and output. A subclass of these models are trained to generate text to convincingly simulate the format of a back-and-forth conversation. gptel provides an Emacs interface to use these so-called “instruct” models. In this manual LLM refers only to “instruct” models.
LLMs are categorized by the information they were trained with, tasks they are trained for, by their capabilities, and by their size. Their size is typically measured in billions of network parameters, where the smallest models (1-120 billion parameters) can run on today’s consumer hardware. The larger models typically consist of hundreds of billions of parameters, and require clusters or supercomputers to run.
Some LLMs – typically the smaller ones – are permissively licensed and free. Most larger models are proprietary and only available as a service that charges by the word.
gptel does not provide or run these models. Instead, it only acts as a client, sending and receiving conversation text over HTTP. To run free models on your hardware, you can use software such as llama.cpp or Ollama. Larger and more capable models typically require paid API access.
Presently, gptel works only with “chat” models, which are trained to respond to input in a manner resembling a conversational reply. Such models (nicknamed “instruct models”) are among the most popular and easy to use without additional tooling, as the interaction format prescribes an interface by itself.
3. Installation
Note: gptel requires Transient 0.7.4 or higher. Transient is a built-in package and Emacs does not update it by default. Ensure that package-install-upgrade-built-in is true, or update Transient manually.
- Release version: M-x package-install⏎ gptel in Emacs.
- Development snapshot: Add MELPA or NonGNU-devel ELPA to your list of package sources, then install with M-x package-install⏎ gptel.
- Optional: Install markdown-mode.
3.1. Straight
(straight-use-package 'gptel)
3.2. Manual
Clone or download this repository and run M-x package-install-file⏎ on the repository directory.
3.3. Doom Emacs
In packages.el
(package! gptel :recipe (:nonrecursive t))
In config.el
(use-package! gptel :config (setq! gptel-api-key "your key"))
“your key” can be the API key itself, or (safer) a function that returns the key. Setting gptel-api-key is optional; you will be asked for a key if it’s not found.
3.4. Spacemacs
In your .spacemacs
file, add llm-client
to dotspacemacs-configuration-layers
.
(llm-client :variables
llm-client-enable-gptel t)
4. Setup
4.1. ChatGPT
Procure an OpenAI API key.
Optional: Set gptel-api-key
to the key. Alternatively, you may choose a more secure method such as:
- Setting it to a custom function that returns the key.
- Leaving it set to the default gptel-api-key-from-auth-source function, which reads keys from ~/.authinfo (see “(Optional) Securing API keys with authinfo” below).
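If you choose the custom-function route, a minimal sketch looks like the following. The "pass" entry name here is purely illustrative; any command or source that yields the key as a string works.

```elisp
;; Illustrative sketch: gptel-api-key can be a function of no arguments
;; that returns the API key as a string.  Here we shell out to the
;; password-store CLI; the entry name is a stand-in for your own setup.
(setq gptel-api-key
      (lambda ()
        (string-trim
         (shell-command-to-string "pass show openai/api-key"))))
```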
4.2. Other LLM backends
ChatGPT is configured out of the box. If you want to use other LLM backends (like Ollama, Claude/Anthropic or Gemini) you need to register and configure them first.
As an example, registering a backend typically looks like the following:
(gptel-make-anthropic "Claude" :stream t :key gptel-api-key)
Once this backend is registered, you’ll see model names prefixed by “Claude:” appear in gptel’s menu.
See below for details on your preferred LLM provider, including local LLMs.
4.2.1. (Optional) Securing API keys with authinfo
You can use Emacs’ built-in support for authinfo to store API keys required by gptel. Add your API keys to ~/.authinfo, and leave gptel-api-key set to its default. By default, the API endpoint DNS name (e.g. “api.openai.com”) is used as HOST and “apikey” as USER.
machine api.openai.com login apikey password sk-secret-openai-api-key-goes-here
machine api.anthropic.com login apikey password sk-secret-anthropic-api-key-goes-here
4.2.2. Azure
Register a backend with
(gptel-make-azure "Azure-1"             ;Name, whatever you'd like
  :protocol "https"                     ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                             ;Enable streaming responses
  :key #'gptel-api-key
  :models '(gpt-3.5-turbo gpt-4))
Refer to the documentation of gptel-make-azure
to set more parameters.
You can pick this backend from the menu when using gptel. (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'gpt-3.5-turbo
      gptel-backend
      (gptel-make-azure "Azure-1"
        :protocol "https"
        :host "YOUR_RESOURCE_NAME.openai.azure.com"
        :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
        :stream t
        :key #'gptel-api-key
        :models '(gpt-3.5-turbo gpt-4)))
4.2.3. GPT4All
Register a backend with
(gptel-make-gpt4all "GPT4All"           ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"                ;Where it's running
  :models '(mistral-7b-openorca.Q4_0.gguf)) ;Available models
These are the required parameters, refer to the documentation of gptel-make-gpt4all
for more.
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above. Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default.

;; OPTIONAL configuration
(setq gptel-max-tokens 500
      gptel-model 'mistral-7b-openorca.Q4_0.gguf
      gptel-backend
      (gptel-make-gpt4all "GPT4All"
        :protocol "http"
        :host "localhost:4891"
        :models '(mistral-7b-openorca.Q4_0.gguf)))
4.2.4. Ollama
Register a backend with
(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '(mistral:latest))            ;List of models
These are the required parameters, refer to the documentation of gptel-make-ollama
for more.
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'mistral:latest
      gptel-backend
      (gptel-make-ollama "Ollama"
        :host "localhost:11434"
        :stream t
        :models '(mistral:latest)))
4.2.5. Open WebUI
Open WebUI is an open source, self-hosted system which provides a multi-user web chat interface and an API endpoint for accessing LLMs, especially LLMs running locally on inference servers like Ollama.
Because it presents an OpenAI-compatible endpoint, you use gptel-make-openai
to register it as a backend.
For instance, you can use this form to register a backend for a local instance of Open Web UI served via http on port 3000:
(gptel-make-openai "OpenWebUI"
  :host "localhost:3000"
  :protocol "http"
  :key "KEY_FOR_ACCESSING_OPENWEBUI"
  :endpoint "/api/chat/completions"
  :stream t
  :models '("gemma3n:latest"))
Or if you are running Open Web UI on another host on your local network (box.local
), serving via https with self-signed certificates, this will work:
(gptel-make-openai "OpenWebUI"
  :host "box.local"
  :curl-args '("--insecure")            ; needed for self-signed certs
  :key "KEY_FOR_ACCESSING_OPENWEBUI"
  :endpoint "/api/chat/completions"
  :stream t
  :models '("gemma3n:latest"))
To find your API key in Open WebUI, click the user name in the bottom left, then Settings, then Account, and click Show in the API Keys section.
Refer to the documentation of gptel-make-openai
for more configuration options.
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model "gemma3n:latest"
      gptel-backend
      (gptel-make-openai "OpenWebUI"
        :host "localhost:3000"
        :protocol "http"
        :key "KEY_FOR_ACCESSING_OPENWEBUI"
        :endpoint "/api/chat/completions"
        :stream t
        :models '("gemma3n:latest")))
4.2.6. Gemini
Register a backend with
;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini" :key "YOUR_GEMINI_API_KEY" :stream t)
These are the required parameters, refer to the documentation of gptel-make-gemini
for more.
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'gemini-2.5-pro-exp-03-25
      gptel-backend (gptel-make-gemini "Gemini"
                      :key "YOUR_GEMINI_API_KEY"
                      :stream t))
4.2.7. Llama.cpp or Llamafile
(If using a llamafile, run a server llamafile instead of a “command-line llamafile”, and a model that supports text generation.)
Register a backend with
;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"          ;Any name
  :stream t                             ;Stream responses
  :protocol "http"
  :host "localhost:8000"                ;Llama.cpp server location
  :models '(test))                      ;Any names, doesn't matter for Llama
These are the required parameters, refer to the documentation of gptel-make-openai
for more.
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'test
      gptel-backend (gptel-make-openai "llama-cpp"
                      :stream t
                      :protocol "http"
                      :host "localhost:8000"
                      :models '(test)))
4.2.8. Kagi (FastGPT & Summarizer)
Kagi’s FastGPT model and the Universal Summarizer are both supported. A couple of notes:
- Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
- Kagi models do not support multi-turn conversations; interactions are “one-shot”. They also do not support streaming responses.
Register a backend with
(gptel-make-kagi "Kagi"                 ;any name
  :key "YOUR_KAGI_API_KEY")             ;can be a function that returns the key
These are the required parameters, refer to the documentation of gptel-make-kagi
for more.
You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'fastgpt
      gptel-backend (gptel-make-kagi "Kagi" :key "YOUR_KAGI_API_KEY"))
The alternatives to fastgpt include summarize:cecil, summarize:agnes, summarize:daphne and summarize:muriel. The difference between the summarizer engines is documented here.
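For example, to switch from FastGPT to one of the summarizer engines, you can either pick it from the transient menu or set it in your configuration. A minimal sketch:

```elisp
;; Sketch: make one of the Kagi summarizer engines the active model.
;; Assumes the Kagi backend registered above is the current backend.
(setq gptel-model 'summarize:agnes)
```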
4.2.9. together.ai
Register a backend with
;; Together.ai offers an OpenAI compatible API
(gptel-make-openai "TogetherAI"         ;Any name you want
  :host "api.together.xyz"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check together.ai
            mistralai/Mixtral-8x7B-Instruct-v0.1
            codellama/CodeLlama-13b-Instruct-hf
            codellama/CodeLlama-34b-Instruct-hf))
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
      gptel-backend
      (gptel-make-openai "TogetherAI"
        :host "api.together.xyz"
        :key "your-api-key"
        :stream t
        :models '(;; has many more, check together.ai
                  mistralai/Mixtral-8x7B-Instruct-v0.1
                  codellama/CodeLlama-13b-Instruct-hf
                  codellama/CodeLlama-34b-Instruct-hf)))
4.2.10. Anyscale
Register a backend with
;; Anyscale offers an OpenAI compatible API
(gptel-make-openai "Anyscale"           ;Any name you want
  :host "api.endpoints.anyscale.com"
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(;; has many more, check anyscale
            mistralai/Mixtral-8x7B-Instruct-v0.1))
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
      gptel-backend
      (gptel-make-openai "Anyscale"
        :host "api.endpoints.anyscale.com"
        :key "your-api-key"
        :models '(;; has many more, check anyscale
                  mistralai/Mixtral-8x7B-Instruct-v0.1)))
4.2.11. Perplexity
Register a backend with
(gptel-make-perplexity "Perplexity"     ;Any name you want
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t)                            ;If you want responses to be streamed
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'sonar
      gptel-backend (gptel-make-perplexity "Perplexity"
                      :key "your-api-key"
                      :stream t))
4.2.12. Anthropic (Claude)
Register a backend with
(gptel-make-anthropic "Claude"          ;Any name you want
  :stream t                             ;Streaming responses
  :key "your-api-key")
The :key
can be a function that returns the key (more secure).
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'claude-3-sonnet-20240229 ; "claude-3-opus-20240229" also available
      gptel-backend (gptel-make-anthropic "Claude"
                      :stream t
                      :key "your-api-key"))
- (Optional) Interim support for Claude 3.7 Sonnet
To use the Claude 3.7 Sonnet model in its “thinking” mode, you can define a second Claude backend and select it via the UI or elisp:
(gptel-make-anthropic "Claude-thinking" ;Any name you want
  :key "your-API-key"
  :stream t
  :models '(claude-sonnet-4-20250514 claude-3-7-sonnet-20250219)
  :request-params '(:thinking (:type "enabled" :budget_tokens 2048)
                    :max_tokens 4096))
You can set the reasoning budget tokens and max tokens for this usage via the :budget_tokens and :max_tokens keys here, respectively. You can control whether and how the reasoning output is shown via gptel’s menu or gptel-include-reasoning (see the FAQ section on handling reasoning content).
4.2.13. Groq
Register a backend with
;; Groq offers an OpenAI compatible API
(gptel-make-openai "Groq"               ;Any name you want
  :host "api.groq.com"
  :endpoint "/openai/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama-3.1-70b-versatile
            llama-3.1-8b-instant
            llama3-70b-8192
            llama3-8b-8192
            mixtral-8x7b-32768
            gemma-7b-it))
You can pick this backend from the menu when using gptel (see Usage). Note that Groq is fast enough that you could easily set :stream nil
and still get near-instant responses.
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'mixtral-8x7b-32768
      gptel-backend
      (gptel-make-openai "Groq"
        :host "api.groq.com"
        :endpoint "/openai/v1/chat/completions"
        :stream t
        :key "your-api-key"
        :models '(llama-3.1-70b-versatile
                  llama-3.1-8b-instant
                  llama3-70b-8192
                  llama3-8b-8192
                  mixtral-8x7b-32768
                  gemma-7b-it)))
4.2.14. Mistral Le Chat
Register a backend with
;; Mistral offers an OpenAI compatible API
(gptel-make-openai "MistralLeChat"      ;Any name you want
  :host "api.mistral.ai"
  :endpoint "/v1/chat/completions"
  :protocol "https"
  :key "your-api-key"                   ;can be a function that returns the key
  :models '("mistral-small"))
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'mistral-small
      gptel-backend
      (gptel-make-openai "MistralLeChat" ;Any name you want
        :host "api.mistral.ai"
        :endpoint "/v1/chat/completions"
        :protocol "https"
        :key "your-api-key"             ;can be a function that returns the key
        :models '("mistral-small")))
4.2.15. OpenRouter
Register a backend with
;; OpenRouter offers an OpenAI compatible API
(gptel-make-openai "OpenRouter"         ;Any name you want
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(openai/gpt-3.5-turbo
            mistralai/mixtral-8x7b-instruct
            meta-llama/codellama-34b-instruct
            codellama/codellama-70b-instruct
            google/palm-2-codechat-bison-32k
            google/gemini-pro))
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'mistralai/mixtral-8x7b-instruct
      gptel-backend
      (gptel-make-openai "OpenRouter"   ;Any name you want
        :host "openrouter.ai"
        :endpoint "/api/v1/chat/completions"
        :stream t
        :key "your-api-key"             ;can be a function that returns the key
        :models '(openai/gpt-3.5-turbo
                  mistralai/mixtral-8x7b-instruct
                  meta-llama/codellama-34b-instruct
                  codellama/codellama-70b-instruct
                  google/palm-2-codechat-bison-32k
                  google/gemini-pro)))
4.2.16. PrivateGPT
Register a backend with
(gptel-make-privategpt "privateGPT"     ;Any name you want
  :protocol "http"
  :host "localhost:8001"
  :stream t
  :context t                            ;Use context provided by embeddings
  :sources t                            ;Return information about source documents
  :models '(private-gpt))
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'private-gpt
      gptel-backend
      (gptel-make-privategpt "privateGPT" ;Any name you want
        :protocol "http"
        :host "localhost:8001"
        :stream t
        :context t                      ;Use context provided by embeddings
        :sources t                      ;Return information about source documents
        :models '(private-gpt)))
4.2.17. DeepSeek
Register a backend with
(gptel-make-deepseek "DeepSeek"         ;Any name you want
  :stream t                             ;for streaming responses
  :key "your-api-key")                  ;can be a function that returns the key
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'deepseek-reasoner
      gptel-backend (gptel-make-deepseek "DeepSeek"
                      :stream t
                      :key "your-api-key"))
4.2.18. Sambanova (Deepseek)
Sambanova offers various LLMs through their Samba Nova Cloud offering, with Deepseek-R1 being one of them. The token speed for Deepseek R1 via Sambanova is about 6 times faster than when accessed through deepseek.com.
Register a backend with
(gptel-make-openai "Sambanova"          ;Any name you want
  :host "api.sambanova.ai"
  :endpoint "/v1/chat/completions"
  :stream t                             ;for streaming responses
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(DeepSeek-R1))
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available for selection. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Add these two lines to your configuration:

;; OPTIONAL configuration
(setq gptel-model 'DeepSeek-R1)
(setq gptel-backend (gptel-get-backend "Sambanova"))
4.2.19. Cerebras
Register a backend with
;; Cerebras offers an instant OpenAI compatible API
(gptel-make-openai "Cerebras"
  :host "api.cerebras.ai"
  :endpoint "/v1/chat/completions"
  :stream t                             ;optionally nil as Cerebras is instant AI
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama3.1-70b llama3.1-8b))
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'llama3.1-8b
      gptel-backend
      (gptel-make-openai "Cerebras"
        :host "api.cerebras.ai"
        :endpoint "/v1/chat/completions"
        :stream nil
        :key "your-api-key"
        :models '(llama3.1-70b llama3.1-8b)))
4.2.20. Github Models
NOTE: GitHub Models is not GitHub Copilot! If you want to use GitHub Copilot chat via gptel, look at the instructions for GitHub CopilotChat below instead.
Register a backend with
;; Github Models offers an OpenAI compatible API
(gptel-make-openai "Github Models"      ;Any name you want
  :host "models.inference.ai.azure.com"
  :endpoint "/chat/completions?api-version=2024-05-01-preview"
  :stream t
  :key "your-github-token"
  :models '(gpt-4o))
You will need to create a github token.
For all the available models, check the marketplace.
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'gpt-4o
      gptel-backend
      (gptel-make-openai "Github Models" ;Any name you want
        :host "models.inference.ai.azure.com"
        :endpoint "/chat/completions?api-version=2024-05-01-preview"
        :stream t
        :key "your-github-token"
        :models '(gpt-4o)))
4.2.21. Novita AI
Register a backend with
;; Novita AI offers an OpenAI compatible API
(gptel-make-openai "NovitaAI"           ;Any name you want
  :host "api.novita.ai"
  :endpoint "/v3/openai"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check https://novita.ai/llm-api
            gryphe/mythomax-l2-13b
            meta-llama/llama-3-70b-instruct
            meta-llama/llama-3.1-70b-instruct))
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'gryphe/mythomax-l2-13b
      gptel-backend
      (gptel-make-openai "NovitaAI"
        :host "api.novita.ai"
        :endpoint "/v3/openai"
        :key "your-api-key"
        :stream t
        :models '(;; has many more, check https://novita.ai/llm-api
                  mistralai/Mixtral-8x7B-Instruct-v0.1
                  meta-llama/llama-3-70b-instruct
                  meta-llama/llama-3.1-70b-instruct)))
4.2.22. xAI
Register a backend with
(gptel-make-xai "xAI"                   ; Any name you want
  :stream t
  :key "your-api-key")                  ; can be a function that returns the key
You can pick this backend from the menu when using gptel (see Usage)
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

(setq gptel-model 'grok-3-latest
      gptel-backend (gptel-make-xai "xAI" ; Any name you want
                      :key "your-api-key" ; can be a function that returns the key
                      :stream t))
4.2.23. AI/ML API
AI/ML API provides 300+ AI models, including DeepSeek, Gemini and ChatGPT. The models run at enterprise-grade rate limits and uptimes.
Register a backend with
;; AI/ML API offers an OpenAI compatible API
(gptel-make-openai "AI/ML API"          ;Any name you want
  :host "api.aimlapi.com"
  :endpoint "/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(deepseek-chat gemini-pro gpt-4o))
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'gpt-4o
      gptel-backend
      (gptel-make-openai "AI/ML API"
        :host "api.aimlapi.com"
        :endpoint "/v1/chat/completions"
        :stream t
        :key "your-api-key"
        :models '(deepseek-chat gemini-pro gpt-4o)))
4.2.24. GitHub CopilotChat
Register a backend with
(gptel-make-gh-copilot "Copilot")
You will be prompted to log in to GitHub as required.
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of gptel-backend. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'claude-3.7-sonnet
      gptel-backend (gptel-make-gh-copilot "Copilot"))
4.2.25. AWS Bedrock
Register a backend with
(gptel-make-bedrock "AWS"
  :stream t                            ;optionally enable streaming
  :region "ap-northeast-1"
  :models '(claude-sonnet-4-20250514)  ;subset of gptel--bedrock-models
  ;; Model region for cross-region inference profiles. Required for models such
  ;; as Claude without on-demand throughput support. One of 'apac, 'eu or 'us.
  ;; https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-use.html
  :model-region 'apac)
The Bedrock backend reads your AWS credentials from environment variables. It expects to find AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (and optionally AWS_SESSION_TOKEN); alternatively, if AWS_PROFILE is set, it can obtain these directly from the aws CLI.
NOTE: The Bedrock backend needs curl >= 8.5 for the sigv4 signing to work properly (see https://github.com/curl/curl/issues/11794). An error will be signalled if gptel-curl is nil.
You can pick this backend from the menu when using gptel (see Usage).
- (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of
gptel-backend
. Use this instead of the above.

;; OPTIONAL configuration
(setq gptel-model 'claude-sonnet-4-20250514
      gptel-backend
      (gptel-make-bedrock "AWS"
        :stream t                            ;optionally enable streaming
        :region "ap-northeast-1"
        :models '(claude-sonnet-4-20250514)  ;subset of gptel--bedrock-models
        ;; Model region for cross-region inference profiles. Required for models
        ;; such as Claude without on-demand throughput support. One of 'apac,
        ;; 'eu or 'us.
        ;; https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-use.html
        :model-region 'apac))
4.2.26. Moonshot (Kimi)
Register a backend with
(gptel-make-openai "Moonshot"
  :host "api.moonshot.cn"  ;or "api.moonshot.ai" for the global site
  :key "your-api-key"
  :stream t                ;optionally enable streaming
  :models '(kimi-latest kimi-k2-0711-preview))
See the Moonshot.ai documentation for a complete list of models.
- (Optional) Use the builtin search tool
Moonshot supports a builtin search tool that does not require the user to provide the tool implementation. To use it, you first need to define the tool and add it to gptel-tools (while Moonshot does not require the client to provide the search implementation, it does expect the client to reply to the tool call with its given argument, to be consistent with other tool calls):

(setq gptel-tools
      (list (gptel-make-tool
             :name "$web_search"
             :function (lambda (&optional search_result)
                         (json-serialize `(:search_result ,search_result)))
             :description "Moonshot builtin web search. Only usable by moonshot model (kimi), ignore this if you are not."
             :args '((:name "search_result" :type object :optional t))
             :category "web")))
Then you also need to add the tool declaration via
:request-params
because it needs a special builtin_function type:

(gptel-make-openai "Moonshot"
  :host "api.moonshot.cn"  ;or "api.moonshot.ai" for the global site
  :key "your-api-key"
  :stream t                ;optionally enable streaming
  :models '(kimi-latest kimi-k2-0711-preview)
  :request-params '(:tools [(:type "builtin_function"
                             :function (:name "$web_search"))]))
Now the chat should be able to use search automatically. Try “what’s new today” and you should see up-to-date news in the response.
5. Quick start and commands
The primary means of using gptel is by invoking the command
gptel-send
. It can be invoked on any text and in any buffer,
including within the minibuffer or special, read-only buffers.
- Function
gptel-send
Arguments:
ARG
This command sends the buffer text from the start up to the cursor to the LLM as a prompt, and inserts the response it receives below the cursor. It treats the buffer like a chat interface. If the region is active, it sends only the text in the region instead. Narrowing is respected.
Most gptel commands including gptel-send
are asynchronous, so you
can continue to use Emacs while waiting for the response to be
received.
Calling gptel-send
with a prefix argument invokes a “transient” menu
where you can specify various gptel options. This menu may also be
invoked directly via gptel-menu
:
- Function
gptel-menu
Display a menu
- to set chat parameters (model, backend, system message),
- include quick instructions for the next request only,
- to add additional context – regions, buffers or files – to gptel,
- to include tools with the request,
- to read the prompt from or redirect the response elsewhere,
- or to replace the prompt with the response.
gptel-menu
is the primary way to tune gptel’s behavior interactively.
- Function
gptel-abort
Arguments:
BUFFER
This command will cancel the latest pending or ongoing request (LLM interaction) in the current buffer.
5.1. gptel in a dedicated buffer
gptel-send
works uniformly in any buffer in Emacs, and you are
encouraged to use it without requiring a context switch to a dedicated
interface. However, it does provide the option to create a buffer
dedicated to chatting with an LLM with the gptel
command.
- Function
gptel
Arguments:
(NAME &optional _ INITIAL INTERACTIVEP)
Switch to or create a chat session with
NAME
. If region is active, use it as the INITIAL prompt. Return the buffer created or switched to. INTERACTIVEP is t when gptel is called interactively.
Running gptel
interactively will prompt you for an API key if one is
needed, and switch you to a dedicated chat buffer (the “gptel
buffer”). In the gptel buffer, gptel-send
is bound to C-c RET
by
default.
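If you prefer a different binding in chat buffers, you can rebind gptel-send there. A minimal sketch, assuming the chat buffer's bindings live in gptel-mode-map (as in current gptel); the key chosen here is only an example:

```elisp
;; Also bind gptel-send to C-c C-<return> in gptel chat buffers
(with-eval-after-load 'gptel
  (define-key gptel-mode-map (kbd "C-c C-<return>") #'gptel-send))
```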
The gptel buffer is a normal Emacs buffer in all respects, but with some extra niceties for chat interaction.
- Variable
gptel-default-mode
- The major mode used in gptel
buffers. It is one of
markdown-mode
,org-mode
andtext-mode
. It usesmarkdown-mode
if available, and defaults totext-mode
.
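For example, to always use Org mode in new chat buffers:

```elisp
;; Use Org mode for all gptel chat buffers
(setq gptel-default-mode 'org-mode)
```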
5.2. Chat persistence
5.3. The rewrite interface
6. TODO gptel’s design
- gptel tries to be general, not specific
- gptel tries to be always available
7. gptel’s transient interface
- = Scope
- Most actions in gptel’s transient menus that
involve setting variables can be scoped to act globally, buffer-locally, or for the next request only.
Interactively, this is the way to specify different backends, models
and system messages in different Emacs buffers, or to temporarily
specify them for a one-shot request. The Scope option is available
in several gptel menus, including
gptel-menu
,gptel-tools
andgptel-system-prompt
.
8. Configuration
8.1. The anatomy of gptel-send
The following flowchart provides an overview of the most common user
options and hooks available for customizing the behavior of
gptel-send
. The left and right columns show user options and hooks
respectively. The central column illustrates the control flow of
gptel-send
, and where in the pipeline the user options or hooks are
applied.
[Flowchart omitted: a three-column diagram with gptel-send's control flow in the center, the user options that apply at each stage on the left (gptel-org-ignore-elements, gptel-org-branching-context, gptel-track-response, gptel-track-media, gptel-use-context, gptel-model, gptel-backend, gptel--request-params, gptel-directives, gptel--system-message, gptel--schema, gptel-use-tools, gptel-tools, gptel-confirm-tool-calls, gptel-include-reasoning, gptel-include-tool-results), and the hooks that run at each stage on the right (gptel-prompt-transform-functions, gptel-post-request-hook, gptel-pre-response-hook, gptel-post-stream-hook, gptel-post-response-functions). The same pipeline is described step by step below.]
gptel-send
works by (i) building a backend-appropriate request
payload from the provided text, context, tools and active gptel
configuration, (ii) sending the request and (iii) inserting or
otherwise dispatching on the response as necessary. A detailed
description of gptel-send’s processing pipeline and concomitant
customization options follows.
- Copy the text up to the cursor (or the selected region) from the “request buffer” to a temporary buffer. This serves as the primary prompt to be sent to the LLM.
  If the request is sent from an Org mode buffer, this region may be modified in two ways. If gptel-org-branching-context is non-nil, only the lineage of the current Org entry is copied to the temporary buffer. Additionally, Org elements of the types in gptel-org-ignore-elements are removed from this text. By default, the latter is used to strip Org PROPERTIES blocks from the text before sending. See 8.2.1 for more details.
- Run the hook gptel-prompt-transform-functions in this buffer, with the cursor at the end. This can be used to modify the prompt text or local environment as required. By default, this hook serves a couple of functions:
  - If gptel-use-context is non-nil, add the contents of regions, buffers and files explicitly added to gptel’s context by the user. How exactly this is added to the request payload depends on the value of gptel-use-context, see 8.6.
  - Apply any presets specified in the prompt text via the @preset cookie (see 8.8.1).
  gptel-prompt-transform-functions can be used for arbitrarily complex prompt transformations. A typical example would be to search for occurrences of the pattern $(cmd) and replace each with the output of the shell command cmd, making it easy to send dynamically generated shell command output. It is described in more detail in 9.2.
- Parse this buffer and collect text, sorting it into user and LLM role buckets in an array of messages. gptel uses text-properties to track the provenance of buffer text. If the user option gptel-track-response is nil, ignore the distinction between user and LLM roles and treat the entire buffer as a user prompt. If the user option gptel-track-media is non-nil, scan hyperlinks to files in this buffer and check if their MIME types are supported by the LLM (see 8.5). If they are, base64-encode them and include them in the messages array.
- Build the payload using parameters specified by gptel-backend and gptel-model. The former can include preferences like response streaming, LLM prompt caching, temperature etc. There are dozens of parameters governing backend API behavior and LLM output, and gptel provides user options for only a few of them, such as gptel-temperature and gptel-cache. To specify arbitrary LLM/backend API parameters, see 8.4.
- Create the system message and possible conversation template from gptel--system-message, and include it in the payload. If this variable is a string, it is included as is. If it is a function, the system message is generated dynamically. If it is a list of strings, the first element is treated as the system message, and the remaining elements are considered alternating user and LLM messages to be prepended to the messages array. See 8.3 for details.
- If gptel-use-tools is non-nil and gptel-tools contains a list of gptel tools (see 8.7), include the tools in the payload.
- Make an HTTP request with this payload. The address, port and API key (if required) for the request are included in the gptel-backend struct. Run gptel-post-request-hook immediately after starting the request. This hook may be used to do any cleanup or resetting – gptel uses this hook to reset user preferences after firing a “oneshot” request, see 7.
- gptel-send then waits for a response. When a response is received, do some basic error handling. If the response has HTTP code 200/201, first run gptel-pre-response-hook in the buffer from which the request was sent. This hook can be used to prepare the buffer for the response however you would like.
- Streaming responses only: Insert each chunk into the request buffer (or elsewhere if the output has been redirected, see 7). After each insertion, run gptel-post-stream-hook. This hook runs in the request buffer and may be used for immediate actions such as recentering the view or scrolling the window with the response.
- If gptel-include-reasoning is non-nil and the model responds with a “thinking” or reasoning “block” of text, handle it according to this user option. Typically this involves formatting it specially.
- If the LLM responds with a tool call, either run the tool automatically or insert a prompt into the request buffer seeking confirmation from the user. This depends on both the value of gptel-confirm-tool-calls and the tool’s :confirm slot. If the output has been redirected to a non-buffer destination, tool call confirmation is sought from the minibuffer instead.
- If a tool has been run (automatically or after confirmation), conditionally insert the result into the request buffer, depending on the value of gptel-include-tool-results and the tool’s :include slot.
- If a tool has been run: add the tool call result to the messages array, resend it to the LLM, and wait for and handle the response as before.
- After the response ends, run the hook gptel-post-response-functions in the request buffer. This hook can be used for cleanup, formatting or modifying the LLM output, etc. Note that this hook always runs, even if the response fails.
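As an example of the last step, the following sketch auto-saves file-visiting chat buffers after every response. It assumes that functions in gptel-post-response-functions are called with the start and end buffer positions of the response (per gptel's docstring); the function name is arbitrary:

```elisp
(defun my/gptel-save-after-response (_beg _end)
  "Save the current buffer after a gptel response is inserted."
  ;; Only file-visiting buffers can be saved
  (when buffer-file-name
    (save-buffer)))

(add-hook 'gptel-post-response-functions #'my/gptel-save-after-response)
```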
After the request ends, you can examine a pretty-printed view of the
state and details of the last request sent from the buffer at any time
via the function gptel--inspect-fsm
. In chat buffers, you can click
on the status text in the header-line instead. This is primarily
intended for introspection and debugging.
Alternatively, you can inspect the variable gptel--fsm-last
, which
always contains the last request as a gptel state-machine object (see
gptel’s state machine).
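As an illustration of the prompt-transform stage described above, here is a sketch of the $(cmd) replacement idea. This is an illustrative implementation, not gptel's built-in one; it uses &rest _ so it is robust to whatever arguments the hook passes (check the hook's docstring for the exact calling convention):

```elisp
;; Sketch: replace each $(cmd) in the prompt with the output of the
;; shell command cmd.  Runs in the temporary prompt buffer.
(defun my/gptel-expand-shell-commands (&rest _)
  (goto-char (point-min))
  (while (re-search-forward "\\$(\\([^)]+\\))" nil t)
    (replace-match (string-trim (shell-command-to-string (match-string 1)))
                   t t)))

(add-hook 'gptel-prompt-transform-functions #'my/gptel-expand-shell-commands)
```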
8.2. TODO gptel chat buffer UI
8.2.1. gptel in Org mode
gptel-org-branching-context
gptel-org-convert-response
gptel-org-ignore-elements
gptel-org-set-topic
gptel-org-set-properties
8.3. Directives
In addition to the text in your buffer, LLMs can be prompted with instructions on how they should respond. These instructions are prioritized and treated specially by most LLMs, and are one of the primary levers for configuring an LLM’s behavior. In popular use they are referred to as the “system message”, “system prompt” or “directives”. gptel refers to them as the “system message” and “directives”.
The system message can be used to specify the LLM’s general tone and tenor, output format, structure or restrictions, as well as general objectives it should work towards in its interactions with the user.
The following is a brief system message describing the tone and proscribing certain common LLM behaviors.
To assist: Be terse. Do not offer unprompted advice or clarifications. Speak in specific, topic relevant terminology. Do NOT hedge or qualify. Speak directly and be willing to make creative guesses.
Explain your reasoning. If you don’t know, say you don’t know. Be willing to reference less reputable sources for ideas.
Do NOT summarize your answers. Never apologize. Ask questions when unsure.
Here is another example, this time specifying an objective for the LLM to work towards:
You are a tutor and domain expert in the domain of my questions. You will lead me to discover the answer myself by providing hints. Your instructions are as follows:
- If the question or notation is not clear to you, ask for clarifying details.
- At first your hints should be general and vague.
- If I fail to make progress, provide more explicit hints.
- Never provide the answer itself unless I explicitly ask you to. If my answer is wrong, again provide only hints to correct it.
- If you use LaTeX notation, enclose math in \( and \) or \[ and \] delimiters.
In practice system messages can be document-length, composed of several sections that provide both instructions and a generous amount of context required to accomplish a task.
You can control the system message gptel uses via the variable gptel--system-message. This is most commonly a string containing the text of the instructions, but it can also be a directive: a function or a list of strings, as explained below.
While you can set gptel--system-message
to any string, gptel
provides the alist gptel-directives
as a registry of directives.
gptel’s idea of the directive is more general than a static string.
A directive in gptel-directives
can be
- A string, interpreted as the system message.
- A list of strings, whose first (possibly nil) element is interpreted as the system message, and the remaining elements as (possibly nil) alternating user prompts and LLM responses. This can be used to template the initial part of a conversation.
- A function that returns a string or a list of strings, interpreted
as the above. This can be used to dynamically generate a system
message and/or conversation template based on the current context.
(See the definition of
gptel--rewrite-directive-default
for an example.)
Each entry in gptel-directives
maps a symbol naming the directive to
the directive itself. By default, gptel uses the directive with the
key default
, so you should set this to what gptel should use out of
the box:
(setf (alist-get 'default gptel-directives) "My default system message here.")
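The other directive forms can be registered the same way. A sketch, where the names tutor and dated are arbitrary:

```elisp
;; A conversation template: system message, then a primed user/LLM exchange
(setf (alist-get 'tutor gptel-directives)
      '("You are a patient tutor. Guide me with hints."
        "I will ask you questions."         ;user turn
        "Understood. Ask away."))           ;LLM turn

;; A dynamic directive: a function that generates the system message
(setf (alist-get 'dated gptel-directives)
      (lambda () (format "Today is %s. Be concise."
                         (format-time-string "%F"))))
```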
8.4. Backends
A gptel-backend
is an object containing LLM connection,
authentication, model information and other request parameters.
To determine how to construct an LLM query, gptel uses the backend that is “active” in the Emacs buffer from which the query originates. Only one backend can be “active” in a buffer at a time, and each Emacs buffer can use a different backend.
The current backend is controlled by the variable gptel-backend
. It
holds a gptel-backend
object.
The backend may be set interactively from gptel-menu
:
- -m Model
- Set the gptel backend and model to use. Note that the gptel’s scope action is available in this menu, so the backend and model may be specified globally, buffer-locally or for the next request only.
gptel includes a pre-defined backend for ChatGPT, and methods to simplify creating backends for several supported LLM providers (See 2 and 4.)
Every gptel backend may include the following keys, although some may be left unspecified:
NAME
- Name of the backend, can be any string
Connection information:
PROTOCOL
- used to communicate with the provider, typically “http” or “https”.
HOST
- hostname of the provider, typically a domain or IP address.
ENDPOINT
- API endpoint for chat completion requests, such as
/v1/chat/completions
. HEADER
- An alist or function that returns an alist specifying
additional headers to send with each request. The
Content-Type
header is set toapplication/json
by gptel automatically and need not be specified. (Seeurl-request-extra-headers
for more details.) CURL-ARGS
- (List of strings) When using Curl, add these command
line arguments to the Curl process in addition to the ones gptel
uses by default. This can also be set globally instead of
per-backend instance via
gptel-curl-extra-args
. gptel’s use of Curl for requests is determined bygptel-use-curl
.
Authentication (if required):
KEY
- A string, variable (symbol) or function to retrieve an API key for requests. How this is included in the request depends on the implementation.
Models provided by the backend:
MODELS
- A list of symbols representing LLM names. Each symbol can also include model metadata and capabilities, see 8.5 for details.
Request parameters (optional):
STREAM
- (Boolean) Stream responses when using this backend, if supported.
REQUEST-PARAMS
- A plist of additional request parameters (as plist keys) and their values supported by the API. Its contents are API-specific, and can be used to set parameters that gptel does not provide user options for. It will be converted to JSON and included with all requests made with this backend.
Here is an annotated example of a full backend specification. In
practice gptel provides several specialized backend-creation functions
(gptel-make-*
) that handle most of this for you, as described in
4.
;; We use -openai since Openrouter provides an OpenAI-compatible API
(gptel-make-openai "Openrouter-example"      ;NAME, for your reference
  ;; Connection information
  :protocol "https"
  :host "openrouter.ai"                      ;Only domain name
  :endpoint "/api/v1/chat/completions"       ;Only endpoint
  :header                                    ;Adds KEY to HTTP header
  (lambda () (when-let* ((key (gptel--get-api-key)))
               `(("Authorization" . ,(concat "Bearer " key)))))
  ;; Wait for up to an hour for the response, and use a proxy
  :curl-args '("--keepalive-time" "3600"
               "--proxy" "proxy.yourorg.com:80")
  ;; Authentication: fetch API key from Emacs' environment
  :key (lambda () (getenv "OPENROUTER_API_KEY"))
  :models '(;; Specified as MODEL-NAME, a symbol
            deepseek/deepseek-r1-distill-llama-70b
            ;; Alternatively, a model can be (MODEL-NAME . METADATA-PLIST)
            ;; See the Models section for details
            (openai/gpt-oss-120b
             :description "OpenAI's most powerful open-weight model"
             :context-window 131
             :capabilities (reasoning json tool-use)))
  ;; Request parameters
  :stream t                ;Enable response streaming
  :request-params          ;API-specific parameters, do not copy!
  '(:top_p 0.80            ;Adjust sampling
    :top_k 20              ;Adjust sampling
    :max_tokens 1024))     ;Fix max response size
When a backend is defined, it is added to gptel’s registry of defined
backends. A backend object can be accessed by name via
gptel-get-backend
:
- Function
gptel-get-backend
Arguments:
(NAME)
Retrieve the backend object with
NAME
.
Fields of a gptel backend can be obtained via accessors. For example,
the :request-params
of the active backend can be obtained via
(gptel-backend-request-params gptel-backend)
, and that of a backend
with name “Openrouter-example” (as above) can be obtained as
(gptel-backend-request-params
(gptel-get-backend "Openrouter-example"))
All gptel backend fields can be modified in place. To add a model to the list of models in the above backend, for example, you can use
(push 'qwen3/qwen3-coder (gptel-backend-models (gptel-get-backend "Openrouter-example")))
8.5. Models
A model in gptel is a symbol denoting a specific LLM, whose name is as expected by the active LLM provider’s API.
Along with the active backend (see 8.4), gptel uses the value of
the user option gptel-model
as the LLM to query. Only one model can
be “active” in an Emacs buffer at a time, and each buffer can use a
different model.
The model may be set interactively from gptel-menu
:
- -m Model
- Set the gptel backend and model to use. Note that the gptel’s scope action is available in this menu, so the backend and model may be specified globally, buffer-locally or for the next request only.
Each gptel backend is typically associated with a list of available
models. When defining the backend, each model in this list can be
specified with additional metadata if required, as
(MODEL-NAME . METADATA-PLIST)
.
Here is an example:
(claude-sonnet-4-20250514  ;model name
 :description "High-performance model with exceptional reasoning and efficiency"
 :capabilities (media tool-use cache)
 :mime-types ("image/jpeg" "image/png" "image/gif" "image/webp"
              "application/pdf")
 :context-window 200
 :input-cost 3
 :output-cost 15
 :cutoff-date "2025-03"
 :request-params (:thinking (:type "enabled" :budget_tokens 2048)))
This metadata is displayed when selecting models interactively. It is also used internally by gptel to assess model capabilities such as its ability to parse binary file formats.
The following metadata keys are recognized:
Model information only:
DESCRIPTION
- For your reference.
CONTEXT-WINDOW
- Size in thousands of tokens of the context window of the model.
INPUT-COST
andOUTPUT-COST
- Cost per million input/output tokens. The currency is indeterminate and left to your interpretation.
CUTOFF-DATE
- Cutoff date for the data the model was trained on.
Keys used by gptel:
CAPABILITIES
- It is assumed that any model that gptel
communicates with can ingest and generate text. This is a list of
symbols denoting additional capabilities the model possesses:
tool-use
: The model is capable of using toolsjson
: The model can produce output structured according to a specified schemareasoning
: The model can produce a stream of “reasoning tokens”, separate from its final response.nostream
: The model or API cannot produce streaming responses. This denotes an incapability, since gptel’s default assumption is that all models can.media
: The model can understand binary formats (input-only)
MIME-TYPES
- List of MIME types a
media
capable model can parse. REQUEST-PARAMS
- A plist of additional request parameters (as plist keys) and their values supported by the API. Its contents are API-specific, and can be used to set parameters that gptel does not provide user options for. It will be converted to JSON and included with all requests made with this model.
8.6. TODO Context
8.7. TODO Tools
gptel can provide the LLM with client-side elisp “tools”, or function specifications, along with the request. A “tool” is an elisp function along with metadata intended to describe its purpose, arguments and return value as you would to a human:
“This function is used to do X. It accepts two arguments, a string and a list of numbers, and returns Y.”
If the LLM decides to run the tool, it supplies the tool call arguments, which gptel uses to run the tool in your Emacs session. The result is optionally returned to the LLM to complete the task.
This exchange can be used to equip the LLM with capabilities or knowledge beyond what is available out of the box – for instance, you can get the LLM to control your Emacs frame, create or modify files and directories, or look up information relevant to your request via web search or in a local database.
To use tools in gptel, you need
- a model that supports this usage. All the flagship models support tool use, as do many of the smaller open models.
- Tool specifications that gptel understands. gptel does not currently include any tool specifications out of the box.
8.7.1. Obtaining tools
8.7.2. Writing tools
A gptel tool is a structure specifying an Elisp function, the format of its arguments and accompanying documentation intended for the LLM. This documentation includes a description of the function and its arguments.
- Type
gptel-tool
A structure containing the fields specified below in calls to
gptel-make-tool
.- Function
gptel-make-tool
Arguments:
(&key NAME FUNCTION DESCRIPTION ARGS CATEGORY INCLUDE CONFIRM ASYNC)
Make a gptel tool for LLM use. The following keyword arguments are available, of which the first four are required.
NAME
: The name of the tool, recommended to be in Javascript style snake_case.
FUNCTION
: The function itself (lambda or symbol) that runs the tool.
DESCRIPTION
: A verbose description of what the tool does, how to call it and what it returns.
ARGS
: A list of plists specifying the arguments, or nil for a function that takes no arguments. Each plist in ARGS requires the following keys:
- argument :name and :description, as strings.
- argument :type, as a symbol. Allowed types are those understood by the JSON schema: string, number, integer, boolean, array, object or null.
The following plist keys are conditional/optional:
- :optional, boolean indicating if argument is optional
- :enum, for enumerated types, whose value is a vector of strings representing allowed values. Note that :type is still required for enums.
- :items, if the :type is array. Its value must be a plist including at least the item’s :type.
- :properties, if the type is object. Its value must be a plist that can be serialized into a valid JSON object specification by json-serialize.
See 8.7.2.1 for examples of structured tool arguments.
ASYNC
: boolean indicating if the elisp function is asynchronous. If ASYNC is t, the function should take a callback as its first argument, along with the arguments specified in ARGS, and run the callback with the tool call result when it’s ready. The callback itself is an implementation detail and must not be included in ARGS.
The following keys are optional:
CATEGORY
: A string indicating a category for the tool. This is used only for grouping in gptel’s UI. Defaults to “misc”.
CONFIRM
: Whether the tool call should wait for the user to run it. If true, the user will be prompted with the proposed tool call, which can be examined, accepted, deferred or canceled.
INCLUDE
: Whether the tool results should be included as part of the LLM output. This is useful for logging and as context for subsequent requests in the same buffer. This is primarily useful in chat buffers.
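An asynchronous tool, per the ASYNC description above, receives a callback as its first argument. A sketch using Emacs' built-in url-retrieve; the tool name and description are illustrative, and the raw response (including headers) is returned without post-processing:

```elisp
(gptel-make-tool
 :name "fetch_url"
 :description "Fetch the contents of URL and return them as a string."
 :async t
 :function (lambda (callback url)
             ;; url-retrieve is asynchronous: call CALLBACK with the
             ;; result once the retrieval buffer is ready
             (url-retrieve url
                           (lambda (_status)
                             (funcall callback (buffer-string)))))
 :args '((:name "url"
          :description "The URL to fetch"
          :type string))
 :category "web")
```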
- Specifying tool arguments
Tool arguments are specified in an Elisp format that mirrors the JSON schema for that object1. Each argument spec must be a plist with special keywords. gptel supports a small subset of the keywords supported by the JSON schema.
Argument specification is best understood by looking at some examples.
Consider a function argument named
some_text
that is expected to be a string. This argument can be specified as

(:name "some_text"
 :description "Text to insert into a buffer"
 :type string)
This is translated (roughly) to the JSON object
{
  "some_text": {
    "type": "string",
    "description": "Text to insert at buffer end"
  }
}
In a tool definition, this appears as a member of the
:args
list. In this example there is only one argument:

(gptel-make-tool
 :name "append_to_current_buffer"
 :function (lambda (some_text)
             (end-of-buffer)
             (insert some_text))
 :args '((:name "some_text"  ;NOTE: This is a list of argument specs
          :description "Text to insert into a buffer"
          :type string)))
Multiple arguments are specified as a list of plists. For example,
((:name "buffer"
  :description "Name of buffer to append to"
  :type string)
 (:name "some_text"
  :description "Text to insert at buffer end"
  :type string))
which is translated (roughly) to the JSON object
{
  "buffer": {
    "type": "string",
    "description": "Name of buffer to append to"
  },
  "some_text": {
    "type": "string",
    "description": "Text to insert at buffer end"
  }
}
A description of argument specification keywords recognized by gptel follows. The following keywords are always required:
:name
- (string) The name of the argument as it appears to the LLM. Using a snake_case or CamelCase name is preferred.
:description
- (string) A description of the argument, intended
for humans and the LLM. This can be as verbose as required, and can
include examples. You can use this to guide the LLM’s behavior, and
include hints such as when this argument might not be required (see
:optional
below). :type
- (symbol) Any datatype recognized by the JSON schema:
string
,number
,integer
,boolean
,array
,object
ornull
. The compound typesarray
andobject
require further specification, covered below.
The following keyword is required if (and only if) the type is
array
::items
Its value must be a plist including at least the item’s type. Examples:
:items (:type string)                       ;Array of strings
:items (:type array :items (:type number))  ;Array of arrays of numbers
The following key is required if (and only if) the type is
object
::properties
A plist, each of whose keys is the name of a property and value is the schema used to validate the property. Example:
:properties (:red (:type number :description "red value [0.0, 1.0]")
             :blue (:type number :description "blue value [0.0, 1.0]")
             :green (:type number :description "green value [0.0, 1.0]")
             :alpha (:type number :description "opacity [0.0, 1.0]"))
:required
(vector of strings) specification of which keys of the object are required. For instance, if the
:alpha
key is optional in the above example:

:required ["red" "blue" "green"]
Here is an example of a spec for an argument named “key_colors” that is an array of color descriptions, where each color description is an object with several keys, all of which are required:
(:name "key_colors"
 :description "Key colors in the image. Limit to less than four."
 :type array
 :items (:type object
         :properties
         (:r (:type number :description "red value [0.0, 1.0]")
          :g (:type number :description "green value [0.0, 1.0]")
          :b (:type number :description "blue value [0.0, 1.0]")
          :name (:type string :description "Human-readable color name in snake_case, e.g. \"olive_green\" or \"turquoise\""))
         :required ["r" "g" "b" "name"]))
Finally, the following optional argument keywords are recognized:
:optional
- (boolean) Specifies whether this argument is
optional. (Note that
:required
above specifies required object keys, not whether the argument itself is optional.) :enum
- (vector of strings) If the argument is of an enumerated
type, the value of this key is a vector of strings representing
allowed values. Note that
:type
is still required for enums.
Here is an example of an argument list including an optional enum, the “unit” argument:
((:name "location"
  :type object
  :properties (:lat (:type number :description "Latitude, [-90.0, 90.0]")
               :lon (:type number :description "Longitude, [-180.0, 180.0]"))
  :required ["lat" "lon"]
  :description "The latitude and longitude, in degrees. South and West (resp) are negative.")
 (:name "unit"
  :type string
  :description "The unit of temperature, either 'celsius' or 'fahrenheit'"
  :enum ["celsius" "fahrenheit"]
  :optional t))
8.7.3. Tools from MCP servers
8.7.4. Selecting tools
- Function
gptel-get-tool
Interactively:
- Function
gptel-tools
- Command to select tools and set tool-related behavior for gptel.
Running
gptel-tools
interactively brings up a transient menu where these options may be specified. Note that gptel’s scope option is available in this menu, so these settings may be specified as global, buffer-local or “oneshot”.
Via elisp:
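A sketch of selecting tools via elisp, using the gptel-get-tool function documented above (tool names here are illustrative; use names of tools you have registered):

```elisp
;; Look up registered tools by name and enable tool use.
;; `gptel-tools' and `gptel-use-tools' are the variables the
;; transient menu sets.
(setq gptel-use-tools t
      gptel-tools (mapcar #'gptel-get-tool
                          '("read_buffer" "append_to_current_buffer")))

;; Or buffer-locally, for the current chat buffer only:
;; (setq-local gptel-tools (list (gptel-get-tool "read_buffer")))
```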
8.8. Presets
If you use several LLMs, system messages and tools for different LLM tasks, it can be tedious to set options like the backend, model, system message and included tools repeatedly for each task or in each buffer. This is one of the main points of friction with using gptel interactively.2
gptel allows bundles of compatible options to be pre-specified and applied together, making it feasible to switch rapidly between different kinds of LLM tasks. A collection of such options is referred to as a “preset”.
Once defined, you can switch to a preset from gptel’s transient menu
(gptel-menu
). When a gptel preset is applied, the gptel options it
specifies are set, and the ones it does not specify are simply left
untouched. So you can layer several presets on top of each other,
with later presets taking precedence over the ones applied earlier.
Presets can be applied globally (across the Emacs session), buffer-locally or for the next request only. This is controlled by the “Scope” option in gptel’s transient menus – see 7.
Depending on the task, options in a preset could be
- Basic ones like selecting the LLM provider, the model and system message.
- Tools to include with requests.
- Request parameters like the temperature, the maximum reply size, and whether to stream responses.
- gptel-specific behavior, like whether gptel should distinguish between user prompts and LLM responses in the prompt (gptel-track-response), or include images and documents with the prompt (gptel-track-media).
A preset is not limited to these options. You can specify the value of any variable that begins with “gptel-”.
To define a preset, use gptel-make-preset
.
- Function
gptel-make-preset
Arguments:
(NAME [KEY1 VALUE1] [KEY2 VALUE2] ...)
Register a gptel options preset with
NAME
.A preset is a combination of gptel options intended to be applied and used together. Presets make it convenient to change multiple gptel settings on the fly.
Typically a preset will include a model, backend, system message and perhaps some tools, but any set of gptel options can be set this way.
NAME
must be a symbol.KEYS
is a plist ofKEY
andVALUE
pairs corresponding to the options being set. Recognized keys include:DESCRIPTION
is a description of the preset, used when selecting a preset.PARENTS
is a preset name (or list of preset names) to apply before this one.PRE
andPOST
are functions to run before and after the preset is applied. They take no arguments.BACKEND
is thegptel-backend
to set, or its name (like “ChatGPT”).MODEL
is thegptel-model
.SYSTEM
is the directive. It can be- the system message (a string),
- a list of strings (template)
- or a function (dynamic system message).
- It can also be a symbol naming a directive in
gptel-directives
.
TOOLS
is a list ofgptel-tools
or tool names, like'("read_url" "read_buffer" ...)
Recognized keys are not limited to the above. Any other key, like
:foo
, corresponds to the value of eithergptel-foo
(prioritized) orgptel--foo
.- So
TOOLS
corresponds togptel-tools
, CONFIRM-TOOL-CALLS
togptel-confirm-tool-calls
,TEMPERATURE
togptel-temperature
and so on.
See gptel’s customization options for all available settings.
Presets can be used to set individual options. An example of a preset to set the system message (and do nothing else):
(gptel-make-preset 'explain :system "Explain what this code does to a novice programmer.")
Here are some more comprehensive examples of presets:
(gptel-make-preset 'coder
  :description "A preset optimized for coding tasks" ;for your reference
  :backend "Claude"                                  ;gptel backend or backend name
  :model 'claude-3-7-sonnet-20250219
  :system "You are an expert coding assistant. Your role is to provide high-quality code solutions, refactorings, and explanations."
  :tools '("read_buffer" "modify_buffer"))           ;gptel tools or tool names
(gptel-make-preset 'editor ;can also be a string, but symbols are preferred
  :description "Preset for proofreading tasks"
  :backend "ChatGPT"
  :system 'proofread ;system message looked up in gptel-directives
  :model 'gpt-4.1-mini
  :tools '("read_buffer" "spell_check" "grammar_check")
  :temperature 0.7)
The following is a preset that sets the temperature and max tokens, and specifies how context (attached regions, buffers or files) and “reasoning” text should be handled. Crucially, it does not set the model or the backend, so it is intended to be used as a “parent” of other more specific presets.
(gptel-make-preset 'misc
  :temperature 0.2       ;sets gptel-temperature
  :max-tokens 512        ;sets gptel-max-tokens
  :include-reasoning nil ;sets gptel-include-reasoning
  :use-context 'system)  ;sets gptel-use-context
For programmatic use, you can use gptel-with-preset
to send requests
with presets temporarily applied.
- Macro
gptel-with-preset
Arguments:
(NAME &REST BODY)
Run
BODY
with gptel presetNAME
applied.This macro can be used to create
gptel-request
commands with settings from a gptel preset applied.NAME
is the preset name, a symbol.
Consider the common case of needing to send an LLM query with specific parameters:
(let ((gptel-backend ...)
      (gptel-model ...)
      (gptel--system-message ...)
      (gptel-tools (mapcar #'gptel-get-tool ...))
      ...)
  (gptel-request "Prompt" :callback ...))
If the required configuration is available as a preset, you can instead run
(gptel-with-preset editor ;name of preset
  (gptel-request "Prompt" :callback ...))
8.8.1. Specifying presets in the prompt
It is sometimes useful to be able to send a single LLM query with
options different from the active ones. One way to do this is to set
the scope to oneshot
in gptel’s transient menus before changing
options (see Scope). This makes it so that the previous set of options is
restored after the request is sent.
A second, possibly more convenient way is to specify a preset in the prompt text itself, which requires no fiddling with menus or other elisp.
Imagine that you have the following preset defined:
(gptel-make-preset 'websearch
  :description "Haiku with basic web search capability."
  ;; System message with instructions about searching, citations
  :system 'searcher ;a symbol: looked up in `gptel-directives'
  :backend "Claude"
  :model 'claude-3-5-haiku-20241022
  :temperature 0.7
  :tools '("search_web" "read_url" "get_youtube_meta"))
This preset includes tools for searching the web, reading URLs and
finding YouTube transcripts that the LLM can use. Irrespective of the
active gptel settings, you can send a query with this preset applied
by including @websearch
in your query:
@websearch Are there any 13" e-ink monitors on the market? Create a table comparing them, sourcing specs and reviews from online sources. Also do the same for "transreflective-LCD" displays – I’m not sure what exactly they’re called but they’re comparable to e-ink.
This @preset-name
cookie only applies to the final user turn of the
conversation that is sent – your latest question/response – and the
preset will not be applied if it is present in earlier messages.
The @preset-name
cookie can be anywhere in the prompt. For example:
<long piece of text>
What do you make of the above description, @editor?
Presets corresponding to @preset-name
cookies are applied after the
cookie itself is stripped from the prompt, with the cursor placed at
the cookie location. This can be used to make a preset cookie modify
the request and/or the prompt in a context-sensitive manner. For
example, we can define a json
preset to mandate a specified schema
from the LLM response:
(gptel-make-preset 'json
  :pre (lambda ()
         (setq-local gptel--schema
                     (buffer-substring-no-properties (point) (point-max)))
         (delete-region (point) (point-max)))
  :include-reasoning nil)
Then the @json
cookie may be used as follows:
@websearch What are three popular GNU projects? Use the provided tools to search for details. Reply in the specified format.
@json [ name, current_version number, mailing_list_email, start_year int ]
The LLM will search the web (with the tools included via the
@websearch
preset) and reply with JSON output akin to:
{
  "items": [
    {
      "name": "GNU Compiler Collection (GCC)",
      "current_version": 13.2,
      "mailing_list_email": "gcc@gcc.gnu.org",
      "start_year": 1987
    },
    {
      "name": "GNU Bash (Bourne Again SHell)",
      "current_version": 5.3,
      "mailing_list_email": "bug-bash@gnu.org",
      "start_year": 1989
    },
    {
      "name": "GNU Emacs",
      "current_version": 29.3,
      "mailing_list_email": "emacs-devel@gnu.org",
      "start_year": 1985
    }
  ]
}
The text following the @json
cookie is used to construct the
response JSON schema. (See 9.3 for
details on shorthand specifications of JSON schema in gptel.)
In chat buffers, a valid preset cookie is highlighted automatically
and can be completed via completion-at-point
. This is Emacs’
familiar tab-completion in buffers; see Symbol Completion.
This method of specifying a preset takes priority over all the other ways of setting gptel request options, including via elisp, from gptel’s transient menus, Org properties in the current buffer, etc.
9. Advanced configuration
9.1. The gptel-request
API
The heart of gptel is the function gptel-request
. It offers an
easy, flexible and comprehensive way to interact with LLMs, and is
responsible for state handling and for every HTTP request made by
gptel. All commands offered by gptel that involve sending and
receiving prompts and replies work by calling gptel-request
internally.
gptel-request
can be used to extend gptel, or write your own
functionality independent of that offered by gptel. Below is a
schematic and the full documentation of gptel-request
. You may
prefer to learn from examples and modify them to suit your needs
instead, in which case see 10.
[Schematic of gptel-request: the arguments split into the payload (PROMPT, SYSTEM and TRANSFORMS, each single- or multi-part), Emacs state (CONTEXT, BUFFER, POSITION) and response handling (CALLBACK, STREAM, FSM). gptel creates the payload using the environment (gptel-model, gptel-backend, gptel--system-message, gptel-use-tools, gptel-tools, gptel--schema, gptel-cache, gptel-include-reasoning, gptel-track-response, gptel-org-convert-response), sends the request, waits asynchronously, and finally calls (CALLBACK response INFO).]
- Function
gptel-request
Arguments:
(&optional PROMPT &key CALLBACK (BUFFER (current-buffer)) POSITION CONTEXT DRY-RUN (STREAM nil) (IN-PLACE nil) (SYSTEM gptel--system-message) SCHEMA TRANSFORMS (FSM (gptel-make-fsm)))
Request a response from the current
gptel-backend
forPROMPT
.The request is asynchronous; this function returns immediately.
If
PROMPT
is- a string, it is used to create a full prompt suitable for sending to the LLM.
- A list of strings, it is interpreted as a conversation, i.e. a series of alternating user prompts and LLM responses.
nil
but the region is active, the region contents are used.nil
, the current buffer’s contents up to (point) are used. Previous responses from the LLM are identified as responses.
Keyword arguments:
CALLBACK
, if supplied, is a function of two arguments, called with theRESPONSE
(usually a string) andINFO
(a plist):(funcall CALLBACK RESPONSE INFO)
RESPONSE
is- A string if the request was successful
nil
if there was no response or an error.
These are the only two cases you typically need to consider, unless you need to clean up after aborted requests, use LLM tools, handle “reasoning” content specially or stream responses (see
STREAM
). In these cases,RESPONSE
can be- The symbol
abort
if the request is aborted, seegptel-abort
. A cons cell of the form
(tool-call . ((TOOL ARGS CB) ...))
where
TOOL
is a gptel-tool struct,ARGS
is a plist of arguments, andCB
is a function for handling the results. You can callCB
with the result of calling the tool to continue the request.A cons cell of the form
(tool-result . ((TOOL ARGS RESULT) ...))
where
TOOL
is a gptel-tool struct,ARGS
is a plist of arguments, andRESULT
was returned from calling the tool function.A cons cell of the form
(reasoning . text)
where text is the contents of the reasoning block. (Also see
STREAM
if you are using streaming.)
See
gptel--insert-response
for an example callback handling all cases.The
INFO
plist has (at least) the following keys::data
- The request data included with the query:position
- marker at the point the request was sent, unlessPOSITION
is specified.:buffer
- The buffer current when the request was sent, unlessBUFFER
is specified.:status
- Short string describing the result of the request, including possible HTTP errors.Example of a callback that messages the user with the response and info:
(lambda (response info)
  (if (stringp response)
      (let ((posn (marker-position (plist-get info :position)))
            (buf  (buffer-name (plist-get info :buffer))))
        (message "Response for request from %S at %d: %s" buf posn response))
    (message "gptel-request failed with message: %s" (plist-get info :status))))
Or, for just the response:
(lambda (response _)
  ;; Do something with response
  (and (stringp response)
       (message (rot13-string response))))
If
CALLBACK
is omitted, the response is inserted at the point the request was sent.STREAM
is a boolean that determines if the response should be streamed, as ingptel-stream
. If the model or the backend does not support streaming, this will be ignored.When streaming responses
CALLBACK
will be called repeatedly with eachRESPONSE
text chunk (a string) as it is received.- When the
HTTP
request ends successfully,CALLBACK
will be called with aRESPONSE
argument of t to indicate success. - Similarly,
CALLBACK
will be called with(reasoning . text-chunk)
for each reasoning chunk, and(reasoning . t)
to indicate the end of the reasoning block.
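Putting the above together, here is a minimal streaming sketch (the prompt and messages are illustrative; reasoning chunks are ignored for brevity):

```elisp
;; A sketch of a streaming request.  Each string chunk is inserted
;; as it arrives; t marks success, nil an error.
(gptel-request "Write a haiku about Emacs"
  :stream t
  :callback (lambda (response info)
              (cond
               ((stringp response) (insert response)) ;a text chunk
               ((eq response t) (message "Request finished"))
               ((null response)
                (message "Request failed: %s"
                         (plist-get info :status))))))
```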
BUFFER
andPOSITION
are the buffer and position (integer or marker) at which the response is inserted. If aCALLBACK
is specified, no response is inserted and these arguments are ignored, but they are still available in theINFO
plist passed toCALLBACK
for you to use.BUFFER
defaults to the current buffer, andPOSITION
to the value of (point) or (region-end), depending on whether the region is active.CONTEXT
is any additional data needed for the callback to run. It is included in theINFO
argument to the callback. Note: This is intended for storing Emacs state to be used byCALLBACK
, and unrelated to the context supplied to the LLM.SYSTEM
is the system message or extended chat directive sent to the LLM. This can be a string, a list of strings or a function that returns either; seegptel-directives
for more information. IfSYSTEM
is omitted, the value ofgptel--system-message
for the current buffer is used.The following keywords are mainly for internal use:
IN-PLACE
is a boolean used by the default callback when inserting the response to determine if delimiters are needed between the prompt and the response.If
DRY-RUN
is non-nil, do not send the request. Construct and return a state machine object that can be introspected and resumed.TRANSFORMS
is a list of functions used to transform the prompt or query parameters dynamically. Each function is called in a temporary buffer containing the prompt to be sent, and can conditionally modify this buffer. This can include changing the (buffer-local) values of the model, backend or system prompt, or augmenting the prompt with additional information (such as from a RAG engine).- Synchronous transformers are called with zero or one argument, the state machine for the request.
- Asynchronous transformers are called with two arguments, a callback and the state machine. It should run the callback after finishing its transformation.
See
gptel-prompt-transform-functions
for more.If provided, SCHEMA forces the LLM to generate JSON output. Its value is a JSON schema, which can be provided as
- an elisp object, a nested plist structure.
- A JSON schema serialized to a string.
- A shorthand object/array description, see
gptel--dispatch-schema-type
.
Note:
SCHEMA
is presently experimental and subject to change, and not all providers support structured output.FSM
is the state machine driving the request. This can be used to define a custom request control flow, see 9.4 for details.
Note:
- This function is not fully self-contained. Consider let-binding
the parameters
gptel-backend
,gptel-model
,gptel-use-tools
,gptel-track-response
andgptel-use-context
around calls to it as required. - The return value of this function is a state machine object that may be used to rerun or continue the request at a later time. See 9.4.
gptel-request
presents a versatile API, and its uses and arguments
are specified in greater detail in the sections that follow.
9.2. TODO Prompt transformations
9.3. Output in a specified JSON schema
gptel-request
can force the LLM to generate output that follows a
specified JSON schema. This can be useful when it is used as part of
a data processing pipeline, or when gptel-request needs to be plugged
into Elisp code that expects structured data.
Here is a frivolous example demonstrating this feature:
(gptel-request "Generate three quirky dogs"
  :system nil
  :schema "[name, age int, hobby, short_bio]")
This returns
{
  "items": [
    {
      "name": "Baxter",
      "age": 5,
      "hobby": "Chasing shadows and hoarding squeaky toys",
      "short_bio": "Baxter is the neighborhood's self-declared shadow detective."
    },
    {
      "name": "Peaches",
      "age": 3,
      "hobby": "Dancing on hind legs and stealing socks",
      "short_bio": "Peaches twirls through life with a sock in her mouth and a heart full of mischief."
    },
    {
      "name": "Ziggy",
      "age": 7,
      "hobby": "Sniffing out hidden treats and composing howling symphonies",
      "short_bio": "Ziggy is a gourmet snack seeker and an opera star in the canine world."
    }
  ]
}
A more useful example is generating diagnostics for an Emacs buffer. For example:
(save-excursion
  (goto-char (point-max))
  (gptel-request nil ;send whole buffer
    :system "Proofread this text buffer for stylistic errors, cliche and purple prose. Ignore parts that look like markup or code. Do NOT report text that is satisfactory."
    :schema "[start_line int: Starting line number of text to which this diagnostic applies
              text: Text to which this diagnostic applies
              problem: Short description of problem
              replacement: Exact replacement text or fix for diagnostic]"))
The list of diagnostics can be plugged into a linting interface such as Flymake, for example. (See flymake.)
The SCHEMA
argument may be specified in many ways. From the easiest
(and most restrictive) to the hardest (and most flexible), these are:
As a comma-separated list of keys with optional types. The following are equivalent ways of specifying a simple object, i.e. one that does not itself contain objects/arrays:
;; prop [type], prop [type], ...
"name, age integer, hobby, short_bio" ;type is assumed to be string
Type defaults to string
if not specified. Types can be shortened as
long as they match a supported JSON schema type uniquely. (number
,
integer
, string
, boolean
, null
).
"name str, age int, hobby, short_bio str" ;with shortened types
gptel expands this to the schema
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "integer" },
    "hobby": { "type": "string" },
    "short_bio": { "type": "string" }
  }
}
To specify an array of objects of this type, enclose the above
specifications in [
and ]
:
;; [prop [type], prop [type], ...]
"[name, age integer, hobby, short_bio]"
gptel expands this to the schema3
{
  "type": "array",
  "items": {
    "type": "object",
    "properties": {
      "name": {"type": "string"},
      "age": {"type": "integer"},
      "hobby": {"type": "string"},
      "short_bio": {"type": "string"}
    }
  }
}
Often it is useful to add a description to JSON object properties to help guide the LLM. You can do this by using a multi-line shorthand like the following:
;; key [type]: [description]
;; key [type]: [description]...
"name: Name of the dog
 age int: Age of the dog in years
 hobby: What the dog likes to do
 short_bio: A one-sentence biography of the dog"
gptel expands this to
{
  "type": "object",
  "properties": {
    "name": { "type": "string", "description": "Name of the dog" },
    "age": { "type": "integer", "description": "Age of the dog in years" },
    "hobby": { "type": "string", "description": "What the dog likes to do" },
    "short_bio": { "type": "string", "description": "A one-sentence biography of the dog" }
  }
}
Whitespace between fields is not significant. The type, separator
":"
and the description are all optional, so this is valid:
"name:
age int: Age of the dog in years
hobby
short_bio: A one-sentence biography of the dog"
As before, to specify an array of objects of this type you can enclose
the text in [
and ]
:
"[name:
age int: Age of the dog in years
hobby
short_bio: A one-sentence biography of the dog]"
More complex schema (with enums, optional entries etc) can be specified in two ways.
If you have the JSON schema at hand, you can instead supply it directly as a serialized string:
(gptel-request "Generate a quirky dog"
  :system nil
  :schema "{\"type\":\"object\",\"properties\":
            {\"name\":{\"type\":\"string\"},
             \"age\":{\"type\":\"integer\"},
             \"hobby\":{\"type\":\"string\"},
             \"short_bio\":{\"type\":\"string\"}}}")
Otherwise, it must be specified as a plist, similar to how tool arguments are, see 8.7.2.1.
The object and array versions of the above examples, in gptel’s elisp specification, are:
( :type object ;Object version
  :properties
  ( :name ( :type string)
    :age ( :type integer :description "Age of the dog in years")
    :hobby ( :type string)
    :short_bio ( :type string :description "A one-sentence biography of the dog")))
( :type array ;Array of objects version
  :items
  ( :type object
    :properties
    ( :name ( :type string)
      :age ( :type integer :description "Age of the dog in years")
      :hobby ( :type string)
      :short_bio ( :type string :description "A one-sentence biography of the dog"))))
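For completeness, a sketch of passing the plist form directly to gptel-request, equivalent in intent to the string shorthands above:

```elisp
;; The :schema argument also accepts the elisp plist form directly:
(gptel-request "Generate a quirky dog"
  :system nil
  :schema '( :type object
             :properties
             ( :name ( :type string)
               :age ( :type integer :description "Age of the dog in years")
               :hobby ( :type string)
               :short_bio ( :type string))))
```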
9.4. gptel’s finite state machine
gptel’s interactions with LLMs are typically limited to a query followed by a response, but can involve several back-and-forth exchanges when tool calls or custom behavior are involved. Under the hood, gptel uses a Finite State Machine (FSM) to manage the lifecycle of all LLM interactions.
- Datatype
gptel-fsm
Fields:
STATE TABLE HANDLERS INFO
A finite state machine object consists of the fields
STATE
,TABLE
,HANDLERS
andINFO
.
FSMs may be created by the constructor gptel-make-fsm
.
- Function
gptel-make-fsm
Arguments:
(&key STATE TABLE HANDLERS INFO)
STATE
: The current state of the machine, can be any symbol.TABLE
: Alist mapping states to possible next states along with predicates to determine the next state. Seegptel-request--transitions
for an example.HANDLERS
: Alist mapping states to state handler functions. Handlers are called when entering each state. Seegptel-request--handlers
for an example.INFO
: The state machine’s current context. This is a plist holding all the information required for the ongoing request, and can be used to tweak and resume a paused request. (This should be called “context”, but context means too many things already in gptel.)
Each gptel request is passed an instance of this state machine and driven by it.
The FSM is in one of several possible states, and collects contextual
information in its INFO
plist.
Its transition table (TABLE
) encodes possible states and predicates
that are used to decide which state to switch to next. This is an
example of a transition table:
((INIT . ((t . WAIT)))
 (WAIT . ((t . TYPE)))
 (TYPE . ((gptel--error-p       . ERRS)
          (gptel--tool-use-p    . TOOL)
          (t                    . DONE)))
 (TOOL . ((gptel--error-p       . ERRS)
          (gptel--tool-result-p . WAIT)
          (t                    . DONE))))
The possible states of the FSM in this example are INIT
, WAIT
,
TYPE
, TOOL
, ERRS
and DONE
. These are gptel’s default FSM
states, denoted by upper-case symbols here. But there is no
special significance to them, and they can be arbitrary identifiers.
Each state in this table maps to a list of conses of the form
(predicate . NEXT-STATE)
.
- Function
gptel--fsm-next
Arguments:
(MACHINE)
Determine the next state for
MACHINE
. Run through the predicates for the current state in the transition table, calling each one withINFO
until one succeeds. A predicate oft
is treated as always true. Return the corresponding state.
The FSM’s HANDLERS
is a list of functions that are run upon entering
a new state. This is an example of FSM handlers:
((WAIT gptel--handle-wait)
 (TOOL gptel--handle-tool-use))
Both the WAIT
and TOOL
states have one handler each, and other
states do not have any handlers associated with them.
The state handler is the workhorse: its job is to produce the side
effects required for the LLM request, such as inserting responses into
buffers, updating the UI, running tools and so on. Handlers also
update the FSM’s INFO
as necessary, capturing information for the
transition-table predicates to use, and transition the FSM to the next
state.
- Function
gptel--fsm-transition
Arguments:
(MACHINE &optional NEW-STATE)
Transition
MACHINE
toNEW-STATE
or its natural next state. Run theHANDLERS
corresponding to that state.
Handlers can be asynchronous, in that the call to
gptel--fsm-transition
can occur in a process sentinel or some other
kind of delayed callback.
A typical state sequence for a gptel request can thus look like
INIT -> WAIT -> TYPE -> TOOL -> WAIT -> TYPE -> DONE
corresponding to a query that resulted in a tool call, followed by sending the tool result back to the LLM to be interpreted, and then a final response.
The buffer-local variable gptel--fsm-last
stores the FSM for the
latest gptel request, and is updated as it changes. You can inspect
this at any time to track what gptel is up to in that buffer. gptel
provides a helper function that visualizes the state of the FSM:
- Function
gptel--inspect-fsm
- Pop up a buffer to inspect the latest (possibly in-progress) gptel request in the current buffer.
In between conversation turns or calls to gptel-request
, gptel is
mostly stateless. However it maintains a limited amount of state in
the buffer text itself via text-properties. This state is used only
to assign user/LLM/tool roles to the text, and may be persisted to the
file. No other history is maintained, and gptel--fsm-last
is
overwritten when another request is started from the same buffer.
9.4.1. TODO Beyond hooks: changing gptel’s control flow
By modifying gptel’s default FSM transition-table and handlers, you can gain fine-grained access over the control flow of gptel well beyond what is possible via the provided hooks.
Entirely new applications and flows may be created with a custom state machine, although this requires exercising some care around the transitions that gptel imposes during its network handling.
This is a sentence that will be filled in later.
10. TODO Extending gptel
This section provides recipes for…
10.1. Simple gptel-request
commands
10.2. Building an application
11. FAQ
11.1. Chat buffer UI
11.1.1. I want the window to scroll automatically as the response is inserted
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.
(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
11.1.2. I want the cursor to move to the next prompt after the response is inserted
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to move the cursor:
(add-hook 'gptel-post-response-functions 'gptel-end-of-response)
You can also call gptel-end-of-response
as a command at any time.
11.1.3. I want to change the formatting of the prompt and LLM response
For dedicated chat buffers: customize gptel-prompt-prefix-alist
and gptel-response-prefix-alist
. You can set a different pair for each major-mode.
Anywhere in Emacs: Use gptel-pre-response-hook
and gptel-post-response-functions
, which see.
11.1.4. How does gptel distinguish between user prompts and LLM responses?
gptel uses text-properties to watermark LLM responses. Thus this text is interpreted as a response even if you copy it into another buffer. In regular buffers (buffers without gptel-mode
enabled), you can turn off this tracking by unsetting gptel-track-response
.
When restoring a chat state from a file on disk, gptel will apply these properties from saved metadata in the file when you turn on gptel-mode
.
gptel does not use any prefix or semantic/syntax element in the buffer (such as headings) to separate prompts and responses. The reason for this is that gptel aims to integrate as seamlessly as possible into your regular Emacs usage: LLM interaction is not the objective, it’s just another tool at your disposal. Requiring a bunch of “user” and “assistant” tags in the buffer would be noisy and restrictive. If you want these demarcations, you can customize gptel-prompt-prefix-alist and gptel-response-prefix-alist. Note that these prefixes are purely cosmetic, for your readability only.
11.2. Transient menu behavior
11.2.1. I want to set gptel options but only for this buffer
In every menu used to set options, gptel provides a “scope” option, bound to the = key:
You can flip this switch before setting the option to buffer or oneshot. You only need to flip this switch once; it’s a persistent setting. buffer sets the option buffer-locally, while oneshot sets it for the next gptel request only. The default scope is global.
11.2.2. I want the transient menu options to be saved so I only need to set them once
Any model options you set are saved according to the scope (see the previous question). But the redirection options in the menu are set for the next query only:
You can make them persistent across this Emacs session by pressing C-x C-s:
(You can also cycle through presets you’ve saved with C-x p and C-x n.)
Now these will be enabled whenever you send a query from the transient menu. If you want to use these saved options without invoking the transient menu, you can use a keyboard macro:
;; Replace with your key to invoke the transient menu:
(keymap-global-set "<f6>" "C-u C-c <return> <return>")
Or see this wiki entry.
11.2.3. Using the transient menu leaves behind extra windows
If using gptel’s transient menus causes new/extra window splits to be created, check your value of transient-display-buffer-action. See this discussion for more context.
If you are using Helm, see Transient#361.
In general, do not customize this Transient option unless you know what you’re doing!
11.2.4. Can I change the transient menu key bindings?
Yes, see transient-suffix-put. This changes the key to select a backend/model from “-m” to “M” in gptel’s menu:
(transient-suffix-put 'gptel-menu (kbd "-m") :key "M")
11.2.5. (Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode
Doom binds RET in Org mode to +org/dwim-at-point, which appears to conflict with gptel’s transient menu bindings for some reason.
Two solutions:
- Press C-m instead of the return key.
- Change the send key from return to a key of your choice:
(transient-suffix-put 'gptel-menu (kbd "RET") :key "<f8>")
11.3. Miscellaneous
11.3.1. I want to use gptel in a way that’s not supported by gptel-send or the options menu
gptel’s default usage pattern is simple, and will stay this way: read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (C-u M-x gptel-send).
For more programmable usage, gptel provides a general gptel-request function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by gptel-send. See the documentation of gptel-request, and the wiki for examples.
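As a minimal sketch of such a workflow (the command name and prompt text are illustrative, not part of gptel), a command that sends a fixed prompt and echoes the response:

```elisp
;; Hypothetical example command built on gptel-request.
(defun my/gptel-one-liner ()
  "Ask the current gptel backend a fixed question and echo the answer."
  (interactive)
  (gptel-request
   "Summarize the Unix philosophy in one sentence."
   :callback (lambda (response info)
               ;; RESPONSE is a string on success, otherwise check INFO.
               (if (stringp response)
                   (message "gptel: %s" response)
                 (message "gptel request failed: %s"
                          (plist-get info :status))))))
```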
11.3.2. (ChatGPT) I get the error “(HTTP/2 429) You exceeded your current quota”
(HTTP/2 429) You exceeded your current quota, please check your plan and billing details.
Using the ChatGPT (or any OpenAI) API requires adding credit to your account.
11.3.3. Why another LLM client?
Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
- Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated gptel buffer just adds some visual flair to the interaction.
- Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.
12. Alternatives
Other Emacs clients for LLMs include
- llm: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors. For similar scripting purposes, gptel provides the command gptel-request, which see.
- Ellama: A full-fledged LLM client built on llm that supports many LLM providers (Ollama, Open AI, Vertex, GPT4All and more). Its usage differs from gptel in that it provides separate commands for dozens of common tasks, like general chat, summarizing code/text, refactoring code, improving grammar, translation and so on.
- chatgpt-shell: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- org-ai: Interaction through special #+begin_ai ... #+end_ai Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
- Minuet: Code completion using LLMs. Supports fill-in-the-middle (FIM) completion for compatible models such as DeepSeek and Codestral.
12.1. Packages using gptel
gptel is a general-purpose package for chat and ad-hoc LLM interaction. The following packages use gptel to provide additional or specialized functionality:
Lookup helpers: Calling gptel quickly for one-off interactions
- gptel-quick: Quickly look up the region or text at point.
Task-driven workflows: Different interfaces to specify tasks for LLMs.
These differ from full “agentic” use in that the interactions are “one-shot”, not chained.
- gptel-aibo: A writing assistant system built on top of gptel.
- Evedel: Instructed LLM Programmer/Assistant.
- Elysium: Request AI-generated changes as you code.
- gptel-watch: Automatically call gptel when typing lines that indicate intent.
Agentic use: Use LLMs as agents, with tool-use
- Macher: Project-aware multi-file LLM editing for Emacs.
Text completion
- gptel-autocomplete: Inline completions using gptel.
Integration with major-modes
- ob-gptel: Org-babel backend for running gptel queries.
- ai-blog.el: Streamline generation of blog posts in Hugo.
- gptel-commit: Generate commit messages using gptel.
- magit-gptcommit: Generate commit messages within the magit-status buffer using gptel.
- gptel-magit: Generate commit messages for magit using gptel.
Chat interface addons
- Corsair: Helps gather text to populate LLM prompts for gptel.
- ai-org-chat: Provides branching conversations in Org buffers using gptel. (Note that gptel includes this feature as well (see gptel-org-branching-context), but it requires Org mode 9.7 or later to be installed.)
Integration with other packages
- consult-omni: Versatile multi-source search package. It includes gptel as one of its many sources.
gptel configuration management
- gptel-prompts: System prompt manager for gptel.
13. Acknowledgments
- Felipe Ochoa and akssri for adding AWS Bedrock support to gptel.
- John Wiegley for the design of gptel’s presets and gptel-request’s async pipeline, but also for loads of general feedback and advice.
- Henrik Ahlgren for a keen eye to detail and polish applied to gptel’s UI.
- psionic-k for extensive testing of the tool use feature and the design of gptel’s in-buffer tool use records.
- JD Smith for feedback and code assistance with gptel-menu’s redesign.
- Abin Simon for extensive feedback on improving gptel’s directives and UI.
- Alexis Gallagher and Diego Alvarez for fixing a nasty multi-byte bug with url-retrieve.
- Jonas Bernoulli for the Transient library.
- daedsidog for adding context support to gptel.
- Aquan1412 for adding PrivateGPT support to gptel.
- r0man for improving gptel’s Curl integration.
Footnotes:
This is not an issue for programmatic use of gptel, where you can let-bind gptel-backend, gptel-model and so on around calls to gptel-request. Presets can simplify this too; see gptel-with-preset above.
Note that this is not a valid JSON schema, as the top level is expected to be a JSON object, not an array. gptel handles this issue internally and wraps it in an object if required.