CLI Reference#
gptme#
GPTMe, a chat-CLI for LLMs, enabling them to execute commands and code.
If PROMPTS are provided, a new conversation will be started with them.
If one of the PROMPTS is ‘-’, the prompts that follow it will run after the assistant has finished answering the first one.
The chat offers a number of commands that can be used to interact with the system.
gptme [OPTIONS] [PROMPTS]...
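For example, the prompt-chaining behavior described above can be combined with the options below. This is a usage sketch; the prompts and model behavior are illustrative, and the exact output depends on your configuration:

```shell
# Start a named conversation with a single prompt
gptme --name my-chat "Write a Python script that prints the current date"

# Chain prompts with '-': the second prompt runs after the
# assistant has finished answering the first
gptme "Write a fibonacci function" - "Now write tests for it"

# Non-interactive mode (implies --no-confirm), useful for scripting
gptme --non-interactive "Summarize the file README.md"
```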
Options
- --prompt-system <prompt_system>#
System prompt. Can be ‘full’, ‘short’, or something custom.
- --name <name>#
Name of conversation. Defaults to generating a random name. Pass ‘ask’ to be prompted for a name.
- --llm <llm>#
LLM to use.
- Options:
openai | azure | local
- --model <model>#
Model to use.
- --stream, --no-stream#
Stream responses.
- -v, --verbose#
Verbose output.
- -y, --no-confirm#
Skips all confirmation prompts.
- -i, --interactive, -n, --non-interactive#
Choose whether to run in interactive mode. Non-interactive mode implies --no-confirm and is used in testing.
- --show-hidden#
Show hidden system messages.
- --version#
Show version.
Arguments
- PROMPTS#
Optional argument(s)
gptme-server#
Starts a server and web UI for gptme.
Note that this is very much a work in progress, and is not yet ready for normal use.
gptme-server [OPTIONS]
Options
- -v, --verbose#
Verbose output.
- --llm <llm>#
LLM to use.
- Options:
openai | local
- --model <model>#
Model to use by default, can be overridden in each request.
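Putting the server options together, a minimal invocation looks like the following. This is a sketch; the model name is illustrative, and per-request model overrides depend on the (work-in-progress) web UI and API:

```shell
# Start the server with verbose logging, using the OpenAI backend
# and a default model that individual requests may override
gptme-server --verbose --llm openai --model gpt-4
```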