CLI Reference

gptme

GPTMe, a chat-CLI for LLMs, enabling them to execute commands and code.

If PROMPTS are provided, a new conversation will be started with them.

If one of the PROMPTS is ‘-’, the subsequent prompts will run after the assistant is done answering the first one.
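
For example, the following invocation (the prompt text is purely illustrative) starts a conversation and queues a second prompt that runs once the assistant has answered the first:

gptme "implement a fibonacci function in python" - "now write a test for it"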

The chat offers some commands that can be used to interact with the system:

/undo Undo the last action.
/log Show the conversation log.
/edit Edit the conversation in your editor.
/rename Rename the conversation.
/fork Create a copy of the conversation with a new name.
/summarize Summarize the conversation.
/save Save the last code block to a file.
/shell Execute shell code.
/python Execute Python code.
/replay Re-execute code blocks in the conversation; output won't be stored in the log.
/impersonate Impersonate the assistant.
/tokens Show the number of tokens used.
/help Show this help message.
/exit Exit the program.
gptme [OPTIONS] [PROMPTS]...

Options

--prompt-system <prompt_system>

System prompt. Can be ‘full’, ‘short’, or something custom.

--name <name>

Name of conversation. Defaults to generating a random name. Pass ‘ask’ to be prompted for a name.

--llm <llm>

LLM to use.

Options:

openai | azure | local

--model <model>

Model to use.

--stream, --no-stream

Stream responses.

-v, --verbose

Verbose output.

-y, --no-confirm

Skips all confirmation prompts.

-i, --interactive, -n, --non-interactive

Run in interactive or non-interactive mode. Non-interactive mode implies --no-confirm and is used in testing.
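
Since non-interactive mode skips all confirmations, it is suited to scripting; a sketch (the prompt text is illustrative) could look like:

gptme -n "write a commit message for the staged changes"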

--show-hidden

Show hidden system messages.

--version

Show version.

Arguments

PROMPTS

Optional argument(s)
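
Putting the options together, a typical invocation could look like this (the model name, conversation name, and prompt are placeholders, not documented defaults):

gptme --llm openai --model gpt-4 --name refactor-session "refactor main.py to use argparse"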

gptme-server

Starts a server and web UI for gptme.

Note that this is very much a work in progress, and is not yet ready for normal use.

gptme-server [OPTIONS]
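
For example, to start the server with verbose logging and a default model (the model name here is a placeholder, not a documented default):

gptme-server -v --llm openai --model gpt-4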

Options

-v, --verbose

Verbose output.

--llm <llm>

LLM to use.

Options:

openai | local

--model <model>

Model to use by default; can be overridden in each request.