Running a MemGPT Server
You can run MemGPT as a service to support multiple users and multiple agents.
Docker
The easiest way to start up a MemGPT service is with Docker. Make sure you have Docker installed on your machine before running these steps.
Using with providers other than OpenAI
The current container deployment has not been tested with providers other than OpenAI (e.g., local LLMs). If you want to use the server with local LLMs, we recommend using the CLI option described below.
First, clone the MemGPT repository. Then, copy the text below into a .env file:
MEMGPT_SERVER_PASS=test_server_token # TODO: configure your admin password
MEMGPT_PG_DB=memgpt
MEMGPT_PG_USER=memgpt
MEMGPT_PG_PASSWORD=memgpt
MEMGPT_PG_URL=memgpt
MEMGPT_PG_HOST=localhost
MEMGPT_PG_PORT=8888
OPENAI_API_KEY=... # TODO: provide your OpenAI key
Inside the cloned repository folder, run:
docker compose up
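If you prefer to keep the service running in the background, the standard Docker Compose flags apply here as well (these are generic Compose commands, not MemGPT-specific ones):
docker compose up -d # start the containers in the background
docker compose logs -f # follow the server logs when needed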
You can configure the server with the compose.yml file.
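Before starting the service, you can also check that the values from your .env file are being picked up by printing the fully resolved configuration. This uses a standard Docker Compose command and is only a convenience check:
docker compose config # prints the composed configuration with .env values substituted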
Resetting the database
Data is persisted in the folder ./.persist/pgdata. If you need to clear your database data, you can run rm -r ./.persist/pgdata.
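A typical reset cycle therefore looks like the sketch below. This is a general Docker Compose workflow, under the assumption that the data directory is the ./.persist/pgdata folder mentioned above:
docker compose down # stop the running containers first
rm -r ./.persist/pgdata # delete the persisted database data
docker compose up # start again with a fresh database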
CLI
If you have MemGPT installed via pip or from source, you can start the server from the CLI. First, make sure you have already configured MemGPT with:
memgpt configure
Then, you can run the server:
memgpt server [--debug]
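Once the server reports that it is running, you can sanity-check it from another terminal. Treat the values below as assumptions rather than documented defaults: the port (8283 here) and the availability of an interactive API docs page depend on your configuration, so check the server's startup logs for the actual address:
curl http://localhost:8283/docs # assumed port and path; loads the API docs page if the server is up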