Installation
NOTE: It is recommended that you set up a Python virtual environment using mamba, conda, or poetry. To install anyGPT:
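A minimal setup might look like the following, assuming the package is published on PyPI under the name `anygpt` (check the project's published package name) and using conda for the environment:

```shell
# Create and activate an isolated environment (conda shown; mamba/poetry work similarly)
conda create -n anygpt python=3.10
conda activate anygpt

# Install anyGPT (PyPI package name assumed)
pip install anygpt
```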
Using Docker
The Docker image supports GPU passthrough for training and inference. To enable GPU passthrough, follow the guide for installing the NVIDIA Container Toolkit for your OS.
NOTE: On Windows, you need to follow the guide to set up the NVIDIA Container Toolkit on WSL2. The Docker WSL2 backend is required.
Once the NVIDIA Container Toolkit and Docker are set up correctly, build the Docker image:
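A typical build command, assuming the repository contains a Dockerfile at its root and the image is tagged `anygpt` (the tag used in the commands below):

```shell
# Build the image from the repository root and tag it "anygpt"
docker build -t anygpt .
```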
Use the following command to log in to the container interactively and use anyGPT as if it were on your local host:
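One way to start an interactive session, assuming the image is tagged `anygpt`:

```shell
# Start an interactive container with all GPUs visible
docker run --gpus=all -it anygpt
```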
Mounting Volumes
It is recommended to mount a local directory into the container to share data between your local host and the container. This lets you save trained checkpoints, reuse datasets between runs, and more.
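For example, a bind mount with Docker's `-v` flag (host path and image tag assumed):

```shell
# Mount /path/to/local/dir on the host to /data inside the container
docker run --gpus=all -v /path/to/local/dir:/data -it anygpt
```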
The above example mounts `/path/to/local/dir` to the `/data` directory in the container, and all data and changes are shared between them dynamically.
Non-interactive Docker
The above documentation explains how to run a Docker container with an interactive session of anyGPT. You can also run anyGPT commands to completion with Docker by overriding the entrypoint:
```shell
$ docker run --gpus=all -v /path/to/your/data:/data --entrypoint anygpt-run -it anygpt /data/test.ckpt "hello world"
```
The above command runs `anygpt-run` with the parameters `/data/test.ckpt "hello world"`.
Dependencies
- torch >= 2.0.0
- numpy
- transformers
- datasets
- tiktoken
- wandb
- tqdm
- PyYAML
- lightning
- tensorboard