## Docker
We provide a Docker image for Bench2Drive-VL to simplify the setup process.
To pull the image:

```bash
docker pull meteorcollector/b2dvl_carla
```
### What's inside the Docker image
Here is a table summarizing what is and is not included in our Docker image:
| Component | Included | Notes |
|---|---|---|
| CARLA | ✅ Yes | Contains the core CARLA simulator and its dependencies |
| CARLA additional maps and assets (used by Bench2Drive-VL) | ✅ Yes | Includes the extra maps and assets required by Bench2Drive-VL |
| Runtime environment for the CARLA inference kernel | ✅ Yes | Includes dependencies such as Python packages and other necessary libraries |
| Runtime environment for the VLM server (e.g., `transformers`) | ❌ No | Does not include the environment needed for the Vision-Language Model server |
| Bench2Drive-VL project code | ❌ No | Project source code is not included; we recommend mounting it via `-v $(pwd):/workspace/Bench2Drive-VL` |
This Docker image simplifies the setup of CARLA and its dependencies. We recommend running the VLM server and the Bench2Drive-VL components in separate environments, as combining them can lead to complex configuration issues.
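As a rough illustration of this "separate environments" recommendation, the sketch below sets up a host-side conda environment for a `transformers`-based VLM server. The environment name, package list, and the launch command are placeholders, not official requirements; consult the VLM server part of this project's documentation for the actual setup:

```bash
# On the host (outside the Docker container): a separate environment for the VLM server.
# The environment name and packages below are illustrative placeholders.
conda create -n vlm_server python=3.10 -y
conda activate vlm_server
pip install torch transformers accelerate

# Launch the VLM server here (entry point and port are placeholders --
# use the script documented by the Bench2Drive-VL VLM server tutorial):
# python vlm_server.py --port 8000
```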
### How to use
First, start the container:
```bash
docker run -it --gpus all \
    -v /path/to/your/Bench2Drive-VL:/workspace/Bench2Drive-VL \
    meteorcollector/b2dvl_carla /bin/bash
```
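If the inference kernel inside the container needs to reach a VLM server running directly on the host, one simple option (a Docker feature, not something mandated by Bench2Drive-VL) is host networking; adjust this to your own network setup:

```bash
# Variant of the run command using host networking, so processes inside the
# container can reach services on the host (e.g., the VLM server) via localhost.
docker run -it --gpus all --net=host \
    -v /path/to/your/Bench2Drive-VL:/workspace/Bench2Drive-VL \
    meteorcollector/b2dvl_carla /bin/bash
```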
Once inside the container, you should be under `/workspace/Bench2Drive-VL`. Now, activate the conda environment:

```bash
conda activate b2dvl
```
Then you're ready to go! Just follow the closed-loop inference tutorial; you can skip its environment setup stage, since the environment is already prepared inside the Docker image.
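For reference, closed-loop inference relies on a running CARLA server. Purely as an illustration (the closed-loop tutorial documents the actual scripts, and the CARLA install path inside the image may differ), starting CARLA off-screen inside the container looks roughly like this:

```bash
# Inside the container: start the CARLA server without a display.
# The path below is a placeholder -- point it at the CARLA installation shipped in the image.
cd /path/to/carla
./CarlaUE4.sh -RenderOffScreen -carla-rpc-port=2000 &
```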