This document covers how to install Detectron, its dependencies (including Caffe2), and the COCO dataset.
Requirements:
Notes:
To install Caffe2 with CUDA support, follow the installation instructions from the Caffe2 website. If you already have Caffe2 installed, make sure to update your Caffe2 to a version that includes the Detectron module.
Please ensure that your Caffe2 installation was successful before proceeding by running the following commands and checking their output as directed in the comments.
```
# To check if Caffe2 build was successful
python2 -c 'from caffe2.python import core' 2>/dev/null && echo "Success" || echo "Failure"

# To check if Caffe2 GPU build was successful
# This must print a number > 0 in order to use Detectron
python2 -c 'from caffe2.python import workspace; print(workspace.NumCudaDevices())'
```
If the `caffe2` Python package is not found, you likely need to adjust your `PYTHONPATH` environment variable to include its location (`/path/to/caffe2/build`, where `build` is the Caffe2 CMake build directory).
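Adjusting `PYTHONPATH` from your shell might look like the following sketch; the build path is a placeholder, substitute your own Caffe2 build directory:

```shell
# Prepend the Caffe2 build directory (placeholder path) to PYTHONPATH
export PYTHONPATH=/path/to/caffe2/build:$PYTHONPATH
echo $PYTHONPATH
```

To make the change persistent, add the `export` line to your shell profile (e.g. `~/.bashrc`).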
Install the COCO API:
```
# COCOAPI=/path/to/clone/cocoapi
git clone https://github.com/cocodataset/cocoapi.git $COCOAPI
cd $COCOAPI/PythonAPI
# Install into global site-packages
make install
# Alternatively, if you do not have permissions or prefer
# not to install the COCO API into global site-packages
python2 setup.py install --user
```
Note that instructions like `# COCOAPI=/path/to/install/cocoapi` indicate that you should pick a path where you'd like to have the software cloned and then set an environment variable (`COCOAPI` in this case) accordingly.
Clone the Detectron repository:
```
# DETECTRON=/path/to/clone/detectron
git clone https://github.com/facebookresearch/detectron $DETECTRON
```
Install Python dependencies:
```
pip install -r $DETECTRON/requirements.txt
```
Set up Python modules:
```
cd $DETECTRON && make
```
Check that Detectron tests pass (e.g. for the `SpatialNarrowAsOp` test):
```
python2 $DETECTRON/detectron/tests/test_spatial_narrow_as_op.py
```
At this point, you can run inference using pretrained Detectron models. Take a look at our inference tutorial for an example. If you want to train models on the COCO dataset, then please continue with the installation instructions.
Detectron finds datasets via symlinks from `detectron/datasets/data` to the actual locations where the dataset images and annotations are stored. For instructions on how to create symlinks for COCO and other datasets, please see `detectron/datasets/data/README.md`.
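As an illustration, creating such a symlink for COCO might look like the sketch below; both paths are placeholders, and `detectron/datasets/data/README.md` remains the authoritative reference for the expected layout:

```shell
# Placeholder paths -- substitute your own Detectron clone and COCO location
DETECTRON=${DETECTRON:-$HOME/detectron}
COCO_DIR=${COCO_DIR:-$HOME/data/coco}
mkdir -p $DETECTRON/detectron/datasets/data
# -s: symbolic link, -f: replace an existing link, -n: do not follow an existing link
ln -sfn $COCO_DIR $DETECTRON/detectron/datasets/data/coco
```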
After symlinks have been created, that's all you need to start training models.
Please read the custom operators section of the FAQ first.
For convenience, we provide CMake support for building custom operators. All custom operators are built into a single library that can be loaded dynamically from Python. Place your custom operator implementation under `detectron/ops/` and see `detectron/tests/test_zero_even_op.py` for an example of how to load custom operators from Python.
Build the custom operators library:
```
cd $DETECTRON && make ops
```
Check that the custom operator tests pass:
```
python2 $DETECTRON/detectron/tests/test_zero_even_op.py
```
We provide a `Dockerfile` that you can use to build a Detectron image on top of a Caffe2 image that satisfies the requirements outlined at the top. If you would like to use a Caffe2 image different from the one we use by default, please make sure that it includes the Detectron module.
Build the image:
```
cd $DETECTRON/docker
docker build -t detectron:c2-cuda9-cudnn7 .
```
Run the image (e.g. for the `BatchPermutationOp` test):
```
nvidia-docker run --rm -it detectron:c2-cuda9-cudnn7 python2 detectron/tests/test_batch_permutation_op.py
```
In case of Caffe2 installation problems, please read the troubleshooting section of the relevant Caffe2 installation instructions first. In the following, we provide additional troubleshooting tips for Caffe2 and Detectron.
Caffe2 comes with performance profiling support, which you may find useful for benchmarking or debugging your operators (see the `BatchPermutationOp` test for example usage). Profiling support is not built by default; you can enable it by setting the `-DUSE_PROF=ON` flag when running Caffe2 CMake.
Sometimes CMake has trouble with finding CUDA and cuDNN dirs on your machine.
When building Caffe2, you can point CMake to CUDA and cuDNN dirs by running:
```
cmake .. \
  # insert your Caffe2 CMake flags here
  -DCUDA_TOOLKIT_ROOT_DIR=/path/to/cuda/toolkit/dir \
  -DCUDNN_ROOT_DIR=/path/to/cudnn/root/dir
```
Similarly, when building custom Detectron operators you can use:
```
cd $DETECTRON
mkdir -p build && cd build
cmake .. \
  -DCUDA_TOOLKIT_ROOT_DIR=/path/to/cuda/toolkit/dir \
  -DCUDNN_ROOT_DIR=/path/to/cudnn/root/dir
make
```
Note that you can use the same commands to get CMake to use specific versions of CUDA and cuDNN out of possibly multiple versions installed on your machine.
Caffe2 uses protobuf as its serialization format and requires version 3.2.0 or newer. If your protobuf version is older, you can build protobuf from the Caffe2 protobuf submodule and use that version instead.
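When checking an installed protobuf against this requirement, compare version components numerically rather than as strings (e.g. 3.10.x satisfies the requirement even though it sorts before 3.2.0 lexicographically). A small self-contained sketch; the helper name is ours, not part of Caffe2:

```python
def version_at_least(version, minimum):
    """Numeric per-component comparison of dotted version strings."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# protobuf >= 3.2.0 is required by Caffe2
print(version_at_least("3.10.0", "3.2.0"))  # numeric compare: True
print("3.10.0" >= "3.2.0")                  # string compare: False (misleading)
```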
To build Caffe2 protobuf submodule:
```
# CAFFE2=/path/to/caffe2
cd $CAFFE2/third_party/protobuf/cmake
mkdir -p build && cd build
cmake .. \
  -DCMAKE_INSTALL_PREFIX=$HOME/c2_tp_protobuf \
  -Dprotobuf_BUILD_TESTS=OFF \
  -DCMAKE_CXX_FLAGS="-fPIC"
make install
```
To point Caffe2 CMake to the newly built protobuf:
```
cmake .. \
  # insert your Caffe2 CMake flags here
  -DPROTOBUF_PROTOC_EXECUTABLE=$HOME/c2_tp_protobuf/bin/protoc \
  -DPROTOBUF_INCLUDE_DIR=$HOME/c2_tp_protobuf/include \
  -DPROTOBUF_LIBRARY=$HOME/c2_tp_protobuf/lib64/libprotobuf.a
```
You may also experience problems with protobuf if you have both system and Anaconda packages installed, since the two versions can get mixed at compile time or at runtime. This issue can also be resolved by following the commands above.
In case you experience issues with CMake being unable to find the required Python paths when building Caffe2 Python binaries (e.g. in virtualenv), you can try pointing Caffe2 CMake to the Python library and include dir by using:
```
cmake .. \
  # insert your Caffe2 CMake flags here
  -DPYTHON_LIBRARY=$(python2 -c "from distutils import sysconfig; print(sysconfig.get_python_lib())") \
  -DPYTHON_INCLUDE_DIR=$(python2 -c "from distutils import sysconfig; print(sysconfig.get_python_inc())")
```
Detectron does not require Caffe2 built with NNPACK support. If you face NNPACK-related issues during Caffe2 installation, you can safely disable NNPACK by setting the `-DUSE_NNPACK=OFF` CMake flag.
Analogously to the NNPACK case above, you can disable OpenCV by setting the `-DUSE_OPENCV=OFF` CMake flag.
If you encounter a COCO API import error due to an undefined symbol, as reported here, make sure that your Python versions are not getting mixed. For instance, this issue may arise if you have both system and conda numpy installed.
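To see which numpy your interpreter actually resolves (useful for spotting a system/conda mix), a quick check, assuming numpy is importable:

```python
# Report which numpy is imported and from where; a mixed install shows up
# as an unexpected path here.
import numpy
print(numpy.__version__)
print(numpy.__file__)
```

If the printed path points outside the environment you think you are using, your installs are getting mixed.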
In case you experience issues with CMake being unable to find the Caffe2 package when building custom operators, make sure you have run `make install` as part of your Caffe2 installation process.