1. Building and installing dlib on Ubuntu
sudo apt-get install libboost-all-dev
git clone https://github.com/davisking/dlib.git
Start the build:
$ cd dlib
$ mkdir build; cd build; cmake ..; cmake --build .
Optional configuration:
mkdir build; cd build; cmake .. -DDLIB_USE_CUDA=0 -DUSE_AVX_INSTRUCTIONS=1; cmake --build .
-DDLIB_USE_CUDA=0 disables CUDA
-DUSE_AVX_INSTRUCTIONS=1 enables the CPU's AVX instructions for acceleration
Build and install the Python extensions:
cd ..
python3 setup.py install
Optional:
python setup.py install --yes USE_AVX_INSTRUCTIONS --no DLIB_USE_CUDA
Here you can target either python2 or python3, and you can also install inside a virtual environment.
The --no DLIB_USE_CUDA option disables CUDA; to use CUDA, omit the option or pass --yes DLIB_USE_CUDA.
At this point, you should be able to run python3 and import dlib successfully:
import dlib
A possible error during the build:
cmake: /usr/local/lib/libcurl.so.4: no version information available (required by cmake)
Cause: Ubuntu already ships a libcurl library, but installing curl ourselves installed a second libcurl that conflicts with the system version.
Fix: re-point the symlink of our own libcurl install at the system library:
$ locate libcurl.so.4
$ ls -l /usr/local/lib/libcurl.so.4
$ sudo rm -f /usr/local/lib/libcurl.so.4
$ sudo ln -s /usr/lib/x86_64-linux-gnu/libcurl.so.4.5.0 /usr/local/lib/libcurl.so.4
Verify:
$ cmake (the warning no longer appears)
2. Installing face_recognition
git clone https://github.com/ageitgey/face_recognition
pip2 install face_recognition  [why not build from source?]
----------------------------------------------------------------------------------------
The face_recognition API
face_recognition.api.batch_face_locations(images, number_of_times_to_upsample=1, batch_size=128)
Returns a 2D array of bounding boxes of human faces in an image, using the cnn face detector. If you are using a GPU, this can give you much faster results since the GPU can process batches of images at once. If you aren't using a GPU, you don't need this function.
Parameters:
images – A list of images (each as a numpy array)
number_of_times_to_upsample – How many times to upsample the image looking for faces. Higher numbers find smaller faces.
batch_size – How many images to include in each GPU processing batch.
Returns:
A list of tuples of found face locations in css (top, right, bottom, left) order
face_recognition.api.compare_faces(known_face_encodings, face_encoding_to_check, tolerance=0.6)
Compare a list of face encodings against a candidate encoding to see if they match.
Parameters:
known_face_encodings – A list of known face encodings
face_encoding_to_check – A single face encoding to compare against the list
tolerance – How much distance between faces to consider it a match. Lower is more strict. 0.6 is typical best performance.
Returns:
A list of True/False values indicating which known_face_encodings match the face encoding to check
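compare_faces is just a threshold over the euclidean distance between encodings. The semantics can be sketched in plain numpy; the tiny 3-d vectors below are made-up stand-ins, not real 128-d encodings from the library:

```python
import numpy as np

def compare_faces_sketch(known_encodings, candidate, tolerance=0.6):
    """Mimic face_recognition.compare_faces: True where the euclidean
    distance between a known encoding and the candidate is <= tolerance."""
    known = np.asarray(known_encodings)
    distances = np.linalg.norm(known - candidate, axis=1)
    return list(distances <= tolerance)

# Toy "encodings" just to show the thresholding behaviour.
known = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])]
candidate = np.array([0.1, 0.0, 0.0])
print(compare_faces_sketch(known, candidate))  # [True, False]
```

Lowering tolerance makes the match stricter; raising it admits more distant (less similar) faces.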
face_recognition.api.face_distance(face_encodings, face_to_compare)[source]
Given a list of face encodings, compare them to a known face encoding and get a euclidean distance for each comparison face. The distance tells you how similar the faces are.
Parameters:
face_encodings – List of face encodings to compare
face_to_compare – A face encoding to compare against
Returns:
A numpy ndarray with the distance for each face, in the same order as the 'face_encodings' list
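The distance itself is a plain euclidean norm over the encoding vectors. A short numpy sketch reproduces the shape of the result (again with toy 2-d vectors rather than real encodings):

```python
import numpy as np

def face_distance_sketch(face_encodings, face_to_compare):
    """Euclidean distance from each encoding to the target, mirroring
    face_recognition.face_distance's result ordering."""
    if len(face_encodings) == 0:
        return np.empty(0)
    return np.linalg.norm(np.asarray(face_encodings) - face_to_compare, axis=1)

encodings = [np.array([0.0, 0.0]), np.array([3.0, 4.0])]
target = np.array([0.0, 0.0])
print(face_distance_sketch(encodings, target))  # [0. 5.]
```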
face_recognition.api.face_encodings(face_image, known_face_locations=None, num_jitters=1)[source]
Given an image, return the 128-dimension face encoding for each face in the image.
Parameters:
face_image – The image that contains one or more faces
known_face_locations – Optional - the bounding boxes of each face if you already know them.
num_jitters – How many times to re-sample the face when calculating encoding. Higher is more accurate, but slower (i.e. 100 is 100x slower)
Returns:
A list of 128-dimensional face encodings (one for each face in the image)
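A typical use of these encodings is a small enrollment database matched by nearest distance. The 128-d vectors below are random stand-ins for real face_encodings output, purely to show the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for face_recognition.face_encodings(...) results.
database = {
    "alice": rng.standard_normal(128),
    "bob": rng.standard_normal(128),
}

# A probe "encoding": bob's vector plus a little noise.
probe = database["bob"] + 0.01 * rng.standard_normal(128)

# Identify by smallest euclidean distance.
names = list(database)
dists = [np.linalg.norm(database[n] - probe) for n in names]
best = names[int(np.argmin(dists))]
print(best)  # bob
```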
face_recognition.api.face_landmarks(face_image, face_locations=None, model='large')[source]
Given an image, returns a dict of face feature locations (eyes, nose, etc) for each face in the image
Parameters:
face_image – image to search
face_locations – Optionally provide a list of face locations to check.
model – Optional - which model to use. “large” (default) or “small” which only returns 5 points but is faster.
Returns:
A list of dicts of face feature locations (eyes, nose, etc)
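Each returned dict maps a feature name to a list of (x, y) points. A hand-written sample (the point values are invented; only the dict shape follows the docs, and real output has more keys such as "chin" and "nose_bridge") shows typical post-processing, e.g. computing an eye centre:

```python
# A fabricated face_landmarks-style result for one face.
landmarks = {
    "left_eye": [(10, 20), (12, 19), (14, 20), (12, 21)],
    "right_eye": [(30, 20), (32, 19), (34, 20), (32, 21)],
}

def feature_center(points):
    """Average the (x, y) points of one facial feature."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

for name, pts in landmarks.items():
    print(name, feature_center(pts))
```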
face_recognition.api.face_locations(img, number_of_times_to_upsample=1, model='hog')[source]
Returns an array of bounding boxes of human faces in an image
Parameters:
img – An image (as a numpy array)
number_of_times_to_upsample – How many times to upsample the image looking for faces. Higher numbers find smaller faces.
model – Which face detection model to use. “hog” is less accurate but faster on CPUs. “cnn” is a more accurate deep-learning model which is GPU/CUDA accelerated (if available). The default is “hog”.
Returns:
A list of tuples of found face locations in css (top, right, bottom, left) order
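Note the css (top, right, bottom, left) ordering: cropping a detected face out of the numpy image therefore looks like this. The image and the location tuple here are made up; real values come from load_image_file and face_locations:

```python
import numpy as np

# A dummy 100x100 RGB "image".
img = np.zeros((100, 100, 3), dtype=np.uint8)

# One made-up detection in css (top, right, bottom, left) order.
top, right, bottom, left = (20, 80, 60, 30)

# numpy indexing is [rows, cols] = [top:bottom, left:right].
face = img[top:bottom, left:right]
print(face.shape)  # (40, 50, 3)
```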
face_recognition.api.load_image_file(file, mode='RGB')[source]
Loads an image file (.jpg, .png, etc) into a numpy array
Parameters:
file – image file name or file object to load
mode – format to convert the image to. Only ‘RGB’ (8-bit RGB, 3 channels) and ‘L’ (black and white) are supported.
Returns:
image contents as numpy array
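Under the hood this is essentially PIL plus numpy. A minimal sketch of the same behaviour (assuming Pillow is installed; this is an illustration, not the library's actual source):

```python
import io

import numpy as np
from PIL import Image

def load_image_sketch(file, mode="RGB"):
    """Roughly what load_image_file does: open with PIL, convert the
    colour mode, and return the pixels as a numpy array."""
    im = Image.open(file)
    if mode:
        im = im.convert(mode)
    return np.array(im)

# Round-trip a tiny in-memory PNG to show the result shape/dtype.
buf = io.BytesIO()
Image.new("RGB", (4, 3), (255, 0, 0)).save(buf, format="PNG")
buf.seek(0)
arr = load_image_sketch(buf)
print(arr.shape, arr.dtype)  # (3, 4, 3) uint8
```

Note that PIL sizes are (width, height) while the numpy array is (height, width, channels).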