
Deploying YOLOv5 Model on Raspberry Pi with Coral USB Accelerator

1 Feb 2021 · CPOL · 5 min read
In this article we’ll deploy our YOLOv5 face mask detector on Raspberry Pi.


In the previous article, we tested a face mask detector on a regular computer. In this one, we’ll deploy our detector solution on an edge device – Raspberry Pi with the Coral USB accelerator.

The hardware requirements for this part are:

  • Raspberry Pi 3 / 4 with an Internet connection (only for the configuration) running the Raspberry Pi OS (previously called Raspbian)
  • Raspberry Pi HQ camera (any USB webcam should work)
  • Coral USB accelerator
  • Monitor compatible with your Pi

The Coral USB Accelerator is a hardware accessory designed by Google. It adds an Edge TPU coprocessor to your system, enabling it to run machine learning models at very high speeds. I suggest you have a look at its data sheet. You can plug it into virtually any device. In our case, we'll pair it with our Pi because, on its own, the board can't run detection at a usable frame rate.

A Raspberry Pi with an accelerator isn't the only portable device you can use for this task. In fact, if you're working on a commercial solution, you'd be better off with a standalone Coral board, or something from NVIDIA's Jetson hardware series.

Both of these offer attractive options for scaling up toward mass production once you're done prototyping. For basic prototyping and experimentation, however, Raspberry Pi 4 with a Coral accelerator stick will do the job.

A final note: you don't have to deploy this solution to a Raspberry Pi if you don't want to. This model will run happily on a desktop or laptop PC if that's what you'd prefer to do.

Initial Steps on Raspberry Pi

Unplug the Coral stick if it's already connected to the Pi – I'll let you know when to attach it. We first have to install several libraries to run the YOLO models on the Pi and take advantage of the Coral device.

First of all, let’s update the Raspberry Pi board. Open up a terminal window and run:

sudo apt-get update
sudo apt-get upgrade

The above lines could take several minutes to complete. Check if the camera’s interface is active by clicking the top left Raspberry icon > Preferences > Raspberry Pi configuration > Interfaces tab. Select the camera’s Enable radio button and click OK. Reboot your Raspberry Pi board.

Preparing the Directory and Creating the Virtual Environment

Let's start by creating an isolated environment to avoid future dependency conflicts. This is a habit worth adopting, as it will save you a lot of trouble down the road. To get the virtual environment package, run:

sudo pip3 install virtualenv

Once the installation is done, run these lines to get everything prepared:

mkdir FaceMaskDetection
cd FaceMaskDetection
git clone
cd yolov5
git checkout tf-android
git clone
cd FaceMaskDetectionOnPi
mv customweights requirements4pi.txt /home/pi/FaceMaskDetection/yolov5
cd ..

Alright, now it’s time to create the isolated environment in the project directory and activate it so the dependencies’ installation can be performed:

python3 -m venv tflite1-env
source tflite1-env/bin/activate
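Once activated, every `python` and `pip` call resolves inside the environment rather than the system installation. If you ever want to confirm that from code, here's a quick standard-library check (my own sketch, not required for the project):

```python
import sys

def in_virtualenv() -> bool:
    """True when running inside a venv/virtualenv.

    venv points sys.prefix at the environment directory while
    sys.base_prefix still points at the system installation.
    """
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print("virtual environment active:", in_virtualenv())
```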

Finally, run the following:

pip install -r requirements4pi.txt

Installing TFLite Interpreter and PyTorch on Raspberry Pi

I previously mentioned that Raspberry Pi boards are not very good at running TensorFlow models – they simply don't have the required processing power. To make this project viable, we need to take the TFLite route, and then install PyTorch, because our script still uses some of its methods. Note that we won't run the model on top of PyTorch; we'll just use some of its utilities. Let's start by determining your Pi's processor architecture:

uname -a

If it shows that your processor architecture is ARMv7l, install the corresponding TFLite version.

If your Pi has Python 3.6 installed:

pip3 install

If your Pi has Python 3.7 installed:

pip3 install

If your Pi has any other Python version, or a different processor architecture, I suggest that you check this guide and, in the "Install just the TensorFlow Lite interpreter" section, look for Linux (ARM or AARCH) platform options.
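Picking the right wheel comes down to matching your Python version and processor architecture against the tags in the wheel's filename. As an illustrative sketch (the `tflite_wheel_tag` helper below is hypothetical, not part of any guide), you can derive the tag your Pi expects from the standard library:

```python
import platform
import sys

def tflite_wheel_tag() -> str:
    """Build the CPython/platform tag a matching wheel must carry,
    e.g. 'cp37-cp37m-linux_armv7l' on a Pi running Python 3.7."""
    major, minor = sys.version_info[:2]
    py = f"cp{major}{minor}"
    # The ABI 'm' suffix was dropped in CPython 3.8.
    abi = py + ("m" if (major, minor) < (3, 8) else "")
    # 'armv7l' on 32-bit Raspberry Pi OS, 'aarch64' on 64-bit.
    machine = platform.machine().lower()
    return f"{py}-{abi}-linux_{machine}"

print(tflite_wheel_tag())
```

Compare this string against the wheel filenames listed in the guide to find the one you need.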

Once that’s done, it’s time to install PyTorch – an open-source Python machine learning library used in multiple types of applications, including those that involve computer vision. Our Raspberry Pi doesn’t have the regular Intel x86 processor architecture. Instead, it has an ARM one; therefore, all Python packages that you want to install on the Pi must be compiled for this specific architecture.

There is no official PyTorch package for ARM processors. We can still install PyTorch from a pre-compiled wheel package, but the right wheel depends on the processor version your Pi has. Explore this NVIDIA forum to find the proper version and installation instructions. I also found this repo that contains packages for some other ARM builds. My Raspberry Pi has an ARMv7l processor.

If your Raspberry Pi’s processor is the same, you can use the wheel I’ve used. You’ll find it available at my Git repo.
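Wheel filenames follow a fixed pattern (PEP 427): name-version-pythontag-abitag-platformtag.whl, which is how you can tell whether a given wheel matches your system before installing it. Here's a small, simplified parser of my own (not part of pip or PyTorch; it ignores optional build tags):

```python
def parse_wheel(filename: str) -> dict:
    """Split a simple wheel filename into its PEP 427 components."""
    stem = filename[:-len(".whl")]
    name, version, py_tag, abi_tag, plat_tag = stem.split("-")
    return {"name": name, "version": version, "python": py_tag,
            "abi": abi_tag, "platform": plat_tag}

info = parse_wheel("torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl")
# The platform tag must match the Pi: 'linux_armv7l' for an ARMv7l board,
# and the python/abi tags must match the installed interpreter (cp37/cp37m).
print(info)
```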

Let’s start by installing the PyTorch dependencies required for it to run smoothly:

sudo apt install libopenblas-dev libblas-dev m4 cmake cython python3-dev python3-yaml python3-setuptools

Once that’s completed, browse to the Documents folder on your Raspberry Pi and issue this command to get the .whl file:

cd /home/pi/Documents
git clone

Once that’s done, run this to begin the installation:

cd /home/pi/Documents/PyTorchForRPi
pip3 install torch-1.0.0a0+8322165-cp37-cp37m-linux_armv7l.whl

The last command will take about two minutes to complete. If everything goes well, congratulations! You’re done with the hardest part.

Installing the Coral USB Accelerator Dependencies on Raspberry Pi

This is not mandatory - you only need this step if you want to accelerate the detection speed.

In the same terminal window, navigate to the project’s folder. Once there, add the Coral package repo to your apt-get distro list with the next lines:

echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update

Now we can finally install the Edge TPU runtime, which is the only requirement for using the USB accelerator. Install the standard-clock libedgetpu library (unlike the maximum-frequency variant, it won’t make your Coral overheat) by running:

sudo apt-get install libedgetpu1-std

You can now plug the Coral USB board into Raspberry Pi.
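If you want to sanity-check the runtime installation from Python, the standard library can tell you whether the shared library is visible to the loader. This is a quick check of my own, not an official Coral API; on a desktop without the runtime installed it will simply report False:

```python
import ctypes.util

def edgetpu_runtime_installed() -> bool:
    """Look for the Edge TPU runtime shared library on the loader path.

    ctypes.util.find_library returns None when libedgetpu isn't
    installed, and the library's soname otherwise.
    """
    return ctypes.util.find_library("edgetpu") is not None

print("Edge TPU runtime found:", edgetpu_runtime_installed())
```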

Deploying the Detector on the Raspberry Pi Board

Make sure your Coral USB accelerator and webcam are plugged in. Open a terminal window and navigate to the project’s directory:

cd /home/pi/FaceMaskDetection
source tflite1-env/bin/activate
cd yolov5

Install the project’s requirements by running:

pip3 install -r requirements4pi.txt

To initialize the detector without the Coral USB accelerator, issue:

python detect.py --weights customweights/best-fp16.tflite --img 416 --conf 0.45 --source 0 --classes 0 1

Otherwise, you’ll need to run this command:

python detect.py --weights customweights/best-fp16.tflite --img 416 --conf 0.45 --source 0 --classes 0 1 --edgetpu

Both options will open the webcam and start detection as expected.
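To see what those flags do: `--conf 0.45` drops any detection below that confidence, and `--classes 0 1` keeps only the mask/no-mask classes. A simplified sketch of this post-processing step (the `(x, y, w, h, confidence, class_id)` tuple layout and the `filter_detections` helper are illustrative, not the script's actual code):

```python
def filter_detections(detections, conf_thres=0.45, classes=(0, 1)):
    """Keep detections above the confidence threshold whose class id
    is in the allowed set -- the combined effect of --conf and --classes."""
    return [d for d in detections
            if d[4] >= conf_thres and int(d[5]) in classes]

raw = [
    (120, 80, 60, 60, 0.91, 0),  # mask, high confidence -> kept
    (300, 90, 55, 58, 0.30, 1),  # no-mask, below threshold -> dropped
    (200, 40, 50, 50, 0.75, 2),  # class not in the allowed set -> dropped
]
print(filter_detections(raw))  # keeps only the first detection
```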

Next Step?

Actually, none. We’ve reached the end of this challenging project! I hope the final outcome is what you expected.

This article is part of the series 'AI on the Edge: Face Mask Detection'.


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Written By
United States
Sergio Virahonda grew up in Venezuela, where he obtained a bachelor's degree in Telecommunications Engineering. He moved abroad four years ago and has since focused on building a meaningful data science career. He currently lives in Argentina, writing code as a freelance developer.
