This is the first article in a four-part series.
Xilinx provides numerous reference designs and examples, based on pre-built binaries or platforms, for neural network processing on Zynq UltraScale+ development boards. Extending these examples, or integrating them into a more application-specific system, is often the desired next step. This may involve deploying on a custom board or using a specific sensor, for example.
The purpose of this paper is to demonstrate key concepts that are necessary to create a custom Vitis platform that is suitable for DPU-based neural network acceleration on edge devices.
This document is relevant to the 2020_05_28_vitis_ai_multicam.zip design files (or git tag rev3).
This document is not a tutorial for any specific element, such as camera interfacing, Vivado, Vitis AI, or PetaLinux, nor is it a step-by-step recipe with every exact command. It is intended as an aid to make the prototyping process easier, written in the style of a lab notebook with the key concepts highlighted.
ZCU102
/content/xilinx/en/support/documentation/boards_and_kits/zcu102/ug1182-zcu102-eval-bd.pdf
FMC Card
http://ultrazed.org/product/multi-camera-fmc-module
http://zedboard.org/sites/default/files/documentations/5361-pb-multicamera-v3b.pdf
Vitis
/content/xilinx/en/support/documentation/sw_manuals/xilinx2019_2/ug1400-vitis-embedded.pdf
Vitis AI
/content/xilinx/en/support/documentation/sw_manuals/vitis_ai/1_0/ug1414-vitis-ai.pdf
/content/xilinx/en/support/documentation/user_guides/ug1354-xilinx-ai-sdk.pdf
/content/xilinx/en/support/documentation/user_guides/ug1355-xilinx-ai-sdk-programming-guide.pdf
PetaLinux
/content/xilinx/en/support/documentation/sw_manuals/xilinx2019_2/ug1144-petalinux-tools-reference-guide.pdf
Xilinx Runtime (XRT)
The flow demonstrated in this paper is complex and involves a number of tools, libraries, drivers, and design files to be installed and configured. In order to help save time gathering installation information from all these various sources, this section documents the setup that was used in developing this platform. For more details on any specific item, refer to the appropriate documentation.
This design was built on an Ubuntu 18.04.3 LTS machine
Several packages and libraries are installed throughout the rest of this section, but some additional ones should be installed now (note that on Ubuntu the engine package is docker.io, not docker):
sudo apt-get install -y docker.io wget git make unzip
Note that this installs docker from the Ubuntu repositories. The official method to install docker may be preferable:
https://docs.docker.com/engine/install/ubuntu/
https://docs.docker.com/engine/install/linux-postinstall/
Docker needs to be runnable as a normal (non-root) user. To do this, create the docker group and add the user to it:
sudo groupadd docker
sudo usermod -aG docker ${USER}
Log into a new terminal session for the changes to take effect.
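A stale session is the most common reason docker still demands sudo, so it can help to check the active group list before going further. The following is a sketch; `in_group` is a hypothetical helper, not part of docker, and the official post-install guide simply suggests running `docker run hello-world`:

```shell
#!/bin/sh
# Check whether the docker group change took effect in the current session.
# `in_group` is a hypothetical helper that scans the session's group list.
in_group() {
    id -nG | tr ' ' '\n' | grep -qx "$1"
}

if in_group docker; then
    echo "docker group active: docker should work without sudo"
else
    echo "docker group not active yet: open a new login session"
fi
```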
1. Download Vitis 2019.2 – /content/xilinx/en/downloadNav/vitis.html
2. Run the xsetup script to launch the installer
3. Proceed through the installer with the desired settings, noting the following:
At the Select Product to Install page, select Vitis
At the Vitis Unified Software Platform page, make sure to enable Vitis, Vivado, "Install devices for Alveo and Xilinx edge acceleration platforms", and Zynq UltraScale+ MPSoC
1. Download PetaLinux 2019.2 – /content/xilinx/en/downloadNav/embedded-design-tools.html
2. Install all the pre-requisite libraries per UG1144
3. Run petalinux-v2019.2-final-installer.run to install it
The Vitis AI Acceleration flow (which this design uses to deploy the DPU) relies on a software layer called XRT, which unifies the deployment of accelerator kernels on both edge (i.e. Zynq PS + PL) and cloud (i.e. x86 + PCIe FPGA) designs. In this design, XRT runs on the Zynq PS and is installed to the target during the PetaLinux build. However, XRT also needs to be installed on the development host so that Vitis can compile and link against it.
For embedded deployment, XRT can be built from source and installed on the host following the steps on the XRT github https://github.com/Xilinx/XRT/blob/master/src/runtime_src/doc/toc/build.rst
1. Clone the XRT git repo
git clone https://github.com/Xilinx/XRT
2. Install dependencies (from inside the cloned repository)
cd XRT
sudo src/runtime_src/tools/scripts/xrtdeps.sh
3. Build the runtime
cd build
./build.sh
4. Build the .deb packages (continuing from inside the build directory)
cd Release
make package
cd ../Debug
make package
5. Install the .deb (the exact filename depends on the XRT version and Ubuntu release)
sudo apt install --reinstall ./xrt_201830.2.1.0_18.10.deb
The Vitis AI tools are provided as docker images, which need to be fetched:
docker pull xilinx/vitis-ai:tools-1.0.0-cpu
docker pull xilinx/vitis-ai:runtime-1.0.0-cpu
List the docker images to make sure they were installed correctly under the names used in the pull commands above.
Note that the scripts in this design will use the CPU tools docker image, not the GPU image.
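That check can be scripted along the following lines. This is a sketch; `check_images` is a hypothetical helper, and the image names are taken from the pull commands above. It takes the output of `docker images` as a string so the parsing is separate from the docker invocation:

```shell
#!/bin/sh
# Verify that both Vitis AI images are present in a `docker images` listing.
# $1 is expected to be the output of:
#   docker images --format '{{.Repository}}:{{.Tag}}'
check_images() {
    for img in xilinx/vitis-ai:tools-1.0.0-cpu \
               xilinx/vitis-ai:runtime-1.0.0-cpu; do
        echo "$1" | grep -qx "$img" || { echo "missing: $img"; return 1; }
    done
    echo "all Vitis AI images present"
}

# On a machine with docker installed:
# check_images "$(docker images --format '{{.Repository}}:{{.Tag}}')"
```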
Once all the required tools are installed, do the following steps to build and run the entire design.
1. To build the entire design, execute the following commands:
source <vitis_install>/2019.2/settings64.sh
source <petalinux_install>/2019.2/settings.sh
source <xrt_install>/xrt/setup.sh
unzip <date>_vitis_ai_multicam.zip
cd vitis_ai_multicam
make
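Before running make, it can help to confirm that the settings scripts actually put the tools on PATH. The following is a hypothetical sanity check, not part of the design files; the tool names (vivado, v++, petalinux-build, xclbinutil) are assumptions about what each of the three packages provides:

```shell
#!/bin/sh
# Pre-flight check: report any expected tool missing from PATH.
check_tools() {
    rc=0
    for t in "$@"; do
        command -v "$t" >/dev/null 2>&1 || { echo "not on PATH: $t"; rc=1; }
    done
    return $rc
}

check_tools vivado v++ petalinux-build xclbinutil \
    && echo "environment looks complete -- run make" \
    || echo "source the three settings scripts above first"
```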
2. Prepare the SD card
a. Create a FAT32 boot partition and EXT4 rootfs partition
b. Copy everything from the sd_card directory to the FAT32 boot partition except rootfs.tar.gz
c. Extract rootfs.tar.gz to EXT4 rootfs partition
tar -xvf rootfs.tar.gz -C <path_to_sd_rootfs_mountpoint>
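Step 2 can be sketched as a single script. `DEV`, the partition sizes, and the temporary mount points here are assumptions for illustration; double-check the device node with lsblk before running, since the mkfs commands are destructive. One gotcha worth encoding is that partition node naming differs by device type (/dev/sdb1 vs. /dev/mmcblk0p1):

```shell
#!/bin/sh
set -e

# Hypothetical SD card device node -- find yours with lsblk and edit.
DEV=/dev/sdX

# Partition node naming differs by device type:
# /dev/sdb -> /dev/sdb1, but /dev/mmcblk0 -> /dev/mmcblk0p1.
part_dev() {
    case "$1" in
        *[0-9]) echo "${1}p${2}" ;;
        *)      echo "${1}${2}" ;;
    esac
}

if [ "$DEV" = /dev/sdX ]; then
    echo "Set DEV to your SD card device before running." >&2
else
    # (a) FAT32 boot partition (1 GiB is plenty) + EXT4 rootfs on the rest
    sudo parted -s "$DEV" mklabel msdos \
        mkpart primary fat32 1MiB 1GiB \
        mkpart primary ext4 1GiB 100%
    sudo mkfs.vfat -n BOOT   "$(part_dev "$DEV" 1)"
    sudo mkfs.ext4 -L rootfs "$(part_dev "$DEV" 2)"

    # (b), (c) Mount and populate the two partitions
    mkdir -p /tmp/boot /tmp/rootfs
    sudo mount "$(part_dev "$DEV" 1)" /tmp/boot
    sudo mount "$(part_dev "$DEV" 2)" /tmp/rootfs
    sudo cp sd_card/* /tmp/boot/
    sudo rm /tmp/boot/rootfs.tar.gz   # boot partition gets everything else
    sudo tar -xf sd_card/rootfs.tar.gz -C /tmp/rootfs
    sudo umount /tmp/boot /tmp/rootfs
fi
```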
3. Hardware setup
a. Insert SD Card into ZCU102
b. Set boot mode to SD
c. Plug-in power cable
d. Connect the Avnet MULTICAM4-G FMC card, with all 4 AR0231 imagers attached, to HPC0
e. Connect the DisplayPort output to a 720p-capable monitor
f. Connect a MicroUSB cable to the USB UART connector
g. Connect a serial terminal at 115200 baud
4. Run the design
a. Power on the board
b. Login
Username: root
Password: root
c. Run the demo
cd /mnt
facedetect
5. Miscellaneous
a. I2C writes to set up the imager may occasionally fail with an error on the console. If this happens, re-run facedetect and/or reboot.
Brian Wiec is a Field Applications Engineer in the Detroit area serving the AMD Automotive customer base, supporting applications in ADAS, autonomous driving, infotainment, and powertrain control. He has worked at AMD for eight years in both field and factory support roles, with experience in video, signal processing, and embedded systems design and implementation. Brian is always happy to partner with customers to help them solve their technical challenges and enjoys participating in their innovations. In his free time, Brian likes spending time with his family, hiking, listening to music, playing hockey, and watching college football (Go Blue!).