Overview
Intelligent Traffic Management is designed to detect and track vehicles and pedestrians and to estimate a safety metric for an intersection. Object tracking recognizes the same object across successive frames, which makes it possible to estimate the trajectories and speeds of the objects. The reference implementation also detects collisions and near misses. A real-time dashboard visualizes the intelligence extracted from the traffic intersection along with the annotated video stream(s).
This collected intelligence can be used to adjust traffic lights to optimize traffic flow through the intersection, or to evaluate and enhance intersection safety by allowing emergency services notifications, such as 911 calls, to be triggered by collision detection, reducing emergency response times.
To run the reference implementation, you will need to first configure the control plane host and the worker node host as presented in Prerequisites.
Select Configure & Download to download the reference implementation and the software listed below.
NOTE: This software package will not work on the People's Republic of China (PRC) network.
- Time to Complete: 30 - 45 minutes
- Programming Language: Python*
- Software:
  - Intel® Distribution of OpenVINO™ toolkit 2021 Release
  - Kubernetes*
Target System Requirements
Control Plane
- One of the following processors:
  - 6th to 12th Generation Intel® Core™ processors with Iris® Pro Graphics or Intel® HD Graphics
- At least 32 GB RAM.
- At least 256 GB hard drive.
- An Internet connection.
- One of the following operating systems:
  - Ubuntu* 20.04 LTS Server.
  - Lubuntu* 20.04 LTS.
Worker Nodes
- One of the following processors:
  - 6th to 12th Generation Intel® Core™ processors with Iris® Pro Graphics or Intel® HD Graphics
- At least 32 GB RAM.
- At least 256 GB hard drive.
- An Internet connection.
- One of the following operating systems:
  - Ubuntu* 20.04 LTS Server.
  - Lubuntu* 20.04 LTS.
- IP camera or pre-recorded video(s).
How It Works
The application uses the inference engine and Intel® Deep Learning Streamer (Intel® DL Streamer) included in the Intel® Distribution of OpenVINO™ toolkit. The solution is designed to detect and track vehicles and pedestrians and to upload data to Amazon Web Services* (AWS*) S3 cloud storage.
Figure 1: How It Works
The Intelligent Traffic Management application requires the service pods, a database and a visualizer. Once the installation succeeds, the application is ready to be deployed using Helm. After deployment, the application pod takes in the virtual or real RTSP stream addresses, performs inference and sends metadata for each stream to the InfluxDB* database. In parallel, the visualizer presents analytics derived from the metadata, such as detected pedestrians, observed collisions, and the processed video feed.
The application can perform inference on as many as 20 channels. In addition, the visualizer can show each feed separately as well as all feeds at the same time using Grafana*. The user can view the output remotely over a browser, provided they are on the same network.
New in this release:
- Fix for an RTSP issue where RTSP might not work because the first 10-15 frames are received without data
- Optimization of the Intel® DL Streamer image
Figure 2: Architecture Diagram
Get Started
Prerequisites
To run the latest version of Intelligent Traffic Management, you will need two Linux hosts: one for the Kubernetes control plane and one for the Kubernetes worker node. The following steps describe how to prepare both targets before installing the reference implementation.
1. Install docker-ce and docker-compose. Run the following commands on both targets:
   - Install the latest Docker CLI and Docker daemon by following the Docker instructions to Install using the repository and Install Docker Engine.
   - Run Docker without sudo by following the Manage Docker as a non-root user instructions.
   - If your hosts are running behind an HTTP/S proxy server, perform these steps. If not, you can skip this step.
     - Configure proxy settings for the Docker* client to connect to the internet and for containers to access the internet by following Configure Docker to use a proxy server.
     - Configure proxy settings for the Docker* daemon by following HTTP/HTTPS proxy.
   - Install the docker-compose tool by following Install Compose.
   - Configure the Docker service by adding the following to the /etc/docker/daemon.json file:
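The exact contents to add are not reproduced here. As a reference, a minimal sketch of a daemon.json entry that sets the systemd cgroup driver, which kubeadm-based clusters commonly expect (an assumption; merge it with any proxy or registry settings your environment needs):

    {
        "exec-opts": ["native.cgroupdriver=systemd"]
    }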
2. Install Helm. Run the following commands on both targets:
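The original commands are not reproduced here. One common approach (an assumption, not necessarily the exact method this guide used) is the official Helm installer script:

    curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
    chmod 700 get_helm.sh
    ./get_helm.sh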
3. Install and configure the Kubernetes cluster. Run the following commands on both targets (a sketch of typical commands follows this list):
   - Get the Google key:
   - Add the Kubernetes apt repository:
   - Disable swap on your machine. (The Kubernetes cluster doesn't work while using swap memory.)
   - Install the Kubernetes binaries:
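A sketch of typical commands for the sub-steps above on an apt-based system; the repository shown is the legacy apt.kubernetes.io repository from this era and may need to be replaced for newer Kubernetes releases:

    # Get the Google signing key
    curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    # Add the Kubernetes apt repository
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

    # Disable swap (Kubernetes does not run with swap enabled)
    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab

    # Install the Kubernetes binaries
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni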
4. Initialize the Kubernetes cluster on the Control Plane machine:
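A sketch of a typical initialization command; the pod network CIDR is an assumption and must match the network plugin deployed in step 6:

    sudo kubeadm init --pod-network-cidr=10.244.0.0/16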
NOTE: Save the kube join command prompted at the end of the cluster creation.
5. Configure access to the Kubernetes cluster:
   - Current user configuration:
   - Root user configuration:
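These are the standard kubeadm post-install steps; a sketch for the two configurations above:

    # Current (non-root) user
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Root user
    export KUBECONFIG=/etc/kubernetes/admin.conf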
6. Add a network plugin to your Kubernetes cluster:
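The guide does not name the plugin here. Applying a CNI plugin generally looks like the following; the manifest URL is a placeholder for the plugin you choose (Flannel and Calico are common options):

    kubectl apply -f <network-plugin-manifest-url>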
7. Enable the kubelet service and check its status:
   - Enable the kubelet service:
   - Check the kubelet service status. The expected status is Active.
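A sketch of the usual systemd commands for the two sub-steps above:

    sudo systemctl enable kubelet
    sudo systemctl status kubelet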
8. Check that the current node is ready by using the following command; the output should show the node in the Ready state.
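A sketch of the check, assuming kubectl access was configured in step 5:

    kubectl get nodes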
9. Join the Kubernetes worker node:
   - If you didn't save the join command in step 4, run the following command on the control plane to generate another token. (If you have the join command, skip this step.)
   - Run the kubeadm join command on the worker node, for example:
   - If the join failed, proceed with the following step and give kubelet access to the network policy:
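A sketch of the token generation and join commands for the sub-steps above; the address, token and hash are placeholders printed by your own cluster:

    # On the control plane: print a fresh join command
    sudo kubeadm token create --print-join-command

    # On the worker node: run the join command produced above, e.g.
    sudo kubeadm join <CONTROL_PLANE_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>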
10. Configure Kubernetes on the worker side:
    - Create the .kube config folder on the worker side:
    - Copy the configuration file from the controller to the worker node:
    - Restart the kubelet service:
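A sketch of the three sub-steps above, assuming SSH access from the worker to the control plane (user and IP are placeholders):

    # Create the .kube config folder on the worker
    mkdir -p $HOME/.kube

    # Copy the kubeconfig from the control plane to the worker
    scp <user>@<CONTROL_PLANE_IP>:$HOME/.kube/config $HOME/.kube/config

    # Restart the kubelet service
    sudo systemctl restart kubelet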
11. Check that the Kubernetes nodes on both machines are ready with the command from step 8; the output should now show both nodes in the Ready state.
12. Assign a role to the worker node from the control-plane host, then check that the label was applied; the output should now show the worker node's role.
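A sketch of a typical labeling command and the follow-up check; the node name is a placeholder and the exact label used by the guide may differ:

    kubectl label node <worker-node-name> node-role.kubernetes.io/worker=worker
    kubectl get nodes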
13. Install the required package on the worker node:
14. Install the required package on the control plane:
NOTE: For a local build using the Harbor local registry, add the following line to the /etc/docker/daemon.json configuration file: "insecure-registries": ["https://WORKER_IP:30003"]
Step 1: Install the Reference Implementation
NOTE: The following sections may use <Controller_IP> in a URL or command. Make note of your Edge Controller’s IP address and substitute it in these instructions.
Select Configure & Download to download the reference implementation and then follow the steps below to install it.
1. Make sure that the Target System Requirements are met before proceeding further.
2. If you are behind a proxy network, be sure that the proxy addresses are configured in the system:
3. Open a new terminal, go to the downloaded folder and unzip the downloaded RI package:
4. Go to the intelligent_traffic_management/ directory.
5. Change permissions of the executable edgesoftware file to enable execution.
6. Run the command below to install the Reference Implementation:
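A sketch of the shell commands for steps 3-6 above; the archive name is an assumption based on the package directory name and may differ from your actual download:

    unzip intelligent_traffic_management.zip
    cd intelligent_traffic_management/
    chmod +x edgesoftware
    ./edgesoftware install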
7. During the installation, you will be prompted for the AWS Key ID, AWS Secret, AWS Bucket and Product Key. The Product Key is contained in the email you received from Intel confirming your download. The AWS credentials are optional; the AWS Key ID, AWS Secret and AWS Bucket are obtained by following the steps in the Set Up Amazon Web Services* Cloud Storage section. If you do not need the cloud upload feature, simply provide empty values by pressing Enter when prompted for the AWS credentials.
NOTE: Installation logs are available at the path: /var/log/esb-cli/Intelligent_Traffic_Management_<version>/<Component_Name>/install.log
Figure 3: Product Key
8. When the installation is complete, you see the message “Installation of package complete” and the installation status for each module.
Figure 4: Installation Success
Step 2: Check the Application
Check the Intelligent_Traffic_Management pods with the command:
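A sketch of the check, assuming the pods run in the default namespace (add -n <namespace> if a dedicated namespace is used):

    kubectl get pods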
You will see output similar to:
Figure 5: Intelligent Traffic Management Pods Status
NOTE: If the pods have a status of ContainerCreating, please wait for some time, since Kubernetes will pull the images from the registry and then deploy them. This happens only the first time the containers are deployed, and the wait time will depend upon the network bandwidth available.
Step 3: Data Visualization on Grafana
1. Navigate to https://<Controller_IP>:30300/dashboard in your browser to check the Intelligent Traffic Management dashboard.
Figure 6: Login to Intelligent Traffic Management Dashboard
2. Navigate to https://<Controller_IP>:30303/camera/0 in your browser to check the camera.
Figure 7: Intelligent Traffic Management Camera 0
Figure 8: Intelligent Traffic Management Dashboard
3. Navigate to https://<Controller_IP>:32000 in your browser to log in to the Grafana dashboard.
4. Get the Grafana password with the command:
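A sketch of one way to read the password, assuming Grafana was deployed via Helm and its admin secret is named grafana in the default namespace (both assumptions; adjust to your deployment):

    kubectl get secret grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo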
5. Log in with admin as the user and the Grafana password retrieved in the previous step.
6. Click Home and select ITM to open the main dashboard.
Figure 9: Grafana Home Screen
Figure 10: Grafana Dashboard List
An example of the Intelligent Traffic Management dashboard:
Figure 11: Grafana Main Dashboard – Intelligent Traffic Management
The above dashboard shows the number of vehicles, pedestrians and collisions detected on the left side. These may be used for adjusting traffic lights and calling emergency services if collisions are detected.
The blue drop pins on the map mark the geographic coordinates of the cameras. Clicking a pin opens a small window with the camera feed and the detection results, as shown in the figure below.
Figure 12: Detection Results on MapUI
To open the Grafana dashboard for a particular camera with the detection results and other data metrics, click the camera feed in the small window, as shown in the figure below.
NOTE: To close the small window with camera feed, click the close button (X) on the top left corner of the window.
Figure 13: Grafana Dashboard of an Individual Camera Feed
To view the detection results of all the configured camera feeds, click View All Streams in the top right corner of the MapUI on the main Grafana dashboard, i.e. ITM. Refer to Figure 11, Grafana Main Dashboard – Intelligent Traffic Management.
Figure 14: Detection Results of all the Configured Camera Feeds
NOTE: To open the combined streams in a full tab, go to: https://<Controller_IP>:30303/get_all_streams
If you provided the AWS credentials during the installation steps, then the Cloud Upload feature is enabled.
Navigate to the configured AWS storage to find the uploaded video captures.
Figure 15: List of AWS S3 Bucket Objects
Figure 16: AWS S3 Bucket Object Properties
Figure 17: AWS S3 Bucket Object Photo
Step 4: Uninstall the Application
1. Check the installed modules with the following command:
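A sketch using the edgesoftware CLI from the package directory (output format may vary by version):

    ./edgesoftware list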
All installed modules are shown, as in the screen below:
Figure 18: Installed Modules List
2. Run the command below to uninstall all the modules:
3. Run the command below to uninstall the Intelligent Traffic Management reference implementation:
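A sketch of typical uninstall invocations with the edgesoftware CLI; <module_id> is a placeholder taken from the output of ./edgesoftware list:

    # Uninstall all modules
    ./edgesoftware uninstall -a

    # Uninstall a specific module by its ID
    ./edgesoftware uninstall <module_id>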
Figure 19: Uninstalled Modules
Public Helm Registry for Helm Charts
Installation of the Intelligent Traffic Management Reference Implementation on a local Kubernetes cluster is accomplished using Helm charts. In earlier releases, the Helm charts were part of the Reference Implementation installation package. A global Helm repository is now available so that the Reference Implementation Helm charts can be accessed from both private and public networks. This speeds up and simplifies the process of introducing updates and integrating them with Reference Implementations.
Local Build Instructions
After you have installed the Kubernetes cluster as described in Prerequisites, you can build your own Intelligent Traffic Management Docker image using the following instructions.
You can proceed with the steps presented using either edgesoftware sources or GitHub sources: Intelligent Traffic Management
Setup
Change the directory to repository path with one of the following options.
For GitHub:
Add the following line to the /etc/docker/daemon.json configuration file:
Restart the Docker service:
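For reference, a sketch of the registry entry (the same insecure-registries line noted in the Prerequisites, with WORKER_IP being the worker node's address) followed by the service restart:

    # /etc/docker/daemon.json entry for the local Harbor registry
    "insecure-registries": ["https://WORKER_IP:30003"]

    # Restart Docker so the change takes effect
    sudo systemctl daemon-reload
    sudo systemctl restart docker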
NOTE: You must check that the pods are ready and restarted after each Docker service restart.
If the edgesoftware installation was not executed, install Grafana and the local Harbor registry using the commands below.
- Grafana steps:
- Harbor Helm install command:
Use your preferred text editor to make the following file updates.
In the next steps, the tag <REPOSITORY_PATH> indicates the path to the repository.
In the Change examples, replace the line indicated by - with the line indicated by +.
- <REPOSITORY_PATH>/src/build_images.sh - update the tag and version for the image.
- <REPOSITORY_PATH>/deploy/services/values.yaml - update the image deployment Harbor registry.
- <REPOSITORY_PATH>/deploy/services/values.yaml - update the version. Make sure the tag is identical to the tag used in the build_images.sh script.
Build and Install
Build the Docker image with the following commands:
Install the ITM application with the following commands:
1. Get the Grafana password:
2. Get the Grafana service IP using the following command:
3. Get the host IP using the following command:
4. Change directory to the deployment directory from the repository path:
5. Deploy the MQTT broker and wait for it to initialize:
6. Using the host IP, Grafana service IP and password from the previous steps, run the following Helm installation command:
NOTES:
If your host is not behind a proxy server, skip setting the http and https proxy.
The cloud connector requires your AWS credentials in order to upload video captures in case of collision, near-miss and overcrowding events. If you don't want this feature enabled, skip setting these parameters. For instructions on how to configure AWS, refer to the Set Up Amazon Web Services* Cloud Storage section.
After step 6 completes, use your preferred browser to access ITM at https://<Controller_IP>:30300 and Grafana at https://<Controller_IP>:32000.
Single Node Deployment
Prerequisites
Be sure you have completed the items below before continuing.
Proxy Settings
If you are not behind a proxy network, skip this section.
If you are behind a proxy network, check that proxy addresses are configured in the system. An example of configuring the proxy environment is shown below.
Edit the /etc/environment file for proxy configuration.
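A sketch of typical /etc/environment proxy entries; the proxy host, port and no_proxy list are placeholders for your own network:

    http_proxy="http://<proxy-host>:<proxy-port>"
    https_proxy="http://<proxy-host>:<proxy-port>"
    no_proxy="localhost,127.0.0.1,<cluster-and-host-IPs>"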
Reboot your system for the new changes to take effect.
Install and Configure Docker*
Follow the steps below to install Docker* CE using the repository.
1. Follow the Docker instructions to Install using the repository.
2. Follow the Docker instructions to Install Docker Engine.
3. (OPTIONAL) If you are running behind a proxy, follow the Docker instructions to configure Docker to use a proxy server and to configure the Docker daemon HTTP/HTTPS proxy.
4. Follow the Docker instructions to Manage Docker as a non-root user.
5. Follow the Docker instructions to Install Docker Compose on Ubuntu using the repository.
6. Configure the Docker service:
   - Add the following line to the /etc/docker/daemon.json file.
   - Restart the Docker service for the changes to take effect.
Install Helm
Follow the steps below to install the Helm component. If you are running behind a corporate proxy, be sure the proxy is set up correctly. For details, see Proxy Settings.
Install and Configure Kubernetes Cluster
Follow the steps below to install and configure the Kubernetes cluster on the system.
NOTE: If the system is rebooted or powered off, you must repeat step 2 to disable swap.
1. Set up the Kubernetes environment for installation:
2. Disable swap on the system:
3. Install kubelet, kubeadm, kubectl and kubernetes-cni:
4. Initialize the Kubernetes cluster on the machine:
5. Configure access to the Kubernetes cluster for the current user:
6. Configure access to the Kubernetes cluster for the root user:
7. Add the network plugin to the Kubernetes cluster:
8. Enable the Kubernetes service:
Install ITM on Single Node Deployment
The steps below describe how to install the Intelligent Traffic Management Reference Implementation on a single node.
If you are running behind a corporate proxy, please ensure the proxy is set up correctly. For details, see Proxy Settings.
Enable Kube Cluster on the Machine
Enable kube cluster setup on the machine with the command:
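The guide's exact command is not shown here. On a single-node cluster the usual step is to remove the control-plane scheduling taint so application pods can run on this machine; the taint key depends on your Kubernetes version:

    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
    # On older Kubernetes versions the taint key is node-role.kubernetes.io/master-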
Install and Run ITM Reference Implementation
Install the ITM reference implementation on the machine with these commands:
1. Clone the ITM GitHub repository:
2. Install Grafana in Kubernetes using the script in the repository. Replace the IP with the current system IP and use the proxy parameter if you are running behind a proxy.
Figure 20: Grafana Install on Single Node
3. Get the Grafana service IP and password for the deployment:
4. Add the intel Helm repository:
5. Install the ITM hivemq MQTT broker. Before proceeding with the next step, ensure the itm-mqtt broker pods named hivemq- are in the running state.
Figure 21: HiveMQ* Pods Ready
6. Install the ITM application and change the Grafana password, Grafana IP, and host IP using the information from the previous steps.
   If you are running behind a corporate proxy, use the --set proxy.http and --set proxy.https parameters; otherwise you can skip those settings.
   The --set num_video_instance parameter is optional. The default value is 8. You can change the value to the number of instances that you want to use.
Figure 22: Intelligent Traffic Management Install Success Output
7. Check the installation:
Figure 23: Intelligent Traffic Management Pods in Running State
8. Access the dashboard and Grafana, changing the HOST_IP accordingly. Log in to Grafana using admin as the username and the password generated in the previous step.
Dashboard link:
https://Controller_IP:30300/dashboard
Figure 24: Intelligent Traffic Management Dashboard
Grafana link:
https://HOST_IP:32000
Figure 25: Intelligent Traffic Management Grafana Dashboard
Optional Steps
Configure the Input
The Helm templates contain all the necessary configurations for the cameras.
If you wish to change the input, edit the ./deploy/services/values.yaml file and add the video inputs to the test_videos array:
To use a camera stream instead of a video file, replace the video file name with: /dev/video0
To use an RTSP stream instead of a video file, replace the video file name with the RTSP link: - uri: "rtsp://<RTSP_IP>:8554/mystream"
Each ITM Video Inference service will pick a video input in the order above.
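A sketch of how the test_videos array might look with a mix of inputs, using the - uri: entry format shown above; the file path, device and stream URL are illustrative placeholders and the exact keys in your values.yaml may differ:

    test_videos:
      - uri: "file:///videos/sample_video.mp4"
      - uri: "/dev/video0"
      - uri: "rtsp://<RTSP_IP>:8554/mystream"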
If you wish to change the coordinates, address and the analytics type of the cameras, edit the ./deploy/services/templates/itm-analytics-configmap.yaml file:
- address: Name of the camera’s geographic location. Must be a non-empty alphanumeric string.
- latitude: Latitude of the camera’s geographic location.
- longitude: Longitude of the camera’s geographic location.
- analytics: Attribute to be detected by the model.
NOTE: The default model supports pedestrian, vehicle and bike detection. You can select desired attributes from these, e.g., "analytics": "pedestrian vehicle detection".
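A sketch of what a single camera entry with these fields might look like; the values and the surrounding structure of itm-analytics-configmap.yaml are illustrative assumptions:

    - address: "Main St and 1st Ave"
      latitude: 37.3875
      longitude: -121.9637
      analytics: "pedestrian vehicle detection"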
Stop the Application
To remove the deployment of this reference implementation, run the following commands.
NOTE: The following commands will remove all the running pods and the data and configuration stored in the device, except the MQTT Broker.
If you wish to remove the MQTT Broker also, enter the command:
Set Up Amazon Web Services* Cloud Storage
To enable cloud storage on the installed Reference Implementation, you will need an Amazon Web Services* (AWS*) subscription (paid or free) with a root user account that supports the following services:
- Identity and Access Management (IAM)
- Amazon S3 Bucket
After finishing the setup for IAM and S3, you will have your AWS_KEY_ID, AWS_SECRET_KEY and AWS_BUCKET_NAME to be used in your Intelligent Traffic Management Cloud Connector configuration.
References
Setup Steps
1. From your AWS management console, search for IAM and open the IAM Dashboard.
Figure 26: IAM Dashboard
2. On the left menu of the dashboard, go to Access management and click on Users to open the IAM Users tab.
Figure 27: IAM Users Tab
3. From the IAM Users tab, click on Add User to access the AWS add user setup.
4. On the first tab, provide the username and select the AWS credentials type to be Access key.
Figure 28: Set User Details Tab
5. On the second tab, create a group to attach policies for the new IAM user.
   a. Search for S3 and select the AmazonS3FullAccess policy.
   b. Click on Create group.
Figure 29: Create Group Tab
6. Select the group you have created and click on Next: Tags.
7. Tags are optional. If you don't want to add tags, you can continue to the Review tab by clicking on Next: Review.
8. After review, you can click on the Create User button.
9. On this page, you have access to the AWS Key and AWS Secret Access Key. (Click on Show to view them.)
   a. Save both of them to be used later in the Cloud Connector configuration of the Intelligent Traffic Management Reference Implementation you have installed.
   NOTE: The AWS Secret Key is visible only on this page; you cannot retrieve it in another way.
   b. If you forget to save the AWS Secret Key, you can delete the last key and create another one.
Figure 30: AWS Key and Secret Access Key
10. After you have saved the keys, close the tab. You are returned to the IAM Dashboard page.
11. Click on the user created and save the User ARN to be used in the S3 bucket setup.
NOTE: If you forget to save the AWS Secret Key from the User tab, you can select Security Credentials, delete the Access Key and create another one.
S3 Bucket
The S3 Bucket service offers cloud storage for use in cloud-based applications.
Perform the steps below to set up S3 Bucket Service.
1. Open the Amazon Management Console and search for Amazon S3.
2. Click on S3 to open the AWS S3 Bucket dashboard.
Figure 31: AWS S3 Bucket Dashboard
3. On the left side menu, click on Buckets.
4. Click on the Create Bucket button to open the Create Bucket dashboard.
5. Enter a name for your bucket and select your preferred region.
Figure 32: Create Bucket General Configuration
6. Scroll down and click on Create Bucket.
7. From the S3 Bucket dashboard, click on the newly created bucket and go to the Permissions tab.
8. Scroll to Bucket Policy and click on Edit to add a new statement to the statements tab, which is already created to deny all uploads.
Figure 33: Edit Bucket Policy
9. You must add a comma before adding the following information.
   a. Update the following statement with your statement name, your user ARN saved at IAM setup step 11, and your bucket name.
   b. Click on Save changes. If the change is successful, you will see a success message; otherwise, re-analyze the JSON file to fix the error.
Summary and Next Steps
This application successfully leverages Intel® Distribution of OpenVINO™ toolkit plugins for detecting and tracking vehicles and pedestrians and estimating a safety metric for an intersection. It can be extended further to provide support for a feed from a network stream (RTSP or camera device).
As a next step, you can experiment with accuracy/throughput trade-offs by substituting object detector models and tracking and collision detection algorithms with alternative ones.
Create a Microsoft Azure* IoT Central Dashboard
As a next step, you can create an Azure* IoT Central dashboard for this reference implementation, run standalone Python code to fetch telemetry data from InfluxDB, and send data to the Azure IoT Central dashboard for visualizing telemetry data. See Connect Edge Devices to Azure IoT* for instructions.
Learn More
To continue your learning, see the following guides and software resources:
Troubleshooting
Pod status check
Verify that the pods are Ready as well as in the Running state using the command below:
If any pods are not in the Running state, use the following command to get more information about the pod state:
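A sketch of the usual checks; the pod name and namespace are placeholders:

    kubectl get pods -A
    kubectl describe pod <pod-name> -n <namespace>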
ITM Dashboard Not Showing in Browser After Server Restart
Run the following commands:
Pod status shows “ContainerCreating” for a long time
If the pod status shows ContainerCreating, Error, or CrashLoopBackOff for 5 minutes or more, run the following commands:
Subprocess32 issue
If you see any error related to subprocess, run the command below:
pip install --ignore-installed subprocess32==3.5.4
Support Forum
If you're unable to resolve your issues, contact the Support Forum.
To attach the installation logs to your issue, execute the command below to consolidate the log files into a tar.gz archive, e.g., ITM.tar.gz.
tar -czvf ITM.tar.gz /var/log/esb-cli/Intelligent_Traffic_Management_<version>/Component_name/install.log