Author Topic: Object Tracking in Tensorflow


Offline Flavio58

Object Tracking in Tensorflow
« Reply #1 on: July 10, 2018, 11:05:27 PM »
1. Introduction
This repository is my Master's thesis project, "Develop a Video Object Tracking with Tensorflow Technology", and it is still under development, so many updates will be made. In this work I used the architecture and problem-solving strategy of the T-CNN paper (arXiv), which won the ImageNet 2015 VID challenge. The whole script architecture is made of several components in cascade:

Still Image Detection (returns tracking results on a single frame);
Temporal Information Detection (introduces temporal information into the DET results);
Context Information Detection (introduces context information into the DET results).
Notice that the Still Image Detection component can be a single block or be decomposed into two sub-components:

First: determine "where" in the frame;
Second: determine "what" in the frame.
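The cascade above can be sketched as a simple function composition. The function names here are illustrative stubs, not the repository's actual API; each stage refines the previous stage's detections:

```python
def still_image_detection(frame):
    # Stage 1: per-frame detector ("where" + "what").
    # Stub returning one dummy box: (x, y, w, h, label, score).
    return [(10, 10, 50, 50, "car", 0.8)]

def temporal_information(per_frame_dets):
    # Stage 2: propagate/suppress boxes using neighbouring frames
    # (e.g. tubelet linking as in T-CNN). Stub: pass-through.
    return per_frame_dets

def context_information(per_frame_dets):
    # Stage 3: rescore boxes using whole-video context. Stub: pass-through.
    return per_frame_dets

def track_video(frames):
    dets = [still_image_detection(f) for f in frames]
    dets = temporal_information(dets)
    dets = context_information(dets)
    return dets

frames = [None, None, None]   # placeholder for decoded video frames
results = track_video(frames) # one detection list per frame
```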
My project uses many online TensorFlow projects, such as:

YOLO Tensorflow;
TensorBox;
Inception.
2. Requirements & Installation
To install the script you only need to download the repository. To run the script you must have installed:

Tensorflow;
OpenCV;
Python.
All the necessary Python libraries can be installed easily through pip install package-name. If you want to follow a guide to install the requirements, here is the link to a tutorial I wrote for myself and for a Deep Learning course at UPC.
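For example, the main dependencies can be installed in one go (package names are indicative; pick the versions matching your Python and CUDA setup):

```shell
# Install the main Python dependencies via pip (names indicative).
pip install tensorflow opencv-python
```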

3. YOLO Script Usage
i. Setting Parameters
These are the command-line arguments taken from the script; most of them are not required. Only the video path must be specified when calling the script:

  import argparse

  # Command-line parameters; only --path_video is required.
  parser = argparse.ArgumentParser()
  parser.add_argument('--det_frames_folder', default='det_frames/', type=str)
  parser.add_argument('--det_result_folder', default='det_results/', type=str)
  parser.add_argument('--result_folder', default='summary_result/', type=str)
  parser.add_argument('--summary_file', default='results.txt', type=str)
  parser.add_argument('--output_name', default='output.mp4', type=str)
  parser.add_argument('--perc', default=5, type=int)
  parser.add_argument('--path_video', required=True, type=str)
  args = parser.parse_args()
Now you have to download the YOLO weights and put them into /YOLO_DET_Alg/weights/.

For more on YOLO, here you can find the original code (C implementation) and the paper.

ii. Usage
After setting the parameters, we can run the script:

  python VID_yolo.py --path_video video.mp4
You will see some terminal output like:

[image: terminal output]

You will see a real-time frame output (like the one below) and then everything will be embedded into the output video. I uploaded my first two tests in the folder /video_result; you can download them and take a look at the final result. The first one has problems with the frame order, which is why you will see so much flickering in the video; the problem was then solved, and the second does not show frame flickering:

[image: real-time detection frames]

4. VID TENSORBOX Script Usage
i. Setting Parameters
These are the command-line arguments taken from the script; most of them are not required. As before, only the video path must be specified when calling the script:

  import argparse

  # Command-line parameters; only --path_video is required.
  parser = argparse.ArgumentParser()
  parser.add_argument('--output_name', default='output.mp4', type=str)
  parser.add_argument('--hypes', default='./hypes/overfeat_rezoom.json', type=str)
  parser.add_argument('--weights', default='./output/save.ckpt-1090000', type=str)
  parser.add_argument('--perc', default=2, type=int)
  parser.add_argument('--path_video', required=True, type=str)
  args = parser.parse_args()
I will soon provide a weight file to download. The training and spec files for the multiclass implementation will be added after the end of my thesis project.

ii. Usage
Download the .zip files linked in the Downloads section and replace the folders.

Then, after setting the parameters, we can run the script:

  python VID_tensorbox_multi_class.py --path_video video.mp4
5. Tensorbox Tests
In the folder video_result_OVT you can find the resulting files of the runs of the VID TENSORBOX scripts.

6. Dataset Scripts
All the scripts below are for the VID classes, so if you want to adapt them to other classes you simply have to change the Classes.py file, where the correspondences between codes and names are defined. All the image annotations are made with respect to a specific image ratio; since TENSORBOX works only with 640x480 PNG images, you will have to change the code a little to adapt it to your needs. I will provide four scripts:

Process_Dataset_heavy.py: processes your dataset with a brute-force approach; you will obtain more bounding boxes and files for each class;
Process_Dataset_lightweight.py: processes your dataset with a lightweight approach; you will obtain fewer bounding boxes and files for each class;
Resize_Dataset.py: resizes your dataset to 640x480 PNG images;
Test_Processed_Data.py: tests that the processing ended without errors.
I have also added some scripts to preprocess and prepare the dataset to train the last component, the Inception model; you can find them in a subfolder of the dataset scripts folder.
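Since TENSORBOX only accepts 640x480 PNG images, the bounding-box annotations have to be rescaled together with the frames. A minimal sketch of that coordinate mapping (a hypothetical helper, not one of the scripts above):

```python
TARGET_W, TARGET_H = 640, 480  # resolution TENSORBOX expects

def rescale_box(box, src_w, src_h):
    """Map an (x, y, w, h) box from the source resolution to 640x480."""
    x, y, w, h = box
    sx = TARGET_W / src_w   # horizontal scale factor
    sy = TARGET_H / src_h   # vertical scale factor
    return (x * sx, y * sy, w * sx, h * sy)

# A box annotated on a 1280x960 frame, mapped to the resized frame:
print(rescale_box((320, 240, 64, 48), 1280, 960))  # → (160.0, 120.0, 32.0, 24.0)
```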

7. Copyright
According to the LICENSE file of the original code:

Neither I nor the original author hold any liability for any damages;
Do not use this commercially!
8. State of the Project
Supports the YOLO (single-class) DET algorithm;
Supports training for TENSORBOX and Inception only;
Uses temporal information (retrieved through some post-processing algorithms I implemented in the Utils_Video.py file; these are not trainable);
Modular architecture composed in cascade of TensorBox (as general object detector), Tracker and Smoother, and Inception (as object classifier).
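As an illustration of what a post-processing "Smoother" can do, here is a simple moving average over per-frame box coordinates. This is an assumed simplification for illustration, not the exact algorithm in Utils_Video.py:

```python
def smooth_track(track, window=3):
    """track: list of (x, y, w, h) boxes, one per frame.
    Returns the track with each coordinate averaged over the
    current frame and up to window-1 preceding frames."""
    smoothed = []
    for i in range(len(track)):
        lo = max(0, i - window + 1)
        chunk = track[lo:i + 1]          # boxes inside the window
        n = len(chunk)
        smoothed.append(tuple(sum(c[k] for c in chunk) / n for k in range(4)))
    return smoothed

# A jittery 3-frame track; the last box is averaged over all three frames.
track = [(0, 0, 10, 10), (2, 2, 10, 10), (10, 10, 10, 10)]
print(smooth_track(track))
```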
9. Downloads
Below are the links to the weight files for Inception and TensorBox from my retraining experiments:

INCEPTION
TENSORBOX
10. Acknowledgements
Thanks to Professors:

Elena Baralis from Politecnico di Torino, Dipartimento di Automatica e Informatica;
Jordi Torres from the BSC Department of Computer Science;
Xavier Giró-i-Nieto from the UPC Department of Image Processing.
11. Bibliography
i. Courses
Deep Learning for Computer Vision, Barcelona
Build a Deep Learning Env with Tensorflow, Python and OpenCV
ii. Classification
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet Classification with Deep Convolutional Neural Networks”.
Christian Szegedy et al. “Going Deeper with Convolutions”.
Christian Szegedy et al. “Rethinking the Inception Architecture for Computer Vision”.
Kaiming He et al. “Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification”.
iii. Detection
Russell Stewart and Mykhaylo Andriluka. “End-to-End People Detection in Crowded Scenes”.
Pierre Sermanet et al. “OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks”.
S. Ren et al. “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks”.
iv. Tracking
Dinesh Jayaraman and Kristen Grauman. “Slow and Steady Feature Analysis: Higher Order Temporal Coherence in Video”.
K. Kang et al. “T-CNN: Tubelets with Convolutional Neural Networks for Object Detection from Videos”.
W. Han et al. “Seq-NMS for Video Object Detection”.
J. Redmon et al. “You Only Look Once: Unified, Real-Time Object Detection”.

https://github.com/DrewNF/Tensorflow_Object_Tracking_Video


