A CPU-based video summarization and classification tool. It extracts every movement from a given video and aggregates the extracted changes into movement layers, which can be exported or further analysed and classified.

# Video Synopsis and Classification

Example:

![demo](docs/demo.gif)
What you see above is a 15-second excerpt of a 2-minute overlayed synopsis of a 2.5 h video from an on-campus webcam.
The synopsis took 40 minutes from start to finish on an 8-core machine and used a maximum of 6 GB of RAM.

However, since the contour extraction can be performed on a video stream, the benchmark results show that a single core would be enough to process a video faster than real time.

## Benchmark

Below you can find the benchmark results for a 10-minute clip, with the stacked time per component on the x-axis.
The tests were done on a machine with a Ryzen 3700X (8 cores / 16 threads) and 32 GB of RAM.
On this configuration one minute of the original video can be processed in about 20 seconds, so the expected processing time is about 1/3 of the original video length.

  • CE = Contour Extractor
  • LE = LayerFactory
  • LM = LayerManager
  • EX = Exporter
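The four benchmarked components form a linear pipeline. The sketch below shows how such stages could be chained; the stand-in functions only mimic the shape of the data flow, and their names and interfaces are assumptions, not the project's actual API.

```python
# Illustrative chaining of the four stages benchmarked above (CE -> LE ->
# LM -> EX). All stage bodies are trivial stand-ins, not the real code.

def contour_extractor(frames):
    """CE stand-in: tag each frame with its detected 'contours'."""
    return [{"frame": i, "contours": [f]} for i, f in enumerate(frames)]

def layer_factory(detections):
    """LE stand-in: group detections into movement layers."""
    return [detections]  # here: one layer holding everything

def layer_manager(layers):
    """LM stand-in: would reorder/overlay layers for the synopsis."""
    return layers

def exporter(layers):
    """EX stand-in: flatten layers back into an output sequence."""
    return [d for layer in layers for d in layer]

def run_pipeline(frames):
    return exporter(layer_manager(layer_factory(contour_extractor(frames))))

out = run_pipeline(["frame_a", "frame_b"])
```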

![benchmark](docs/demo.gif)

Notes:

GPU setup for classification (TensorFlow 1.15):

  • install tensorflow==1.15.0 and tensorflow-gpu==1.15.0
  • install CUDA 10.2 and CUDA 10.0, then copy the missing files from the 10.0 installation into 10.2
  • restart the computer and set the maximum VRAM
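The TensorFlow pinning mentioned above can be done with pip. Note this assumes a Python environment that TensorFlow 1.15 supports (Python 3.7 or earlier); the CUDA file copying and VRAM settings still have to be done manually.

```shell
# Pin the TensorFlow versions from the notes above (requires Python <= 3.7)
pip install tensorflow==1.15.0 tensorflow-gpu==1.15.0
```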