A CPU-based video summarization and classification tool. It extracts every movement from a given video and aggregates the extracted changes into movement layers, which can be exported or further analysed and classified.

Video Summary and Classification

Example:

docs/demo.gif
What you see above is a 15-second excerpt of a 2-minute overlaid synopsis of a 2.5-hour video from an on-campus webcam.
The synopsis took 40 minutes from start to finish on an 8-core machine and used a maximum of 6 GB of RAM.

However, since the contour extraction can be performed on a video stream, the benchmark results suggest that a single core would be enough to process a video faster than real time.
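
For illustration, stream-based contour extraction could look roughly like the sketch below. This is a generic OpenCV sketch under assumed parameters, not necessarily how the Contour Extractor in this repository works:

```python
# Minimal sketch of stream-based contour extraction (illustrative only).
import cv2

def extract_contours(video_path, min_area=200):
    """Yield (frame_index, contours) for every frame with detected movement."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                      # foreground mask
        mask = cv2.medianBlur(mask, 5)                      # suppress noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        moving = [c for c in contours if cv2.contourArea(c) >= min_area]
        if moving:
            yield frame_index, moving
        frame_index += 1
    cap.release()
```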

Heatmap
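
A common way to build such a movement heatmap is to sum the per-frame foreground masks and colorize the result. The sketch below is illustrative and may differ from this project's implementation:

```python
# Hypothetical heatmap accumulation from binary foreground masks.
import cv2
import numpy as np

def accumulate_heatmap(masks, shape):
    """Sum binary foreground masks and render them as a colored heatmap."""
    acc = np.zeros(shape, dtype=np.float32)
    for mask in masks:                        # masks: iterable of HxW uint8 arrays
        acc += (mask > 0).astype(np.float32)
    norm = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.applyColorMap(norm, cv2.COLORMAP_JET)
```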

Benchmark

Below you can find the benchmark results for a 10-minute clip, with the stacked time per component on the x-axis.
The tests were run on a machine with a Ryzen 3700X (8 cores / 16 threads) and 32 GB of RAM.
On this configuration, 1 minute of the original video is processed in about 20 seconds, so the expected processing time is roughly one third (20 s / 60 s) of the original video length.

  • CE = Contour Extractor
  • LE = LayerFactory
  • LM = LayerManager
  • EX = Exporter

docs/demo.gif
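
To illustrate the overlay idea behind the synopsis (essentially what the Exporter stage produces), the sketch below composites movement layers recorded at different times onto a shared background. The function name and data layout are assumptions for illustration, not this project's actual API:

```python
# Illustrative overlay/export step: layers that originally occurred at different
# times are shifted to start together and pasted onto a common background.
import cv2
import numpy as np

def export_overlay(background, layers, output_path, fps=30.0):
    """
    background: HxWx3 uint8 frame used as the static backdrop.
    layers: list of movement layers; each layer is a list of (mask, patch)
            tuples per frame, where mask is HxW uint8 and patch is HxWx3 uint8.
    """
    h, w = background.shape[:2]
    length = max(len(layer) for layer in layers)
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (w, h))
    for t in range(length):
        frame = background.copy()
        for layer in layers:                     # all layers start at t = 0
            if t < len(layer):
                mask, patch = layer[t]
                frame[mask > 0] = patch[mask > 0]   # paste moving pixels
        writer.write(frame)
    writer.release()
```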

Notes:

Optional:

Install tensorflow==1.15.0 and tensorflow-gpu==1.15.0 together with CUDA 10.2 and 10.0, copy the missing files from the 10.0 installation into 10.2, restart the computer, and set the maximum VRAM.
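
For the "set maximum VRAM" step, TensorFlow 1.15 exposes this through the session config. The snippet below is a generic example with an assumed memory fraction, not necessarily how this project configures it:

```python
# Generic TF 1.15 sketch: cap GPU memory use via the session configuration.
import tensorflow as tf

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5,  # cap at ~50% of VRAM
                            allow_growth=True)                    # allocate lazily
config = tf.ConfigProto(gpu_options=gpu_options)
session = tf.Session(config=config)
```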