[–] Palaver 0 points 12 points (+12|-0) ago 

Impressive bit of work there. Found it pretty readable at first run-through (which surprised me, considering how long it's been since I've touched C++).

I would recommend moving to a different git host, though; GitHub has banned developers for unpopular speech on a few occasions. Bitbucket is a little better and still free. Others exist (GitLab is completely open source, I believe).

[–] Xicronic 0 points 1 points (+1|-0) ago  (edited ago)

I second GitLab. They don't support full free speech, but they are fully open source and have free private repos.

[–] hedidnothingwrong 0 points 1 points (+1|-0) ago 

Hello, I've got a few questions:

Why didn't you use OpenCV?

What kind of algorithm do you use? I guess it's probably some background subtraction algorithm, do you have any links (papers)?

Thank you in advance.

[–] kevf4 [S] 0 points 1 points (+1|-0) ago 

The reason I didn't use OpenCV is that I used this project for learning and wanted to try some new approaches. Also, I wasn't sure how well OpenCV would perform on a Raspberry Pi. I'm using OpenGL shaders for processing when possible.

The function Analytics::update in Analytics.cpp on line 637 gives a high level overview of the detection algorithms. In a nutshell:

  • A background frame is calculated which is the average of frames over the last few minutes. This may change to a Gaussian distribution to reduce spikes, but so far simple averaging is working well. (see Shaders::incrementShaderSource at Shaders.cpp line 141).

  • The current frame is then subtracted from the background frame, which generates a motion mask. (see Shaders::motionMaskShaderSource at Shaders.cpp line 102).

  • Blobs are detected within the motion mask by using a depth-first search. (see Analytics::_detectObjects at Analytics.cpp line 91).

  • Blobs are tracked using a simple distance-based approach. (see Analytics::_trackObjects at Analytics.cpp line 284).

  • When a user "trains" the system, they are essentially creating a heatmap with thresholds. If the heatmap has a "hot" spot, the algorithm ignores alarms in that area. (see ExclusionZones.cpp)
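The blob-detection step above can be sketched roughly like this. This is a simplified CPU-side version, not the actual Analytics::_detectObjects code; the function name, 4-connectivity, and returning only blob sizes are my assumptions:

```cpp
#include <cstdint>
#include <stack>
#include <vector>

// Find connected blobs in a binary motion mask with an iterative
// depth-first search (4-connectivity). Returns the pixel count of
// each blob, in scan order of the blob's first pixel.
std::vector<int> detectBlobs(const std::vector<uint8_t>& mask, int w, int h) {
    std::vector<int> sizes;
    std::vector<bool> visited(mask.size(), false);
    for (int start = 0; start < w * h; ++start) {
        if (!mask[start] || visited[start]) continue;
        int size = 0;
        std::stack<int> pending;           // explicit stack instead of recursion
        pending.push(start);
        visited[start] = true;
        while (!pending.empty()) {
            int p = pending.top(); pending.pop();
            ++size;
            int x = p % w, y = p / w;
            const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
            for (int i = 0; i < 4; ++i) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                int np = ny * w + nx;
                if (mask[np] && !visited[np]) {
                    visited[np] = true;
                    pending.push(np);
                }
            }
        }
        sizes.push_back(size);
    }
    return sizes;
}
```

An iterative stack avoids blowing the call stack on large blobs, which matters at 720p on a Pi.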

I don't have any documentation or diagrams. This project was never really intended for public use, but I am sharing because it could be useful to someone.

[–] hedidnothingwrong 0 points 0 points (+0|-0) ago 

Thanks, it was a very clear explanation.

[–] 1moar 0 points 1 points (+1|-0) ago 

Saved, thanks mate.


[–] KikeFree 0 points 1 points (+1|-0) ago  (edited ago)

I made a system for networked H264 cameras, with realtime tracking of multiple targets that works well.

So, some comments from my experience that might be helpful:

Where these things usually fall down, particularly with multiple HD streams, is CPU load.

I highly recommend reading and decoding the full-resolution stream into a framebuffer (it looks like you get JPEGs); generally you want the full frames saved for the record if motion is detected anyway. Then make a lower-resolution copy of the framebuffer (cubic or linear interpolation) to run your motion detection/tracking algorithms on.

Multiple passes of Gaussian blur, with a running-average frame subtracted from the live image, combined with a hole-filling algorithm, will produce blobs that cover anything moving. You can measure the blob sizes to discard blowing leaves or massive shadows from moving clouds, and finding the centers of the blobs for targeting/tracking purposes is trivial.

Use a mask image of the same dimensions in concert with your highly optimized Gaussian blur algorithm to avoid calculations on pixels outside your area of interest.
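The running-average subtraction with a region-of-interest mask could look something like this. It's a minimal sketch, not code from either system; the names are mine, and the exponential moving average is a cheap stand-in for a true N-frame mean:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Running-average background model over a grayscale frame, with an
// ROI mask: pixels where the mask is 0 are skipped entirely, so no
// work is done outside the area of interest.
struct BackgroundModel {
    std::vector<float> background;   // running average, one value per pixel
    float alpha;                     // blend factor, e.g. 0.05f

    BackgroundModel(const std::vector<uint8_t>& firstFrame, float a)
        : background(firstFrame.begin(), firstFrame.end()), alpha(a) {}

    // Update the model and return a binary motion mask for the new frame.
    std::vector<uint8_t> update(const std::vector<uint8_t>& frame,
                                const std::vector<uint8_t>& roi,
                                float threshold) {
        std::vector<uint8_t> motion(frame.size(), 0);
        for (size_t i = 0; i < frame.size(); ++i) {
            if (!roi[i]) continue;   // outside region of interest
            float diff = std::fabs(frame[i] - background[i]);
            motion[i] = diff > threshold ? 255 : 0;
            // exponential moving average: cheaper than storing a window
            // of frames, at the cost of a slightly different decay shape
            background[i] = (1.0f - alpha) * background[i] + alpha * frame[i];
        }
        return motion;
    }
};
```

In practice you'd run the Gaussian blur passes on the frame before this step so single-pixel noise doesn't survive the threshold.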

Reduce your stream buffer size to a minimum to keep tracking lag to a minimum.

I haven't open sourced it, but maybe one day.

[–] kevf4 [S] 0 points 1 points (+1|-0) ago 

You're right about HD processing being an issue. I've worked on a system used for airports and prisons, and they usually have two streams per optical camera: one HD stream that goes to the VMS and another low-res stream for object detection + ONVIF metadata generation. This was just a hobby project for learning and I didn't want to go all-in, otherwise I would have used FFmpeg or OpenCV.

For blob detection I'm just using a simple depth-first algorithm and culling based on user parameters and a heatmap of negatively scored detections. I tried using random forests for culling, but the algorithm required a bit too much tweaking to get it to work correctly. A few people I work with have researched SVMs and deep learning networks, but those required a massive amount of data that I'm not too interested in collecting.
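The simple distance-based tracking mentioned earlier in the thread can be sketched as a greedy nearest-neighbour matcher. All names here are hypothetical, not the actual _trackObjects code:

```cpp
#include <cmath>
#include <utility>
#include <vector>

struct Track { int id; float x, y; };

// Greedy nearest-neighbour matching: each detection (blob centroid) is
// assigned to the closest unmatched existing track within maxDist;
// detections with no track close enough spawn a new track.
std::vector<Track> trackObjects(std::vector<Track> tracks,
                                const std::vector<std::pair<float, float>>& detections,
                                float maxDist, int& nextId) {
    const size_t n = tracks.size();          // only match against existing tracks
    std::vector<bool> taken(n, false);
    for (const auto& d : detections) {
        int best = -1;
        float bestDist = maxDist;
        for (size_t i = 0; i < n; ++i) {
            if (taken[i]) continue;
            float dist = std::hypot(tracks[i].x - d.first, tracks[i].y - d.second);
            if (dist < bestDist) { bestDist = dist; best = static_cast<int>(i); }
        }
        if (best >= 0) {                     // update the matched track
            tracks[best].x = d.first;
            tracks[best].y = d.second;
            taken[best] = true;
        } else {                             // nothing close enough: new track
            tracks.push_back({nextId++, d.first, d.second});
        }
    }
    return tracks;
}
```

Greedy matching is order-dependent and can mis-assign when blobs cross, but it's cheap and good enough when frame rates are high relative to object speed.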

[–] KikeFree 0 points 1 points (+1|-0) ago 

More than one camera can help reject false positives like rain drops or blowing snow.

When it comes down to intelligently rejecting movement, the probable identity of the object needs to be determined, and that's so difficult that even a human watching can reject legitimate movement, or need a double-take to figure out wtf it was that changed. For some things, thermal imaging can help. I don't think conventional deep learning is the way forward; I lean towards making progress on nested pattern recognition.

[–] BentAxel 0 points 1 points (+1|-0) ago 

Mother Effer. It tracks the objects in the frame? That's some super cool stuff.

[–] ThisIsntMe123 0 points 0 points (+0|-0) ago 

How is this different from MotionEyeOS or whatever?

[–] kevf4 [S] 0 points 1 points (+1|-0) ago 

I don't have any experience with MotionEyeOS, but it looks very similar. My project (pisentinel) supports SeekThermal cameras, IP MJPEG cameras, has a training mode to prevent false alarms, and supports up to 720p frame processing. I will try to throw together a video sometime this week that breaks down the features.

[–] ThisIsntMe123 0 points 0 points (+0|-0) ago 

Optional cloud/ftp backup would be cool.

[–] brianforward 0 points 0 points (+0|-0) ago 

It doesn't open...