hadoop-mapreduce-user mailing list archives

From venito camelas <robotirlan...@gmail.com>
Subject Hadoop: precomputing data
Date Thu, 06 Oct 2016 13:14:30 GMT
I'm designing a prototype that uses *Hadoop* to process video for face
recognition. I've thought of two ways of doing it.

*Approach 1:*

I was thinking of doing something in 2 steps:

   1. A map that receives frames; if a face is found, the frame is stored
   for the next step.
   2. A map that receives the frames from step 1 (all frames containing at
   least one face) and does face recognition.

Step 1 would be run only once, while step 2 runs every time I want to
recognize a new face.
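
To make the two steps concrete, here is a minimal sketch of the two
map-only jobs (reducers set to 0 via job.setNumReduceTasks(0)). It assumes
the frames arrive as a SequenceFile keyed by frame id; containsFace() and
matchesQueryFace() are just placeholders for whatever detection/recognition
library I end up plugging in, not Hadoop APIs:

import java.io.IOException;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/** Step 1: map-only job that keeps only the frames where a face is detected. */
public class FaceFilterMapper
        extends Mapper<Text, BytesWritable, Text, BytesWritable> {

    @Override
    protected void map(Text frameId, BytesWritable frame, Context ctx)
            throws IOException, InterruptedException {
        // Drop faceless frames here so step 2 never has to read them again.
        if (containsFace(frame.copyBytes())) {
            ctx.write(frameId, frame);
        }
    }

    // Placeholder: plug in a real detector (e.g. an OpenCV cascade classifier).
    private boolean containsFace(byte[] encodedFrame) {
        return false; // stub
    }
}

/** Step 2: run over step 1's output, once per face I want to recognize. */
class FaceMatchMapper
        extends Mapper<Text, BytesWritable, Text, Text> {

    @Override
    protected void map(Text frameId, BytesWritable frame, Context ctx)
            throws IOException, InterruptedException {
        // Compare the stored frame against the query face (which could be
        // shipped to every task, e.g. via the distributed cache).
        if (matchesQueryFace(frame.copyBytes())) {
            ctx.write(frameId, new Text("match"));
        }
    }

    // Placeholder: plug in the actual recognizer.
    private boolean matchesQueryFace(byte[] encodedFrame) {
        return false; // stub
    }
}

Step 2 would simply take step 1's output directory as its input path.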


*Approach 2:*

The other approach I thought about is to do face recognition on all the
data every time.

The first approach saves time because I don't have to process faceless
frames every time I want to do face recognition, but it also uses more
disk space (and it could be a lot of space).
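
For a rough sense of scale (made-up numbers, just to illustrate): one hour
of video at 30 fps is 108,000 frames; if 10% of them contain a face and
each stored frame takes ~100 KB, step 1's output is about 10,800 * 100 KB
≈ 1 GB per hour of source video.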


I'm not sure what's better. Is it a bad thing to leave those precomputed
frames there forever?
