hadoop-mapreduce-user mailing list archives

From felix gao <gre1...@gmail.com>
Subject How to record the bad records encountered by hadoop
Date Mon, 20 Dec 2010 20:06:12 GMT

I'm not sure if this is the right mailing list for this question. I'm using Pig
for some data analysis, and I'm wondering whether there is a way to tell Hadoop
that when it encounters a bad log file (whether due to a decompression failure
or anything else that causes the job to die), it should record the offending
line and, if possible, the name of the file it was processing, in a log
somewhere so I can go back and take a look at it later.
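For what it's worth, the usual application-level approach is to wrap per-record parsing in a try/catch inside the mapper, and on failure record the filename and the raw line (via a counter and the task log) instead of letting the task die. The sketch below shows that pattern stripped of the Hadoop plumbing; `BadRecordLogger`, `parseValue`, and the filename are all illustrative names, and in a real mapper the filename would come from the `FileSplit` returned by `context.getInputSplit()`.

```java
import java.util.ArrayList;
import java.util.List;

// Generic "record the bad record, don't crash" pattern one might embed
// in a Hadoop mapper. This is a standalone sketch: in a real job the
// filename comes from the input split and bad lines would go to a
// counter plus the task log, not an in-memory list.
public class BadRecordLogger {

    // A bad record: which file it came from and the raw line.
    public static class BadRecord {
        public final String filename;
        public final String line;
        public BadRecord(String filename, String line) {
            this.filename = filename;
            this.line = line;
        }
    }

    // Hypothetical per-line parser standing in for the real map logic.
    static int parseValue(String line) {
        return Integer.parseInt(line.trim());
    }

    // Process all lines of one "file", recording failures instead of dying.
    public static List<BadRecord> processLines(String filename, List<String> lines) {
        List<BadRecord> bad = new ArrayList<>();
        for (String line : lines) {
            try {
                parseValue(line);          // would emit a key/value on success
            } catch (RuntimeException e) { // parse failure: record it, move on
                bad.add(new BadRecord(filename, line));
            }
        }
        return bad;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("42", "not-a-number", "7");
        for (BadRecord r : processLines("logs/part-0001.log", lines)) {
            System.out.println("bad record in " + r.filename + ": " + r.line);
        }
    }
}
```

This only helps when the failure surfaces as a catchable exception. If the record actually kills the task JVM, Hadoop's built-in skipping facility (`SkipBadRecords` in the old `mapred` API, enabled via the `mapred.skip.*` properties) can, as I understand it, detect and skip the offending record range on retry, and it logs the skipped ranges so you can inspect them afterwards.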


