hadoop-mapreduce-user mailing list archives

From unmesha sreeveni <unmeshab...@gmail.com>
Subject Re: Decision Tree implementation not working in cluster !
Date Fri, 06 Dec 2013 10:37:48 GMT
Any idea?


On Thu, Dec 5, 2013 at 4:13 PM, unmesha sreeveni <unmeshabiju@gmail.com> wrote:

>
> The decision tree is working perfectly in Eclipse Juno.
>
> But when I try to run it on my cluster, it shows this error:
> ======================================================================
> In main()
> In main() +++++++ run
> 13/12/05 16:10:40 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> 13/12/05 16:10:41 INFO mapred.FileInputFormat: Total input paths to process : 1
> 13/12/05 16:10:41 INFO mapred.JobClient: Running job: job_201310231648_0427
> 13/12/05 16:10:42 INFO mapred.JobClient:  map 0% reduce 0%
> 13/12/05 16:10:49 INFO mapred.JobClient:  map 100% reduce 0%
> 13/12/05 16:10:53 INFO mapred.JobClient:  map 100% reduce 100%
> 13/12/05 16:10:55 INFO mapred.JobClient: Job complete: job_201310231648_0427
> 13/12/05 16:10:55 INFO mapred.JobClient: Counters: 33
> 13/12/05 16:10:55 INFO mapred.JobClient:   File System Counters
> 13/12/05 16:10:55 INFO mapred.JobClient:     FILE: Number of bytes read=987
> 13/12/05 16:10:55 INFO mapred.JobClient:     FILE: Number of bytes written=589647
> 13/12/05 16:10:55 INFO mapred.JobClient:     FILE: Number of read operations=0
> 13/12/05 16:10:55 INFO mapred.JobClient:     FILE: Number of large read operations=0
> 13/12/05 16:10:55 INFO mapred.JobClient:     FILE: Number of write operations=0
> 13/12/05 16:10:55 INFO mapred.JobClient:     HDFS: Number of bytes read=846
> 13/12/05 16:10:55 INFO mapred.JobClient:     HDFS: Number of bytes written=264
> 13/12/05 16:10:55 INFO mapred.JobClient:     HDFS: Number of read operations=5
> 13/12/05 16:10:55 INFO mapred.JobClient:     HDFS: Number of large read operations=0
> 13/12/05 16:10:55 INFO mapred.JobClient:     HDFS: Number of write operations=2
> 13/12/05 16:10:55 INFO mapred.JobClient:   Job Counters
> 13/12/05 16:10:55 INFO mapred.JobClient:     Launched map tasks=2
> 13/12/05 16:10:55 INFO mapred.JobClient:     Launched reduce tasks=1
> 13/12/05 16:10:55 INFO mapred.JobClient:     Data-local map tasks=2
> 13/12/05 16:10:55 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=11179
> 13/12/05 16:10:55 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=3964
> 13/12/05 16:10:55 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
> 13/12/05 16:10:55 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
> 13/12/05 16:10:55 INFO mapred.JobClient:   Map-Reduce Framework
> 13/12/05 16:10:55 INFO mapred.JobClient:     Map input records=14
> 13/12/05 16:10:55 INFO mapred.JobClient:     Map output records=56
> 13/12/05 16:10:55 INFO mapred.JobClient:     Map output bytes=869
> 13/12/05 16:10:55 INFO mapred.JobClient:     Input split bytes=220
> 13/12/05 16:10:55 INFO mapred.JobClient:     Combine input records=0
> 13/12/05 16:10:55 INFO mapred.JobClient:     Combine output records=0
> 13/12/05 16:10:55 INFO mapred.JobClient:     Reduce input groups=20
> 13/12/05 16:10:55 INFO mapred.JobClient:     Reduce shuffle bytes=993
> 13/12/05 16:10:55 INFO mapred.JobClient:     Reduce input records=56
> 13/12/05 16:10:55 INFO mapred.JobClient:     Reduce output records=20
> 13/12/05 16:10:55 INFO mapred.JobClient:     Spilled Records=112
> 13/12/05 16:10:55 INFO mapred.JobClient:     CPU time spent (ms)=2450
> 13/12/05 16:10:55 INFO mapred.JobClient:     Physical memory (bytes) snapshot=636420096
> 13/12/05 16:10:55 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=2955980800
> 13/12/05 16:10:55 INFO mapred.JobClient:     Total committed heap usage (bytes)=524484608
> 13/12/05 16:10:55 INFO mapred.JobClient:   File Input Format Counters
> 13/12/05 16:10:55 INFO mapred.JobClient:     Bytes Read=417
> Current NODE INDEX . ::0
> java.io.FileNotFoundException: C45/intermediate0.txt (No such file or directory)
>     at java.io.FileInputStream.open(Native Method)
>     at java.io.FileInputStream.<init>(FileInputStream.java:138)
>     at java.io.FileInputStream.<init>(FileInputStream.java:97)
>     at tech.GainRatio.getcount(GainRatio.java:110)
>     at tech.C45.main(C45.java:105)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:601)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
> Exception in thread "main" java.lang.NumberFormatException: null
>     at java.lang.Integer.parseInt(Integer.java:454)
>     at java.lang.Integer.parseInt(Integer.java:527)
>     at tech.GainRatio.currNodeEntophy(GainRatio.java:28)
>     at tech.C45.main(C45.java:106)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:601)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
> ======================================================================
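
A side note on the second stack trace: `Integer.parseInt` throws a NumberFormatException whose message is just "null" when it is handed a null String, so the NumberFormatException is most likely only a consequence of the FileNotFoundException — the value that should have been read from intermediate0.txt was never set. A minimal plain-Java sketch of that failure mode (the variable name is illustrative, not taken from the repository):

```java
public class ParseNullDemo {
    public static void main(String[] args) {
        // Stands in for a count that should have been read from
        // C45/intermediate0.txt but stayed null because the file
        // was never found.
        String countFromFile = null;

        try {
            int count = Integer.parseInt(countFromFile);
            System.out.println("count = " + count);
        } catch (NumberFormatException e) {
            // parseInt rejects a null input with a NumberFormatException,
            // matching the "NumberFormatException: null" seen in the log.
            System.out.println("parse failed: " + e.getMessage());
        }
    }
}
```

So fixing the missing-file problem should make the second exception disappear as well.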
> But the folder "C45" is created; it is only the intermediate files that are
> never created inside it. Why is that?
> Code available: https://github.com/studhadoop/DT
> Any suggestions?
>
> --
> *Thanks & Regards*
>
> Unmesha Sreeveni U.B
>
> *Junior Developer*
>
>
>
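
One likely explanation for the FileNotFoundException quoted above: GainRatio.getcount opens C45/intermediate0.txt with java.io.FileInputStream, which only ever sees the local filesystem of the node running the driver. With Eclipse's local job runner the reducer output lands on the local disk, so the read works; on the cluster the job writes its output to HDFS, and the local C45 folder stays empty. A minimal sketch of reading the file through Hadoop's FileSystem API instead — the relative path and the surrounding structure are assumptions for illustration, not code from the repository:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadIntermediate {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // FileSystem.get(conf) resolves against fs.default.name: HDFS on
        // the cluster, the local filesystem under the local job runner.
        FileSystem fs = FileSystem.get(conf);

        // Relative paths resolve against the user's HDFS home directory,
        // which is where the job would have created C45/.
        Path p = new Path("C45/intermediate0.txt");

        BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(p)));
        try {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            in.close();
        }
    }
}
```

Replacing the FileInputStream in getcount with fs.open(...) in this way would make the same jar behave identically in Eclipse and on the cluster.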


-- 
*Thanks & Regards*

Unmesha Sreeveni U.B

*Junior Developer*
