hadoop-hive-dev mailing list archives

From "Soundararajan Velu (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HIVE-943) Hive jdbc client - result is NULL when I run a query to select a large of data (with starting mapreduce)
Date Fri, 02 Jul 2010 14:24:51 GMT

    [ https://issues.apache.org/jira/browse/HIVE-943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12884688#action_12884688 ]

Soundararajan Velu commented on HIVE-943:
-----------------------------------------

Vu, I tried this piece of code and it works just fine. The only change I had to make was to
replace the createStatement call with the default no-argument form. It would help if you
could tell us the exact problem you are facing here.
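For illustration, here is a minimal sketch of the change described above. The connection URL, table, and query are hypothetical placeholders, and the driver class name assumed here is the old `org.apache.hadoop.hive.jdbc.HiveDriver` of the Hive 0.x era; the point is simply to call the no-argument `createStatement()`, since requesting scrollable or updatable result sets is not something the early Hive JDBC driver reliably supports. The sketch degrades gracefully when no HiveServer or driver jar is available:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveJdbcSketch {

    /** Runs the given query with a plain no-argument createStatement()
     *  and returns a short status string. URL and query are assumptions. */
    public static String run(String url, String query) {
        try {
            // Pre-0.11 Hive JDBC driver class (assumption for this sketch).
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        } catch (ClassNotFoundException e) {
            return "Hive JDBC driver not on classpath";
        }
        try (Connection con = DriverManager.getConnection(url, "", "");
             // No-argument createStatement(): do not request scrollable or
             // updatable result sets that the old driver cannot provide.
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(query)) {
            int rows = 0;
            while (rs.next()) {
                System.out.println(rs.getString(1));
                rows++;
            }
            return "rows=" + rows;
        } catch (SQLException e) {
            return "query failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(run("jdbc:hive://localhost:10000/default",
                               "SELECT col FROM some_table"));
    }
}
```

Without a running HiveServer the sketch just reports why it could not connect; with one, it streams the result rows the same way the reporter's code does.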

> Hive jdbc client - result is NULL when I run a query to select a large of data (with starting mapreduce)
> --------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-943
>                 URL: https://issues.apache.org/jira/browse/HIVE-943
>             Project: Hadoop Hive
>          Issue Type: Bug
>          Components: Clients
>    Affects Versions: 0.4.0
>            Reporter: Vu Hoang
>             Fix For: 0.4.2
>
>
> - some main output messages i got from console:
> Total MapReduce jobs = 1
> 09/11/18 15:56:03 INFO ql.Driver: Total MapReduce jobs = 1
> 09/11/18 15:56:03 INFO exec.ExecDriver: BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=1289288953
> Number of reduce tasks not specified. Estimated from input data size: 2
> 09/11/18 15:56:03 INFO exec.ExecDriver: Number of reduce tasks not specified. Estimated from input data size: 2
> In order to change the average load for a reducer (in bytes):
> 09/11/18 15:56:03 INFO exec.ExecDriver: In order to change the average load for a reducer (in bytes):
>   set hive.exec.reducers.bytes.per.reducer=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver:   set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> 09/11/18 15:56:03 INFO exec.ExecDriver: In order to limit the maximum number of reducers:
>   set hive.exec.reducers.max=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver:   set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> 09/11/18 15:56:03 INFO exec.ExecDriver: In order to set a constant number of reducers:
>   set mapred.reduce.tasks=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver:   set mapred.reduce.tasks=<number>
> 09/11/18 15:56:03 INFO exec.ExecDriver: Using org.apache.hadoop.hive.ql.io.HiveInputFormat
> Starting Job = job_200911122011_0639, Tracking URL = http://**********/jobdetails.jsp?jobid=job_200911122011_0639
> 09/11/18 15:56:04 INFO exec.ExecDriver: Starting Job = job_200911122011_0639, Tracking URL = http://**********/jobdetails.jsp?jobid=job_200911122011_0639
> Kill Command = /data/hadoop-hive/bin/../bin/hadoop job  -Dmapred.job.tracker=********** -kill job_200911122011_0639
> 09/11/18 15:56:04 INFO exec.ExecDriver: Kill Command = /data/hadoop-hive/bin/../bin/hadoop job  -Dmapred.job.tracker=********** -kill job_200911122011_0639
> 2009-11-18 03:56:05,701 map = 0%,  reduce = 0%
> 09/11/18 15:56:05 INFO exec.ExecDriver: 2009-11-18 03:56:05,701 map = 0%,  reduce = 0%
> 2009-11-18 03:56:21,798 map = 4%,  reduce = 0%
> 09/11/18 15:56:21 INFO exec.ExecDriver: 2009-11-18 03:56:21,798 map = 4%,  reduce = 0%
> 2009-11-18 03:56:22,818 map = 8%,  reduce = 0%
> 09/11/18 15:56:22 INFO exec.ExecDriver: 2009-11-18 03:56:22,818 map = 8%,  reduce = 0%
> 2009-11-18 03:56:23,832 map = 13%,  reduce = 0%
> 09/11/18 15:56:23 INFO exec.ExecDriver: 2009-11-18 03:56:23,832 map = 13%,  reduce = 0%
> 2009-11-18 03:56:24,854 map = 17%,  reduce = 0%
> 09/11/18 15:56:24 INFO exec.ExecDriver: 2009-11-18 03:56:24,854 map = 17%,  reduce = 0%
> 2009-11-18 03:56:25,864 map = 21%,  reduce = 0%
> 09/11/18 15:56:25 INFO exec.ExecDriver: 2009-11-18 03:56:25,864 map = 21%,  reduce = 0%
> 2009-11-18 03:56:29,890 map = 25%,  reduce = 0%
> 09/11/18 15:56:29 INFO exec.ExecDriver: 2009-11-18 03:56:29,890 map = 25%,  reduce = 0%
> 2009-11-18 03:56:30,900 map = 29%,  reduce = 0%
> 09/11/18 15:56:30 INFO exec.ExecDriver: 2009-11-18 03:56:30,900 map = 29%,  reduce = 0%
> 2009-11-18 03:56:31,909 map = 33%,  reduce = 0%
> 09/11/18 15:56:31 INFO exec.ExecDriver: 2009-11-18 03:56:31,909 map = 33%,  reduce = 0%
> 2009-11-18 03:56:33,933 map = 37%,  reduce = 0%
> 09/11/18 15:56:33 INFO exec.ExecDriver: 2009-11-18 03:56:33,933 map = 37%,  reduce = 0%
> 2009-11-18 03:56:35,946 map = 50%,  reduce = 0%
> 09/11/18 15:56:35 INFO exec.ExecDriver: 2009-11-18 03:56:35,946 map = 50%,  reduce = 0%
> 2009-11-18 03:56:36,956 map = 54%,  reduce = 0%
> 09/11/18 15:56:36 INFO exec.ExecDriver: 2009-11-18 03:56:36,956 map = 54%,  reduce = 0%
> 2009-11-18 03:56:37,965 map = 58%,  reduce = 0%
> 09/11/18 15:56:37 INFO exec.ExecDriver: 2009-11-18 03:56:37,965 map = 58%,  reduce = 0%
> 2009-11-18 03:56:38,978 map = 79%,  reduce = 0%
> 09/11/18 15:56:38 INFO exec.ExecDriver: 2009-11-18 03:56:38,978 map = 79%,  reduce = 0%
> 2009-11-18 03:56:39,988 map = 83%,  reduce = 0%
> 09/11/18 15:56:39 INFO exec.ExecDriver: 2009-11-18 03:56:39,988 map = 83%,  reduce = 0%
> 2009-11-18 03:56:40,998 map = 96%,  reduce = 0%
> 09/11/18 15:56:41 INFO exec.ExecDriver: 2009-11-18 03:56:40,998 map = 96%,  reduce = 0%
> 2009-11-18 03:56:42,006 map = 100%,  reduce = 0%
> 09/11/18 15:56:42 INFO exec.ExecDriver: 2009-11-18 03:56:42,006 map = 100%,  reduce = 0%
> 2009-11-18 03:56:46,031 map = 100%,  reduce = 13%
> 09/11/18 15:56:46 INFO exec.ExecDriver: 2009-11-18 03:56:46,031 map = 100%,  reduce = 13%
> 2009-11-18 03:56:51,060 map = 100%,  reduce = 25%
> 09/11/18 15:56:51 INFO exec.ExecDriver: 2009-11-18 03:56:51,060 map = 100%,  reduce = 25%
> 2009-11-18 03:56:56,091 map = 100%,  reduce = 67%
> 09/11/18 15:56:56 INFO exec.ExecDriver: 2009-11-18 03:56:56,091 map = 100%,  reduce = 67%
> 2009-11-18 03:56:57,102 map = 100%,  reduce = 100%
> 09/11/18 15:56:57 INFO exec.ExecDriver: 2009-11-18 03:56:57,102 map = 100%,  reduce = 100%
> Ended Job = job_200911122011_0639
> 09/11/18 15:56:59 INFO exec.ExecDriver: Ended Job = job_200911122011_0639
> 09/11/18 15:56:59 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://**********/tmp/hive-asadm/1751400872/_tmp.10001 to: hdfs://**********/tmp/hive-asadm/1751400872/_tmp.10001.intermediate
> 09/11/18 15:56:59 INFO exec.FileSinkOperator: Moving tmp dir: hdfs://**********/tmp/hive-asadm/1751400872/_tmp.10001.intermediate to: hdfs://**********/tmp/hive-asadm/1751400872/10001
> OK
> 09/11/18 15:56:59 INFO ql.Driver: OK
> 09/11/18 15:56:59 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_col0, type:string, comment:from deserializer)], properties:null)
> 09/11/18 15:56:59 INFO ql.Driver: Returning Thrift schema: Schema(fieldSchemas:[FieldSchema(name:_col0, type:string, comment:from deserializer)], properties:null)
> 09/11/18 15:56:59 INFO service.HiveServer: Returning schema: Schema(fieldSchemas:[FieldSchema(name:_col0, type:string, comment:from deserializer)], properties:null)
> 09/11/18 15:56:59 INFO mapred.FileInputFormat: Total input paths to process : 2
> ||_col0||
> |NULL|
> - This problem does not appear when the query runs without MapReduce. Why is that?
> - Have I made a mistake in my Hive JDBC configuration?
> - The temporary data is moved twice; could that be the reason?
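As a side note on the log above: the "Estimated from input data size: 2" line is just a ceiling division of the total input size (1289288953 bytes) by hive.exec.reducers.bytes.per.reducer (1000000000), capped at hive.exec.reducers.max (999). A simplified sketch of that estimate (Hive's real logic has a few more details, such as a minimum of one reducer):

```java
public class ReducerEstimate {

    /** Simplified reducer estimate: ceil(totalInputFileSize / bytesPerReducer),
     *  capped at maxReducers, with a floor of one reducer. */
    public static long estimateReducers(long totalInputFileSize,
                                        long bytesPerReducer,
                                        int maxReducers) {
        // Ceiling division without floating point.
        long reducers = (totalInputFileSize + bytesPerReducer - 1) / bytesPerReducer;
        return Math.max(1, Math.min(reducers, maxReducers));
    }

    public static void main(String[] args) {
        // Values taken from the log above: prints 2.
        System.out.println(estimateReducers(1289288953L, 1000000000L, 999));
    }
}
```

This matches the log: 1289288953 bytes at one reducer per 1 GB rounds up to 2, well under the cap of 999.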

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

