kylin-dev mailing list archives

From LIU Ze (刘则) <liu...@wanda.cn>
Subject Re: Error of " Calculate HTable Region Splits"
Date Fri, 06 Nov 2015 01:21:59 GMT
Thanks Li Yang.
The CubeCapacity configuration did not have any of the values "SMALL/MEDIUM/LARGE", so I hard-coded it
to "MEDIUM", which skips this error temporarily.
But I do not understand why the value of CubeCapacity is null. I downloaded the source code from
https://github.com/apache/incubator-kylin/tree/kylin-1.1-incubating .
Now it fails at the "Load HFile to HBase Table" step; please see the mail "Error in
Load HFile to HBase Table". Thanks!

The contents of commit.sha1:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#


The code of "org/apache/kylin/job/hadoop/cube/RangeKeyDistributionReducer.java" that I modified:

    public static final int SMALL_CUT = 10; //  10 GB per region
    public static final int MEDIUM_CUT = 20; //  20 GB per region
    public static final int LARGE_CUT = 100; // 100 GB per region

    @Override
    protected void setup(Context context) throws IOException {
        super.publishConfiguration(context.getConfiguration());

        //CubeCapacity cubeCapacity = CubeCapacity.valueOf(context.getConfiguration().get(BatchConstants.CUBE_CAPACITY));
        CubeCapacity cubeCapacity = CubeCapacity.valueOf("MEDIUM");
        switch (cubeCapacity) {
            case SMALL:
                cut = SMALL_CUT;
                break;
            case MEDIUM:
                cut = MEDIUM_CUT;
                break;
            case LARGE:
                cut = LARGE_CUT;
                break;
        }

        logger.info("Chosen cut for htable is " + cut);
        if (context.getConfiguration().get(BatchConstants.REGION_SPLIT_SIZE) != null) {
            cut = Integer.valueOf(context.getConfiguration().get(BatchConstants.REGION_SPLIT_SIZE));
        }

        if (context.getConfiguration().get(BatchConstants.REGION_NUMBER_MIN) != null) {
            minRegionCount = Integer.valueOf(context.getConfiguration().get(BatchConstants.REGION_NUMBER_MIN));
        }

        if (context.getConfiguration().get(BatchConstants.REGION_NUMBER_MAX) != null) {
            maxRegionCount = Integer.valueOf(context.getConfiguration().get(BatchConstants.REGION_NUMBER_MAX));
        }

        logger.info("Chosen cut for htable is " + cut + ", max region count=" + maxRegionCount + ", min region count=" + minRegionCount);
    }
    @Override
    public void reduce(Text key, Iterable<LongWritable> values, Context context) throws IOException, InterruptedException {
        for (LongWritable v : values) {
            bytesRead += v.get();
        }

        if (bytesRead >= ONE_GIGA_BYTES) {
            gbPoints.add(new Text(key));
            bytesRead = 0; // reset bytesRead
        }
    }
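The hard-coded "MEDIUM" above works around the crash but discards the configured capacity. A more defensive sketch would fall back to a default only when the configuration value is null or unrecognized; the helper name `parseCapacity`, the standalone `CubeCapacity` enum, and the MEDIUM default below are my assumptions for illustration, not the Kylin API:

```java
// Null-safe parsing of the cube capacity setting. Enum.valueOf throws
// NullPointerException ("Name is null") when given null, which is the
// failure seen in RangeKeyDistributionReducer.setup, so we guard against it.
public class CapacityParser {
    enum CubeCapacity { SMALL, MEDIUM, LARGE }

    // Fall back to MEDIUM when the raw value is null or not a valid enum name,
    // instead of letting valueOf throw in the reducer's setup().
    static CubeCapacity parseCapacity(String raw) {
        if (raw == null) {
            return CubeCapacity.MEDIUM; // assumed default, mirrors the manual workaround
        }
        try {
            return CubeCapacity.valueOf(raw.trim().toUpperCase());
        } catch (IllegalArgumentException e) {
            return CubeCapacity.MEDIUM;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseCapacity(null));    // prints MEDIUM
        System.out.println(parseCapacity("LARGE")); // prints LARGE
        System.out.println(parseCapacity("bogus")); // prints MEDIUM
    }
}
```

Note that a fallback like this only hides the real problem: the job configuration entry for the cube capacity was never set, which points to a mismatched metadata/code version, as Li Yang suggests below.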



________________________________
The stack trace does not match the 1.1 code. Could you show the contents of "KYLIN_HOME/commit.sha1"?
That's the exact commit version.
https://github.com/apache/incubator-kylin/blob/kylin-1.1-incubating/job/src/main/java/org/apache/kylin/job/hadoop/cube/RangeKeyDistributionReducer.java
Anyway, I suggest getting the 1.1 code again and redeploying. It seems the Kylin package is not the official 1.1 release.

On Tue, Nov 3, 2015 at 5:07 PM, LIU Ze (刘则) wrote:
>
> hi all,
> It makes an error in the step of "Calculate HTable Region Splits", and the version is 1.1.
>
> kylin_job.log:
> [pool-5-thread-9]:[2015-11-03 16:43:09,393][DEBUG][org.apache.kylin.job.tools.HadoopStatusChecker.checkStatus(HadoopStatusChecker.java:57)]
> - State of Hadoop job: job_1444723293631_39546:FINISHED-FAILED
> [pool-5-thread-9]:[2015-11-03 16:43:09,707][ERROR][org.apache.kylin.job.common.HadoopCmdOutput.updateJobCounter(HadoopCmdOutput.java:100)]
> - java.io.IOException: Unknown Job job_1444723293631_39546
>     at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.verifyAndGetJob(HistoryClientService.java:218)
>     at org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getCounters(HistoryClientService.java:232)
>     at org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getCounters(MRClientProtocolPBServiceImpl.java:159)
>     at org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:281)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
>
> yarn logs -applicationId application_1444723293631_39546:
>
> 2015-11-03 16:42:56,428 WARN [main] org.apache.hadoop.mapred.YarnChild:
> Exception running child : java.lang.NullPointerException: Name is null
>     at java.lang.Enum.valueOf(Enum.java:235)
>     at org.apache.kylin.cube.model.v1.CubeDesc$CubeCapacity.valueOf(CubeDesc.java:72)
>     at org.apache.kylin.job.hadoop.cube.RangeKeyDistributionReducer.setup(RangeKeyDistributionReducer.java:59)
>     at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:168)
>     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
>     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
>     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:415)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
>
> ________________________________
>
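The "Name is null" message in the yarn log above is the documented behavior of `java.lang.Enum.valueOf`, which throws NullPointerException when the name argument is null. A minimal reproduction, using a stand-in `CubeCapacity` enum and a hypothetical `tryParse` helper (neither is Kylin code):

```java
// Reproduces the failure mode from the yarn log: a missing configuration
// value reaches the enum's valueOf as null, and Enum.valueOf throws
// NullPointerException with the message "Name is null".
public class EnumNullDemo {
    enum CubeCapacity { SMALL, MEDIUM, LARGE }

    // Returns the parsed enum name, or the NPE message when the input is null.
    static String tryParse(String raw) {
        try {
            return CubeCapacity.valueOf(raw).name();
        } catch (NullPointerException e) {
            return "NPE: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Simulates context.getConfiguration().get(...) returning null
        // for the cube-capacity key.
        System.out.println(tryParse(null));     // prints NPE: Name is null
        System.out.println(tryParse("MEDIUM")); // prints MEDIUM
    }
}
```

This confirms that the root cause is the configuration key being absent at job submission time, consistent with the suggestion that the deployed package and metadata do not match the 1.1 release.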