kylin-user mailing list archives

From "刘成军" <>
Subject Re: Help for job build: directory item limit exceeded exception
Date Tue, 20 Nov 2018 15:25:35 GMT
    Thanks for your reply, I will try it later.

But I have checked the source code; it sets the property like this:

    Configuration conf = HBaseConfiguration.create(HadoopUtil.getCurrentConfiguration());
    if (StringUtils.isBlank(conf.get("hbase.fs.tmp.dir"))) {
        conf.set("hbase.fs.tmp.dir", "/tmp");
    }

    My question is: I have set the hbase.fs.tmp.dir property in hbase-site.xml (and restarted
Kylin), but it still writes data to the /tmp directory.
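For reference, a minimal hbase-site.xml fragment for this property could look like the sketch below (the /data/hbase-staging path is only an illustrative example). Since the code above falls back to /tmp whenever the property is blank in the configuration Kylin actually loads, the edited hbase-site.xml must be the one on Kylin's classpath:

```xml
<!-- Sketch only: the value shown is an example path, not a recommendation. -->
<property>
  <name>hbase.fs.tmp.dir</name>
  <value>/data/hbase-staging</value>
</property>
```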

    Does anyone have another suggestion?

From: JiaTao Tao <>
Sent: Tuesday, Nov 20, 2018 22:59
To: user <>; 刘成军 <>
Subject: Re: Help for job build: directory item limit exceeded exception


It seems that there are too many files in "/tmp"; try modifying the config below in "hdfs-site.xml".

  <property>
    <name>dfs.namenode.fs-limits.max-directory-items</name>
    <value>1048576</value>
    <description>Defines the maximum number of items that a directory may
        contain. Cannot set the property to a value less than 1 or more than
        6400000.</description>
  </property>
And here's a link for you:
It is generally recommended that users do not tune these values except in very unusual circumstances.
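To see how close a directory is to the limit before tuning it, you can count its direct children (on HDFS itself, `hdfs dfs -count /tmp` reports the directory and file counts). Below is a local-filesystem sketch of the same check; the 1048576 constant is the hdfs-default.xml default for dfs.namenode.fs-limits.max-directory-items, and the class name is invented for illustration:

```java
import java.io.File;

// Local-filesystem sketch of the limit the NameNode enforces:
// a directory may not hold more than
// dfs.namenode.fs-limits.max-directory-items children
// (default 1048576 in hdfs-default.xml).
public class DirItemLimitCheck {
    static final int DEFAULT_MAX_DIRECTORY_ITEMS = 1048576;

    // Returns the number of direct children of dir
    // (0 if the directory is missing or unreadable).
    static int countItems(File dir) {
        String[] children = dir.list();
        return children == null ? 0 : children.length;
    }

    public static void main(String[] args) {
        File dir = new File(args.length > 0 ? args[0] : "/tmp");
        System.out.println(dir + ": " + countItems(dir)
                + " items, default limit " + DEFAULT_MAX_DIRECTORY_ITEMS);
    }
}
```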

刘成军 <> wrote on Tue, Nov 20, 2018 at 11:01 AM:
    Building a cube from my CDH 5.13 cluster (with Kerberos enabled), when the job reaches
step #10, Convert Cuboid Data to HFile,
it throws the following exception:
 I also changed the HBase config (hbase.fs.tmp.dir=/usr/tmp/hbase) in my hbase-site.xml, but
it throws the same exception.
 How can I deal with it?

   I do not have permission to delete the data in /tmp.
Best Regards


Aron Tao
