kylin-user mailing list archives

From: JiaTao Tao <taojia...@gmail.com>
Subject: Re: Help for job build: directory item limit exceeded exception
Date: Tue, 20 Nov 2018 14:58:54 GMT
Hi

It seems there are too many files in "/tmp". Try modifying the following
property in "hdfs-site.xml":

<property>
  <name>dfs.namenode.fs-limits.max-directory-items</name>
  <value>1048576</value>
  <description>Defines the maximum number of items that a directory may
      contain. Cannot set the property to a value less than 1 or more than
      6400000.</description>
</property>
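
Before raising the limit, it may help to confirm it is really being hit. Here is a
minimal, untested sketch (assuming the Hadoop client jars and your cluster's
core-site.xml/hdfs-site.xml are on the classpath, and assuming "/tmp" is the
directory named in your exception; substitute the actual path if it differs) that
prints the number of direct children of the directory, which is the figure the
max-directory-items check applies to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DirectoryItemCount {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml / hdfs-site.xml found on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // listStatus() returns only the direct children of the directory,
        // which is what dfs.namenode.fs-limits.max-directory-items counts.
        FileStatus[] children = fs.listStatus(new Path("/tmp"));
        System.out.println("/tmp contains " + children.length + " direct entries");
    }
}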



And here's a link for you:
https://tw.saowen.com/a/fa6aea71141c6241f496093d9b0feb0c87bf4c30cf40b4ff6fdc065a8228231a
It is generally recommended that users do not tune these values except in
very unusual circumstances.
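
Since changing hbase.fs.tmp.dir did not help in your case, it may also be worth
checking which value the job actually sees; an edit to hbase-site.xml only takes
effect if that file is the one on the job's classpath. A rough sketch (assuming the
HBase client jars and your hbase-site.xml are on the classpath):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ShowHBaseTmpDir {
    public static void main(String[] args) {
        // Loads hbase-default.xml and hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();

        // Prints the effective staging directory; if it still shows the old
        // path, the edited hbase-site.xml is not the one being picked up.
        System.out.println("hbase.fs.tmp.dir = " + conf.get("hbase.fs.tmp.dir"));
    }
}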

刘成军 <liucj@zqykj.com> wrote on Tue, Nov 20, 2018 at 11:01 AM:

> Hi,
>     When building a cube on my CDH 5.13 cluster (with Kerberos enabled), the
> job fails at step #10, "Convert Cuboid Data to HFile", with the following
> exception:
>
>
>  I also changed the HBase config (hbase.fs.tmp.dir=/usr/tmp/hbase) in my
> hbase-site.xml, but I get the same exception.
>  How can I fix this?
>
> PS:
>    I do not have permission to delete the data in /tmp.
>
> Best Regards
>
> -----------------------------
>
> 刘成军 (Gavin)
>
> ————————————————
>
> Mobile: 13913036255

-- 


Regards!

Aron Tao
