hadoop-hdfs-dev mailing list archives

From "Xu Chen (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HDFS-8972) EINVAL Invalid argument when RAM_DISK usage 90%+
Date Thu, 27 Aug 2015 05:56:46 GMT
Xu Chen created HDFS-8972:

             Summary: EINVAL Invalid argument when RAM_DISK usage 90%+
                 Key: HDFS-8972
                 URL: https://issues.apache.org/jira/browse/HDFS-8972
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Xu Chen
            Priority: Critical

For a directory using the LAZY_PERSIST storage policy, when the "df" command shows tmpfs usage at 90% or more, running a Spark, Hive, or MapReduce application causes the DataNode to emit the following exception:

2015-08-26 17:37:34,123 WARN org.apache.hadoop.io.ReadaheadPool: Failed readahead on null
EINVAL: Invalid argument
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posix_fadvise(Native Method)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX.posixFadviseIfPossible(NativeIO.java:267)
        at org.apache.hadoop.io.nativeio.NativeIO$POSIX$CacheManipulator.posixFadviseIfPossible(NativeIO.java:146)
        at org.apache.hadoop.io.ReadaheadPool$ReadaheadRequestImpl.run(ReadaheadPool.java:206)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

The application also runs about 25% slower than when the exception does not occur.


This message was sent by Atlassian JIRA
