hadoop-hdfs-issues mailing list archives

From "Tony Reix (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8520) Patch for PPC64 block size
Date Tue, 25 Aug 2015 09:19:46 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14710968#comment-14710968 ]

Tony Reix commented on HDFS-8520:
---------------------------------

Hi Andrew,
Sorry for answering so late; I was busy with other things.

My understanding of this defect is:
- I had analyzed the code and found hardcoded 4096 values for PageSize in many places,
which covers mainly the basic x86 case but not other architectures.
- On Linux on Power, PageSize is different.
- I had proposed using the getOperatingSystemPageSize() routine instead of the hardcoded
4096 value, in order to make the test generic (working on all targets with no change).
- You proposed instead to keep 4096 hardcoded in the existing test and to add a new test
with 64KB as the hardcoded value (which would cover the x86 Linux Large Page case).
     I guess this would mean adding some code to check, before running these 2 tests,
which ones are applicable on the targeted architecture.
    What about other architectures that use neither 4096 nor 64KB?
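For reference, the generic approach described above amounts to querying the page size from the OS at runtime instead of assuming 4096. A minimal standalone sketch of that idea, using `sun.misc.Unsafe` via reflection rather than Hadoop's own getOperatingSystemPageSize() helper (the fallback value and class name here are illustrative, not from the patch):

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class PageSize {
    // Query the OS page size at runtime; fall back to 4096 only if
    // reflection is unavailable (hypothetical fallback, for illustration).
    static long osPageSize() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            Unsafe unsafe = (Unsafe) f.get(null);
            return unsafe.pageSize();
        } catch (ReflectiveOperationException e) {
            return 4096L;
        }
    }

    public static void main(String[] args) {
        // 4096 on typical x86 Linux, 65536 on RHEL/PPC64
        System.out.println(osPageSize());
    }
}
```

A test written against this value, rather than against a literal 4096, would pass unchanged on x86, PPC64, or any other architecture.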

What is the status now for Hadoop 2.7.2?
Are you planning to make the code more homogeneous and to enable it to handle cases (like
Linux on Power) where the basic PageSize is NOT 4096?
It is very difficult for me to propose a patch since:
        1) there are a lot of places where 4096 and PageSize appear, with no clear clue
whether they relate to my issue;
        2) your approach is very different from mine: hardcoding values instead of making
the code generic.
Moreover, the way the tests "fail" is a pain, since they end with a TimeOut.

In my opinion, the best approach would be a major rewrite of the test code so that it handles
very different architectures by default (including the x86 4KB and 64KB cases).
I would also suggest adding a Linux/PPC64 machine to the development/test environment, in order
to handle different architectures as soon as possible.
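Such a rewrite would need sizes in the tests expressed as multiples of whatever page size the platform reports, rather than as literals. A minimal sketch of that arithmetic (the `roundUp` helper is hypothetical, not part of the attached patch):

```java
public class PageAligned {
    // Round value up to the next multiple of pageSize, so that test
    // block and cache sizes stay page-aligned on any architecture.
    static long roundUp(long value, long pageSize) {
        return ((value + pageSize - 1) / pageSize) * pageSize;
    }

    public static void main(String[] args) {
        System.out.println(roundUp(5000L, 4096L));   // 8192 on a 4KB-page system
        System.out.println(roundUp(5000L, 65536L));  // 65536 on a 64KB-page system
    }
}
```

With helpers like this, the same test source covers the 4KB and 64KB cases without per-architecture hardcoded values.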

> Patch for PPC64 block size
> --------------------------
>
>                 Key: HDFS-8520
>                 URL: https://issues.apache.org/jira/browse/HDFS-8520
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.7.1
>         Environment: RHEL 7.1 /PPC64
>            Reporter: Tony Reix
>            Assignee: Tony Reix
>         Attachments: HDFS-8520-2.patch, HDFS-8520.patch
>
>
> The attached patch enables Hadoop to work on PPC64.
> That deals with SystemPageSize and BlockSize, which are not 4096 on PPC64.
> There are changes in 3 files:
> - hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/nativeio/NativeIO.java
> - hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java
> - hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java
> where 4096 is replaced by getOperatingSystemPageSize() or by using PAGE_SIZE
> The patch has been built on branch-2.7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
