hadoop-common-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11466) FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there
Date Thu, 22 Jan 2015 14:10:38 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287464#comment-14287464 ]

Hudson commented on HADOOP-11466:
---------------------------------

FAILURE: Integrated in Hadoop-Hdfs-trunk #2013 (See [https://builds.apache.org/job/Hadoop-Hdfs-trunk/2013/])
HADOOP-11466. FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there (Suman Somasundar via Colin P. McCabe) (cmccabe: rev ee7d22e90ce67de3e7ee92f309c048a1d4be0bbe)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/FastByteComparisons.java


> FastByteComparisons: do not use UNSAFE_COMPARER on the SPARC architecture because it is slower there
> ----------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-11466
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11466
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: io, performance, util
>         Environment: Linux X86 and Solaris SPARC
>            Reporter: Suman Somasundar
>            Assignee: Suman Somasundar
>            Priority: Minor
>              Labels: patch
>             Fix For: 2.7.0
>
>         Attachments: HADOOP-11466.002.patch, HADOOP-11466.003.patch
>
>
> One difference between Hadoop 2.x and Hadoop 1.x is a utility that compares two byte arrays
> at a coarser 8-byte granularity instead of at the byte level. The discussion at HADOOP-7761
> says this fast byte comparison is somewhat faster for longer arrays and somewhat slower for
> smaller arrays (AVRO-939). In order to do 8-byte reads on addresses not aligned to 8-byte
> boundaries, the patch uses Unsafe.getLong. The problem is that this call is incredibly
> expensive on SPARC: the Studio compiler detects an unaligned pointer read and handles the
> read in software. x86 supports unaligned reads, so there is no penalty for this call on x86.
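
For readers skimming the archive, a minimal Java sketch of the idea follows. It is not the code from the attached patches; the class name, the os.arch gate, and the fallback helpers are illustrative assumptions. It compares 8 bytes at a time with sun.misc.Unsafe.getLong and falls back to a byte-at-a-time loop when the JVM reports a SPARC architecture, where unaligned long reads are handled in software.

import java.lang.reflect.Field;
import java.nio.ByteOrder;
import sun.misc.Unsafe;

/**
 * Minimal sketch of the FastByteComparisons idea: compare byte arrays
 * 8 bytes at a time through sun.misc.Unsafe, but skip the unsafe path on
 * SPARC, where unaligned long reads are emulated in software and slow.
 * The class name and the os.arch check are illustrative, not the actual patch.
 */
public class LexicographicCompareSketch {

    private static final Unsafe UNSAFE = loadUnsafe();
    private static final long BYTE_ARRAY_BASE =
            UNSAFE == null ? 0 : UNSAFE.arrayBaseOffset(byte[].class);
    private static final boolean LITTLE_ENDIAN =
            ByteOrder.nativeOrder().equals(ByteOrder.LITTLE_ENDIAN);
    // Hypothetical architecture gate mirroring the intent of HADOOP-11466.
    private static final boolean USE_UNSAFE =
            UNSAFE != null
            && !System.getProperty("os.arch", "").toLowerCase().contains("sparc");

    private static Unsafe loadUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            return null; // no Unsafe available: use the byte-at-a-time comparer
        }
    }

    /** Lexicographically compares b1[0..len) with b2[0..len). */
    public static int compare(byte[] b1, byte[] b2, int len) {
        if (!USE_UNSAFE) {
            return compareByByte(b1, b2, 0, len);
        }
        int i = 0;
        // 8-byte strides; Unsafe.getLong tolerates unaligned offsets cheaply on
        // x86, but such reads trap and are handled in software on SPARC.
        for (; i + 8 <= len; i += 8) {
            long w1 = UNSAFE.getLong(b1, BYTE_ARRAY_BASE + i);
            long w2 = UNSAFE.getLong(b2, BYTE_ARRAY_BASE + i);
            if (w1 != w2) {
                if (LITTLE_ENDIAN) {
                    // Swap to big-endian so numeric order matches byte order.
                    w1 = Long.reverseBytes(w1);
                    w2 = Long.reverseBytes(w2);
                }
                return Long.compareUnsigned(w1, w2);
            }
        }
        // Compare the remaining tail one byte at a time.
        return compareByByte(b1, b2, i, len);
    }

    private static int compareByByte(byte[] b1, byte[] b2, int from, int len) {
        for (int i = from; i < len; i++) {
            int diff = (b1[i] & 0xff) - (b2[i] & 0xff);
            if (diff != 0) {
                return diff;
            }
        }
        return 0;
    }
}

Gating on the architecture rather than dropping the unsafe comparer keeps the 8-byte fast path on x86, where unaligned getLong reads carry no penalty.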



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
