manifoldcf-dev mailing list archives

From "Karl Wright (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (CONNECTORS-970) Hadoop error and silent failure
Date Tue, 23 Sep 2014 07:50:33 GMT

     [ https://issues.apache.org/jira/browse/CONNECTORS-970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karl Wright resolved CONNECTORS-970.
------------------------------------
       Resolution: Won't Fix
    Fix Version/s:     (was: ManifoldCF 2.0)

Haven't seen this in the latest Hadoop.


> Hadoop error and silent failure
> -------------------------------
>
>                 Key: CONNECTORS-970
>                 URL: https://issues.apache.org/jira/browse/CONNECTORS-970
>             Project: ManifoldCF
>          Issue Type: Bug
>          Components: HDFS connector
>    Affects Versions: ManifoldCF 1.6.1
>            Reporter: Karl Wright
>            Assignee: Minoru Osuka
>
> The HDFS output connector, during its check() call, succeeds even though the following is dumped to stderr:
> {code}
> [Thread-475] ERROR org.apache.hadoop.util.Shell - Failed to locate the winutils binary in the hadoop binary path
> java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
>         at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
>         at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
>         at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
>         at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
>         at org.apache.hadoop.conf.Configuration.getTrimmedStrings(Configuration.java:1546)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:519)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
>         at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:156)
>         at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:153)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:153)
>         at org.apache.manifoldcf.agents.output.hdfs.HDFSSession.<init>(HDFSSession.java:59)
>         at org.apache.manifoldcf.agents.output.hdfs.HDFSOutputConnector$GetSessionThread.run(HDFSOutputConnector.java:822)
> {code}
> So there may be two problems here: (1) if Hadoop has environment requirements beyond standard Java support, we need to know what those are and make sure they are written up; and (2) if the requirements are not met, the check() method should report an appropriate status, not succeed.
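
For context on point (1): on Windows, Hadoop's org.apache.hadoop.util.Shell resolves winutils.exe from the "hadoop.home.dir" system property, falling back to the HADOOP_HOME environment variable; the "null\bin\winutils.exe" path in the trace above means neither was set. A minimal sketch of the workaround follows; the C:\hadoop install path is a hypothetical example.

{code}
// Hedged sketch: C:\hadoop is a hypothetical install location.
// Hadoop's Shell class (see Shell.java:278 in the trace) looks for
// winutils.exe under <hadoop.home.dir>\bin, falling back to %HADOOP_HOME%.
// The property must be set before Shell's static initializer runs, i.e.
// before the first Hadoop class is touched.
System.setProperty("hadoop.home.dir", "C:\\hadoop");
// Equivalently, set HADOOP_HOME=C:\hadoop in the environment and place
// winutils.exe in C:\hadoop\bin before starting the JVM.
{code}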
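
For point (2), here is a minimal sketch of the kind of guard the report asks for, not the fix that actually shipped: have the connector's check() method catch the session-creation failure and return it as a status string rather than succeeding. The getSession() helper name and the exact ManifoldCF signatures are assumptions for illustration.

{code}
// Hypothetical sketch of HDFSOutputConnector.check(); names assumed.
public String check() throws ManifoldCFException {
  try {
    getSession(); // builds the HDFSSession, as in the stack trace above
  } catch (java.io.IOException e) {
    // Surface failures such as the missing winutils.exe binary instead
    // of reporting a working connection.
    return "Connection error: " + e.getMessage();
  }
  return super.check();
}
{code}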



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
