manifoldcf-dev mailing list archives

From "Karl Wright (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (CONNECTORS-970) Hadoop error and silent failure
Date Mon, 04 Aug 2014 22:31:13 GMT

     [ https://issues.apache.org/jira/browse/CONNECTORS-970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karl Wright updated CONNECTORS-970:
-----------------------------------

    Fix Version/s:     (was: ManifoldCF 1.7)
                   ManifoldCF 2.0

Moving to MCF 2.0, since I haven't heard from Minoru-san.

> Hadoop error and silent failure
> -------------------------------
>
>                 Key: CONNECTORS-970
>                 URL: https://issues.apache.org/jira/browse/CONNECTORS-970
>             Project: ManifoldCF
>          Issue Type: Bug
>          Components: HDFS connector
>    Affects Versions: ManifoldCF 1.6.1
>            Reporter: Karl Wright
>            Assignee: Minoru Osuka
>             Fix For: ManifoldCF 2.0
>
>
> HDFS output connector, during its check() call, succeeds even though the following is dumped to stderr:
> {code}
> [Thread-475] ERROR org.apache.hadoop.util.Shell - Failed to locate the winutils binary in the hadoop binary path
> java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
>         at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
>         at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
>         at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
>         at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
>         at org.apache.hadoop.conf.Configuration.getTrimmedStrings(Configuration.java:1546)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:519)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2433)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
>         at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:156)
>         at org.apache.hadoop.fs.FileSystem$1.run(FileSystem.java:153)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:415)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:153)
>         at org.apache.manifoldcf.agents.output.hdfs.HDFSSession.<init>(HDFSSession.java:59)
>         at org.apache.manifoldcf.agents.output.hdfs.HDFSOutputConnector$GetSessionThread.run(HDFSOutputConnector.java:822)
> {code}
> So there may be two problems here.  (1) If Hadoop has environment requirements beyond standard Java support, we need to know what those are and make sure they are documented.  (2) If those requirements are not met, the check() method should report an appropriate status rather than succeed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
