manifoldcf-dev mailing list archives

From "Karl Wright (JIRA)" <>
Subject [jira] [Created] (CONNECTORS-970) Hadoop error and silent failure
Date Wed, 18 Jun 2014 14:20:03 GMT
Karl Wright created CONNECTORS-970:

             Summary: Hadoop error and silent failure
                 Key: CONNECTORS-970
             Project: ManifoldCF
          Issue Type: Bug
          Components: HDFS connector
    Affects Versions: ManifoldCF 1.6.1
            Reporter: Karl Wright
            Assignee: Minoru Osuka
             Fix For: ManifoldCF 1.7

The HDFS output connector, during its check() call, succeeds even though the following is dumped
to stderr:

[Thread-475] ERROR org.apache.hadoop.util.Shell - Failed to locate the winutils
binary in the hadoop binary path Could not locate executable null\bin\winutils.exe in the
Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(
        at org.apache.hadoop.util.Shell.getWinUtilsPath(
        at org.apache.hadoop.util.Shell.<clinit>(
        at org.apache.hadoop.util.StringUtils.<clinit>(
        at org.apache.hadoop.conf.Configuration.getTrimmedStrings(Configuration.
        at org.apache.hadoop.hdfs.DFSClient.<init>(
        at org.apache.hadoop.hdfs.DFSClient.<init>(
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFi
        at org.apache.hadoop.fs.FileSystem.createFileSystem(
        at org.apache.hadoop.fs.FileSystem.access$200(
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(
        at org.apache.hadoop.fs.FileSystem$Cache.get(
        at org.apache.hadoop.fs.FileSystem.get(
        at org.apache.hadoop.fs.FileSystem$
        at org.apache.hadoop.fs.FileSystem$
        at Method)
        at org.apache.hadoop.fs.FileSystem.get(
        at org.apache.manifoldcf.agents.output.hdfs.HDFSSession.<init>(HDFSSessi
        at org.apache.manifoldcf.agents.output.hdfs.HDFSOutputConnector$GetSessi

So there may be two problems here.  (1) If Hadoop has environment requirements beyond standard
Java support, we need to know what those are and make sure they are documented.  And (2) if
those requirements are not met, the check() method should report an appropriate failure status,
not succeed.
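A minimal sketch of point (2), not the actual ManifoldCF connector code: the key detail is that Hadoop's winutils failure surfaces during class initialization (Shell.&lt;clinit&gt; in the trace above), so it arrives as an Error, not an Exception. A check() that only catches Exception will let it escape to stderr while still reporting success. The class and method names below are illustrative stand-ins.

```java
// Hypothetical sketch of a connector check() that reports Hadoop
// environment failures instead of silently succeeding.
public class CheckSketch {

    // Stand-in for HDFS session construction; the real connector would
    // build an org.apache.hadoop.fs.FileSystem here. We simulate the
    // winutils failure, which appears as a class-initialization Error.
    static void openSession() {
        throw new ExceptionInInitializerError(
            "Failed to locate the winutils binary in the hadoop binary path");
    }

    // check() converts any thrown problem into a status string.
    static String check() {
        try {
            openSession();
            return "Connection working";
        } catch (Throwable e) {
            // Catch Throwable, not just Exception: Hadoop's static
            // initializer failures are Errors and would otherwise bypass
            // an Exception-only handler.
            return "Connection failed: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(check());
    }
}
```

With this shape, the UI's "check connection" action would display the failure text rather than a misleading success message.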

This message was sent by Atlassian JIRA
