hadoop-common-issues mailing list archives

From "Chen Jia (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-15017) is not a valid DFS filename
Date Tue, 07 Nov 2017 01:06:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-15017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16241273#comment-16241273 ]

Chen Jia commented on HADOOP-15017:
-----------------------------------

Thanks for your answer and the reminder :)
I have solved this problem by updating the Hadoop version in my Maven dependencies.
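
For anyone hitting the same trace: the change was just aligning the Hadoop client artifact in pom.xml with the cluster version. A minimal sketch (the version below is illustrative, not necessarily the one you need):
{code:xml}
<!-- Illustrative: use the version that matches your cluster. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.7.3</version>
</dependency>
{code}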

>  is not a valid DFS filename
> ----------------------------
>
>                 Key: HADOOP-15017
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15017
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Chen Jia
>
> I encountered the following error:
> {code}
> 2017-11-06 14:20:23.039 ERROR   --- [           main] org.apache.sqoop.Sqoop                  : Got exception running Sqoop: java.lang.IllegalArgumentException: Pathname /D:/Repositories/Maven/mysql/mysql-connector-java/5.1.39/mysql-connector-java-5.1.39.jar from hdfs://10.162.3.171:8020/D:/Repositories/Maven/mysql/mysql-connector-java/5.1.39/mysql-connector-java-5.1.39.jar is not a valid DFS filename.
> java.lang.IllegalArgumentException: Pathname /D:/Repositories/Maven/mysql/mysql-connector-java/5.1.39/mysql-connector-java-5.1.39.jar from hdfs://10.162.3.171:8020/D:/Repositories/Maven/mysql/mysql-connector-java/5.1.39/mysql-connector-java-5.1.39.jar is not a valid DFS filename.
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:190)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:98)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1112)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
> 	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> 	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
> 	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
> 	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
> 	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
> 	at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
> 	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 	at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Unknown Source)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> 	at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 	at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
> 	at org.apache.sqoop.mapreduce.ExportJobBase.doSubmitJob(ExportJobBase.java:296)
> 	at org.apache.sqoop.mapreduce.ExportJobBase.runJob(ExportJobBase.java:273)
> 	at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:395)
> 	at org.apache.sqoop.manager.SqlManager.exportTable(SqlManager.java:828)
> 	at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:81)
> 	at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:100)
> 	at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
> 	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 	at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
> 	at cn.com.audiservice.service.SqoopTestService.main(SqoopTestService.java:42)
> {code}
> My code is as follows:
> {code}
> import java.util.Arrays;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.sqoop.Sqoop;
> import org.apache.sqoop.tool.SqoopTool;
> import org.apache.sqoop.util.OptionsFileUtil;
> 
> public class SqoopTestService {
> 	public static void main(String[] args){
> 		args = new String[]{
> 				"export","--connect","XXXXXXXXXXXXX",
> 				"--username","XXXXXXXXXXX",
> 				"--password","XXXXXXXXXXXXXX",
> 				"--table","XXXXXXXXXX",
> 				"--export-dir","/files/ftp_cluster_222/opdw4_ad/CD0402060201AAUDI201711020000000.txt",
> 				"--input-fields-terminated-by","'|'"
> 		};
> 		// Expand any options-file arguments
> 		String[] expandedArgs = null;
> 		try{
> 			expandedArgs = OptionsFileUtil.expandArguments(args);
> 		}catch(Exception ex){
> 			System.err.println(ex.getMessage());
> 			System.err.println("Try 'sqoop help' for usage.");
> 			return; // expandedArgs would be null past this point
> 		}
> 		
> 		String toolName = expandedArgs[0];
> 		com.cloudera.sqoop.tool.SqoopTool tool = (com.cloudera.sqoop.tool.SqoopTool) SqoopTool.getTool(toolName);
> 		if(null == tool){
> 			System.err.println("No such sqoop tool: " + toolName + ". See 'sqoop help'.");
> 			return;
> 		}
> 		
> 		Configuration conf = new Configuration();
> 		conf.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
> 		conf.set("fs.default.name", "hdfs://10.162.3.171:8020"); // set the Hadoop (NameNode) service address
> 
> 		Configuration pluginConf = SqoopTool.loadPlugins(conf);
> 		
> 		Sqoop sqoop = new Sqoop(tool, pluginConf);
> 		Sqoop.runSqoop(sqoop, Arrays.copyOfRange(expandedArgs, 1, expandedArgs.length));
> 		
> 	}
> }
> {code}


