hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-16983) getFileStatus on accessible s3a://[bucket-name]/folder: throws com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden;
Date Mon, 10 Jul 2017 06:23:02 GMT

    [ https://issues.apache.org/jira/browse/HIVE-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16079908 ]

Hive QA commented on HIVE-16983:
--------------------------------



Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12876365/HIVE-16983-branch-2.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/5927/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/5927/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-5927/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and
output '+ date '+%Y-%m-%d %T.%3N'
2017-07-10 06:21:54.491
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-5927/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-2.1 ]]
+ [[ -d apache-github-branch-2.1-source ]]
+ [[ ! -d apache-github-branch-2.1-source/.git ]]
+ [[ ! -d apache-github-branch-2.1-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-07-10 06:21:54.494
+ cd apache-github-branch-2.1-source
+ git fetch origin
From https://github.com/apache/hive
   a27ccaa..5a62503  branch-2.1 -> origin/branch-2.1
   1a9ed41..07522ad  branch-1   -> origin/branch-1
   a3f718f..18ddf46  branch-1.2 -> origin/branch-1.2
   48f6e30..6a63742  branch-2   -> origin/branch-2
   b45e401..f24b76f  branch-2.0 -> origin/branch-2.0
   f31f749..61867c7  branch-2.2 -> origin/branch-2.2
 * [new branch]      branch-2.3 -> origin/branch-2.3
   ccea0d6..81853c1  hive-14535 -> origin/hive-14535
   5814c11..7f5460d  master     -> origin/master
 * [new branch]      storage-branch-2.3 -> origin/storage-branch-2.3
 * [new branch]      storage-branch-2.4 -> origin/storage-branch-2.4
 * [new tag]         storage-release-2.4.0rc0 -> storage-release-2.4.0rc0
 * [new tag]         rel/release-1.2.2 -> rel/release-1.2.2
 * [new tag]         rel/storage-release-2.3.0 -> rel/storage-release-2.3.0
 * [new tag]         rel/storage-release-2.3.1 -> rel/storage-release-2.3.1
 * [new tag]         release-2.3.0-rc0 -> release-2.3.0-rc0
 * [new tag]         storage-release-2.3.0rc0 -> storage-release-2.3.0rc0
+ git reset --hard HEAD
HEAD is now at a27ccaa HIVE-14084 HPLSQL multiple db connection does not switch back to Hive (Fei Hui via Alan Gates)
+ git clean -f -d
+ git checkout branch-2.1
Already on 'branch-2.1'
Your branch is behind 'origin/branch-2.1' by 2 commits, and can be fast-forwarded.
  (use "git pull" to update your local branch)
+ git reset --hard origin/branch-2.1
HEAD is now at 5a62503 HIVE-16683. Backport of ORC-125 to fix incorrect handling of future
+ git merge --ff-only origin/branch-2.1
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-07-10 06:22:05.047
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: a/pom.xml: No such file or directory
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12876365 - PreCommit-HIVE-Build
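
A note on the failure above: "The patch does not appear to apply with p0, p1, or p2" means smart-apply-patch.sh tried the standard strip levels and none matched, which usually indicates the diff's context no longer lines up with the branch it targets. A quick way to reproduce the check locally, as a sketch (assuming a clean checkout and the attached patch file saved as HIVE-16983-branch-2.1.patch):

{noformat}
# Sync a clean checkout of the branch the patch targets.
git checkout branch-2.1
git fetch origin && git reset --hard origin/branch-2.1

# --check reports whether the patch would apply, without modifying the tree;
# -p1 strips the a/ and b/ path prefixes that git-style diffs carry.
git apply --check -p1 HIVE-16983-branch-2.1.patch
{noformat}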

> getFileStatus on accessible s3a://[bucket-name]/folder: throws com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden;
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-16983
>                 URL: https://issues.apache.org/jira/browse/HIVE-16983
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>    Affects Versions: 2.1.1
>            Environment: Hive 2.1.1 on Ubuntu 14.04 AMI in AWS EC2, connecting to S3 using the s3a:// protocol
>            Reporter: Alex Baretto
>            Assignee: Vlad Gudikov
>             Fix For: 2.1.1
>
>         Attachments: HIVE-16983-branch-2.1.patch
>
>
> I've followed various published documentation on integrating Apache Hive 2.1.1 with AWS S3 using the `s3a://` scheme, configuring `fs.s3a.access.key` and `fs.s3a.secret.key` in both `hadoop/etc/hadoop/core-site.xml` and `hive/conf/hive-site.xml`.
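> For reference, a minimal sketch of what those two properties look like in either file (placeholder values; standard Hadoop configuration XML):
>
>     <property>
>         <name>fs.s3a.access.key</name>
>         <value>YOUR_AWS_ACCESS_KEY_ID</value>
>     </property>
>     <property>
>         <name>fs.s3a.secret.key</name>
>         <value>YOUR_AWS_SECRET_ACCESS_KEY</value>
>     </property>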
> I am at the point where I am able to get `hdfs dfs -ls s3a://[bucket-name]/` to work properly (it returns the S3 listing of that bucket), so I know my creds, bucket access, and overall Hadoop setup are valid.
>     hdfs dfs -ls s3a://[bucket-name]/
>     
>     drwxrwxrwx   - hdfs hdfs          0 2017-06-27 22:43 s3a://[bucket-name]/files
>     ...etc. 
>     hdfs dfs -ls s3a://[bucket-name]/files
>     
>     drwxrwxrwx   - hdfs hdfs          0 2017-06-27 22:43 s3a://[bucket-name]/files/my-csv.csv
> However, when I attempt to access the same S3 resources from Hive, e.g. run any `CREATE SCHEMA` or `CREATE EXTERNAL TABLE` statement using `LOCATION 's3a://[bucket-name]/files/'`, it fails.
> For example:
>     CREATE EXTERNAL TABLE IF NOT EXISTS mydb.my_table (
>         my_table_id string,
>         my_tstamp timestamp,
>         my_sig bigint
>     )
>     ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
>     LOCATION 's3a://[bucket-name]/files/';
> I keep getting this error:
>     FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Got exception: java.nio.file.AccessDeniedException s3a://[bucket-name]/files: getFileStatus on s3a://[bucket-name]/files: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: C9CF3F9C50EF08D1), S3 Extended Request ID: T2xZ87REKvhkvzf+hdPTOh7CA7paRpIp6IrMWnDqNFfDWerkZuAIgBpvxilv6USD0RSxM9ymM6I=)
> This makes no sense: the hdfs test shows I have access to the bucket, and I've added the proper creds to hive-site.xml.
> Anyone have any idea what's missing from this equation?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
