accumulo-user mailing list archives

From Mike Atlas <m...@weft.io>
Subject Re: first time setup: Mkdirs failed to create hdfs directory /accumulo/recovery/
Date Tue, 06 Jan 2015 01:01:39 GMT
Oh well. I ended up deleting my entire /accumulo HDFS directory and
re-initializing Accumulo. I'm in business now (see below).
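For the record, the reset boiled down to two commands. Here's a dry-run sketch (the helper function is hypothetical; it only prints the commands so they can be reviewed before running, and it assumes the Accumulo root is /accumulo in HDFS with $ACCUMULO_HOME set):

```shell
# Hypothetical dry-run helper: prints the wipe-and-reinit commands
# instead of running them. Review the output, then paste it by hand.
reset_accumulo_cmds() {
  root="${1:-/accumulo}"                            # HDFS directory to wipe
  printf '%s\n' "hadoop fs -rm -r ${root}"          # delete the old Accumulo root
  printf '%s\n' '$ACCUMULO_HOME/bin/accumulo init'  # re-init (prompts for instance name/password)
}
reset_accumulo_cmds /accumulo
```

Note that `accumulo init` creates a brand-new instance, so any existing tables are gone for good.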

Thanks...
Mike


$ java -cp ./target/geomesa-quickstart-1.0-SNAPSHOT.jar
org.geomesa.QuickStart -instanceId accumulo -zookeepers "localhost:2181"
-user root -password nowayjose -tableName geomQs
log4j:WARN No appenders could be found for logger
(org.apache.accumulo.fate.zookeeper.ZooSession).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
more info.
Creating feature-type (schema):  QuickStart
Creating new features
Inserting new features
Submitting query
1.  Bierce|589|Sat Jul 05 06:02:15 UTC 2014|POINT (-76.88146600670152
-37.40156607152168)|null
2.  Bierce|322|Tue Jul 15 21:09:42 UTC 2014|POINT (-77.01760098223343
-37.30933767159561)|null
3.  Bierce|343|Wed Aug 06 08:59:22 UTC 2014|POINT (-76.66826220670282
-37.44503877750368)|null
4.  Bierce|925|Mon Aug 18 03:28:33 UTC 2014|POINT (-76.5621106573523
-37.34321201566148)|null
5.  Bierce|394|Fri Aug 01 23:55:05 UTC 2014|POINT (-77.42555615743139
-37.26710898726304)|null
6.  Bierce|640|Sun Sep 14 19:48:25 UTC 2014|POINT (-77.36222958792739
-37.13013846773835)|null
7.  Bierce|931|Fri Jul 04 22:25:38 UTC 2014|POINT (-76.51304097832912
-37.49406125975311)|null
8.  Bierce|886|Tue Jul 22 18:12:36 UTC 2014|POINT (-76.59795732474399
-37.18420917493149)|null
9.  Bierce|259|Thu Aug 28 19:59:30 UTC 2014|POINT (-76.90122194030118
-37.148525741002466)|null
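As an aside, the log4j warnings at the top of that run are harmless noise; dropping a minimal log4j.properties onto the quickstart classpath quiets them (a sketch, assuming log4j 1.2 as bundled with Accumulo 1.5):

```properties
# Minimal log4j 1.2 configuration: send INFO and above to the console.
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} [%c{2}] %-5p: %m%n
```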


On Mon, Jan 5, 2015 at 7:20 PM, Mike Atlas <mike@weft.io> wrote:

> Should have included that... It does seem that tserver is running as
> hduser as well. See below:
>
> $ hadoop version
> Hadoop 2.2.0
> Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
> Compiled by hortonmu on 2013-10-07T06:28Z
> Compiled with protoc 2.5.0
> From source with checksum 79e53ce7994d1628b240f09af91e1af4
> This command was run using
> /usr/local/hadoop/share/hadoop/common/hadoop-common-2.2.0.jar
> $ jps -v
> 7930 Main -Dapp=tserver -XX:+UseConcMarkSweepGC
> -XX:CMSInitiatingOccupancyFraction=75 -Djava.net.preferIPv4Stack=true
> -Xmx2g -Xms2g -XX:NewSize=1G -XX:MaxNewSize=1G -XX:OnOutOfMemoryError=kill
> -9 %p
> -Djavax.xml.parsers.DocumentBuilderFactory=com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl
> -Djava.library.path=/usr/local/hadoop/lib/native
> -Dorg.apache.accumulo.core.home.dir=/usr/local/accumulo
> -Dhadoop.home.dir=/usr/local/hadoop
> -Dzookeeper.home.dir=/usr/share/zookeeper
> $ ps -al | grep 7930
> 0 S  1001  7930     1  8  80   0 - 645842 futex_ pts/2   00:00:36 java
> hduser@accumulo:/home$ id -u hduser
> 1001
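(A quicker cross-check than comparing numeric uids by hand: ps can print the owning user of a pid directly.)

```shell
# Print the user owning a process. $$ is the current shell; substitute
# the tserver pid from jps (7930 above) when checking for real.
ps -o user= -p $$
```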
>
> Also pardon the mix-up above in hdfs /accumulo0, as I was trying a fresh
> hdfs folder while taking debugging notes for my first email. The same
> problem occurred.
>
> Thanks for any help,
>
> Mike
>
> On Mon, Jan 5, 2015 at 7:08 PM, John Vines <vines@apache.org> wrote:
>
>> And can you validate the user the tserver process is running as?
>>
>> On Mon, Jan 5, 2015 at 7:07 PM, John Vines <vines@apache.org> wrote:
>>
>>> What version of hadoop?
>>>
>>> On Mon, Jan 5, 2015 at 6:50 PM, Mike Atlas <mike@weft.io> wrote:
>>>
>>>> Hello,
>>>>
>>>> I'm running Accumulo 1.5.2, trying to test out the GeoMesa
>>>> <http://www.geomesa.org/2014/05/28/geomesa-quickstart/> family of
>>>> spatio-temporal iterators using their quickstart demonstration tool. I
>>>> think I'm not making progress due to my Accumulo setup, though, so can
>>>> someone validate that all looks good from here?
>>>>
>>>> start-all.sh output:
>>>>
>>>> hduser@accumulo:~$ $ACCUMULO_HOME/bin/start-all.sh
>>>> Starting monitor on localhost
>>>> Starting tablet servers .... done
>>>> Starting tablet server on localhost
>>>> 2015-01-05 21:37:18,523 [server.Accumulo] INFO : Attempting to talk to zookeeper
>>>> 2015-01-05 21:37:18,772 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>>>> 2015-01-05 21:37:19,028 [server.Accumulo] INFO : Connected to HDFS
>>>> Starting master on localhost
>>>> Starting garbage collector on localhost
>>>> Starting tracer on localhost
>>>>
>>>> hduser@accumulo:~$
>>>>
>>>>
>>>> I do believe my HDFS is set up correctly:
>>>>
>>>> hduser@accumulo:/home/ubuntu/geomesa-quickstart$ hadoop fs -ls /accumulo
>>>> Found 5 items
>>>> drwxrwxrwx   - hduser supergroup          0 2014-12-10 01:04 /accumulo/instance_id
>>>> drwxrwxrwx   - hduser supergroup          0 2015-01-05 21:22 /accumulo/recovery
>>>> drwxrwxrwx   - hduser supergroup          0 2015-01-05 20:14 /accumulo/tables
>>>> drwxrwxrwx   - hduser supergroup          0 2014-12-10 01:04 /accumulo/version
>>>> drwxrwxrwx   - hduser supergroup          0 2014-12-10 01:05 /accumulo/wal
>>>>
>>>>
>>>> However, when I check the Accumulo monitor logs, I see these errors
>>>> post-startup:
>>>>
>>>> java.io.IOException: Mkdirs failed to create directory /accumulo/recovery/15664488-bd10-4d8d-9584-f88d8595a07c/part-r-00000
>>>> 	java.io.IOException: Mkdirs failed to create directory /accumulo/recovery/15664488-bd10-4d8d-9584-f88d8595a07c/part-r-00000
>>>> 		at org.apache.hadoop.io.MapFile$Writer.<init>(MapFile.java:264)
>>>> 		at org.apache.hadoop.io.MapFile$Writer.<init>(MapFile.java:103)
>>>> 		at org.apache.accumulo.server.tabletserver.log.LogSorter$LogProcessor.writeBuffer(LogSorter.java:196)
>>>> 		at org.apache.accumulo.server.tabletserver.log.LogSorter$LogProcessor.sort(LogSorter.java:166)
>>>> 		at org.apache.accumulo.server.tabletserver.log.LogSorter$LogProcessor.process(LogSorter.java:89)
>>>> 		at org.apache.accumulo.server.zookeeper.DistributedWorkQueue$1.run(DistributedWorkQueue.java:101)
>>>> 		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>>> 		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>>> 		at org.apache.accumulo.trace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>> 		at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>> 		at java.lang.Thread.run(Thread.java:745)
>>>>
>>>>
>>>> I don't really understand: I started Accumulo as hduser, the same user
>>>> that has access to the HDFS directory /accumulo/recovery, and it looks
>>>> like the directory actually was created, except for the last path
>>>> component (part-r-00000):
>>>>
>>>> hduser@accumulo:~$ hadoop fs -ls /accumulo0/recovery/
>>>> Found 1 items
>>>> drwxr-xr-x   - hduser supergroup          0 2015-01-05 22:11 /accumulo/recovery/87fb7aac-0274-4aea-8014-9d53dbbdfbbc
>>>>
>>>>
>>>> I'm not out of physical disk space:
>>>>
>>>> hduser@accumulo:~$ df -h
>>>> Filesystem      Size  Used Avail Use% Mounted on
>>>> /dev/xvda1     1008G  8.5G  959G   1% /
>>>>
>>>>
>>>> What could be going on here? Any ideas on something simple I could have
>>>> missed?
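>>>>
>>>> One thing I should probably rule out: if the process resolves the
>>>> default filesystem to file:/// (say, because core-site.xml isn't on
>>>> its classpath), "Mkdirs failed" would be against local disk rather
>>>> than HDFS. A quick check with the Hadoop 2 CLI (command sketch):

```
hdfs getconf -confKey fs.defaultFS   # should print hdfs://<namenode>:<port>, not file:///
```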
>>>>
>>>> Thanks,
>>>> Mike
>>>>
>>>
>>>
>>
>
