hbase-user mailing list archives

From Ioakim Perros <imper...@gmail.com>
Subject Re: Efficient read/write - Iterative M/R jobs
Date Mon, 23 Jul 2012 23:15:12 GMT
Thank you very much for your prompt response :-)

Hope Amazon Web Services will help me with this one.
IP


On 07/24/2012 02:06 AM, Jean-Daniel Cryans wrote:
>> ... INFO mapred.JobClient: Task Id : attempt_201207232344_0001_m_000000_0,
>> Status : FAILED
>> java.lang.IllegalArgumentException: Can't read partitions file
>>     at org.apache.hadoop.hbase.mapreduce.hadoopbackport.TotalOrderPartitioner.setConf(TotalOrderPartitioner.java:111)
>> ...
>>
>> While googling for a solution, I came across this page:
>> http://hbase.apache.org/book/trouble.mapreduce.html
>> which suggests a misconfiguration related to a fully distributed
>> environment.
>>
>> I would therefore like to ask whether it is even possible to bulk import
>> data in pseudo-distributed mode, and if so, does anyone have a guess
>> about this error?
> AFAIK you just can't use the local job tracker for this, so you do
> need to start one.
>
> J-D
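
For reference, a minimal sketch of the bulk-load job setup that exercises
this code path, assuming the HBase 0.92-era API matching the stack trace
above. The table name "mytable", the column family "cf", the qualifier "q",
and the MyPutMapper class are all hypothetical placeholders.
HFileOutputFormat.configureIncrementalLoad() is what writes the partitions
file that TotalOrderPartitioner.setConf() later tries to read, which is why
the job breaks under the local job runner (mapred.job.tracker left at its
default value "local"):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class BulkLoadSketch {

  // Hypothetical mapper: turns one tab-separated input line into a Put,
  // keyed by the row so the TotalOrderPartitioner can route it to the
  // reducer covering the right region.
  static class MyPutMapper
      extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    protected void map(LongWritable offset, Text line, Context context)
        throws IOException, InterruptedException {
      String[] fields = line.toString().split("\t", 2);
      if (fields.length < 2) return; // skip malformed lines
      byte[] row = Bytes.toBytes(fields[0]);
      Put put = new Put(row);
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(fields[1]));
      context.write(new ImmutableBytesWritable(row), put);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // mapred.job.tracker must name a running JobTracker (e.g. localhost:9001
    // in a pseudo-distributed setup), not the default "local" runner; it is
    // normally set in mapred-site.xml rather than in code.

    Job job = new Job(conf, "bulkload-sketch");
    job.setJarByClass(BulkLoadSketch.class);
    job.setMapperClass(MyPutMapper.class);
    job.setMapOutputKeyClass(ImmutableBytesWritable.class);
    job.setMapOutputValueClass(Put.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));

    // Samples the target table's region boundaries, writes the partitions
    // file, and wires in TotalOrderPartitioner + HFileOutputFormat. Every
    // map task must be able to read that partitions file, which is what
    // fails under the local job runner.
    HTable table = new HTable(conf, "mytable");
    HFileOutputFormat.configureIncrementalLoad(job, table);

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

This matches J-D's answer: in a pseudo-distributed setup, point
mapred.job.tracker in mapred-site.xml at the JobTracker you start (e.g.
localhost:9001) instead of leaving it at "local", then submit the job
against it. Once the job completes, the generated HFiles under args[1] are
handed to the table with the completebulkload tool
(org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles).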

