Hi Krishna,
I get your point. I'd recommend writing a custom DBWritable class,
implementing its write() method, and emitting an instance of this
writable as the value from your Mapper (if it's a map-only job) or
Reducer (in the case of a MapReduce job).
The framework will do the rest for you.
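As a rough sketch, such a class could look like the following. The table name, column names, and types here are made up for illustration; adjust them to your schema (the writable must bind parameters in the same order as the columns configured for PhoenixOutputFormat):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

// Hypothetical record for upserting into a two-column table,
// e.g. STOCK(STOCK_NAME VARCHAR, PRICE DOUBLE).
public class StockWritable implements DBWritable, Writable {

    private String stockName;
    private double price;

    // Called by PhoenixOutputFormat to bind the UPSERT statement's
    // parameters, in column order.
    @Override
    public void write(PreparedStatement statement) throws SQLException {
        statement.setString(1, stockName);
        statement.setDouble(2, price);
    }

    // Used on the input side (PhoenixInputFormat) to populate the
    // record from a query result.
    @Override
    public void readFields(ResultSet resultSet) throws SQLException {
        stockName = resultSet.getString(1);
        price = resultSet.getDouble(2);
    }

    // Writable methods, needed if the record travels through the
    // shuffle between mapper and reducer.
    @Override
    public void write(DataOutput out) throws IOException {
        out.writeUTF(stockName);
        out.writeDouble(price);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        stockName = in.readUTF();
        price = in.readDouble();
    }

    public void setStockName(String stockName) { this.stockName = stockName; }
    public void setPrice(double price) { this.price = price; }
}
```

Your mapper (or reducer) would then emit this as the value, e.g. context.write(NullWritable.get(), stockWritable), and the output format takes care of the upsert.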
Regards
Ravi
On Sun, Mar 1, 2015 at 11:29 PM, Krishna <research800@gmail.com> wrote:
> Ravi, thanks.
> If the target table is salted, do I need to compute the leading byte (as I
> understand, it's a hash value) in the mapper?
>
>
> On Sunday, March 1, 2015, Ravi Kiran <maghamravikiran@gmail.com> wrote:
>
>> Hi Krishna,
>>
>> I assume you have already taken a look at the example here
>> http://phoenix.apache.org/phoenix_mr.html
>>
>> > Is there a need to compute hash byte in the MR job?
>> Can you please elaborate a bit more on what the hash byte is?
>>
>> > Are keys and values stored in BytesWritable before doing a
>> "context.write(...)" in the mapper?
>> The key-values passed from a mapper to a reducer are the usual
>> Writable/WritableComparable instances, so you can definitely use
>> BytesWritable.
>>
>> Regards
>> Ravi
>>
>> On Sun, Mar 1, 2015 at 10:04 PM, Krishna <research800@gmail.com> wrote:
>>
>>> Could someone comment on the following questions regarding the use of
>>> PhoenixOutputFormat in a standalone MR job:
>>>
>>> - Is there a need to compute hash byte in the MR job?
>>> - Are keys and values stored in BytesWritable before doing a
>>> "context.write(...)" in the mapper?
>>>