accumulo-user mailing list archives

From: Keys Botzum <kbot...@maprtech.com>
Subject: Re: Accumulo on MapR Continued - LargeRowTest
Date: Thu, 12 Apr 2012 12:26:27 GMT
Keith,

I've run the commands you requested. I hope this is helpful to you. By the way, the reason my path looks a little different is that I'm using MapR's NFS access, which makes it much easier to get to the files.

$ ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28903/tables/1/default_tablet/F0000009.rf |grep 'false ->' |cut -c 1-64 | md5sum
Setting continuous mode
2012-04-12 05:08:44,2798 Program: fileclient on Host: NULL IP: 0.0.0.0, Port: 0, PID: 0
addcfa8442914899a998d38bbf917d67  -

Looks like that's the result you expect.

Now for the next command:

mapr@SE-test-04:/opt/accumulo-1.4.0/bin$ for RF in /mapr/my.cluster.com/user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf; do ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF; done |grep 'false ->'|cut -c 1-64 |sort |md5sum
91fae26d3b1d0ccc8b7d860a6bdb8385  -


As you can see, this result is very different. Just to make sure, I also ran this command, with the same result:
$ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF; done |grep 'false ->'|cut -c 1-64 |sort |md5sum

In case it might be useful I checksummed each file separately:
$ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do echo $RF; ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF | grep 'false ->'|cut -c 1-64 | md5sum; done
/user/mapr/accumulo-SE-test-04-28903/tables/2/default_tablet/F000000y.rf
b0dda4e93f4fcc04a784dec8f8e9841d  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000000/F000000p.rf
e0736ed51112529836253e8d0afb3253  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000001/F000000q.rf
008d03a0643cfdf83198c60ac9d45807  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000002/F000000s.rf
244f3bb7e61a30b4daed3aceb4efa7a1  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000003/F000000r.rf
7c8f4ff718b75051c4e6cb684689ca69  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000004/F000000t.rf
e006faaee8f8a57c3285f7b77779d63b  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000005/F000000u.rf
e914b210587161e71f398f325472e2bc  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000006/F000000v.rf
112f7d8dbe5b4cc9c971f7a7dfb56d9d  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000007/F000000w.rf
df292cfffd3775a24b91fa56bd1f3d00  -
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000008/F000000x.rf
a4a89e5cafbed9e89aca773f8f904b8f  -

I also collected the first few lines of each file just to make sure the md5sum is on the right stuff. Does this look right to you?
mapr@SE-test-04:/opt/accumulo-1.4.0/bin$ for RF in `hadoop fs -ls /user/mapr/accumulo-SE-test-04-28903/tables/2/*/*.rf | awk '{print $NF}'` ; do echo $RF; ./accumulo org.apache.accumulo.core.file.rfile.PrintInfo -d $RF | grep 'false ->'|head; done
/user/mapr/accumulo-SE-test-04-28903/tables/2/default_tablet/F000000y.rf
l0\w{QJV@&S.<B8bf`#^W9aa%1"1J*$8!f2GSrAA$oGVIWqVHO<j$OM&F.54TE77... TRUNCATED : [] 1334232102926 false -> 21
lp*C*eLap:q6/'Ut/.+t.Wt;Jk%_H=^ yiBw-1q\,]RnC('d B3W'm..WP+8[9r)... TRUNCATED : [] 1334232103163 false -> 78
n=b\`{B!Bdv:=E:GQAhb`U_d< 6>RX$p_pJ'Gh>%>/,uI":r&g60=`]U-MSr]$i ... TRUNCATED : [] 1334232102935 false -> 18
o;z=U'`^gr{6Z]7z"hnc-U qiT= #/7.\!"6\jNqK EC*Y#(OCO!*$eWBv+[N$_L... TRUNCATED : [] 1334232103084 false -> 44
rHwA9Q[MVg^Jnap-hIj(5q5/#Cd{@^^%S,!mQ`.;f(\Ws#K6.[`sB5lI(MuVB^(F... TRUNCATED : [] 1334232103068 false -> 41
sF!!vX'8{uyE/1p 9pvA^];kP/*?m5gTP9_VfbPD+v7TTnSbC6SH/Uz2=v'^3ryX... TRUNCATED : [] 1334232103090 false -> 67
uS]C[&m^!kwa04(m$Q=$S0T?gf/F-dEeFDE,ZX[;E"%q@$5c!N{&HfdB8:?r(PIP... TRUNCATED : [] 1334232103084 false -> 64
uoa=PnI$MC6j=O!H['C- nKGB,Fawd:=JA#>$=%b=H]gFS!+]pSHEHM)'`"-!HW%... TRUNCATED : [] 1334232102935 false -> 7
vQeKP./>FeKGML)TQxO-4yD:8RkHF:0`CQE8[YGrfp2B!o[aJ* M5*d9Mc'QuP;p... TRUNCATED : [] 1334232103168 false -> 90
w AY5<o*;8 JRSY"FPI').'KYwPM774v@LU:t34%WPGUFzL];,OuJX3Bj7C'*;h)... TRUNCATED : [] 1334232102940 false -> 4
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000000/F000000p.rf
 )zUKu6Q,6Xv0>]F.7N '6=&Y)R?>oAJ1qlv#, jX<$ZQj+pXW2DPNn\z`Q;\-:A... TRUNCATED : [] 1334232102998 false -> 53
!8`u;;$^Q1X.(*NT48T4OesPWA#"-W1q*[W"^ Y,QfPw\Ebci7GMh>6+LV7Y`Si?... TRUNCATED : [] 1334232102883 false -> 24
"6 =0CNVv+xzEB7+a_2U0f\m(ub:ZBao' $is"=,rXMD=4ps6op.U^BWa#k{Qg0S... TRUNCATED : [] 1334232102994 false -> 50
#4xO%KdP?9aENYtH2>yKYR\jA`N_+ul4$-<I,$1l7JJez#>w_K%!V#B6.MoNBgfA... TRUNCATED : [] 1334232102998 false -> 76
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000001/F000000q.rf
%C?ypYJCe e.EY+/K@0x$%cH>dsfc\ENf*arht;/1`.\*FmCq+C0moW=[W_gEECW... TRUNCATED : [] 1334232103096 false -> 47
%] zG`VVmc:!p0^LT=5IwO{o3]V]54n#f4DxF[VnIdS_$e ce(M XqIp`JGG0QQG... TRUNCATED : [] 1334232102956 false -> 16
&A]UQ`"1B/@ebqrVxgpUM%6$kPqL4GC_c7#6!v1\RR]Eg5=%>czOZ3Ucp$YK6EcQ... TRUNCATED : [] 1334232103096 false -> 73
'?[3F /4g=?/#-U3IFa;.n <('vgazD9`DY96xKZsD0:H$)CS>Q%[TG()MKR'Y<M... TRUNCATED : [] 1334232103172 false -> 99
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000002/F000000s.rf
(N_S6.V@0$H!wu,OOG9&VAhs&?M#PbNYYB[;qlZ0lZIdSG=/dzd%sDuUWC/Q?8jf... TRUNCATED : [] 1334232103049 false -> 70
(ju{+.=^\XML(47D>f8l##aX]M5D>baB]?w/;QZ*d$c$m/TAD@.Bp&X6F!%)80zb... TRUNCATED : [] 1334232102883 false -> 13
)L#W+688U29C81-#4okb#-liSsd[!MB7VO;"*nv/1LN546++1Vu(`deul$`h08,Z... TRUNCATED : [] 1334232103147 false -> 96
)hc' 6-39g4%1K>2kEQ7LkS>v8[ak9NMZLAePSDF)rfdNfwzmx-z]FRr[J#.)0RM... TRUNCATED : [] 1334232103042 false -> 39
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000003/F000000r.rf
*c.`lDdN^N-')KrZ)GUU,?W{t<=^Z5ucS6.Y/G+&",_]YAT!"XHg-7`D-@7'-jd'... TRUNCATED : [] 1334232102804 false -> 10
+u?FaLr_'\blFc;SV&TbU?d?E(a;+hT*PWotDI+0CzLz:0f7K4m'vWXkBi+=z"'f... TRUNCATED : [] 1334232102804 false -> 36
,Yd_k`'$D'\w8H{ZfOGd+]Z+ibg%> MJMZF!gc&1Lh&J5]C<ln./xu >RCa+$r%l... TRUNCATED : [] 1334232103121 false -> 93
,s/bBSX)LV^0c{DI'M*2"+@@^[XOX?#IMdC(EJ'edl9Gw{LB`k2.cwD1W6_ak"EJ... TRUNCATED : [] 1334232102984 false -> 62
.&8IEv-*rQ]OZgPK,N;!IZq1[s4\H;{eGNi=9?'Y^: `'B!z*LV!2hW<<A!Un\`(... TRUNCATED : [] 1334232102817 false -> 33
.@;bx!k<z8$Lq=E1IL&Z@(3'Pl).bnNU3kgZ_%luv>wy!aftzI>\yjG0A4[0YTWz... TRUNCATED : [] 1334232102817 false -> 2
/$J/&!EE;KK#c"cKY-@2*GSk,K)0amPADoXz:@94#,?Qd1=M?'A {,AIQjK._\h`... TRUNCATED : [] 1334232102984 false -> 59
0"<sw)5a`Y^.$::K>UxbSGb*E6;n2D],A JQOB['DfIME A*h_(f M3ff7Y)P\<?... TRUNCATED : [] 1334232103115 false -> 85
11&PgKK3=Aa"x:T9DV:j{vxtC:kC!@.::fM?.6t/=41?PWX&y?A^8=798-W^T:L4... TRUNCATED : [] 1334232102978 false -> 56
2/:^\SmDbOf=9R!Bq5E#HbDm\%3BNsA>7+FVC8P$^&&`1FJ(FwN,%]'AMVokENSf... TRUNCATED : [] 1334232103106 false -> 82
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000004/F000000t.rf
BDF+!ccFzK!=:iq;p*?zAA5[#@<'DI,&72Ps93/U.a$e?,r)61I_N>k\(g9VF.F1... TRUNCATED : [] 1334232103090 false -> 57
C?WNm)i5C2,V2U"Kv+1`!)0F!X5x3EU%0xC$t'H<;08AJOD2G%o{f./ V]#,Jh.;... TRUNCATED : [] 1334232102946 false -> 28
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000005/F000000u.rf
DQ@oN0?/h@YcO%TcGR[<JqXZ:/H+Lw8L-=$@-)HDH!U]+>>@\H?QgOggk*?#;h`1... TRUNCATED : [] 1334232103068 false -> 54
EOd"C8I>1NNKX<!e,2J"s]<NSvx\yNg;*Jd>B+Dcio^(h-TB)$@$T[6a$S+;, _r... TRUNCATED : [] 1334232103163 false -> 80
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000006/F000000v.rf
GJj<2FG[W5^%O(ON13]6>0cIPziuiJy-$44$"{v^c)r9,e $;`&3#`\^Q]gg/F;#... TRUNCATED : [] 1334232103024 false -> 51
Gx@SyNCp_x7xz[JbN1N\5ZO?E+5P'!:3lQXn\b]N{Ax=n'OsC^TyjbF.VP[[vRu[... TRUNCATED : [] 1334232102846 false -> 20
H\U)'N'_4DYfl@:o^Z1^g0bw!fOg:!> !U3c7!,#(we=US8`d<aGl$>Df*I4 Z$p... TRUNCATED : [] 1334232103133 false -> 77
J)EU]dIpNmvV2_s,$m.T=.9<oc#UDP-TcH^\QXI6:I{k[9zR"vI3&s5a<'%N"1fd... TRUNCATED : [] 1334232102846 false -> 17
JUNohx3g"9yD%DX;I;2tpL<68U'[VP6wsL*V+s=#B7FiUfx6BT_%)5*FMIS6)9N_... TRUNCATED : [] 1334232103036 false -> 74
K'[F>#BG+ iL;.{\Q8J#fvk-,N1"q'*,`ieiRY;E[;Bj<(.A7Q8_o7[\QPg<[DcF... TRUNCATED : [] 1334232103018 false -> 43
L"R:B2!WPc&T3vn*k::'FJ3+*R'0`#-vYSFaEN^^TQkiG__ZH1w`+'e#7Gk4s#-t... TRUNCATED : [] 1334232102858 false -> 14
M4Cf#97"u]fFP2#d<u=8oJ$BW>Gv1V0gV`RRFOf[uCd0(N]nqiD],HAfLpG3d#[M... TRUNCATED : [] 1334232103004 false -> 40
N2[#tA9K>koomJ,Ti@_S<6,9p)o%J,#hS%c%[Q"(:5]_e=/w>EUXuTX[a=c@U#Ap... TRUNCATED : [] 1334232103036 false -> 66
N`+y-M`C6(hjBsSoLB4xEhrU{x9+0UXLgd%J!jUC"1"b#gMAJH-+.R(Z\JE=j+fK... TRUNCATED : [] 1334232103147 false -> 97
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000007/F000000w.rf
\wH{ZUeR(*LK;s3+{l)1^ZTQr_0pTTEci^D:]c9o@3i`Orq)X91fW&n>kyK8gRs0... TRUNCATED : [] 1334232103049 false -> 46
]-e3O]_IM$p D.(cL7#$?G(X/J4A%+,Ff#5Dre%La%F20aS/%q>PD2-]$Fg6Xf><... TRUNCATED : [] 1334232103063 false -> 72
]I=:DIg/1Yg>QIiL;j!AT([DfXJ o+1MVgBz<^d]Y7me64_Fa7jUA(S[o$_]Q^Hj... TRUNCATED : [] 1334232102899 false -> 15
^+u6DeEB*3quaF)*yr*4h3p?\"?GR^_Mc0)D+g!i&_6YYOe*NMGaES=w9' /Ifd5... TRUNCATED : [] 1334232103158 false -> 98
_B W)s]l{N9OfaV(nJdC]D?(!G,I/ZsULrd?0TWes??%"FQD?O2"Z9A9VC/OF<DJ... TRUNCATED : [] 1334232102899 false -> 12
`&Q93+Z2Pv7fXFzB6t+X3bYZY:GlBZs(]v[Wg[=$ -eaysZ$`-#.\C,gfy18LD_]... TRUNCATED : [] 1334232103063 false -> 69
`T!(fz'7D\"k'yg'?q6$*DH:N3Dd\161I7]_EV#D81`O_5+YT*c%GEXn#mOZ7Pk2... TRUNCATED : [] 1334232102912 false -> 38
a8(b(3^{u(r&u^q&c?+O\b`grmd3[0P4Z;V; ];.A{tOZbnE-eRkIc:$3G'U=D'c... TRUNCATED : [] 1334232103158 false -> 95
bO!XiA85"D4lfdo8Xsc3et5FK7WZL-f^C!gc%JsQ2[[%#mYWffG;rJ(KPc4IN/^x... TRUNCATED : [] 1334232102912 false -> 9
c1S*UH[,cyT,.b-B:{F0e"L8A]VRw_NxP2cIpS64['V*F,1ug!bzbtxLvfcoF7<%... TRUNCATED : [] 1334232103153 false -> 92
/user/mapr/accumulo-SE-test-04-28903/tables/2/t-0000008/F000000x.rf
iennd4L\IJ{GYSC^s&b\X;kDb4oco8!N*QPL8:^0MO"+v'GSuN>4!<TbGc#<u[td... TRUNCATED : [] 1334232103004 false -> 55
jc$DE;4v&E%IvkvdDa;,9<:h3 =M@kfk'^va9;x0Z@!!W^JyB*!=j\bc\0INf[$L... TRUNCATED : [] 1334232103126 false -> 81



This doesn't mean much to me, but based on your earlier point I think this is an issue. What else can I gather for you?

Thanks,
Keys
________________________________
Keys Botzum
Senior Principal Technologist
WW Systems Engineering
kbotzum@maprtech.com
443-718-0098
MapR Technologies
http://www.mapr.com



On Apr 11, 2012, at 1:04 PM, Keith Turner wrote:

> I generated the hash for the second table, and it's the same as the
> first.  Makes sense, it's the same data just split differently.  The
> reason I did the sort is that there are multiple files.  Depending
> on where the test fails, the second table may or may not have data.
> 
> $ for RF in `hadoop fs -ls
> /user/kturner/accumulo-mac1-5520/tables/2/*/*.rf | awk '{print $NF}'`;
> do ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
> -d $RF; done | grep 'false ->' | cut -c 1-64 | sort | md5
> addcfa8442914899a998d38bbf917d67
> 
> Keith
> 
> On Wed, Apr 11, 2012 at 12:35 PM, Keith Turner <keith@deenlo.com> wrote:
>> Keys
>> 
>> This test uses a random number generator w/ a seed, so the test should
>> always generate the same data.  I ran the test twice in dirty mode and
>> then generated an md5 hash of the data.  Both times the hash was the
>> same.  Can you try to do this and see if you get the same hash?
>> 
>> $ ./run.py -d -t largerow
>> $ ../../../bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo
>> -d /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
>> | grep 'false ->' | cut -c 1-64 | md5
>> addcfa8442914899a998d38bbf917d67
>> 
>> I did the grep to get only the key values, since the print info command
>> prints some summary info.  I did the cut in order to get just the row data;
>> including the timestamp would make the md5sum change on every run.  I ran
>> this on a Mac; I think on Linux you will need to use md5sum instead of md5.
>> 
>> The test creates two tables.  The md5 is for the first table, which seems
>> to have just one file.  I am seeing multiple files in the second
>> table.  I will put together a command to md5sum the second table and
>> send that shortly.
>> 
>> $ hadoop fs -lsr /user/kturner/accumulo-mac1-5520/tables/1
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet
>> -rw-r--r--   3 kturner supergroup   32617935 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/default_tablet/F0000009.rf
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000a
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000b
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000c
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000d
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000e
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000f
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000g
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000h
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000i
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000j
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000k
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000l
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:11
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000m
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000n
>> drwxr-xr-x   - kturner supergroup          0 2012-04-11 12:12
>> /user/kturner/accumulo-mac1-5520/tables/1/t-000000o
>> 
>> Keith
>> 
>> On Wed, Apr 11, 2012 at 9:48 AM, Keys Botzum <kbotzum@maprtech.com> wrote:
>>> Keith,
>>> 
>>> Thanks for the suggestion.  I made the change to the source as you suggested and rebuilt it using Maven (surprisingly easy).
>>> 
>>> Here's the log from tserver now. Does this help at all? I can of course provide the complete log or logs if useful to you. I can also provide the temporary tables and such if that's useful.
>>> 
>>> 
>>> 10 15:44:07,786 [cache.LruBlockCache] DEBUG: Block cache LRU eviction completed. Freed 2494168 bytes.  Priority Sizes: Single=3.2550507MB (3413168), Multi=13.89547MB (14570456),Memory=0.0MB (0)
>>> 10 15:44:07,798 [rfile.RelativeKey] DEBUG: len : 131072
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data : Z"?7-,mE:5Di&ou.4/4.i+9zGo0K8%%TsSt#!&a!&s :OKl:2"cp>]yT(ZeP
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: len : 0
>>> 10 15:44:07,799 [rfile.RelativeKey] DEBUG: data :
>>> 10 15:44:07,799 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:52572 2 1 entries in 0.03 secs, nbTimes = [25 25 25.00 1]
>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 65
>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
>>> 10 15:44:07,828 [rfile.RelativeKey] DEBUG: len : 47
>>> 10 15:44:07,833 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-18004/tables/2/t-0000000/F000000p.rf
>>> 10 15:44:07,834 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>> java.io.EOFException
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:381)
>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:135)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>        at java.lang.Thread.run(Thread.java:662)
>>> 
>>> 
>>> It appears to be attempting to read 47 bytes but isn't succeeding. Out of curiosity I changed the code to read what it could and print a warning. Here's the new code version:
>>> 
>>> 
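>>> (A sketch of the change: the readFully() call is replaced with a loop that reads whatever bytes are available and logs a warning when some are missing. The InputStream cast below is an assumption of the sketch, not necessarily the exact code that produced the log.)
>>> 
>>>  private byte[] read(DataInput in) throws IOException {
>>>    int len = WritableUtils.readVInt(in);
>>>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>>>    byte[] data = new byte[len];
>>>    int read = 0;
>>>    // sketch: assumes the DataInput is backed by an InputStream; read what
>>>    // is available instead of letting readFully() throw EOFException
>>>    if (in instanceof InputStream) {
>>>      InputStream is = (InputStream) in;
>>>      int n;
>>>      while (read < len && (n = is.read(data, read, len - read)) > 0)
>>>        read += n;
>>>    } else {
>>>      in.readFully(data);
>>>      read = len;
>>>    }
>>>    if (read < len)
>>>      Logger.getLogger(RelativeKey.class.getName()).debug("MISSING BYTES!!: read " + read);
>>>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
>>> new String(data).substring(0, Math.min(data.length, 60)));
>>>    return data;
>>>  }
>>> 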
>>> And this is a snippet of the exception which occurs with that change. Everything else is the same. As you can see my hack gets us past the read of the key, but then the next read fails.
>>> 
>>> 11 06:42:32,254 [rfile.RelativeKey] DEBUG: data :
>>> 11 06:42:32,254 [tabletserver.TabletServer] DEBUG: ScanSess tid 10.250.99.204:47993 2 1 entries in 0.02 secs, nbTimes = [23 23 23.00 1]
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 65
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : SjC)$OQ7U!9Ng:i1#2Sxl.a"(d=Js>d!1u`)WAFAs{>n=7H<tM?.tP"RsUOI
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: len : 47
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: MISSING BYTES!!: read 45
>>> 11 06:42:32,283 [rfile.RelativeKey] DEBUG: data : )vRS>4 ?c>$Sgn#[QcscA!HAYcF;M_Jg3d&Jzc85$)6Y7^@^@
>>> 11 06:42:32,288 [problems.ProblemReports] DEBUG: Filing problem report 2 FILE_READ /user/mapr/accumulo-SE-test-04-32318/tables/2/t-0000000/F000000q.rf
>>> 11 06:42:32,289 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>> java.io.EOFException
>>>        at java.io.DataInputStream.readInt(DataInputStream.java:375)
>>>        at org.apache.accumulo.core.data.Value.readFields(Value.java:156)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:585)
>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.j
>>>       …..
>>> 
>>> So it looks like we are missing quite a bit of data.
>>> 
>>> Any help or ideas appreciated.
>>> 
>>> Thanks,
>>> Keys
>>> ________________________________
>>> Keys Botzum
>>> Senior Principal Technologist
>>> WW Systems Engineering
>>> kbotzum@maprtech.com
>>> 443-718-0098
>>> MapR Technologies
>>> http://www.mapr.com
>>> 
>>> 
>>> 
>>> On Apr 10, 2012, at 5:23 PM, Keith Turner wrote:
>>> 
>>>> Keys,
>>>> 
>>>> Looking at the test, it writes out random rows that are 128k in length.  The
>>>> column family and column qualifier it writes out are 0 bytes long.
>>>> When the no-compression test failed, it was trying to read a column
>>>> qualifier.  If we assume that it was reading a column qualifier from
>>>> the test table, then it should be calling readFully() with a zero-length
>>>> array.
>>>> 
>>>> Trying to think how to debug this.  One way may be to change the code
>>>> in RelativeKey to the following and run the test.  This will show us
>>>> what it's trying to do right before it hits the EOF, but it will also
>>>> generate a lot of noise as things scan the metadata table.
>>>> 
>>>>  private byte[] read(DataInput in) throws IOException {
>>>>    int len = WritableUtils.readVInt(in);
>>>>    Logger.getLogger(RelativeKey.class.getName()).debug("len : " + len);
>>>>    byte[] data = new byte[len];
>>>>    in.readFully(data);
>>>>    Logger.getLogger(RelativeKey.class.getName()).debug("data : " +
>>>> new String(data).substring(0, Math.min(data.length, 60)));
>>>>    return data;
>>>>  }
>>>> 
>>>> Keith
>>>> 
>>>> On Tue, Apr 10, 2012 at 2:08 PM, Keys Botzum <kbotzum@maprtech.com> wrote:
>>>>> At this point all but two of the Accumulo test/system/auto tests have completed successfully. This test is failing and I'm not quite sure why: org.apache.accumulo.server.test.functional.LargeRowTest
>>>>> 
>>>>> When I run it, this is the output I see:
>>>>> ./run.py -t largerowtest -d -v10
>>>>> ….
>>>>> 09:45:18 runTest (simple.largeRow.LargeRowTest) ............................. DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest getConfig
>>>>> DEBUG:test.auto:{
>>>>> 'tserver.compaction.major.delay':'1',
>>>>> }
>>>>> 
>>>>> DEBUG:test.auto:
>>>>> INFO:test.auto:killing accumulo processes everywhere
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/test/system/auto/pkill.sh 9 1000 SE-test-04-22187.*org.apache.accumulo.start
>>>>> DEBUG:test.auto:localhost: hadoop fs -rmr /user/mapr/accumulo-SE-test-04-22187
>>>>> INFO:test.auto:Error output from command: rmr: cannot remove /user/mapr/accumulo-SE-test-04-22187: No such file or directory.
>>>>> DEBUG:test.auto:Exit code: 255
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo init --clear-instance-name
>>>>> DEBUG:test.auto:Output from command: 10 09:45:20,539 [util.Initialize] INFO : Hadoop Filesystem is maprfs:///
>>>>> 10 09:45:20,541 [util.Initialize] INFO : Accumulo data dir is /user/mapr/accumulo-SE-test-04-22187
>>>>> 10 09:45:20,541 [util.Initialize] INFO : Zookeeper server is SE-test-00:5181,SE-test-01:5181,SE-test-02:5181
>>>>> Instance name : SE-test-04-22187
>>>>> Enter initial password for root: ******
>>>>> Confirm initial password for root: ******
>>>>> 10 09:45:21,442 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library
>>>>> 10 09:45:21,562 [security.ZKAuthenticator] INFO : Initialized root user with username: root at the request of user !SYSTEM
>>>>> DEBUG:test.auto:Exit code: 0
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo logger
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo tserver
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo monitor
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.master.state.SetGoalState NORMAL
>>>>> DEBUG:test.auto:Output from command: 10 09:45:22,529 [server.Accumulo] INFO : Attempting to talk to zookeeper
>>>>> 10 09:45:22,750 [server.Accumulo] INFO : Zookeeper connected and initialized, attemping to talk to HDFS
>>>>> 10 09:45:23,009 [server.Accumulo] INFO : Connected to HDFS
>>>>> DEBUG:test.auto:Exit code: 0
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo master
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest setup
>>>>> DEBUG:test.auto:
>>>>> DEBUG:test.auto:
>>>>> DEBUG:test.auto:localhost: /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run
>>>>> DEBUG:test.auto:Waiting for /opt/accumulo-1.4.0/bin/accumulo org.apache.accumulo.server.test.functional.FunctionalTest -m localhost -u root -p secret -i SE-test-04-22187 org.apache.accumulo.server.test.functional.LargeRowTest run to stop in 240 secs
>>>>> DEBUG:test.auto:err: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>>> DEBUG:test.auto:err: java.lang.reflect.InvocationTargetException
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>> DEBUG:test.auto:err:    at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>> DEBUG:test.auto:err:
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>>        ... 6 more
>>>>> DEBUG:test.auto:err: Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>>        ... 11 more
>>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>> DEBUG:test.auto:err:
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>>        at $Proxy1.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>>        ... 13 more
>>>>> ERROR:test.auto:This looks like a stack trace: Thread "org.apache.accumulo.server.test.functional.FunctionalTest" died null
>>>>> java.lang.reflect.InvocationTargetException
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.start.Main$1.run(Main.java:89)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.lang.RuntimeException: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:186)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.verify(LargeRowTest.java:165)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.basicTest(LargeRowTest.java:143)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.test2(LargeRowTest.java:104)
>>>>>        at org.apache.accumulo.server.test.functional.LargeRowTest.run(LargeRowTest.java:87)
>>>>>        at org.apache.accumulo.server.test.functional.FunctionalTest.main(FunctionalTest.java:312)
>>>>>        ... 6 more
>>>>> Caused by: org.apache.accumulo.core.client.impl.AccumuloServerException: Error on server 10.250.99.204:39253
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:302)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:94)
>>>>>        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:176)
>>>>>        ... 11 more
>>>>> Caused by: org.apache.thrift.TApplicationException: Internal error processing startScan
>>>>>        at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.recv_startScan(TabletClientService.java:184)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startScan(TabletClientService.java:157)
>>>>>        at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$2.invoke(TraceWrap.java:84)
>>>>>        at $Proxy1.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:415)
>>>>>        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:295)
>>>>>        ... 13 more
>>>>> 
>>>>> FAIL
>>>>> ======================================================================
>>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>>> ----------------------------------------------------------------------
>>>>> Traceback (most recent call last):
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>>> AssertionError: False is not true
>>>>> 
>>>>> 
>>>>> ======================================================================
>>>>> FAIL: runTest (simple.largeRow.LargeRowTest)
>>>>> ----------------------------------------------------------------------
>>>>> Traceback (most recent call last):
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/JavaTest.py", line 57, in runTest
>>>>>    self.waitForStop(handle, self.maxRuntime)
>>>>>  File "/opt/accumulo-1.4.0/test/system/auto/TestUtils.py", line 368, in waitForStop
>>>>>    self.assert_(self.processResult(out, err, handle.returncode))
>>>>> AssertionError: False is not true
>>>>> 
>>>>> ----------------------------------------------------------------------
>>>>> Ran 1 test in 43.014s
>>>>> 
>>>>> FAILED (failures=1)
>>>>> 
>>>>> 
>>>>> The only log that seems to have any relevant output is the tserver_xxxx.log file. In it I found this error:
>>>>> Note that the timestamps here do not match the previous timestamps; that's just because I forgot to capture the data from the run that corresponds exactly to this one.
>>>>> 
>>>>> 
>>>>> 09 06:14:22,466 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>>> 09 06:14:25,018 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED;\\]jx?h@XRt8nDO%{>vT-Et-P$b.<,-4b2osta{ZE\\$u9k2T-MpdF _^<q\\M`X\\Er... TRUNCATED
>>>>> java.io.IOException: invalid distance too far back
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> 09 06:14:25,020 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>> 09 06:14:25,022 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: invalid distance too far back
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>        ... 15 more
>>>>> Caused by: java.io.IOException: invalid distance too far back
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.inflateBytesDirect(Native Method)
>>>>>        at org.apache.hadoop.io.compress.zlib.ZlibDecompressor.decompress(ZlibDecompressor.java:221)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:81)
>>>>>        at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:75)
>>>>>        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>>>>>        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>>>>>        at java.io.FilterInputStream.read(FilterInputStream.java:66)
>>>>>        at java.io.DataInputStream.readByte(DataInputStream.java:248)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:116)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        ... 1 more
>>>>> 
>>>>> 
>>>>> After guessing that the zlib error might be hiding the "real" error, I decided to disable compression in Accumulo (compression in MapR is transparent, and the results are not affected by whether it is on or off). Normally I'd set table.file.compress.type to none in the accumulo-site.xml file, but that doesn't work for the tests since they generate their own site files automatically. I hand-edited TestUtils.py to generate a site file with that property set.
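>>>>> (For reference, this is the Hadoop-style property the generated site file needs; a value of "none" should make Accumulo write new rfiles uncompressed:)
>>>>> 
>>>>> <property>
>>>>>   <name>table.file.compress.type</name>
>>>>>   <value>none</value>
>>>>> </property>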
>>>>> 
>>>>> When I rerun the test, I get the same output from run.py, but the server error in tserver_xxx.log is very different:
>>>>> 
>>>>> 10 09:45:51,650 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;(8{]laDN>C?'1D\\;K]l:fS\\lVXKOWq[_'&8".>-wL$Y,x-k<18_#t:7CHMH:\\)Zga... TRUNCATED;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED as alias 8
>>>>> 10 09:45:51,693 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;k9TF\\;hI"]Ij'4\\P.t&'pJm3"\\;C0qd:Q>%G3>I6!5[pVN$5R0g1LwmPUg 5-fX4jG... TRUNCATED;h&["[>Er>fnBdhzAR_'I!Htot>R/hNK_vNG)Y1a%$DJWg#QyQHFZ RaUAF3[p!eb... TRUNCATED as alias 22
>>>>> 10 09:45:51,748 [tabletserver.TabletServer] INFO : Adding 1 logs for extent 2;F]\\;J^>ioHJ*gs[4TwSIQeN_C^]1!w@7e<wL<p.xE&TR\\g!lt6+c^0a3U7%Eo'Ji ... TRUNCATED;CJlc"pWa)g<$Gg(\\U0Kl<)ffOYm1{h@E1"nV$)z'7'8KNWt- .BISxZoDI^[?7jR... TRUNCATED as alias 16
>>>>> 10 09:46:00,996 [tabletserver.TabletServer] WARN : exception while scanning tablet 2;%9e.07Zx{t*taPSI\\;I4z*77vIG[Oa&(Dw?4_N(!OIA#Z(ZE%"v3gI9Q{ZlGNAGL@... TRUNCATED<
>>>>> java.io.EOFException
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>        ... 15 more
>>>>> Caused by: java.io.EOFException
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> 10 09:46:00,999 [tabletserver.TabletServer] INFO : Adding 1 logs for extent !0<;~ as alias 2
>>>>> 10 09:46:01,000 [thrift.TabletClientService$Processor] ERROR: Internal error processing startScan
>>>>> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1155)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.startScan(TabletServer.java:1110)
>>>>>        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>>>>>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>>>        at java.lang.reflect.Method.invoke(Method.java:597)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.thrift.TraceWrap$1.invoke(TraceWrap.java:59)
>>>>>        at $Proxy0.startScan(Unknown Source)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor$startScan.process(TabletClientService.java:2059)
>>>>>        at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Processor.process(TabletClientService.java:2037)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$TimedProcessor.process(TServerUtils.java:154)
>>>>>        at org.apache.thrift.server.TNonblockingServer$FrameBuffer.invoke(TNonblockingServer.java:631)
>>>>>        at org.apache.accumulo.server.util.TServerUtils$THsHaServer$Invocation.run(TServerUtils.java:202)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
>>>>>        at java.lang.Thread.run(Thread.java:662)
>>>>> Caused by: java.util.concurrent.ExecutionException: java.io.EOFException
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ScanTask.get(TabletServer.java:662)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler.continueScan(TabletServer.java:1146)
>>>>>        ... 15 more
>>>>> Caused by: java.io.EOFException
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:180)
>>>>>        at java.io.DataInputStream.readFully(DataInputStream.java:152)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.read(RelativeKey.java:378)
>>>>>        at org.apache.accumulo.core.file.rfile.RelativeKey.readFields(RelativeKey.java:134)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader._next(RFile.java:584)
>>>>>        at org.apache.accumulo.core.file.rfile.RFile$LocalityGroupReader.next(RFile.java:556)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.server.problems.ProblemReportingIterator.next(ProblemReportingIterator.java:77)
>>>>>        at org.apache.accumulo.core.iterators.system.HeapIterator.next(HeapIterator.java:80)
>>>>>        at org.apache.accumulo.core.iterators.system.DeletingIterator.next(DeletingIterator.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.Filter.next(Filter.java:58)
>>>>>        at org.apache.accumulo.core.iterators.WrappingIterator.next(WrappingIterator.java:87)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.skipRowColumn(VersioningIterator.java:103)
>>>>>        at org.apache.accumulo.core.iterators.user.VersioningIterator.next(VersioningIterator.java:53)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.readNext(SourceSwitchingIterator.java:120)
>>>>>        at org.apache.accumulo.core.iterators.system.SourceSwitchingIterator.next(SourceSwitchingIterator.java:105)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.nextBatch(Tablet.java:1766)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet.access$3200(Tablet.java:143)
>>>>>        at org.apache.accumulo.server.tabletserver.Tablet$Scanner.read(Tablet.java:1883)
>>>>>        at org.apache.accumulo.server.tabletserver.TabletServer$ThriftClientHandler$NextBatchTask.run(TabletServer.java:905)
>>>>>        at org.apache.accumulo.cloudtrace.instrument.TraceRunnable.run(TraceRunnable.java:47)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>>>        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>>>        ... 1 more
>>>>> 
>>>>> 
>>>>> So the error would seem to be related to reading past the end of the file. What I can't determine is the reason. From examining the Accumulo source, it's clear that Accumulo has read the length of a key (I think) and is now trying to read the key value. That second read is what is failing. The question is why? Some ideas:
>>>>> 1) the file was originally written incorrectly by the writer,
>>>>> 2) the reader is reading too far
>>>>> 
>>>>> This could be caused by an issue in Accumulo or in MapR. It might be that MapR more strongly enforces end-of-file reads than stock Hadoop.
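>>>>> (One way to check that last idea would be a small standalone probe along these lines; EofProbe is a made-up class for illustration, not part of Accumulo. It seeks to near the end of a file and asks readFully() for more bytes than remain, so the behavior can be compared on MapR versus stock HDFS:)
>>>>> 
>>>>> import java.io.EOFException;
>>>>> import org.apache.hadoop.conf.Configuration;
>>>>> import org.apache.hadoop.fs.FSDataInputStream;
>>>>> import org.apache.hadoop.fs.FileSystem;
>>>>> import org.apache.hadoop.fs.Path;
>>>>> 
>>>>> public class EofProbe {
>>>>>   public static void main(String[] args) throws Exception {
>>>>>     Path p = new Path(args[0]);                 // e.g. one of the rfiles above
>>>>>     FileSystem fs = FileSystem.get(new Configuration());
>>>>>     long len = fs.getFileStatus(p).getLen();
>>>>>     FSDataInputStream in = fs.open(p);
>>>>>     in.seek(len - 10);                          // leave only 10 bytes in the stream
>>>>>     byte[] buf = new byte[64];                  // then ask readFully() for 64
>>>>>     try {
>>>>>       in.readFully(buf);                        // DataInputStream.readFully
>>>>>       System.out.println("readFully returned without error");
>>>>>     } catch (EOFException e) {
>>>>>       System.out.println("EOFException: " + e); // what DataInputStream throws at EOF
>>>>>     } finally {
>>>>>       in.close();
>>>>>     }
>>>>>   }
>>>>> }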
>>>>> 
>>>>> If anyone has suggestions on how to look into this further from the Accumulo side, I'd really appreciate it.
>>>>> 
>>>>> Thanks,
>>>>> Keys
>>>>> ________________________________
>>>>> Keys Botzum
>>>>> Senior Principal Technologist
>>>>> WW Systems Engineering
>>>>> kbotzum@maprtech.com
>>>>> 443-718-0098
>>>>> MapR Technologies
>>>>> http://www.mapr.com
>>> 

