From: Rohith Sharma K S
To: user@hadoop.apache.org
Subject: RE: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields
Date: Tue, 4 Mar 2014 03:14:51 +0000

Hi

The reason for "org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet" is that Hadoop was compiled with protoc 2.5.0, but a lower version of protobuf is present on the classpath.

1. Check which version of protobuf is on the MRAppMaster classpath. It is expected to be 2.5.0.

Thanks & Regards
Rohith Sharma K S

-----Original Message-----
From: Margusja [mailto:margus@roo.ee]
Sent: 03 March 2014 22:45
To: user@hadoop.apache.org
Subject: Re: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields

Hi

2.2.0 and 2.3.0 gave me the same container log. A little more detail: I use an external Java client that submits the job.

Some lines from the Maven pom.xml file:

  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.3.0</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
  </dependency>

Lines from the external client:
...
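Rohith's step 1 above (checking which protobuf the MRAppMaster actually sees) can be sketched with a small standalone helper. `ProtobufOnClasspath` is a hypothetical name, not part of Hadoop; on a real node you would run it against the container's classpath (e.g. the output of `hadoop classpath`) rather than the local JVM's:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class ProtobufOnClasspath {
    // Collect classpath entries that look like a protobuf jar.
    public static List<String> protobufEntries(String classpath) {
        List<String> hits = new ArrayList<>();
        for (String entry : classpath.split(File.pathSeparator)) {
            if (entry.toLowerCase().contains("protobuf")) {
                hits.add(entry);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        // Inspect this JVM's own classpath; on a cluster node, substitute
        // the classpath the MRAppMaster container is launched with.
        for (String e : protobufEntries(System.getProperty("java.class.path"))) {
            System.out.println(e);
        }
    }
}
```

A protobuf-java jar older than 2.5.0, or two different versions at once, matches the VerifyError below. With Maven, `mvn dependency:tree -Dincludes=com.google.protobuf` shows which artifact pulls in the older copy; mixing hadoop-client 2.3.0 with hadoop-core 1.2.1 in one pom, as in the snippet above, is a natural place to start looking.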
2014-03-03 17:36:01 INFO FileInputFormat:287 - Total input paths to process : 1
2014-03-03 17:36:02 INFO JobSubmitter:396 - number of splits:1
2014-03-03 17:36:03 INFO JobSubmitter:479 - Submitting tokens for job: job_1393848686226_0018
2014-03-03 17:36:04 INFO YarnClientImpl:166 - Submitted application application_1393848686226_0018
2014-03-03 17:36:04 INFO Job:1289 - The url to track the job: http://vm38.dbweb.ee:8088/proxy/application_1393848686226_0018/
2014-03-03 17:36:04 INFO Job:1334 - Running job: job_1393848686226_0018
2014-03-03 17:36:10 INFO Job:1355 - Job job_1393848686226_0018 running in uber mode : false
2014-03-03 17:36:10 INFO Job:1362 - map 0% reduce 0%
2014-03-03 17:36:10 INFO Job:1375 - Job job_1393848686226_0018 failed with state FAILED due to: Application application_1393848686226_0018 failed 2 times due to AM Container for appattempt_1393848686226_0018_000002 exited with exitCode: 1 due to:
Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
        at org.apache.hadoop.util.Shell.run(Shell.java:379)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
...
Lines from namenode:
...
14/03/03 19:12:42 INFO namenode.FSEditLog: Number of transactions: 900 Total time for transactions(ms): 69 Number of transactions batched in Syncs: 0 Number of syncs: 542 SyncTimes(ms): 9783
14/03/03 19:12:42 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742050_1226 90.190.106.33:50010
14/03/03 19:12:42 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/input/data666.noheader.data. BP-802201089-90.190.106.33-1393506052071 blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:44 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 90.190.106.33:50010 to delete [blk_1073742050_1226]
14/03/03 19:12:53 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742056_1232{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:53 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/input/data666.noheader.data is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addToInvalidates: blk_1073742051_1227 90.190.106.33:50010
14/03/03 19:12:54 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/input/data666.noheader.data.info. BP-802201089-90.190.106.33-1393506052071 blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:54 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742057_1233{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:12:54 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/input/data666.noheader.data.info is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:12:55 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.jar. BP-802201089-90.190.106.33-1393506052071 blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:12:56 INFO hdfs.StateChange: BLOCK* InvalidateBlocks: ask 90.190.106.33:50010 to delete [blk_1073742051_1227]
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742058_1234{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.jar is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.jar
14/03/03 19:13:12 INFO blockmanagement.BlockManager: Increasing replication from 3 to 10 for /user/hduser/.staging/job_1393848686226_0019/job.split
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.split. BP-802201089-90.190.106.33-1393506052071 blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742059_1235{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.split is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo. BP-802201089-90.190.106.33-1393506052071 blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:12 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742060_1236{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:12 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.splitmetainfo is closed by DFSClient_NONMAPREDUCE_-915999412_15
14/03/03 19:13:12 INFO hdfs.StateChange: BLOCK* allocateBlock: /user/hduser/.staging/job_1393848686226_0019/job.xml. BP-802201089-90.190.106.33-1393506052071 blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]}
14/03/03 19:13:13 INFO BlockStateChange: BLOCK* addStoredBlock: blockMap updated: 90.190.106.33:50010 is added to blk_1073742061_1237{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[90.190.106.33:50010|RBW]]} size 0
14/03/03 19:13:13 INFO hdfs.StateChange: DIR* completeFile: /user/hduser/.staging/job_1393848686226_0019/job.xml is closed by DFSClient_NONMAPREDUCE_-915999412_15
...
Lines from nodemanager log:
...
2014-03-03 19:13:19,473 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1393848686226_0019_02_000001 is : 1
2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exception from container-launch with container ID: container_1393848686226_0019_02_000001 and exit code: 1
org.apache.hadoop.util.Shell$ExitCodeException:
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
        at org.apache.hadoop.util.Shell.run(Shell.java:379)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
2014-03-03 19:13:19,474 INFO org.apache.hadoop.yarn.server.nodemanager.ContainerExecutor:
2014-03-03 19:13:19,474 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Container exited with a non-zero exit code 1
2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1393848686226_0019_02_000001 transitioned from RUNNING to EXITED_WITH_FAILURE
2014-03-03 19:13:19,475 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1393848686226_0019_02_000001
2014-03-03 19:13:19,496 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019/container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 WARN org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser OPERATION=Container Finished - Failed TARGET=ContainerImpl RESULT=FAILURE DESCRIPTION=Container failed with state: EXITED_WITH_FAILURE APPID=application_1393848686226_0019 CONTAINERID=container_1393848686226_0019_02_000001
2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1393848686226_0019_02_000001 transitioned from EXITED_WITH_FAILURE to DONE
2014-03-03 19:13:19,498 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1393848686226_0019_02_000001 from application application_1393848686226_0019
2014-03-03 19:13:19,499 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1393848686226_0019
2014-03-03 19:13:20,160 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Sending out status for container: container_id { app_attempt_id { application_id { id: 19 cluster_timestamp: 1393848686226 } attemptId: 2 } id: 1 } state: C_COMPLETE diagnostics: "Exception from container-launch: \norg.apache.hadoop.util.Shell$ExitCodeException: \n\tat org.apache.hadoop.util.Shell.runCommand(Shell.java:464)\n\tat org.apache.hadoop.util.Shell.run(Shell.java:379)\n\tat org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)\n\tat org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)\n\tat org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)\n\tat org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:262)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:744)\n\n\n" exit_status: 1
2014-03-03 19:13:20,161 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed container container_1393848686226_0019_02_000001
2014-03-03 19:13:20,542 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:20,543 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1393848686226_0019_02_000001
2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1393848686226_0019 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2014-03-03 19:13:21,164 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hdfs/nm-local-dir/usercache/hduser/appcache/application_1393848686226_0019
2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1393848686226_0019
2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1393848686226_0019 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2014-03-03 19:13:21,165 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1393848686226_0019, with delay of 10800 seconds
...
Regards, Margus (Margusja) Roo
+372 51 48 780
http://margus.roo.ee
http://ee.linkedin.com/in/margusroo
skype: margusja
ldapsearch -x -h ldap.sk.ee -b c=EE "(serialNumber=37303140314)"
-----BEGIN PUBLIC KEY-----
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCvbeg7LwEC2SCpAEewwpC3ajxE
5ZsRMCB77L8bae9G7TslgLkoIzo9yOjPdx2NN6DllKbV65UjTay43uUDyql9g3tl
RhiJIcoAExkSTykWqAIPR88LfilLy1JlQ+0RD8OXiWOVVQfhOHpQ0R/jcAkM2lZa
BjM8j36yJvoBVsfOHQIDAQAB
-----END PUBLIC KEY-----

On 03/03/14 19:05, Ted Yu wrote:
> Can you tell us the hadoop release you're using ?
>
> Seems there is inconsistency in protobuf library.
>
> On Mon, Mar 3, 2014 at 8:01 AM, Margusja wrote:
>
>     Hi
>
>     I don't even know what information to provide, but my container log is:
>
>     2014-03-03 17:36:05,311 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
>     java.lang.VerifyError: class org.apache.hadoop.yarn.proto.YarnProtos$ApplicationIdProto overrides final method getUnknownFields.()Lcom/google/protobuf/UnknownFieldSet;
>             at java.lang.ClassLoader.defineClass1(Native Method)
>             at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
>             at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
>             at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
>             at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
>             at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>             at java.security.AccessController.doPrivileged(Native Method)
>             at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>             at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>             at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>             at java.lang.Class.getDeclaredConstructors0(Native Method)
>             at java.lang.Class.privateGetDeclaredConstructors(Class.java:2493)
>             at java.lang.Class.getConstructor0(Class.java:2803)
>             at java.lang.Class.getConstructor(Class.java:1718)
>             at org.apache.hadoop.yarn.factories.impl.pb.RecordFactoryPBImpl.newRecordInstance(RecordFactoryPBImpl.java:62)
>             at org.apache.hadoop.yarn.util.Records.newRecord(Records.java:36)
>             at org.apache.hadoop.yarn.api.records.ApplicationId.newInstance(ApplicationId.java:49)
>             at org.apache.hadoop.yarn.util.ConverterUtils.toApplicationAttemptId(ConverterUtils.java:137)
>             at org.apache.hadoop.yarn.util.ConverterUtils.toContainerId(ConverterUtils.java:177)
>             at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1343)
>
>     Where to start digging?
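Ted's suspicion of an inconsistent protobuf library can be narrowed down by asking the JVM which jar it actually loaded the suspect class from. `WhichJar` below is a hypothetical helper, not part of Hadoop; on the failing node one would pass `com.google.protobuf.UnknownFieldSet` as the argument:

```java
import java.security.CodeSource;

public class WhichJar {
    // Report the jar or directory a class was loaded from; classes from the
    // JDK bootstrap loader report no code source.
    public static String locate(String className) throws ClassNotFoundException {
        Class<?> c = Class.forName(className);
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "(bootstrap classpath)" : src.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // On the failing node: java WhichJar com.google.protobuf.UnknownFieldSet
        String target = args.length > 0 ? args[0] : "java.lang.String";
        System.out.println(target + " -> " + locate(target));
    }
}
```

If the printed location is not the protobuf 2.5.0 jar that Hadoop 2.2/2.3 was built against, that older jar is the one to remove or upgrade.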