From: Apache Wiki
To: Apache Wiki
Date: Thu, 10 Jun 2010 00:09:02 -0000
Subject: PigErrorHandlingFunctionalSpecification reverted to revision 144 on Pig Wiki

Dear wiki user,

You have subscribed to a wiki page "Pig Wiki" for change notification.

The page PigErrorHandlingFunctionalSpecification has been reverted to revision 144 by Aniket Mokashi. The comment on this change is: Using GUI mode changed lot of editing.. reverting to previous version.

http://wiki.apache.org/pig/PigErrorHandlingFunctionalSpecification?action=diff&rev1=145&rev2=146

--------------------------------------------------

#format wiki
#language en

This document describes the functional specification for the Error Handling feature in Pig.

== Error types and mechanism to handle errors ==

The [[#cookbook|cook book]] discusses the classification of errors within Pig and proposes a guideline for exceptions that are to be used by developers. A reclassification of the errors is presented below.

=== Frontend errors ===

The front-end consists of multiple components - parser, type checker, optimizer, translators, etc. These errors usually occur at the client side before the execution begins in Hadoop.
All the errors from these components can be categorized as front-end errors. Components that are part of the front end will throw specific exceptions that capture the context. For example, the parser throws a `ParseException`, the type checker throws a `TypeCheckerException`, the optimizer throws a `LogicalOptimizerException`, etc. The exceptions thrown in the front-end are as follows:

 1. `FrontendException` Generic front-end exception
 1. `JobCreationException` Used for indicating errors during Map Reduce job creation
 1. `LogicalToPhysicalTranslatorException` Used for indicating errors during logical plan to physical plan translation
 1. `MRCompilerException` Used for indicating errors during map reduce plan compilation from physical plan
 1. `OptimizerException` Used for indicating errors during logical plan optimization
 1. `PigException` Generic exception in Pig
 1. `PlanException` Used for indicating errors during plan/graph operations
 1. `PlanValidationException` Used for indicating errors during plan validation
 1. `SchemaMergeException` Used for indicating errors during schema merges
 1. `TypeCheckerException` Used for indicating errors due to type checking
 1. `VisitorException` Used for indicating errors when visiting a plan
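To make the mechanism concrete, the following is a minimal sketch of how a front-end component could raise a typed exception that carries an error code and an error source. The class name, constructor, and source constants are illustrative assumptions for this document, not the actual Pig API.

{{{#!java
// Illustrative sketch only -- not the actual Pig exception classes.
// It shows the kind of context (error code, error source, retriability)
// that the typed front-end exceptions listed above are meant to carry.
public class ExampleFrontendException extends Exception {

    // Hypothetical constants mirroring the error-source classification below.
    public static final byte USER_INPUT = 1;
    public static final byte BUG = 2;
    public static final byte USER_ENVIRONMENT = 3;
    public static final byte REMOTE_ENVIRONMENT = 4;

    private final int errorCode;
    private final byte errorSource;

    public ExampleFrontendException(String message, int errorCode, byte errorSource) {
        super(message);
        this.errorCode = errorCode;
        this.errorSource = errorSource;
    }

    public int getErrorCode() { return errorCode; }

    public byte getErrorSource() { return errorSource; }

    // Retriable ranges as defined in the "Error codes" section below.
    public boolean isRetriable() {
        return (errorCode >= 3000 && errorCode <= 3999)
            || (errorCode >= 5000 && errorCode <= 5999);
    }

    @Override
    public String getMessage() {
        // Rendered in the user-facing "ERROR <code>: <message>" format.
        return "ERROR " + errorCode + ": " + super.getMessage();
    }
}
}}}

A type checker, for instance, could throw `new ExampleFrontendException("The arity of the group by columns do not match.", 1005, ExampleFrontendException.USER_INPUT)`, and a driver program could map `isRetriable()` to the return codes listed under "Error codes".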
=== Backend errors ===

The execution pipeline, the operators that form the pipeline and the map reduce classes fall into the back-end. The errors that occur in the back-end are generally at run-time. Exceptions such as `ExecException` and `RuntimeException` fall into this category. These errors will be propagated to the user-facing system and an appropriate error message indicating the source of the error will be displayed.

=== Internal errors ===

Any error that is not reported via an explicit exception is indicative of a bug in the system. Such errors are flagged as internal errors and will be reported as possible bugs. Any non-Hadoop backend error is a bug in Pig.

While the aforementioned errors describe a developer's viewpoint of errors, the user is interested in the source of the errors. A classification of the source of errors is given below.

 1. User Input - Sources of user input error are syntax error, semantic error, etc.
 2. Bug - An internal error in the Pig code and not related to the user's input
 3. User Environment - The client side environment
 4. Remote Environment - The Hadoop execution environment

== Error codes ==

Error codes are categorized into ranges depending on the nature of the error. The following table indicates the ranges for the error types in Pig. Normally, errors due to user input and bugs in the software are not retriable. Errors due to the user environment and remote environment may be retriable based on the context. Error codes will be used for documentation to help users address common errors.

||'''Error type''' ||'''Range''' ||
||User Input ||1000 - 1999 ||
||Bug ||2000 - 2999 ||
||User Environment (retriable) ||3000 - 3999 ||
||User Environment ||4000 - 4999 ||
||Remote Environment (retriable) ||5000 - 5999 ||
||Remote Environment ||6000 - 6999 ||

Programmatic access via Java APIs can query whether exceptions are retriable or not. For external processes that rely on the return code of the process, the table given below indicates the status of the process execution. Front-end exceptions result in failures as far as the user is concerned. Hadoop's errors are not retriable and return the fatal error code (2) or the partial failure code (3).

||'''Status''' ||'''Return Code''' ||
||Successful Execution ||0 ||
||Retriable error ||1 ||
||Fatal error ||2 ||
||Partial failure ||3 ||

== Error message ==

The format of the error message shown to the user will be as follows:

'''ERROR <error code>: <error message>'''

E.g.: ERROR 1005: The arity of the group by columns do not match.

== Requirement on UDF authors ==

In order to enable warning message aggregation, UDF authors should use Pig's abstraction to handle warning message aggregation. For more details refer to the [[#design|design document]].

{{attachment:PigLogger.jpg}}
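As an illustration of the requirement above, the sketch below shows a UDF that reports malformed input through the warning abstraction instead of logging each bad record itself. It assumes the `warn(String, Enum)` helper exposed by `EvalFunc` (see the design document for the exact interface) and a hypothetical warning enum.

{{{#!java
import java.io.IOException;

import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Sketch of a UDF that funnels warnings through Pig's aggregation
// mechanism rather than printing one log line per bad record.
public class ParseIntOrNull extends EvalFunc<Integer> {

    // Hypothetical warning enum; warnings are aggregated per enum constant.
    public enum Warning { MALFORMED_INTEGER }

    @Override
    public Integer exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        try {
            return Integer.valueOf(input.get(0).toString().trim());
        } catch (NumberFormatException e) {
            // Reported as an aggregated count for this warning code
            // instead of flooding the task logs.
            warn("Malformed integer, substituting null", Warning.MALFORMED_INTEGER);
            return null;
        }
    }
}
}}}

With warning aggregation turned on, repeated occurrences would surface as a single count per warning code rather than one message per record.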
=== Warning Codes ===

Each warning message will be associated with a warning code. The warning code will correspond to the enum type used for the warning message aggregation.

'''Note'''

 1. The exact warning codes along with the warning messages are to be decided.

== Additional command line switches ==

In order to support the ability to turn warning message aggregation on/off, log error messages to client side logs and specify the location of the client side log, the following switches will be added to and/or extended in Pig.

 1. -wagg to turn on warning aggregation; by default warning aggregation is turned off.
 2. -v to print error messages on the screen in addition to the client side log; by default error messages will only be written to the client side log.
 3. -l to specify the directory where the client side log is stored; by default, logs will be stored in the current working directory and named pig_.log. When used in batch mode (i.e., when a script to be executed is provided), the log file name will also contain the name of the script - pig__.log.

== Compendium of error messages ==

A list of possible error messages is given below. This list is not comprehensive and will be modified to reflect the true error messages along with their error codes.

||'''Error Code''' ||'''Error Message''' ||'''How to Handle''' ||
||1000 ||Error during parsing ||
||1001 ||Unable to describe schema for alias ||
||1002 ||Unable to store alias ||
||1003 ||Unable to find an operator for alias ||
||1004 ||No alias to ||
||1005 ||No plan for to ||
||1006 ||Could not find operator in plan ||
||1007 ||Found duplicates in schema. Please alias the columns with unique names. ||
||1008 ||Expected a bag with a single element of type tuple but got a bag schema with multiple elements ||
||1009 ||Expected a bag with a single element of type tuple but got an element of type ||
||1010 ||getAtomicGroupByType is used only when dealing with atomic col ||
||1011 ||getTupleGroupBySchema is used only when dealing with group col ||
||1012 ||Each input has to have the same number of inner plans ||
|| + ||1013|| attributes can either be star (*) or a list of e= xpressions, but not both.|| - ||1014 ||Problem with input of User-defined function: || + ||1014||Problem with input of User-defined function: || - ||1015 ||Error determining fieldschema of constant: || + ||1015||Error determining fieldschema of constant: || - ||1016 ||Problems in merging user defined schema || + ||1016||Problems in merging user defined schema|| - ||1017 ||Schema mismatch. A basic type on flattening cannot have more tha= n one column. User defined schema: || + ||1017||Schema mismatch. A basic type on flattening cannot have more than= one column. User defined schema: || - ||1018 ||Problem determining schema during load || + ||1018||Problem determining schema during load|| - ||1019 ||Unable to merge schemas || + ||1019||Unable to merge schemas|| - ||1020 ||Only a BAG or TUPLE can have schemas. Got || + ||1020||Only a BAG or TUPLE can have schemas. Got || - ||1021 ||Type mismatch. No useful type for merging. Field Schema: . Other Fileld Schema: + otherFs || + ||1021||Type mismatch. No useful type for merging. Field Schema: . Other Fileld Schema: + otherFs|| - ||1022 ||Type mismatch. Field Schema: . Other Fileld Schema= : + otherFs || + ||1022||Type mismatch. Field Schema: . Other Fileld Schema:= + otherFs|| - ||1023 ||Unable to create field schema || + ||1023||Unable to create field schema|| - ||1024 ||Found duplicate aliases: || + ||1024||Found duplicate aliases: || - ||1025 ||Found more than one match: || + ||1025||Found more than one match: || - ||1026 ||Attempt to fetch field: from schema of size || + ||1026||Attempt to fetch field: from schema of size || - ||1027 ||Cannot reconcile schemas with different sizes. This schema has s= ize other has size of || + ||1027||Cannot reconcile schemas with different sizes. This schema has si= ze other has size of || - ||1028 ||Access to the tuple of the bag is disallowed. Only acces= s to the elements of the tuple in the bag is allowed. || + ||1028||Access to the tuple of the bag is disallowed. Only access= to the elements of the tuple in the bag is allowed.|| - ||1029 ||One of the schemas is null for merging schemas. Schema: = Other schema: || + ||1029||One of the schemas is null for merging schemas. Schema: = Other schema: || - ||1030 ||Different schema sizes for merging schemas. Schema size: = Other schema size: || + ||1030||Different schema sizes for merging schemas. Schema size: O= ther schema size: || - ||1031 ||Incompatible types for merging schemas. Field schema type: Other field schema type: || + ||1031||Incompatible types for merging schemas. Field schema type: = Other field schema type: || - ||1032 ||Incompatible inner schemas for merging schemas. Field schema: Other field schema: || + ||1032||Incompatible inner schemas for merging schemas. Field schema: Other field schema: || - ||1033 ||Schema size mismatch for merging schemas. Other schema size grea= ter than schema size. Schema: . Other schema: || + ||1033||Schema size mismatch for merging schemas. Other schema size great= er than schema size. Schema: . 
Other schema: || - ||1034 ||TypeCastInserter invoked with an invalid operator class name: || + ||1034||TypeCastInserter invoked with an invalid operator class name: || - ||1035 ||Error getting LOProject's input schema || + ||1035||Error getting LOProject's input schema|| - ||1036 ||Map key should be a basic type || + ||1036||Map key should be a basic type|| - ||1037 ||Operand of Regex can be CharArray only || + ||1037||Operand of Regex can be CharArray only|| - ||1038 ||Operands of AND/OR can be boolean only || + ||1038||Operands of AND/OR can be boolean only|| - ||1039 ||Incompatible types in operator. left hand side: right han= d size: type || + ||1039||Incompatible types in operator. left hand side: right hand= size: type|| - ||1040 ||Could not set field schema || + ||1040||Could not set = field schema|| - ||1041 ||NEG can be used with numbers or Bytearray only || + ||1041||NEG can be used with numbers or Bytearray only|| - ||1042 ||NOT can be used with boolean only || + ||1042||NOT can be used with boolean only|| - ||1043 ||Unable to retrieve field schema of operator. || + ||1043||Unable to retrieve field schema of operator.|| - ||1044 ||Unable to get list of overloaded methods. || + ||1044||Unable to get list of overloaded methods.|| - ||1045 ||Could not infer the matching function for as multipl= e or none of them fit. Please use an explicit cast. || + ||1045||Could not infer the matching function for as multiple= or none of them fit. Please use an explicit cast.|| - ||1046 ||Multiple matching functions for with input schemas: (= , ). Please use an explicit cast. || + ||1046||Multiple matching functions for with input schemas: ( = , ). Please use an explicit cast.|| - ||1047 ||Condition in BinCond must be boolean || + ||1047||Condition in BinCond must be boolean|| - ||1048 ||Two inputs of BinCond must have compatible schemas || + ||1048||Two inputs of BinCond must have compatible schemas|| - ||1049 ||Problem during evaluaton of BinCond output type || + ||1049||Problem during evaluaton of BinCond output type|| - ||1050 ||Unsupported input type for BinCond: lhs =3D ; rhs =3D || + ||1050||Unsupported input type for BinCond: lhs =3D ; rhs =3D || - ||1051 ||Cannot cast to bytearray || + ||1051||Cannot cast to bytearray|| - ||1052 ||Cannot cast [with schema ] to with schema = || + ||1052||Cannot cast [with schema ] to with schema <= schema>|| - ||1053 ||Cannot resolve load function to use for casting from to <= type> || + ||1053||Cannot resolve load function to use for casting from to || - ||1054 ||Cannot merge schemas from inputs of UNION || + ||1054||Cannot merge schemas from inputs of UNION|| - ||1055 ||Problem while reading schemas from inputs of || + ||1055||Problem while reading schemas from inputs of || - ||1056 ||Problem while casting inputs of Union || + ||1056||Problem while casting inputs of Union|| - ||1057 ||'s i= nner plan can only have one output (leaf) || + ||1057|| 's i= nner plan can only have one output (leaf)|| - ||1058 ||Split's condition must evaluate to boolean. Found: || + ||1058||Split's condition must evaluate to boolean. 
Found: || - ||1059 ||Problem while reconciling output schema of || + ||1059||Problem while reconciling output schema of || - ||1060 ||Cannot resolve output = schema || + ||1060||Cannot resolve output s= chema|| - ||1061 ||Sorry, group by complex types will be supported soon || + ||1061||Sorry, group by complex types will be supported soon|| - ||1062 ||COGroup by incompatible types || + ||1062||COGroup by incompatible types|| - ||1063 ||Problem while reading field schema from input while inserting ca= st || + ||1063||Problem while reading field schema from input while inserting cas= t|| - ||1064 ||Problem reading column from schema: || + ||1064||Problem reading column from schema: || - ||1065 ||Found more than one load function to use: || + ||1065||Found more than one load function to use: || - ||1066 ||Unable to open iterator for alias || + ||1066||Unable to open iterator for alias || - ||1067 ||Unable to explain alias || + ||1067||Unable to explain alias || - ||1068 ||Using as key not supported. || + ||1068||Using as key not supported.|| - ||1069 ||Problem resolving class version numbers for class || + ||1069||Problem resolving class version numbers for class || - ||1070 ||Could not resolve using imports: || + ||1070||Could not resolve using imports: || - ||1071 ||Cannot convert a to || + ||1071||Cannot convert a to || - ||1072 ||Out of bounds access: Request for field number exceeds = tuple size of || + ||1072||Out of bounds access: Request for field number exceeds t= uple size of || - ||1073 ||Cannot determine field schema for || + ||1073||Cannot determine field schema for || - ||1074 ||Problem with formatting. Could not convert to . || + ||1074||Problem with formatting. Could not convert to .|| - ||1075 ||Received a bytearray from the UDF. Cannot determine how to conve= rt the bytearray to || + ||1075||Received a bytearray from the UDF. Cannot determine how to conver= t the bytearray to || - ||1076 ||Problem while reading field schema of cast operator. || + ||1076||Problem while reading field schema of cast operator.|| - ||1077 ||Two operators that require a cast in between are not adjacent. || + ||1077||Two operators that require a cast in between are not adjacent.|| - ||1078 ||Schema size mismatch for casting. Input schema size: . Tar= get schema size: || + ||1078||Schema size mismatch for casting. Input schema size: . Targ= et schema size: || - ||1079 ||Undefined type checking logic for unary operator: " || + ||1079||Undefined type checking logic for unary operator: " || - ||1080 ||Did not find inputs for operator: " || + ||1080||Did not find inputs for operator: " || - ||1081 ||Cannot cast to . Exp= ected bytearray but received: || + ||1081||Cannot cast to . Expe= cted bytearray but received: || - ||1082 ||Cogroups with more than 127 inputs not supported. || + ||1082||Cogroups with more than 127 inputs not supported.|| - ||1083 ||setBatchOn() must be called first. || + ||1083||setBatchOn() must be called first.|| - ||1084 ||Invalid Query: Query is null or of size 0. || + ||1084||Invalid Query: Query is null or of size 0.|| - ||1085 || operator in is null. Canno= t null operators. || + ||1085|| operator in is null. Canno= t null operators.|| - ||1086 ||First operator in should have multiple . Found first operator with . || + ||1086||First operator in should have multiple . Found first operator with .|| - ||1087 ||The should be lesser than the number = of of the first operator. Found first operator with = . || + ||1087||The should be lesser than the number o= f of the first operator. 
Found first operator with = .|| - ||1088 || operator in should have one . Found operator with = > . || + ||1088|| operator in should have one . Found operator with = > .|| - ||1089 ||Second operator in should be the of the First operator. || + ||1089||Second operator in should be the of the First operator.|| - ||1090 ||Second operator can have at most one edge fr= om First operator. Found edges. || + ||1090||Second operator can have at most one edge fro= m First operator. Found edges.|| - ||1091 ||First operator does not support multiple . On co= mpleting the operation First operator will end up wi= th edges || + ||1091||First operator does not support multiple . On com= pleting the operation First operator will end up wit= h edges|| - ||1092 || operator in swap is null. Cannot swap null operat= ors. || + ||1092|| operator in swap is null. Cannot swap null operat= ors.|| - ||1093 ||Swap supports swap of operators with at most one .= Found operator with || + ||1093||Swap supports swap of operators with at most one . = Found operator with || - ||1094 ||Attempt to insert between two nodes that were not connected. || + ||1094||Attempt to insert between two nodes that were not connected.|| - ||1095 ||Attempt to remove and reconnect for node with multiple . || + ||1095||Attempt to remove and reconnect for node with multiple .|| - ||1096 ||Attempt to remove and reconnect for node with </no> . || + ||1096||Attempt to remove and reconnect for node with </no> .|| - ||1097 ||Containing node cannot be null. || + ||1097||Containing node cannot be null.|| - ||1098 ||Node index cannot be negative. || + ||1098||Node index cannot be negative.|| - ||1099 ||Node to be replaced cannot be null. || + ||1099||Node to be replaced cannot be null.|| - ||1100 ||Replacement node cannot be null. || + ||1100||Replacement node cannot be null.|| - ||1101 ||Merge Join must have exactly two inputs. Found : + + inpu= ts || + ||1101||Merge Join must have exactly two inputs. Found : + + input= s || - ||1102 ||Data is not sorted on side. Last two keys encounter= ed were: , || + ||1102||Data is not sorted on side. Last two keys encountere= d were: , || - ||1103 ||Merge join only supports Filter, Foreach and Load as its predece= ssor. Found : || + ||1103||Merge join only supports Filter, Foreach and Load as its predeces= sor. Found : || - ||1104 ||Right input of merge-join must implement SamplableLoader interfa= ce. This loader doesn't implement it. || + ||1104||Right input of merge-join must implement SamplableLoader interfac= e. This loader doesn't implement it.|| - ||1105 ||Heap percentage / Conversion factor cannot be set to 0 || + ||1105||Heap percentage / Conversion factor cannot be set to 0 || - ||1106 ||Merge join is possible only for simple column or '*' join keys w= hen using as the loader || + ||1106||Merge join is possible only for simple column or '*' join keys wh= en using as the loader || - ||1107 ||Try to merge incompatible types (eg. numerical type vs non-numei= rcal type) || + ||1107||Try to merge incompatible types (eg. numerical type vs non-numeir= cal type)|| - ||1108 ||Duplicated schema || + ||1108||Duplicated schema|| - ||1109 ||Input ( ) on which outer join is desired should ha= ve a valid schema || + ||1109||Input ( ) on which outer join is desired should hav= e a valid schema|| - ||1110 ||Unsupported query: You have an partition column () insi= de a i= n the filter condition. 
|| + ||1110||Unsupported query: You have an partition column () insid= e a in= the filter condition.|| - ||1111 ||Use of partition column/condition with non partition column/cond= ition in filter expression is not supported. || + ||1111||Use of partition column/condition with non partition column/condi= tion in filter expression is not supported.|| - ||1112 ||Unsupported query: You have an partition column () = in a construction like: (pcond and ...) or (pcond and ...) where pcond is = a condition on a partition column. || + ||1112||Unsupported query: You have an partition column () i= n a construction like: (pcond and ...) or (pcond and ...) where pcond is a= condition on a partition column.|| - ||1113||Unable to describe schema for nested expression || - ||1114||Unable to find schema for nested alias || - ||2000 ||Internal error. Mismatch in group by arities. Expected: = . Found: || + ||2000||Internal error. Mismatch in group by arities. Expected: .= Found: || - ||2001 ||Unable to clone plan before compiling || + ||2001||Unable to clone plan before compiling|| - ||2002 ||The output file(s): already exists || + ||2002||The output file(s): already exists|| - ||2003 ||Cannot read from the storage where the output will be= stored || + ||2003||Cannot read from the storage where the output will be = stored|| - ||2004 ||Internal error while trying to check if type casts are needed || + ||2004||Internal error while trying to check if type casts are needed|| - ||2005 ||Expected , got || + ||2005||Expected , got || - ||2006 ||TypeCastInserter invoked with an invalid operator class name: || + ||2006||TypeCastInserter invoked with an invalid operator class name: || - ||2007 ||Unable to insert type casts into plan || + ||2007||Unable to insert type casts into plan|| - ||2008 || cannot have more than one input. F= ound inputs. || + ||2008|| cannot have more than one input. F= ound inputs.|| - ||2009 ||Can not move LOLimit up || + ||2009||Can not move LOLimit up|| - ||2010 ||LOFilter should have one input || + ||2010||LOFilter should have one input|| - ||2011 ||Can not insert LOLimit clone || + ||2011||Can not insert LOLimit clone|| - ||2012 ||Can not remove LOLimit after || + ||2012||Can not remove LOLimit after || - ||2013 ||Moving LOLimit in front of is not implemented || + ||2013||Moving LOLimit in front of is not implemented|| - ||2014 ||Unable to optimize load-stream-store optimization || + ||2014||Unable to optimize load-stream-store optimization|| - ||2015 ||Invalid physical operators in the physical plan || + ||2015||Invalid physical operators in the physical plan|| - ||2016 ||Unable to obtain a temporary path. || + ||2016||Unable to obtain a temporary path.|| - ||2017 ||Internal error creating job configuration. || + ||2017||Internal error creating job configuration.|| - ||2018 ||Internal error. Unable to introduce the combiner for optimizatio= n. || + ||2018||Internal error. Unable to introduce the combiner for optimization= .|| - ||2019 ||Expected to find plan with single leaf. Found leaves. || + ||2019||Expected to find plan with single leaf. Found leaves.|| - ||2020 ||Expected to find plan with UDF leaf. Found || + ||2020||Expected to find plan with UDF leaf. Found || - ||2021 ||Internal error. Unexpected operator project(*) in local rearrang= e inner plan. || + ||2021||Internal error. Unexpected operator project(*) in local rearrange= inner plan.|| - ||2022 ||Both map and reduce phases have been done. This is unexpected wh= ile compiling. 
|| + ||2022||Both map and reduce phases have been done. This is unexpected whi= le compiling.|| - ||2023 ||Received a multi input plan when expecting only a single input o= ne. || + ||2023||Received a multi input plan when expecting only a single input on= e.|| - ||2024 ||Expected reduce to have single leaf. Found leaves. || + ||2024||Expected reduce to have single leaf. Found leaves.|| - ||2025 ||Expected leaf of reduce plan to always be POStore. Found = || + ||2025||Expected leaf of reduce plan to always be POStore. Found = || - ||2026 ||No expression plan found in POSort. || + ||2026||No expression plan found in POSort.|| - ||2027 ||Both map and reduce phases have been done. This is unexpected fo= r a merge. || + ||2027||Both map and reduce phases have been done. This is unexpected for= a merge.|| - ||2028 ||ForEach can only have one successor. Found successors. || + ||2028||ForEach can only have one successor. Found successors.|| - ||2029 ||Error rewriting POJoinPackage. || + ||2029||Error rewriting POJoinPackage.|| - ||2030 ||Expected reduce plan leaf to have a single predecessor. Found predecessors. || + ||2030||Expected reduce plan leaf to have a single predecessor. Found = predecessors.|| - ||2031 ||Found map reduce operator with POLocalRearrange as last oper but= with no succesor. || + ||2031||Found map reduce operator with POLocalRearrange as last oper but = with no succesor.|| - ||2032 ||Expected map reduce operator to have a single successor. Found <= n> successors. || + ||2032||Expected map reduce operator to have a single successor. Found successors.|| - ||2033 ||Problems in rearranging map reduce operators in plan. || + ||2033||Problems in rearranging map reduce operators in plan.|| - ||2034 ||Error compiling operator || + ||2034||Error compiling operator || - ||2035 ||Internal error. Could not compute key type of sort operator. || + ||2035||Internal error. Could not compute key type of sort operator.|| - ||2036 ||Unhandled key type || + ||2036||Unhandled key type || - ||2037 ||Invalid ship specification. File doesn't exist: || + ||2037||Invalid ship specification. File doesn't exist: || - ||2038 ||Unable to rename to || + ||2038||Unable to rename to || - ||2039 ||Unable to copy to || + ||2039||Unable to copy to || - ||2040 ||Unknown exec type: || + ||2040||Unknown exec type: || - ||2041 ||No Plan to compile || + ||2041||No Plan to compile|| - ||2042 ||Internal error. Unable to translate logical plan to physical pla= n. || + ||2042||Internal error. Unable to translate logical plan to physical plan= .|| - ||2043 ||Unexpected error during execution. || + ||2043||Unexpected error during execution.|| - ||2044 ||The type cannot be collected as a Key type || + ||2044||The type cannot be collected as a Key type|| - ||2045 ||Internal error. Not able to check if the leaf node is a store op= erator. || + ||2045||Internal error. Not able to check if the leaf node is a store ope= rator.|| - ||2046 ||Unable to create FileInputHandler. || + ||2046||Unable to create FileInputHandler.|| - ||2047 ||Internal error. Unable to introduce split operators. || + ||2047||Internal error. Unable to introduce split operators.|| - ||2048 ||Error while performing checks to introduce split operators. || + ||2048||Error while performing checks to introduce split operators.|| - ||2049 ||Error while performing checks to optimize limit operator. || + ||2049||Error while performing checks to optimize limit operator.|| - ||2050 ||Internal error. Unable to optimize limit operator. || + ||2050||Internal error. 
Unable to optimize limit operator.|| - ||2051 ||Did not find a predecessor for . || + ||2051||Did not find a predecessor for .|| - ||2052 ||Internal error. Cannot retrieve operator from null or empty list= . || + ||2052||Internal error. Cannot retrieve operator from null or empty list.= || - ||2053 ||Internal error. Did not find roots in the physical plan. || + ||2053||Internal error. Did not find roots in the physical plan.|| - ||2054 ||Internal error. Could not convert to || + ||2054||Internal error. Could not convert to || - ||2055 ||Did not find exception name to create exception from string: || + ||2055||Did not find exception name to create exception from string: || - ||2056 ||Cannot create exception from empty string. ||Pig could not find = an exception in the error messages from Hadoop, examine the [[#clientSideLo= g|client log]] to find more information. || + ||2056||Cannot create exception from empty string.||Pig could not find an= exception in the error messages from Hadoop, examine the [[#clientSideLog|= client log]] to find more information.|| - ||2057 ||Did not find fully qualified method name to reconstruct stack tr= ace: || + ||2057||Did not find fully qualified method name to reconstruct stack tra= ce: || - ||2058 ||Unable to set index on the newly created POLocalRearrange. || + ||2058||Unable to set index on the newly created POLocalRearrange.|| - ||2059 ||Problem with inserting cast operator for > in plan. || + ||2059||Problem with inserting cast operator for > in plan.|| - ||2060 ||Expected one leaf. Found leaves. || + ||2060||Expected one leaf. Found leaves.|| - ||2061 ||Expected single group by element but found multiple elements. || + ||2061||Expected single group by element but found multiple elements.|| - ||2062 ||Each COGroup input has to have the same number of inner plans." = || + ||2062||Each COGroup input has to have the same number of inner plans."|| - ||2063 ||Expected multiple group by element but found single element. || + ||2063||Expected multiple group by element but found single element.|| - ||2064 ||Unsupported root type in LOForEach: || + ||2064||Unsupported root type in LOForEach: || - ||2065 ||Did not find roots of the inner plan. || + ||2065||Did not find roots of the inner plan.|| - ||2066 ||Unsupported (root) operator in inner plan: || + ||2066||Unsupported (root) operator in inner plan: || - ||2067 || does not know how to handle type: || + ||2067|| does not know how to handle type: || - ||2068 ||Internal error. Improper use of method getColumn() in POProject = || + ||2068|| Internal error. Improper use of method getColumn() in POProject|| - ||2069 ||Error during map reduce compilation. Problem in accessing column= from project operator. || + ||2069||Error during map reduce compilation. Problem in accessing column = from project operator.|| - ||2070 ||Problem in accessing column from project operator. || + ||2070||Problem in accessing column from project operator.|| - ||2071 ||Problem with setting up local rearrange's plans. || + ||2071||Problem with setting up local rearrange's plans.|| - ||2072 ||Attempt to run a non-algebraic function as an algebraic function= || + ||2072||Attempt to run a non-algebraic function as an algebraic function|| - ||2073 ||Problem with replacing distinct operator with distinct built-in = function. || + ||2073||Problem with replacing distinct operator with distinct built-in f= unction.|| - ||2074 ||Could not configure distinct's algebraic functions in map reduce= plan. 
|| + ||2074||Could not configure distinct's algebraic functions in map reduce = plan.|| - ||2075 ||Could not set algebraic function type. || + ||2075||Could not set algebraic function type.|| - ||2076 ||Unexpected Project-Distinct pair while trying to set up plans fo= r use with combiner. || + ||2076||Unexpected Project-Distinct pair while trying to set up plans for= use with combiner.|| - ||2077 ||Problem with reconfiguring plan to add distinct built-in functio= n. || + ||2077||Problem with reconfiguring plan to add distinct built-in function= .|| - ||2078 ||Caught error from UDF: [] || + ||2078||Caught error from UDF: []|| - ||2079 ||Unexpected error while printing physical plan. || + ||2079||Unexpected error while printing physical plan.|| - ||2080 ||Foreach currently does not handle type || + ||2080||Foreach currently does not handle type || - ||2081 ||Unable to setup the function. || + ||2081||Unable to setup the function.|| - ||2082 ||Did not expect result of type: || + ||2082||Did not expect result of type: || - ||2083 ||Error while trying to get next result in POStream. || + ||2083||Error while trying to get next result in POStream.|| - ||2084 ||Error while running streaming binary. || + ||2084||Error while running streaming binary.|| - ||2085 ||Unexpected problem during optimization. Could not find LocalRear= range in combine plan. || + ||2085||Unexpected problem during optimization. Could not find LocalRearr= ange in combine plan.|| - ||2086 ||Unexpected problem during optimization. Could not find all Local= Rearrange operators. || + ||2086||Unexpected problem during optimization. Could not find all LocalR= earrange operators.|| - ||2087 ||Unexpected problem during optimization. Found index: in = multiple LocalRearrange operators. || + ||2087||Unexpected problem during optimization. Found index: in m= ultiple LocalRearrange operators.|| - ||2088 ||Unable to get results for: || + ||2088||Unable to get results for: || - ||2089 ||Unable to flag project operator to use single tuple bag. || + ||2089||Unable to flag project operator to use single tuple bag.|| - ||2090 ||Received Error while processing the plan. || + ||2090||Received Error while processing the plan.|| - ||2091 ||Packaging error while processing group. || + ||2091||Packaging error while processing group.|| - ||2092 ||No input paths specified in job. || + ||2092||No input paths specified in job.|| - ||2093 ||Encountered error in package operator while processing group. || + ||2093||Encountered error in package operator while processing group.|| - ||2094 ||Unable to deserialize object || + ||2094||Unable to deserialize object|| - ||2095 ||Did not get reduce key type from job configuration. || + ||2095||Did not get reduce key type from job configuration.|| - ||2096 ||Unexpected class in SortPartitioner: || + ||2096||Unexpected class in SortPartitioner: || - ||2097 ||Failed to copy from: to: || + ||2097||Failed to copy from: to: || - ||2098 ||Invalid seek option: || + ||2098||Invalid seek option: || - ||2099 ||Problem in constructing slices. || + ||2099||Problem in constructing slices.|| - ||2100 || does not exist. || + ||2100|| does not exist.|| - ||2101 || should not be used for storing. || + ||2101|| should not be used for storing.|| - ||2102 ||"Cannot test a for emptiness. || + ||2102||"Cannot test a for emptiness.|| - ||2103 ||Problem while computing of . || + ||2103||Problem while computing of .|| - ||2104 ||Error while determining schema of . 
|| + ||2104||Error while determining schema of .|| - ||2105 ||Error while converting to bytes || + ||2105||Error while converting to bytes|| - ||2106 ||Error while computing in <= class name> || + ||2106||Error while computing in || - ||2107 ||DIFF expected two inputs but received inputs. || + ||2107||DIFF expected two inputs but received inputs.|| - ||2108 ||Could not determine data type of field: || + ||2108||Could not determine data type of field: || - ||2109 ||TextLoader does not support conversion . || + ||2109||TextLoader does not support conversion .|| - ||2110 ||Unable to deserialize optimizer rules. || + ||2110||Unable to deserialize optimizer rules.|| - ||2111 ||Unable to create temporary directory: || + ||2111||Unable to create temporary directory: || - ||2112 ||Unexpected data while reading tuple from binary file. || + ||2112||Unexpected data while reading tuple from binary file.|| - ||2113 ||SingleTupleBag should never be serialized or serialized. || + ||2113||SingleTupleBag should never be serialized or serialized.|| - ||2114 ||Expected input to be chararray, but got || + ||2114||Expected input to be chararray, but got || - ||2115 ||Internal error. Expected to throw exception from the backend. Di= d not find any exception to throw. || + ||2115||Internal error. Expected to throw exception from the backend. Did= not find any exception to throw.|| - ||2116 ||Unexpected error. Could not check for the existence of the file(= s): || + ||2116||Unexpected error. Could not check for the existence of the file(s= ): || - ||2117 ||Unexpected error when launching map reduce job. || + ||2117||Unexpected error when launching map reduce job.|| - ||2118 ||Unable to create input slice for: || + ||2118||Unable to create input slice for: || - ||2119 ||Internal Error: Found multiple data types for map key || + ||2119||Internal Error: Found multiple data types for map key|| - ||2120 ||Internal Error: Unable to determine data type for map key || + ||2120||Internal Error: Unable to determine data type for map key|| - ||2121 ||Error while calling finish method on UDFs. || + ||2121||Error while calling finish method on UDFs.|| - ||2122 ||Sum of probabilities should be one || + ||2122||Sum of probabilities should be one|| - ||2123 ||Internal Error: Unable to discover required fields from the load= s || + ||2123||Internal Error: Unable to discover required fields from the loads= || - ||2124 ||Internal Error: Unexpected error creating field schema || + ||2124||Internal Error: Unexpected error creating field schema|| - ||2125 ||Expected at most one predecessor of load || + ||2125||Expected at most one predecessor of load|| - ||2126 ||Predecessor of load should be store || + ||2126||Predecessor of load should be store|| - ||2127 ||Cloning of plan failed. || + ||2127||Cloning of plan failed.|| - ||2128 ||Failed to connect store with dependent load. || + ||2128||Failed to connect store with dependent load.|| - ||2129 ||Internal Error. Unable to add store to the split plan for optimi= zation. || + ||2129||Internal Error. Unable to add store to the split plan for optimiz= ation.|| - ||2130 ||Internal Error. Unable to merge split plans for optimization. || + ||2130||Internal Error. Unable to merge split plans for optimization.|| - ||2131 ||Internal Error. Unable to connect split plan for optimization. || + ||2131||Internal Error. Unable to connect split plan for optimization.|| - ||2132 ||Internal Error. Unable to replace store with split operator for = optimization. || + ||2132||Internal Error. 
Unable to replace store with split operator for o= ptimization.|| - ||2133 ||Internal Error. Unable to connect map plan with successors for o= ptimization. || + ||2133||Internal Error. Unable to connect map plan with successors for op= timization.|| - ||2134 ||Internal Error. Unable to connect map plan with predecessors for= optimization. || + ||2134||Internal Error. Unable to connect map plan with predecessors for = optimization.|| - ||2135 ||Received error from store function. || + ||2135||Received error from store function.|| - ||2136 ||Internal Error. Unable to set multi-query index for optimization= . || + ||2136||Internal Error. Unable to set multi-query index for optimization.= || - ||2137 ||Internal Error. Unable to add demux to the plan as leaf for opti= mization. || + ||2137||Internal Error. Unable to add demux to the plan as leaf for optim= ization.|| - ||2138 ||Internal Error. Unable to connect package to local rearrange ope= rator in pass-through combiner for optimization. || + ||2138||Internal Error. Unable to connect package to local rearrange oper= ator in pass-through combiner for optimization.|| - ||2139 ||Invalid value type: . Expected value type is DataBag. || + ||2139||Invalid value type: . Expected value type is DataBag.|| - ||2140 ||Invalid package index: . Should be in the range between 0= and . || + ||2140||Invalid package index: . Should be in the range between 0 = and .|| - ||2141 ||Internal Error. Cannot merge non-combiner with combiners for opt= imization. || + ||2141||Internal Error. Cannot merge non-combiner with combiners for opti= mization.|| - ||2142 ||ReadOnceBag should never be serialized. || + ||2142||ReadOnceBag should never be serialized.|| - ||2143 ||Expected index value within POPackageLite is 0, but found 'index= '. || + ||2143||Expected index value within POPackageLite is 0, but found 'index'= .|| - ||2144 ||Problem while fixing project inputs during rewiring. || + ||2144||Problem while fixing project inputs during rewiring.|| - ||2145 ||Problem while rebuilding schemas after transformation. || + ||2145||Problem while rebuilding schemas after transformation.|| - ||2146 ||Internal Error. Inconsistency in key index found during optimiza= tion. || + ||2146||Internal Error. Inconsistency in key index found during optimizat= ion.|| - ||2147 ||Error cloning POLocalRearrange for limit after sort. || + ||2147||Error cloning POLocalRearrange for limit after sort.|| - ||2148 ||Error cloning POPackageLite for limit after sort || + ||2148||Error cloning POPackageLite for limit after sort|| - ||2149 ||Internal error while trying to check if filters can be pushed up= . || + ||2149||Internal error while trying to check if filters can be pushed up.= || - ||2150 ||Internal error. The push before input is not set. || + ||2150||Internal error. The push before input is not set.|| - ||2151 ||Internal error while pushing filters up. || + ||2151||Internal error while pushing filters up.|| - ||2152 ||Internal error while trying to check if foreach with flatten can= be pushed down. || + ||2152||Internal error while trying to check if foreach with flatten can = be pushed down.|| - ||2153 ||Internal error. The mapping for the flattened columns is empty || + ||2153||Internal error. The mapping for the flattened columns is empty|| - ||2154 ||Internal error. Schema of successor cannot be null for pushing d= own foreach with flatten. || + ||2154||Internal error. 
Schema of successor cannot be null for pushing do= wn foreach with flatten.|| - ||2155 ||Internal error while pushing foreach with flatten down. || + ||2155||Internal error while pushing foreach with flatten down.|| - ||2156 ||Error while fixing projections. Projection map of node to be rep= laced is null. || + ||2156||Error while fixing projections. Projection map of node to be repl= aced is null.|| - ||2157 ||Error while fixing projections. No mapping available in old pred= ecessor to replace column. || + ||2157||Error while fixing projections. No mapping available in old prede= cessor to replace column.|| - ||2158 ||Error during fixing projections. No mapping available in old pre= decessor for column to be replaced. || + ||2158||Error during fixing projections. No mapping available in old pred= ecessor for column to be replaced.|| - ||2159 ||Error during fixing projections. Could not locate replacement co= lumn from the old predecessor. || + ||2159||Error during fixing projections. Could not locate replacement col= umn from the old predecessor.|| - ||2160 ||Error during fixing projections. Projection map of new predecess= or is null. || + ||2160||Error during fixing projections. Projection map of new predecesso= r is null.|| - ||2161 ||Error during fixing projections. No mapping available in new pre= decessor to replace column. || + ||2161||Error during fixing projections. No mapping available in new pred= ecessor to replace column.|| - ||2162 ||Error during fixing projections. Could not locate mapping for co= lumn in new predecessor. || + ||2162||Error during fixing projections. Could not locate mapping for col= umn in new predecessor.|| - ||2163 ||Error during fixing projections. Could not locate replacement co= lumn for column: in the new predecessor. || + ||2163||Error during fixing projections. Could not locate replacement col= umn for column: in the new predecessor.|| - ||2164 ||Expected EOP as return status. Found: || + ||2164||Expected EOP as return status. Found: || - ||2165 ||Problem in index construction. || + ||2165||Problem in index construction. || - ||2166 ||Key type mismatch. Found key of type on left side. But, f= ound key of type in index built for right side. || + ||2166||Key type mismatch. Found key of type on left side. But, fo= und key of type in index built for right side. || - ||2167 ||LocalRearrange used to extract keys from tuple isn't configured = correctly. || + ||2167||LocalRearrange used to extract keys from tuple isn't configured c= orrectly. || - ||2168 ||Expected physical plan with exactly one root and one leaf. || + ||2168||Expected physical plan with exactly one root and one leaf. || - ||2169 ||Physical operator preceding predicate not found in = compiled MR jobs. || + ||2169||Physical operator preceding predicate not found in c= ompiled MR jobs.|| - ||2170 ||Physical operator preceding both left and right predicate found = to be same. This is not expected. || + ||2170||Physical operator preceding both left and right predicate found t= o be same. This is not expected. || - ||2171 ||Expected one but found more then one root physical operator in p= hysical plan. || + ||2171||Expected one but found more then one root physical operator in ph= ysical plan. || - ||2172 ||Expected physical operator at root to be POLoad. Found : || + ||2172||Expected physical operator at root to be POLoad. Found : || - ||2173 ||One of the preceding compiled MR operator is null. This is not e= xpected. || + ||2173||One of the preceding compiled MR operator is null. 
This is not ex= pected.|| - ||2174 ||Internal exception. Could not create the sampler job. || + ||2174||Internal exception. Could not create the sampler job. || - ||2175 ||Internal error. Could not retrieve file size for the sampler. || + ||2175||Internal error. Could not retrieve file size for the sampler. || - ||2176 ||Error processing right input during merge join || + ||2176||Error processing right input during merge join|| - ||2177 ||Prune column optimization: Cannot retrieve operator from null or= empty list || + ||2177||Prune column optimization: Cannot retrieve operator from null or = empty list|| - ||2178 ||Prune column optimization: The matching node from the optimizor = framework is null || + ||2178||Prune column optimization: The matching node from the optimizor f= ramework is null|| - ||2179 ||Prune column optimization: Error while performing checks to prun= e columns. || + ||2179||Prune column optimization: Error while performing checks to prune= columns.|| - ||2180 ||Prune column optimization: Only LOForEach and LOSplit are expect= ed || + ||2180||Prune column optimization: Only LOForEach and LOSplit are expecte= d|| - ||2181 ||Prune column optimization: Unable to prune columns. || + ||2181||Prune column optimization: Unable to prune columns.|| - ||2182 ||Prune column optimization: Only relational operator can be used = in column prune optimization. || + ||2182||Prune column optimization: Only relational operator can be used i= n column prune optimization.|| - ||2183 ||Prune column optimization: LOLoad must be the root logical opera= tor. || + ||2183||Prune column optimization: LOLoad must be the root logical operat= or.|| - ||2184 ||Prune column optimization: Fields list inside RequiredFields is = null. || + ||2184||Prune column optimization: Fields list inside RequiredFields is n= ull.|| - ||2185 ||Prune column optimization: Unable to prune columns. 
|| + ||2185||Prune column optimization: Unable to prune columns.|| - ||2186 ||Prune column optimization: Cannot locate node from successor || + ||2186||Prune column optimization: Cannot locate node from successor|| - ||2187 ||Column pruner: Cannot get predessors || + ||2187||Column pruner: Cannot get predessors|| - ||2188 ||Column pruner: Cannot prune columns || + ||2188||Column pruner: Cannot prune columns|| - ||2189 ||Column pruner: Expect schema || + ||2189||Column pruner: Expect schema|| - ||2190 ||PruneColumns: Cannot find predecessors for logical operator || + ||2190||PruneColumns: Cannot find predecessors for logical operator|| - ||2191 ||PruneColumns: No input to prune || + ||2191||PruneColumns: No input to prune|| - ||2192 ||PruneColumns: Column to prune does not exist || + ||2192||PruneColumns: Column to prune does not exist|| - ||2193 ||PruneColumns: Foreach can only have 1 predecessor || + ||2193||PruneColumns: Foreach can only have 1 predecessor|| - ||2194 ||PruneColumns: Expect schema || + ||2194||PruneColumns: Expect schema|| - ||2195 ||PruneColumns: Fail to visit foreach inner plan || + ||2195||PruneColumns: Fail to visit foreach inner plan|| - ||2196 ||RelationalOperator: Exception when traversing inner plan || + ||2196||RelationalOperator: Exception when traversing inner plan|| - ||2197 ||RelationalOperator: Cannot drop column which require * || + ||2197||RelationalOperator: Cannot drop column which require *|| - ||2198 ||LOLoad: load only take 1 input || + ||2198||LOLoad: load only take 1 input|| - ||2199 ||LOLoad: schema mismatch || + ||2199||LOLoad: schema mismatch|| - ||2200 ||PruneColumns: Error getting top level project || + ||2200||PruneColumns: Error getting top level project|| - ||2201 ||Could not validate schema alias || + ||2201||Could not validate schema alias|| - ||2202 ||Error change distinct/sort to use secondary key optimizer || + ||2202||Error change distinct/sort to use secondary key optimizer|| - ||2203 ||Sort on columns from different inputs || + ||2203||Sort on columns from different inputs|| - ||2204 ||Error setting secondary key plan || + ||2204||Error setting secondary key plan|| - ||2205 ||Error visiting POForEach inner plan || + ||2205||Error visiting POForEach inner plan|| - ||2206 ||Error visiting POSort inner plan || + ||2206||Error visiting POSort inner plan|| - ||2207 ||POForEach inner plan has more than 1 root || + ||2207||POForEach inner plan has more than 1 root|| - ||2208 ||Exception visiting foreach inner plan || + ||2208||Exception visiting foreach inner plan|| - ||2209 ||Internal error while processing any partition filter conditions = in the filter after the load || + ||2209||Internal error while processing any partition filter conditions i= n the filter after the load|| - ||2210 ||Internal Error in logical optimizer. || + ||2210||Internal Error in logical optimizer.|| - ||2211 ||Column pruner: Unable to prune columns. || + ||2211||Column pruner: Unable to prune columns.|| - ||2212 ||Unable to prune plan. || + ||2212||Unable to prune plan.|| - ||2213 ||Error visiting inner plan for ForEach. || + ||2213||Error visiting inner plan for ForEach.|| - ||2214 ||Cannot find POLocalRearrange to set secondary plan. || + ||2214||Cannot find POLocalRearrange to set secondary plan.|| - ||2215 ||See more than 1 successors in the nested plan. 
||2216||Cannot get field schema||
||2217||Problem setFieldSchema||
||2218||Invalid resource schema: bag schema must have tuple as its field||
||2998||Unexpected internal error.||
||2999||Unhandled internal error.||
||3000||IOException caught while compiling POMergeJoin||
||4000||The output file(s): already exists||
||4001||Cannot read from the storage where the output will be stored||
||4002||Can't read jar file: ||
||4003||Unable to obtain a temporary path.||
||4004||Invalid ship specification. File doesn't exist: ||
||4005||Unable to rename to ||
||4006||Unable to copy to ||
||4007||Missing from hadoop configuration||
||4008||Failed to create local hadoop file ||
||4009||Failed to copy data to local hadoop file ||
||6000||The output file(s): already exists||
||6001||Cannot read from the storage where the output will be stored||
||6002||Unable to obtain a temporary path.||
||6003||Invalid cache specification. File doesn't exist: ||
||6004||Invalid ship specification. File doesn't exist: ||
||6005||Unable to rename to ||
||6006||Unable to copy to ||
||6007||Unable to check name ||
||6008||Failed to obtain glob for ||
||6009||Failed to create job client||
||6010||Could not connect to HOD||
||6011||Failed to run command on server ; return code: ; error: ||
||6012||Unable to run command: on server ||
||6013||Unable to chmod . Thread interrupted.||
||6014||Failed to save secondary output '' of task: ||
||6015||During execution, encountered a Hadoop error.||
||6016||Out of memory.||
||6017||Execution failed, while processing '||
||6018||Error while reading input||
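
The sketch below is a purely illustrative Java example of how one of the codes above might travel from the component that raises it to the client that reports it. The `FrontendException(String, int, byte)` constructor, the `PigException.BUG` source constant, and the `getErrorCode()` accessor are assumptions drawn from the error handling design cited in the references, not a definitive rendering of the Pig API.

{{{#!java
// Illustrative sketch only: the constructor signature (message, code, source)
// and the PigException.BUG constant are assumed from the error handling design.
import org.apache.pig.PigException;
import org.apache.pig.impl.logicalLayer.FrontendException;

public class ErrorCodeSketch {

    // A front-end component that reaches an unexpected state attaches the
    // matching code from the compendium, e.g. 2998 "Unexpected internal error."
    static void failWithInternalError() throws FrontendException {
        int errCode = 2998;
        throw new FrontendException("Unexpected internal error.", errCode, PigException.BUG);
    }

    public static void main(String[] args) {
        try {
            failWithInternalError();
        } catch (PigException pe) {
            // The numeric code lets the client look the message up in the
            // compendium; 2xxx codes in the table above denote internal errors.
            System.err.println("ERROR " + pe.getErrorCode() + ": " + pe.getMessage());
        }
    }
}
}}}

Catching `PigException` rather than a concrete subclass keeps the reporting path uniform regardless of which component raised the error.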

== Change Log ==

 1. December 19, 2008: Changed the "Compendium of error messages" to include error codes along with updated error messages
 1. December 23, 2008:
  i. Updated "Compendium of error messages" to include new error codes (2002, 2003, 4000, 4001, 6000, 6001) and error messages and moved error code 1002 to 2001
  i. Updated "Frontend errors" to remove PigParseException
  i. Updated "Additional command line switches" to remove pid from the log file name.
 1. December 24, 2008: Updated "Compendium of error messages" to include new error codes (1002, 1003, 1066, 1067, 4002)
 1. December 30, 2008: Updated "Compendium of error messages" to include new error codes (1068, 2004 through 2017, 4003, 6002)
 1. January 5, 2009: Updated "Compendium of error messages" to include new error codes (2018 through 2033)
 1. January 6, 2009: Updated "Compendium of error messages" to include new error codes (1069 through 1073, 2034 through 2046, 4004 through 4009, 6003 through 6013)
 1. January 7, 2009: Updated "Compendium of error messages" to include new error codes (2047 through 2050)
 1. January 8, 2009: Updated "Compendium of error messages" to include Fragment Replicate Join for error codes 1057 and 1060
 1. January 9, 2009: Updated "Compendium of error messages" to include new error code 2051
 1. January 14, 2009: Updated "Compendium of error messages" to include new error codes (1074, 2052 through 2054, 2098, 2099)
 1. January 21, 2009: Updated "Compendium of error messages" to include new error codes (1075, 2055 through 2057)
 1. January 22, 2009: Updated "Compendium of error messages" to include new error code 2058
 1. January 23, 2009: Updated "Frontend errors" to update the list of exceptions thrown in the front-end.
 1. January 27, 2009: Updated "Compendium of error messages" to include new error codes (1076 through 1080, 2058 through 2066)
 1. February 2, 2009: Updated "Compendium of error messages" to include new error codes (1081 through 1082, 2067 through 2092, 6014)
 1. February 3, 2009: Updated "Compendium of error messages" to include new error codes (updated 2098 and 2099 to 2998 and 2999 respectively; added 2093 through 2101, 2103 through 2107)
 1. February 4, 2009: Updated "Compendium of error messages" to include new error codes (updated 1010, 1011, 1012 and 2106; added 2102, 2108 through 2114)
 1. February 5, 2009: Updated "Compendium of error messages" to include new error codes (updated 2043 and 2106; added 2115)
 1. February 11, 2009: Updated "Compendium of error messages" to include new error codes (2116 through 2121, 6015 and 6016)
 1. February 12, 2009: Updated "Compendium of error messages" to include new error code 2122
 1. April 10, 2009: Updated "Compendium of error messages" to replace error code 2110
 1. November 2, 2009: Updated "Compendium of error messages" to include new error code 1109

== References ==

 1. <> "Pig Developer Cookbook", October 21, 2008, http://wiki.apache.org/pig/PigDeveloperCookbook
 2. <> Santhosh Srinivasan, "Pig Error Handling Design", December 8, 2008, http://wiki.apache.org/pig/PigErrorHandlingDesign