falcon-dev mailing list archives

From "pavan kumar kolamuri (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FALCON-1068) When scheduling a process, Falcon throws "Bad Request;Could not commit transaction due to exception during persistence"
Date Mon, 16 Mar 2015 12:55:38 GMT

    [ https://issues.apache.org/jira/browse/FALCON-1068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14363162#comment-14363162 ]

pavan kumar kolamuri commented on FALCON-1068:
----------------------------------------------

Yes [~sowmyaramesh], what you said is correct. But the lock timeout is 5 minutes: if a
transaction holds the lock on an entity and then fails for some reason, no graph operations
on that entity are allowed for the next 5 minutes (see the sketch below).
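
To make the failure mode concrete, here is a minimal hypothetical sketch against the
Blueprints TransactionalGraph API that Falcon's graph layer is built on; the class, method,
and property names are illustrative, not Falcon's actual code:

{code}
import com.tinkerpop.blueprints.TransactionalGraph;
import com.tinkerpop.blueprints.Vertex;

public class InstanceVertexSketch {

    // Hypothetical: add one vertex for a process instance. Without the
    // rollback() in the catch block, a failed transaction would keep its
    // locks and block all other graph operations on the same entity until
    // the 5-minute lock timeout expires.
    public void addInstanceVertex(TransactionalGraph graph, String instanceId) {
        try {
            Vertex v = graph.addVertex(null);   // acquires entity locks
            v.setProperty("name", instanceId);
            graph.commit();                     // releases locks on success
        } catch (RuntimeException e) {
            graph.rollback();                   // releases locks on failure
            throw e;
        }
    }
}
{code}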

1) We need to handle transaction failures with retries, and roll back even if the transaction
still fails after the retries (see the first sketch after this list).

2) Also, can we do this: while adding a process instance to the graph DB we have a few
operations to perform, so why can't we commit after every operation? That would help reduce
lock contention (see the second sketch after this list).
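
For (1), a minimal sketch of a retry-with-rollback helper, again assuming the Blueprints
TransactionalGraph API; GraphTransactionUtil, runWithRetry, and maxRetries are hypothetical
names, not Falcon's actual code:

{code}
import com.tinkerpop.blueprints.TransactionalGraph;

public final class GraphTransactionUtil {

    private GraphTransactionUtil() {}

    // Runs one graph mutation, committing on success and rolling back on
    // failure so the entity lock is released immediately instead of being
    // held until the 5-minute lock timeout expires. Retries up to
    // maxRetries times and rethrows the last failure if every attempt fails.
    public static void runWithRetry(TransactionalGraph graph,
                                    Runnable mutation, int maxRetries) {
        RuntimeException failure = null;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                mutation.run();
                graph.commit();     // persist this step and release locks
                return;
            } catch (RuntimeException e) {
                graph.rollback();   // release locks right away, then retry
                failure = e;
            }
        }
        throw failure;              // still failing after all retries
    }
}
{code}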
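And for (2), the multi-operation instance add would then commit after each step rather than
once at the end; the step bodies below are placeholders for whatever the real per-instance
operations are:

{code}
import com.tinkerpop.blueprints.TransactionalGraph;

public class ProcessInstanceSketch {

    // Hypothetical: split the single big "add process instance" transaction
    // into small steps, each committed (and its locks released) on its own.
    public void addProcessInstance(TransactionalGraph graph, String instanceId) {
        GraphTransactionUtil.runWithRetry(graph, () -> { /* add instance vertex */ }, 3);
        GraphTransactionUtil.runWithRetry(graph, () -> { /* add edge to process entity */ }, 3);
        GraphTransactionUtil.runWithRetry(graph, () -> { /* add input/output feed edges */ }, 3);
    }
}
{code}

The trade-off is that the instance add is no longer atomic: a failure between commits leaves
a partially written instance, so each step would need to be idempotent or cleaned up on a
later attempt.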


> When scheduling a process, Falcon throws "Bad Request;Could not commit transaction due to exception during persistence"
> -----------------------------------------------------------------------------------------------------------------------
>
>                 Key: FALCON-1068
>                 URL: https://issues.apache.org/jira/browse/FALCON-1068
>             Project: Falcon
>          Issue Type: Bug
>            Reporter: Adam Kawa
>         Attachments: falcon.application.log.FALCON-1068.rtf
>
>
> I have a simple script "manage-entity.sh process dss" that deletes, submits, and schedules
a Falcon process.
> A couple of times per week, I get a "FalconCLIException: Bad Request;Could not commit
transaction due to exception during persistence" when submitting the process.
> The workaround is to restart the Falcon server...
> e.g.:
> {code}
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> Stacktrace:
> org.apache.falcon.client.FalconCLIException: Bad Request;Could not commit transaction due to exception during persistence
> 	at org.apache.falcon.client.FalconCLIException.fromReponse(FalconCLIException.java:44)
> 	at org.apache.falcon.client.FalconClient.checkIfSuccessful(FalconClient.java:1162)
> 	at org.apache.falcon.client.FalconClient.sendEntityRequestWithObject(FalconClient.java:684)
> 	at org.apache.falcon.client.FalconClient.submitAndSchedule(FalconClient.java:347)
> 	at org.apache.falcon.cli.FalconCLI.entityCommand(FalconCLI.java:371)
> 	at org.apache.falcon.cli.FalconCLI.run(FalconCLI.java:182)
> 	at org.apache.falcon.cli.FalconCLI.main(FalconCLI.java:132)
> $ ./falcon-restart.sh
> Hadoop is installed, adding hadoop classpath to falcon classpath
> Hadoop is installed, adding hadoop classpath to falcon classpath
> falcon started using hadoop version:  Hadoop 2.5.0
> $ ./manage-entity.sh process dss my-process.xml
> falcon/default/my-process(process) removed successfully (KILLED in ENGINE)
> schedule/default/my-process(process) scheduled successfully
> submit/falcon/default/Submit successful (process) my-process
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
