asterixdb-dev mailing list archives

From Mike Carey <dtab...@gmail.com>
Subject Re: Deadlock issue
Date Thu, 07 Jan 2016 15:16:26 GMT
The general transaction handling for such an exception, w.r.t. locking and
aborts, probably assumes that a total bailout is the answer.  Thus, swallowing
the exception and continuing may leave behind messes that a rollback would
otherwise clean up.  Feeds and transactions don't mix super well, it seems....
Watching how duplicate keys are handled for insert-from-query statements may
help you debug, especially if we change things to allow those statements to
succeed for all non-duplicate keys - which might make more sense for that case
anyway.
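To make that failure mode concrete, here is a minimal, self-contained Java
sketch; the DuplicateKeyException, LockManager, and Index classes below are
hypothetical stand-ins for illustration, not AsterixDB code. It contrasts
aborting on a duplicate key (rollback releases the locks) with swallowing the
exception and continuing, where the failed tuple's lock can stay held so that
a later query on the dataset blocks:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical stand-ins for illustration only; not AsterixDB classes.
    class DuplicateKeyException extends Exception {
        DuplicateKeyException(String key) { super("duplicate key: " + key); }
    }

    class LockManager {
        private final List<String> held = new ArrayList<>();
        void lock(String key)    { held.add(key); }
        void unlock(String key)  { held.remove(key); }
        void releaseAll()        { held.clear(); }   // what an abort/rollback would do
        List<String> heldLocks() { return new ArrayList<>(held); }
    }

    class Index {
        private final Map<String, String> rows = new HashMap<>();
        void insert(String key, String value) throws DuplicateKeyException {
            if (rows.containsKey(key)) throw new DuplicateKeyException(key);
            rows.put(key, value);
        }
    }

    public class FeedInsertSketch {
        public static void main(String[] args) {
            LockManager locks = new LockManager();
            Index index = new Index();
            String[][] feed = { {"k1", "a"}, {"k2", "b"}, {"k2", "b-again"}, {"k3", "c"} };

            for (String[] tuple : feed) {
                locks.lock(tuple[0]);                  // lock acquired before the insert
                try {
                    index.insert(tuple[0], tuple[1]);
                    locks.unlock(tuple[0]);            // normal path: lock released
                } catch (DuplicateKeyException e) {
                    // Skip-and-continue as described in this thread. If the lock
                    // (and any partial index work) for the failed tuple is NOT
                    // cleaned up here, it stays held, and a later query on the
                    // dataset then waits on it indefinitely.
                    // Uncommenting the next line models the cleanup an abort would do:
                    // locks.unlock(tuple[0]);
                }
            }
            System.out.println("locks still held after the feed: " + locks.heldLocks());
        }
    }

Under that reading, the hang seen on a later query would come from the lock
left behind by the skipped tuple rather than from anything in the query itself.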
On Jan 7, 2016 5:48 AM, "abdullah alamoudi" <bamousaa@gmail.com> wrote:

> Today, as I was working on fixing the handling of duplicate keys with feeds,
> everything seemed to work fine. Here is what we do when we encounter a
> duplicate key exception:
>
> 1. We remove the tuple that caused the exception.
> 2. We continue from where we stopped.
>
> The problem is that when I then query the dataset to check which records
> made it in, I get a deadlock.
>
> I have looked at the stack trace (attached) and I think the threads in the
> file are the relevant ones (see the thread-dump sketch after this message).
> Please have a look and let me know if you have a possible cause in mind.
>
> The threads are related to:
> 1. BufferCache.
> 2. Logging.
> 3. Locking.
>
> Let me know what you think. I can reproduce this bug; it has happened on
> 100% of my test runs.
>
> I will let you know when I solve it, but it is taking longer than I thought.
>
> Amoudi, Abdullah.
>

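For the lock-related threads in a trace like the one attached above, a generic
way to see which threads are stuck, and whether the JVM reports an actual lock
cycle, is the standard java.lang.management ThreadMXBean API. The following is
a minimal sketch (plain JDK, not tied to AsterixDB) that prints any deadlocked
threads plus every thread currently blocked or waiting on a lock:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    // Small helper for checking, from inside the JVM, which threads are
    // deadlocked or blocked on a lock -- a programmatic complement to
    // reading a stack-trace file by hand.
    public class LockDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();

            long[] deadlocked = mx.findDeadlockedThreads();   // null if no cycle
            if (deadlocked != null) {
                for (ThreadInfo info : mx.getThreadInfo(deadlocked, true, true)) {
                    System.out.printf("DEADLOCKED %s waiting on %s held by %s%n",
                            info.getThreadName(), info.getLockName(),
                            info.getLockOwnerName());
                }
            }

            // Also list threads that are merely blocked/waiting on a lock; a hang
            // caused by a lock that is simply never released is not a cycle and
            // will not show up in findDeadlockedThreads().
            for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
                switch (info.getThreadState()) {
                    case BLOCKED:
                    case WAITING:
                    case TIMED_WAITING:
                        System.out.printf("%-14s %s on %s%n", info.getThreadState(),
                                info.getThreadName(), info.getLockName());
                        break;
                    default:
                        break;
                }
            }
        }
    }

A hang caused by a single lock that is never released shows up only in the
second listing, not in findDeadlockedThreads(), which is worth keeping in mind
when deciding whether this is a true deadlock or a leaked lock.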