couchdb-dev mailing list archives

From Jason Smith <...@iriscouch.com>
Subject Re: All The Numbers -- View Engine Performance Benchmarks
Date Mon, 28 Jan 2013 05:37:34 GMT
Hey, Jan. This is a totally random and hypothetical idea:

Do you think there would be any speedup from using term_to_binary() and
binary_to_term() instead of encoding through JSON? The view server would of
course need to support that codec. I have already implemented the encoding
in erlang.js: https://github.com/iriscouch/erlang.js
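
Purely as an illustration (this is not erlang.js's actual API, and the tag
coverage is deliberately tiny), a binary_to_term-style decoder on the JS
side might start out like this:

```javascript
// Sketch of a binary_to_term-style decoder in JS, covering only three
// tags of the Erlang external term format. Hypothetical code, not the
// erlang.js API.
function decodeTerm(buf) {
  if (buf[0] !== 131) throw new Error('bad version byte');
  const tag = buf[1];
  switch (tag) {
    case 97: // SMALL_INTEGER_EXT: one unsigned byte
      return buf[2];
    case 98: // INTEGER_EXT: 32-bit signed big-endian
      return buf.readInt32BE(2);
    case 107: { // STRING_EXT: 16-bit big-endian length, then raw bytes
      const len = buf.readUInt16BE(2);
      return buf.slice(4, 4 + len).toString('latin1');
    }
    default:
      throw new Error('unsupported tag ' + tag);
  }
}

// term_to_binary(42) on the Erlang side yields <<131,97,42>>
console.log(decodeTerm(Buffer.from([131, 97, 42]))); // prints 42
```

A real codec would of course need the full tag set (tuples, lists, maps,
atoms, floats), which is where the custom-decoding cost comes in.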

My suspicion is that there would be minor or zero speedup. The Erlang side
would get faster (term_to_binary is fast), but the JS side would get slower
(custom decoding rather than the heavily optimized JSON.parse()). Since the
JS VM is the slightly faster of the two, the total change would reflect
that trade-off.

But I thought I'd share the idea.

On Sun, Jan 27, 2013 at 12:50 PM, Jan Lehnardt <jan@apache.org> wrote:

>
> On Jan 27, 2013, at 13:22 , Alexander Shorin <kxepal@gmail.com> wrote:
>
> > On Sun, Jan 27, 2013 at 3:55 PM, Jason Smith <jhs@iriscouch.com> wrote:
> >>
> >> * Very little difference in different implementations (because stdio is
> the
> >> bottleneck)
> >
> > Why is stdio the bottleneck? I'm interested in the underlying reasons.
>
> It is actually not the stdio, but the serialisation from Erlang terms
> to JSON to JS objects to JSON to Erlang terms.
>
> Cheers
> Jan
> --
>
>
> >
> > As for my experience, the protocol design doesn't allow the view and
> > query servers to work as fast as they could. For example, say we have
> > 50 ddocs with validate functions. Each document save would execute
> > from 100 commands (50 resets + 50 ddoc validate_doc_update calls) up
> > to 150 commands (plus ddoc cache commands), while it would be
> > possible to process them in bulk.
> >
> > --
> > ,,,^..^,,,
>
>


-- 
Iris Couch
