couchdb-marketing mailing list archives

From Jan Lehnardt <>
Subject Getting people to test CouchDB 2.0 alpha releases
Date Thu, 14 Jan 2016 11:45:10 GMT
Dear marketing team (cc dev@),

over the holiday break I made the first build of CouchDB 2.0 available and wrote up a
document on how to get started with testing it. I’ve also used Twitter to spread the word,
and we had it in last week’s Weekly News.

But we’re not seeing a lot of people giving these builds a spin. I’m not 100% sure what
to expect, but I would have expected a *little* more buzz around this :)

Can you help brainstorm a few ideas on how to get this in front of more people, and how to
get them to try out CouchDB 2.0 alpha?

These things are circling in the back of my head:

- Maybe the doc is too dense and not fun enough, or too badly written. Please suggest any edits
you think would help (or scrap it and write an alternative, if you think it can’t be saved).

- Some people reply with “do you know when it is out of alpha and in beta?”, because they
think alpha is too early to get into the game. I usually reply that we go beta as soon as they’ve
reported all the issues they can find, but that hasn’t helped yet ;) — How do we convince those
who *would* help to help earlier?

- Can we simplify the getting started experience?

- Should we host a public cluster somewhere, that people can play with?

- How do we get client library authors to test with CouchDB 2.0? (Can we come up with a .travis.yml
file that they could use in a 2.0 branch of their tree? Maybe PouchDB has something there already.)

- Can we simplify the procedure of how to report issues?

- Can we put a feedback form into Fauxton? (maybe an embedded Google Form is enough?)

- Can we put a web-chat pane into Fauxton, so people get a quick way to get into our IRC channel?

- Can we track installations somehow, e.g. along the lines of “send anonymous data to the
developers”, where we could track the operating system, cluster config, maybe data sizes,
etc.? (All opt-in, of course.)

- Can we make this an MMORPG? Maybe we draw up a huge matrix of test configurations that we’d
like to see tested, and people get points for filling it out, and we crown a top-alpha-tester champion
or something?

- Something similar for qualified bug reports in JIRA?

- Are there publications we should be talking to?
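
To make the .travis.yml idea above a bit more concrete, here is a rough sketch of what a client
library author might drop into a 2.0 branch. Treat everything in it as an assumption: the Docker
image and tag, the port, and the env variable name are placeholders that would need checking
against whatever 2.0 alpha artifacts we actually publish.

```yaml
# Hypothetical sketch only -- image name/tag and env variable
# are placeholders, not a tested configuration.
language: node_js
node_js:
  - "4"
sudo: required
services:
  - docker
before_install:
  # klaemo/couchdb is a community Docker image; a 2.0 tag is assumed here
  - docker run -d -p 5984:5984 --name couch klaemo/couchdb:2.0.0
  # wait until the node answers before running the suite
  - until curl -s http://127.0.0.1:5984/; do sleep 1; done
  # 2.0 no longer creates the system databases automatically
  - curl -X PUT http://127.0.0.1:5984/_users
  - curl -X PUT http://127.0.0.1:5984/_replicator
script:
  - COUCH_URL=http://127.0.0.1:5984 npm test
```

If PouchDB already has a setup like this, starting from theirs would probably beat starting from scratch.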


* * *

For all ideas, we should consider the effort to set them up and to maintain them continuously.

Let’s do this! :)

