httpd-dev mailing list archives

From Ian G <i...@iang.org>
Subject Re: dev Digest 11 Oct 2008 02:09:18 -0000 Issue 2699
Date Sat, 11 Oct 2008 13:06:18 GMT
dev-digest-help@httpd.apache.org wrote:

> ------------------------------------------------------------------------

> From:
> "Eric Covener" <covener@gmail.com>

> On Thu, Oct 9, 2008 at 5:59 AM, Ian G <iang@iang.org> wrote:
>>
>>> As we all know, this will not be in 2.2.10... Please recall that
>>> things must be in -trunk before being viable for backport to 2.2.x.
>> It's impossible to even express how disappointing this is ;(
>>
>> There are only two changes in TLS on the server side that have been
>> identified to have any effect on phishing [1].  TLS/SNI is the easy one.
> 
> What's the effect beyond making mass-vhosting easier?


Good question, and I'm afraid the answer is long and complex.

The intent is to break the vicious cycle that holds TLS
authentication down.

You've probably seen it a dozen times; I certainly have.  Every time
one of the Linux admins tries to set up an SSL webserver (because
some management flunkie like me thumps the desk), they discover they
can serve only one SSL website from their one machine.  And once
they get through that barrier, they cannot do the same for every
other website they maintain.
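For concreteness, this is the barrier that TLS/SNI removes: with
SNI, the client names the host inside the TLS handshake itself, so
one IP:port can carry many SSL vhosts.  A minimal sketch of what the
config could look like (hostnames and paths are invented, and this
assumes an httpd build with the SNI patch applied):

```apache
# Two HTTPS sites on one IP:port, distinguished by the SNI hostname
# the client sends in the TLS handshake.  Without SNI, the server
# must pick a certificate before it sees the Host: header, so only
# one cert (and hence one site) is possible per IP:port.
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.example-one.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example-one.pem
    SSLCertificateKeyFile /etc/ssl/private/example-one.key
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example-two.org
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example-two.pem
    SSLCertificateKeyFile /etc/ssl/private/example-two.key
</VirtualHost>
```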

So TLS knowledge is special, because it is non-shareable across
their many web sites;  eventually, it becomes a specialisation, only
done if desperate or if paid.

Out in the LAMPs world, nobody wants SSL, because there is no easy
"fix" for this installation addiction.  The lack of desire flows
through the entire package world: they do security by cookies, mail,
passwords, voodoo .. anything that can be done entirely within the
app, and they avoid anything that depends on machinery elsewhere,
like TLS.

E.g., many packages mangle URLs when HTTPS is used ... because
nobody serves them that way.  Nobody's going to fix it, because they
believe security lives in their own code, not in code elsewhere.
Vicious circle.  The result is that the whole field is limited,
specialised, shy and expensive.
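The URL-mangling failure is usually just a hardcoded scheme; the fix
is to build links from the request's actual scheme.  A trivial
sketch of the idea (the function name and arguments are mine, not
from any particular package):

```python
def absolute_url(host, path, https=False):
    """Build an absolute URL from the request's actual scheme,
    instead of hardcoding "http://", which breaks any site that
    happens to be served over HTTPS."""
    scheme = "https" if https else "http"
    return f"{scheme}://{host}{path}"

print(absolute_url("shop.example.org", "/cart", https=True))
# https://shop.example.org/cart
```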

So when something like phishing -- which is an authentication
failure -- came along in 2003, there were neither the skills nor the
people around to fix it.  Even now, somebody as well funded as
Mozilla has only around 2-4 people working on it, and Microsoft
doesn't prioritise it at all (though they also don't reveal what
they are not doing).

Why so few?  Nobody understands it.  In application space,
vanishingly few people understand TLS, let alone authentication at
the TLS level.  Why do so few understand what TLS authentication is
about?  Because nobody uses TLS in routine business; it's all Flash,
Javascript, MySQL, etc.  Why not?  These reasons:

  * one TLS-website per machine is not worth the trouble
  * getting a cert is a nightmare, techs don't do paperwork
  * configuration issues...

Removing these blockages will break the vicious circle.  Until
there is mass usage of TLS, the mass Linux market will not believe
it is useful, nor will they think it secures them.  Once there is
mass usage, authentication will be routine, and we will have a
systemic response to phishing.


>> A httpd fix will almost work by itself;  the browsers already did
>> their part [2].  Only the config changes implemented by all here are
>> needed on the web server to turn the LAMPs on in a million small but
>> secured sites.
> 
> There's still the issue of certificates and CPU time.


Yes (the second point above), acknowledged; getting a CA-signed cert
is too much grief for a Linux techie to go through.  It simply
doesn't make sense, unless one is desperate, paid, or suffering some
other syndrome.

However there are three responses emerging to that:

1.  A few CAs now do free certs (I have been involved with one of
them for 2.5 years, perhaps only so that we can get the lights
turned on for secure browsing).

2.  Client side changes.  Mozilla are now working to add what they
call KCM or Key Continuity Management to Firefox so that sites using
self-signed certs can also work effectively.  This is directly for
the above Linux crowd with their million machines and dozens of
small websites.  Microsoft is working in different ways to achieve
the same thing -- they are not saying it directly, but if you read
up on the CardKey product, it is designed to accommodate the
concepts of KCM.  Of course, they are trying to be higher up the
food chain, but the architectural direction is the same, because
they too have discovered that they can only secure users once the
users bootstrap up into TLS.
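KCM is essentially trust-on-first-use: remember the certificate a
site presented on first contact, and complain only when it changes.
A rough sketch of the idea in Python (the store format and function
name are mine, not Mozilla's):

```python
import hashlib

# Maps hostname -> SHA-256 fingerprint of the DER-encoded certificate
# seen on first contact.  A real client would persist this store.
pinned = {}

def kcm_check(hostname, der_cert):
    """Trust-on-first-use: pin on first sight, flag later changes."""
    fp = hashlib.sha256(der_cert).hexdigest()
    if hostname not in pinned:
        pinned[hostname] = fp   # first contact: remember this key
        return "pinned"
    if pinned[hostname] == fp:
        return "ok"             # same cert as last time: continue
    return "changed"            # key changed: warn the user
```

The first visit returns "pinned", repeat visits with the same cert
return "ok", and a different cert returns "changed" -- which is the
moment a KCM browser would interrupt the user.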

3.  The world of browser certificates is splitting into "hard certs"
and "easy certs".  This might be "EV" and "the others" although it's
still an argument-in-progress.

All three of these forces come from a desire *to make certs easier
to get* because those who are working on the security side of client
software have realised that TLS is used too infrequently, and this
makes phishing easy.

All of these things are oriented to breaking the vicious cycle of
TLS authentication unavailability.



CPU time can be ignored for this discussion:

  * Moore's law halves the cost every 18 months.
  * It might remain an issue for big sites, but those are a
    vanishingly small number of servers *and* people, when we think
    about the mass market of httpd.
  * The big-iron sites are well-balanced economic businesses; they
    can and will easily pay for any CPU they need.
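To put a number on the Moore's-law point, here is the
back-of-the-envelope arithmetic: if cost halves every 18 months, the
same CPU workload costs about 6% of today's price in six years.

```python
def relative_cost(years, halving_period=1.5):
    """Cost of a fixed CPU workload as a fraction of today's cost,
    assuming it halves every `halving_period` years (Moore's law)."""
    return 0.5 ** (years / halving_period)

# Six years is four halvings, so the same TLS handshake costs
# 1/16th of what it does today.
print(relative_cost(6))   # 0.0625
```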

The LAMPs people can't get it at a price they can afford -- their
own time, spread across all their tasks, on their one machine.
These are the people I am concerned about.  They don't care about
CPU, because most of their machines are humming along at near-zero
load; they are happy to increase the CPU demands because, like disk
space, CPU is cheaper than thinking about it.


>> What are the blockages?  Mozo have offered money but don't know what
>> to do or who to talk to?
> 
> Review has been public.  Nobody's opposed to SNI in the webserver, but
> AIUI the patch that implements it seems to have a troubled history
> with respect to integrating with all the per-directory quirks of SSL
> renegotiation in mod_ssl.
> 
> IMO the merits of SNI aren't the operative argument.


Agreed all [1].  What can be done to make the patch less troubled?

Money is still an option even if it isn't the right one [2].

iang




[1]  To be frank, I told Mozilla when they offered a few weeks back
that money wasn't the answer; the code has been done, it is in
review, and now we wait.  There is "nobody" to give money to, as
money can sometimes help code, but not review.

[2]  NLnet will also treat a request favourably, I am told, so that
is two foundations that are keen.  If you think review and
backporting (or whatever) would be helped by flying people together
or buying people off their jobs, say so.
