apr-dev mailing list archives

From Luke Kenneth Casson Leighton <l...@samba-tng.org>
Subject Re: [RFC] Network Abstraction Layer
Date Fri, 02 Mar 2001 12:32:56 GMT
> > so, one bucket can deal with the NetBIOS header.
> 
> Careful. We may be getting some terminology mixed up here.

we?  nahh, just me

> I think we're
> definitely on the same page :-), but I'd like to clarify...

appreciate it.
 
> *) in the above example, the brigade has a HEAP or POOL or some other kind
>    of simple bucket inserted as the first element. It refers to some memory
>    with the appropriate data.
> 
> *) it also has a FILE bucket and an EOS bucket
> 
> *) a filter inserted the bucket into the brigade
> 
> *) in the Samba case, I suspect your NetBIOS header is just another HEAP
>    bucket, inserted by a NetBIOS *filter*

urrrr.... *headache*.  okay.

and the implementation on NT would just do NBT, which bypasses IPX, NETBEUI,
TCP/IP, DECnet 3.0, carrier pigeons etc., whilst on samba (or any other
user-level implementation) you have to do it as buckets/filters.

... urr... :)
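so, if i'm reading that right, the NetBIOS end just becomes a filter that
shoves one more HEAP bucket onto the front of the brigade.  a minimal sketch
of what i think that looks like (the function name is made up, and the
bucket-create calls are today's apr-util spellings, which won't match the
2.0-dev tree exactly):

#include <apr_buckets.h>

/* sketch only: prepend a NetBIOS session header (4 bytes) to an outgoing
 * brigade by inserting a HEAP bucket at the head.  the rest of the brigade
 * (HEAP / FILE / EOS buckets) is left untouched. */
static apr_status_t nbt_prepend_header(apr_bucket_brigade *bb,
                                       apr_bucket_alloc_t *list,
                                       apr_size_t payload_len)
{
    unsigned char hdr[4];
    apr_bucket *b;

    hdr[0] = 0x00;                         /* session message */
    hdr[1] = (payload_len >> 16) & 0x01;   /* length extension bit */
    hdr[2] = (payload_len >> 8)  & 0xff;
    hdr[3] =  payload_len        & 0xff;

    /* free_func == NULL makes the HEAP bucket copy the bytes, so a stack
     * buffer is fine here */
    b = apr_bucket_heap_create((const char *)hdr, sizeof(hdr), NULL, list);
    APR_BRIGADE_INSERT_HEAD(bb, b);
    return APR_SUCCESS;
}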


> > one with the SMB header.
> > 
> > one with the IPC$ layer.
> > 
> > one with the SMBtrans layer.
> > 
> > one with the DCE/RPC pipe layer.
> > 
> > one with the DCE/RPC header layer.
> > 
> > one with the DCE/RPC data.
> 
> Again, I would think *filters* are inserting standard bucket types into the
> brigade. You wouldn't necessarily need to create new bucket types for each
> of the above concepts. You could definitely create filters for each one,
> however.

okay.

getting educated, here.
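to make sure i've got the picture: each of those layers would be one filter
in a chain, and none of them needs its own bucket type.  purely illustrative
(none of this is a real apr API, it's just the shape):

#include <apr_buckets.h>

/* one hypothetical filter per protocol layer; each one only inserts /
 * consumes standard bucket types (HEAP, TRANSIENT, ...) and then hands
 * the brigade to the next filter in line. */
typedef struct smb_layer_filter smb_layer_filter;
struct smb_layer_filter {
    const char *name;   /* "netbios", "smb", "ipc$", "smbtrans", ... */
    apr_status_t (*pass)(smb_layer_filter *next, apr_bucket_brigade *bb);
    smb_layer_filter *next;
};

static apr_status_t pass_down(smb_layer_filter *f, apr_bucket_brigade *bb)
{
    return f ? f->pass(f->next, bb) : APR_SUCCESS;
}

the seven layers from the list above would then just be seven of these in a
row: netbios -> smb -> ipc$ -> smbtrans -> dce/rpc-pipe -> dce/rpc-header ->
dce/rpc-data.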
 
> >...
> > ... we still need an apr_bucket_NAL which can "bounce" in-and-out of
> > buckets, _even though it may actually be implemented as buckets itself,
> > underneath!_ and handle the possibility that on one particular OS - e.g.
> > NT - is in fact a call to a kernel-level function, e.g. CreateNamedPipe().
> 
> Hmm. I'm thinking that a filter would just choose the right kind of bucket
> to insert into the brigade. It could insert a SOCKET bucket, a PIPE bucket,
> or it could read data from the OS over an OS-specific API and then insert a
> HEAP bucket pointing to that memory.

ah ha.  so... a pulling-approach.

... hmmm...
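(just to check i'm following: the "read from network" end would then look
something like this.  everything here is invented for illustration, and
again the bucket-create signatures are today's apr-util ones.)

#include <apr_buckets.h>
#include <apr_network_io.h>

/* hypothetical reader callback for the OS-specific case */
typedef apr_size_t (*os_read_fn)(void *handle, char *buf, apr_size_t max);

typedef enum { NBT_SRC_SOCKET, NBT_SRC_PIPE, NBT_SRC_OSAPI } nbt_src_kind;

static apr_status_t netbios_pull(nbt_src_kind kind, void *handle,
                                 os_read_fn os_read,
                                 apr_bucket_brigade *bb,
                                 apr_bucket_alloc_t *list)
{
    apr_bucket *b = NULL;

    switch (kind) {
    case NBT_SRC_SOCKET:    /* later reads pull straight off the socket */
        b = apr_bucket_socket_create((apr_socket_t *)handle, list);
        break;
    case NBT_SRC_PIPE:      /* e.g. a pipe / unix-domain fd to another daemon */
        b = apr_bucket_pipe_create((apr_file_t *)handle, list);
        break;
    case NBT_SRC_OSAPI: {   /* OS-specific call (NT NBT, say), then a HEAP bucket */
        char buf[8192];
        apr_size_t len = os_read(handle, buf, sizeof(buf));
        b = apr_bucket_heap_create(buf, len, NULL, list);  /* NULL => copy */
        break;
    }
    }
    APR_BRIGADE_INSERT_TAIL(bb, b);
    return APR_SUCCESS;
}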

so, you'd be quite happy to educate people that if we want someone to do,
say, the IPC$ and the SMBtrans layer above (OOPS!  i missed one out!!!
the SMBtrans "named pipe" layer!!! that's 8 layers not 7 :) :)), they have
to know how to deal with three layers and their interactions.  four,
actually, because they're implementing two of them.

what i'd really like is for them to be told, "you write this program; it is
implemented as a unix-domain-socket, it's a 'NamedPipe' daemon.  to get data
to / from the layer below you, call these functions (SMBipcread,
SMBipcwrite, or apr_nal_create("smbipc$") preferably).  provide _us_ with
functions we can call that get data to / from _you_."

that is a much more obvious, isolated [self-contained], rewarding task,
and, i think, a safer and less daunting one.

now, if apr_nal_create("smbipc$") _happens_ to be implemented in terms of
filters / buckets, and there happens to be a performance hit because you're
doing ux-dom-sock file i/o instead of filling data into the same memory
block, well, i'll take the hit: the payoff is code maintainability as this
project turns from a 300,000-line deal into the order of millions of lines.

> > or even, later on, let's say that someone decides to provide a linux
> > kernel-level SMB layer or even a NetBIOS kernel API.
> 
> Yup. When this happens, I'd say there are a couple choices:
> 
> 1) write a new bucket type. your "read from network" filter could be
>    configured to insert the new bucket type, rather than a standard type.

> 2) write a new filter which is inserted as the "read from network" filter.
>    it does the job of talking to the new kernel API, and populating the
>    brigade with standard bucket types containing the data.
> 
> 3) a combination of the above two: rather than a configurable filter, you
>    simply insert a new filter which always uses the new custom bucket type.

...

i'm not getting it, then, am i :)

can this be done across several different programs?

i.e. you have one user-space daemon that handles the NetBIOS layer for
you (which is direct-to-kernel if you have linux kernel netbios support,
or you call NT's NBT routines)

and you have one user-space program that _uses_ the NetBIOS layer?

etc.?


> 
> > as you can see, seven levels of buckets like that - all of which a
> 
> Shouldn't this be seven levels of filters?

whoops: sorry :)
 
> In the Apache case, each little black box is a filter. It gets a brigade,
> it operates on that brigade, then it passes it. It doesn't have to know
> where the brigade came from, or who is next in line. It usually doesn't have
> to know anything about the bucket types (just that it can read()).

putting everything into one program, where that one program is turning into
10 million lines of code, _that_ scares me.  i dunno about you.

:)
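(for my own sanity, here's the black-box shape as i understand it, written
against the current httpd filter signatures.  the 2.0-dev tree of the day
will differ in the details, so treat this as a sketch:)

#include "httpd.h"
#include "util_filter.h"
#include "apr_buckets.h"

/* a do-nothing output filter: take a brigade, peek at it (read() only),
 * pass it on.  it never needs to know which filter fed it or which one
 * comes next. */
static apr_status_t passthru_filter(ap_filter_t *f, apr_bucket_brigade *bb)
{
    apr_bucket *b;

    for (b = APR_BRIGADE_FIRST(bb);
         b != APR_BRIGADE_SENTINEL(bb);
         b = APR_BUCKET_NEXT(b)) {
        const char *data;
        apr_size_t len;

        if (APR_BUCKET_IS_EOS(b))
            break;
        apr_bucket_read(b, &data, &len, APR_BLOCK_READ);  /* look, don't touch */
    }
    return ap_pass_brigade(f->next, bb);   /* hand the brigade down the line */
}

registration is one ap_register_output_filter() call; the filter itself
stays oblivious to its neighbours.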

> > what you think?
> 
> It sounds great! I'm encouraged by your interest in the filters, buckets,
> and brigades. It seems to be a matter of some details at the endpoints of
> the filter stack, to determine whether the custom socket is a bucket vs a
> filter, but that should be straightforward.

cool.
 
> 
> Something that I just thought of. Apache operates its I/O in two modes:
> 
> 1) it "pulls" data from an input filter stack
> 
> [ processes it ]
> 
> 2) it "pushes" data into the output filter stack
> 
> 
> I could see a case where the following could be set up:
> 
>               DRIVER
>               /    \
> INPUT-FUNCTION      PROCESS-FILTER -> PROCESS-FILTER -> OUTPUT-FILTER
> 
> The driver sits in a loop, pulling content (a brigade) from (replaceable)
> input function, and then simply shoves that brigade into a processing
> filter. Those filters do all the needed work and just keep moving stuff down
> the line. Eventually, it drops back to the (same) network in the
> output-filter.
> 

and the "input-function" itself could be a process-filter on some OSes but
not on others!

>                                  DRIVER
>                                  /    \
> INPUT-FILTER-PF-PF-INPUT-FUNCTION      PF -> PF -> OUTPUT-FILTER
> 

whoops, that's not exactly right.

in the model i am thinking of, the "Driver" is hidden by the apr_nal_* API
(and is basically the means to decide, on a per-OS basis, whether to use
kernel-level drivers or another filter chain).
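put another way, the loop itself is trivial; what apr_nal_* hides is which
function gets plugged in as the input side.  a sketch (nothing here is real
API, the names are invented):

#include <apr_buckets.h>

/* the replaceable input function and the entry point of the processing
 * filter chain, both hypothetical */
typedef apr_status_t (*nal_input_fn)(void *ctx, apr_bucket_brigade **bb);
typedef apr_status_t (*nal_pass_fn) (void *ctx, apr_bucket_brigade *bb);

static apr_status_t nal_driver(void *ctx, nal_input_fn get, nal_pass_fn put)
{
    apr_status_t rv;
    apr_bucket_brigade *bb;

    for (;;) {
        if ((rv = get(ctx, &bb)) != APR_SUCCESS)   /* pull from the input side */
            return rv;
        if ((rv = put(ctx, bb)) != APR_SUCCESS)    /* shove into the filters   */
            return rv;
    }
}

on NT, get() might be a kernel call; on unix, it might itself be the tail of
another filter chain.  the caller can't tell, which is the whole point.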


> Not really sure. Just blue-skying a thought here. It would get complicated a
> bit by certain authentication protocols that need to speak to the output,
> but not run through the filter stack. Ah well...

correct.   which is why we're proposing that the apr_nal_* API contains an
authentication pointer parameter.  thing.

> [ in the Apache case, the filters in the stack are "instantiated" with
>   references to the current connection and request objects. conceivably,
>   Samba would do a similar thing, and it could use the connection object to
>   speak back to the client. ]
> 
> Oh. Silly me. You could set it up such that the processing filters simply
> wouldn't be added until after the authentication occurs. The auth filter
> would simply pass bits-for-the-client to the next filter. More bits will
> arrive at the auth filter, it would verify the auth, and trigger the
> insertion of some processing filters.

oodear :)

... yep: you need to dynamically decide which authentication mechanism
you're going to use, basically.

or better, you say, "stuff this, i'm not going to deal with all that:
we'll define an API so that we can have _one_ processing filter that does
'authentication-passing', and you _must_ pre-process and conform to that
filter's API."
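so the "auth gate" idea would come out something like this.  again a sketch
with invented names, not an existing API:

#include <apr_buckets.h>

/* until the credentials verify, brigades flow straight through towards the
 * client; once they do, the real processing filters get spliced in after
 * this one.  all the callbacks are supplied by whoever builds the stack. */
typedef struct auth_gate auth_gate;
struct auth_gate {
    int authed;
    int          (*verify)(auth_gate *g, apr_bucket_brigade *bb);
    void         (*insert_processing_filters)(auth_gate *g);
    apr_status_t (*pass_next)(auth_gate *g, apr_bucket_brigade *bb);
};

static apr_status_t auth_gate_filter(auth_gate *g, apr_bucket_brigade *bb)
{
    if (!g->authed && g->verify(g, bb)) {
        g->authed = 1;
        g->insert_processing_filters(g);   /* only now add the real PFs */
    }
    return g->pass_next(g, bb);            /* bits still reach the client */
}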




> Hoo boy. This is where I step out and let you guys deal with it :-). Just
> ask about Apache's filters, the buckets and brigades, and I (and others
> here!) can handle that... :-)

teehee.

okay.

i didn't mention authentication, did i?  whoops :)

okay.  there are two points at which authentication can occur.  well...
we've only chosen to implement two of the points; the code we wish to plug
in, however, can have several more.

there's one authentication point at the SMB level, and another one at the
DCE/RPC level.

so, what i came up with in the TNG architecture was a means to pass unix
uid, gid and unix-groups over a unix domain socket, plus some other
NT-domain-related session / authentication information.

this allows one [isolated] program - smbd - to tell another [isolated]
program - lsarpcd, samrd, netlogond etc. - what the user context is.

smbd in TNG knows absolutely NOTHING about dce/rpc.  zero.

and lsarpcd, samrd and netlogond know zip about SMB.

that kind of isolation, other than security context info, is... well...
kinda important :)
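for flavour, the kind of user-context record smbd hands over the unix domain
socket looks very roughly like this.  the field names and layout here are
invented for illustration; the real TNG wire format is its own thing:

#include <sys/types.h>

#define NGROUPS_SKETCH 32

struct user_context_msg {
    uid_t  uid;
    gid_t  gid;
    int    ngroups;
    gid_t  groups[NGROUPS_SKETCH];
    char   nt_domain[64];           /* NT-domain / session info, abbreviated */
    char   nt_username[64];
    unsigned char session_key[16];
};

/* smbd fills this in after SMB-level authentication and writes it down the
 * unix domain socket before any DCE/RPC traffic flows; lsarpcd / samrd /
 * netlogond read it and never have to know a thing about SMB. */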


so... we combine the best of both approaches, neh?

lukes

 ----- Luke Kenneth Casson Leighton <lkcl@samba-tng.org> -----

"i want a world of dreams, run by near-sighted visionaries"
"good.  that's them sorted out.  now, on _this_ world..."


