struts-dev mailing list archives

From Ralf Fischer <thema...@googlemail.com>
Subject Re: S2 as JSR for Action Framework
Date Mon, 25 Aug 2008 07:08:31 GMT
Hi,

On 25.08.2008 at 06:30, Jeromy Evans wrote:

> Don Brown wrote:
>> On Mon, Aug 25, 2008 at 12:54 PM, Martin Cooper  
>> <martinc@apache.org> wrote:
>>
>>> Another option is a client-side component-based framework like Ext  
>>> or Flex
>>> running directly against web services, RESTful or otherwise. No  
>>> server-side
>>> web framework required. Of course, you could use something
>>> server-side like
>>> DWR to facilitate working with web services, or Jersey for RESTful  
>>> services,
>>> but that would be a choice rather than a requirement.
>>>
>>
>> This is a nice design, when you can do it. GWT is also a good way to
>> build these types of apps.  Unfortunately, they can easily break much
>> of what makes the web what it is - the back button, unique,
>> addressable URIs, accessibility, search engine crawling, etc.
>> Therefore, I think some sort of server-side web framework will
>> usually be necessary; however, I don't think it has to go as far as
>> JSF, where they try to push all the state to the server.  I was
>> talking with a guy here at work, who is looking to start using GWT
>> more, about how and where a plain HTML view of the application fits.
>> He wants to do very dynamic, client-side-heavy views, but still needs
>> to support search engines and REST clients.  What if you use Jersey
>> for your REST API, GWT or straight jQuery for your client-side UI,
>> and then have Jersey + something generate HTML views of your REST
>> API, which you could use for search engines and for developers
>> wanting to browse and interact with your application?  If you can
>> have the HTML representation of your REST API auto-generated, you
>> wouldn't have to maintain two different interfaces, and you could go
>> fully nuts with your client-side-heavy app without having to worry
>> about accessibility or search engine issues.
>>
>> Don
>>
>>
>
>
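(Just to make that idea concrete: with Jersey, one resource serving both
representations could look roughly like the sketch below. The resource
class, the Article URI and the hand-built JSON/HTML strings are made up
for illustration only, not taken from any existing project.)

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Path("/articles/{id}")
public class ArticleResource {

    // JSON representation for the rich client (GWT, jQuery, Flex, ...).
    // A real application would use a JSON library and a persistence
    // layer instead of string concatenation.
    @GET
    @Produces("application/json")
    public String getJson(@PathParam("id") String id) {
        return "{\"id\": \"" + id + "\", \"title\": \"Example title\"}";
    }

    // Plain HTML representation of the same resource, for search engines
    // and for developers browsing the API.  Ideally this view is
    // generated from the same data as the JSON view, so there is only
    // one interface to maintain.
    @GET
    @Produces("text/html")
    public String getHtml(@PathParam("id") String id) {
        return "<html><body><h1>Example title</h1>"
                + "<p>HTML view of article " + id + "</p></body></html>";
    }
}
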
> [rant] Personally I think search engines need to solve this
> problem.  The era of crawling sites needs to close.  As a publisher
> of content I should be able to connect to a Google API and publish
> my content and URIs to them in a standard machine-friendly format
> ready for indexing.  Alternatively, I could implement a dedicated
> service for them to consume instead of emulating pages of content in
> a non-page-oriented application.  Then my application can be what it
> needs to be, in any form suitable for my users, instead of
> perpetuating the artificial SEO-optimization industry. [/rant]

Well, there is something like that: you can publish information to
Google or other search engines in one single file. You serve a file
with a name like http://foo.com/site.xml.gz [1] which holds a
description of your whole site. Sure, it's not an API, but it gets
pretty close to what you mention.
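
Just to sketch the shape of such a file, a tiny made-up helper like the
one below could write out the sitemaps.org XML format for a list of
URIs (a real one would also escape the URIs and add things like
<lastmod> where known):

import java.io.IOException;
import java.io.Writer;
import java.util.List;

public class SitemapWriter {

    // Writes a minimal sitemap in the sitemaps.org XML format.
    public static void write(Writer out, List<String> locations) throws IOException {
        out.write("<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n");
        out.write("<urlset xmlns=\"http://www.sitemaps.org/schemas/sitemap/0.9\">\n");
        for (String loc : locations) {
            out.write("  <url><loc>" + loc + "</loc></url>\n");
        }
        out.write("</urlset>\n");
    }
}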

Despite that, I think Struts (2) is in a rather good position. The
biggest deficit is IMO in documentation, and in a comprehensible,
easy-to-use taglib with simple AJAX functions that encapsulates all
the fuzzy client-side stuff for users who don't want to learn
JavaScript, but at the same time is easy for pros to extend.

Both issues are known to the readers of this list and are (more or
less) being worked on.

Cheers,
-Ralf

[1] http://groups.google.com/group/google-sitemaps/topics?start=20&sa=N

> Anyway, despite that, I took this approach recently with a
> client-heavy (single-page) application myself, with the exception of
> auto-generation of the HTML.  Basically:
> - mandated that the client include a custom header (X-RequestedBy)
> and a signature in the request
> - if the headers were present, the S2 REST plugin handled the request
> and returned the resource in the requested content type. I just had
> to build the view myself for HTML.
> - if the header was not present and it was a GET, the REST plugin
> returned the HTML view and SiteMesh decorated it as a full HTML page.
> - if a resource was requested directly and the user had JavaScript,
> they were redirected to the rich client with the best-guess initial
> state based on the URI
> - all flow control was managed on the client.
>
> That meant that one action could service requests for the resource
> from rich clients and support search engine requests for the same
> content.
> Search engines could browse the site through the same content spread
> over many little well-formed pages.
> Users accessing the site via the search engine's sub-URI would see
> the rich client with appropriate initial state derived from the URI.
> On the client side, sensible URIs could still be used in links, and
> listeners adjusted the content type when appropriate.
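
(As a rough Struts 2 sketch of that header check, an interceptor along
these lines could expose the decision to the rest of the request
handling; the class name and the context key are made up, and the real
implementation surely differs.)

import javax.servlet.http.HttpServletRequest;

import org.apache.struts2.ServletActionContext;

import com.opensymphony.xwork2.ActionInvocation;
import com.opensymphony.xwork2.interceptor.AbstractInterceptor;

public class RequestedByInterceptor extends AbstractInterceptor {

    @Override
    public String intercept(ActionInvocation invocation) throws Exception {
        HttpServletRequest request = ServletActionContext.getRequest();

        // Rich clients identify themselves with the custom header; anything
        // else (search engines, plain browsers doing a GET) gets the HTML view.
        boolean richClient = request.getHeader("X-RequestedBy") != null;
        invocation.getInvocationContext().put("richClient", Boolean.valueOf(richClient));

        return invocation.invoke();
    }
}
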
>
> Users without JS could get by but were a low priority.  Users with
> screen readers are still a challenge, but not due to Struts.
>
> This approach wasn't as simple as it should be, but it confirms that
> Don's idea is feasible.  The biggest problems were in fact IE6 memory
> leaks and the poor performance of JavaScript in most browsers.  A
> Flex client could have used the same services without a problem.  If
> an auto-generated, bland HTML view plus a sitemap were provided for
> users without JavaScript/Flash, you'd eliminate the double-up on the
> views for search engines.
>
> I definitely like the direction these discussions are going.


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@struts.apache.org
For additional commands, e-mail: dev-help@struts.apache.org

