lucene-java-user mailing list archives

From dan2000 <liu...@ntlworld.com>
Subject Re: TermQuery doesn't support non-english charecters
Date Sun, 09 Jul 2006 12:29:41 GMT

Yes, myField is a tokenized field. I've used ChineseAnalyzer. Here is an
example text: ??

Let me explain exactly what I want.

myField is a tokenized field:
new Field("key", key, Field.Store.YES, Field.Index.TOKENIZED)
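
For context, here is roughly how I build the document. This is only a sketch
assuming the Lucene 2.0 API and the contrib ChineseAnalyzer, and the extra
untokenized field "keyExact" is a hypothetical addition I am wondering about
(see the lookup sketch near the end of this message):

import org.apache.lucene.analysis.cn.ChineseAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexKeySketch {
    public static void main(String[] args) throws Exception {
        String key = "...";  // placeholder for the real Chinese key

        Document doc = new Document();
        // Tokenized copy, exactly as in my current code (analyzed search).
        doc.add(new Field("key", key, Field.Store.YES, Field.Index.TOKENIZED));
        // Hypothetical untokenized copy: indexed as one single term,
        // so it could be matched exactly with a TermQuery later on.
        doc.add(new Field("keyExact", key, Field.Store.NO, Field.Index.UN_TOKENIZED));

        IndexWriter writer = new IndexWriter("/tmp/myindex", new ChineseAnalyzer(), true);
        writer.addDocument(doc);
        writer.close();
    }
}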

I sometimes need to find the exact match. What would be the best way to find
an exact match for a tokenized field? I've tried:
Query query = new QueryParser(myField, myLanguageAnalyzer).parse(myField + ":" + myKey);
mySearcher.search(query);

But with the above code I always get a lot of loosely matching results. The
myLanguageAnalyzer is the same analyzer that was used for indexing. I just
want something like "key = myKey" instead of "key like myKey".
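
Would something along these lines be the right way to get the "key = myKey"
behaviour? Again just a sketch under the same assumptions: it relies on the
hypothetical untokenized "keyExact" field from the sketch above, and as far as
I understand a TermQuery skips the analyzer entirely, so the term has to match
the indexed key exactly:

import org.apache.lucene.index.Term;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

public class ExactMatchSketch {
    public static void main(String[] args) throws Exception {
        String myKey = "...";  // placeholder for the exact key I am looking for

        IndexSearcher searcher = new IndexSearcher("/tmp/myindex");
        // A TermQuery bypasses the analyzer, so it should only match documents
        // whose keyExact term equals myKey character for character.
        Query query = new TermQuery(new Term("keyExact", myKey));
        Hits hits = searcher.search(query);
        System.out.println(hits.length() + " exact match(es)");
        searcher.close();
    }
}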
-- 
View this message in context: http://www.nabble.com/TermQuery-doesn%27t-support-non-english-charecters-tf1911988.html#a5239478
Sent from the Lucene - Java Users forum at Nabble.com.


