lucene-solr-user mailing list archives

From Furkan KAMACI <>
Subject Re: Which Tokenizer to use at searching
Date Sun, 09 Mar 2014 16:22:48 GMT

First, keep in mind that if you don't index punctuation, it will not be
searchable. On the other hand, you can use a different analyzer for
indexing and for searching. You'll have to give more detail about your
situation. What will your tokenizer be at search time, WhitespaceTokenizer?
You can have a look here:

If you can give some examples of what you want for indexing and searching,
I can help you combine index and search analyzers/tokenizers/token filters.
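To illustrate the index-vs-search analyzer idea above, here is a rough sketch of a Solr schema.xml field type with separate index- and query-time analyzer chains. The field type name `text_punct` and the filter choice are illustrative assumptions, not something from this thread:

```xml
<!-- Sketch only: StandardTokenizer splits on punctuation at index time,
     while WhitespaceTokenizer keeps punctuation attached to tokens at
     query time. The name "text_punct" is made up for this example. -->
<fieldType name="text_punct" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Note the caveat from the first paragraph: with this mismatch, a query token that retains punctuation (e.g. "e.g.") will not match index terms from which StandardTokenizer has already stripped it, so the two chains need to be designed together.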


2014-03-09 18:06 GMT+02:00 abhishek jain <>:

> Hi Friends,
> I am concerned about the Tokenizer; my scenario is:
> During indexing I want to tokenize on all punctuation, so I can use
> StandardTokenizer, but at search time I want to consider punctuation as
> part of the text.
> I don't store contents, only the index.
> What should I use?
> Any advice?
> --
> Thanks and kind Regards,
> Abhishek jain
