ElasticSearch n-gram token filter not finding partial words
I have been playing around with Elasticsearch for a new project of mine. I have set the default analyzers to use the ngram token filter. This is my elasticsearch.yml file:
    index:
      analysis:
        analyzer:
          default_index:
            tokenizer: standard
            filter: [standard, stop, myngram]
          default_search:
            tokenizer: standard
            filter: [standard, stop]
        filter:
          myngram:
            type: ngram
            min_gram: 1
            max_gram: 10
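To check what the index analyzer actually emits, the Analyze API can be run against the index (a minimal sketch; the analyzer name default_index comes from the settings above, and the exact _analyze parameters may differ depending on your Elasticsearch version):

    curl -XGET 'http://localhost:9200/test/_analyze?analyzer=default_index&text=one+two+three'

This should list the n-gram tokens produced at index time.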
I created a new index and added the following document to it:
    $ curl -XPUT http://localhost:9200/test/newtype/3 -d '{"text": "one two three four five six"}'
    {"ok":true,"_index":"test","_type":"newtype","_id":"3"}
However, when I search using the query text:hree or text:ive or any other partial terms, Elasticsearch does not return the document. It only returns the document when I search for an exact term (like text:two).
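For example, a request along these lines (the exact URL form is just how I am issuing it; any equivalent query string search behaves the same) comes back with no hits:

    curl -XGET 'http://localhost:9200/test/_search?q=text:hree'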
What am I doing wrong here, and how do I correct it? I have also tried changing the config file so that default_search uses the ngram token filter as well, roughly as below, but the result was the same.
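The default_search variant I tried looked approximately like this (only this analyzer changed; the rest of the settings stayed the same):

    default_search:
      tokenizer: standard
      filter: [standard, stop, myngram]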
I'm not sure about the default_* settings, but applying a mapping that specifies index_analyzer and search_analyzer works:
    curl -XDELETE localhost:9200/twitter

    curl -XPOST localhost:9200/twitter -d '
    {"index": {
        "number_of_shards": 1,
        "analysis": {
            "filter": {
                "myngram": {"type": "ngram", "min_gram": 2, "max_gram": 10}
            },
            "analyzer": {
                "a1": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "myngram"]
                }
            }
        }
    }}'

    curl -XPUT localhost:9200/twitter/tweet/_mapping -d '{
        "tweet": {
            "index_analyzer": "a1",
            "search_analyzer": "standard",
            "date_formats": ["yyyy-MM-dd", "dd-MM-yyyy"],
            "properties": {
                "user": {"type": "string", "analyzer": "standard"},
                "message": {"type": "string"}
            }
        }
    }'

    curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
        "user": "kimchy",
        "post_date": "2009-11-15T14:12:12",
        "message": "trying out Elastic Search"
    }'

    curl -XGET localhost:9200/twitter/_search?q=ear
    curl -XGET localhost:9200/twitter/_search?q=sea
    curl -XGET localhost:9200/twitter/_mapping
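To see why the partial terms now match, you can run the indexed message through the a1 analyzer with the Analyze API (a sketch; parameter names may vary by Elasticsearch version). The 2–10 character grams produced for "search" include "sea" and "ear", so those query terms hit the document:

    curl -XGET 'http://localhost:9200/twitter/_analyze?analyzer=a1&text=trying+out+Elastic+Search'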