elasticsearch - NEST Elastic query works for a few hours and then stops


I have a strange situation with Elasticsearch 5.4 and NEST 5.4.0. I have written a simple C# console app (.NET Core) that queries Elastic once per minute, takes the hits/documents returned, and stores them in a Postgres database for further processing. It works for a few hours, then begins returning queries whose valid .DebugInformation shows 0 documents, yet I can copy the same query, run it in Kibana Dev Tools, and get results. When I stop the console app and restart it, the queries return hits again and all is well. Below are code samples and log entries. I am trying to figure out why it stops working after a while.

I am not sure whether .DebugInformation returns any info on ES health at the moment of the call, which would let me see if there were issues on the ES cluster at that moment, such as 429s. I have looked at elasticsearch.log and it only shows inserts. I am not sure if there is another place to look for query problems.

Has anyone had issues with NEST working fine for a while and then stopping?

Here is the query log for 2 runs. The first runs fine and returns 9 rows (I removed 1 from the sample due to sensitive data); then it runs again and returns 0 hits. All queries after that return 0 hits until I restart the C# app. Same form of start and end date inputs, and there is real data in Elastic for those windows...

    2017-09-12 16:41:59.799 -05:00 [Information] Dates: Start 9/12/2017 4:41:00 PM End 9/12/2017 4:42:00 PM
    2017-09-12 16:41:59.800 -05:00 [Debug] AlertService._QueryErrors: 9/12/2017 4:41:00 PM End 9/12/2017 4:42:00 PM
    2017-09-12 16:41:59.811 -05:00 [Debug] AlertService._ElasticQueryLogErrors: ElasticQuery
    {
        "bool": {
            "filter": [
                { "range": { "@timestamp": { "gte": "2017-09-12T21:41:00Z", "lte": "2017-09-12T21:42:00Z" } } },
                { "exists": { "field": "error_data" } }
            ]
        }
    }
    2017-09-12 16:41:59.811 -05:00 [Debug] AlertService._ElasticQueryLogErrors: SearchResponse 9 : Valid NEST response built from a successful low level call on POST: /filebeat-%2A/_search
    # Audit trail of this API call:
     - [1] HealthyResponse: Node: http://servername:9200/ Took: 00:00:00.0112120
    # Request:
    {"from":0,"query":{"bool":{"filter":[{"range":{"@timestamp":{"gte":"2017-09-12T21:41:00Z","lte":"2017-09-12T21:42:00Z"}}},{"exists":{"field":"error_data"}}]}}}
    # Response:
    {"took":7,"timed_out":false,"_shards":{"total":215,"successful":215,"failed":0},"hits":{"total":9,"max_score":0.0,"hits":[{"_index":"filebeat-2017.09.12","_type":"log","_id":"av54cdl2yay890ucuru4","_score":0.0,"_source":{"offset":237474,"target_url":"...url...","input_type":"log","source":"....source....","type":"log","tags":["xxx-001","beats_input_codec_plain_applied"],"@timestamp":"2017-09-12T21:41:02.000Z","@version":"1","beat":{"hostname":"xxx-001","name":"xxx-001","version":"5.4.3"},"host":"xxx-001","timestamp":"09/12/2017 16:41:02","error_data":"Exception, see detail log"}}]}}
    2017-09-12 16:41:59.811 -05:00 [Debug] AlertService._QueryErrors: (result) System.Collections.Generic.List`1[xx.Alerts.Core.Models.FilebeatModel]
    2017-09-12 16:41:59.811 -05:00 [Information] ErrorCount: 9

    2017-09-12 16:42:00.222 -05:00 [Information] Dates: Start 9/12/2017 4:42:00 PM End 9/12/2017 4:43:00 PM
    2017-09-12 16:42:00.222 -05:00 [Debug] AlertService._QueryErrors: 9/12/2017 4:42:00 PM End 9/12/2017 4:43:00 PM
    2017-09-12 16:42:00.229 -05:00 [Debug] AlertService._ElasticQueryLogErrors: ElasticQuery
    {
        "bool": {
            "filter": [
                { "range": { "@timestamp": { "gte": "2017-09-12T21:42:00Z", "lte": "2017-09-12T21:43:00Z" } } },
                { "exists": { "field": "error_data" } }
            ]
        }
    }
    2017-09-12 16:42:00.229 -05:00 [Debug] AlertService._ElasticQueryLogErrors: SearchResponse 0 : Valid NEST response built from a successful low level call on POST: /filebeat-%2A/_search
    # Audit trail of this API call:
     - [1] HealthyResponse: Node: http://servername:9200/ Took: 00:00:00.0066742
    # Request:
    {"from":0,"query":{"bool":{"filter":[{"range":{"@timestamp":{"gte":"2017-09-12T21:42:00Z","lte":"2017-09-12T21:43:00Z"}}},{"exists":{"field":"error_data"}}]}}}
    # Response:
    {"took":4,"timed_out":false,"_shards":{"total":215,"successful":215,"failed":0},"hits":{"total":0,"max_score":null,"hits":[]}}
    2017-09-12 16:42:00.229 -05:00 [Debug] AlertService._QueryErrors: (result) System.Collections.Generic.List`1[q2.Alerts.Core.Models.FilebeatModel]
    2017-09-12 16:42:00.229 -05:00 [Information] ErrorCount: 0

Here is the NEST query:

    public IEnumerable<FilebeatModel> _ElasticQueryLogErrors(DateTime startDate, DateTime endDate)
    {
        //var startDateString = startDate.Kind;
        //var endDateString = endDate.Kind;

        var searchQuery = @"{
                ""bool"": {
                    ""filter"":
                        [ {
                            ""range"":
                            { ""@timestamp"": { ""gte"": """ + string.Format("{0:yyyy-MM-ddTHH:mm:ssZ}", startDate.ToUniversalTime()) +
                @""",
                                                ""lte"": """ + string.Format("{0:yyyy-MM-ddTHH:mm:ssZ}", endDate.ToUniversalTime()) + @""" }
                            }
                          },
                          {
                            ""exists"" : { ""field"" : ""error_data"" }
                          }
                        ]
                    } }";

        var searchResponse = _es.Search<FilebeatModel>(s => s
            .AllTypes()
            .From(0)
            .Query(query => query.Raw(searchQuery)));

        _logger.LogDebug("AlertService._ElasticQueryLogErrors: ElasticQuery " + searchQuery);
        _logger.LogDebug("AlertService._ElasticQueryLogErrors: SearchResponse " + searchResponse.Hits.Count + " : " + searchResponse.DebugInformation);

        foreach (var searchResponseHit in searchResponse.Hits)
        {
            searchResponseHit.Source.Id = searchResponseHit.Id;
        }

        return searchResponse.Documents.ToList();
    }
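As a side note, the same filter can be expressed with NEST's fluent query DSL instead of interpolating dates into a raw JSON string, which lets the client handle the date serialization. This is only a sketch, not the code from the question: `client` stands in for the `_es` field, `QueryLogErrors` is an illustrative name, and `FilebeatModel` is the question's own document type. The `Field("@timestamp")` form uses the raw field name, so no POCO mapping is assumed.

```csharp
// Sketch: the question's bool/filter query via the NEST 5.x fluent DSL.
public static IReadOnlyCollection<FilebeatModel> QueryLogErrors(
    IElasticClient client, DateTime startDate, DateTime endDate)
{
    var response = client.Search<FilebeatModel>(s => s
        .AllTypes()
        .From(0)
        .Query(q => q
            .Bool(b => b
                .Filter(
                    f => f.DateRange(r => r
                        .Field("@timestamp")   // raw field name, no mapping required
                        .GreaterThanOrEquals(startDate.ToUniversalTime())
                        .LessThanOrEquals(endDate.ToUniversalTime())),
                    f => f.Exists(e => e.Field("error_data"))))));

    return response.Documents;
}
```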

Here is the constructor of the class that runs the above code in a loop. The loop may run for hours or days, so this may be the area of the issue: how the connection is constructed over a long period of time. When I close and reopen the app, running the queries over the missed period works fine.

    public AlertService(IOptions<ElasticConfig> elasticConfig, AlertsDbContext context, ILogger<AlertService> logger)
    {
        _logger = logger;

        _logger.LogDebug(" *** Entering AlertService");
        string elasticConnectionString = elasticConfig.Value.ConnectionString;
        string defaultIndex = elasticConfig.Value.IndexName;

        var settings = new ConnectionSettings(
                new Uri(elasticConnectionString))
            .ConnectionLimit(-1)
            .DisableDirectStreaming()
            .DefaultIndex(defaultIndex);

        _es = new ElasticClient(settings);
        _context = context;
    }
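For what it's worth, `ConnectionSettings` and `ElasticClient` are designed to be created once and reused for the application's lifetime (the client is thread-safe), so a long-running loop should not need to rebuild them per iteration. A minimal sketch of registering the client as a DI singleton, assuming Microsoft.Extensions.DependencyInjection and the question's own `ElasticConfig` options class; `ConfigureElastic` is an illustrative name:

```csharp
// Sketch: one ElasticClient for the whole app lifetime, resolved via DI.
public static void ConfigureElastic(IServiceCollection services)
{
    services.AddSingleton<IElasticClient>(sp =>
    {
        // ElasticConfig is the question's options class bound from configuration.
        var cfg = sp.GetRequiredService<IOptions<ElasticConfig>>().Value;
        var settings = new ConnectionSettings(new Uri(cfg.ConnectionString))
            .DisableDirectStreaming()
            .DefaultIndex(cfg.IndexName);
        return new ElasticClient(settings);
    });
}
```

Services that depend on `IElasticClient` then all share the same underlying connection pool instead of each loop iteration paying connection-setup costs.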

UPDATE: I have confirmed this was a race condition I created myself: my internal timer crept ahead of the call to Elastic, as Val pointed out in the comments. It was not a bug in the NEST code, just timing. I have realigned the call using a System.Threading.Timer with a single callback per elapse, and it now works properly. Thanks to Val for the assistance.
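For anyone hitting the same drift, here is a minimal sketch of the approach described above: a single `System.Threading.Timer` whose callback advances the query window from the previous window's end rather than re-reading the wall clock, so consecutive windows are contiguous and can never creep ahead of what has been indexed. `WindowedPoller` and `queryWindow` are illustrative names, not from the original code; the delegate would wrap the Elastic query call.

```csharp
using System;
using System.Threading;

public class WindowedPoller : IDisposable
{
    private readonly Action<DateTime, DateTime> _queryWindow;
    private readonly TimeSpan _interval;
    private readonly object _gate = new object();
    private DateTime _windowStart;   // inclusive start of the next window
    private Timer _timer;

    public WindowedPoller(Action<DateTime, DateTime> queryWindow, TimeSpan interval)
    {
        _queryWindow = queryWindow;
        _interval = interval;
        _windowStart = DateTime.UtcNow - interval;
    }

    public void Start()
    {
        // One callback per elapse. Each window is derived from the previous
        // window's end, not from DateTime.UtcNow, so a late or early tick
        // cannot skip or double-count a minute; the lock keeps a slow query
        // from racing an overlapping tick.
        _timer = new Timer(_ =>
        {
            lock (_gate)
            {
                var end = _windowStart + _interval;
                _queryWindow(_windowStart, end);  // e.g. run the Elastic query here
                _windowStart = end;
            }
        }, null, TimeSpan.Zero, _interval);
    }

    public void Dispose() => _timer?.Dispose();
}
```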


