While implementing a data export I used the Elasticsearch scroll API. Every export call opened a new scroll context, and I never deleted any of them manually. By default Elasticsearch allows at most 500 open scroll contexts, so once 500 contexts had accumulated without being cleared, the request failed with the following error:
{"error":{"root_cause":[{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."},{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"collect_data_store_customer_flow_fact_201910","node":"4oJe_iuMS4eEUxOEVRt6hQ","reason":{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."}},{"shard":0,"index":"collect_data_store_customer_flow_fact_201911","node":"4oJe_iuMS4eEUxOEVRt6hQ","reason":{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."}}]},"status":500} at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:920) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:227) at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1433) ... 62 common frames omitted Caused by: org.elasticsearch.client.ResponseException: method [POST], host [http://172.18.32.181:9200], URI [/collect_data_store_customer_flow_fact*/_search?typed_keys=true&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&scroll=60000ms&search_type=query_then_fetch&batched_reduce_size=512&ccs_minimize_roundtrips=true], status line [HTTP/1.1 500 Internal Server Error] {"error":{"root_cause":[{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."},{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."}],"type":"search_phase_execution_exception","reason":"all shards failed","phase":"query","grouped":true,"failed_shards":[{"shard":0,"index":"collect_data_store_customer_flow_fact_201910","node":"4oJe_iuMS4eEUxOEVRt6hQ","reason":{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. This limit can be set by changing the [search.max_open_scroll_context] setting."}},{"shard":0,"index":"collect_data_store_customer_flow_fact_201911","node":"4oJe_iuMS4eEUxOEVRt6hQ","reason":{"type":"exception","reason":"Trying to create too many scroll contexts. Must be less than or equal to: [500]. 
This limit can be set by changing the [search.max_open_scroll_context] setting."}}]},"status":500} at org.elasticsearch.client.RestClient$1.completed(RestClient.java:540) at org.elasticsearch.client.RestClient$1.completed(RestClient.java:529) at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122) at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:181) at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448) at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338) at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265) at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81) at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39) at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114) at org.apache.http.impl.nio.reactor.baseIOReactor.readable(baseIOReactor.java:162) at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337) at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315) at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276) at org.apache.http.impl.nio.reactor.baseIOReactor.execute(baseIOReactor.java:104) at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:591)
## Solution
1. In the Kibana console, run `DELETE _search/scroll/_all` to clear every open scroll context.
2. Alternatively, issue the same delete from the command line:
   `curl -XDELETE localhost:9200/_search/scroll/_all`
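If you would rather trigger the same cleanup from Java (the stack trace above comes from the Elasticsearch Java REST client), the low-level RestClient can send an identical request. This is only a minimal sketch; the host and port are placeholders for your own cluster:

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class ClearAllScrolls {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; point this at your own cluster.
        RestClient restClient = RestClient.builder(
                new HttpHost("localhost", 9200, "http")).build();

        // Same request as the curl above: drop every open scroll context on the cluster.
        Response response = restClient.performRequest(
                new Request("DELETE", "/_search/scroll/_all"));
        System.out.println(response.getStatusLine());

        restClient.close();
    }
}
```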
## Summary
If you rely on scroll regularly, you do need to clear the contexts; better still, release each scroll from the code itself as soon as it is finished, as in the sketch below.
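Here is a rough sketch of what the export loop could look like with the RestHighLevelClient: one scroll context is opened for the whole export, reused page by page, and explicitly released in a finally block. The host, page size, and match-all query are illustrative assumptions; the index pattern is the one from the error above.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.action.search.ClearScrollRequest;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchScrollRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class ScrollExport {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port; point this at your own cluster.
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")));

        // Open ONE scroll context for the whole export, with a short keep-alive.
        SearchRequest searchRequest = new SearchRequest("collect_data_store_customer_flow_fact*");
        searchRequest.source(new SearchSourceBuilder()
                .query(QueryBuilders.matchAllQuery()) // illustrative query
                .size(1000));                         // illustrative page size
        searchRequest.scroll(TimeValue.timeValueMinutes(1L));

        SearchResponse response = client.search(searchRequest, RequestOptions.DEFAULT);
        String scrollId = response.getScrollId();

        try {
            while (response.getHits().getHits().length > 0) {
                // ... write the current page of hits to the export file ...

                // Fetch the next page on the SAME scroll context instead of opening a new one.
                SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
                scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
                response = client.scroll(scrollRequest, RequestOptions.DEFAULT);
                scrollId = response.getScrollId();
            }
        } finally {
            // Release the context explicitly so it never piles up toward the 500 limit.
            ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
            clearScrollRequest.addScrollId(scrollId);
            client.clearScroll(clearScrollRequest, RequestOptions.DEFAULT);
            client.close();
        }
    }
}
```

The key point is the finally block: even if the export fails halfway, the scroll id is cleared, so contexts no longer accumulate toward the [search.max_open_scroll_context] limit.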
When this error shows up, take a close look at the code path: most likely something is creating scroll contexts over and over without ever releasing them.