Symptoms
The search bar does not include the option to search all folders
All users are affected
If this issue affects all users, it has probably occurred in the wake of a migration of the emails at the file system level. This means that the ElasticSearch search index doesn't exist – you can run the ConsolidateMailSpoolIndexJob to create the index and index all the server's emails:
- Go to the admin console > System management > Planning > run ConsolidateMailSpoolIndexJob
Or use our command line tool to run the consolidateIndex maintenance operation:
bm-cli maintenance consolidateIndex domain.net
This operation is resource-hungry – from BlueMind 3.5.12 onwards you can run it for groups of users using the --match option:
bm-cli maintenance consolidateIndex --match "a.*" domain.net
bm-cli maintenance consolidateIndex --match "[b-c].*" domain.net
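To cover the whole directory in batches, a small shell loop can queue the groups one after the other. This is a minimal sketch: the letter ranges and domain.net are illustrative and should be adapted to your own directory:

# Run the consolidation in four alphabetical batches (illustrative ranges)
for pattern in "[a-f].*" "[g-l].*" "[m-r].*" "[s-z].*"; do
  bm-cli maintenance consolidateIndex --match "$pattern" domain.net
done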
Only a few users are affected
If the issue only affects one or a few users, this means that their ElasticSearch index doesn't exist or is corrupt – it has to be created again:
- Either go into each user's admin form and click "Validate and repair user data" then "Consolidate mailbox index", then, if the issue isn't resolved, "Reconstruct mailbox index".
Or using our command line tool:
bm-cli maintenance repair user@domain.net
bm-cli maintenance consolidateIndex user@domain.net
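If several identified mailboxes need the same treatment, both operations can be chained over a list of addresses. A minimal sketch, assuming a hypothetical file /tmp/usersToFix.txt containing one address per line:

# Repair then consolidate each mailbox listed in the file
while read user; do
  bm-cli maintenance repair "$user"
  bm-cli maintenance consolidateIndex "$user"
done < /tmp/usersToFix.txt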
Some search results are missing
When one or a few users report incomplete search results then, as with the issue above, their ElasticSearch index doesn't exist or is corrupt – it has to be created again:
- Either go into each user's admin form and click "Validate and repair user data" then "Consolidate mailbox index", then, if the issue isn't resolved, "Reconstruct mailbox index".
Or using our command line tool:
bm-cli maintenance repair user@domain.net
bm-cli maintenance consolidateIndex user@domain.net
Error message during searches
This may be caused by an inconsistency between the list of IMAP folders and the database. The "check&repair" maintenance operation, which can be accessed from the Maintenance tab in the user page, can be used to rebuild this list. Re-indexing the mailbox should then fix the issue: run "Reconstruct mailbox index" in that tab.
If this isn't the issue, the /var/log/bm-webmail/errors logs can point to the origin of the problem.
An error is displayed when trying to access a message found by search
This is probably due to an indexing fault when the message was moved. Update the search index using the "Consolidate mailbox index" maintenance operation which can be accessed from the Maintenance tab in the user page.
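The same consolidation can also be launched with our command line tool:

bm-cli maintenance consolidateIndex user@domain.net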
Logs show esQuota and imapQuota errors
You find messages such as the one below in /var/log/bm-webmail/errors:
[10-Nov-2019 17:37:38 UTC] [jdoe@bluemind.loc] esQuota < (imapQuota * 0.8). disable es search. esQuota: 4199171, imapQuota: 6123568
This means that for the account shown, less than 80% of the mailbox is indexed (esQuota = ElasticSearch quota); ElasticSearch search (i.e. the advanced search engine) is therefore disabled because it would be inefficient. Here, for example, esQuota / imapQuota = 4199171 / 6123568 ≈ 0.69, which is below the 0.8 threshold.
To fix this, you have to consolidate or reindex the account.
If only a few identified users are affected
If the issue only affects one or a few users, this means that their ElasticSearch index doesn't exist or is corrupt – it has to be created again:
- either by going into each user's admin page and executing "Validate and repair user data" then "Consolidate mailbox index" then, if there's no improvement, "Reconstruct mailbox index"
or by using our command line tool:
bm-cli maintenance repair user@domain.net
bm-cli maintenance consolidateIndex user@domain.net
If all users are affected
To repair all accounts, you can:
- find the accounts by running a grep on the log file:
grep "disable es search. esQuota:" /var/log/bm-webmail/errors
- copy the logins found into a text file (e.g. /tmp/accountWithoutEsSearch.txt)
- use the following command combination to start the consolidation of the index for each login in the file:
while read account; do bm-cli maintenance consolidateIndex "$account"; done < /tmp/accountWithoutEsSearch.txt
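The copy step itself can also be scripted. Assuming the log lines keep the format shown above (the login between square brackets), the following sketch extracts and de-duplicates the affected logins – check it against your actual log format before use:

# Extract the bracketed logins from matching log lines and de-duplicate them
grep "disable es search. esQuota:" /var/log/bm-webmail/errors \
  | sed 's/.*\[\([^]]*@[^]]*\)\].*/\1/' \
  | sort -u > /tmp/accountWithoutEsSearch.txt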
Issue/Confirmation
If you detect a search malfunction in BlueMind, you can find the cause using the command:
curl -XGET --silent 'http://localhost:9200/_cluster/health'
This command displays the status of the ElasticSearch cluster. If the status is 'green', everything is fine; if it is 'red', there is an issue with ElasticSearch. This information is also fed into the monitoring console.
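For scripted checks, you can extract just the status field; a minimal sketch relying on the standard _cluster/health JSON response:

# Print only the cluster status field: green, yellow or red
curl -XGET --silent 'http://localhost:9200/_cluster/health' | grep -o '"status":"[a-z]*"'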
Solution
Several issues may stop ElasticSearch from working:
- index corruption: mainly due to low disk space – you need at least 10% of free disk space. If the disk containing the ES data (/var/spool/bm-elasticsearch) has run out of space, search indexes may have become corrupt. In the ES logs, this translates as an error on service start-up:
[2017-01-26 20:06:54,764][WARN ][cluster.action.shard] [Bill Foster] [mailspool][0] received shard failed for [mailspool][0], node[PcC6eICxRAajmWioK1mhDA], [P], s[INITIALIZING], indexUUID [IEJHQkOnTtOcdY0bMMIFRA], reason [master [Bill Foster][PcC6eICxRAajmWioK1mhDA][bluemind.exa.net.uk][inet[/82.219.13.101:9300]] marked shard as initializing, but shard is marked as failed, resend shard failure]
[2016-01-26 20:06:55,828][WARN ][indices.cluster] [Bill Foster] [mailspool][0] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [mailspool][0] failed to recover shard
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:287)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
You must therefore remove the index:
cd /var/spool/bm-elasticsearch/data/bluemind-c68b34ff-ccf4-49f8-9456-e8db902e8f66/nodes/0/indices/
service bm-elasticsearch stop
rm -fr mailspool/
service bm-elasticsearch start
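Since this corruption is usually triggered by a full disk, it is worth confirming that the partition holding the ES data has enough free space again before re-indexing:

# Check free space on the partition holding the ElasticSearch data
df -h /var/spool/bm-elasticsearch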
Then start indexing again from scheduled jobs > run ReconstructMailSpoolIndexJob.
Beware, however: email indexing is IO-hungry, so it is best to run it in the evening or at the weekend.
- translog corruption: this can happen if the server has crashed or because of low memory. In this case, the general index is not corrupt and only the indexing of documents not yet written to disk will be lost.
In the ES logs, this translates as the following error during service restart:
[2017-09-04 19:24:38,340][WARN ][indices.cluster ] [Hebe] [mailspool][1] failed to start shard
org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException: [mailspool][1] failed to recover shard
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:287)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:132)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.index.translog.TranslogCorruptedException: translog corruption while reading from stream
    at org.elasticsearch.index.translog.ChecksummedTranslogStream.read(ChecksummedTranslogStream.java:70)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:257)
    ... 4 more
To remove the corrupt translogs:
service bm-elasticsearch stop
rm -rf /var/spool/bm-elasticsearch/data/bluemind-5da5da65-b2e8-4b1e-afb2-f26792f66ac4/nodes/0/indices/mailspool/*/translog
service bm-elasticsearch start
Running ConsolidateMailSpoolIndexJob re-indexes the missing messages.
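Once the job has finished, the health check from the Issue/Confirmation section should report 'green' again:

curl -XGET --silent 'http://localhost:9200/_cluster/health'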