Recently we ran into a very strange driver behaviour.
After a decommissioned node returns to the cluster, the driver starts refreshing node statuses as expected, but then throws an exception:
The exception is repeated every time the driver tries to execute a request on this node, flooding the logs with errors.
Restarting the application resolves the error.
Note that despite the exceptions, the driver is still able to execute queries.
Steps to reproduce:
Get a cluster of 3 nodes: 1 DC, 3 racks (1 node in each rack)
(Not sure if related, but in my case all keyspaces are with Replication Factor 3)
Make sure the driver has established at least one connection with every node (write/read data; also not sure if related, but operations are executed with LocalQuorum)
Execute node decommission while writing/reading data
Make sure the driver has removed the decommissioned node
Return the decommissioned node to the Cassandra ring (remove all of its data before rejoining)
Wait for the node to join
The driver will start throwing the exception above
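For context, the decommission/rejoin cycle in the steps above was done with standard Cassandra tooling, roughly as follows (a sketch only; data directory paths are illustrative and depend on cassandra.yaml):

```shell
# On the node being removed: stream its data away and leave the ring
nodetool decommission

# From another node, verify the ring no longer lists it
nodetool status

# Before rejoining, wipe the old state so the node bootstraps fresh
# (directory locations are assumptions; check data_file_directories etc.)
rm -rf /var/lib/cassandra/data/* \
       /var/lib/cassandra/commitlog/* \
       /var/lib/cassandra/saved_caches/*

# Restart Cassandra on the node and wait for it to show as UN (Up/Normal)
sudo service cassandra start
nodetool status
```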
The Cassandra driver is used under .NET Framework 4.6.1 on Windows Server.
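The read/write load from the reproduce steps was generated roughly like this (a sketch using the DataStax C# driver, not the exact application code; the contact points, keyspace, and table are placeholders):

```csharp
using System;
using Cassandra;

class ReproLoad
{
    static void Main()
    {
        // Connect to all three nodes so the driver opens a pool to each
        var cluster = Cluster.Builder()
            .AddContactPoints("10.0.0.1", "10.0.0.2", "10.0.0.3")
            .Build();
        var session = cluster.Connect("my_keyspace");

        // Keep writing and reading at LOCAL_QUORUM while a node is
        // decommissioned and later rejoined
        while (true)
        {
            var insert = new SimpleStatement(
                    "INSERT INTO my_table (id, val) VALUES (uuid(), 'x')")
                .SetConsistencyLevel(ConsistencyLevel.LocalQuorum);
            session.Execute(insert);

            var select = new SimpleStatement(
                    "SELECT * FROM my_table LIMIT 1")
                .SetConsistencyLevel(ConsistencyLevel.LocalQuorum);
            session.Execute(select);
        }
    }
}
```

With this loop running, the exception starts appearing as soon as the rejoined node is marked up again, and only an application restart clears it.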