Driver is unable to correctly reestablish connection with previously decommissioned node

Description

Hello!

Recently we ran into a very strange driver behaviour.

After the decommissioned node returns to the cluster, the driver starts to refresh the node status as expected, but then begins throwing an exception:

The exception is repeated every time the driver tries to execute a request on this node, flooding the logs with errors.

Application restart resolves the error.

Also, the driver is still able to execute queries.
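Below is a minimal diagnostic sketch (the contact point, keyspace and query are placeholders, not from the original report) showing one way to confirm the symptom: each request is logged together with the coordinator that handled it, so the repeated failures can be correlated with the re-joined node.

{code:csharp}
// Hypothetical check: log which coordinator handled each request, so the
// repeated per-request failures can be matched against the re-joined node.
using System;
using Cassandra;

class CoordinatorCheck
{
    static void Main()
    {
        var cluster = Cluster.Builder()
            .AddContactPoint("10.0.0.1")                  // placeholder contact point
            .Build();
        var session = cluster.Connect("my_keyspace");     // placeholder keyspace

        for (var i = 0; i < 100; i++)
        {
            var statement = new SimpleStatement("SELECT key FROM my_table LIMIT 1")
                .SetConsistencyLevel(ConsistencyLevel.LocalQuorum);
            try
            {
                var rs = session.Execute(statement);
                // QueriedHost is the node that coordinated this request.
                Console.WriteLine($"OK   coordinator={rs.Info.QueriedHost}");
            }
            catch (Exception ex)
            {
                // In the reported scenario this fires on every request that is
                // routed to the previously decommissioned node.
                Console.WriteLine($"FAIL {ex.GetType().Name}: {ex.Message}");
            }
        }

        cluster.Shutdown();
    }
}
{code}

In this scenario, requests coordinated by the other nodes keep succeeding, while requests routed to the returned node keep failing with the exception above.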

Steps to reproduce:

  1. Set up a cluster of 3 nodes: 1 DC, 3 racks (1 node in each rack)

  2. (Not sure if related, but in my case all keyspaces use a replication factor of 3)

  3. Make sure the driver has established at least one connection with every node by writing/reading data (also not sure if related, but operations are executed with LocalQuorum; see the sketch after this list)

  4. Decommission a node while writing/reading data

  5. Make sure the driver has removed the decommissioned node

  6. Return the decommissioned node to the Cassandra ring (remove all of its data before it rejoins)

  7. Wait for the node to join

  8. The driver will start throwing exceptions
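A rough sketch of the reproduction loop, assuming the DataStax C# driver and placeholder names (10.0.0.1, my_keyspace, my_table): it keeps traffic flowing at LocalQuorum and periodically prints the driver's view of the ring, so the removal of the decommissioned node (step 5) and its return (step 7) can be observed from the client side.

{code:csharp}
// Rough reproduction sketch: keep LocalQuorum traffic flowing while dumping
// the driver's host metadata, so the decommission and re-join are visible.
using System;
using System.Linq;
using System.Threading;
using Cassandra;

class DecommissionRepro
{
    static void Main()
    {
        var cluster = Cluster.Builder()
            .AddContactPoint("10.0.0.1")                  // placeholder contact point
            .Build();
        var session = cluster.Connect("my_keyspace");     // placeholder keyspace

        while (true)
        {
            // Steps 3-4: writes at LocalQuorum while the node is decommissioned.
            var insert = new SimpleStatement(
                    "INSERT INTO my_table (id, payload) VALUES (uuid(), 'x')")
                .SetConsistencyLevel(ConsistencyLevel.LocalQuorum);
            try
            {
                session.Execute(insert);
            }
            catch (Exception ex)
            {
                Console.WriteLine($"request failed: {ex.Message}");
            }

            // Steps 5 and 7: the decommissioned node should drop out of this
            // list and reappear (IsUp == true) once it has rejoined the ring.
            var hosts = cluster.Metadata.AllHosts()
                .Select(h => $"{h.Address} up={h.IsUp}");
            Console.WriteLine(string.Join(", ", hosts));

            Thread.Sleep(1000);
        }
    }
}
{code}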

 

UPD: grammar

Environment

The Cassandra driver is used under .NET Framework 4.6.1 on Windows Server.

Pull Requests

None

Status

Assignee

Unassigned

Reporter

Лев Димов

Labels

None

Reproduced in

None

PM Priority

None

Fix versions

External issue ID

None


Doc Impact

None

Reviewer

None

Epic Link

None

Sprint

Size

None

Affects versions

3.8.0
3.10.1

Priority

Major