Race condition in the ControlConnection Dispose method can leak connections

Description

Our product runs multiple services on the same appliance; each uses the C# driver to build one connection per keyspace per service, across roughly 5 keyspaces. About a month ago we upgraded the C# driver from 3.5 to 3.13, and we also enabled driver logging so it integrates with our own logging. Today we noticed that 2 of our services emit this log entry, which was introduced in this commit https://github.com/datastax/csharp-driver/commit/fa451415703b4ddaf8265277710ba39e8b3e5478#diff-f82069886a3a20205b9de5409b69e1ff:

2020-03-24T17:49:12.701Z [Error] ( 057) {CassandraClient} Exception thrown while handling cassandra event.
Cassandra.DriverInternalError: Could not schedule event in the ProtocolEventDebouncer.
at Cassandra.ProtocolEvents.ProtocolEventDebouncer.<ScheduleEventAsync>d__6.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at Cassandra.Connections.ControlConnection.<OnConnectionCassandraEvent>d__50.MoveNext()

Then, every 30 seconds, we get the warning below. Services that do not log this warning also do not log the error above, and restarting our service (and therefore the C# driver client) fixes the problem. Since the warning repeats at the driver heartbeat interval of the Connection IdleTimeoutHandler, we suspect it comes from keyspace connections that the service simply isn't using. Writing to other keyspaces continues to work while this warning is being logged.

2020-03-24T00:00:20.978Z [Warning] ( 088) {CassandraClient} Received heartbeat request exception System.InvalidOperationException: Can not start timer after Disposed
at Cassandra.Tasks.HashedWheelTimer.Start()
at Cassandra.Tasks.HashedWheelTimer.NewTimeout(Action`1 action, Object state, Int64 delay)
at Cassandra.Connections.Connection.RunWriteQueueAction()

This looks like an internal coding error in the driver, but I am curious whether you think it could be caused by something we are doing.
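For context, the warning above is consistent with a check-then-act race between Dispose and the heartbeat scheduling path. The sketch below (in Java; the class and method names are hypothetical and simplified, not the driver's actual internals) shows how a heartbeat that is scheduled after the timer has been disposed produces exactly this kind of exception on every interval while the underlying connection is never cleaned up:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical stand-in for a wheel timer that refuses work once disposed,
// analogous to the "Can not start timer after Disposed" check in the log.
class SketchTimer {
    private final AtomicBoolean disposed = new AtomicBoolean(false);

    void newTimeout(Runnable action) {
        if (disposed.get()) {
            throw new IllegalStateException("Can not start timer after Disposed");
        }
        // ... enqueue action on the timer wheel ...
    }

    void dispose() {
        disposed.set(true);
    }
}

// Hypothetical connection wrapper illustrating the suspected race.
class SketchConnection {
    private final SketchTimer timer = new SketchTimer();

    // Heartbeat path: if dispose() runs between the caller's liveness check
    // and this call, the exception fires on every heartbeat interval and the
    // connection is never torn down -- i.e. it leaks.
    void scheduleHeartbeat() {
        timer.newTimeout(() -> { /* send heartbeat request */ });
    }

    void dispose() {
        timer.dispose();
        // A concurrent scheduleHeartbeat() that already passed its own check
        // will still reach the disposed timer after this point.
    }
}
```

In the sequential case the failure is deterministic: dispose first, then schedule, and the exception is thrown; in the concurrent case the same two steps interleave nondeterministically, which would match only some services being affected.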

Environment

Win10 Enterprise LTSC, C# Driver 3.13, Cassandra 3.11.1

Pull Requests

None

Assignee

Unassigned

Reporter

Tania Engel

Reproduced in

None

PM Priority

None

Fix versions

External issue ID

None

Doc Impact

None

Reviewer

None

Pull Request

None

Epic Link

None

Sprint

Size

None

Affects versions

Priority

Minor