Similarly to , if Cluster.manager.init() fails, isInit still gets marked true, so subsequent connect/init operations may behave unexpectedly.
For example, the next time someone calls cluster.connect(), it will initialize a Session and therefore its pools. At that point the protocol version may still be unset, leaving the pool core/max values unset as well, which causes an exception while initializing the pools.
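A minimal sketch of that sequence, assuming the 3.x driver API and a local node running PasswordAuthenticator (the class name, contact point, and credentials below are illustrative, not from the ticket):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.exceptions.AuthenticationException;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class InitFailureRepro {
    public static void main(String[] args) {
        // Deliberately wrong password against a node that requires authentication.
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")          // assumed local test node
                .withCredentials("cassandra", "wrong") // bad credentials on purpose
                .build();
        try {
            // First attempt: init() fails, but the Cluster is still marked as initialized.
            cluster.connect();
        } catch (AuthenticationException | NoHostAvailableException e) {
            System.err.println("first connect failed as expected: " + e);
        }
        try {
            // Second attempt: instead of surfacing the auth error again, pool
            // initialization can blow up because the protocol version (and hence
            // the pool core/max sizes) was never negotiated.
            cluster.connect();
        } catch (Exception e) {
            System.err.println("second connect failed differently: " + e);
        } finally {
            cluster.close();
        }
    }
}
```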
One of the cases where we hit this was a bad auth configuration. A user running Spark had correct credentials for job submission:
spark.cassandra.*
but incorrect credentials for the metastore connection:
spark.hadoop.cassandra.*
and for DSEFS:
spark.hadoop.dsefs.*
We fixed this by correcting the mismatched credentials (see the sketch below).
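One way to avoid the drift is to set all three credential families from a single place. The spark.cassandra.auth.username/password keys are standard Spark Cassandra Connector properties; the exact .username/.password suffixes under spark.hadoop.cassandra.* and spark.hadoop.dsefs.* below are assumptions for illustration, not verified property names:

```java
import org.apache.spark.SparkConf;

public class SparkAuthConfig {
    // Hypothetical helper: apply the same credentials to job submission, the
    // metastore connection and DSEFS so they cannot drift apart.
    public static SparkConf withDseCredentials(SparkConf conf, String user, String password) {
        return conf
                // Job submission (Spark Cassandra Connector).
                .set("spark.cassandra.auth.username", user)
                .set("spark.cassandra.auth.password", password)
                // Metastore connection -- property suffixes assumed for illustration.
                .set("spark.hadoop.cassandra.username", user)
                .set("spark.hadoop.cassandra.password", password)
                // DSEFS -- property suffixes assumed for illustration.
                .set("spark.hadoop.dsefs.username", user)
                .set("spark.hadoop.dsefs.password", password);
    }
}
```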
A similar stack trace was seen while configuring Zeppelin; there it was caused by incorrect credentials for the spark user in the interpreter.sh file, despite the correct credentials being provided on the command line.
It would be nice to log a clearer exception, such as `authentication failed for <user>`.
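Until the driver does that itself, callers can dig the real cause out of the per-host errors. A minimal sketch, assuming the 3.x driver where NoHostAvailableException.getErrors() exposes the per-host failures (depending on the driver version the auth failure may also be thrown directly as AuthenticationException); the helper name is hypothetical:

```java
import java.net.InetSocketAddress;
import java.util.Map;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.exceptions.AuthenticationException;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class AuthErrorLogging {
    // Hypothetical helper that logs a clearer message when the underlying cause is auth.
    static void connectWithClearErrors(Cluster cluster, String user) {
        try {
            cluster.connect();
        } catch (NoHostAvailableException e) {
            for (Map.Entry<InetSocketAddress, Throwable> entry : e.getErrors().entrySet()) {
                if (entry.getValue() instanceof AuthenticationException) {
                    System.err.println("authentication failed for " + user
                            + " against " + entry.getKey());
                }
            }
            throw e;
        }
    }
}
```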
I think a key to this is that the user attempts connect("ks") where the keyspace is not authorized for that user, even though they are authorized to connect in general. But that's just a guess.
At the very least it is definitely associated with authentication; I was able to reproduce it using this test here.
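The attached test isn't reproduced here, but a minimal sketch of the scenario guessed at above (a role that can log in but is not authorized on the keyspace) might look like this; the keyspace name, role, and credentials are illustrative:

```java
import com.datastax.driver.core.Cluster;

public class KeyspaceAuthRepro {
    public static void main(String[] args) {
        // Credentials for a role that can log in but lacks permissions on "restricted_ks".
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withCredentials("limited_user", "limited_pass") // hypothetical role
                .build();
        try {
            // Per the guess above, setting an unauthorized keyspace fails here...
            cluster.connect("restricted_ks");
        } catch (Exception e) {
            System.err.println("connect(\"restricted_ks\") failed: " + e);
        }
        try {
            // ...and follow-up connects can then hit the pool-initialization
            // exception described at the top instead of a clear auth error.
            cluster.connect();
        } catch (Exception e) {
            System.err.println("subsequent connect failed: " + e);
        } finally {
            cluster.close();
        }
    }
}
```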