Driver should fail fast on subsequent connect/init calls if Cluster.manager.init() fails

Description

Similarly to , if Cluster.manager.init() fails, isInit still gets marked as true, so subsequent connect/init operations may cause unexpected behavior.

For example, the next time someone calls cluster.connect() it will initialize a Session and thus its pools. At that point the protocol version may still be unset, which leaves the pool core/max connection values unset as well and causes an exception while initializing the pools:
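One way to get the fail-fast behavior this ticket asks for is to remember the initialization failure and rethrow it on subsequent connect/init calls instead of proceeding with half-initialized state. The sketch below uses illustrative names (InitGuard, doInit), not the driver's actual internals:

```java
// Minimal sketch of a fail-fast init guard; names are illustrative only.
final class InitGuard {
    private boolean isInit = false;
    private Throwable initFailure = null;

    synchronized void init() {
        if (initFailure != null) {
            // Fail fast: surface the original cause instead of continuing
            // with half-initialized state (e.g. an unset protocol version).
            throw new IllegalStateException("Previous init failed", initFailure);
        }
        if (isInit) {
            return;
        }
        try {
            doInit();      // e.g. open the control connection, negotiate protocol version
            isInit = true; // only mark initialized after doInit() succeeds
        } catch (RuntimeException e) {
            initFailure = e; // remember the failure so later connect()/init() calls rethrow it
            throw e;
        }
    }

    private void doInit() {
        // placeholder for the real initialization work
    }
}
```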

Environment

None

Pull Requests

None

Activity

Russell Spitzer
July 31, 2018, 10:03 PM

One of our cases of this was a bad auth configuration. The user was using Spark and had correct credentials for job submission

spark.cassandra.*

But incorrect credentials for the metastore connection

spark.hadoop.cassandra.*

And for dsefs

spark.hadoop.dsefs.*

We fixed this by correcting those parameters. The sketch after this comment illustrates where the credential sets diverge.
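For illustration, a rough sketch of how the two credential sets can diverge when set programmatically via SparkConf. Only the prefixes named in the comment (spark.cassandra.*, spark.hadoop.cassandra.*, spark.hadoop.dsefs.*) come from this thread; the property suffixes other than spark.cassandra.auth.* are assumptions, not confirmed connector settings:

```java
import org.apache.spark.SparkConf;

public class SparkAuthConfigSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
            // Job-submission credentials (spark.cassandra.*) -- correct in this case.
            .set("spark.cassandra.auth.username", "jobUser")
            .set("spark.cassandra.auth.password", "jobPassword")
            // Metastore credentials (spark.hadoop.cassandra.*) -- suffixes are assumptions;
            // these were the incorrect ones.
            .set("spark.hadoop.cassandra.username", "staleUser")
            .set("spark.hadoop.cassandra.password", "stalePassword")
            // DSEFS credentials (spark.hadoop.dsefs.*) -- suffixes are assumptions;
            // also incorrect in this case.
            .set("spark.hadoop.dsefs.username", "staleUser")
            .set("spark.hadoop.dsefs.password", "stalePassword");
        System.out.println(conf.toDebugString());
    }
}
```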

Tasneem
July 31, 2018, 10:44 PM

A similar stack trace was seen while configuring Zeppelin; it was caused by incorrect credentials provided in the interpreter.sh file for the spark user, despite the correct credentials being provided on the command line.

It would be nice to have a better exception logged, such as `authentication failed for <user>`.

Russell Spitzer
August 1, 2018, 3:35 PM

I think a key to this is that the user attempts connect("ks") where the keyspace is not authorized for that user, even though they are authorized to connect in general. But that's just a guess.

Andy Tolbert
August 8, 2018, 10:37 PM

It is definitely associated with authentication, at the very least; I was able to reproduce it using this test here.
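For context, a rough sketch of a reproduction along the lines described in this thread (bad credentials, then a second connect attempt). The contact point and credential values are placeholders, and this is not the actual test referenced above:

```java
import com.datastax.driver.core.Cluster;

public class InitFailureRepro {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withCredentials("wrongUser", "wrongPassword") // deliberately bad auth
                .build();
        try {
            cluster.connect(); // fails on auth, but leaves the Cluster marked as initialized
        } catch (RuntimeException e) {
            System.err.println("First connect failed as expected: " + e);
        }
        try {
            cluster.connect(); // should fail fast with the original cause,
                               // not with an unrelated pool-initialization error
        } catch (RuntimeException e) {
            System.err.println("Second connect failed with: " + e);
        } finally {
            cluster.close();
        }
    }
}
```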

Fixed

Assignee

Greg Bestland

Reporter

Andy Tolbert

Labels

None

PM Priority

None

Reproduced in

None

Affects versions

Fix versions

Pull Request

None

Doc Impact

None

Size

None

External issue ID

None

Priority

Major