After upgrading to Cassandra 2.2.0 support via the 2.2.0-rc3 release of the DataStax Java Driver, connections now automatically attempt the IPv6 address, even in local environments. Not only is the connection attempted without any explicit configuration, it also never seems to give up, no matter how long it waits. The differential view of the phantom changeset is available here just in case.
The bottom line is that no changes were made to the phantom connectors module, which handles all connections to Cassandra and session initialisation. When running a local application with the newest version, the logs make the attempt obvious:
The normal IPv4 connection works as expected and the application runs normally; all queries are processed just fine.
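To illustrate where the IPv6 attempt can come from (this is a hypothetical sketch, not phantom or driver code; the `ResolveDemo` class and its `family` helper are mine): the JVM resolver returns every record for a name, so a host with both A and AAAA records yields candidate addresses of both families, and "localhost" commonly maps to both 127.0.0.1 and ::1.

```java
import java.net.Inet4Address;
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveDemo {
    // Classify a host string as "IPv4" or "IPv6"; returns "unresolved"
    // if the name cannot be resolved at all.
    static String family(String host) {
        try {
            InetAddress a = InetAddress.getByName(host);
            return (a instanceof Inet6Address) ? "IPv6" : "IPv4";
        } catch (UnknownHostException e) {
            return "unresolved";
        }
    }

    public static void main(String[] args) throws Exception {
        // getAllByName returns every record the resolver knows for the name,
        // which is why both address families can show up as contact points.
        for (InetAddress a : InetAddress.getAllByName("localhost")) {
            System.out.println(family(a.getHostAddress())
                    + " candidate: " + a.getHostAddress());
        }
    }
}
```

If the driver picks the IPv6 candidate first, you get exactly the symptom above even though nothing in the application mentions IPv6.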
Many thanks for the reply. I was unaware of the change despite browsing a bit through your source tree. I will update everything in the Scala driver too and do a release to fix this; I imagine it would cause a lot of fun for many users otherwise.
Wasn't sure if you were aware of this, but thought I'd share just in case. We maintain an Upgrade Guide where we log these kinds of changes that may be client-impacting (binary compatibility and beyond). As you'll see, we changed quite a bit in 2.2; this particular change is under #12.
Having second thoughts about this, I wonder if there are legitimate situations where you would want both A and AAAA records in your infrastructure, for example while migrating from IPv4 to IPv6. Then again, I don't see how the driver could pick the "right" record anyway, so in that kind of situation you would probably use separate DNS names.
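For anyone hitting this during such a migration, one workaround on the client side is to resolve the name yourself and keep only the IPv4 candidates before handing addresses to the driver. A minimal sketch (the `Ipv4Only` class and `ipv4Candidates` helper are illustrative names, not part of any driver API):

```java
import java.net.Inet4Address;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class Ipv4Only {
    // Resolve a name and keep only the IPv4 addresses, sidestepping any
    // AAAA record during a dual-stack migration. The resulting list could
    // then be passed to the driver as explicit contact points.
    static List<InetAddress> ipv4Candidates(String host) {
        List<InetAddress> out = new ArrayList<>();
        try {
            for (InetAddress a : InetAddress.getAllByName(host)) {
                if (a instanceof Inet4Address) {
                    out.add(a);
                }
            }
        } catch (UnknownHostException e) {
            // Unresolvable name: treat as having no IPv4 candidates.
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(ipv4Candidates("localhost"));
    }
}
```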
I'm not sure whether there are legitimate usages of both A and AAAA records, but I assume this can happen as an infrastructure misconfiguration. If we can continue to operate in the presence of this error, that would prove valuable.
Actually, since we ignore contact points that aren't in the system.peers of the first node we connect to, any IP that doesn't correspond to a host will be filtered out.
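The filtering described above amounts to an intersection of the configured contact points with the addresses the first node reports. A rough sketch of the idea (the `PeerFilter` class and its names are illustrative, not the driver's actual code):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class PeerFilter {
    // Keep only contact points that the first connected node actually lists
    // in its system.peers table, so addresses that don't correspond to a
    // live host are dropped before any retry logic runs.
    static List<String> filterContactPoints(List<String> contactPoints,
                                            Set<String> peerAddresses) {
        return contactPoints.stream()
                .filter(peerAddresses::contains)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> kept = filterContactPoints(
                List.of("127.0.0.1", "::1"),
                Set.of("127.0.0.1"));
        System.out.println(kept); // the unmatched IPv6 address is dropped
    }
}
```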
That was already fixed in 2.2.0-rc3, but I think the retry would still happen if the IPv6 address was tried first. The later change solves that, because now we only start retries once the cluster has initialized (and bad addresses have already been filtered out).
So it should work fine with recent 3.0 versions; I'll close this.