Driver 2.2.0-rc3 attempts IPv6 connection automatically


After upgrading to Cassandra 2.2.0 support via the 2.2.0-rc3 release of the DataStax Java Driver, connections now automatically attempt the IPv6 address, even in local environments.

Not only is the connection attempted without any explicit configuration, but it also doesn't seem to give up, no matter how long it waits. The differential view of the phantom changeset is available here just in case.

The bottom line is that no changes were made to the phantom connectors module, which handles all connections to Cassandra and session initialisation. When running a local application with the newest version, the logs make the attempt obvious.

The normal IPv4 connection works as expected and the application runs normally; all queries are processed just fine.
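The behaviour described above comes down to hostname resolution: when a contact point is given as a name rather than an IP, the JVM resolver may return both an A (IPv4) and an AAAA (IPv6) record, and their order decides which address is tried first. A minimal stdlib sketch (the class name is illustrative, not from the driver):

```java
import java.net.InetAddress;

public class ResolveDemo {
    public static void main(String[] args) throws Exception {
        // "localhost" commonly resolves to both 127.0.0.1 (A) and ::1 (AAAA);
        // the resolver's ordering decides which one a client tries first.
        for (InetAddress addr : InetAddress.getAllByName("localhost")) {
            System.out.println(addr.getHostAddress());
        }
    }
}
```

On the JVM, system properties such as `java.net.preferIPv4Stack=true` can be used to influence this ordering without touching application code.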


Mac OS X Yosemite




Flavian Alexandru
August 26, 2015, 7:23 PM

Many thanks for the reply; I was unaware of the change despite browsing a bit through your source tree. I will update everything in the Scala driver too and do a release to fix this. I imagine it will cause a lot of fun for many users.

Andy Tolbert
August 27, 2015, 1:27 AM

Wasn't sure if you were aware of this, but thought I'd share just in case. We maintain an Upgrade Guide where we log these kinds of changes that may be client-impacting (binary compatibility and beyond). As you'll see, we changed quite a bit in 2.2; this particular change is under #12.

Olivier Michallat
December 10, 2015, 2:57 PM

Having second thoughts about this, I wonder if there are legitimate situations where you would want to have both A and AAAA records in your infrastructure, for example if you're migrating from IPv4 to IPv6.
Then again, I don't see how the driver would be expected to pick the "right" record anyway, so in that kind of situation you would probably use separate DNS names.

Alex Popescu
December 11, 2015, 10:50 PM

I'm not sure there are legitimate usages of both A and AAAA records, but I assume this can happen as an infrastructure misconfiguration. If we can continue to operate in the presence of this error, that could prove valuable.

Olivier Michallat
December 16, 2015, 2:04 PM

Actually, since we ignore contact points that aren't in the system.peers of the first node we connect to, any IP that doesn't correspond to a host will be filtered out.
That was already fixed in 2.2.0-rc3, but I think the retry would still happen if the IPv6 address was tried first. That is solved now because we only start retries once the cluster has initialized (and bad addresses have already been filtered out).
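The filtering described above can be checked by hand: the addresses the driver keeps are the ones advertised by the cluster itself. A quick way to see what a node advertises (column names as in Cassandra 2.x system tables):

```sql
-- Addresses of the other nodes, as seen by the node you connect to
SELECT peer, rpc_address FROM system.peers;

-- The connected node's own advertised address
SELECT rpc_address FROM system.local;
```

Any contact point whose IP does not appear in these results would be dropped after the initial connection.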

So it should work fine with recent 3.0 versions, I'll close this.

Not a Problem




Flavian Alexandru
