MaxSimultaneousRequestsPerConnectionThreshold not respected

Description

Had the following code:

PlainTextAuthProvider authProvider = new PlainTextAuthProvider("user", "pwd");
Collection<InetSocketAddress> whiteList = new ArrayList<InetSocketAddress>();
whiteList.add(new InetSocketAddress("ip1", 9042));

WhiteListPolicy policy = new WhiteListPolicy(new DCAwareRoundRobinPolicy("DC1"), whiteList);
Cluster cluster = Cluster.builder()
        .withAuthProvider(authProvider)
        .withLoadBalancingPolicy(policy)
        .addContactPoint("ip1")
        .build();
cluster.getConfiguration().getPoolingOptions().setMinSimultaneousRequestsPerConnectionThreshold(HostDistance.LOCAL, 0);
cluster.getConfiguration().getPoolingOptions().setMaxSimultaneousRequestsPerConnectionThreshold(HostDistance.LOCAL, 1);
cluster.getConfiguration().getPoolingOptions().setMaxConnectionsPerHost(HostDistance.LOCAL, 100);
cluster.getConfiguration().getPoolingOptions().setCoreConnectionsPerHost(HostDistance.LOCAL, 3);
final Session session = cluster.connect("system");
ExecutorService executorService = Executors.newFixedThreadPool(10);
for (int i = 0; i < 100; ++i) {
    executorService.execute(new Runnable() {
        @Override
        public void run() {
            while (true) readSystemPeersAndAssert(session);
        }
    });
}
Uninterruptibles.sleepUninterruptibly(1, TimeUnit.HOURS);

I was expecting inFlight to be at most 1. However, in the attached logs you can see inFlight go as high as 4.

Environment

None

Pull Requests

None

Activity

Sylvain Lebresne
November 14, 2014, 8:38 AM

Yes, maxSimultaneousRequestsPerConnection is not a strict threshold. The javadoc for its getter actually has its proper definition, namely:

So this is more a threshold above which new connections are created. I can agree that the name might be misleading, but it was never intended to be a strict limit, and we're not always so good at picking good names.

I'll note that it's not a strict threshold for performance reasons. Creating a new connection takes time, so if the driver waited until reaching that threshold to spawn a new connection, and then waited for the newly created connection to be ready, it would have to block, and that would hurt latency. Instead, this setting acts as a trigger for the driver: reaching the threshold signals that we probably don't have enough connections to handle the incoming load, so we create a new connection; but until that new connection is created and active, we continue to use existing connections if possible.
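The trigger behavior described above can be sketched with a toy model. This is not the driver's actual implementation; all class and field names here are made up for illustration. The key point is that crossing the threshold only *schedules* a new connection, while requests keep piling onto the existing ones, so inFlight can exceed the threshold:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the pooling behavior: the threshold triggers creation of a
// new connection, but borrowing never blocks waiting for it.
class ToyPool {
    static class Conn { int inFlight = 0; }

    final List<Conn> conns = new ArrayList<>();
    final int maxRequestsThreshold;    // plays the role of maxSimultaneousRequestsPerConnectionThreshold
    final int maxConnections;          // plays the role of maxConnectionsPerHost
    boolean connectionOpening = false; // a new connection takes time to become active

    ToyPool(int threshold, int maxConnections) {
        this.maxRequestsThreshold = threshold;
        this.maxConnections = maxConnections;
        conns.add(new Conn());
    }

    // Borrow the least busy connection; crossing the threshold only
    // schedules a new connection, it never blocks the caller.
    Conn borrow() {
        Conn best = conns.get(0);
        for (Conn c : conns) if (c.inFlight < best.inFlight) best = c;
        if (best.inFlight >= maxRequestsThreshold
                && conns.size() < maxConnections && !connectionOpening) {
            connectionOpening = true; // creation happens asynchronously
        }
        best.inFlight++;              // meanwhile, keep using what we have
        return best;
    }

    // Called once the asynchronously created connection is active.
    void connectionReady() { conns.add(new Conn()); connectionOpening = false; }

    public static void main(String[] args) {
        ToyPool pool = new ToyPool(1, 100);
        // Four requests arrive before the newly triggered connection opens:
        for (int i = 0; i < 4; i++) pool.borrow();
        System.out.println(pool.conns.get(0).inFlight); // prints 4 — above the threshold of 1
    }
}
```

With a threshold of 1 and a burst of requests arriving while the second connection is still opening, the single existing connection ends up with 4 in-flight requests, which matches the behavior reported above.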

I'll also note that I cannot see any good practical reason for wanting to strictly limit the number of requests per connection; it's actually better to have fewer connections with more requests on them than the contrary.

Overall, we could rename the option to better reflect that it's intended as a trigger for new connection creation rather than an actual limit on per-connection requests, but that would be a breaking change, and this option is deprecated with native protocol v3 anyway, so I don't think it's worth bothering. Maybe we can clarify the javadoc even further, however.

Vishy Kasar
November 14, 2014, 6:16 PM

Agreed, this is not a big deal. I just noticed it in a test and reported it in case it was a defect.

Not a Problem

Assignee

Unassigned

Reporter

Vishy Kasar

Labels

None

PM Priority

None

Reproduced in

None

Affects versions

Fix versions

None

Pull Request

None

Doc Impact

None

Size

None

External issue ID

None

Components

Priority

Major