We should automate throughput-based benchmarks mainly for regression testing, so that we can compare two different branches/commits of the driver and look for regressions or performance improvements.
For these benchmarks, we need two things:
A project/tool that measures maximum throughput at different numbers of in-flight requests. Ideally, it should let us configure different CQL workloads and driver settings.
A test setup using our automated provisioning tool to launch it.
For the project/tool, the Node.js driver SUT is a useful example: https://github.com/riptano/nodejs-driver-sut — or we can reuse whatever we already have.
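As a starting point, the core of such a tool is a loop that bounds the number of outstanding requests and measures requests/second at each concurrency level. The sketch below (Python/asyncio, not tied to any particular driver) uses a simulated query in place of a real driver call; the function names, sleep-based latency, and sweep levels are all illustrative assumptions, and in a real run `execute_query` would call the driver under test with a configured CQL workload.

```python
import asyncio
import time

async def execute_query(delay_s: float = 0.001) -> None:
    # Stand-in for a real driver call (e.g. an async CQL execute);
    # here we only simulate per-request latency.
    await asyncio.sleep(delay_s)

async def run_benchmark(in_flight: int, total_requests: int) -> float:
    """Issue total_requests queries with at most `in_flight`
    outstanding at once; return observed throughput in req/s."""
    sem = asyncio.Semaphore(in_flight)

    async def one_request() -> None:
        async with sem:
            await execute_query()

    start = time.perf_counter()
    await asyncio.gather(*(one_request() for _ in range(total_requests)))
    elapsed = time.perf_counter() - start
    return total_requests / elapsed

def sweep(levels=(1, 8, 64), total_requests=500) -> dict:
    # Sweep the in-flight levels; for regression testing, the same
    # sweep would run against both branches and the numbers compared.
    return {
        level: asyncio.run(run_benchmark(level, total_requests))
        for level in levels
    }

if __name__ == "__main__":
    for level, rps in sweep().items():
        print(f"in-flight={level:3d}  throughput={rps:,.0f} req/s")
```

Because the simulated latency is fixed, throughput should scale roughly with the in-flight level, which is the shape of curve we would want the real tool to report per branch.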