SPARKC-544: org.apache.spark.sql.AnalysisException - on querying a Cassandra table which has many columns
SPARKC-508: Connector does not load all data from Cassandra table
SPARKC-474: Upgrade driver version for 2.0.0 Release
SPARKC-473: Extend SPARKC-383 to All Row Readers
SPARKC-461: Fix Incomplete Shading Of Guava
SPARKC-449: Unable to SavetoCassandra
SPARKC-419: ReplicaPartitioner fails on tokens with hashcode Integer.MIN_VALUE
SPARKC-402: DataSource Partition Key Index Pushdowns Broken for ProtocolVersion 3
SPARKC-395: Casting the timestamp to long doesn't keep the millisecond information
SPARKC-394: AnalysisException is thrown if using long value in where statement for timestamp columns with CassandraSqlContext.sql()
SPARKC-368: Single Partition Queries Fail with Predicate
SPARKC-355: Fix Troublesome Dependencies - Shade Guava and include the Cassandra Java Driver
SPARKC-352: Not able to change Authentication in spark-cassandra-connector
SPARKC-333: Fix synchronization in reflection calls for Scala 2.10
SPARKC-323: CassandraRDDPartitioner violates Partition Contract
SPARKC-295: sbt -Dscala-2.11=true assembly fails due to impossible to get artifacts - IvyNode = org.slf4j#slf4j-log4j12;1.7.6
SPARKC-287: Update for Spark 1.6 Snapshot
SPARKC-194: Some SparkSQL test suites fail with parse error in CREATE TEMPORARY TABLE
SPARKC-192: Upgrade to Spark 1.4
SPARKC-176: Add Guava Back to Assembly Fat Jar
SPARKC-112: Integrating Spark SQL Data Sources API
SPARKC-98: Upgrade to Spark 1.3
SPARKC-15: Upgrade to Spark 1.2.0
SPARKC-529: Example in doc doesn't work
SPARKC-521: Unable to make ssl connection to cassandra through spark
SPARKC-509: spark-cassandra connectivity with cloudera distribution
SPARKC-505: WriteConf Settings not used with Update and Delete Query Templates
SPARKC-479: spark.cassandra.* properties with camelcases not working when setting as option to sqlContext.read
SPARKC-472: Setting DF mode to "Overwrite" result in a TRUNCATE TABLE
SPARKC-450: Upgrade/Change Spark, Scala Versions
SPARKC-418: when trying to get data from Cassandra, the Guava dependency problem arises
SPARKC-410: documentations and examples should be upgraded to spark 2.0 API
SPARKC-406: add support for spark 2
SPARKC-387: ColumnTypes Need to be Dependent on C* Protocol Version
SPARKC-320: Spark SQL 1.5.2 - java.lang.NoSuchMethodError
SPARKC-318: Unable to convert nulls in nested Java Bean Classes
SPARKC-296: Sbt assembly broken
SPARKC-282: java.util.NoSuchElementException: key not found: 'org.apache.cassandra.db.marshal.DateType'
SPARKC-271: Cannot save to UDT C* Rows from UDT's read into Dataframes
SPARKC-208: input_split_size_in_mb is incorrectly interpreted in bytes, instead of megabytes
SPARKC-197: CassandraSQLContext does not work
SPARKC-191: Dataframe writes to Cassandra fail
SPARKC-183: SPOF in connection logic when C* process is down on live spark master
SPARKC-173: Write DataFrames Documentation
SPARKC-172: Support of Cassandra tuples
SPARKC-163: Replace CassandraRelation by datasource Relation
SPARKC-131: Provide Listen Address as Preferred Location in Spark 1.3
SPARKC-105: spark.cassandra.connection.factory is not a valid Spark Cassandra Connector variable
SPARKC-102: spark.cassandra.connection.factory didn't work
SPARKC-78: ResultSet.all incorrectly returns no results

org.apache.spark.sql.AnalysisException - on querying a Cassandra table which has many columns

Description

org.apache.spark.sql.AnalysisException is thrown when querying Cassandra tables that have many columns. In my case the table has 110 columns; on executing the query "select * from xxxx", the following exception appeared:
org.apache.spark.sql.AnalysisException: Could not read schema from the hive metastore because it is corrupted. (missing part 1 of the schema, 2 parts are expected).;
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$29.apply(HiveExternalCatalog.scala:1203)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$29.apply(HiveExternalCatalog.scala:1200)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.Range.foreach(Range.scala:160)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.sql.hive.HiveExternalCatalog$.org$apache$spark$sql$hive$HiveExternalCatalog$$getSchemaFromTableProperties(HiveExternalCatalog.scala:1200)
at org.apache.spark.sql.hive.HiveExternalCatalog.restoreDataSourceTable(HiveExternalCatalog.scala:781)
at org.apache.spark.sql.hive.HiveExternalCatalog.org$apache$spark$sql$hive$HiveExternalCatalog$$restoreTableMetadata(HiveExternalCatalog.scala:683)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:648)
at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$getTable$1.apply(HiveExternalCatalog.scala:648)
at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
at org.apache.spark.sql.hive.HiveExternalCatalog.getTable(HiveExternalCatalog.scala:647)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.lookupRelation(SessionCatalog.scala:681)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.org$apache$spark$sql$catalyst$analysis$Analyzer$ResolveRelations$$lookupTableFromCatalog(Analyzer.scala:640)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.resolveRelation(Analyzer.scala:595)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:625)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$$anonfun$apply$8.applyOrElse(Analyzer.scala:618)
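
For context, below is a minimal sketch of the kind of read that reportedly triggers the error, together with a direct Data Source read that bypasses the Hive metastore table properties the exception complains about. The contact point, the keyspace name (test_ks), and the table name (xxxx) are placeholders, and the second read is a possible workaround under those assumptions, not a fix confirmed by this ticket.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SPARKC-544-repro")
  .config("spark.cassandra.connection.host", "127.0.0.1") // placeholder contact point
  .enableHiveSupport()
  .getOrCreate()

// Reported failure path: the schema of a metastore-registered table is
// stored split across numbered table properties; when a part is missing,
// the catalog throws the "Could not read schema ... corrupted" exception.
spark.sql("select * from xxxx").show()

// Possible workaround (assumption): read through the connector's Data
// Source API, which derives the schema from Cassandra itself rather
// than from the Hive metastore table properties.
val df = spark.read
  .format("org.apache.spark.sql.cassandra")
  .options(Map("keyspace" -> "test_ks", "table" -> "xxxx"))
  .load()
df.show()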

Environment

DSE - 6.0.0
Spark - 2.2
Cassandra - 3.11
spark-cassandra-connector_2.11 - 2.0.0

Pull Requests: None
Status:
Assignee: Russell Spitzer
Reporter: Venkata praveen
Labels: None
Reviewer: None
Reviewer 2: None
Tester: None

Components:
Affects versions:
Priority: Blocker