Changes for 1.3 release
Updates build and readme files only.

Author: Hossein <[email protected]>

Closes #191 from falaki/v1.3.0.
falaki committed Nov 20, 2015
1 parent 786847c commit 472c20d
Showing 2 changed files with 7 additions and 8 deletions.
13 changes: 6 additions & 7 deletions README.md
@@ -16,27 +16,26 @@ You can link against this library in your program at the following coordinates:
```
groupId: com.databricks
artifactId: spark-csv_2.10
-version: 1.2.0
+version: 1.3.0
```
### Scala 2.11
```
groupId: com.databricks
artifactId: spark-csv_2.11
-version: 1.2.0
+version: 1.3.0
```
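
For reference, a minimal sbt dependency declaration using these coordinates might look like the sketch below. It assumes an sbt build; `%%` appends the Scala binary version (`_2.10` or `_2.11`) to the artifact name automatically, so it should match the Scala version your Spark build uses.

```scala
// build.sbt (sketch): depend on spark-csv 1.3.0 for the project's Scala binary version.
libraryDependencies += "com.databricks" %% "spark-csv" % "1.3.0"
```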


## Using with Spark shell
This package can be added to Spark using the `--packages` command line option. For example, to include it when starting the spark shell:

### Spark compiled with Scala 2.11
```
-$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.2.0
+$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.3.0
```

### Spark compiled with Scala 2.10
```
-$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.10:1.2.0
+$SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.10:1.3.0
```
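
Once the package is on the classpath, a CSV file can be loaded through the DataFrame reader. A minimal sketch for the shell, assuming Spark 1.4 or later (where `sqlContext.read` is available) and a local `cars.csv` with a header row:

```scala
// Inside spark-shell: sqlContext is predefined.
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")      // treat the first line as column names
  .option("inferSchema", "true") // infer column types instead of defaulting to string
  .load("cars.csv")

df.printSchema()
df.show(5)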

## Features
@@ -323,7 +322,7 @@ Automatically infer schema (data types), otherwise everything is assumed string:
```R
library(SparkR)

-Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell"')
+Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.3.0" "sparkr-shell"')
sqlContext <- sparkRSQL.init(sc)

df <- read.df(sqlContext, "cars.csv", source = "com.databricks.spark.csv", schema = customSchema, inferSchema = "true")
@@ -335,7 +334,7 @@ You can manually specify schema:
```R
library(SparkR)

-Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.2.0" "sparkr-shell"')
+Sys.setenv('SPARKR_SUBMIT_ARGS'='"--packages" "com.databricks:spark-csv_2.10:1.3.0" "sparkr-shell"')
sqlContext <- sparkRSQL.init(sc)
customSchema <- structType(
structField("year", "integer"),
2 changes: 1 addition & 1 deletion build.sbt
@@ -1,6 +1,6 @@
name := "spark-csv"

version := "1.3.0-SNAPSHOT"
version := "1.3.0"

organization := "com.databricks"

