read.df {SparkR}    R Documentation
Returns the dataset in a data source as a SparkDataFrame

Usage

read.df(sqlContext, path = NULL, source = NULL, schema = NULL, ...)

loadDF(sqlContext, path = NULL, source = NULL, schema = NULL, ...)
Arguments

sqlContext
    The SQLContext to use.

path
    The path of files to load.

source
    The name of the external data source.

schema
    The data schema defined in structType.
Details

The data source is specified by the 'source' argument and a set of options (...). If 'source' is not specified, the default data source configured by "spark.sql.sources.default" will be used.
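The fallback described above can be sketched as follows. This is a minimal illustration, assuming a running Spark context and that "spark.sql.sources.default" has not been changed from Spark's shipped default (Parquet); the path "data/people" is hypothetical.

```r
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)

# No 'source' given: Spark consults "spark.sql.sources.default",
# so this reads the (hypothetical) directory "data/people" as Parquet.
df <- read.df(sqlContext, "data/people")

# Equivalent explicit form:
df <- read.df(sqlContext, "data/people", source = "parquet")
```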
Value

A SparkDataFrame.
Examples

## Not run: 
sc <- sparkR.init()
sqlContext <- sparkRSQL.init(sc)
df1 <- read.df(sqlContext, "path/to/file.json", source = "json")
schema <- structType(structField("name", "string"),
                     structField("info", "map<string,double>"))
df2 <- read.df(sqlContext, mapTypeJsonPath, "json", schema)
df3 <- loadDF(sqlContext, "data/test_table", "parquet", mergeSchema = "true")
## End(Not run)