pyspark.RDD.map

RDD.map(f, preservesPartitioning=False)
Return a new RDD by applying a function to each element of this RDD.

New in version 0.7.0.

Parameters
f : function
    a function to run on each element of the RDD
preservesPartitioning : bool, optional, default False
    indicates whether the input function preserves the partitioner, which should be False unless this is a pair RDD and the input function doesn't modify the keys
 
Returns
    RDD
        a new RDD containing the result of applying the function to each element
Examples

>>> rdd = sc.parallelize(["b", "a", "c"])
>>> sorted(rdd.map(lambda x: (x, 1)).collect())
[('a', 1), ('b', 1), ('c', 1)]
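The `preservesPartitioning` flag matters because a pair RDD's partitioner assigns each record to a partition based on its key; if the mapped function leaves keys unchanged, the existing partitioner remains valid and Spark can skip a later shuffle. The following is a local, Spark-free sketch of that contract using a hash-partitioner analogy (the `partition_of` helper is a hypothetical illustration, not Spark's internal partitioner):

```python
# Sketch (assumption, not Spark internals): a key's partition is a
# function of the key alone, here modeled as hash(key) % num_partitions.
def partition_of(key, num_partitions=4):
    return hash(key) % num_partitions

pairs = [("a", 1), ("b", 2), ("c", 3)]

# f leaves keys intact: every record stays in the same partition, so
# declaring preservesPartitioning=True would be safe.
f_safe = lambda kv: (kv[0], kv[1] * 10)
assert all(
    partition_of(kv[0]) == partition_of(f_safe(kv)[0]) for kv in pairs
)

# f rewrites keys: partition assignments may no longer match the
# partitioner, so the default preservesPartitioning=False is required.
f_unsafe = lambda kv: (kv[0].upper(), kv[1])
```

In PySpark terms, `rdd.map(f_safe, preservesPartitioning=True)` keeps the parent's partitioner on the resulting RDD, while the `f_unsafe` case must use the default so downstream key-based operations repartition correctly.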