The map/reduce idea in Scala

1. map

val v = Vector(1, 2, 3, 4)
val v2 = v.map(n => n * 2)

scala> val v = Vector(1, 2, 3, 4)
v: scala.collection.immutable.Vector[Int] = Vector(1, 2, 3, 4)

scala> val v2 = v.map(n => n * 2)
v2: scala.collection.immutable.Vector[Int] = Vector(2, 4, 6, 8)

The word "map" here does not mean a geographic map; it means a mapping: each element of the source collection is passed through a function and mapped one-to-one into a new collection. In the code above, that function is an anonymous function that multiplies by 2, but we can supply any function we like: add 2, multiply by 5, square, and so on.
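For instance, the "add 2" and "square" variants mentioned above look like this (illustrative examples over the same vector):

```scala
object MapExamples extends App {
  val v = Vector(1, 2, 3, 4)
  // Each anonymous function is applied to every element, producing a new Vector.
  val plusTwo = v.map(n => n + 2)   // Vector(3, 4, 5, 6)
  val squared = v.map(n => n * n)   // Vector(1, 4, 9, 16)
  println(plusTwo)
  println(squared)
}
```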

2. reduce

val v = Vector(1, 2, 3, 4)
val v3 = v.reduce((sum, n) => sum + n)

Many examples write v.reduce((a, b) => a + b), which is not easy to follow at first. What actually happens is that two values are passed into the function, the result is then combined with the next element, and so on until every element has been processed. "reduce" here does not mean "decrease"; it means to fold or condense: the elements of the collection are fed, pair by pair, into the binary anonymous function until a single value remains. My understanding is that this behaves much like a tail-recursive accumulation.
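The left-to-right accumulation described above, ((1 + 2) + 3) + 4, can also be written with foldLeft, which makes the accumulator and its initial value explicit (a minimal sketch):

```scala
object ReduceExamples extends App {
  val v = Vector(1, 2, 3, 4)
  // reduce: the first element seeds the accumulator, then each remaining
  // element is combined in turn: ((1 + 2) + 3) + 4 = 10
  val sum = v.reduce((acc, n) => acc + n)
  // foldLeft does the same thing but takes an explicit starting value (0 here).
  val sum2 = v.foldLeft(0)((acc, n) => acc + n)
  println(sum)   // 10
  println(sum2)  // 10
}
```

Note that reduce throws on an empty collection, while foldLeft simply returns its initial value.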

3. A concrete example

Compute the average age of the people listed in a file:

1 54
2 69
3 66
4 33
5 18
6 51
7 82
8 26
9 1
That is the format of the file: an index, a space, then an age.


import org.apache.spark.SparkConf
import org.apache.spark.SparkContext



object AvgAgeCalculator {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("Spark Exercise: Average Age Calculator")
    val sc = new SparkContext(conf)
    val dataFile = sc.textFile("file:///Users/walle/Documents/spark_projects/sparkage/sample_age_data.txt", 5)

    val count = dataFile.count()
    // Each line is processed individually: split on the space and take the second
    // field. Note that Scala indexes arrays with (), not [].
    val ageData = dataFile.map(line => line.split(" ")(1))
    // Sum the ages: parse each field to Int, then reduce the RDD to a single total.
    val totalAge = ageData.map(age => age.trim.toInt).reduce((a, b) => a + b)
    val avgAge: Double = totalAge.toDouble / count.toDouble
    println("Average Age is " + avgAge)
    sc.stop()
  }
}
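The same map/reduce logic can be checked without a Spark cluster by running it over a plain Scala collection; the sample lines below mirror the nine-line file shown above:

```scala
object AvgAgeLocal extends App {
  // Hard-coded copy of the sample file contents from above.
  val lines = Vector(
    "1 54", "2 69", "3 66", "4 33", "5 18",
    "6 51", "7 82", "8 26", "9 1"
  )
  // map: extract and parse the age field from each line.
  val ages = lines.map(line => line.split(" ")(1).toInt)
  // reduce: fold all ages into one total.
  val totalAge = ages.reduce((a, b) => a + b)
  val avgAge = totalAge.toDouble / lines.size
  println("Average Age is " + avgAge)
}
```

Spark's RDD map and reduce follow the same shape as these collection methods, which is what makes the distributed version read so naturally.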

http://www.waitingfy.com/archives/4048

