Spark java.lang.OutOfMemoryError: Java heap space

My cluster: one master, 11 slaves, and each node has 6 GB of memory.

My settings: spark.executor.memory=4g, -Dspark.akka.frameSize=512

Here is the problem:

First, I read some data (2.19 GB) from HDFS into an RDD:

val imageBundleRDD = sc.newAPIHadoopFile(...)

Second, do something on this RDD:

val res = imageBundleRDD.map(data => {
  val desPoints = threeDReconstruction(data._2, bg)
  (data._1, desPoints)
})

Last, output the result to HDFS:

res.saveAsNewAPIHadoopFile(...)

When I run my program it shows:
.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:36 as 30618515 bytes in 449 ms
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Starting task 1.0:32 as TID 35 on executor 7: Salve4.Hadoop (NODE_LOCAL)
Uncaught error from thread [spark-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error'
is enabled for ActorSystem[spark]
java.lang.OutOfMemoryError: Java heap space

Are there too many tasks?

PS: Everything is fine when the input data is about 225 MB. How can I solve this problem?
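For reference, a minimal sketch of how the two settings quoted above would be applied programmatically, assuming the Spark 1.x-era API; the application name is hypothetical:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: the same settings as in the question, set via SparkConf.
// On Spark 1.x, spark.akka.frameSize is interpreted in MB.
val conf = new SparkConf()
  .setAppName("ImageBundle")            // hypothetical app name
  .set("spark.executor.memory", "4g")   // per-executor JVM heap
  .set("spark.akka.frameSize", "512")   // max Akka message size, in MB
val sc = new SparkContext(conf)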
3 Answers

森林海
2011 contributions · 2+ upvotes
You can run Spark in local mode. In this non-distributed single-JVM deployment mode, Spark spawns all of the execution components (driver, executor, backend, and master) in the same JVM. This is the only mode in which the driver itself is used for execution.

Since everything runs in one JVM here, an OOM error on the heap is fixed by raising driver-memory rather than executor-memory when launching with spark-submit. For example:
spark-1.6.1/bin/spark-submit --class "MyClass" --driver-memory 12g --master local[*] target/scala-2.10/simple-project_2.10-1.0.jar
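As a quick sanity check (a minimal sketch, assuming the Spark 1.x API used elsewhere on this page), you can print the driver's effective setting and its actual max heap to confirm that --driver-memory took effect:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: confirm what the driver JVM actually received.
val sc = new SparkContext(new SparkConf().setAppName("MemCheck")) // hypothetical name
println(sc.getConf.getOption("spark.driver.memory").getOrElse("<not set>"))
println(s"Driver max heap: ${Runtime.getRuntime.maxMemory / (1024 * 1024)} MB")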

FFIVE
1797 contributions · 6+ upvotes
# Set SPARK_MEM if it isn't already set since we also use it for this process
SPARK_MEM=${SPARK_MEM:-512m}
export SPARK_MEM

# Set JAVA_OPTS to be able to load native libraries and to set heap size
JAVA_OPTS="$OUR_JAVA_OPTS"
JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"
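One caveat about the snippet above: as far as I recall, the blanket SPARK_MEM variable was deprecated in later Spark releases in favor of per-role settings, so on newer versions the rough equivalent would be configured like this (a sketch; the values are illustrative, not recommendations):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: per-role replacements for the blanket SPARK_MEM variable.
val conf = new SparkConf()
  .setAppName("HeapSizing")             // hypothetical app name
  .set("spark.executor.memory", "2g")   // heap for each executor JVM
// spark.driver.memory must be set before the driver JVM starts (e.g. via
// spark-submit --driver-memory); setting it in SparkConf has no effect in
// client mode because the driver JVM is already running at that point.
val sc = new SparkContext(conf)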