Spark Submit giving error in Pentaho Spoon
I am new to Pentaho and I am using the MapR distribution. When I submit a Spark job I get the error below; please help me with this. I have done the necessary configuration to integrate Spark with Pentaho. Please find attached the screenshots of the Pentaho Spark Submit job.
2017/09/12 12:41:44 - Spoon - Starting job...
2017/09/12 12:41:44 - spark_submit - Start of job execution
2017/09/12 12:41:44 - spark_submit - Starting entry [Spark submit]
2017/09/12 12:41:44 - Spark submit - Submitting Spark Script
2017/09/12 12:41:45 - Spark submit - Warning: Master yarn-cluster is deprecated since 2.0. Please use master "yarn" with specified deploy mode instead.
2017/09/12 12:41:45 - Spark submit - SLF4J: Class path contains multiple SLF4J bindings.
2017/09/12 12:41:45 - Spark submit - SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2017/09/12 12:41:45 - Spark submit - SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2017/09/12 12:41:45 - Spark submit - SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2017/09/12 12:41:45 - Spark submit - SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2017/09/12 12:41:45 - Spark submit - 17/09/12 12:41:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017/09/12 12:41:46 - Spark submit - 17/09/12 12:41:46 ERROR MapRZKRmFinderUtils: Zookeeper address not configured in Yarn configuration. Please check yarn-site.xml.
2017/09/12 12:41:46 - Spark submit - 17/09/12 12:41:46 ERROR MapRZKRmFinderUtils: Unable to determine ResourceManager service address from Zookeeper.
2017/09/12 12:41:46 - Spark submit - 17/09/12 12:41:46 ERROR MapRZKBasedRMFailoverProxyProvider: Unable to create proxy to ResourceManager null
2017/09/12 12:41:46 - Spark submit - Exception in thread "main" java.lang.RuntimeException: Unable to create proxy to ResourceManager null
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider.getProxy(MapRZKBasedRMFailoverProxyProvider.java:135)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.io.retry.RetryInvocationHandler$ProxyDescriptor.<init>(RetryInvocationHandler.java:195)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:304)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.io.retry.RetryInvocationHandler.<init>(RetryInvocationHandler.java:298)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.io.retry.RetryProxy.create(RetryProxy.java:58)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.yarn.client.RMProxy.createRMProxy(RMProxy.java:95)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.yarn.client.ClientRMProxy.createRMProxy(ClientRMProxy.java:73)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceStart(YarnClientImpl.java:193)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:152)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.yarn.Client.run(Client.scala:1154)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1213)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.yarn.Client.main(Client.scala)
2017/09/12 12:41:46 - Spark submit -     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2017/09/12 12:41:46 - Spark submit -     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2017/09/12 12:41:46 - Spark submit -     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2017/09/12 12:41:46 - Spark submit -     at java.lang.reflect.Method.invoke(Method.java:498)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:733)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:177)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:202)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:116)
2017/09/12 12:41:46 - Spark submit -     at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2017/09/12 12:41:46 - Spark submit - Caused by: java.lang.RuntimeException: Zookeeper address not found from MapR Filesystem and is not configured in Yarn configuration.
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.yarn.client.MapRZKRmFinderUtils.mapRZkBasedRMFinder(MapRZKRmFinderUtils.java:99)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider.updateCurrentRMAddress(MapRZKBasedRMFailoverProxyProvider.java:64)
2017/09/12 12:41:46 - Spark submit -     at org.apache.hadoop.yarn.client.MapRZKBasedRMFailoverProxyProvider.getProxy(MapRZKBasedRMFailoverProxyProvider.java:131)
2017/09/12 12:41:46 - Spark submit -     ... 21 more
2017/09/12 12:41:46 - spark_submit - Finished job entry [Spark submit] (result=[false])
2017/09/12 12:41:46 - spark_submit - Starting entry [Spark submit]
2017/09/12 12:41:46 - Spark submit - Submitting Spark Script
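Note that the two ERROR lines point straight at yarn-site.xml: the MapR failover proxy provider cannot locate the ResourceManager because no ZooKeeper address is configured. As a hedged reference only, a yarn-site.xml fragment supplying that address might look like the sketch below; the property name `yarn.resourcemanager.zk-address`, the hostnames, and the port are assumptions to verify against your distribution's documentation (MapR clusters typically run ZooKeeper on port 5181).

```xml
<!-- Sketch only: hypothetical ZooKeeper quorum for ResourceManager discovery.
     Replace the hosts/port with your cluster's actual ZooKeeper nodes. -->
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk-node1:5181,zk-node2:5181,zk-node3:5181</value>
</property>
```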
In my opinion, PDI is telling you the problem on line 6: the class path contains multiple SLF4J bindings. See the detailed explanation at the link referenced a few lines further down: http://www.slf4j.org/codes.html#multiple_bindings.
The ambiguous class StaticLoggerBinder may be loaded either from /opt/mapr/lib/slf4j-log4j12-1.7.12.jar or from /opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar.
Remove one of them and restart.
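To find the duplicate binding jars before removing anything, a minimal shell sketch like the following could help. The two directories come from the jar paths in the question's log; the backup location is hypothetical, and you should adjust both to your own nodes.

```shell
#!/bin/sh
# Sketch only: list every slf4j-log4j12 binding jar under the two
# classpath roots named in the log, so you can decide which copy
# is redundant. Directories are taken from the question's log.
for dir in /opt/mapr/lib /opt/hadoop/share/hadoop/common/lib; do
  # 2>/dev/null hides the error if a directory is absent on this node;
  # the name pattern matches both versions seen in the log (1.7.12, 1.7.10).
  find "$dir" -maxdepth 1 -name 'slf4j-log4j12-*.jar' 2>/dev/null || true
done
# Once you know which copy to drop, move it aside rather than deleting it,
# e.g. (hypothetical backup directory):
#   mkdir -p /root/jar-backup
#   mv /opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar /root/jar-backup/
```

On a MapR node you would normally expect to keep the copy under /opt/mapr/lib, since it matches the rest of the MapR stack, but verify against your installation before moving anything.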