HDP 3.1 NoSuchMethodError with HDFS Jar Storage #1345
Which database are you using for Ambari?
PostgreSQL 9.6
Something similar happened to me as well. Try changing the driver; an older version of the driver might help.
I'm confused about what Postgres has to do with this. The error says that the method org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;ZLjava/lang/String;Ljava/lang/String;Ljava/lang/Class;) doesn't exist. I'm assuming it's because SAM and HDP were built against different versions of the Hadoop libraries, but I don't know. Seems like an odd problem to have.
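One way to check the "different library versions" theory is to ask the JVM where it actually loaded a class from. The sketch below is a hypothetical diagnostic, not part of SAM: it prints the code source of a class, which helps spot classpath conflicts when two jars bundle different versions of the same class. In practice the argument would be org.apache.hadoop.io.retry.RetryUtils; here it probes itself so the sketch runs standalone.

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns a human-readable location for where `c` was loaded from.
    static String locationOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        // Classes from the bootstrap loader (e.g. java.lang.String) report no code source.
        return src != null ? src.getLocation().toString() : "bootstrap class path";
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical usage: pass the fully qualified name of the class in question,
        // e.g. org.apache.hadoop.io.retry.RetryUtils, with the SAM libs on the classpath.
        String name = args.length > 0 ? args[0] : WhichJar.class.getName();
        System.out.println(locationOf(Class.forName(name)));
    }
}
```

Running this with the same classpath SAM uses would show which hadoop-common jar wins, so it can be compared against the version HDP ships.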
OK, it might be so...
Hi, I am also having the same problem when I try running any Spark script on Zeppelin 0.8.0. My HDP is 3.1.4.
Startup on an HDP 3.1.0 and HDF 3.4.0 cluster fails with the following error. Looks to be a dependency issue.
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;ZLjava/lang/String;Ljava/lang/String;Ljava/lang/Class;)Lorg/apache/hadoop/io/retry/RetryPolicy;
    at org.apache.hadoop.hdfs.NameNodeProxies.createNNProxyWithClientProtocol(NameNodeProxies.java:318)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:235)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:139)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:510)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:453)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3352)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
    at com.hortonworks.registries.common.util.HdfsFileStorage.init(HdfsFileStorage.java:63)
    at com.hortonworks.streamline.webservice.StreamlineApplication.getJarStorage(StreamlineApplication.java:222)
    at com.hortonworks.streamline.webservice.StreamlineApplication.registerResources(StreamlineApplication.java:245)
    at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:102)
    at com.hortonworks.streamline.webservice.StreamlineApplication.run(StreamlineApplication.java:76)
    at io.dropwizard.cli.EnvironmentCommand.run(EnvironmentCommand.java:43)
    at io.dropwizard.cli.ConfiguredCommand.run(ConfiguredCommand.java:85)
    at io.dropwizard.cli.Cli.run(Cli.java:75)
    at io.dropwizard.Application.run(Application.java:79)
    at com.hortonworks.streamline.webservice.StreamlineApplication.main(StreamlineApplication.java:80)
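A NoSuchMethodError at runtime generally means the class the JVM actually loaded lacks the exact method signature the calling code was compiled against, which is what a SAM/HDP Hadoop version mismatch would produce. As a rough diagnostic, one can probe a loaded class for an expected signature via reflection. This is a minimal sketch using a JDK class for illustration; in practice the class and method names would be swapped for org.apache.hadoop.io.retry.RetryUtils.getDefaultRetryPolicy and its parameter types.

```java
import java.lang.reflect.Method;

public class SignatureProbe {
    // Returns true if `cls` exposes a public method with the given name and
    // exact parameter types; false if no such signature exists on the loaded class.
    static boolean hasMethod(Class<?> cls, String name, Class<?>... params) {
        try {
            Method m = cls.getMethod(name, params);
            return m != null;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Illustration only: probe a JDK class so the sketch runs standalone.
        // For the real check, load org.apache.hadoop.io.retry.RetryUtils on the
        // same classpath SAM uses and probe getDefaultRetryPolicy.
        System.out.println(hasMethod(String.class, "substring", int.class, int.class)); // → true
        System.out.println(hasMethod(String.class, "noSuchThing"));                     // → false
    }
}
```

If the probe returns false for the signature in the stack trace, the hadoop-common jar on SAM's classpath predates (or postdates) the one the calling code was compiled against, and aligning the bundled Hadoop libraries with the cluster's version would be the fix to investigate.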