I'm using the scalaj-http library to read a JSON document from the internet and parse it with the play-json library. The code reads from the JSON in this way: `response("token")`. I package the program as a JAR file and execute it using spark-submit.

Steps to reproduce: create any Scala program that reads a JSON using play-json, then run it on an instance of Analytics Engine on IBM Cloud — it will crash.

I execute my code using spark-submit on two different machines: my Mac (Spark 2.3.2 / Scala 2.11.12) and Analytics Engine on IBM Cloud (Linux -south… / Scala 2.11.8 / Spark 2.3.2).

Actual Behavior

While reading the JSON works on my local Mac machine using spark-submit, it unfortunately does not work on Analytics Engine, where it throws the following error:

    Exception in thread "main" java.lang.NoSuchMethodError: play.api.libs.json.JsLookup$.apply$extension1(Lplay/api/libs/json/JsLookupResult;Ljava/lang/String;)Lplay/api/libs/json/JsValue;
        at .Authorization.getAuthToken(Authorization.scala:42)
        at .Test$.main(Test.scala:21)
        at .Test.main(Test.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:904)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Authorization.scala:42 refers to reading the JSON as mentioned above. When I run my JAR file directly using java -jar, it works on both machines, but my scenario requires me to do a spark-submit on Analytics Engine, hence I'm trying to explore all possible options. Given that both run the same version of Spark and an almost identical version of Scala, I'm quite confused as to why my program crashes on Analytics Engine.
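For reference, here is a minimal sketch of what the failing call site might look like. Only the `response("token")` lookup and the `Authorization.getAuthToken` / `Test` names come from the question; the JSON literal and everything else below are assumptions, and the document is passed in as a string so the sketch is self-contained (in the real code the body would come from a scalaj-http call).

```scala
import play.api.libs.json.{JsValue, Json}

// Hypothetical reconstruction of the call at Authorization.scala:42.
object Authorization {
  def getAuthToken(body: String): String = {
    val response: JsValue = Json.parse(body)
    // response("token") compiles against JsLookup's apply extension method
    // in the play-json version present at build time; if a different
    // play-json ends up on the runtime classpath, this exact call can fail.
    response("token").as[String]
  }
}

object Test {
  def main(args: Array[String]): Unit = {
    val token = Authorization.getAuthToken("""{"token": "abc123"}""")
    println(token)
  }
}
```

Note that `response("token")` is sugar for an extension method on `JsLookup`, which is why that symbol appears in the stack trace rather than a plain method on `JsValue`.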
Please describe the expected behavior of the issue, starting from the first action. If this is an issue that involves integration with another system, include the exact version and OS of the other system, including any intermediate drivers or APIs, i.e. if you connect to a PostgreSQL database, include both the version / OS of PostgreSQL and the JDBC driver version used to connect to the database.

Library Dependencies

build.sbt contents: name := "CMDW-Security"

API (Scala / Java / Neither / Both): Scala
Scala: Scala 2.11.8 being used inside Spark 2.3.2
Java version (paste the output from java -version at the command line):
Operating System (Ubuntu 15.10 / MacOS 10.10 / Windows 10): 2.7.3
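The build.sbt contents are cut off in the post after the `name` setting. A sketch of what such a build file typically contains is below; apart from `name := "CMDW-Security"` and the Spark/Scala versions stated above, every line and version number is an assumption, not taken from the post.

```scala
// build.sbt -- illustrative sketch only; library versions are assumed.
name := "CMDW-Security"

scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // HTTP client used to fetch the JSON (version assumed)
  "org.scalaj"        %% "scalaj-http" % "2.4.1",
  // JSON parser whose JsLookup method is missing at runtime (version assumed)
  "com.typesafe.play" %% "play-json"   % "2.6.10",
  // Spark itself is supplied by the cluster, so it is marked Provided
  "org.apache.spark"  %% "spark-core"  % "2.3.2" % Provided,
  "org.apache.spark"  %% "spark-sql"   % "2.3.2" % Provided
)
```

A mismatch between the play-json version compiled against here and whatever play-json version the cluster puts on the classpath is the kind of thing a `NoSuchMethodError` like the one above usually points to.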