Start Minikube with the HyperKit driver, giving it enough memory and CPUs, and mark the address range of the local Docker registry as insecure:

```shell
$ minikube start --driver=hyperkit --memory 8192 --cpus 4 --insecure-registry "10.0.0.0/24"
✨  Using the hyperkit driver based on user configuration
Downloading driver docker-machine-driver-hyperkit:
The 'hyperkit' driver requires elevated permissions.

$ sudo chown root:wheel /Users/dzlab/.minikube/bin/docker-machine-driver-hyperkit
$ sudo chmod u+s /Users/dzlab/.minikube/bin/docker-machine-driver-hyperkit

Starting control plane node minikube in cluster minikube
Downloading Kubernetes v1.18.3 preload ...
Done! kubectl is now configured to use "minikube"
Enabled addons: default-storageclass, storage-provisioner
```

Switch to the Minikube Docker daemon so that all subsequent Docker commands are forwarded to it:

```shell
$ eval $(minikube docker-env)
```

The project's build configuration file, `build.sbt`, looks like this:

```scala
scalaVersion in ThisBuild := "2.12.0"

val sparkVersion = "2.4.5"
val registry = "192.168.64.11:5000"

val sparkLibs = Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion,
  "org.apache.spark" %% "spark-sql" % sparkVersion
)

// JAR build settings
lazy val commonSettings = Seq(
  organization := "dzlab",
  version := "0.1",
  scalaSource in Compile := baseDirectory.value / "src",
  scalaSource in Test := baseDirectory.value / "test",
  resourceDirectory in Test := baseDirectory.value / "test" / "resources",
  javacOptions ++= Seq(),
  scalacOptions ++= Seq(
    "-deprecation",
    "-feature",
    "-language:implicitConversions",
    "-language:postfixOps"
  ),
  libraryDependencies ++= sparkLibs
)

// Docker image build settings
dockerBaseImage := "gcr.io/spark-operator/spark:v" + sparkVersion

lazy val root = (project in file("."))
  .enablePlugins(DockerPlugin, JavaAppPackaging)
  .settings(
    name := "spark-k8s",
    commonSettings,
    dockerAliases ++= Seq(dockerAlias.value.withRegistryHost(Some(registry))),
    mainClass in (Compile, run) := Some("dzlab.SparkJob")
  )
```

Notice the following important variables in this build configuration file:

- `dockerBaseImage`: set to a Spark Operator image, which we use as the base Docker image.
- `registry`: set to the Minikube VM IP address and port 5000, on which the Docker registry is running.

Now that we have the infra and the project set up, we can build the Docker image for our Spark example project using `sbt docker:publishLocal`, like this:

```shell
$ sbt docker:publishLocal
Loading global plugins from /Users/dzlab/.sbt/0.13/plugins
Loading project definition from /Users/dzlab/Projects/spark-k8s/project
Set current project to spark-k8s (in build file:/Users/dzlab/Projects/spark-k8s/)
Compiling 1 Scala source to /Users/dzlab/Projects/spark-k8s/target/scala-2.12/classes ...
Main Scala API documentation to /Users/dzlab/Projects/spark-k8s/target/scala-2.12/api ...
Main Scala API documentation successful.
Packaging /Users/dzlab/Projects/spark-k8s/target/scala-2.12/spark-k8s_2. ...
Sending build context to Docker daemon  103.9MB
Step 1/7 : FROM gcr.io/spark-operator/spark:v2.4.5
...
Removing intermediate container 25476927fa0a
Removing intermediate container 6df7c84d8312
Removing intermediate container 2239c4f9a0dc
Removing intermediate container c6a7f951555d
```
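To make the `dockerAliases` setting concrete: sbt-native-packager combines the registry host, image name, and version into the tag it publishes. Here is a plain-Scala sketch of that composition (this is not sbt code; `alias` is a hypothetical helper, and the values simply mirror the ones in the build configuration):

```scala
// Hypothetical sketch of how a Docker alias is composed:
// <registryHost>/<name>:<version>
object DockerAliasSketch {
  def alias(registryHost: Option[String], name: String, version: String): String = {
    // An alias with no registry host is just name:version.
    val prefix = registryHost.map(_ + "/").getOrElse("")
    s"$prefix$name:$version"
  }

  def main(args: Array[String]): Unit =
    // With the build values above, the published image tag would be
    // 192.168.64.11:5000/spark-k8s:0.1
    println(alias(Some("192.168.64.11:5000"), "spark-k8s", "0.1"))
}
```

This is the name under which the image lands in the Minikube-side registry, and the name you would later reference when deploying the job.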
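A note on the `spark-k8s_2.12` name that shows up in the packaging output: the `%%` operator in the dependency list, and sbt's own packaging, append the Scala binary version to artifact names. A minimal plain-Scala illustration of that naming convention (`crossName` is a hypothetical helper, for illustration only):

```scala
// Hypothetical helper illustrating sbt's cross-version artifact naming:
// an artifact built for Scala 2.12 gets the suffix "_2.12".
object ArtifactNaming {
  def crossName(artifact: String, scalaBinaryVersion: String): String =
    s"${artifact}_$scalaBinaryVersion"

  def main(args: Array[String]): Unit =
    // The project jar is therefore named spark-k8s_2.12-<version>.jar
    println(crossName("spark-k8s", "2.12"))
}
```

The same rule is why `"org.apache.spark" %% "spark-core" % sparkVersion` resolves to the `spark-core_2.12` artifact under the Scala version set in the build.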