
Camel CDI app in Fabric8 via Maven

Recently I spent some time experimenting with the Fabric8 microservices framework. While it is way too comprehensive to cover in a single blog post, I want to focus here on deploying an app/microservice to a Fabric8 cluster.

For this post I am using a local fabric8 installation on minikube; for more information see: http://fabric8.io/guide/getStarted/gofabric8.html

For details on how to set up your fabric8 environment using minikube you can read the excellent blog post by my colleague Dirk Janssen here.

Also, since I have recently been doing quite a lot of work with Apache Camel using the CDI framework, the decision to deploy a Camel CDI based microservice was quickly made 🙂

So in this blog I will outline how to create a basic Camel/CDI based microservice using Maven and deploy it on a Kubernetes cluster using a fabric8 CD pipeline.

Generating the microservice

I used a Maven archetype to quickly bootstrap the microservice. I used the Eclipse IDE to generate the project, but you can of course use the Maven CLI as well.

[Image: archetype selector]

Fabric8 provides lots of different archetypes and quickstarts out of the box for various programming languages and, for Java in particular, for various frameworks. Here I am going with the cdi-camel-jetty-archetype. This archetype generates a Camel route exposed via a Jetty endpoint, wired together using CDI.
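If you prefer the command line over the IDE wizard, generating the same project comes down to a single archetype invocation. A minimal sketch: the fabric8 archetypes live under the io.fabric8.archetypes group, the archetype version is left as a placeholder to fill in, and the groupId/artifactId match the demo project used in this post.

$ mvn archetype:generate \
    -DarchetypeGroupId=io.fabric8.archetypes \
    -DarchetypeArtifactId=cdi-camel-jetty-archetype \
    -DarchetypeVersion=<latest-archetype-version> \
    -DgroupId=nl.rubix.eos \
    -DartifactId=cdi-jetty-demo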

 

Building and running locally

As with any Maven based application one of the first things to do after the project is created is executing:

$ mvn clean install

However initially the result was:


~/workspace/cdi-jetty-demo$ mvn install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- fabric8-maven-plugin:3.2.8:resource (default) @ cdi-jetty-demo ---
[INFO] F8: Running in Kubernetes mode
[INFO] F8: Using resource templates from /home/pim/workspace/cdi-jetty-demo/src/main/fabric8
2016-12-28 11:54:14 INFO  Version:30 - HV000001: Hibernate Validator 5.2.4.Final
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.612 s
[INFO] Finished at: 2016-12-28T11:54:14+01:00
[INFO] Final Memory: 37M/533M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.2.8:resource (default) on project cdi-jetty-demo: Execution default of goal io.fabric8:fabric8-maven-plugin:3.2.8:resource failed: Container cdi-jetty-demo has no Docker image configured. Please check your Docker image configuration (including the generators which are supposed to run) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

Since the archetype did not provide a template in cdi-jetty-demo/src/main/fabric8 but the plugin was still looking for one there, it was throwing an error. After some looking around I managed to solve the issue by using a different version of the fabric8-maven-plugin.

Specifically:

<fabric8.maven.plugin.version>3.1.49</fabric8.maven.plugin.version>
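In the pom.xml generated by the archetype the plugin version is driven by this property, so overriding it in the properties section is enough. Roughly, assuming the plugin declaration picks its version up from that property (trimmed to the relevant parts):

<properties>
  <fabric8.maven.plugin.version>3.1.49</fabric8.maven.plugin.version>
</properties>

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>${fabric8.maven.plugin.version}</version>
</plugin>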

Now the clean install executed successfully.


~/workspace/cdi-jetty-demo$ mvn clean install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ cdi-jetty-demo ---
[INFO] Deleting /home/pim/workspace/cdi-jetty-demo/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:resource (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Using resource templates from /home/pim/workspace/cdi-jetty-demo/src/main/fabric8
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[INFO]
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ cdi-jetty-demo ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/pim/workspace/cdi-jetty-demo/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ cdi-jetty-demo ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/pim/workspace/cdi-jetty-demo/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ cdi-jetty-demo ---
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ cdi-jetty-demo ---
[INFO] Building jar: /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar
[INFO]
[INFO] >>> fabric8-maven-plugin:3.1.49:build (default) > initialize @ cdi-jetty-demo >>>
[INFO]
[INFO] <<< fabric8-maven-plugin:3.1.49:build (default) < initialize @ cdi-jetty-demo <<<
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:build (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Environment variable from gofabric8 : DOCKER_HOST=tcp://192.168.42.40:2376
[INFO] F8> Environment variable from gofabric8 : DOCKER_CERT_PATH=/home/pim/.minikube/certs
[INFO] F8> Pulling from fabric8/java-alpine-openjdk8-jdk
117f30b7ae3d: Already exists
f1011be98339: Pull complete
dae2abad9134: Pull complete
d1ea5cd75444: Pull complete
cec1e2f1c0f2: Pull complete
a9ec98a3bcba: Pull complete
38bbb9125eaa: Pull complete
66f50c07037b: Pull complete
868f8ddb8412: Pull complete
[INFO] F8> Digest: sha256:572ec2fdc9ac33bb1a8a5ee96c17eae9b7797666ed038c08b5c3583f98c1277f
[INFO] F8> Status: Downloaded newer image for fabric8/java-alpine-openjdk8-jdk:1.1.11
[INFO] F8> Pulled fabric8/java-alpine-openjdk8-jdk:1.1.11 in 47 seconds
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom (6 KB at 30.9 KB/sec)
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom (5 KB at 81.7 KB/sec)
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom (7 KB at 102.3 KB/sec)
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom (5 KB at 80.4 KB/sec)
[INFO] Copying files to /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-115614-0167/build/maven
[INFO] Building tar: /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-115614-0167/tmp/docker-build.tar
[INFO] F8> docker-build.tar: Created [eos/cdi-jetty-demo:snapshot-161228-115614-0167] "java-exec" in 8 seconds
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-115614-0167] "java-exec": Built image sha256:94fad
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-115614-0167] "java-exec": Tag with latest
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ cdi-jetty-demo ---
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.jar
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/pom.xml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.pom
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.json
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.json
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:03 min
[INFO] Finished at: 2016-12-28T11:57:15+01:00
[INFO] Final Memory: 61M/873M
[INFO] ------------------------------------------------------------------------

After the project built successfully I wanted to run it in my Kubernetes cluster.

Thankfully the guys at fabric8 also took this into consideration: by executing the Maven goal fabric8:run your app will be booted into a Docker container and launched as a Pod in the Kubernetes cluster, just for some local testing.


~/workspace/cdi-jetty-demo$ mvn fabric8:run
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> fabric8-maven-plugin:3.1.49:run (default-cli) > install @ cdi-jetty-demo >>>
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:resource (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Using resource templates from /home/pim/workspace/cdi-jetty-demo/src/main/fabric8
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[INFO]
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ cdi-jetty-demo ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ cdi-jetty-demo ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ cdi-jetty-demo ---
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ cdi-jetty-demo ---
[INFO] Building jar: /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar
[INFO]
[INFO] >>> fabric8-maven-plugin:3.1.49:build (default) > initialize @ cdi-jetty-demo >>>
[INFO]
[INFO] <<< fabric8-maven-plugin:3.1.49:build (default) < initialize @ cdi-jetty-demo <<<
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:build (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Environment variable from gofabric8 : DOCKER_HOST=tcp://192.168.42.40:2376
[INFO] F8> Environment variable from gofabric8 : DOCKER_CERT_PATH=/home/pim/.minikube/certs
[INFO] Copying files to /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-122841-0217/build/maven
[INFO] Building tar: /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-122841-0217/tmp/docker-build.tar
[INFO] F8> docker-build.tar: Created [eos/cdi-jetty-demo:snapshot-161228-122841-0217] "java-exec" in 5 seconds
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-122841-0217] "java-exec": Built image sha256:bccd3
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-122841-0217] "java-exec": Tag with latest
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ cdi-jetty-demo ---
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.jar
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/pom.xml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.pom
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.json
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.json
[INFO]
[INFO] <<< fabric8-maven-plugin:3.1.49:run (default-cli) < install @ cdi-jetty-demo <<<
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:run (default-cli) @ cdi-jetty-demo ---
[INFO] F8> Using Kubernetes at https://192.168.42.40:8443/ in namespace default with manifest /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.yml
[INFO] Using namespace: default
[INFO] Creating a Service from kubernetes.yml namespace default name cdi-jetty-demo
[INFO] Created Service: target/fabric8/applyJson/default/service-cdi-jetty-demo.json
[INFO] Creating a Deployment from kubernetes.yml namespace default name cdi-jetty-demo
[INFO] Created Deployment: target/fabric8/applyJson/default/deployment-cdi-jetty-demo.json
[INFO] hint> Use the command `kubectl get pods -w` to watch your pods start up
[INFO] F8> Watching pods with selector LabelSelector(matchExpressions=[], matchLabels={project=cdi-jetty-demo, provider=fabric8, group=nl.rubix.eos}, additionalProperties={}) waiting for a running pod...
[INFO] New Pod> cdi-jetty-demo-1244207563-s9xn2 status: Pending
[INFO] New Pod> cdi-jetty-demo-1244207563-s9xn2 status: Running
[INFO] New Pod> Tailing log of pod: cdi-jetty-demo-1244207563-s9xn2
[INFO] New Pod> Press Ctrl-C to scale down the app and stop tailing the log
[INFO] New Pod>
[INFO] Pod> exec java -javaagent:/opt/agent-bond/agent-bond.jar=jolokia{{host=0.0.0.0}},jmx_exporter{{9779:/opt/agent-bond/jmx_exporter_config.yml}} -cp .:/app/* org.apache.camel.cdi.Main
[INFO] Pod> I> No access restrictor found, access to any MBean is allowed
[INFO] Pod> Jolokia: Agent started with URL http://172.17.0.6:8778/jolokia/
[INFO] Pod> 2016-12-28 11:28:52.953:INFO:ifasjipjsoejs.Server:jetty-8.y.z-SNAPSHOT
[INFO] Pod> 2016-12-28 11:28:53.001:INFO:ifasjipjsoejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:9779
[INFO] Pod> 2016-12-28 11:28:53,142 [main           ] INFO  Version                        - WELD-000900: 2.3.3 (Final)
[INFO] Pod> 2016-12-28 11:28:53,412 [main           ] INFO  Bootstrap                      - WELD-000101: Transactional services not available. Injection of @Inject UserTransaction not available. Transactional observers will be invoked synchronously.
[INFO] Pod> 2016-12-28 11:28:53,605 [main           ] INFO  Event                          - WELD-000411: Observer method [BackedAnnotatedMethod] private org.apache.camel.cdi.CdiCamelExtension.processAnnotatedType(@Observes ProcessAnnotatedType<?>) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
[INFO] Pod> 2016-12-28 11:28:54,809 [main           ] INFO  DefaultTypeConverter           - Loaded 196 type converters
[INFO] Pod> 2016-12-28 11:28:54,861 [main           ] INFO  CdiCamelExtension              - Camel CDI is starting Camel context [myJettyCamel]
[INFO] Pod> 2016-12-28 11:28:54,863 [main           ] INFO  DefaultCamelContext            - Apache Camel 2.18.1 (CamelContext: myJettyCamel) is starting
[INFO] Pod> 2016-12-28 11:28:54,864 [main           ] INFO  ManagedManagementStrategy      - JMX is enabled
[INFO] Pod> 2016-12-28 11:28:55,030 [main           ] INFO  DefaultRuntimeEndpointRegistry - Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
[INFO] Pod> 2016-12-28 11:28:55,081 [main           ] INFO  DefaultCamelContext            - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[INFO] Pod> 2016-12-28 11:28:55,120 [main           ] INFO  log                            - Logging initialized @2850ms
[INFO] Pod> 2016-12-28 11:28:55,170 [main           ] INFO  Server                         - jetty-9.2.19.v20160908
[INFO] Pod> 2016-12-28 11:28:55,199 [main           ] INFO  ContextHandler                 - Started o.e.j.s.ServletContextHandler@28cb9120{/,null,AVAILABLE}
[INFO] Pod> 2016-12-28 11:28:55,206 [main           ] INFO  ServerConnector                - Started ServerConnector@2da1c45f{HTTP/1.1}{0.0.0.0:8080}
[INFO] Pod> 2016-12-28 11:28:55,206 [main           ] INFO  Server                         - Started @2936ms
[INFO] Pod> 2016-12-28 11:28:55,225 [main           ] INFO  DefaultCamelContext            - Route: route1 started and consuming from: jetty:http://0.0.0.0:8080/camel/hello
[INFO] Pod> 2016-12-28 11:28:55,226 [main           ] INFO  DefaultCamelContext            - Total 1 routes, of which 1 are started.
[INFO] Pod> 2016-12-28 11:28:55,228 [main           ] INFO  DefaultCamelContext            - Apache Camel 2.18.1 (CamelContext: myJettyCamel) started in 0.363 seconds
[INFO] Pod> 2016-12-28 11:28:55,238 [main           ] INFO  Bootstrap                      - WELD-ENV-002003: Weld SE container STATIC_INSTANCE initialized
^C[INFO] F8> Stopping the app:
[INFO] F8> Scaling Deployment default/cdi-jetty-demo to replicas: 0
[INFO] Pod> 2016-12-28 11:29:52,307 [Thread-12      ] INFO  MainSupport$HangupInterceptor  - Received hang up - stopping the main instance.
[INFO] Pod> 2016-12-28 11:29:52,313 [Thread-12      ] INFO  CamelContextProducer           - Camel CDI is stopping Camel context [myJettyCamel]
[INFO] Pod> 2016-12-28 11:29:52,313 [Thread-12      ] INFO  DefaultCamelContext            - Apache Camel 2.18.1 (CamelContext: myJettyCamel) is shutting down
[INFO] Pod> 2016-12-28 11:29:52,314 [Thread-12      ] INFO  DefaultShutdownStrategy        - Starting to graceful shutdown 1 routes (timeout 300 seconds)
[INFO] Pod> 2016-12-28 11:29:52,347 [ - ShutdownTask] INFO  ServerConnector                - Stopped ServerConnector@2da1c45f{HTTP/1.1}{0.0.0.0:8080}
[INFO] Pod> 2016-12-28 11:29:52,350 [ - ShutdownTask] INFO  ContextHandler                 - Stopped o.e.j.s.ServletContextHandler@28cb9120{/,null,UNAVAILABLE}
[INFO] Pod> 2016-12-28 11:29:52,353 [ - ShutdownTask] INFO  DefaultShutdownStrategy        - Route: route1 shutdown complete, was consuming from: jetty:http://0.0.0.0:8080/camel/hello

There are two things to note:

First, the app is not exposed outside of the Kubernetes cluster by default. I will explain how to do this later on in this blog, after we have completely deployed the app in our cluster.

Second, the fabric8:run goal starts the container and tails the log in the foreground; whenever you stop the app by hitting Ctrl+C it is automatically undeployed from the cluster. To tweak this behavior check: https://maven.fabric8.io

Deploying our microservice in the fabric8 cluster

After running and testing our microservice locally we are ready to fully deploy it in our cluster. For this we again start with a Maven command: mvn fabric8:import will import the application (templates) into the Kubernetes cluster and push the sources of the microservice to the fabric8 Git repository, which is based on Gogs.


~/workspace/cdi-jetty-demo$ mvn fabric8:import
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:import (default-cli) @ cdi-jetty-demo ---
[INFO] F8> Running 1 endpoints of gogs in namespace default
[INFO] F8> Creating Namespace user-secrets-source-minikube with labels: {provider=fabric8, kind=secrets}
Please enter your username for git repo gogs: gogsadmin
Please enter your password/access token for git repo gogs: RedHat$1
[INFO] F8> Creating Secret user-secrets-source-minikube/default-gogs-git
[INFO] F8> Creating ConfigMap fabric8-git-app-secrets in namespace user-secrets-source-minikube
[INFO] F8> git username: gogsadmin password: ******** email: pim@rubix.nl
[INFO] Trusting all SSL certificates
[INFO] Initialised an empty git configuration repo at /home/pim/workspace/cdi-jetty-demo
[INFO] creating git repository client at: http://192.168.42.40:30616
[INFO] Using remoteUrl: http://192.168.42.40:30616/gogsadmin/cdi-jetty-demo.git and remote name origin
[INFO] About to git commit and push to: http://192.168.42.40:30616/gogsadmin/cdi-jetty-demo.git and remote name origin
[INFO] Using UsernamePasswordCredentialsProvider{user: gogsadmin, password length: 8}
[INFO] Creating a BuildConfig for namespace: default project: cdi-jetty-demo
[INFO] F8> You can view the project dashboard at: http://192.168.42.40:30482/workspaces/default/projects/cdi-jetty-demo
[INFO] F8> To configure a CD Pipeline go to: http://192.168.42.40:30482/workspaces/default/projects/cdi-jetty-demo/forge/command/devops-edit
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19.643 s
[INFO] Finished at: 2016-12-28T12:30:27+01:00
[INFO] Final Memory: 26M/494M
[INFO] ------------------------------------------------------------------------

After the successful execution of the import goal we need to finish the setup in the fabric8 console.

First we need to set up a Kubernetes secret to authenticate to the Gogs Git repository.

[Image: select secret]

Next we can select a CD pipeline.

[Image: select pipeline]

After we have selected a CD pipeline suited to the lifecycle of our microservice, the Jenkinsfile will be committed to the Git repository of our microservice. After that, Jenkins will be configured and the Jenkins CD pipeline will start its initial job.

[Image: build success]

If everything goes fine the entire Jenkins pipeline will finish successfully.

[Image: finished pipeline]

Now our microservice is running in the test and staging environments, which are created if they did not already exist. However, as mentioned above, the microservice is not yet exposed outside of the Kubernetes cluster. This is because the Kubernetes template used has the following setting in the service configuration:

type: ClusterIP

Change this to the following and your app will be exposed outside of the Kubernetes cluster, ready to be consumed by external parties:

type: NodePort
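A minimal sketch of what such a service fragment could look like as a resource template under src/main/fabric8 (the file name svc.yml and the port are illustrative; the fabric8-maven-plugin merges fragments like this into the generated kubernetes.yml):

# src/main/fabric8/svc.yml - illustrative fragment
apiVersion: v1
kind: Service
metadata:
  name: cdi-jetty-demo
spec:
  type: NodePort      # was ClusterIP; NodePort exposes the service on a port of every cluster node
  ports:
    - port: 8080      # the Jetty endpoint of the Camel route
      targetPort: 8080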

This means the microservice can be called via a browser:

[Image: app]
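For example, after looking up the node port that Kubernetes assigned to the service, the route can also be called from the command line; the port 30123 below is an illustrative value, and the path comes from the route log above:

$ kubectl get svc cdi-jetty-demo      # shows the assigned NodePort
$ curl http://$(minikube ip):30123/camel/hello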

If everything went fine the fabric8 dashboard for the app will look something like this:

[Image: app dashboard]

As I mentioned above, the selection of the CD pipeline adds the Jenkinsfile to the Git repository; you can see it (and of course edit it if you wish) in the Gogs Git repo:

[Image: Jenkinsfile in the Gogs repo]

Some final thoughts

Being able to quickly deploy functionality across different environments without having to worry about runtime config, app servers, and manually hacking CD pipelines is definitely a major advantage for every app developer. For this reason the fabric8 framework in combination with Kubernetes really has some good potential. Using fabric8 locally with minikube did cause some stability issues, which I have not yet fully identified, but I'm sure this will improve with every new version.

 

Serverless architecture, what is it?

One of the more recent trends in IT is Serverless architecture. Like any hype in its early stages, a lot of ambiguity exists about what it is and what problems it solves; Serverless architecture is no different. So recently a colleague of mine, Jan van Zoggel (you can read his awesome blog here), and I took a look at what Serverless architecture is.

Some definitions, some ambiguity and some other terms

The Serverless trend emerged, like so many others these days, in the realm of web and app development, where all logic and state traditionally handled by the backend was placed in an "*aaS" (as a service), which got the term BaaS or mBaaS: (mobile) Backend as a Service.

Meanwhile Serverless also came to mean something slightly different, which of course got another "*aaS" acronym: FaaS, or Function as a Service. In FaaS certain application logic (functions) runs in ephemeral runtimes in the cloud, where the cloud provider in question handles all runtime-specific configuration and setup. This includes things like networking, load balancing, scaling and so on. More on these ephemeral runtimes below.

Even though it is still quite early on in the world of Serverless architectures, it seems the FaaS definition of Serverless architecture is gaining more traction than the mBaaS definition. This doesn't mean mBaaS does not have a right to exist, just that its association with Serverless architecture is fading away.

To return to the definition of Serverless architecture and FaaS, it seems that the abstraction of (all) runtime configuration, setup and complexity is what Serverless is all about. Of course it is not truly serverless; your stuff has to run somewhere 😉

With this abstraction in mind I find the definition by Justin Isaf the most to the point when he said:

“Abstracting the server away from the developer. Not removing the server itself”

You can find his presentation about Serverless on YouTube: https://www.youtube.com/watch?v=5-V8DKPsUoU

The evolution of runtimes

When looking not just at Serverless architecture but also at other fairly recent trends in IT (we're looking at you, Cloud, Microservices and Containerization), we can detect an "evolution of runtimes".

From one to many runtimes.

Traditionally you had one gigantic server, virtualized or bare metal, on which an application server of some sort was installed. On this application server all applications, modules and/or components of a system would be deployed and run. Even though these application servers were usually set up in multiples to at least have some high availability, further scaling and moving around these runtimes, often called monolithic, was hard at best and downright impossible at worst (looking at you, mainframe!).

More recently a trend emerged to not only split complex monolithic systems into autonomous modules, called microservices, but also to separate these microservices further by running each of them in its own runtime, usually a container of some sort. Throw in a container orchestration tool like Kubernetes or Docker Swarm and all of a sudden scaling out and moving around runtimes becomes much, much simpler. Every application is neatly separated from the others, and requests or messages for a particular app are handled by its own runtime (container).

The next step in this evolution is what Serverless architecture and FaaS are all about. Where a container runtime handles all requests of the app it is serving, a FaaS function runtime handles only one request and shuts itself down after the function completes. This shutting down after every invocation gives a Serverless architecture a couple of defining characteristics.

  • No "always on" server – when no invocation requests are being handled, where a traditional runtime would be sitting idle, a Serverless architecture simply has nothing running.
  • "Infinite" scalability – since every request is handled by its own separate runtime, and the provisioning and running of these FaaS functions is handled by the (cloud) provider, you get a theoretically infinitely scalable application.
  • Zero runtime configuration – the configuration of your application server, Docker container or server is left completely to the (cloud) provider, providing a "No-Ops" environment where the team only has to worry about the application logic itself.

[Image: evolution of runtimes]

The evolution of runtimes

You can compare it with modern cars that turn off the engine while waiting at a traffic light. When the light turns green and the engine has to perform, it starts again; while the car is idling, the engine is simply turned off completely.

[Image: scalability]

Since every request is handled by its own runtime, scalability is handled out of the box.

Commercial offerings

As of the writing of this blog, the three largest commercial offerings for implementing a Serverless architecture are available from, not surprisingly, the "big three" cloud providers.

  1. Amazon AWS Lambda
  2. Google Cloud Functions
  3. Microsoft Azure Functions

Not surprisingly, each Serverless offering is completely integrated with the other cloud offerings of the particular provider, meaning you can invoke your FaaS function from various triggers provided by other cloud solutions.

Some final thoughts

Even though it's pretty early in the hype cycle, Serverless architecture and FaaS definitely have some attractive characteristics. The potential of Serverless functions is great: basically all short-running, stateless transactions can be handled by FaaS functions. And when combined with other cloud offerings like storage and API gateways, even more complex applications can be created with a Serverless architecture. With the cloud-native nature and scalability of FaaS your application is completely ready for the 21st century, and of course buzzword compliant 😉

 
 


What is HATEOAS?

With probably the most unpronounceable acronym in the world of IT, and there are a lot, HATEOAS is also one of the most obscure and misunderstood constraints of the REST specification. In this article I will make an attempt to shed some light on the world of hypermedia and HATEOAS.

Let's start with the complete list of REST constraints to give us some context on where the HATEOAS constraint sits.

  1. Client-server
  2. Stateless server
  3. Cache
  4. Uniform interface
    1. Identification of resources
    2. Manipulation of resources through representations
    3. Self-descriptive messages
    4. Hypermedia as the engine of application state (HATEOAS)
  5. Layered System
  6. Code-on-Demand

 

As we just saw in the list of constraints, HATEOAS stands for "Hypermedia As The Engine Of Application State". So, what does this mean?

To grasp the concept it's helpful to think of a regular webpage. When browsing to a page there are, most of the time, various hyperlinks available which you can use to navigate further to other pages. These pages are essentially the "state" of the web application.

To quote Roy Fielding from his thesis about RESTful architecture and design:

“The name 'Representational State Transfer' is intended to evoke an image of how a well-designed Web application behaves: a network of web pages (a virtual state-machine), where the user progresses through the application by selecting links (state transitions), resulting in the next page (representing the next state of the application) being transferred to the user and rendered for their use.”-Roy Fielding
Architectural Styles and the Design of Network-based Software Architectures
Chapter 6

So in essence the states are the various webpages and the transitions between the states are the hyperlinks. The hyperlinks are the "engine" through which the state transfer occurs.

[Image: HATEOAS web diagram]

Above we see a diagram of various webpages. When starting to browse the internet we begin at an initial starting page. By clicking links on this starting page we can browse to other pages. Apart from changing the URL manually in the browser address bar, the pages we can open via hyperlinks are dictated by the first page itself. The state transitions are controlled by the web application; this is an important concept in HATEOAS as well. Also, the endpoints are hidden from an end-user perspective.

As stated above, the webpages are states of the (web) application and the hyperlinks are the mechanism (engine) for changing the state. We can also abstract our webpage diagram to this:

[Image: HATEOAS API diagram]

So… how does this apply to REST and HATEOAS?

The way to implement HATEOAS is pretty straightforward: in each response message, add the link(s) for the possible next request messages. This gives the consumer of the REST service the opportunity to transition state via the links in the response message.

A very simplified example of a HATEOAS response:

{
  "stocklist": {
    "name": "ACME",
    "price": "10.00",
    "link": [
      {
        "rel": "self",
        "href": "/stock/ACME",
        "method": "get"
      },
      {
        "rel": "buy",
        "href": "/account/ACME/buy",
        "method": "post"
      },
      {
        "rel": "sell",
        "href": "/account/ACME/sell",
        "method": "post"
      }
    ]
  }
}

In our simple example response we request the stock information for ACME. We get the price back, but in addition the links to either buy or sell the stock.
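To make the "engine" part concrete: instead of constructing URLs itself, a client that wants to buy the stock simply follows the buy link from the response. A hypothetical next request (the host and payload are purely illustrative and not part of any real API) could look like this:

$ curl -X POST "http://api.example.com/account/ACME/buy" \
       -H "Content-Type: application/json" \
       -d '{"amount": 10}'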

So when we add links to our responses, does that mean we are now truly RESTful?

Again, Roy Fielding seems pretty clear about this:

“If the engine of application state (and hence the API) is not being driven by hypertext, then it cannot be RESTful and cannot be a REST API. Period. Is there some broken manual somewhere that needs to be fixed?”-Roy Fielding
REST APIs must be hypertext-driven
Untangled: Musings of Roy T. Fielding

But, are we truly RESTful if we include hyperlinks in our responses?

The thing is, people do not use REST APIs. People use apps and sites, and those apps and sites use REST APIs. So what does this mean? It means that if the app or site developer chooses not to use the HATEOAS links in the response, the end user cannot transition state using hypermedia, and ergo we are not truly HATEOAS compliant and thus not RESTful.

So, while adding the links to the responses is an important step towards HATEOAS and RESTful compliance, it is only the first step.

HATEOAS is one of the more misunderstood and forgotten REST constraints. I hope this blog post helps you better grasp it.

 