
Tag Archives: fabric8

Camel CDI app in Fabric8 via Maven

Recently I spent some time experimenting with the Fabric8 microservices framework. While it is way too comprehensive to cover in a single blog post, I wanted to focus on deploying an app/microservice to a Fabric8 cluster.

For this post I am using a local fabric8 installation based on minikube; for more information see: http://fabric8.io/guide/getStarted/gofabric8.html

For details on how to set up your fabric8 environment using minikube, you can read the excellent blog post by my colleague Dirk Janssen here.

Also, since I am doing quite a lot of work recently with Apache Camel using the CDI framework, the decision to deploy a Camel CDI based microservice was quickly made 🙂

So in this blog I will outline how to create a basic Camel/CDI based microservice using Maven and deploy it on a Kubernetes cluster using a fabric8 CD pipeline.

Generating the microservice

I used a Maven archetype to quickly bootstrap the microservice. I used the Eclipse IDE to generate the project, but you can of course use the Maven CLI as well.

archetype-selector

Fabric8 provides lots of different archetypes and quickstarts out of the box for various programming languages and, for Java especially, various frameworks. Here I am going with the cdi-camel-jetty-archetype. This archetype generates a Camel route exposed with a Jetty endpoint, wired together using CDI.
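For reference, generating the same project from the Maven CLI could look something like this (a sketch: the archetype groupId and the version placeholder are assumptions; the project coordinates match the build output below):

mvn archetype:generate \
  -DarchetypeGroupId=io.fabric8.archetypes \
  -DarchetypeArtifactId=cdi-camel-jetty-archetype \
  -DarchetypeVersion=<archetype-version> \
  -DgroupId=nl.rubix.eos \
  -DartifactId=cdi-jetty-demo \
  -Dversion=0.0.1-SNAPSHOT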

 

Building and running locally

As with any Maven based application one of the first things to do after the project is created is executing:

$ mvn clean install

However initially the result was:


~/workspace/cdi-jetty-demo$ mvn install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- fabric8-maven-plugin:3.2.8:resource (default) @ cdi-jetty-demo ---
[INFO] F8: Running in Kubernetes mode
[INFO] F8: Using resource templates from /home/pim/workspace/cdi-jetty-demo/src/main/fabric8
2016-12-28 11:54:14 INFO  Version:30 - HV000001: Hibernate Validator 5.2.4.Final
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8: fmp-git: No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.612 s
[INFO] Finished at: 2016-12-28T11:54:14+01:00
[INFO] Final Memory: 37M/533M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal io.fabric8:fabric8-maven-plugin:3.2.8:resource (default) on project cdi-jetty-demo: Execution default of goal io.fabric8:fabric8-maven-plugin:3.2.8:resource failed: Container cdi-jetty-demo has no Docker image configured. Please check your Docker image configuration (including the generators which are supposed to run) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException

Since the archetype did not provide a template in cdi-jetty-demo/src/main/fabric8, but the plugin was still looking for one there, an error was thrown. After some looking around I managed to solve the issue by using a different version of the fabric8-maven-plugin.

Specifically:

<fabric8.maven.plugin.version>3.1.49</fabric8.maven.plugin.version>
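This property lives in the <properties> section of the generated pom; a minimal sketch of how it ties into the plugin declaration (assuming the archetype wires the plugin version through this property):

<properties>
  <!-- pin the plugin to a version whose generator matches the archetype layout -->
  <fabric8.maven.plugin.version>3.1.49</fabric8.maven.plugin.version>
</properties>

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>${fabric8.maven.plugin.version}</version>
</plugin>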

With the plugin pinned to this version, the clean install executed successfully.


~/workspace/cdi-jetty-demo$ mvn clean install
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ cdi-jetty-demo ---
[INFO] Deleting /home/pim/workspace/cdi-jetty-demo/target
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:resource (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Using resource templates from /home/pim/workspace/cdi-jetty-demo/src/main/fabric8
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[INFO]
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ cdi-jetty-demo ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/pim/workspace/cdi-jetty-demo/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ cdi-jetty-demo ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/pim/workspace/cdi-jetty-demo/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ cdi-jetty-demo ---
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ cdi-jetty-demo ---
[INFO] Building jar: /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar
[INFO]
[INFO] >>> fabric8-maven-plugin:3.1.49:build (default) > initialize @ cdi-jetty-demo >>>
[INFO]
[INFO] <<< fabric8-maven-plugin:3.1.49:build (default) < initialize @ cdi-jetty-demo <<<
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:build (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Environment variable from gofabric8 : DOCKER_HOST=tcp://192.168.42.40:2376
[INFO] F8> Environment variable from gofabric8 : DOCKER_CERT_PATH=/home/pim/.minikube/certs
[INFO] F8> Pulling from fabric8/java-alpine-openjdk8-jdk
117f30b7ae3d: Already exists
f1011be98339: Pull complete
dae2abad9134: Pull complete
d1ea5cd75444: Pull complete
cec1e2f1c0f2: Pull complete
a9ec98a3bcba: Pull complete
38bbb9125eaa: Pull complete
66f50c07037b: Pull complete
868f8ddb8412: Pull complete
[INFO] F8> Digest: sha256:572ec2fdc9ac33bb1a8a5ee96c17eae9b7797666ed038c08b5c3583f98c1277f
[INFO] F8> Status: Downloaded newer image for fabric8/java-alpine-openjdk8-jdk:1.1.11
[INFO] F8> Pulled fabric8/java-alpine-openjdk8-jdk:1.1.11 in 47 seconds
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient/4.3.3/httpclient-4.3.3.pom (6 KB at 30.9 KB/sec)
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpmime/4.3.3/httpmime-4.3.3.pom (5 KB at 81.7 KB/sec)
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/httpclient-cache/4.3.3/httpclient-cache-4.3.3.pom (7 KB at 102.3 KB/sec)
Downloading: https://repo.fusesource.com/nexus/content/groups/public/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom
Downloading: https://maven.repository.redhat.com/ga/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom
Downloading: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom
Downloaded: https://repo.maven.apache.org/maven2/org/apache/httpcomponents/fluent-hc/4.3.3/fluent-hc-4.3.3.pom (5 KB at 80.4 KB/sec)
[INFO] Copying files to /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-115614-0167/build/maven
[INFO] Building tar: /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-115614-0167/tmp/docker-build.tar
[INFO] F8> docker-build.tar: Created [eos/cdi-jetty-demo:snapshot-161228-115614-0167] "java-exec" in 8 seconds
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-115614-0167] "java-exec": Built image sha256:94fad
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-115614-0167] "java-exec": Tag with latest
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ cdi-jetty-demo ---
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.jar
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/pom.xml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.pom
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.json
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.json
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:03 min
[INFO] Finished at: 2016-12-28T11:57:15+01:00
[INFO] Final Memory: 61M/873M
[INFO] ------------------------------------------------------------------------

After the project built successfully I wanted to run it in my Kubernetes cluster.

Thankfully the guys at fabric8 also took this into consideration: by executing the Maven goal fabric8:run, your app will be booted into a Docker container and launched as a Pod in the Kubernetes cluster, just for some local testing.


~/workspace/cdi-jetty-demo$ mvn fabric8:run
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> fabric8-maven-plugin:3.1.49:run (default-cli) > install @ cdi-jetty-demo >>>
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 2 resources
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:resource (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Using resource templates from /home/pim/workspace/cdi-jetty-demo/src/main/fabric8
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[WARNING] F8> No .git/config file could be found so cannot annotate kubernetes resources with git commit SHA and branch
[INFO]
[INFO] --- maven-compiler-plugin:3.3:compile (default-compile) @ cdi-jetty-demo ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ cdi-jetty-demo ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.3:testCompile (default-testCompile) @ cdi-jetty-demo ---
[INFO] Nothing to compile - all classes are up to date
[INFO]
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ cdi-jetty-demo ---
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ cdi-jetty-demo ---
[INFO] Building jar: /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar
[INFO]
[INFO] >>> fabric8-maven-plugin:3.1.49:build (default) > initialize @ cdi-jetty-demo >>>
[INFO]
[INFO] <<< fabric8-maven-plugin:3.1.49:build (default) < initialize @ cdi-jetty-demo <<<
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:build (default) @ cdi-jetty-demo ---
[INFO] F8> Running in Kubernetes mode
[INFO] F8> Running generator java-exec
[INFO] F8> Environment variable from gofabric8 : DOCKER_HOST=tcp://192.168.42.40:2376
[INFO] F8> Environment variable from gofabric8 : DOCKER_CERT_PATH=/home/pim/.minikube/certs
[INFO] Copying files to /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-122841-0217/build/maven
[INFO] Building tar: /home/pim/workspace/cdi-jetty-demo/target/docker/eos/cdi-jetty-demo/snapshot-161228-122841-0217/tmp/docker-build.tar
[INFO] F8> docker-build.tar: Created [eos/cdi-jetty-demo:snapshot-161228-122841-0217] "java-exec" in 5 seconds
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-122841-0217] "java-exec": Built image sha256:bccd3
[INFO] F8> [eos/cdi-jetty-demo:snapshot-161228-122841-0217] "java-exec": Tag with latest
[INFO]
[INFO] --- maven-install-plugin:2.4:install (default-install) @ cdi-jetty-demo ---
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/cdi-jetty-demo.jar to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.jar
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/pom.xml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT.pom
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/openshift.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-openshift.json
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.yml to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.yml
[INFO] Installing /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.json to /home/pim/.m2/repository/nl/rubix/eos/cdi-jetty-demo/0.0.1-SNAPSHOT/cdi-jetty-demo-0.0.1-SNAPSHOT-kubernetes.json
[INFO]
[INFO] <<< fabric8-maven-plugin:3.1.49:run (default-cli) < install @ cdi-jetty-demo <<<
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:run (default-cli) @ cdi-jetty-demo ---
[INFO] F8> Using Kubernetes at https://192.168.42.40:8443/ in namespace default with manifest /home/pim/workspace/cdi-jetty-demo/target/classes/META-INF/fabric8/kubernetes.yml
[INFO] Using namespace: default
[INFO] Creating a Service from kubernetes.yml namespace default name cdi-jetty-demo
[INFO] Created Service: target/fabric8/applyJson/default/service-cdi-jetty-demo.json
[INFO] Creating a Deployment from kubernetes.yml namespace default name cdi-jetty-demo
[INFO] Created Deployment: target/fabric8/applyJson/default/deployment-cdi-jetty-demo.json
[INFO] hint> Use the command `kubectl get pods -w` to watch your pods start up
[INFO] F8> Watching pods with selector LabelSelector(matchExpressions=[], matchLabels={project=cdi-jetty-demo, provider=fabric8, group=nl.rubix.eos}, additionalProperties={}) waiting for a running pod...
[INFO] New Pod> cdi-jetty-demo-1244207563-s9xn2 status: Pending
[INFO] New Pod> cdi-jetty-demo-1244207563-s9xn2 status: Running
[INFO] New Pod> Tailing log of pod: cdi-jetty-demo-1244207563-s9xn2
[INFO] New Pod> Press Ctrl-C to scale down the app and stop tailing the log
[INFO] New Pod>
[INFO] Pod> exec java -javaagent:/opt/agent-bond/agent-bond.jar=jolokia{{host=0.0.0.0}},jmx_exporter{{9779:/opt/agent-bond/jmx_exporter_config.yml}} -cp .:/app/* org.apache.camel.cdi.Main
[INFO] Pod> I> No access restrictor found, access to any MBean is allowed
[INFO] Pod> Jolokia: Agent started with URL http://172.17.0.6:8778/jolokia/
[INFO] Pod> 2016-12-28 11:28:52.953:INFO:ifasjipjsoejs.Server:jetty-8.y.z-SNAPSHOT
[INFO] Pod> 2016-12-28 11:28:53.001:INFO:ifasjipjsoejs.AbstractConnector:Started SelectChannelConnector@0.0.0.0:9779
[INFO] Pod> 2016-12-28 11:28:53,142 [main           ] INFO  Version                        - WELD-000900: 2.3.3 (Final)
[INFO] Pod> 2016-12-28 11:28:53,412 [main           ] INFO  Bootstrap                      - WELD-000101: Transactional services not available. Injection of @Inject UserTransaction not available. Transactional observers will be invoked synchronously.
[INFO] Pod> 2016-12-28 11:28:53,605 [main           ] INFO  Event                          - WELD-000411: Observer method [BackedAnnotatedMethod] private org.apache.camel.cdi.CdiCamelExtension.processAnnotatedType(@Observes ProcessAnnotatedType<?>) receives events for all annotated types. Consider restricting events using @WithAnnotations or a generic type with bounds.
[INFO] Pod> 2016-12-28 11:28:54,809 [main           ] INFO  DefaultTypeConverter           - Loaded 196 type converters
[INFO] Pod> 2016-12-28 11:28:54,861 [main           ] INFO  CdiCamelExtension              - Camel CDI is starting Camel context [myJettyCamel]
[INFO] Pod> 2016-12-28 11:28:54,863 [main           ] INFO  DefaultCamelContext            - Apache Camel 2.18.1 (CamelContext: myJettyCamel) is starting
[INFO] Pod> 2016-12-28 11:28:54,864 [main           ] INFO  ManagedManagementStrategy      - JMX is enabled
[INFO] Pod> 2016-12-28 11:28:55,030 [main           ] INFO  DefaultRuntimeEndpointRegistry - Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
[INFO] Pod> 2016-12-28 11:28:55,081 [main           ] INFO  DefaultCamelContext            - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
[INFO] Pod> 2016-12-28 11:28:55,120 [main           ] INFO  log                            - Logging initialized @2850ms
[INFO] Pod> 2016-12-28 11:28:55,170 [main           ] INFO  Server                         - jetty-9.2.19.v20160908
[INFO] Pod> 2016-12-28 11:28:55,199 [main           ] INFO  ContextHandler                 - Started o.e.j.s.ServletContextHandler@28cb9120{/,null,AVAILABLE}
[INFO] Pod> 2016-12-28 11:28:55,206 [main           ] INFO  ServerConnector                - Started ServerConnector@2da1c45f{HTTP/1.1}{0.0.0.0:8080}
[INFO] Pod> 2016-12-28 11:28:55,206 [main           ] INFO  Server                         - Started @2936ms
[INFO] Pod> 2016-12-28 11:28:55,225 [main           ] INFO  DefaultCamelContext            - Route: route1 started and consuming from: jetty:http://0.0.0.0:8080/camel/hello
[INFO] Pod> 2016-12-28 11:28:55,226 [main           ] INFO  DefaultCamelContext            - Total 1 routes, of which 1 are started.
[INFO] Pod> 2016-12-28 11:28:55,228 [main           ] INFO  DefaultCamelContext            - Apache Camel 2.18.1 (CamelContext: myJettyCamel) started in 0.363 seconds
[INFO] Pod> 2016-12-28 11:28:55,238 [main           ] INFO  Bootstrap                      - WELD-ENV-002003: Weld SE container STATIC_INSTANCE initialized
^C[INFO] F8> Stopping the app:
[INFO] F8> Scaling Deployment default/cdi-jetty-demo to replicas: 0
[INFO] Pod> 2016-12-28 11:29:52,307 [Thread-12      ] INFO  MainSupport$HangupInterceptor  - Received hang up - stopping the main instance.
[INFO] Pod> 2016-12-28 11:29:52,313 [Thread-12      ] INFO  CamelContextProducer           - Camel CDI is stopping Camel context [myJettyCamel]
[INFO] Pod> 2016-12-28 11:29:52,313 [Thread-12      ] INFO  DefaultCamelContext            - Apache Camel 2.18.1 (CamelContext: myJettyCamel) is shutting down
[INFO] Pod> 2016-12-28 11:29:52,314 [Thread-12      ] INFO  DefaultShutdownStrategy        - Starting to graceful shutdown 1 routes (timeout 300 seconds)
[INFO] Pod> 2016-12-28 11:29:52,347 [ - ShutdownTask] INFO  ServerConnector                - Stopped ServerConnector@2da1c45f{HTTP/1.1}{0.0.0.0:8080}
[INFO] Pod> 2016-12-28 11:29:52,350 [ - ShutdownTask] INFO  ContextHandler                 - Stopped o.e.j.s.ServletContextHandler@28cb9120{/,null,UNAVAILABLE}
[INFO] Pod> 2016-12-28 11:29:52,353 [ - ShutdownTask] INFO  DefaultShutdownStrategy        - Route: route1 shutdown complete, was consuming from: jetty:http://0.0.0.0:8080/camel/hello

There are two things to note:

First, the app is not exposed outside of the Kubernetes cluster by default. I will explain how to change this later on in this blog, after we have completely deployed the app in our cluster.

Second, the fabric8:run goal starts the container and tails the log in the foreground; whenever you stop the app by hitting Ctrl+C, it is automatically undeployed from the cluster. To tweak this behavior check: https://maven.fabric8.io

Deploying our microservice in the fabric8 cluster

After running and testing our microservice locally we are ready to fully deploy it in our cluster. For this we again start with a Maven command: mvn fabric8:import will import the application (templates) into the Kubernetes cluster and push the sources of the microservice to the fabric8 Git repository, which is based on Gogs.


~/workspace/cdi-jetty-demo$ mvn fabric8:import
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Fabric8 :: Quickstarts :: CDI :: Camel with Jetty as HTTP server 0.0.1-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- fabric8-maven-plugin:3.1.49:import (default-cli) @ cdi-jetty-demo ---
[INFO] F8> Running 1 endpoints of gogs in namespace default
[INFO] F8> Creating Namespace user-secrets-source-minikube with labels: {provider=fabric8, kind=secrets}
Please enter your username for git repo gogs: gogsadmin
Please enter your password/access token for git repo gogs: RedHat$1
[INFO] F8> Creating Secret user-secrets-source-minikube/default-gogs-git
[INFO] F8> Creating ConfigMap fabric8-git-app-secrets in namespace user-secrets-source-minikube
[INFO] F8> git username: gogsadmin password: ******** email: pim@rubix.nl
[INFO] Trusting all SSL certificates
[INFO] Initialised an empty git configuration repo at /home/pim/workspace/cdi-jetty-demo
[INFO] creating git repository client at: http://192.168.42.40:30616
[INFO] Using remoteUrl: http://192.168.42.40:30616/gogsadmin/cdi-jetty-demo.git and remote name origin
[INFO] About to git commit and push to: http://192.168.42.40:30616/gogsadmin/cdi-jetty-demo.git and remote name origin
[INFO] Using UsernamePasswordCredentialsProvider{user: gogsadmin, password length: 8}
[INFO] Creating a BuildConfig for namespace: default project: cdi-jetty-demo
[INFO] F8> You can view the project dashboard at: http://192.168.42.40:30482/workspaces/default/projects/cdi-jetty-demo
[INFO] F8> To configure a CD Pipeline go to: http://192.168.42.40:30482/workspaces/default/projects/cdi-jetty-demo/forge/command/devops-edit
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 19.643 s
[INFO] Finished at: 2016-12-28T12:30:27+01:00
[INFO] Final Memory: 26M/494M
[INFO] ------------------------------------------------------------------------

After the successful execution of the import goal we need to finish the setup in the fabric8 console.

First we need to setup a Kubernetes secret to authenticate to the Gogs Git repository.

select-secret

Next we can select a CD pipeline.

select-pipeline

After we have selected a CD pipeline suited for the lifecycle of our microservice, the Jenkinsfile will be committed to the Git repository of our microservice. After that, Jenkins will be configured and the Jenkins CD pipeline will start its initial job.

build-success

If everything goes fine the entire Jenkins pipeline will finish successfully.

finished-pipeline

Now our microservice is running in the test and staging environments, which are created if they did not already exist. However, as mentioned above, the microservice is not yet exposed outside of the Kubernetes cluster. This is because the Kubernetes template used has the following setting in the service configuration:

type: ClusterIP

Change this to the following, and your app will be exposed outside of the Kubernetes cluster, ready to be consumed by external parties:

type: NodePort
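For reference, a minimal service fragment with this change could look like the sketch below (assuming the fabric8-maven-plugin resource fragment convention of src/main/fabric8/svc.yml; the port follows the Jetty endpoint of the quickstart):

# src/main/fabric8/svc.yml (illustrative fragment)
apiVersion: v1
kind: Service
metadata:
  name: cdi-jetty-demo
spec:
  type: NodePort        # was ClusterIP; NodePort exposes the service on every cluster node
  ports:
  - port: 8080          # the Jetty endpoint of the quickstart
    targetPort: 8080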

This means the microservice can be called via a browser:

app

If everything went fine the fabric8 dashboard for the app will look something like this:

app-dashboard

As I mentioned above, the selection of the CD pipeline adds the Jenkinsfile to the Git repository; you can see it (and of course edit it if you wish) in the Gogs Git repo:

gogs-jenkins-file

Some final thoughts

Being able to quickly deploy functionality across different environments without having to worry about runtime config, app servers, and manually hacking CD pipelines is definitely a major advantage for every app developer. For this reason the fabric8 framework in combination with Kubernetes has some real potential. Using fabric8 locally with minikube did cause some stability issues, which I have not fully identified yet, but I'm sure this will improve with every new version coming up.

 
Posted on 2016-12-28 in API Management, Uncategorized

JBoss Fuse Fabric port ranges… oh my…

Recently I had an opportunity at a customer to sort out some of the interconnectivity required between servers for a JBoss Fuse Fabric HA ensemble. Obviously, when connecting multiple servers in a Fuse Fabric ensemble, interconnectivity between those servers is required. Unfortunately the port (ranges) required by JBoss Fuse Fabric are not all documented, and the ones that are documented are spread out between different documents and sections. So it's not a bulletproof list down to the digit, but with some trial and error (and some netstat 🙂) we discovered quite some port ranges.

ActiveMQ

Starting with one of the better known ports: the ActiveMQ message broker defaults to 61616 (although this can be overridden in the broker XML configuration file). When creating brokers based on fabric profiles and assigning these to child containers, the ports get increased. We decided to open a range of 61600 – 61700 for the ActiveMQ brokers. (When you add more than one hundred brokers you should probably make some additional changes to the broker XML configuration file anyway; there you can control the port ranges to accommodate your specific needs.)
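For reference, the broker port is defined by a transportConnector in the broker XML configuration; a minimal sketch:

<transportConnectors>
  <!-- adjust the port here to place brokers in your chosen range -->
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
</transportConnectors>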

SSH

The containers in JBoss Fuse Fabric have an Apache Karaf runtime, and Apache Karaf acts as its own SSH server, which is useful for managing these containers. However, the Apache Karaf containers in Fuse do not host their SSH servers on port 22 (which is usually reserved for SSH access to the server itself), but on a port range starting at 8100 and increasing with each child container created in the Fabric. So a range of 8100 – 8180 (increasing further would cause conflicts with Jolokia, which we will discuss below) gives the option of creating 80 child containers per server!

RMI

Now it gets a little bit more dicey: for RMI, two separate ranges have to be opened, one for the RMI registry and one for the RMI server. We noticed the RMI server starts at port 44444, but when creating some child containers the ports got a bit stranger. They were increasing as expected, and in the URL they neatly increase by one: a child container gets 44445 (as the root container starts with 44444) and so on. But a netstat -plnt command reveals a lot more open ports much higher up the range.
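For example, the listening ports (and the Java processes owning them) can be inspected with:

# run as root so the owning process is shown for every socket
netstat -plnt | grep java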

In the documentation for the fabric:create command in the console reference[1] the default max port is documented as 65535. This seems to correspond with the RMI server port, since a netstat revealed assigned ports in the 50000s for the child containers we created.

For the RMI registry port the documentation unfortunately does not provide much clarity. The default JMX RMI port is 1099, and this does get used by Fuse Fabric, usually by the root container of that node. However, each child container has its own RMI registry port. These registry ports increase by one, so we decided to open the range up to 1400, giving ample room for creating and deleting child containers.

The strange thing is that the RMI server range also overlaps with the ActiveMQ range. Since an internal port conflict is not something I have experienced yet, I assume Fuse Fabric has the logic to accommodate for the overlapping ranges.

Controlling the RMI ports could be done with the commands admin:change-rmi-registry-port and admin:change-rmi-server-port; however, I did not test these in a Fabric ensemble setup!

https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Console_Reference/ConsoleadminChangeRegistryPort.html

https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Console_Reference/ConsoleadminChangeServerPort.html
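A sketch of what changing these ports would look like from the Karaf console (untested in a Fabric ensemble, as noted; 'root' is an instance name and the port values are illustrative):

admin:change-rmi-registry-port root 1100
admin:change-rmi-server-port root 44445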

Fuse Management Console + Jolokia

The default port for the Fuse Management Console is 8181, which is also the port Jolokia uses. For each child container this port gets increased by one. Since we ended our range for the RMI registry at 1400, we will end this range at 8582 for consistency. Although, when creating 400 child containers on one machine, you'll have one hell of a server running all these Fabric containers.

Zookeeper

JBoss Fuse Fabric uses Zookeeper as a registry for all sorts of things (for example auto discovery). The default port the Zookeeper registry listens on is 2181, but when creating an ensemble the Zookeeper URL gets updated and the port will change, usually increasing by one. Although it is a bit unclear how this port change actually works under the hood, to be on the safe side we went with a port range of 2181 up to 2200.

Fabric Server

The last port ranges are those of the Fabric server. I found these when I ran fabric:config-list and came across this entry:

Properties:
   service.pid = io.fabric8.zookeeper.server.2d360b3a-18bb-4a68-8c94-dccd25e9a2b1
   clientPortAddress = 0.0.0.0
   syncLimit = 5
   server.id = 1
   initLimit = 10
   service.factoryPid = io.fabric8.zookeeper.server
   clientPort = 2182
   server.1 = 10.0.2.15:2888:3888
   dataDir = data/zookeeper/0001
   tickTime = 2000
   fabric.zookeeper.pid = io.fabric8.zookeeper.server-0001

Here we see ports 2888 and 3888 used by the Fabric server in the server.1 entry (the host:peerPort:electionPort format of a Zookeeper ensemble member).

 

Some final thoughts

Getting into the interconnectivity between servers in a JBoss Fuse Fabric ensemble is definitely not trivial. Especially the port ranges of Fuse Fabric are quite esoteric. Hopefully this will get documented soon or become a lot clearer when Fabric V2 arrives.

Anyway all the final port ranges we discovered are listed here:

Usage                                 Port range
ActiveMQ                              61600 – 61700
RMI registry                          1000 – 1400
RMI server                            44444 – 65535
SSH                                   8100 – 8180
Fuse Management Console + Jolokia     8181 – 8582
Zookeeper                             2181 – 2200
Fabric server                         2888 – 3888

 

[1] https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Console_Reference/ConsoleFabricCreate.html

 
Posted on 2015-04-26 in JBoss Fuse

Fabric load balancer for CXF ‚Äď Part 2

In part 1 we explored the possibility of using Fabric as a load balancer for multiple CXF HTTP endpoints and had a more detailed look at how to implement the Fabric load balancing feature on a CXFRS service. In this part we will look into the client side of the load balancer feature.

In order to leverage the Fabric load balancer from the client side, clients have to be "inside the Fabric", meaning they will need access to the Zookeeper registry.

When we again take a look at a diagram explaining the load balancer feature we can see clients will need to perform a lookup in the Zookeeper registry to obtain the actual endpoints of the http service.

Fabric load balancer CXF Part 1 - 1

As mentioned in part 1, there are two possibilities to accomplish this lookup in the Fabric Zookeeper registry: the fabric-http-gateway profile and a Camel proxy route. As of this moment the fabric-http-gateway profile is not fully supported yet in JBoss Fuse 6.1, so in this post we will explore the Camel proxy route.

Camel route

A Camel proxy is essentially a Camel route which provides a protocol bridge between "regular" HTTP and Fabric load balancing by looking up endpoints in the Fabric (Zookeeper) registry. The Camel route itself can be very straightforward; an example Camel route implementing this proxy can look like:

<camelContext id="cxfProxyContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="cxfProxyRoute">
        <from uri="cxfrs:bean:proxyEndpoint"/>
        <log message="got proxy request"/>
        <to uri="cxfrs:bean:clientEndpoint"/>
    </route>
</camelContext>

The Fabric load balancer will be implemented on the cxfrs:bean in the Camel route.

Note the Camel route also exposes a CXFRS endpoint in the <from> endpoint; therefore a regular CXFRS service class and endpoint will have to be created.

The service class is the same one we used in part 1 for the service itself:


package nl.rubix.cxf.fabric.ha.test;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
public class Endpoint {
    
    @GET
    @Path("/test/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getAssets(@PathParam("id") String id){
        return null;
    }
    
}

The endpoint we will expose our proxy on is configured like this:

<cxf:rsServer id="proxyEndpoint" address="http://localhost:1234/proxy" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint"/>

Implementing the Fabric load balancer

To implement the load balancer on the client side, the pom has to be updated just as on the server side. Next to the regular cxfrs dependencies, add the fabric-cxf dependency to the pom:

<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric-cxf</artifactId>
    <version>1.0.0.redhat-379</version>
    <type>bundle</type>
</dependency>

Also add the fabric8 cxf package to the Import-Package instruction of the maven-bundle-plugin so it is added to the OSGi manifest file:

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>2.3.7</version>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Bundle-SymbolicName>cxf-fabric-proxy-test</Bundle-SymbolicName>
            <Private-Package>nl.rubix.cxf.fabric.proxy.test.*</Private-Package>
            <Import-Package>*,io.fabric8.cxf</Import-Package>
        </instructions>
    </configuration>
</plugin>

The next step is to add the load balancer features to the Blueprint.xml (or Spring if you prefer Spring). These steps are similar to the configuration of the Fabric Load balancer on the Server side discussed in part 1.

So first add the cxf-core namespace to the Blueprint.xml.

xmlns:cxf-core="http://cxf.apache.org/blueprint/core"

Add an OSGi reference to the CuratorFramework OSGi service[1]:


<reference id="curator" interface="org.apache.curator.framework.CuratorFramework" />

Next, instantiate the FabricLoadBalancerFeature bean:

<bean id="fabricLoadBalancerFeature" class="io.fabric8.cxf.FabricLoadBalancerFeature">
    <property name="curator" ref="curator" />
    <property name="fabricPath" value="cxf/endpoints" />
</bean>

It is important to note that the value of the "fabricPath" property must exactly match the fabricPath of the service the client will invoke. This fabricPath points to the location in the Fabric Zookeeper registry where the endpoints are stored; it is the coupling between the client and the service.

To register the Fabric load balancer with the cxfrs:bean used in the <to> endpoint (we are implementing a client, so the load balancer must be registered with the cxfrs bean on the client side), extend the declaration of the cxf:rsClient by adding the feature to its cxf:features list:

<cxf:rsClient id="clientEndpoint" address="http://dummy/url" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint">
    <cxf:features>
        <ref component-id="fabricLoadBalancerFeature"/>
    </cxf:features>
</cxf:rsClient>

The entire Blueprint.xml for this proxy looks like this:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:camel="http://camel.apache.org/schema/blueprint"
       xmlns:cxf="http://camel.apache.org/schema/blueprint/cxf"
       xmlns:cxf-core="http://cxf.apache.org/blueprint/core"
       xsi:schemaLocation="
       http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
       http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">

  <cxf:rsServer id="proxyEndpoint" address="http://localhost:1234/proxy" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint"/>
  <cxf:rsClient id="clientEndpoint" address="http://dummy/url" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint">
    <cxf:features>
        <ref component-id="fabricLoadBalancerFeature"/>
      </cxf:features>
  </cxf:rsClient> 
  <reference id="curator" interface="org.apache.curator.framework.CuratorFramework" />

    <bean id="fabricLoadBalancerFeature" class="io.fabric8.cxf.FabricLoadBalancerFeature">
        <property name="curator" ref="curator" />
        <property name="fabricPath" value="cxf/endpoints" />
    </bean>
  
  <camelContext id="cxfProxyContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="cxfProxyRoute">
      <from uri="cxfrs:bean:proxyEndpoint"/>
      <log message="got proxy request"/>
      <to uri="cxfrs:bean:clientEndpoint"/>
    </route>
  </camelContext>

</blueprint>

This Camel proxy will proxy requests from "http://localhost:1234/proxy" to the two instances of the CXFRS service we created in part 1, running at "http://localhost:2345/testendpoint".

When we deploy this Camel proxy in a Fabric profile and run it in a Fabric container we can test the proxy:

Fabric load balancer CXF Part 2 - 2

The response comes from the service created in part 1, but now it is accessible through our proxy endpoint.
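From the command line the same test could look like this (the path follows the @Path annotations of the service class above; the id value 123 is arbitrary):

curl http://localhost:1234/proxy/test/123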

Abstracting endpoints like this makes sense in larger Fabric environments leveraging multiple machines hosting the Fabric containers. Clients no longer need to know the IP and port of the services they call, so scaling and migrating the services to other machines becomes much easier. For instance, adding another instance of a CXFRS service in a container running on another machine, or even in the cloud, no longer requires the clients' endpoints to be updated.

 

For more information about CXF load balancing using Fabric:

https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Apache_CXF_Development_Guide/FabricHA.html

[1] The CuratorFramework service is for communicating with the Apache Zookeeper registry used by Fabric.

 
Posted on 2015-03-16 in JBoss Fuse

Fuse Fabric MQ provision exception

When using the discovery endpoint to connect to a master/slave ActiveMQ cluster I ran into an error.
It is worth noting that this exception is not caused by the discovery mechanism; I happened to find the issue when using the discovery URL, but hard-wiring the connector (e.g. using tcp://localhost:61616) will also cause it. The issue has to do with the container setup and profiling, which I will explain below.

"Provision Exception:

org.osgi.service.resolver.ResolutionException: Unable to resolve dummy/0.0.0: missing requirement [dummy/0.0.0] osgi.identity; osgi.identity=fabricdemo; type=osgi.bundle; version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]" [caused by: Unable to resolve fabricdemo/1.0.0.SNAPSHOT: missing requirement [fabricdemo/1.0.0.SNAPSHOT] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.apache.activemq.camel.component)(version>=5.9.0)(!(version>=6.0.0)))"]"

Fabric-MQ-provision-error-1

This is caused by the Fuse container setup, where the brokers run on two separate containers in a master/slave construction. The container running the deployed Camel route is also running the JBoss Fuse minimal profile.

Fabric-MQ-provision-error-2

The connection to the broker cluster is setup in the Blueprint context in the following manner:

<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
	<property name="brokerURL" value="discovery:(fabric:default)" />
	<property name="userName" value="admin" />
	<property name="password" value="admin" />
</bean>
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
	<property name="maxConnections" value="1" />
	<property name="maximumActiveSessionPerConnection" value="500" />
	<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
	<property name="connectionFactory" ref="pooledConnectionFactory" />
	<property name="concurrentConsumers" value="10" />
</bean>
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
	<property name="configuration" ref="jmsConfig"/>
</bean>

When starting the container a provision exception is thrown, specifically: missing requirement [fabricdemo/1.0.0.SNAPSHOT] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.apache.activemq.camel.component)(version>=5.9.0)(!(version>=6.0.0)))"

Again, it is not just caused by the discovery brokerURL.

The exception is thrown because the JBoss Fuse minimal profile does not provide the ActiveMQ component. When adding the dependencies for this component to your project the exception still persists; the reason is that the component is not exported by default in the OSGi bundle, meaning other bundles cannot use it. Exporting the class implementing the component will solve this issue.
To export the ActiveMQ component class we need to extend the Apache Felix maven-bundle-plugin and tell it to export the ActiveMQ component, which can be done by adding the following line to the configuration:

<Export-Package>org.apache.activemq.camel.component</Export-Package>

The entire plugin now looks like this:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<version>2.3.7</version>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>fabricdemo</Bundle-SymbolicName>
			<Private-Package>nl.rubix.camel-activemq.*</Private-Package>
			<Import-Package>*</Import-Package>
			<Export-Package>org.apache.activemq.camel.component</Export-Package>
		</instructions>
	</configuration>
</plugin>

As a side note: to use the discovery broker URL the mq-fabric feature must be added to the profile. For this demo I used the fabric8 Maven plugin, which I configured as follows:

<plugin>
	<groupId>io.fabric8</groupId>
	<artifactId>fabric8-maven-plugin</artifactId>
	<configuration>
		<profile>activemqtest</profile>
		<features>mq-fabric</features>
	</configuration>
</plugin>
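Alternatively, the feature can be added to an existing profile from the Karaf console; a sketch using the profile name from the plugin configuration above (verify the exact syntax against your Fuse version):

fabric:profile-edit --features mq-fabric activemqtest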

After the modification to the pom.xml the container will start properly:

Fabric-MQ-provision-error-3

Anyway I hope this helps someone.

 
Posted on 2014-12-24 in JBoss Fuse

Deploying a JBoss Fuse project into a fabric8 container with Maven

One of the great features of JBoss Fuse and the underlying Apache Camel is that it can be deployed virtually anywhere. However, one of the runtime possibilities that comes out of the box with JBoss Fuse is Fuse Fabric, which uses fabric8 containers as a runtime.

In fabric8 the deployable unit is a fabric profile. In this post we will create a fabric profile containing a JBoss Fuse project. Then we will create and start up a fabric container and provision it first with the JBoss Fuse runtime and finally with our fabric profile containing a Camel route. We will use Maven for most of the build and deployment steps.

Note this post will not go into detail on how to set up your fabric8 environment and assumes you already have a running fabric with a running root container.

As mentioned above we are using the Fuse Spring example, which you get for free after creating a Fuse project using the Spring archetype for Fuse. The only thing I changed in the Camel route of this example is the location of the directories used for reading and writing the files: it no longer uses a directory in the project but just some location on my file system for quick testing.

Here is the example Camel route:

deploying-into-fabric8-1

We are going to deploy the project containing this route as a fabric profile. This fabric profile can then be added to a fabric container which, in this case, acts as a Karaf runtime for Fuse. We are going to use Maven for all the build and deploy steps.

To deploy our project into a container we need to walk through the following steps:

  1. Update the pom.xml file
  2. Do a Maven install
  3. Deploy your project using Maven
  4. Create a fabric container
  5. Add the fabric profile to the container

 Update the pom.xml file

To deploy our Fuse project using Maven we need to make some changes to our pom file. We need to change the following:

  1. Add info for creating the OSGi bundle
  2. Add Apache Felix plugin
  3. Add Fabric8 plugin and properties

Add info for creating the OSGi bundle

To use an OSGi bundle we need to change two things in our pom file.

  • Change the packaging from jar to bundle (a one-line sketch follows the plugin below)
  • Add the Apache Felix plugin:
<plugin>
   <groupId>org.apache.felix</groupId>
   <artifactId>maven-bundle-plugin</artifactId>
   <version>2.3.7</version>
   <extensions>true</extensions>
   <configuration>
      <instructions>
         <Bundle-SymbolicName>fabricdemo</Bundle-SymbolicName>
         <Private-Package>nl.rubix.fabricdemo.*</Private-Package>
         <Import-Package>*</Import-Package>
      </instructions>
   </configuration>
</plugin>
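The packaging change itself is a one-line edit in the pom (taken from the full pom at the end of this post):

<packaging>bundle</packaging>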

 Add Fabric8 plugin and properties

Next up is adding the fabric8 parts to our pom file. This involves adding the fabric8 plugin, adding some fabric8 properties, and removing the existing Maven plugins.

Add Fabric8 maven plugin

<!-- fabric8 plugin for using deploying via mvn fabric:deploy -->
<plugin>
   <groupId>io.fabric8</groupId>
   <artifactId>fabric8-maven-plugin</artifactId>
</plugin>

 Add Fabric8 properties

There are a lot of fabric8 properties; for this simple demo we are going to use only one, for setting the name of the fabric profile:

<fabric8.profile>fabricdemo</fabric8.profile>

Optionally you can add other fabric8 properties for adding specific features to your fabric profile or defining a parent profile for creating a profile hierarchy. For example:

<fabric8.features>camel camel-cxf</fabric8.features>
<fabric8.parentProfiles>feature-camel</fabric8.parentProfiles>

 Remove the existing maven plugin

Finally, we need to remove the existing Maven plugins so no conflicts can arise when using the fabric plugin.

Remove the following plugins:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>2.5.1</version>
    <configuration>
        <source>1.6</source>
        <target>1.6</target>
    </configuration>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>2.6</version>
    <configuration>
        <encoding>UTF-8</encoding>
    </configuration>
</plugin>

 Do a Maven install

Execute the command ‘mvn install’.

Note: if you want to skip the unit tests, execute the command ‘mvn -Dmaven.test.skip=true install’.

Optionally, if you want to run your Camel route locally so you can do some manual testing, execute the command ‘mvn camel:run’.
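Summarizing the Maven commands of this step:

mvn install                           # build and install the project
mvn -Dmaven.test.skip=true install    # the same, but skipping the unit tests
mvn camel:run                         # optional: run the Camel route locally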

Deploy your project using Maven

Now we are ready to deploy our Fuse project as a fabric profile making it available in fabric8.

To do this simply execute the command: ‘mvn fabric8:deploy’

The first time you execute this command for a particular project it can take some time while Maven downloads all the necessary jar files. After the deployment is successful the output should look something like this:

[INFO] Updating profile: fabricdemo with parent profile(s): [feature-camel] using OSGi resolver
[INFO] About to invoke mbean io.fabric8:type=ProjectDeployer on jolokia URL: http://localhost:8181/jolokia with user: admin
[INFO] Result: DeployResults{profileUrl='null', profileId='fabricdemo', versionId='1.0'}
[INFO] Uploading file ReadMe.txt to invoke mbean io.fabric8:type=Fabric on jolokia URL: http://localhost:8181/jolokia with user: admin
[INFO] No profile configuration file directory /home/jboss/workspace/fabricdemo/src/main/fabric8 is defined in this project; so not importing any other configuration files into the profile.
[INFO] Performing profile refresh on mbean: io.fabric8:type=Fabric version: 1.0 profile: fabricdemo
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2:56.842s
[INFO] Finished at: Sun Sep 14 13:30:51 CEST 2014
[INFO] Final Memory: 28M/120M
[INFO] ------------------------------------------------------------------------
[jboss@localhost fabricdemo]$

 

Now we can switch to Hawtio for finalizing our deployment and starting our application.

Create a fabric container

We assume you have already created a fabric and your server is running.

In the karaf console the command ‘fabric:container-list’ should output something like this:

deploying-into-fabric8-2

Now we can switch to hawtio to finish up the deployment. Go to localhost:8181 (or, when using a remote server, to the server address) to start up the hawtio console.

The startup screen (after you log in) should look something like this:

deploying-into-fabric8-3

Now click the Create button to create a new container, which we will use to deploy our newly created fabric profile into.

Type a container name; in this example I am using ‘fabric8 demo’. Then enter ‘full’ into the search box and select jboss/fuse full. This will provision the container with the Fuse runtime (like Apache Camel, CXF and ActiveMQ). Now press ‘Create and Start Container’.

deploying-into-fabric8-4

After some time our new container will be started:

deploying-into-fabric8-5

However, we still need to provision this container with the fabric profile we created earlier.

Add the fabric profile to the container

After the new container is started, click on the container and add the fabric profile we just deployed with Maven: click Add -> expand ‘Uncategorized’ -> select your profile -> click Add.

deploying-into-fabric8-6

Now fabric8 will provision the container with our fabric profile. After it is done provisioning and the container is running you should see something like this:

deploying-into-fabric8-7

Note the little Camel after Services and the camel in the JMS Domains.

Now our Fuse project is correctly deployed and running in a fabric8 container. To open the container simply click the Open button and the Hawtio console of the container will open in a new browser tab.

When we go to the Camel tab we can expand the route we have just deployed to see how many messages the route has processed. After some test messages it seems our route is working!

deploying-into-fabric8-8

The complete pom.xml file used for this project:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>nl.rubix</groupId>
  <artifactId>fabricdemo</artifactId>
  <packaging>bundle</packaging>
  <version>1.0.0-SNAPSHOT</version>

  <name>A Camel Spring Route</name>
  <url>http://www.myorganization.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <fabric8.profile>fabricdemo</fabric8.profile>
  </properties>

  <repositories>
    <repository>
      <id>release.fusesource.org</id>
      <name>FuseSource Release Repository</name>
      <url>http://repo.fusesource.com/nexus/content/repositories/releases</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
      <releases>
        <enabled>true</enabled>
      </releases>
    </repository>
    <repository>
     <id>ea.fusesource.org</id>
     <name>FuseSource Community Early Access Release Repository</name>
     <url>http://repo.fusesource.com/nexus/content/groups/ea</url>
     <snapshots>
      <enabled>false</enabled>
     </snapshots>
     <releases>
      <enabled>true</enabled>
     </releases>
    </repository>    
    <repository>
      <id>snapshot.fusesource.org</id>
      <name>FuseSource Snapshot Repository</name>
      <url>http://repo.fusesource.com/nexus/content/repositories/snapshots</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <releases>
        <enabled>false</enabled>
      </releases>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>release.fusesource.org</id>
      <name>FuseSource Release Repository</name>
      <url>http://repo.fusesource.com/nexus/content/repositories/releases</url>
      <snapshots>
        <enabled>false</enabled>
      </snapshots>
      <releases>
        <enabled>true</enabled>
      </releases>
    </pluginRepository>
    <pluginRepository>
     <id>ea.fusesource.org</id>
     <name>FuseSource Community Early Access Release Repository</name>
     <url>http://repo.fusesource.com/nexus/content/groups/ea</url>
     <snapshots>
      <enabled>false</enabled>
     </snapshots>
     <releases>
      <enabled>true</enabled>
     </releases>
    </pluginRepository>      
    <pluginRepository>
      <id>snapshot.fusesource.org</id>
      <name>FuseSource Snapshot Repository</name>
      <url>http://repo.fusesource.com/nexus/content/repositories/snapshots</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <releases>
        <enabled>false</enabled>
      </releases>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-core</artifactId>
      <version>2.12.0.redhat-610379</version>
    </dependency>
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-spring</artifactId>
      <version>2.12.0.redhat-610379</version>
    </dependency>

    <!-- logging -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.7.5</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.7.5</version>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>

    <!-- testing -->
    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-test-spring</artifactId>
      <version>2.12.0.redhat-610379</version>
      <scope>test</scope>
    </dependency>

  </dependencies>

  <build>
    <defaultGoal>install</defaultGoal>

    <plugins>

      <!-- allows the route to be run via 'mvn camel:run' -->
      <plugin>
        <groupId>org.apache.camel</groupId>
        <artifactId>camel-maven-plugin</artifactId>
        <version>2.12.0.redhat-610379</version>
      </plugin>
      
      <!-- apache felix plugin for creating an OSGi bundle -->
      <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <version>2.3.7</version>
        <extensions>true</extensions>
        <configuration>
          <instructions>
            <Bundle-SymbolicName>homeloan</Bundle-SymbolicName>
            <Private-Package>nl.rubix.fabricdemo.*</Private-Package>
            <Import-Package>*</Import-Package>
          </instructions>
        </configuration>
      </plugin>

      <!-- fabric8 plugin for deploying via 'mvn fabric8:deploy' -->
      <plugin>
        <groupId>io.fabric8</groupId>
        <artifactId>fabric8-maven-plugin</artifactId>
      </plugin>
    </plugins>
  </build>

</project>
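
For reference, with the fabric8-maven-plugin configured as above, the profile is built and uploaded to the running fabric with a single Maven command; a minimal sketch, assuming the fabric is reachable with the plugin's defaults (the fabric8.profile property above determines the profile name, fabricdemo):

$ mvn clean install fabric8:deploy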

 

 
Posted on 2014-10-10 in JBoss Fuse


Stopping a fabric container after ‘container has not been created by fabric’ error message

When creating a new fabric sometimes there are still containers running in the background of the old fabric. These can cause conflicts with the ‘new’ containers. Especially when using the same ZooKeeper credentials as the old fabric in the new one.

Note: this post only explains how to stop and delete a container after the 'container has not been created by fabric' error message. It may very well be that other configurations and background processes of the old fabric are still running, which can cause unexpected behavior. If you have the possibility to start your fabric from scratch (for instance on a developer machine) it may be better to restart Fuse with ./fuse -clean and create a new fabric with the command fabric:create --clean.
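
For completeness, that clean restart looks roughly like this; a sketch, assuming the Fuse installation directory used elsewhere in this post:

[jboss@localhost bin]$ ./fuse -clean
JBossFuse:karaf@root> fabric:create --clean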

For more information about the available commands see the Fuse documentation:

https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Console_Reference/files/ConsoleRefIntro.html

One of the problems is that when trying to start, stop or delete a container you get the error message ‘container <<your container>> has not been created by fabric’.

[image: container-stop-error-1]

Also trying to stop/start/delete the container from the Fuse karaf console will result in the same message.

JBossFuse:karaf@root> fabric:container-list
[id]             [version]  [connected]  [profiles]                        [provision status]
root*            1.0        true         fabric, fabric-ensemble-0000-1    success
  demoContainer  1.0        true         jboss-fuse-full, GettingStarted   success
JBossFuse:karaf@root> fabric:container-stop demoContainer
Error executing command: Container demoContainer has not been created using Fabric

So how can we stop this container?

To stop this container we need to take two steps:

1) install the 'zookeeper-commands' feature in the root container

2) use ZooKeeper commands to find the PID of the container so we can kill it through the OS.

1) Installing the 'zookeeper-commands' feature in the root container

In a fabric, features are also controlled by fabric8; for this reason the good old 'features:install' command will not work.

I resorted to installing the zookeeper-commands feature through the Hawtio console.

In the Hawtio console go to 'Wiki' → edit the fabric profile's features → search for zookeeper and select zookeeper-commands → click the 'Add' button and finally click 'Save'.

In the Hawtio console go to 'Wiki' and select the fabric profile.

[image: container-stop-error-2]

Then select the edit features button at the bottom of the features list:

[image: container-stop-error-3]

Search for zookeeper, select zookeeper-commands → click 'Add' → click 'Save changes'.
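
Alternatively, the feature can be added from the console with fabric:profile-edit; a sketch, assuming the profile being edited is the 'fabric' profile at version 1.0 as in the screenshots:

JBossFuse:karaf@root> fabric:profile-edit --features zookeeper-commands fabric 1.0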

[image: container-stop-error-4]

2) Stopping the container

After installing the zookeeper-commands feature we can use the zookeeper commands in the Fuse/karaf shell:

To find the PID of the container we want to stop, execute the following command:

JBossFuse:karaf@root> zk:list -r -d|grep demoContainer|grep pid
/fabric/registry/containers/status/demoContainer/pid = 7177

Now in a ‘regular’ shell we can kill the process (and thus the container) with this PID.

[jboss@localhost ]$ ps -ef|grep 7177
jboss     7177     1  2 12:30 pts/0    00:03:02 java -server -Dcom.sun.management.jmxremote -Dzookeeper.url=10.0.2.15:2181 -Dzookeeper.password.encode=true -Dzookeeper.password=ZKENC=YWRtaW4= -Xmx512m -XX:+UnlockDiagnosticVMOptions -XX:+UnsyncloadClass -Dbind.address=0.0.0.0 -Dio.fabric8.datastore.component.name=io.fabric8.datastore -Dio.fabric8.datastore.type=caching-git -Dio.fabric8.datastore.felix.fileinstall.filename=file:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/etc/io.fabric8.datastore.cfg -Dio.fabric8.datastore.service.pid=io.fabric8.datastore -Dio.fabric8.datastore.gitPullPeriod=1000 -Djava.util.logging.config.file=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer/etc/java.util.logging.properties -Djava.endorsed.dirs=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/jre/lib/endorsed:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/lib/endorsed:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/endorsed -Djava.ext.dirs=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/jre/lib/ext:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/lib/ext:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/ext -Dkaraf.home=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379 -Dkaraf.base=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer -Dkaraf.data=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer/data -Dkaraf.etc=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer/etc -Dkaraf.startLocalConsole=false -Dkaraf.startRemoteShell=true -classpath /home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/org.apache.servicemix.specs.activator-2.3.0.redhat-610379.jar:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/karaf.jar:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/karaf-jaas-boot.jar:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/esb-version.jar org.apache.karaf.main.Main
[jboss@localhost ]$ kill -9 7177

Now the container is stopped, but it still exists, and we still can't start or stop it through the regular commands. So in the next step I'll explain how to delete the container.

Deleting the container

To remove the container from the fabric we need to remove all its ZooKeeper entries. To do this we first need to know which entries to remove.

We can use the zk:list command again to retrieve the ZooKeeper entries.

JBossFuse:karaf@root> zk:list -r|grep demo
/fabric/configs/containers/demoContainer
/fabric/configs/versions/1.0/containers/demoContainer
/fabric/registry/containers/config/demoContainer
/fabric/registry/containers/config/demoContainer/bindaddress
/fabric/registry/containers/config/demoContainer/maximumport
/fabric/registry/containers/config/demoContainer/localhostname
/fabric/registry/containers/config/demoContainer/localip
/fabric/registry/containers/config/demoContainer/publicip
/fabric/registry/containers/config/demoContainer/parent
/fabric/registry/containers/config/demoContainer/resolver
/fabric/registry/containers/config/demoContainer/minimumport
/fabric/registry/containers/config/demoContainer/manualip
/fabric/registry/containers/config/demoContainer/ip
/fabric/registry/ports/containers/demoContainer
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.shell
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.shell/sshPort
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.management
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.management/rmiServerPort
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.management/rmiRegistryPort
/fabric/registry/ports/containers/demoContainer/org.ops4j.pax.web
/fabric/registry/ports/containers/demoContainer/org.ops4j.pax.web/org.osgi.service.http.port

Now that we know all the entries of the container we want to delete, we can remove them using the zk:delete command. Make sure you use the -r parameter to recursively remove all entries.

JBossFuse:karaf@root> zk:delete -r /fabric/configs/containers/demoContainer
JBossFuse:karaf@root> zk:delete -r /fabric/configs/versions/1.0/containers/demoContainer
JBossFuse:karaf@root> zk:delete -r /fabric/registry/containers/config/demoContainer
JBossFuse:karaf@root> zk:delete -r /fabric/registry/ports/containers/demoContainer

Now when we execute the fabric:container-list command the demoContainer is no longer in the list.

JBossFuse:karaf@root> fabric:container-list
[id]                           [version] [connected] [profiles]                                         [provision status]
root*                          1.0       true        fabric, fabric-ensemble-0000-1                     success
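
For containers that fabric does recognize, none of this ZooKeeper surgery is needed; in that case the container can simply be removed with:

JBossFuse:karaf@root> fabric:container-delete demoContainer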
 
Posted on 2014-09-20 in JBoss Fuse