Tag Archives: fuse fabric

Configuring a Network of Brokers in Fuse Fabric

In ActiveMQ it is possible to define logical groups of message brokers in order to obtain more resiliency or throughput.
The setup configuration described here can be outlined as follows:
[Image: Network_of_brokers_layout]
Creating broker groups and a network of brokers can be done in various ways in JBoss Fuse Fabric. Here we are going to use the Fabric CLI.
The following steps are necessary to create the configuration above:

  1. Create a Fabric (if we don’t already have one)

  2. Create child containers

  3. Create the MQ broker profiles and assign them to the child containers

  4. Connect a couple of clients for testing

1. Creating a Fabric (optional)

Assuming we start with a clean Fuse installation, the first step is creating a Fabric. This step can be skipped if a fabric is already available.

In the Fuse CLI execute the following command:

JBossFuse:karaf@root> fabric:create -p fabric --wait-for-provisioning

2. Create child containers

Next we are going to create two sets of child containers which are going to host our brokers. Note, the child containers we are going to create in this step are empty child containers and do not yet contain AMQ brokers. We are going to provision these containers with AMQ brokers in step 3.

First create the child containers for siteA:

JBossFuse:karaf@root> fabric:container-create-child root site-a 2

Next create the child containers for siteB:

JBossFuse:karaf@root> fabric:container-create-child root site-b 2
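To verify that all four child containers came up (a quick check, not part of the original walkthrough), list the containers; the output should show root plus site-a1, site-a2, site-b1 and site-b2:

JBossFuse:karaf@root> fabric:container-list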

3. Create the MQ broker profiles and assign them to the child containers

In this step we are going to create the broker profiles in fabric and assign them to the containers we created in the previous step.


JBossFuse:karaf@root> fabric:mq-create --group site-a --networks site-b --networks-username admin --networks-password admin --assign-container site-a1,site-a2 site-a-profile

JBossFuse:karaf@root> fabric:mq-create --group site-b --networks site-a --networks-username admin --networks-password admin --assign-container site-b1,site-b2 site-b-profile

The fabric:mq-create command creates a broker profile in Fuse Fabric. The --group flag assigns the brokers in the profile to a group. The --networks flag creates the network connection required for a network of brokers. With the --assign-container flag we assign the newly created broker profile to one or more containers.
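Once the containers have been provisioned, the broker groups and their master/slave state can be inspected from the CLI as well; for example (exact output varies by Fuse version):

JBossFuse:karaf@root> fabric:cluster-list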

4. Connect a couple of clients for testing

A sample project containing two clients, a producer and a consumer, is available on GitHub.

Clone the repository:

$ git clone https://github.com/pimg/mq-fabric-client.git

Build the project:

$ mvn clean install

Start the message consumer (in the java-consumer directory):

$ mvn exec:java

Start the message producer (in the java-producer directory):

$ mvn exec:java

Observe the console logging of the producer:

14:48:58 INFO  Using local ZKClient
14:48:58 INFO  Starting
14:48:58 INFO  Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
14:48:58 INFO  Client environment:host.name=pim-XPS-15-9530
14:48:58 INFO  Client environment:java.version=1.8.0_77
14:48:58 INFO  Client environment:java.vendor=Oracle Corporation
14:48:58 INFO  Client environment:java.home=/home/pim/apps/jdk1.8.0_77/jre
14:48:58 INFO  Client environment:java.class.path=/home/pim/apps/apache-maven-3.3.9/boot/plexus-classworlds-2.5.2.jar
14:48:58 INFO  Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
14:48:58 INFO  Client environment:java.io.tmpdir=/tmp
14:48:58 INFO  Client environment:java.compiler=<NA>
14:48:58 INFO  Client environment:os.name=Linux
14:48:58 INFO  Client environment:os.arch=amd64
14:48:58 INFO  Client environment:os.version=4.2.0-34-generic
14:48:58 INFO  Client environment:user.name=pim
14:48:58 INFO  Client environment:user.home=/home/pim
14:48:58 INFO  Client environment:user.dir=/home/pim/workspace/mq-fabric/java-producer
14:48:58 INFO  Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@47e011e3
14:48:58 INFO  Opening socket connection to server localhost/127.0.0.1:2181
14:48:58 INFO  Socket connection established to localhost/127.0.0.1:2181, initiating session
14:48:58 INFO  Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x154620540e80009, negotiated timeout = 40000
14:48:58 INFO  State change: CONNECTED
14:48:59 INFO  Adding new broker connection URL: tcp://10.0.3.1:38417
14:49:00 INFO  Successfully connected to tcp://10.0.3.1:38417
14:49:00 INFO  Sending to destination: queue://fabric.simple this text: 1. message sent
14:49:00 INFO  Sending to destination: queue://fabric.simple this text: 2. message sent
14:49:01 INFO  Sending to destination: queue://fabric.simple this text: 3. message sent
14:49:01 INFO  Sending to destination: queue://fabric.simple this text: 4. message sent
14:49:02 INFO  Sending to destination: queue://fabric.simple this text: 5. message sent
14:49:02 INFO  Sending to destination: queue://fabric.simple this text: 6. message sent
14:49:03 INFO  Sending to destination: queue://fabric.simple this text: 7. message sent
14:49:03 INFO  Sending to destination: queue://fabric.simple this text: 8. message sent
14:49:04 INFO  Sending to destination: queue://fabric.simple this text: 9. message sent

Observe the console logging of the consumer:

14:48:20 INFO  Using local ZKClient
14:48:20 INFO  Starting
14:48:20 INFO  Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
14:48:20 INFO  Client environment:host.name=pim-XPS-15-9530
14:48:20 INFO  Client environment:java.version=1.8.0_77
14:48:20 INFO  Client environment:java.vendor=Oracle Corporation
14:48:20 INFO  Client environment:java.home=/home/pim/apps/jdk1.8.0_77/jre
14:48:20 INFO  Client environment:java.class.path=/home/pim/apps/apache-maven-3.3.9/boot/plexus-classworlds-2.5.2.jar
14:48:20 INFO  Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
14:48:20 INFO  Client environment:java.io.tmpdir=/tmp
14:48:20 INFO  Client environment:java.compiler=<NA>
14:48:20 INFO  Client environment:os.name=Linux
14:48:20 INFO  Client environment:os.arch=amd64
14:48:20 INFO  Client environment:os.version=4.2.0-34-generic
14:48:20 INFO  Client environment:user.name=pim
14:48:20 INFO  Client environment:user.home=/home/pim
14:48:20 INFO  Client environment:user.dir=/home/pim/workspace/mq-fabric/java-consumer
14:48:20 INFO  Initiating client connection, connectString=localhost:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@3d732a14
14:48:20 INFO  Opening socket connection to server localhost/127.0.0.1:2181
14:48:20 INFO  Socket connection established to localhost/127.0.0.1:2181, initiating session
14:48:20 INFO  Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x154620540e80008, negotiated timeout = 40000
14:48:20 INFO  State change: CONNECTED
14:48:21 INFO  Adding new broker connection URL: tcp://10.0.3.1:38417
14:48:21 INFO  Successfully connected to tcp://10.0.3.1:38417
14:48:21 INFO  Start consuming messages from queue://fabric.simple with 120000ms timeout
14:49:00 INFO  Got 1. message: 1. message sent
14:49:00 INFO  Got 2. message: 2. message sent
14:49:01 INFO  Got 3. message: 3. message sent
14:49:01 INFO  Got 4. message: 4. message sent
14:49:02 INFO  Got 5. message: 5. message sent
14:49:02 INFO  Got 6. message: 6. message sent
14:49:03 INFO  Got 7. message: 7. message sent
14:49:03 INFO  Got 8. message: 8. message sent
14:49:04 INFO  Got 9. message: 9. message sent
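For reference, the producer side of such a client requires very little code. Below is a minimal sketch, not the actual GitHub sample: it assumes the ActiveMQ fabric discovery URL syntax (discovery:(fabric:site-a), which requires the mq-fabric discovery agent on the classpath) and the admin/admin credentials configured on the brokers; the real project may wire this up differently.

import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class SimpleProducer {
    public static void main(String[] args) throws Exception {
        // The fabric discovery agent resolves the group name (site-a) to live broker URLs via ZooKeeper
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("discovery:(fabric:site-a)");
        factory.setUserName("admin");
        factory.setPassword("admin");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Same queue the sample clients use
        MessageProducer producer = session.createProducer(session.createQueue("fabric.simple"));
        producer.send(session.createTextMessage("1. message sent"));

        connection.close();
    }
}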
 
Posted on 2016-12-02 in JBoss Fuse

JBoss Fuse 6.2: a first look at the http gateway

A tech preview in 6.1, the http gateway for Fuse Fabric is fully supported in JBoss Fuse 6.2. The http gateway (and the mq gateway for ActiveMQ) offers clients that do not reside inside the fabric a gateway to services within the fabric. Without this kind of functionality, services have to run on particular IP addresses and ports for clients to be able to connect to them. Even when using a loadbalancer, the loadbalancer has to know on which IP and port combination the services are running. The whole concept of fabric and containers in general is that they become IP agnostic: it does not really matter where and on how many containers services are running, as long as they can handle the load. This IP agnosticism, however, introduces complexity for clients that need to connect to these services. The http gateway solves this by using autodiscovery based on the Zookeeper registry of Fuse Fabric and by exposing itself to external clients on one static port. Now only the http gateway needs to be deployed on a static IP and port, while the services can be deployed on any container inside the fabric.

Prior to the 6.2 release this autodiscovery mechanism had to be built manually, and a dumbed down version of the gateway could be built using Camel. I have blogged about creating both a service and a client (acting as a dumbed down gateway; I called it a proxy in my post) here:

https://pgaemers.wordpress.com/2015/02/26/fabric-load-balancer-for-cxf-part-1/
https://pgaemers.wordpress.com/2015/03/16/fabric-load-balancer-for-cxf-part-2/

 

Now that the http gateway is fully supported, let’s take a look at it.

Deploying a service

To hit the ground running I’ve used the REST quickstart example provided in JBoss Fuse. I simply create a new fabric container with the quickstarts/cxf/rest profile:

[Image: http-gateway-1]

After the container is started it provides a REST service with four verbs. For the purpose of demoing the http gateway we will use only the GET operation, since it is the simplest to invoke.

[Image: http-gateway-7a]

In Hawtio when navigating to Services -> APIs we can see the API provided by the REST quickstart is registered inside Fuse. We can browse the endpoint or retrieve the WADL.

[Image: http-gateway-2]

When invoking the GET operation from a browser we can fetch a response.

[Image: http-gateway-3]

When we connect to the container and inspect the log file we can see the following entries:

We can see the invoked endpoint is: http://10.0.2.15:8182/cxf/crm/customerservice/customers/123

Deploying the http gateway

Deploying the standard out of the box http gateway couldn’t be simpler. Just spin up a new container and add the Gateway/http profile to it.

[Image: http-gateway-5]

By default the http gateway is exposed on port 9000, and since we do not have any custom mapping rules implemented, the gateway uses the context path of the service as is. This means we can use the same URI as in the previous request, except for the port, which is now 9000. So when invoking this endpoint we get the same response we got before: http://10.0.2.15:9000/cxf/crm/customerservice/customers/123/
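The same check can be done from the command line; a quick smoke test, assuming the same host and port as above:

$ curl http://10.0.2.15:9000/cxf/crm/customerservice/customers/123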

[Image: http-gateway-6]

When looking at the log file of the container hosting our service we can now see the following entries:

[Image: http-gateway-4]

We can see the request came in through port 9000.

Replicating the service

But what happens when we create another instance of our REST quickstart service, by creating another container with the same profile, or by replicating our container?

Since the REST quickstart has a relative path configured, JBoss Fuse Fabric allocates a port when provisioning a container with it. In our first container this was 8182. To see the endpoint of our new container we can again go to the Services -> APIs tab in Hawtio and take a look:

[Image: http-gateway-8]

Here we see the latest instance of the service is bound to port 8184, and indeed when we invoke this endpoint we retrieve the expected response:

[Image: http-gateway-9]

When using the gateway endpoint (port 9000) to invoke the service, we can observe in the logs of both containers that gateway requests are fed to both endpoints:

[Image: http-gateway-10]

[Image: http-gateway-11]

This means clients wanting to invoke this service can use the gateway endpoint, in this example port 9000, and requests get automatically redirected to the service containers. Dynamically reallocating the services to new IP addresses and/or ports does not affect this, as long as the service is inside the fabric and the endpoint is registered inside Zookeeper!

 

Configuring the http gateway

Until now we have been running the http gateway in the standard configuration. This means the redirect rules take the context path of the service as is. That’s why we only had to change the port number to direct our browser to the gateway rather than to one of the services directly. It is, however, possible to change the configuration of the gateway profile and have different mapping rules.

To change the configuration of the http gateway we have to modify the properties stored in the gateway/http profile. Make sure you modify the properties in the profile and not in the child container running the profile, since that can cause synchronization problems inside the fabric. When changing properties, always do it in the profile/configuration registry.

This can be done by utilizing the Git API of the Fabric configuration registry directly, or by using Hawtio. For this demo we are going to use Hawtio.

[Image: http-gateway-12]

When using Hawtio to browse to the http gateway profile we can see the four property files containing the mapping rules:

  • io.fabric8.gateway.http.mapping-apis.properties
  • io.fabric8.gateway.http.mapping-git.properties
  • io.fabric8.gateway.http.mapping-servlets.properties
  • io.fabric8.gateway.http.mapping-webapps.properties

The mapping rules, and property files, are divided into four different categories. The documentation explains what those categories are: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.2/html/Fabric_Guide/Gateway-Config.html

Our REST quickstart falls under the mapping-apis category, so we will have to modify the “io.fabric8.gateway.http.mapping-apis.properties” file. This can be done straight from Hawtio.

Simply click that particular property file and it opens in read-only mode.

[Image: http-gateway-13]

Here we see the path in the Zookeeper registry where the endpoints of all APIs are located. To edit the file simply click edit.

As mentioned before, the default mapping rule maps the zooKeeperPath to the context path, but we can insert our own paths.

To modify the mapping add the following property to the property file:

uriTemplate: /mycoolpath{contextPath}/

The complete file looks like this:

#
#  Copyright 2005-2014 Red Hat, Inc.
#
#  Red Hat licenses this file to you under the Apache License, version
#  2.0 (the "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
#  implied.  See the License for the specific language governing
#  permissions and limitations under the License.
#

zooKeeperPath = /fabric/registry/clusters/apis
uriTemplate: /mycoolpath{contextPath}/

Now when we go to this URL: http://localhost:9000/mycoolpath/cxf/crm/customerservice/customers/123/ we get the response from the service.

[Image: http-gateway-14]
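The same check from the command line (hostname assumed to be localhost, as in the URL above):

$ curl http://localhost:9000/mycoolpath/cxf/crm/customerservice/customers/123/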

In the example above we just changed the uriTemplate property. For more fine-grained URL rules we can add extra rules that also modify the zooKeeperPath property, for example to split REST and SOAP APIs.

The http gateway is a very handy tool for solving the problem of having to know which service is running on which container, bound to which port. Since it uses the Zookeeper registry of the fabric, it makes the services container agnostic. It should also help tremendously when securing the fabric: the gateway can drastically reduce the number of ports that need to be opened to the outside world and can make configuring external proxies and load balancers a lot easier.

 
Posted on 2015-07-18 in JBoss Fuse

Fabric load balancer for CXF – Part 2

In part 1 we explored the possibility of using Fabric as a load balancer for multiple CXF http endpoints and took a more detailed look at how to implement the Fabric load balancing feature on a CXFRS service. In this part we will look into the client side of the load balancer feature.

In order to leverage the Fabric load balancer from the client side, clients have to be “inside the Fabric”, meaning they need access to the Zookeeper registry.

When we again take a look at a diagram explaining the load balancer feature we can see clients will need to perform a lookup in the Zookeeper registry to obtain the actual endpoints of the http service.

[Image: Fabric load balancer CXF Part 1 - 1]

As mentioned in part 1 there are two possibilities to accomplish this lookup in the Fabric Zookeeper registry: the fabric-http-gateway profile and a Camel proxy.

As of this moment, in JBoss Fuse 6.1, the fabric-http-gateway profile is not fully supported yet. So in this post we will explore the Camel proxy route.

Camel route

A Camel proxy is essentially a Camel route which provides a protocol bridge between “regular” http and Fabric load balancing by looking up endpoints in the Fabric (Zookeeper) registry. The Camel route itself can be very straightforward; an example Camel route implementing this proxy can look like:

<camelContext id="cxfProxyContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="cxfProxyRoute">
        <from uri="cxfrs:bean:proxyEndpoint"/>
        <log message="got proxy request"/>
        <to uri="cxfrs:bean:clientEndpoint"/>
    </route>
</camelContext>

The Fabric load balancer will be implemented on the cxfrs:bean in the Camel route.

Note the Camel route also exposes a CXFRS endpoint in the <from> endpoint; therefore a regular CXFRS service class and endpoint have to be created.

The service class is the same we used in part 1 for the service itself:


package nl.rubix.cxf.fabric.ha.test;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
public class Endpoint {
    
    @GET
    @Path("/test/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getAssets(@PathParam("id") String id){
        return null;
    }
    
}

The endpoint we will expose our proxy on is configured like this:

<cxf:rsServer id="proxyEndpoint" address="http://localhost:1234/proxy" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint"/>

Implementing the Fabric load balancer

To implement the load balancer on the client side, the pom has to be updated just as on the server side. Next to the regular cxfrs dependencies, add the fabric-cxf dependency to the pom:

<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric-cxf</artifactId>
    <version>1.0.0.redhat-379</version>
    <type>bundle</type>
</dependency>

Also add the fabric8 cxf package to the Import-Package instruction of the maven-bundle-plugin so it is added to the OSGi manifest file:

<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>2.3.7</version>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Bundle-SymbolicName>cxf-fabric-proxy-test</Bundle-SymbolicName>
            <Private-Package>nl.rubix.cxf.fabric.proxy.test.*</Private-Package>
            <Import-Package>*,io.fabric8.cxf</Import-Package>
        </instructions>
    </configuration>
</plugin>

The next step is to add the load balancer feature to the Blueprint.xml (or Spring configuration if you prefer Spring). These steps are similar to the configuration of the Fabric load balancer on the server side discussed in part 1.

So first add the cxf-core namespace to the Blueprint.xml.

xmlns:cxf-core="http://cxf.apache.org/blueprint/core"

Add an OSGi reference to the CuratorFramework OSGi service[1]:


<reference id="curator" interface="org.apache.curator.framework.CuratorFramework" />

Next, instantiate the FabricLoadBalancerFeature bean:

<bean id="fabricLoadBalancerFeature" class="io.fabric8.cxf.FabricLoadBalancerFeature">
    <property name="curator" ref="curator" />
    <property name="fabricPath" value="cxf/endpoints" />
</bean>

It is important to note that the value of the “fabricPath” property must be exactly the same as the fabricPath of the service the client will invoke. This fabricPath points to the location in the Fabric Zookeeper registry where the endpoints are stored; it is the coupling between the client and the service.

To use the load balancer we must register it with the cxfrs:bean used in the to endpoint (we are implementing a client, so we register the Fabric load balancer with the cxfrs bean on the client side). Extend the declaration of the rsClient by adding the Fabric load balancer feature to it:

<cxf:rsClient id="clientEndpoint" address="http://dummy/url" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint">
    <cxf:features>
        <ref component-id="fabricLoadBalancerFeature"/>
    </cxf:features>
</cxf:rsClient>

The entire Blueprint.xml for this proxy looks like this:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:camel="http://camel.apache.org/schema/blueprint"
       xmlns:cxf="http://camel.apache.org/schema/blueprint/cxf"
       xmlns:cxf-core="http://cxf.apache.org/blueprint/core"
       xsi:schemaLocation="
       http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
       http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">

  <cxf:rsServer id="proxyEndpoint" address="http://localhost:1234/proxy" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint"/>
  <cxf:rsClient id="clientEndpoint" address="http://dummy/url" serviceClass="nl.rubix.cxf.fabric.proxy.test.Endpoint">
    <cxf:features>
        <ref component-id="fabricLoadBalancerFeature"/>
      </cxf:features>
  </cxf:rsClient> 
  <reference id="curator" interface="org.apache.curator.framework.CuratorFramework" />

    <bean id="fabricLoadBalancerFeature" class="io.fabric8.cxf.FabricLoadBalancerFeature">
        <property name="curator" ref="curator" />
        <property name="fabricPath" value="cxf/endpoints" />
    </bean>
  
  <camelContext id="cxfProxyContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="cxfProxyRoute">
      <from uri="cxfrs:bean:proxyEndpoint"/>
      <log message="got proxy request"/>
      <to uri="cxfrs:bean:clientEndpoint"/>
    </route>
  </camelContext>

</blueprint>

This Camel proxy forwards requests from “http://localhost:1234/proxy” to the two instances of the CXFRS service we created in part 1, running on “http://localhost:2345/testendpoint”.

When we deploy this Camel proxy in a Fabric profile and run it in a Fabric container we can test the proxy:

[Image: Fabric load balancer CXF Part 2 - 2]

The response comes from the service created in part 1, but now it is accessible through our proxy endpoint.
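For instance, invoking the proxy from a shell (the id 123 is an arbitrary example value) should return the fixed string configured in the part 1 route:

$ curl http://localhost:1234/proxy/test/123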

Abstracting endpoints like this makes sense in larger Fabric environments that leverage multiple machines hosting the Fabric containers. Clients no longer need to know the IP and port of the services they call, so scaling and migrating the services to other machines becomes much easier. For instance, adding another instance of a CXFRS service on a container running on another machine, or even in the cloud, no longer requires the endpoints of the clients to be updated.

 

For more information about CXF load balancing using Fabric:

https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Apache_CXF_Development_Guide/FabricHA.html

[1] The CuratorFramework service is for communicating with the Apache Zookeeper registry used by Fabric.

 
Posted on 2015-03-16 in JBoss Fuse

Fabric load balancer for CXF – Part 1

Sticking with the recent CXF theme of this blog, in this two-part series we will explore the combination of CXF http endpoints and Fuse Fabric for autodiscovery and load balancing.

One of the advantages of using Fuse Fabric is the autodiscovery of services within the Fabric. In this two part post we are going to take a look at the load balancing feature provided by Fabric for CXF endpoints. Note, it is also possible to use similar features for plain http endpoints. In the first part we will focus on the server side and set up a CXF service in Fuse to use the load balancing feature provided by Fabric. In the second part we will focus on the client side: consuming the load balanced service and executing the lookup in the Fabric registry.

When using the Fuse Fabric load balancing feature, Fabric will provide the load balancing by discovering CXF endpoints and balancing requests between these endpoints. However, for endpoints to be discovered by Fabric they have to register themselves in the Fabric registry, implemented by Apache ZooKeeper. It is worth noting that when an endpoint is registered in the Fabric, it is still possible to call that endpoint directly on its registered address, without the Fabric load balancer. The advantage of enabling a Fabric load balancer is that CXF(RS) services can be looked up in the Fabric registry, so scaling to new machines can be done without reconfiguring IPs and ports on the client side. However, for this to work a client also has to use Fabric to be able to look up endpoints in the Fabric registry. Since the Fabric registry is only available to clients within the Fabric and not to external (non Fuse Fabric) clients, this will be the focus of part 2.

[Image: Fabric load balancer CXF Part 1 - 1]

In the picture above, from the Red Hat documentation (https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Apache_CXF_Development_Guide/FabricHA.html), the servers (1 and 2) register their endpoints in the Fabric registry under the fabric path (explained below) “demo/lb”. The client performs a lookup on the same “demo/lb” fabric path to obtain the actual endpoints of the service it wants to call. In the design time configuration a dummy address is configured (http://dummyaddress); this address is not used at runtime, since the endpoints are retrieved from the fabric registry.

CXFRS service

For this post a very simple CXFRS service is created exposing one GET method which responds with a fixed string. The service class looks like this:


package nl.rubix.cxf.fabric.ha.test;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/")
public class Endpoint {
    
    @GET
    @Path("/test/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public String getAssets(@PathParam("id") String id){
        return null;
    }

    
}

The Blueprint context implementing the CXFRS service looks like this:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" 
            xmlns:camel="http://camel.apache.org/schema/blueprint" 
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
            xmlns:cxf="http://camel.apache.org/schema/blueprint/cxf"
            xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
            xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
            http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">

  <camelContext id="blueprintContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="cxfrsServerTest">
      <from uri="cxfrs:bean:rsServer?bindingStyle=SimpleConsumer"/>
      <log message="The id contains ${header.id}!!"/>
      <setBody>
        <constant>this is a  response</constant>
      </setBody>
    </route>
  </camelContext>
  
  <cxf:rsServer id="rsServer" address="http://localhost:2345/testendpoint" serviceClass="nl.rubix.cxf.fabric.ha.test.Endpoint" loggingFeatureEnabled="true" />
  
</blueprint>


Note, this service is just plain CXFRS and Camel, no Fabric features have been added yet.

Implementing the load balancer features

To implement Fabric load balancing on the server side the pom has to be updated with extra dependencies and the Blueprint xml file needs to be configured with the load balancer.

The pom needs to be updated with the following items:

Add the fabric-cxf dependency to the pom:


<dependency>
    <groupId>io.fabric8</groupId>
    <artifactId>fabric-cxf</artifactId>
    <version>1.0.0.redhat-379</version>
    <type>bundle</type>
</dependency>


Add io.fabric8.cxf to the Import-Package instruction of the OSGi bundle:


<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <version>2.3.7</version>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Bundle-SymbolicName>cxf-fabric-ha-test</Bundle-SymbolicName>
            <Private-Package>test.cxf.fabric.ha.test.*</Private-Package>
            <Import-Package>*,io.fabric8.cxf</Import-Package>
        </instructions>
    </configuration>
</plugin>


To add the CXF Fabric load balancer to the Blueprint xml file:

Add the cxf-core namespace to the Blueprint xml:

xmlns:cxf-core="http://cxf.apache.org/blueprint/core"

Add an OSGi reference to the CuratorFramework OSGi service (the CuratorFramework service is used for communicating with the Apache Zookeeper registry used by Fabric):


<reference id="curator" interface="org.apache.curator.framework.CuratorFramework" />

Instantiate the FabricLoadBalancerFeature bean:


<bean id="fabricLoadBalancerFeature" class="io.fabric8.cxf.FabricLoadBalancerFeature">
    <property name="curator" ref="curator" />
    <property name="fabricPath" value="cxf/endpoints" />
</bean>

The curator property is a reference to the curator OSGi service declared earlier. The fabricPath is the location in the Fabric (Zookeeper) registry where the CXF endpoints are stored. This location in the registry is used by clients to look up available endpoints; in the picture above it was set to demo/lb.
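To see what actually gets registered, the zookeeper-commands feature (its installation is described in another post in this archive) lets us browse the registry from the karaf shell. Something like this should reveal the endpoint entries under the configured fabricPath:

JBossFuse:karaf@root> zk:list -r | grep cxf/endpoints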

To register the Fabric load balancer with the CXF service add a CXF bus with a reference to the fabricLoadBalancerFeature bean declared above:


<cxf-core:bus>
    <cxf-core:features>
        <cxf-core:logging />
        <ref component-id="fabricLoadBalancerFeature" />
    </cxf-core:features>
</cxf-core:bus>

The entire Blueprint xml for the load balancer looks like this:


<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" 
            xmlns:camel="http://camel.apache.org/schema/blueprint" 
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
            xmlns:cxf="http://camel.apache.org/schema/blueprint/cxf"
            xmlns:cxf-core="http://cxf.apache.org/blueprint/core" 
            xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.0.0"
            xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
       http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd">

  <camelContext id="blueprintContext" trace="false" xmlns="http://camel.apache.org/schema/blueprint">
    <route id="cxfrsServerTest">
      <from uri="cxfrs:bean:rsServer?bindingStyle=SimpleConsumer"/>
      <log message="The id contains ${header.id}!!"/>
      <setBody>
        <constant>this is a  response</constant>
      </setBody>
    </route>
  </camelContext>
  
  <cxf:rsServer id="rsServer" address="http://localhost:2345/testendpoint" serviceClass="nl.rubix.cxf.fabric.ha.test.Endpoint" loggingFeatureEnabled="true" />
  
  <!-- configuration for the Fabric load balancer -->
  <reference id="curator" interface="org.apache.curator.framework.CuratorFramework" />

    <bean id="fabricLoadBalancerFeature" class="io.fabric8.cxf.FabricLoadBalancerFeature">
        <property name="curator" ref="curator" />
        <property name="fabricPath" value="cxf/endpoints" />
    </bean>

    <cxf-core:bus>
        <cxf-core:features>
            <cxf-core:logging />
            <ref component-id="fabricLoadBalancerFeature" />
        </cxf-core:features>
    </cxf-core:bus>

</blueprint>

When we deploy this service in a Fabric profile on a container we can access the service through the endpoints the container assigned to it. Note this is not yet using the load balancer feature, since accessing the service through a browser does not perform the lookup in the Fabric registry.

[Image: Fabric load balancer CXF Part 1 - 2]

You can find the endpoint the container assigned to the service on the APIs tab:

[Image: Fabric load balancer CXF Part 1 - 3]

When we access the endpoint through a browser, again without using the Fabric load balancer, we get the response:

[Image: Fabric load balancer CXF Part 1 - 4]
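The same call can be made from a shell. A sketch, assuming the container kept the design-time address from the blueprint (Fabric may remap the port; check the APIs tab for the actual one):

$ curl http://localhost:2345/testendpoint/test/123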

As mentioned above, to use the Fabric load balancing on the client side, clients have to be able to access the Fabric registry. There are two possibilities to accomplish this: the fabric-http-gateway profile and a Camel proxy.

In part 2 we will explore the client side for accessing the Fabric load balancer.

For more information about CXF load balancing using Fabric:

https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Apache_CXF_Development_Guide/FabricHA.html

 
Posted on 2015-02-26 in JBoss Fuse

Fuse Fabric MQ provision exception

When using the discovery endpoint to connect to a master/slave ActiveMQ cluster I ran into an error. It is worth noting that this exception is not caused by the discovery mechanism; I happened to find this issue when using the discovery URL, but hard wiring the connector (e.g. using tcp://localhost:61616) will also cause it. The issue has to do with the container setup and profiles, which I will explain below.

Provision Exception:

org.osgi.service.resolver.ResolutionException: Unable to resolve dummy/0.0.0: missing requirement [dummy/0.0.0] osgi.identity; osgi.identity=fabricdemo; type=osgi.bundle; version="[1.0.0.SNAPSHOT,1.0.0.SNAPSHOT]" [caused by: Unable to resolve fabricdemo/1.0.0.SNAPSHOT: missing requirement [fabricdemo/1.0.0.SNAPSHOT] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.apache.activemq.camel.component)(version>=5.9.0)(!(version>=6.0.0)))"]

[Image: Fabric-MQ-provision-error-1]

This is caused by the Fuse container setup, where the brokers run on two separate containers in a master/slave construction. The container running the deployed Camel route is also running the JBoss Fuse minimal profile.

[Image: Fabric-MQ-provision-error-2]

The connection to the broker cluster is set up in the Blueprint context in the following manner:

<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
	<property name="brokerURL" value="discovery:(fabric:default)" />
	<property name="userName" value="admin" />
	<property name="password" value="admin" />
</bean>
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
	<property name="maxConnections" value="1" />
	<property name="maximumActiveSessionPerConnection" value="500" />
	<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="jmsConfig" class="org.apache.camel.component.jms.JmsConfiguration">
	<property name="connectionFactory" ref="pooledConnectionFactory" />
	<property name="concurrentConsumers" value="10" />
</bean>
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
	<property name="configuration" ref="jmsConfig"/>
</bean>

When starting the container a provision exception is thrown, specifically: missing requirement [fabricdemo/1.0.0.SNAPSHOT] osgi.wiring.package; filter:="(&(osgi.wiring.package=org.apache.activemq.camel.component)(version>=5.9.0)(!(version>=6.0.0)))"

Again, it is not just caused by the discovery brokerURL.

The exception is thrown because the JBoss Fuse minimal profile does not provide the ActiveMQ component. Even after adding the dependencies for this component to your project the exception persists; the reason is that the component is not exported by default in the OSGi bundle, which means other bundles cannot use it. Exporting the class implementing the component solves this issue.
To export the ActiveMQ component class we need to extend the configuration of the Apache Felix maven-bundle-plugin and tell the plugin to export the ActiveMQ component. This can be done by adding the following line to the configuration:

<Export-Package>org.apache.activemq.camel.component</Export-Package>

The entire plugin now looks like this:

<plugin>
	<groupId>org.apache.felix</groupId>
	<artifactId>maven-bundle-plugin</artifactId>
	<version>2.3.7</version>
	<extensions>true</extensions>
	<configuration>
		<instructions>
			<Bundle-SymbolicName>fabricdemo</Bundle-SymbolicName>
			<Private-Package>nl.rubix.camel-activemq.*</Private-Package>
			<Import-Package>*</Import-Package>
			<Export-Package>org.apache.activemq.camel.component</Export-Package>
		</instructions>
	</configuration>
</plugin>

As a side note: to use the discovery broker URL the mq-fabric feature must be added to the profile. For this demo I used the fabric8 Maven plugin, which I configured as follows:

<plugin>
	<groupId>io.fabric8</groupId>
	<artifactId>fabric8-maven-plugin</artifactId>
	<configuration>
		<profile>activemqtest</profile>
		<features>mq-fabric</features>
	</configuration>
</plugin>
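With this configuration in place, the profile is created or updated by invoking the plugin's deploy goal from the project directory (this assumes the fabric8-maven-plugin 1.x that ships with JBoss Fuse 6.1):

$ mvn fabric8:deploy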

After the modification to the pom.xml the container will start properly:

[Image: Fabric-MQ-provision-error-3]

Anyway I hope this helps someone.

 
Posted on 2014-12-24 in JBoss Fuse

Stopping a fabric container after ‘container has not been created by fabric’ error message

When creating a new fabric, sometimes there are still containers of the old fabric running in the background. These can cause conflicts with the ‘new’ containers, especially when the new fabric uses the same ZooKeeper credentials as the old one.

Note, this post will only explain how to stop and delete a container after the ‘container has not been created by fabric’ error message. It may very well be that other configurations and background processes of the old fabric are still running, which may cause unexpected behavior. If you have the possibility to start your fabric from scratch (for instance on a developer machine) it may be better to restart Fuse with ./fuse -clean and create a new fabric with the command fabric:create --clean

For more information about the available commands see the Fuse documentation:

https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.1/html/Console_Reference/files/ConsoleRefIntro.html

One of the problems is that when trying to start, stop or delete a container you get the error message ‘container <<your container>> has not been created by fabric’.

[Image: container-stop-error-1]

Also trying to stop/start/delete the container from the Fuse karaf console will result in the same message.

JBossFuse:karaf@root> fabric:container-list
[id] [version] [connected] [profiles] [provision status]
root* 1.0 true fabric, fabric-ensemble-0000-1 success
  demoContainer 1.0 true jboss-fuse-full, GettingStarted success
JBossFuse:karaf@root> fabric:container-stop demoContainer
Error executing command: Container demoContainer has not been created using Fabric

So how can we stop this container?

To stop this container we need to take two steps:

1) install the ‘zookeeper-commands’ feature in the root container

2) use zookeeper commands to find the PID of the container so we can kill it through the OS.

1) Installing the ‘zookeeper-commands’ feature in the root container

In a fabric, features are also controlled by fabric profiles; for this reason the good old ‘feature:install’ command will not work.

I resorted to installing the zookeeper-commands feature through the Hawtio console.

In the Hawtio console go to ‘Wiki’ → edit fabric features → search for zookeeper and select zookeeper-commands → click the ‘Add’ button and finally click ‘Save’.

In the Hawtio console, go to the wiki and select the fabric profile.

[Image: container-stop-error-2]

Then select the edit features button at the bottom of the features list:

[Image: container-stop-error-3]

Search for zookeeper, select zookeeper-commands → click ‘Add’ → click ‘Save changes’

[Image: container-stop-error-4]

2) Stopping the container

After installing the zookeeper-commands feature we can use the zookeeper commands in the Fuse/karaf shell:

To find the PID of the container we want to stop, execute the following command:

JBossFuse:karaf@root> zk:list -r -d|grep demoContainer|grep pid
/fabric/registry/containers/status/demoContainer/pid = 7177

Now in a ‘regular’ shell we can kill the process (and thus the container) with this PID.

[jboss@localhost ]$ ps -ef|grep 7177
jboss     7177     1  2 12:30 pts/0    00:03:02 java -server -Dcom.sun.management.jmxremote -Dzookeeper.url=10.0.2.15:2181 -Dzookeeper.password.encode=true -Dzookeeper.password=ZKENC=YWRtaW4= -Xmx512m -XX:+UnlockDiagnosticVMOptions -XX:+UnsyncloadClass -Dbind.address=0.0.0.0 -Dio.fabric8.datastore.component.name=io.fabric8.datastore -Dio.fabric8.datastore.type=caching-git -Dio.fabric8.datastore.felix.fileinstall.filename=file:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/etc/io.fabric8.datastore.cfg -Dio.fabric8.datastore.service.pid=io.fabric8.datastore -Dio.fabric8.datastore.gitPullPeriod=1000 -Djava.util.logging.config.file=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer/etc/java.util.logging.properties -Djava.endorsed.dirs=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/jre/lib/endorsed:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/lib/endorsed:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/endorsed -Djava.ext.dirs=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/jre/lib/ext:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.60-2.4.7.0.fc20.x86_64/jre/lib/ext:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/ext -Dkaraf.home=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379 -Dkaraf.base=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer -Dkaraf.data=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer/data -Dkaraf.etc=/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/instances/demoContainer/etc -Dkaraf.startLocalConsole=false -Dkaraf.startRemoteShell=true -classpath /home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/org.apache.servicemix.specs.activator-2.3.0.redhat-610379.jar:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/karaf.jar:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/karaf-jaas-boot.jar:/home/jboss/Apps/jboss-fuse-6.1.0.redhat-379/lib/esb-version.jar org.apache.karaf.main.Main
[jboss@localhost ]$ kill -9 7177
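As a side note, the lookup and kill can be combined into a single shell one-liner; treat it as a sketch and double-check what it matches first, since it kills every process whose command line contains the container name:

$ kill -9 $(ps -ef | grep demoContainer | grep -v grep | awk '{print $2}')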

Now the container is stopped, but it still exists, and we still can’t start or stop it through the regular process. So in the next step I’ll explain how to delete the container.

Deleting the container

To remove the container from the fabric we need to remove all its zookeeper entries. To do this we first need to know which entries to remove.

We can use the zk:list command again for retrieving the zookeeper entries.

JBossFuse:karaf@root> zk:list -r|grep demo
/fabric/configs/containers/demoContainer
/fabric/configs/versions/1.0/containers/demoContainer
/fabric/registry/containers/config/demoContainer
/fabric/registry/containers/config/demoContainer/bindaddress
/fabric/registry/containers/config/demoContainer/maximumport
/fabric/registry/containers/config/demoContainer/localhostname
/fabric/registry/containers/config/demoContainer/localip
/fabric/registry/containers/config/demoContainer/publicip
/fabric/registry/containers/config/demoContainer/parent
/fabric/registry/containers/config/demoContainer/resolver
/fabric/registry/containers/config/demoContainer/minimumport
/fabric/registry/containers/config/demoContainer/manualip
/fabric/registry/containers/config/demoContainer/ip
/fabric/registry/ports/containers/demoContainer
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.shell
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.shell/sshPort
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.management
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.management/rmiServerPort
/fabric/registry/ports/containers/demoContainer/org.apache.karaf.management/rmiRegistryPort
/fabric/registry/ports/containers/demoContainer/org.ops4j.pax.web
/fabric/registry/ports/containers/demoContainer/org.ops4j.pax.web/org.osgi.service.http.port

Now that we know all the entries of the container we want to delete, we can delete them using the zk:delete command. Make sure you use the -r parameter to recursively remove all entries.

JBossFuse:karaf@root> zk:delete -r /fabric/configs/containers/demoContainer
JBossFuse:karaf@root> zk:delete -r /fabric/configs/versions/1.0/containers/demoContainer
JBossFuse:karaf@root> zk:delete -r /fabric/registry/containers/config/demoContainer
JBossFuse:karaf@root> zk:delete -r /fabric/registry/ports/containers/demoContainer

Now when we execute the fabric:container-list command the demoContainer is no longer in the list.

JBossFuse:karaf@root> fabric:container-list
[id]                           [version] [connected] [profiles]                                         [provision status]
root*                          1.0       true        fabric, fabric-ensemble-0000-1                     success
 
Posted on 2014-09-20 in JBoss Fuse