
3scale policy development – part 2 generate a policy scaffold

In the first part of our multi-part blog series about 3scale policy development we looked into the setup of a development environment. Now that we have a functioning development environment we can start the actual development of the 3scale policy. In this part we will take a look at the scaffolding utility provided by APIcast and use it to generate a policy scaffold.

The first thing we are going to do is create a new git branch of the APIcast source we have cloned in the previous part. This is an optional step, but developing a new feature or changing code in general in a new branch is a good habit to get into. So create a new branch and start up our development container.

$ git checkout -b policy-development-blog
Switched to a new branch 'policy-development-blog'
$ make development

To generate the scaffold of our policy we can use the apicast utility located in the bin/ directory of our development container.
So in the development container issue the following command:

$ bin/apicast generate policy hello_world

where hello_world is the name of the policy.

bash-4.2$ bin/apicast generate policy hello_world
source: /home/centos/examples/scaffold/policy
destination: /home/centos

exists: t
created: t/apicast-policy-hello_world.t
exists: gateway
exists: gateway/src
exists: gateway/src/apicast
exists: gateway/src/apicast/policy
created: gateway/src/apicast/policy/hello_world
created: gateway/src/apicast/policy/hello_world/hello_world.lua
created: gateway/src/apicast/policy/hello_world/init.lua
created: gateway/src/apicast/policy/hello_world/apicast-policy.json
exists: spec
exists: spec/policy
created: spec/policy/hello_world
created: spec/policy/hello_world/hello_world_spec.lua
bash-4.2$

As you can see from the output of the generate policy command, a few files have been created. The artifacts related to our policy are located in three different directories:

  • t/ – this directory contains all Nginx integration tests
  • gateway/src/apicast/policy – this directory contains the source code and configuration schemas of all policies. Our policy resides in the hello_world subdirectory
  • spec/policy – this directory contains the unit tests of all policies. The unit tests for our policy reside in the hello_world subdirectory

So the policy scaffolding utility not only generates a scaffold for our policy, but also the files for a configuration schema, unit tests and integration tests. Let’s have a look at these files.

The source code of our policy, residing in the directory gateway/src/apicast/policy/hello_world, consists of three files.

  • init.lua – all policies contain this init.lua file. It contains a single line importing (require in Lua) our policy. It should not be modified.
  • apicast-policy.json – The APIcast gateway is configured using a JSON document, and policies requiring configuration also use this document. The apicast-policy.json file is a JSON schema file where configuration properties for the policy can be defined. We will look at configuration properties and this file in more detail in the next part of this blog series.

 

{
  "$schema": "http://apicast.io/policy-v1/schema#manifest#",
  "name": "hello_world",
  "summary": "TODO: write policy summary",
  "description": [
      "TODO: Write policy description"
  ],
  "version": "builtin",
  "configuration": {
    "type": "object",
    "properties": { }
  }
}
  • hello_world.lua – This is the actual source code of our policy, which at the moment does not contain much.

 

-- This is a hello_world description.
local policy = require('apicast.policy')
local _M = policy.new('hello_world')
local new = _M.new
--- Initialize a hello_world
-- @tparam[opt] table config Policy configuration.
function _M.new(config)
  local self = new(config)
  return self
end
return _M

The first two lines import the APIcast policy module and instantiate a new policy with hello_world as an argument. This returns the module itself, which is implemented using a Lua table. Lua is not an object-oriented language in itself, but tables (and especially metatables) can mimic objects. The next line stores a reference to the function new, which is then redefined below. The new function takes a config variable as an argument, but as of now nothing is done with it; it simply returns a new instance of the policy. Finally the module representing our policy is returned, so that other components importing this policy module retrieve the table and can invoke all functions and variables stored in the policy.
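The actual behaviour of a policy will later go into functions named after the Nginx phases it hooks into (rewrite, access, header_filter, and so on). As an illustrative sketch only (the access handler below is not part of the generated scaffold, it just shows where such logic would live):

local policy = require('apicast.policy')
local _M = policy.new('hello_world')

local new = _M.new

function _M.new(config)
  local self = new(config)
  return self
end

-- a function named after an nginx phase is executed by APIcast in that phase
function _M:access()
  ngx.log(ngx.INFO, 'hello_world policy running in the access phase')
end

return _M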
We won’t cover all the files in detail here, since we will touch on them in upcoming parts when we flesh out our policy with functionality.
But as a final verification, to see if we have something working, let’s run the unit tests again.
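The Lua unit tests are run from inside the development container, the same way as during the environment setup:

$ make busted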

The keen observer will notice that the number of successes in the unit test output has increased from 749 to 751 after we generated the scaffold for our policy.

In the next part we will take a closer look at the JSON configuration schema file and at how we can read configuration values from the JSON configuration, as well as from environment variables, for use in our policy.


Posted on 2019-03-22 in API Management

 


3scale policy development – part 1 setting up a development environment


In this multi-part blog series we are going to dive into the development, testing and deployment of a custom 3scale APIcast policy. In this initial part we are going to set up a development environment so we can actually start the development of our policy.

But before we begin, let’s first take a look at what a 3scale APIcast policy is. We are not going into too much detail here, since better and more detailed descriptions of 3scale APIcast policies already exist.

For those unfamiliar with it, 3scale is a full API management solution from Red Hat. It consists of an API Manager used for account management, analytics and overall configuration; a developer portal used by outside developers for gaining access to APIs and viewing the documentation; and the API gateway named APIcast. The APIcast gateway is based on Nginx, and more specifically OpenResty, a distribution of Nginx compiled with various modules, most notably the lua-nginx-module.

The lua-nginx-module provides the ability to enhance an Nginx server by executing scripts written in the Lua programming language. This is done by providing a Lua hook for each of the Nginx phases. Nginx works using an event loop and a state model where every request (as well as the starting of the server and its worker processes) goes through various phases. Each phase can execute a specific Lua function.

An overview of the various phases and corresponding Lua hooks can be found in the README of the lua-nginx-module: https://github.com/openresty/lua-nginx-module#directives
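To give a feel for what such a hook looks like in plain OpenResty (outside APIcast; the location and log messages below are made up for the example), an Nginx location can attach Lua code to, for instance, the access and content phases:

location /hello {
    access_by_lua_block {
        -- runs in the access phase, before a response is produced
        ngx.log(ngx.INFO, "access phase hook")
    }
    content_by_lua_block {
        -- runs in the content phase and generates the response body
        ngx.say("hello from the content phase")
    }
}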

Since the APIcast gateway uses OpenResty, 3scale provides a way to leverage these Lua hooks in the Nginx server using something called policies. As described in the APIcast README:

“The behaviour of APIcast is customizable via policies. A policy basically tells APIcast what it should do in each of the nginx phases.”

A detailed explanation of policies can be found in the same README: https://github.com/3scale/apicast/blob/master/doc/policies.md

Setting up the development environment

As was clear from the introduction, APIcast policies are created in the Lua programming language, so we need to set up an environment to do some Lua programming. An actual APIcast server to perform some local tests would also be very nice.

Luckily the folks at 3scale have made it very easy to set up a development environment for APIcast using Docker and Docker Compose.

Pre-requisites:

Both Docker and Docker Compose must be installed.

The version of Docker I currently use is:

Docker version 18.09.2, build 6247962

Instructions for installing Docker can be found on the Docker website.

With Docker compose version:

docker-compose version 1.23.1, build b02f1306

Instructions for installing Docker-compose can also be found on the Docker website.

Setting up the APIcast development image:

Now that we have both Docker and Docker Compose installed we can set up the APIcast development image.

Firstly the APIcast git repository must be cloned so we can start the development of our policy. Since we are going to base our policy on the latest 3scale release, we switch to a stable branch of APIcast.

$ git clone https://github.com/3scale/apicast.git

When done, switch to a stable branch; I am using 3.3:

$ cd apicast/

$ git checkout 3.3-stable

To start the APIcast containers using Docker Compose we can use the Makefile provided by 3scale. In the APIcast directory simply execute the command:

$ make development

The Docker container starts in the foreground with a bash session. The first thing we need to do inside the container is to install all the dependencies.

This can also be done using a Make command, which again must be issued inside the container.

$ make dependencies

It will now download and install a plethora of dependencies inside the container.

The output will be very long, but if everything went well all dependencies are downloaded and installed without errors.

Now as a final verification we can run some APIcast unit tests to see if we are up and running and ready to start the development of our policy.

To run the Lua unit tests run the following command inside the container:

$ make busted

Now that we can successfully run unit tests we can start our policy development!

The project’s source code will be available in the container and sync’ed with your local apicast directory, so you can edit files in your preferred environment and still be able to run whatever you need inside the Docker container.

The development container for APIcast uses a Docker volume mount to mount the local apicast directory inside the container. This means all files changed locally in the repository are synced with the container and used in the tests and runtime of the development container.

It also means you can use your favorite IDE or editor to develop your 3scale policy.

Optional: setting up an IDE for policy development

The choice of an IDE or text editor, and more specifically which one, is very personal, so there is definitely no one size fits all here. But for those looking for a dedicated Lua IDE, ZeroBrane Studio is a good choice.

Since I come from a Java background I am very used to working with IntelliJ IDEA, and luckily there are some plugins available that make Lua development a little bit nicer.

These are the plugins I installed for developing Lua code and 3scale policies in particular:

And for Openresty/Nginx there is also a plugin:

As a final step, more relevant if you are also planning to develop some OpenResty-based applications locally (outside the APIcast development container), you can install OpenResty based on the instructions on their website.

What I did was link the Lua runtime engine of OpenResty, which is LuaJIT, as the SDK in my IntelliJ IDEA, so that I am developing code against the same LuaJIT engine OpenResty uses.

As I already mentioned these steps are not required for developing policies in APIcast, and you definitely do not need to use IntelliJ IDEA. But having a good IDE or Text editor, whatever your choice, can make your development life a little bit easier.

Now we are ready to create a 3scale APIcast policy, which is the subject of the next part!

 

Posted on 2019-03-01 in API Management

 


Authenticating a JMS consumer with 3Scale, Camel and ActiveMQ

3Scale is an API Management platform used for authenticating and throttling API calls, among many, many other things. Now when thinking of APIs most people think of RESTful APIs these days. And although 3Scale primarily targets RESTful APIs, it is also possible to use other types of APIs, as this blog will demonstrate. In this post we will use a Camel JMS subscriber in combination with ActiveMQ and authenticate requests against the 3Scale API Management platform.

First let’s look at the 3Scale setup.

The first step is to create a new service. Normally one would select one of the APIcast API gateway options for handling the API traffic; this time, however, we are selecting the Java plugin option, since Camel is based on Java. Obviously the same principles could be applied in any of the other programming languages for which plugins are available.

The next step is to go to the integration page. But where we would normally configure the mapping rules of our RESTful API, we now get instructions to implement the Java plugin.

 

It is good to note that the rest of the 3Scale setup is completely default. The default Hits metric is used as shown below, although custom methods could easily be defined.

 

For this example one application plan with a rate limit has been configured.

Integrating the 3Scale Java plugin with Apache Camel

Apache Camel has numerous ways of integrating custom code and creating customizations. For this example a custom processor is created, although a bean or component would also work.

The first step is to import the 3Scale java plugin dependency via Maven, by adding the following to the pom.xml file:

 

<dependency>
    <groupId>net.3scale</groupId>
    <artifactId>3scale-api</artifactId>
    <version>3.0.4</version>
</dependency>

Now we can integrate the 3Scale Java plugin in our Camel processor, which is going to retrieve the 3Scale appId and appKey, used for authentication, from JMS headers. With the appId and appKey the 3Scale API is called for authentication. However, this is not the only thing we need to pass in our request towards 3Scale. To authenticate against 3Scale and select the correct 3Scale account and service, we need to pass the serviceId of the service we created above along with the accompanying service token. Since these are fixed per environment we retrieve these values from a properties file. Finally we need to increment the Hits metric. Once all these parameters are passed in the request we can invoke 3Scale and authenticate our request. If we are authenticated and authorized for this API, the processor finishes and the Camel route execution continues. However, when we are not authenticated we stop the route and any further processing.
The entire processor looks like this:

package nl.rubix.eos.api.camelthreescale.processor;

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.RuntimeCamelException;
import org.apache.deltaspike.core.api.config.ConfigProperty;
import threescale.v3.api.AuthorizeResponse;
import threescale.v3.api.ParameterMap;
import threescale.v3.api.ServerError;
import threescale.v3.api.ServiceApi;
import threescale.v3.api.impl.ServiceApiDriver;

import javax.inject.Inject;
import javax.inject.Named;

@Named("authRepProcessor")
public class AuthRepProcessor implements Processor {

  @Inject
  @ConfigProperty(name = "SERVICE_TOKEN")
  private String serviceToken;

  @Inject
  @ConfigProperty(name = "SERVICE_ID")
  private String serviceId;

  @Override
  public void process(Exchange exchange) throws Exception {
    String appId = exchange.getIn().getHeader("appId", String.class);
    String appKey = exchange.getIn().getHeader("appKey", String.class);

    AuthorizeResponse authzResponse = authrep(createParametersMap(appId, appKey));

    if(authzResponse.success() == false) {
      exchange.setProperty(Exchange.ROUTE_STOP, true);
      exchange.getIn().setHeader("authz:errorCode", authzResponse.getErrorCode());
      exchange.getIn().setHeader("authz:reason", authzResponse.getReason());
    }

  }

  private ParameterMap createParametersMap(String appId, String appKey) {
    ParameterMap params = new ParameterMap();
    params.add("app_id", appId);
    params.add("app_key", appKey);

    ParameterMap usage = new ParameterMap();
    usage.add("hits", "1");
    params.add("usage", usage);

    return params;
  }

  private AuthorizeResponse authrep(ParameterMap params) {

    ServiceApi serviceApi = ServiceApiDriver.createApi();

    AuthorizeResponse response = null;

    try {
      response = serviceApi.authrep(serviceToken, serviceId, params);
    } catch (ServerError serverError) {
      serverError.printStackTrace();
      throw new RuntimeCamelException(serverError.getMessage(), serverError.getCause());
    }
    return response;
  }
}

We simply use this processor in our Camel route to add the 3Scale functionality:

package nl.rubix.eos.api.camelthreescale;

import io.fabric8.annotations.Alias;
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.ContextName;

import javax.inject.Inject;

@ContextName("activemq-camel-api")
public class ActiveMqCamelApi extends RouteBuilder{

  @Inject
  @Alias("jms")
  private ActiveMQComponent activeMQComponent;

  @Override
  public void configure() throws Exception {
      from("jms:queue:test")
        .log("received message")
        .process("authRepProcessor")
        .log("request authenticated and authorized");
  }
}

When we send a request with the correct appId and appKey in the JMS headers, we can see in the logs that the request is authenticated and passes the processor:

2018-03-10 20:28:40,294 [cdi.Main.main()] INFO  DefaultCamelContext            - Route: route1 started and consuming from: Endpoint[jms://queue:test]
2018-03-10 20:28:40,295 [cdi.Main.main()] INFO  DefaultCamelContext            - Total 1 routes, of which 1 are started.
2018-03-10 20:28:40,295 [cdi.Main.main()] INFO  DefaultCamelContext            - Apache Camel 2.17.0.redhat-630187 (CamelContext: activemq-camel-api) started in 0.512 seconds
2018-03-10 20:28:40,318 [cdi.Main.main()] INFO  Bootstrap                      - WELD-ENV-002003: Weld SE container STATIC_INSTANCE initialized
2018-03-10 20:29:37,157 [sConsumer[test]] INFO  route1                         - received message
2018-03-10 20:29:38,307 [sConsumer[test]] INFO  route1                         - request authenticated and authorized

And of course we can see the metrics in 3Scale.

Now this processor discards the message when the authentication by 3Scale fails, but it is of course possible to send unauthorized messages to a dedicated error queue (for instance along the lines of the sketch below), or to make the entire route transactional and simply not send an ACK when authentication fails.
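A minimal sketch of the error-queue variant, assuming the processor is changed to only set the authz:* headers instead of stopping the route (the unauthorized queue name is made up for this example; the route goes inside the configure() method shown above):

from("jms:queue:test")
  .log("received message")
  .process("authRepProcessor")
  .choice()
    .when(header("authz:errorCode").isNotNull())
      .log("request rejected by 3Scale, sending to error queue")
      .to("jms:queue:test.unauthorized")
    .otherwise()
      .log("request authenticated and authorized");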

The entire code of this example is available on Github.

 

Posted on 2018-03-10 in API Management

 


Camel setting exchange headers in a custom dataformat

Apache Camel is a great framework with dozens (hundreds even) of components, data formats and expression languages. However, one of the things that makes Camel even greater is the various ways to provide your own customizations of these items. In this blog we are going to create a custom data format used in the marshal and unmarshal statements within a Camel route.

Creating your custom data format is pretty straightforward: all we have to do is provide an implementation of the DataFormat interface, which looks like this:


public interface DataFormat {
    void marshal(Exchange exchange, Object graph, OutputStream stream) throws Exception;
    Object unmarshal(Exchange exchange, InputStream stream) throws Exception;
}

Within the marshal method you can take whatever is on the exchange, or the body of the exchange, which is passed in as the second argument, and serialize it into the OutputStream. The unmarshal method however threw me off a bit by also taking the exchange as an argument. Let me explain.
The object returned from the unmarshal method is normally put in the exchange body. But what if you, for some reason, need to put some parts of the original message into exchange headers instead of the body? I initially thought I could leverage the exchange object being passed as an argument into the unmarshal method. However, this did not seem to work. It seems the exchange object is only used as an input parameter of the unmarshal method; it is not the same reference as the exchange object used downstream in the route.

Message to the rescue!

One elegant way to set both the exchange body and the headers is to return a Camel Message object, on which we can set both.


public interface Message {
    ...
    void setHeader(String name, Object value);
    ...
    Map<String, Object> getHeaders();
    void setHeaders(Map<String, Object> headers);
    ...
    void setBody(Object body);
    <T> void setBody(Object body, Class<T> type);
    ...

Returning a Message object from the unmarshal method of the custom data format is respected by Camel: the message is placed on the exchange using Camel's type conversion.

Samples

In order to demonstrate this we create two of the silliest of data formats: a string reverser. One returns the body, the other returns the message.

We begin by implementing the unmarshal method returning a String:


@Override
public Object unmarshal(Exchange exchange, InputStream inputStream) throws Exception {
    String originalBody = exchange.getIn().getBody(String.class);
    String reversedBody = new StringBuilder(originalBody).reverse().toString();

    // The statement below does nothing
    exchange.getIn().setHeader("MyAwesomeHeader", reversedBody);

    return reversedBody;
}

However the setHeader on the exchange is ignored as we can see in the log:


2018-01-10 17:53:01,656 [main ] INFO route1 - original body: Hello world

2018-01-10 17:53:01,659 [main ] INFO route1 - the body contains: dlrow olleH

2018-01-10 17:53:01,660 [main ] INFO route1 - the headers contain: {breadcrumbId=ID-15-9530-37168-1515603181324-0-1}

Even modifying the unmarshal method to return the exchange does not help:


@Override
public Object unmarshal(Exchange exchange, InputStream inputStream) throws Exception {
    String originalBody = exchange.getIn().getBody(String.class);
    String reversedBody = new StringBuilder(originalBody).reverse().toString();

    // The statement below still does nothing
    exchange.getIn().setHeader("MyAwesomeHeader", reversedBody);

    // returning the exchange itself does not work either
    return exchange;
}

 


2018-01-10 17:50:08,847 [main ] INFO route1 - original body: Hello world

2018-01-10 17:50:08,849 [main ] INFO route1 - the body contains: Hello world

2018-01-10 17:50:08,850 [main ] INFO route1 - the headers contain: {breadcrumbId=ID-15-9530-36027-1515603008530-0-1}

Lastly we create a data format returning a Camel Message:


@Override
public Object unmarshal(Exchange exchange, InputStream inputStream) throws Exception {
    // set the message to the original in-message of the exchange; this preserves the body and all headers previously present on the exchange
    Message response = exchange.getIn();

    String originalBody = exchange.getIn().getBody(String.class);
    String reversedBody = new StringBuilder(originalBody).reverse().toString();

    response.setBody(reversedBody, String.class);
    response.setHeader("MyAwesomeHeader", reversedBody);

    return response;
}

Now when we look at the log we can see our header being set:


2018-01-10 17:56:04,135 [main ] INFO route2 - original body: Hello world
2018-01-10 17:56:04,138 [main ] INFO route2 - the body contains: dlrow olleH
2018-01-10 17:56:04,139 [main ] INFO route2 - the headers contain: {breadcrumbId=ID-15-9530-37739-1515603363797-0-1, MyAwesomeHeader=dlrow olleH}

Example code can be found on GitHub.

 

Posted on 2018-01-10 in JBoss Fuse

 


Openshift Fuse health checks with Jolokia

I recently ran into a problem where I needed to create an Openshift healthcheck for a Fuse container. Normally all Fuse containers expose an HTTP endpoint which can be used in the healthcheck. However, additional security requirements dictated the use of client certificates, and currently it is not possible to create a healthcheck over a two-way SSL connection.

As another way to monitor whether all Camel routes are started, I decided to leverage the JMX beans exposed by Jolokia. For those unfamiliar with Jolokia: it essentially is JMX over JSON/HTTP.

 

The problem was that by default the Jolokia endpoint is secured with basic authentication, and the password is generated for each created container in Openshift.

However, the Jolokia password is available inside each container (the FIS 2 fis-java base image was used) at /opt/jolokia/etc/jolokia.pw

This file can be used to execute the curl request to the Jolokia endpoint:

curl -k -v https://jolokia:`cat /opt/jolokia/etc/jolokia.pw`@localhost:8778/jolokia/exec/org.apache.camel:context=myCamelContext,type=routes,name=%22myJettyEndpoint%22/getState%28%29

In the above example the state of the “myJettyEndpoint” Camel route is requested.
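A hedged sketch of how this can be wired into an Openshift probe, using an exec livenessProbe that runs the curl inside the container (the grep on the expected "Started" value in the Jolokia JSON response, as well as the context and route names, are assumptions to adjust to your own setup):

livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - curl -s -k "https://jolokia:`cat /opt/jolokia/etc/jolokia.pw`@localhost:8778/jolokia/exec/org.apache.camel:context=myCamelContext,type=routes,name=%22myJettyEndpoint%22/getState%28%29" | grep -q '"value":"Started"'
  initialDelaySeconds: 60
  periodSeconds: 30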

 

Posted on 2017-11-24 in JBoss Fuse, Openshift

 


Openshift limits explained

Openshift is a PaaS platform offered by Red Hat, based mainly on Docker and Kubernetes. One of the concepts behind it is that Ops can set boundaries for Dev, for example by providing a list of supported technologies in the form of base images.
One other way Ops can further control the PaaS cluster is to impose various limits on the components running in Openshift.

However, Openshift currently has three different ways of setting restrictions on different levels, which interconnect in an implicit way. This can become difficult to set up properly, and people end up with Pods never leaving the “Pending” state. So in this blog post we are going to take a look at the different limits and restrictions available in Openshift and how they influence each other.

But to understand the restrictions better it is good to know some basic Openshift concepts and components on which these limits act. I highly recommend starting to experiment with restrictions and limits only after you have become familiar with Openshift.

Below are the components of Openshift influenced by the restrictions. (source: https://docs.openshift.com/enterprise/3.0/architecture/core_concepts/index.html)

Containers

The basic units of OpenShift applications are called containers. Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. Many application instances can be running in containers on a single host without visibility into each others’ processes, files, network, and so on. Typically, each container provides a single service (often called a “micro-service”), such as a web server or a database, though containers can be used for arbitrary workloads.

The Linux kernel has been incorporating capabilities for container technologies for years. More recently the Docker project has developed a convenient management interface for Linux containers on a host. OpenShift and Kubernetes add the ability to orchestrate Docker containers across multi-host installations.

Pods

OpenShift leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed.

Namespaces

A Kubernetes namespace provides a mechanism to scope resources in a cluster. In OpenShift, a project is a Kubernetes namespace with additional annotations.

Namespaces provide a unique scope for:

  • Named resources to avoid basic naming collisions.
  • Delegated management authority to trusted users.
  • The ability to limit community resource consumption.

Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users.

The Kubernetes documentation has more information on namespaces.

 

Openshift limits and restrictions

There are three different types of limits and restrictions available in Openshift.

  • Quotas
  • Limit ranges
  • Compute resources

Quotas

Quotas are boundaries configured per namespace and act as an upper limit for resources in that particular namespace. A quota essentially defines the capacity of the namespace. How this capacity is used is up to the user of the namespace: whether the total capacity is used by one pod or by one hundred pods is not dictated by the quota, except when a maximum number of pods is configured.

Like most things in Openshift, you can configure a quota with a YAML configuration. A basic configuration for a quota looks like:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
spec:
  hard:
    pods: "5" 
    requests.cpu: "500m" 
    requests.memory: 512Mi 
    limits.cpu: "2" 
    limits.memory: 2Gi 

This quota says that the namespace can have a maximum of 5 pods and/or a maximum of 2 cores and 2 GiB of memory; the initial “claim” of this namespace is 500 millicores and 512 MiB of memory.

Now it is important to note that by default these limits are imposed on “NotTerminating” pods only, meaning that, for example, build pods which eventually terminate are not counted in this quota.

This can be configured explicitly by adding a scope to the YAML spec:


apiVersion: v1
kind: ResourceQuota
metadata:
  name: namespace-quota
spec:
  hard:
    pods: "5" 
    requests.cpu: "500m" 
    requests.memory: 512Mi 
    limits.cpu: "2" 
    limits.memory: 2Gi 
scopes:
- NotTerminating

There are also other scopes available, such as Terminating, BestEffort and NotBestEffort.
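For example, a hedged sketch of a quota that only applies to BestEffort pods; note that a BestEffort-scoped quota can only restrict the number of pods, since such pods carry no resource requests or limits (the name and value are made up):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: besteffort-quota
spec:
  hard:
    pods: "2"
  scopes:
  - BestEffort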

Limit ranges

One other type of limit is the “limit range”. A limit range is also configured on a namespace; however, a limit range defines limits per pod and/or container in that namespace. It essentially provides CPU and memory limits for containers and pods.

Again, configuring a limit range is done with a YAML configuration:


apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "resource-limits" 
spec:
  limits:
    -
      type: "Pod"
      max:
        cpu: "2" 
        memory: "1Gi" 
      min:
        cpu: "200m" 
        memory: "6Mi" 
    -
      type: "Container"
      max:
        cpu: "2" 
        memory: "1Gi" 
      min:
        cpu: "100m" 
        memory: "4Mi" 
      default:
        cpu: "300m" 
        memory: "200Mi" 
      defaultRequest:
        cpu: "200m" 
        memory: "100Mi"

Here we can see both Pod and Container limits. These limits define the “range” (hence the term limit range) for each container or pod in the namespace. So in the above example each Pod in the namespace must request at least 200 millicores and 6 MiB of memory and can use at most 2 cores and 1 GiB of memory. The actual limits a Pod or container runs with can be defined in the Pod or Container spec, which we will look at below; the limit range defines the range within which those limits must fall.

Another thing to note is the default and defaultRequest values in the Container limit range. These are applied to containers that do not specify resources themselves: default sets the container's limit and defaultRequest sets its request, as illustrated below.
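For instance, under the limit range above, a container deployed without its own resources section effectively runs as if the following had been specified (an illustrative sketch):

resources:
  requests:
    cpu: 200m        # from defaultRequest in the limit range
    memory: 100Mi
  limits:
    cpu: 300m        # from default in the limit range
    memory: 200Mi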

Compute resources

The last of the limits is probably the easiest to understand: compute resources are defined in the Pod or Container spec, for example in the deployment config or the replication controller, and define the CPU and memory limits for that particular pod.

Let's look at an example YAML:

 


apiVersion: v1
kind: Pod
spec:
  containers:
  - image: nginx
    name: nginx
    resources:
      requests:
        cpu: 100m 
        memory: 200Mi 
      limits:
        cpu: 200m 
        memory: 400Mi 

In the above spec the Pod requests 100 millicores and 200 MiB of memory and will max out at 200 millicores and 400 MiB of memory. Note that whenever a limit range is also defined in the namespace where the above Pod runs, and the compute resources specified here fall within that limit range, the Pod will run fine. If however the limits fall outside the limit range, the Pod will not start.

Scopes

All limits have a request and a max which define a further range the Pod can operate in, where the request is for all intents and purposes “guaranteed” (as long as the underlying node has the capacity). This implicitly gives the option to set different QoS tiers, as the sketch after this list illustrates.

  1. BestEffort – no limits are provided whatsoever. The Pod claims whatever it needs, but is the first one to get scaled down or killed when other Pods request resources.
  2. Burstable – The request limits are lower than the max limits. The initial request is guaranteed, but the Pod can, if resources are available, burst to its maximum.
  3. Guaranteed – the request and max are identical, so the Pod directly claims the max resources; even though it might not initially use all of them, they are already claimed from the cluster and therefore guaranteed.
Below is an overall view of the three different Openshift limits.

 

 

Posted on 2017-08-25 in Openshift

 


AES-256 message encryption in Apache Camel

This blog post shows how to encrypt and decrypt the payload of a message using Apache Camel. The cryptographic algorithm used in this example is AES-256, since this was an explicit request from security. The key used in the example is obtained from a keystore.

For extra security, AES encryption can be extended by using a so-called Initialization Vector, which is similar to a nonce: a random value used per request. In this example a random 16-byte byte[] is used.

For more information about AES encryption: https://en.wikipedia.org/wiki/Advanced_Encryption_Standard

For more information about Initialization Vectors: https://en.wikipedia.org/wiki/Initialization_vector

To encrypt and decrypt messages Camel provides an abstraction independent of the algorithm used for encryption and decryption. This abstraction is called the CryptoDataFormat and can be used like any other data format. The CryptoDataFormat is well documented here: http://camel.apache.org/crypto.html However, its use with the AES-256 encryption algorithm is less well documented, so hopefully this post helps someone.

 

Generating the key and the keystore

The first step is to generate the key we are going to use for encryption AND decryption (remember this is symmetric encryption, meaning the same key is used for encryption as for decryption, as opposed to PKI which is an asymmetric encryption technique).

For generating the key we can use keytool. The example below I conveniently borrowed from this blog post: https://dzone.com/articles/aes-256-encryption-java-and

keytool -genseckey -keystore aes-keystore.jck -storetype jceks -storepass mystorepass -keyalg AES -keysize 256 -alias jceksaes -keypass mykeypass

To retrieve the key from the keystore I created a helper method, nothing special so far.

public static Key getKeyFromKeystore(KeyConfig keyConfig) {

  String keystorePass = keyConfig.getKeystorePass();
  String alias = keyConfig.getAlias();
  String keyPass = keyConfig.getKeyPass();
  String keystoreLocation = keyConfig.getKeystoreLocation();
  String keystoreType = keyConfig.getKeystoreType();

  InputStream keystoreStream = null;
  KeyStore keystore = null;
  Key key = null ;

  try {
    keystoreStream = new FileInputStream(keystoreLocation);
    keystore = KeyStore.getInstance(keystoreType);

    keystore.load(keystoreStream, keystorePass.toCharArray());
    if (!keystore.containsAlias(alias)) {
      throw new RuntimeException("Alias for key not found");
    }

    key = keystore.getKey(alias, keyPass.toCharArray());

  } catch (Exception e) {
    e.printStackTrace();
  }

  return key;
}

The KeyConfig object is just a POJO containing some String values retrieved from a property file.

Since we are going to use an Initialization Vector for extra security we create a helper method for this as well, which returns a 16-byte array:

public static byte[] generateIV() {
  byte[] iv = new byte[16];
  new Random().nextBytes(iv);

  return iv;
}

With the helper methods in place we can turn our attention to some Camel code…

Encrypting and decrypting using a Camel Route

The first thing we need is a CryptoDataFormat. Since we are using CDI for the bootstrapping, we use a PostConstruct-annotated method to create the CryptoDataFormat. The trick is to set the encryption algorithm to "AES/CBC/PKCS5Padding" and provide an AES-256 key.

@PostConstruct
public void setupEncryption() {
  cryptoFormat = new CryptoDataFormat("AES/CBC/PKCS5Padding", EncryptionUtils.getKeyFromKeystore(keyConfig), "SunJCE");
}

When using a CryptoDataFormat in Apache Camel we can simply use the marshal and unmarshal statements within the Camel route. However, since we are creating a message-specific Initialization Vector we need to set it as well. For this we can use the helper method we created earlier. Another way would be to use a processor.

.setHeader("iv", method("nl.rubix.eos.poc.util.EncryptionUtils", "generateIV"))
.bean(cryptoFormat, "setInitializationVector(${header.iv})")
.marshal(cryptoFormat)

Since the example uses the same instance of the CryptoDataFormat decryption is as simple as:

.unmarshal(cryptoFormat)

In practice it won’t often be feasible to use the same instance of the CryptoDataFormat for both encryption and decryption. When decrypting, the exact same key and Initialization Vector must be used to initialize the CryptoDataFormat as were used for encryption. Since this is symmetric encryption, the key used for encryption must be available to the party that performs the decryption, as opposed to asymmetric public/private key cryptography; this is inherent to the AES algorithm. A decryption route could then look like the sketch below.
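A minimal sketch of such a decryption side, assuming the same keystore is available there and the Initialization Vector travels with the message in the iv header (the queue name is made up for this example):

CryptoDataFormat decryptFormat = new CryptoDataFormat("AES/CBC/PKCS5Padding", EncryptionUtils.getKeyFromKeystore(keyConfig), "SunJCE");

from("jms:queue:encrypted")
  .bean(decryptFormat, "setInitializationVector(${header.iv})")
  .unmarshal(decryptFormat)
  .log("decrypted payload received");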

The complete code repo can be found here: 

java.security.InvalidKeyException: Illegal key size

For reasons unknown to mankind Oracle decided not to include key sizes of 256 in the standard JRE, despite the fact we are living in the 21st century. This results in a Caused by: java.security.InvalidKeyException: Illegal key size exception. To resolve this, the so-called "Unlimited Strength Jurisdiction Policy" files must be downloaded and extracted into the JRE.

Since Oracle has a tendency to change download links, the link won't be posted here. But searching the internet will provide plenty of examples of how to download and install them.

 

Posted on 2017-07-10 in JBoss Fuse

 
