Java DSL vs Blueprint

Recently I was wondering whether it would be better to stick to either the pure Java DSL or the Blueprint XML one, especially when deploying to the Karaf environment.

One of the good things about Blueprint XML is that it makes the OSGi side much easier to handle: Karaf works well with Blueprint, so there is no need to OSGi-fy your code yourself.

On the other hand, the Java DSL is simply easier to work with, and most of the documentation covers it, so you will find more help. Also, who knows, in five years Blueprint may be replaced, so the Java DSL feels safer.

So I was curious to know how other Camel users felt, and I put the question to the Camel Nabble forum here:

http://camel.465427.n5.nabble.com/java-dsl-vs-blueprint-xml-td5775085.html

The consensus seemed to be that you can use Blueprint as a start-up mechanism to load the CamelContext, and then build the routes, onException handlers and all other code in a RouteBuilder class which can be referenced via a bean.

I think this makes the best use of both worlds. You use Blueprint only for loading the routes and use Java to build the actual routes and all the extra layers for logging and error handling. This way, if Blueprint is ever replaced it only affects your loading mechanism and not the main routing part.

See example here:

https://github.com/cschneider/Karaf-Tutorial/blob/master/camel/jms2rest/src/main/java/net/lr/tutorial/karaf/camel/jms2rest/Jms2RestRoute.java

https://github.com/cschneider/Karaf-Tutorial/blob/master/camel/jms2rest/src/main/resources/OSGI-INF/blueprint/blueprint.xml
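As a minimal sketch of the pattern (the class and package names here are made up for illustration), the Blueprint file does nothing but register the RouteBuilder bean and hand it to the CamelContext, while all routing logic stays in Java:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- the RouteBuilder containing all routing, logging and error handling -->
  <bean id="myRoutes" class="com.example.integration.MyRouteBuilder"/>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <routeBuilder ref="myRoutes"/>
  </camelContext>

</blueprint>
```

If Blueprint is ever swapped out, only this small file needs to change; MyRouteBuilder stays untouched.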

There was a mention of the Groovy DSL and I know there is a Scala one as well. I haven't used them and there is very little documentation on either, so it is hard to learn quickly by example, but if anyone has used them feel free to comment on their ease of use.


Nginx and Hawtio

Well, some days ago I was really struggling to get Nginx and Hawtio working together. I had encountered Nginx for the first time and had to add a proxy configuration to route traffic to Hawtio. The initial configuration seemed obvious and should have worked. However, once you reached the login page and entered your credentials, it would simply reload and you got a 403 error.

Eventually, after lots of snooping with Fiddler, I saw this:

(Screenshot: Fiddler capture of the request and response headers)

There was a cookie called JSESSIONID which wasn't being passed along, so eventually I had to handle that in Nginx as well. Here is the final configuration:

location /integration/ {
    proxy_pass http://integration.pool/hawtio/;
    proxy_redirect http://$host/hawtio/ http://$host/integration/;
    proxy_cookie_path /hawtio /integration;
}

Hope this helps someone else as well!

Camel error handling practices

Error Handling in Apache Camel

This post will discuss some thoughts on Apache Camel and its error handling features. Some parts will be familiar from the main documentation and others are simply practices I use myself. Let's get started.

Recoverable errors

These are errors you can recover from, for example network connection errors. They usually surface as exceptions, which Camel catches and puts on the Exchange.

Irrecoverable errors

These are errors that remain errors no matter how many times you retry, for example a missing database table. They are usually represented as message faults, and Camel does not attempt to recover from them.

Error Handlers

  • DefaultErrorHandler – Default and automatically enabled.
  • DeadLetterChannel – Implements the dead letter channel EIP – sends the message to a backout queue.
  • TransactionErrorHandler – Used for handling transactions.
  • LoggingErrorHandler – Simply logs the exception.
  • NoErrorHandler – Disables error handling.

The default error handler is enabled out of the box. No redelivery is configured and exceptions are sent back to the caller; the original message is usually discarded unless you specify otherwise. The dead letter channel error handler moves failed messages to a dedicated error queue. It handles exceptions by default and you can extend it with retry functionality. When it handles an exception, Camel suppresses it, removes it from the exchange and stores it as a property on the exchange.

I would say that in most cases you will be using either the default error handler or the dead letter channel error handler.
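As a rough route-configuration sketch (the endpoint names, retry counts and delays here are invented for illustration, not taken from a real integration), the dead letter channel error handler in the Java DSL could look like this:

```java
import org.apache.camel.builder.RouteBuilder;

public class BackoutRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // move failed messages to a backout queue after three redelivery attempts
        errorHandler(deadLetterChannel("activemq:queue:MY.BACKOUT")
                .maximumRedeliveries(3)
                .redeliveryDelay(2000)
                .useOriginalMessage());   // keep the original body, not the partly-processed one

        from("activemq:queue:MY.INPUT")
                .to("http://example.com/service");
    }
}
```

useOriginalMessage is worth noting here: without it, the message placed on the backout queue is whatever the exchange looked like at the point of failure, which is often harder to redeliver later.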

Asynchronous and Synchronous Error handling

There are two communication modes that determine how an integration should respond to errors: asynchronous and synchronous communication.

Asynchronous communication

In asynchronous mode, error handling will primarily use the dead letter channel error handler together with onException behavior. It can be described as follows:

  1. The CamelContext should have onException clauses, exception and redelivery policies defined. These determine how Camel should react when an error occurs and whether Camel should attempt redelivery based on some policy.
  2. If the message could not be redelivered, it should be routed to another route that posts it to a backout queue. This preserves the original message so it can be examined further and, if possible, redelivered at a later stage.
  3. Relevant headers need to be written to the exchange so that the error handler can capture them and put them as metadata on the message being sent to the backout queue.
  4. Once the message has been routed to the error handler route, the relevant error headers are set and the route uses the backout queue name set in the header to route the message to the backout queue. Physically, the backout queue can be a queue in RabbitMQ, ActiveMQ or some other queuing engine Camel can talk to. ActiveMQ is quite suitable because it can be installed as a feature in Karaf, and Hawtio can connect to it, which makes it easy to view brokers and queues.
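The steps above can be sketched as a route configuration (the queue names, retry policy and the BackoutQueue header name are assumptions for illustration):

```java
import org.apache.camel.builder.RouteBuilder;

public class AsyncErrorHandlingRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // once redelivery is exhausted, mark the exchange handled,
        // set the metadata headers and hand it to the shared error route
        onException(Exception.class)
                .maximumRedeliveries(3)
                .redeliveryDelay(5000)
                .handled(true)
                .setHeader("BackoutQueue", constant("activemq:queue:ORDERS.BACKOUT"))
                .setHeader("IntegrationName", constant("orders-integration"))
                .to("direct:errorHandler");

        from("activemq:queue:ORDERS.IN")
                .to("http://example.com/orders");
    }
}
```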

Synchronous communication

In synchronous mode the client is waiting for a response. Here Camel has excellent, fine-grained error handling, so we can adapt the error message or behavior depending on the type of error that occurs. The error message can then be sent back to the client, who has the final responsibility to either retry or abandon the call. No message is sent to the backout queue since the client is made aware of the error. The process can be described as follows:

  1. The CamelContext should have onException policies defined for each type of error it wants to catch.
  2. The handled parameter is set to true so the actual exception is not returned to the client.
  3. Within the onException clause you can add relevant headers and change the response body, giving the client a more user-friendly error message than a stack trace.
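A sketch of this synchronous pattern (the endpoint, error type and header name are invented for illustration):

```java
import org.apache.camel.builder.RouteBuilder;

public class SyncErrorHandlingRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // replace the stack trace with a friendly message before replying
        onException(IllegalArgumentException.class)
                .handled(true)
                .setHeader("ErrorCode", constant("INVALID_REQUEST"))
                .transform(constant("Your request could not be processed. Please check the input data."));

        from("jetty:http://0.0.0.0:8080/orders")
                .to("direct:processOrder");
    }
}
```

Because handled is true, the client receives the transformed body instead of the exception, and the exchange is not retried or sent to any backout queue.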

Best practice

  1. Do not underestimate error handling. It is often far more time consuming than writing the actual integration logic.
  2. Test your error handler in your unit tests to ensure its behavior matches your expectations. Discuss these expectations with the integration owner.
  3. Try to separate the error handling logic from the actual business logic. You can, for instance, have a route called "errorHandler" and all it does is accept a payload, verify that certain headers exist and put the payload on a backout queue. You can of course set the queue name dynamically and let this route read the destination from a header. This way the route can be shared by all other routes that need this kind of error handling.
  4. Do not let clients of integrations know what actually went wrong. This means that if something goes wrong in a route, catch the error and provide a sensible error message to the client, whilst writing the actual error to a log file or somewhere it can be analyzed. There are two main reasons for this.
    • Clients don't want to see a giant stack trace, and some systems cannot handle such a response.
    • From a security perspective it is not good to reveal the inner details of the integration.
  5. Decide on a range of metadata headers that should be part of all integrations and ensure these are set when you send a message to a backout queue.
  6. Have a backup plan if the backout queue is not available. If the ActiveMQ broker is down, what should you do? Write to a local file? Email it to the support?
  7. Finally, know that there are errors related to data and there are errors related to the actual functioning of the integration. Bad data can cause mapping errors. You need to catch these as well and react accordingly.
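Practice 3 can be sketched as a shared error-handler route (the header names are my assumptions); recipientList reads the destination queue from a header at runtime, so every integration can reuse the same route:

```java
import org.apache.camel.builder.RouteBuilder;

public class SharedErrorHandlerRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:errorHandler")
                // fail fast if the mandatory metadata headers are missing
                .validate(header("BackoutQueue").isNotNull())
                .validate(header("IntegrationName").isNotNull())
                .log("Moving failed message from ${header.IntegrationName} to ${header.BackoutQueue}")
                // the destination queue is resolved dynamically from the header
                .recipientList(header("BackoutQueue"));
    }
}
```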

Hope you enjoyed this, and if you have questions just let me know. Next time I will try to cover a bit about logging.

Interaction with the open source community vs commercial vendors

This post is not very technical, but sometimes these aspects are just as important as the technical ones. I will try to say a few words about how, as a developer, interaction with the open source community can feel compared to commercial vendors.

Having worked as a consultant with commercial vendors for a long time, I will refrain from naming specific vendors, but I think most work in the same manner so there is room to generalise to some extent.

There are a few things that really set the different worlds apart:

Relationship & Interaction

Most of the time it is very hard to interact with commercial vendors. There are some practical reasons for that: they sell software and issue licenses, and unless you have a license or agreement of some kind there is no incentive for them to answer any direct technical questions.

Some have forums where you can discuss and get in touch with some of their developers, but there is still a distance. What I mean by this is that you can never establish a professional relationship with the creators of commercial software. There is a long chain of people and bureaucracy to go through before you can get help or get directed to someone who can help you, and usually you have to pay for it. This means that as a user of commercial software I will most likely never establish a direct relationship with the creators of that software; I will have contact with sales people, consultants and perhaps support. There are of course ways to improve this. You could attend conferences and workshops, or get invited to beta presentations and similar things, but these usually cost a lot, and it also depends on your employer allowing you to go.

In contrast, my impression is that the open source community has a more direct approach. They in fact seem to encourage you to provide feedback, get in touch with the developers and even chat with them, and they will help you if you have any issues. One concrete personal example: I had a hard time installing Apache Decanter after reading this blog post by Jean-Baptiste Onofré: http://blog.nanthrax.net/2015/07/monitoring-and-alerting-with-apache-karaf-decanter/. I sent him an e-mail and within a few hours he replied, we chatted on Skype, and with his guidance I was able to solve the issue. He was very friendly and supportive and seemed genuinely eager for me, as a developer/user, to solve the problem. I have had similar encounters in the Apache Camel community when discussing things with Claus Ibsen and meeting him in person.

I have also had good responses from the RabbitMQ community, where the developers are very active and help out with questions.

In a lot of other open source communities there is a good community spirit, and it is encouraging to see this even among big projects, especially within the Apache community. To summarise, one of the big benefits of community-based projects is that it is easier for users to establish a relationship with the creators of the project, and this mutual feedback is key to developing the project further in ways that benefit both parties.

Licensing model

One of the things I never understood when working as a consultant is the licensing model of the big vendors such as Oracle, IBM and SAP. Maybe I am missing something, but surely, if you are selling me a piece of software, you should not care where I actually run it. You should not care whether I run it on my home PC, on some high-spec work laptop, on the company server or in the cloud. Just sell me the software and be done with it! But no, not only do they care about where you run the software, they care about the intrinsic details of the hardware, which in my view is crazy. I have also seen vendors provide agent software that monitors your usage of virtual machines to see whether you are abiding by the license or not. Again, why? Yes, I understand the business goal is to make a lot of money, but surely this can be done without such a complicated and convoluted licensing model, which belongs in the 1980s and not in soon-to-be 2016.

Everything is a service these days. Everything is a subscription. Software is in many respects a service and in my view should, if it is going to be commercial, be subscription based, with an extremely simple model that doesn't require a PhD in law to understand. Most clients I have worked for have dreaded license negotiations because it was never clear from the start what the prerequisites were or what the actual price would be. Secondly, another customer could potentially get a different price even if all else was equal.

Needless to say, you don't experience this problem with the open source community. One of the best aspects is that a lot of companies develop software, release it, and then focus on providing extended functionality or commercial support and training. This is perfect because you can quickly try the software, and if you like it you can buy support, which is crucial for production, and get training as well. Another example is Docker, which is big right now, and where you can use the technology as you see fit, but if you want more enterprisey features like a private Docker Hub or other add-ons, you pay for them. But again, the pricing is simple to understand.

I will leave this post by saying that I think open source software is making a big impact, and a lot of big players like Google, Microsoft and Apple will start to take advantage of this and probably release their technology to increase their impact and user base.

One thing I really hope is that with the advent of cloud we can finally get rid of these heavyweight licensing models and software as a service can truly be appreciated.

I'll end on a cool note: this blog was referenced by Apache Camel celebrity Claus Ibsen on his Twitter account 😉

Camel and Test

This post will highlight some guidelines for working with Camel and testing. This is more of an introduction than a coverage of advanced topics. As always, the starting point should be the documentation, so have a look here.

Basically, Camel comes with a test framework. This framework gives you some very important functionality.

  1. You can mock endpoints that are not available or not built yet. This means if you want to connect to some http endpoint which is not yet available you can simply mock it and inject the expected reply instead.
  2. You can intercept messages before they reach an endpoint and do something with them.
  3. You can do the usual JUnit stuff like asserting the number of messages or comparing expected output with actual output.

There is more but these are the basics.
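As a rough sketch of points 1 and 3 (the route and endpoint names here are invented for illustration), a mock-based test with CamelTestSupport could look like this:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class SimpleMockTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                // the real endpoint is replaced by a mock for the test
                from("direct:start").to("mock:result");
            }
        };
    }

    @Test
    public void testMessageArrives() throws Exception {
        MockEndpoint mock = getMockEndpoint("mock:result");
        mock.expectedMessageCount(1);
        mock.expectedBodiesReceived("Hello Camel");

        template.sendBody("direct:start", "Hello Camel");

        mock.assertIsSatisfied();
    }
}
```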

So usually you start with unit tests and then move to higher-order tests. I will cover unit testing and how to perform integration tests using Cucumber-JVM to write Given-When-Then features, which can be written by non-developers.

Unit test

Here is a very simple test class:

import org.apache.camel.test.blueprint.CamelBlueprintTestSupport;
import org.junit.Test;

public class TestFileTransfer extends CamelBlueprintTestSupport {

    @Override
    public void doPreSetup() throws Exception {
        System.setProperty("org.apache.aries.blueprint.synchronous", Boolean.TRUE.toString());
    }

    @Override
    protected String getBlueprintDescriptor() {
        return "/OSGI-INF/blueprint/filetransfer.xml";
    }

    @Test
    public void testPutFile() throws Exception {
        // drop a file in the input directory and give the route time to pick it up
        template.sendBody("file:C:/Camel/input?fileName=input.txt", "This is just some text.");
        Thread.sleep(2000);
        // the route should have moved the file to the output directory
        assertFileExists("C:/Camel/output/input.txt");
    }

}

So let me break down the unit test class so you can create your own:

  1. You need to extend CamelBlueprintTestSupport if you are testing Blueprint; otherwise CamelTestSupport is sufficient.
  2. If it is Blueprint, you need the getBlueprintDescriptor method to let your test class know where to find your Blueprint configuration.
  3. Finally you have your test methods, where you write some data to a file, let Camel pick it up, and assert that the file has been moved.

This is how Camel tests usually work. Now there is more you can add such as debug logs, interceptions and replacing endpoints etc. But you can find more information in the documentation on this.

Now, sometimes you are part of a team where testers write GIVEN-WHEN-THEN scenarios to cover business requirements. You can call these scenario tests or integration tests; the name is not that important. Essentially, you have feature files where you write your scenarios in G-W-T format, and then Java classes that parse them and execute tests based on them.

Wouldn’t it be good to have scenario tests for Camel as well? Well you can.

First have a read through here:

http://stackoverflow.com/questions/33568649/camel-blueprint-testing-and-cucumber

The framework I will use is Cucumber-JVM. Cucumber is a popular tool for writing scenario tests (it originated in Ruby and there is a JavaScript version too), and Cucumber-JVM brings it to Java. The main parts are as follows:

  1. Write your feature files in G-W-T format and put them in a folder in your classpath.
  2. Write a Cucumber runner class.
  3. Write your Camel test class and make sure it calls this.setUp();. This is important because Camel uses the JUnit runner whereas Cucumber has its own runner; without that line it would not work.
  4. Finally, run your Cucumber runner class and look at the logs. It should say how many scenarios ran and how many succeeded. In the target folder there will be an HTML page that shows this in a nice way.
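For these steps to run you need Cucumber-JVM on the test classpath. A sketch of the Maven dependencies (the version number is an assumption from around this time; check for the latest):

```xml
<dependency>
  <groupId>info.cukes</groupId>
  <artifactId>cucumber-java</artifactId>
  <version>1.2.4</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>info.cukes</groupId>
  <artifactId>cucumber-junit</artifactId>
  <version>1.2.4</version>
  <scope>test</scope>
</dependency>
```
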
Feature file

So here is a simple feature file. In a real-world feature file you would have feature descriptions etc., but I am simplifying here. You can save it as filetransfer.feature and put it in a folder in your project in Eclipse or your editor.

Scenario: Hello world file transfer
Given an input file
When client puts the file in the input directory

Then the integration should move the file to the output directory

Cucumber runner class


import org.junit.runner.RunWith;

import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(monochrome = true,
    format = { "pretty", "html:target/cucumber" },
    features = "C:/Users/Developer/workspace_camel/IntegrationScenarioTest/src/test/resources/cucumber/filetransfer.feature")
public class CucumberRunner {

}

The main things to notice are that I am referring to the feature file, asking it to create the HTML output, and declaring that the class runs with the Cucumber runner. The class itself contains nothing.

Cucumber scenario java methods

Now we need to write the Java class containing the methods that match the scenarios in the feature file. Here is an example class:

import cucumber.api.java.en.Given;
import cucumber.api.java.en.Then;
import cucumber.api.java.en.When;

public class FileTransferScenarioTest {

    FileTransferCamelRunner filetransfer = new FileTransferCamelRunner();
    StringBuilder fileData = new StringBuilder("This is just some text.");
    StringBuilder endpoint = new StringBuilder();

    @Given("^an input file$")
    public void an_input_file() throws Throwable {
        endpoint.append("file:C:/Camel/input?fileName=input.txt");
    }

    @When("^client puts the file in the input directory$")
    public void client_puts_the_file_in_the_input_directory() throws Throwable {
        filetransfer.testPutFile(fileData.toString(), endpoint.toString());
    }

    @Then("^the integration should move the file to the output directory$")
    public void the_integration_should_move_the_file_to_the_output_directory() throws Throwable {
        String outputPath = "C:/Camel/output/input.txt";
        filetransfer.testFileHasMoved(outputPath);
    }

}

As you can see, the methods are matched using Given, When and Then, and regex is used to match the sentences in the scenarios to the actual Java methods. Each part should correspond to some action. For example, "the integration should move the file to the output directory" calls the method testFileHasMoved in the Camel test class to check that the file has actually moved.

Camel test classes

Finally, we have the Camel test class to tie it all together.

import org.apache.camel.test.blueprint.CamelBlueprintTestSupport;

public class FileTransferCamelRunner extends CamelBlueprintTestSupport {

    @Override
    public void doPreSetup() throws Exception {
        System.setProperty("org.apache.aries.blueprint.synchronous", Boolean.TRUE.toString());
    }

    @Override
    protected String getBlueprintDescriptor() {
        return "/OSGI-INF/blueprint/filetransfer.xml";
    }

    // called from the Cucumber step definitions, not directly by JUnit
    public void testPutFile(String body, String endpoint) throws Exception {
        this.setUp();
        template.sendBody(endpoint, body);
        // give the route time to pick the file up
        Thread.sleep(2000);
        assertFileNotExists("C:/Camel/input/input.txt");
    }

    public void testFileHasMoved(String path) throws Exception {
        assertFileExists(path);
    }

}

The most important part is the line this.setUp(); in the method testPutFile, which is the first method that gets called. It ensures that the Camel runtime is started successfully even though we are using the Cucumber runner. Without this line the connection from Cucumber to Camel would not work. All you need to do then is run the Cucumber runner class as a JUnit test and view the output.

Running Karaf+Hawtio+Camel inside Docker

These days microservices are hot, and running things in containers helps with that. Docker has received a lot of attention for this, and rightly so, when things work 😉 I will mention very briefly what Docker is, but I recommend the official tutorial; it is a good walkthrough.

Docker is basically a tool that gives you the ability to run a single process/application isolated inside a Linux machine. In a sense, you write a Dockerfile, a bunch of instructions about which application you want, how to install it, how to run it and how to keep it running. Then you build that image and run it. Once it is running, it is as if the application is running on your host, except as a Docker container. The beauty of it is that it is scalable and can run on any host that can run Docker. But read more on Docker Hub.

Now if you want a docker image with Apache Karaf 4.0.2, Hawtio 1.4.58 and Camel 2.15.1 I have setup an image on my repository. See the link below.

https://hub.docker.com/r/soucianceeqdamrashti/karaf/

Everything is in its default state. If you need to change the Pax Logging log4j configuration you need to override the file. If you want the deploy folder to exist externally you need to add a volume.

You run it by issuing this command:

docker run -d -p 8181:8181 --name karaf karaf:4.0.2

The dockerfile instructions look as follows:

#Run using docker command
# docker run -d -p 8181:8181 --name karaf karaf:4.0.2
FROM java:8u66
MAINTAINER soucianceeqdamrashti <souciance.eqdam.rashti@gmail.com>
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV KARAF_VERSION=4.0.2
RUN mkdir /opt/karaf
ADD http://apache.openmirror.de/karaf/${KARAF_VERSION}/apache-karaf-${KARAF_VERSION}.tar.gz /opt/karaf/
WORKDIR /opt/karaf/
RUN tar --strip-components=1 -C /opt/karaf -xzf apache-karaf-${KARAF_VERSION}.tar.gz
RUN bin/start && \
#allow the Karaf process to start
sleep 10 && \
#install camel repo url and version
bin/client feature:repo-add camel 2.15.1 && \
#allow feature url installation to complete
sleep 5 && \
#install camel core
bin/client feature:install camel
RUN sleep 10
RUN bin/start && \
#allow the Karaf process to start
sleep 10 && \
#install hawtio repo url and version
bin/client feature:repo-add hawtio 1.4.58 && \
#allow feature url installation to complete
sleep 5 && \
#install hawtio
bin/client feature:install hawtio
#COPY /config/org.ops4j.pax.logging.cfg /opt/karaf/etc/
EXPOSE 8181
ENTRYPOINT ["/opt/karaf/bin/karaf", "start"]

Most of the commands are pretty self-explanatory. You may wonder why there are sleep commands. I noticed that you cannot simply start Karaf and then immediately install a feature: Docker moves straight to the install step after the start command, not giving Karaf enough time to come up. The sleep gives Karaf time to start before the Camel and Hawtio features are installed.

At the ENTRYPOINT I am starting Karaf as a foreground process. Docker requires this; otherwise, if you start Karaf as a background process, Docker thinks the application has finished and exits. Your container will simply exit.