Camel development series 12

Hello and welcome to another Camel development series. From now on I will paste less code into the blog and instead refer to the GitHub repo where the code is stored. It makes writing the blog easier, which helps motivate me to write more often.

In this series I will touch upon a frequent scenario. We want to expose an HTTP endpoint that allows the client to perform an HTTP GET operation. We then want to call a backend HTTP(S) service, retrieve say a picture, and return it so it can be displayed in the browser. How can we accomplish this?

Well if you want to jump straight to the code go here:
https://github.com/SoucianceEqdamRashti/Integration/blob/master/CamelDemo/src/main/java/org/souciance/integration/http/RedirectHTTP.java

As you can see we accomplish this in four lines of code. We first expose our Undertow endpoint and ensure only HTTP GET operations can be performed. Then we set the Exchange.HTTP_QUERY header to our query parameter. Next we call the backend HTTPS service using the http4 component. The key here is to enable the parameter bridgeEndpoint=true so that Camel understands it is acting as a proxy and doesn’t mix up the endpoints. Finally we convert the payload to a byte array and return it to the original client. That’s all!
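The route in the repo boils down to something like the following sketch; the host names, port, path and query parameter below are made up for illustration:

from("undertow:http://0.0.0.0:8080/images?httpMethodRestrict=GET")
    .setHeader(Exchange.HTTP_QUERY, simple("id=${header.id}"))
    .to("https4://backend.example.com/pictures?bridgeEndpoint=true")
    .convertBodyTo(byte[].class);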

Camel development series 11

Hello and welcome to another Camel development series.

In this series I thought we’d go through some more hints and tricks I have learned after one year of working with Camel.

Avoid unnecessary object creation

Do not create unnecessary object models of your data. As a Java developer you tend to think of everything in terms of objects. But should you always create objects? The whole point of integration is to remain stateless and act as an interface that transfers data. So I would say: where possible, don’t create objects. This has nothing to do with performance, since object creation is cheap, but more to do with not having to write unnecessary code for behaviour-related functionality. Let us work with an example.

So you receive a JSON message, you need to do some content-based routing, and then generate a SOAP message and send it to some web service.

Now you could of course create an object model for the JSON message or write a bean, and the same goes for the SOAP XML. But unless you are dealing with complicated message structures or attachments, you don’t need to write code that way when it comes to integration. Here is an approach that solves it in another way.

 

.choice()
    .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch1")))
        .to("direct:route1")
    .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch2")))
        .to("direct:route2")
    .otherwise()
        .throwException(IllegalArgumentException.class, "Unknown request command received!")
.end()

In the above you can see that I don’t create any objects. I simply use the excellent JsonPath library, or rather the Camel wrapper camel-jsonpath, to perform a lookup in the data and route to the individual routes based on the result. This means less and simpler code, and I stay within the Camel DSL. It also makes it easy to understand what the integration is doing, without jumping from one class to another.

Now about creating the SOAP message. Again, you could write JAXB code, use some other XML processing, or use CXF or some other framework. But if your SOAP messages are relatively simple and do not contain attachments, then you can skip all that and use Freemarker to inject data into your SOAP message. Here is an example:

.setHeader("requestId").jsonpath("$.data.id")
.to("freemarker:request.xml")

Here is my freemarker template file:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:lan="http://test">
  <soapenv:Header/>
  <soapenv:Body>
       <request>${headers.requestId}</request>
  </soapenv:Body>
</soapenv:Envelope>

As you can see I extract the data again using JsonPath and insert it into a header. Then I route to a Freemarker endpoint. In my Freemarker template I have written a simple expression where the data should be injected. That’s all! I don’t write any XML code or worry about namespace creation and things like that. Exactly the same goes for the opposite direction: you can use XPath to extract data from XML messages and a JSON Freemarker template with a simple expression to insert it.
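For instance, the reverse direction might look something like this sketch, where the XPath expression and the response.json template name are made up:

.setHeader("orderId").xpath("/order/id/text()", String.class)
.to("freemarker:response.json")

with a response.json template along the lines of:

{
  "orderId": "${headers.orderId}"
}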

Creating environment variables in your tests

A lot of the time you need to access environment variables, but in your tests you don’t want to use the real values, just some fake ones, and you don’t want to change your code. You still want to access that same environment variable, just with a different value.

I use the excellent library called System Rules by Stefan Birkner. It works really well and is really easy to use. If you work with Maven, add this to your pom:

    <dependency>
      <groupId>com.github.stefanbirkner</groupId>
      <artifactId>system-rules</artifactId>
      <version>1.16.0</version>
      <scope>test</scope>
    </dependency>

How do you use it?

In your JUnit test class add the following:

@Rule
public final EnvironmentVariables environmentVariables = new EnvironmentVariables();

@Before
public void setUp() throws Exception {
  environmentVariables.set("VAR1", "1");
  environmentVariables.set("VAR2", "2");
  environmentVariables.set("VAR3", "3");
}

As you can see we add the @Rule and create a field of type EnvironmentVariables. Then in the setUp method you simply set the variables to whatever values you need. As easy as that!
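A minimal sketch of how a test can then read the fake value (the method name and assertion are just for illustration):

@Test
public void shouldReadFakeEnvironmentVariable() {
  // the value set by the rule is visible through the standard API,
  // so code under test that calls System.getenv() picks up the fake value
  assertEquals("1", System.getenv("VAR1"));
}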

Testing JSON messages in your route tests

So quite often when you write route tests you will need to check that the JSON you produce matches some expected JSON. For complicated JSON matches, or even simple ones, I use the excellent library called JSONAssert. Add this to your pom:

    <dependency>
      <groupId>org.skyscreamer</groupId>
      <artifactId>jsonassert</artifactId>
      <version>1.4.0</version>
      <scope>test</scope>
    </dependency>

To use it, in your @Test method add this:

JSONAssert.assertEquals(expectedResponse, response, false);

There are far more complicated things you can do with JSONAssert, but if you want a simple way of comparing JSON messages this is great.
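For example, here is a sketch of the difference between lenient and strict comparison (the JSON strings are made up):

String expected = "{\"id\":1,\"name\":\"foo\"}";
String actual = "{\"name\":\"foo\",\"id\":1,\"extra\":\"bar\"}";

// lenient: ignores field ordering and extra fields in the actual JSON, so this passes
JSONAssert.assertEquals(expected, actual, JSONCompareMode.LENIENT);

// strict: the extra field would make this assertion fail
// JSONAssert.assertEquals(expected, actual, JSONCompareMode.STRICT);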

Camel development series 10

Hello everyone,

Well, as usual, it has been a long time since I wrote here, but a new year has started so I thought a nice and simple update would be good.

This time I will keep it simple and instead show how you can verify json messages against a given schema in your Camel route.

The aim is thus:

Given a json message and a predefined json schema, we want to validate the message against the schema and return the result of the validation.

Camel route

Assuming you have created a Camel project, here is what my (very) basic code looks like:

package org.souciance.integration.validate;

import org.apache.camel.builder.RouteBuilder;
import org.apache.commons.io.IOUtils;

import java.nio.charset.Charset;

public class CamelValidateJson extends RouteBuilder {

  @Override
  public void configure() throws Exception {
    ClassLoader classLoader = getClass().getClassLoader();
    String schema = IOUtils.toString(classLoader.getResourceAsStream("jsonvalidate/schema.json"), Charset.defaultCharset());
    String json = IOUtils.toString(classLoader.getResourceAsStream("jsonvalidate/data.json"), Charset.defaultCharset());

    from("timer://cameldemoValidateJson?repeatCount=1").routeId("ValidateJson")
      .setBody(constant(json))
      .setProperty("Schema", constant(schema))
      .bean(ValidateJson.class, "isValidJson")
      .choice()
        .when(header("isValid").isEqualTo(true))
          .log("Valid json!")
        .when(header("isValid").isEqualTo(false))
          .log("Invalid json!")
      .end()
      .end();
  }
}

I have kept the steps very simple. The idea is to focus on the schema validation and nothing else, so I am manually loading the data and the schema file. You could of course inject the schema path via some variable or in some other way.

I then start the route via a timer; again, this is to keep it simple.

I then create an exchange property called "Schema" and insert the schema into it.

Then I call a bean using .bean and as parameters give the bean class and bean method.

The bean will return the result of the validation inside a header.

I use the choice() and when() to log the result.

The bean code looks like this:

package org.souciance.integration.validate;

import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.exceptions.ProcessingException;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchemaFactory;
import com.github.fge.jsonschema.main.JsonValidator;
import org.apache.camel.Exchange;

import java.io.IOException;

/**
 * Created by moeed on 2017-01-15.
 */
public class ValidateJson {
  /**
   * Method to validate some json data against a json schema.
   * @throws IOException
   * @throws ProcessingException
   */
  public static void isValidJson(Exchange exchange) throws IOException, ProcessingException {
    final JsonNode data = JsonLoader.fromString(exchange.getIn().getBody().toString());
    final JsonNode schema = JsonLoader.fromString(exchange.getProperty("Schema").toString());

    final JsonSchemaFactory factory = JsonSchemaFactory.byDefault();
    JsonValidator validator = factory.getValidator();

    ProcessingReport report = validator.validate(schema, data);
    System.out.println(report);
    if (!report.toString().contains("success")) {
      exchange.getIn().setHeader("isValid", false);
    }
    else {
      exchange.getIn().setHeader("isValid", true);
    }
  }
}

The method isValidJson is very simple. It receives the exchange. It extracts the json data from the body and the json schema from the exchange property.

Now to the main part. I use the json-schema library to do the actual schema validation. Whether the validation fails or succeeds, I set the result in an exchange header.

If you log the output after running it with intentionally bad data for a given schema, you will see something like this:

com.github.fge.jsonschema.core.report.ListProcessingReport: failure
--- BEGIN MESSAGES ---
error: object has missing required properties (["age"])
level: "error"
schema: {"loadingURI":"#","pointer":"/items"}
instance: {"pointer":"/7"}
domain: "validation"
keyword: "required"
required: ["_id","about","address","age","balance","company","email","eyeColor","favoriteFruit","friends","greeting","guid","index","isActive","latitude","longitude","name","phone","picture","range","registered","tags"]
missing: ["age"]
--- END MESSAGES ---

As you can see I did not have the property "age" so the validation failed. With the age property put back in the data the output will simply be "success".

This is just a simple way of doing JSON validation in your Camel routes. You can use it while doing REST calls or simple file-based data manipulation. For more info, here is the source code on my GitHub:

https://github.com/SoucianceEqdamRashti/Integration/tree/master/CamelDemo/src/main/java/org/souciance/integration/validate

Camel development series 9

Welcome again to another post regarding development with the Apache Camel framework. In this series I will cover a couple of different things that I have encountered or developed which may be of benefit to you.

Camel demo project on Github

I have updated my github page at https://github.com/SoucianceEqdamRashti/Integration and removed some old projects and added two new ones.

CamelDemo

The CamelDemo project is a larger Maven project based on camel-blueprint-archetype version 2.17.0. It consists of a series of routes that aim to show, in a very simple manner, the basics of some of the most common Camel functionality. This includes features such as:

  1. Content-based routing
  2. Splitting csv, xml and json data
  3. Rest-dsl
  4. File handling
  5. Timer based routes
  6. Camel unit tests

More features will be added in the near future. But I think it is a pretty good place to start if you are a complete newbie to Camel and just want to get a feel for the framework and what you can do with very little code. It is indeed a very powerful framework.

CustomKaraf

This is also a Maven project, based on Karaf 4.0.2. It is a really cool feature that I discovered a month or so ago and it is extremely powerful when developing microservices in isolation.

In short, this allows you to build and distribute a Karaf distribution customized to your needs, with all the bundles you desire included at boot level. In practical terms, this means that you tell the karaf-maven-plugin which bundles/features you are interested in and add them to the boot level.

Then you run mvn clean install and the plugin creates a zip and a tar.gz file in the target directory. Choose whichever format you desire. Then all you do is move that distribution to your server or Docker image, unzip it and run bin/karaf. When Karaf boots up, the bundles, including Camel and your Camel routes, will already be started.

Take a look at the project for an overview. You should be able to build it and have Camel installed at boot level.

Running Camel for OSGi in IntelliJ

Normally when I develop a project with Camel I already know I will run it in Karaf. Therefore I start with a template based on camel-blueprint-archetype. Before version 2.17.0 I didn’t have to change anything regarding logging. Just let Maven do its magic, add the components I want and develop.

However, since version 2.17.0 it seems new features have been added to the archetype. Now when you create a new project based on that archetype and then run the generated project you get this error:


[INFO] --- camel-maven-plugin:2.17.0:run (default-cli) @ email-bibsent ---
[INFO] camel-blueprint detected on classpath
[INFO] OSGi Blueprint XML files detected in directory C:\Work\repo\LSP\integration-platform\integrations\PollEmailFromBibsent\src\main\resources\OSGI-INF\blueprint
[INFO] Using org.apache.camel.test.blueprint.Main to initiate a CamelContext
[INFO] Starting Camel ...
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Essentially Camel won’t start since the slf4j logger binding is not found. I then found that you can solve this by adding the following dependencies:


    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${slf4j-version}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-simple</artifactId>
      <version>${slf4j-version}</version>
    </dependency> 

But then I thought this can’t be quite right, so I posted a question on the Camel Nabble forum. The answer I got from Claus Ibsen was that the logger dependency is not present in that archetype because Karaf, or any other OSGi container, has its own logger.

What this means is: add those dependencies when you want to run the route in IntelliJ for testing, but then change their scope to

<scope>test</scope>

when you want to deploy them to Karaf or to your Maven repository.

Camel development series part 8

In this series I will simply touch upon a few issues I encountered during coding that you might find useful. For reference this is on Camel version 2.17.0.

PollEnrich will not move file after completion

So I had a route where, in the middle of the route, I had to enrich the exchange with a file; after the file was consumed and the route finished, the file should be moved to a specific folder with a timestamp. The route looked like this:

 

from(....)
  ...
  ...
 .pollEnrich().simple("file:" + location + "?fileName=${header.File}&charset=iso-8859-1&" +
            "move=../archive/${file:name.noext}-${date:now:yyyyMMddHHmm}.${file:ext}")

We are using simple because we want pollEnrich to evaluate dynamic endpoints from headers; this can be found in the documentation. However, what actually happened was that pollEnrich created the archive folder but completely ignored the file expression language and just created a folder with the timestamp. No matter what I tried, it produced an output similar to this. My intention was to create a folder called archive and save the files in there with a timestamp attached to their names.

The problem was that the "?move=" part doesn’t evaluate to anything meaningful at that point. The "move=" expressions need to be evaluated after the file has been picked up, but they are being evaluated before the file is read.

So my workaround was to replace the pollEnrich with the following:

ConsumerTemplate consumer = exchange.getContext().createConsumerTemplate();
String fileName = exchange.getIn().getHeader("File", String.class);
String fileUri = "file://" + location + "?fileName=" + fileName + "&charset=iso-8859-1&move=archive/${file:name.noext}-${date:now:yyyyMMddHHmmssSSS}.${file:ext}&moveFailed=failed/${file:name.noext}-${date:now:yyyyMMddHHmmssSSS}.${file:ext}";
byte[] fileBody = consumer.receiveBody(fileUri, 5000, byte[].class);

The above code resides in a processor. Basically I had to resort to a ConsumerTemplate instead. This works fine; it was just confusing that pollEnrich didn’t work with dynamic file options.

Building predicates from exchange properties

Exchange properties are very useful when you want to carry values along with the exchange from one route to another. Predicates are very powerful for building rich expressions in order to make complex statements. These two can then be combined into something as simple as:

Predicate success = exchangeProperty("Status").isEqualTo("true");
Predicate failure = exchangeProperty("Status").isEqualTo("false");
.......
.........
from(..)
  ..
  ..
  .choice()
    .when(success)
      ..
    .when(failure)
      ..
    .otherwise()
      ..

The code is easier to read: you have replaced a complex expression with a simple, named predicate that can be used in the DSL. This is something really cool about Camel.

Splitting large files

Suppose you have a big file that you want to read. Maybe you have millions of rows of CSV data and you want to split them by a certain number, or simply one by one. Since each row should be handled the same way, you also want things to be done in parallel.

The most performant way can be expressed like this:

from("file:...")
.split().tokenize(System.lineSeparator(), 1000).streaming()
//use threading to handle the different split chunks in parallel
.threads(20, 50)
.to("someProcessor");

So, what we have done is to split based on a token, in this case the operating system’s line separator, e.g. \r\n in the Windows world and \n in the Linux world. We also group the rows so that each split chunk contains 1000 rows. Then we enable streaming so that the file is parsed lazily to save memory, and finally we add threads so the chunks can be processed in parallel.

Specify exchange pattern and delay

The last two simple tips are for when you want to specify the exchange pattern directly on an endpoint, and for introducing a simple delay in milliseconds.

//specify the exchange pattern
.inOnly("seda:outputQueue")
//introduce a delay of 100 ms
.delay(100)

As you can see we can write .inOnly or .inOut and then provide the endpoint. This is good if you want to do some processing but don’t want to wait for a response, and instead want to send an immediate reply to the caller.

The delay is simple, simply add the value and you have your delay!

Camel development series part 7

In this post I will go through a useful way I found of handling errors asynchronously when working with Apache Camel.

The requirements are as follows.

  1. We have a bunch of routes chained together that exist in different RouteBuilder classes.
  2. The client does not require a response back, so there is no need for a synchronous connection.
  3. We want a generic error handling strategy, as we don’t want to replicate the error handling code in each RouteBuilder class.

Essentially we want a global exception handler applied to all RouteBuilder classes in a uniform way. Some months ago I faced the above requirements, googled a bit, and found this post on Stack Overflow:

http://stackoverflow.com/questions/9283861/camel-exception-handling-doesnt-work-if-exception-clause-is-defined-in-a-separat

This helped a lot in accomplishing what I wanted. So let’s break it down:

Standard routeBuilder class

 

public class MyRouteBuilder extends RouteBuilder {

  @Override
  public void configure() throws Exception {
    ExceptionBuilder.setup(this);
    from()...to();
  }
}

Notice, that we have added the line

 ExceptionBuilder.setup(this);

which sends this instance of the RouteBuilder class to our ExceptionBuilder.

Now of course we need to create our ExceptionBuilder class. This is how I have done it:

public class ExceptionBuilder {

  public static void setup(RouteBuilder routeBuilder) {
    routeBuilder.onException(Exception.class).useOriginalMessage().handled(true).to("direct://ErrorHandler");
  }
}

I have created a standard java class which has a setup method accepting a RouteBuilder object as a parameter.

It then configures the .onException handling and directs the exchange to the ErrorHandler route. The configuration I have used is based on my needs; you can configure it your way. I want to listen for all possible exceptions, use the original message and mark the exception as handled, which is why it looks like that. Now all we have left is to create our ErrorHandler route.

public class ErrorHandler extends RouteBuilder {

  @Override
  public void configure() throws Exception {
    String hashSeparator = "##################################################################################################################";
    String lineBreak = System.lineSeparator();
    from("direct://ErrorHandler").startupOrder(1).routeId("ErrorHandler")
      .log(LoggingLevel.ERROR, "ErrorHandler", hashSeparator)
      .log(LoggingLevel.ERROR, "ErrorHandler", "An error occured in the integration integration" + lineBreak)
      .setHeader("CamelMessageId", simple("${id}"))
      .setHeader("CamelContextId", simple("${camelId}"))
      .setHeader("CamelCreatedTimestamp", simple("${property.CamelCreatedTimestamp}"))
      .setHeader("CamelExceptionCaught", simple("Exception Object: ${property.CamelExceptionCaught}"))
      .setHeader("CamelStacktrace", simple("${exception.stacktrace}"))
      .setHeader("ExceptionMessage", simple("${exception.message}"))
      .setHeader("CamelFailureEndpoint", simple("${property.CamelFailureEndpoint}"))
      .setHeader("CamelFailureRouteId", simple("${property.CamelFailureRouteId}"))
      .log(LoggingLevel.ERROR, "ErrorHandler", " Headers: ${headers}" + lineBreak + lineBreak)
      .log(LoggingLevel.ERROR, "ErrorHandler", " Body: ${body}" + lineBreak)
      .log(LoggingLevel.ERROR, "ErrorHandler", hashSeparator);
  }
}

The ErrorHandler route is a normal route, but with an explicit startup order; it then sets a bunch of specific headers from the exchange and logs them. You can add and remove stuff as you please, depending on how you want your error log output to look.

Combining the three steps we have seen gives us a powerful way to do error handling in a generic manner applied to all our RouteBuilder classes. This is especially useful for integrations where the client doesn’t need a response and the error handling is the same for all routes.
 

Camel development series part 6

It has been months since I wrote a blog post on Camel. This has mainly been due to a lot of activity in my personal and professional life, but I thought I would get back on track with the blog.

In today’s post we will look at some basic beginner "best practices". Note: if anyone reading this disagrees, then please do comment, as it is always interesting to receive constructive feedback.

Always name your CamelContext.

  • It does not matter if you are writing in pure Java or a mix of Blueprint and Java. Always name your CamelContext.
  • It makes writing and reading logs easier as there is now a user friendly name to look out for.
  • It gives your reader an understanding of what this Context is supposed to do.
  • It helps to get into the spirit of naming your Camel objects such as routes.
  • Code example:

    CamelContext context = new DefaultCamelContext();
    context.setName("MyCamelContext");
    

    #Blueprint version

    <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
     
        <camelContext xmlns="http://camel.apache.org/schema/blueprint" name="MyCamelContext">
            <route>
                <from uri="timer:test" />
                <to uri="log:test" />
            </route>
        </camelContext>
     
    </blueprint>
    

Always give a routeId to your routes.

  • Give a meaningful id to your route. Amongst other things, it helps to show which routes have started when the CamelContext starts. Otherwise you will simply see route1, route2, route3 without knowing which of these matches your routes.
  • It makes it much easier to write log statements if you have a routeId. Each log statement that belongs to a route can use the routeId as its category. This makes it much easier to know which route published which log statement.
  • It gives you an enormous benefit during testing, because you can intercept, replace and do other things based on your routeId. I do a lot of replacement during tests, for example I replace my eventbus route with a hardcoded response. You need a routeId for this (see the sketch after the code example below).
Always give an id to statements that are configurable.

  • For example, I may send my exchange to a bean method, to a processor, or to another route via direct. Give meaningful id names to each .to() statement. It makes testing so much easier, as you can intercept and replace .to() endpoints, and that is a big bonus during tests.
  • Code example:

    from("inputUri).routeId("MyRouteId").to("outputUri).id("MyId");
    

Keep inline processors short

  • In Camel you can create inline processors to write normal Java code and manipulate whatever is on the exchange or the headers. This is good for quick and dirty operations, but if you are doing complex business logic or operations that should be configurable or hidden, then create a Processor class instead and invoke that.
  • Keeping inline processor code short also makes the Camel DSL code easier to read.
  • By putting complex reusable code in a Processor class you ensure that other routes can call it as well. This means you write the code once and call it from other parts of the CamelContext. You then don’t need to write the same inline code everywhere.
  • Code example:

    from("activemq:myQueue").process(new Processor() {
        public void process(Exchange exchange) throws Exception {
            String payload = exchange.getIn().getBody(String.class);
            // do operations on the payload that should only happen at this //point in the route and only in this route.
    // Add simple logic for e.g. header or body manipulation.
    //Complex logic goes in separate Processor class.
           exchange.getIn().setBody("Changed body");
       }
    }).to("activemq:myOtherQueue");
    
    #Processor class
    public class MyProcessor implements Processor {
      public void process(Exchange exchange) throws Exception {
        String body = exchange.getIn().getBody().toString();
        body="changed";
    exchange.getIn().setBody(body);
    //Create a class when you need to class the Processor class from several parts of your route or routes or has configurable parts or contains complex
    business logic.
    
      }
    }
    

Generate your Camel endpoint uris

  • Your endpoint URIs, whether in a from() or a to(), should be generated or injected rather than hardcoded.
  • Avoid from("file://test/?fileName=test.txt"). Instead do from(fileUri) where fileUri is created in a utility class, some bean method, or injected via a property. The same goes for .to() endpoints.
  • Generating URIs makes testing much easier because you can generate other test endpoints and simply inject those instead, and never touch your main URIs. If you have hardcoded URIs the same is possible but cumbersome.
  • In particular, if you have different URIs for different environments then you cannot hardcode them, otherwise you are creating different code for different environments and that is bad practice. Here you definitely need to inject them via a property file, selected by some environment variable that tells you which environment the context is running in.
  • Code example:

    @PropertyInject("{{fileUri}}")
    private String fileUri;
    @PropertyInject("{{toUri}}")
    private String toUri;
    ...
    ...
    from(fileUri).to(toUri);
    

Don’t complicate your logs

  • Your log files should contain at least two levels. One at debug level, intended for technically minded people who want in-depth knowledge of what is going on. Then you have logs at info level, which should only give a brief description of what has happened. The main steps in a route should be at info level. The details of the exchange should be at debug level.
  • Do not put exchange headers, body or properties at info level unless they are of business importance. Info level logs should be of the form:
    Order 123 generated. Customer request received. Message published. No technical details present.
  • At debug level you should log all aspects of the exchange to provide maximum details for debugging and troubleshooting. That means log ${headers}, ${body} and ${properties} where applicable. Because debug levels are not on by default the log files will not get massive straight away.
  • Link the log statements with the routeId so you know which route generated that statement. This makes it easier to backtrack from the logs to your code.
  • Code example:

    #Don't do this. It contains an unnecessary amount of information.
    from(fileUri).routeId("MyRouteId").log(LoggingLevel.INFO,"MyRouteId", "Order ${body} received").to(toUri);
    #Log like this.
    from(fileUri).routeId("MyRouteId").log(LoggingLevel.INFO,"MyRouteId", "Order from customer received").log(LoggingLevel.DEBUG, "MyRouteId", "Order from customer with body: ${body} and headers ${headers}").to(toUri);
    

Split your logic into several routes

  • Yes, you can create a gigantic 1000-line Camel route, just as you can with a normal Java class, but it is not considered good practice.
  • Decide how your problem can be broken into individual parts, let each part’s action be done by a specific route, and chain the routes together.
  • Chaining routes means you can replace routes during tests or due to requirement changes, which is harder to do when all the code is in one giant route.
  • Chaining routes makes it easier to understand the different components, and you can programmatically or via external commands stop individual routes for troubleshooting. You cannot do this if all the code exists in one giant route.
  • Use common sense of course. You can always start with a big route in order to check your code and then split the code into smaller routes. It is more important to get the code working first, then make it more coherent, and then start with optimization.
  • Code example:

    #Don't do this. It is a massive route.
    from(fileUri).routeId("MyRouteId").log(LoggingLevel.INFO,"MyRouteId", "Order ${body} received").to(eventBusUri).convertBodyTo(String.class).removeHeaders("eventbusHeaders").to()....;
    #Do more like this as each step is broken down and the implementation is written in individual routes
    from(fileUri).routeId("MyRouteId").to(eventBusRoute).to(processOrderRoute).to(generateReceiptRoute).to(eventbusRoute).to(saveReceiptRoute);