First impressions of IBM ACE 11

Hello and welcome to another blog series. Unlike the usual posts which are about Apache Camel (and related) development, this post will look at first impressions of IBM App Connect Enterprise v11.

If you are interested in trying it out for yourself you can get hold of the developer edition and test it for free. The link can be found here: https://www-01.ibm.com/marketing/iwm/iwm/web/pick.do?source=swg-wmbfd

Background

IBM App Connect Enterprise v11 is the successor to IBM Integration Bus 10, which was released back in 2015. IIB 10 was a stable evolution of the product line from versions 8 and 9, little by little adding new functionality. IBM usually releases a new version every 2–3 years, so in 2018 they have now released IBM ACE 11, which is a combination of two existing products: IBM IIB 10 and IBM App Connect.

I have spent a few days playing with it and getting a feel for it, so let's go through it in a bit more detail.

Installation

Installation is as simple as with IIB 10. Just download and run the installer and you are good to go. You can install it on the same machine as IIB 10. After installation you get a new toolkit and a console. There is, however, no Integration node available. We'll look into this in the next sections.

As far as installation goes, it is really easy, quick and painless (way easier compared to some of its competitors).

Tooling

This has in my view been one of the weaker aspects of the platform. It is still dependent on Eclipse, and hardly anything new has been added tooling-wise. You are still dealing with Apps, Libraries and bar files. Anyone coming from IIB 10 will feel instantly at home.

In my view, there is one big part still missing from the tooling, and that is a powerful test framework, similar to JUnit in the Java world. It is a shame they haven't added this. If anyone wants to vote for my RFE to add this, please go here: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=119156

I would have liked it if they had somehow allowed developers to use other IDEs as well, such as IntelliJ IDEA.

Like I said, it looks extremely similar to IIB 10 in terms of layout, nodes and components. Some extra nodes exist that are related to App Connect and Watson, but I doubt the majority of users will benefit from them.

You basically develop as before:

  1. Create an App.
  2. Build your flows.
  3. Build your bar file.
  4. Deploy.

Essentially, IIB 10 and ACE 11 are lacking in one fundamental way: as a developer I cannot write integrations the way others develop applications. I cannot write my tests first, run them, see them fail, then add code, run the tests again and watch them go green. This test-first style of working is something I really hope IBM looks at and adds support for. The sketch below shows what that workflow looks like in the Camel world.
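For contrast, here is a minimal test-first sketch using Camel's camel-test support; the route, endpoints and expected values are placeholders, not code from any real project:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;

public class GreetingRouteTest extends CamelTestSupport {

    @Override
    protected RouteBuilder createRouteBuilder() {
        return new RouteBuilder() {
            public void configure() {
                // Written after the test below failed; test-first in action
                from("direct:in").transform(simple("Hello ${body}")).to("mock:out");
            }
        };
    }

    @Test
    public void shouldGreet() throws Exception {
        getMockEndpoint("mock:out").expectedBodiesReceived("Hello world");
        template.sendBody("direct:in", "world");
        assertMockEndpointsSatisfied();
    }
}

Run the test, watch it go red, write the route, watch it go green. That is the loop missing from the ACE toolkit.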

I will discuss the policy editor in another section.

Deployment strategy and Docker support

Better late than never, as they say. Docker has been around for a few years now and has almost become a de facto standard in application build-and-deploy pipelines. IBM decided to join the party, so they have now added Docker support for the runtime environment and added a new payment model. I have no idea yet what this means, but they are talking about some sort of "pay-as-you-go" model. Maybe they will charge per message, CPU usage or some other metric, but it is at least an improvement over their current PVU/VPC model, which has sucked for a long time.

The biggest change by far in IBM ACE 11 is the absence of an Integration node. This means there is no predefined Integration Server either. With this approach IBM wanted to add support for those who want to move away from an ESB topology to a more distributed and "lightweight" model.

In a sense, what this means is that you can create an Integration Server on any server you like, connect to it, and deploy your integrations there. You can have an Integration Server installed on an on-prem server, on a cloud server or bundled in a Docker container, and run it anywhere you like.

You can choose to have one Integration Server per App or one Integration Server with many Apps. That is up to you. Indirectly IBM is basically saying: OK, we still support on-prem, but please note that we are making changes to move towards containers and cloud. I wouldn't be surprised if most of the coming fix packs focused on additional Docker support.

Is it all good? Well, it is good that they have added additional Docker support. What is not good is:

  1. Why do I even need to create an Integration Server if I truly want to work lightweight? In an ideal scenario the bar file would contain not only my code but my runtime as well. In fact, going even further, the bar file would help generate a Dockerfile and allow me to create a Docker image based on that. Now, that would have been revolutionary for IBM (this already exists in the Spring Boot/Camel/Maven world) and would have made my life as a developer much easier. Instead I still need to care about Integration Servers. They have taken a positive step forward; I just wish they had gone all the way and added support for the bar file containing its own runtime as well.
  2. Why can't I deploy bar files via mqsibar when the Integration Server is running? Well, you can, but the Server won't detect the changes until you restart it, which is just stupid. Why would I want to restart the server in production? By the way, mqsibar is the new command to install bar files to a new Integration Server. You can of course deploy via the toolkit or the web admin. I understand what IBM is saying: yes, you can create a new Docker container with the new App and replace the existing one. But for those who still run ACE like they did IIB, this functionality doesn't give any benefit. mqsibar would have been great if it could deploy bar files whilst the Server is running so that it would detect changes on the fly. Instead it requires a restart! I mean, Apache Karaf had support for this years ago.

Policy editor

My biggest complaint by far is the new Policy editor, which is supposed to replace what were previously configurable services.

Now, to configure a new Integration Server you change properties in the server's server.conf.yaml file. You write YAML and you configure the Integration Server. That is easy and simple. So why did they introduce a new artefact that we need to maintain and take care of? We now need to write a policy and associate that policy with a message flow or node? Couldn't IBM just have used YAML files all the way? Not only that, but if I update my policy and want to deploy it, I can't just overwrite the existing one. I have to delete the policy and all the flows that depend on it, then deploy the new policy and redeploy the apps. I really hope they change this in upcoming fix packs. Just let us write our YAML files, and please add support for live updates. Nobody wants to delete or remove stuff.

Is it worth it?

If you are on IIB 10, then no, not really. At least not now. Wait a year or so until a couple of fix packs have arrived and see what new functionality they add. I don't see anything in ACE 11 that is revolutionary or that hasn't existed in competitors' products. I would have hoped IBM would take a big step forward. Instead it took a few small steps to catch up, and it is still not there.

Camel development series part 15

Hello and welcome to another short Camel development series. This time we will look at a real-life use case and how we can run Camel inside Spring Boot. We will also see how Camel can be used not just for enterprise integration but as a useful tool in your toolbox.

In our case we work daily with IBM Integration Bus. We needed an automatic and easy way to generate, on a daily basis, a list of execution groups, deployed applications and REST APIs together with some of their properties. The aim was to post this JSON message to a Logic App on Azure, which then generates a SharePoint table where we could see what is available in a specific environment and when it was last deployed.

To go straight to the code: https://github.com/SoucianceEqdamRashti/Integration/tree/master/artefacts . Note there is probably a lot that can be improved; I have not included any tests and the logging is basic to say the least, but the overall functionality should work. You can of course extend it and customize it to your IIB environment and needs. The advantage of doing it this way is that you don't need to mess with mqsi commands and mqsi profiles. You use the API to get all the data. This means you can incorporate it as part of your CI/CD process or run it remotely. The main problem with the IIB v10 API is that there is no way to get all the properties of all deployed artefacts at once. You have to first get a list of execution groups, then for each execution group get a list of apps or REST APIs, and then for each such component get its properties. Finally you combine the whole thing into a complete JSON message. A sketch of that chain of calls follows below.
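To give a feel for the shape of those chained lookups, here is a minimal Camel sketch. The admin API paths and JSON field names are illustrative assumptions on my part, not the exact ones; see the repo for the real calls:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class IibArtefactsRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("timer:extract?repeatCount=1")
            .setHeader(Exchange.HTTP_METHOD, constant("GET"))
            // 1. list the execution groups (path is an illustrative assumption)
            .to("https4://{{iib.endpoint}}/apiv1/executiongroups?bridgeEndpoint=true")
            // 2. fan out: one message per execution group name
            .split().jsonpath("$.children[*].name")
                .setHeader("egName", body())
                // 3. list the apps of each group; fetching each app's properties works the same way
                .toD("https4://{{iib.endpoint}}/apiv1/executiongroups/${header.egName}/applications?bridgeEndpoint=true")
                .log("Apps in ${header.egName}: ${body}")
            .end();
            // 4. finally, everything is aggregated into one JSON message (omitted in this sketch)
    }
}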

Solution

You basically start the Spring Boot app by running the following command:

java -Dcom.sun.net.ssl.checkRevocation=false -Dspring.config.location=<path-to-your-application-props-file> -Diib.endpoint=<url-to-your-iib-endpoint-including-port> -Denvironment=<specify-environment> -jar iibartefacts.jar

Here we are providing a few JVM properties (note that the -D flags must come before -jar, otherwise they are passed to the application as program arguments instead of being set as system properties):

  1. We disable SSL validation. If you need certificate validation, of course remove this parameter. I don't need it in our case, so that's why it's there.
  2. I provide the path to my Spring Boot application.properties file. I don't want it bundled in my fat jar, so here I provide a full path to its location.
  3. The parameter iib.endpoint is a complete URL to your IIB endpoint. It could be test-myiib.com:4414.
  4. Finally, the parameter environment is required because in the JSON message we provide the type of the environment, i.e. whether it is test, preprod or prod. If you don't need this then you need to change the code as well.

The Spring Boot app uses the in-memory database H2 to store the extracted data (list of execution groups etc.). The H2 config is located in the application.properties file, and you can view the schema.sql file to see how I configured the table. Should you need further properties, simply add additional columns.

Then there is the straightforward Spring Boot main class which starts the main route. This route simply kicks off everything and calls the other routes. Finally, the JSON message is generated and saved to a file. If you need to post the JSON somewhere else, simply change the endpoint.
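For reference, such an entry point can be as small as the sketch below; the class and package names here are placeholders, not the ones in the repo. With camel-spring-boot-starter on the classpath, any RouteBuilder beans are started automatically:

package com.example.iibartefacts; // placeholder package

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class IibArtefactsApplication {
    public static void main(String[] args) {
        // Boots the Spring context, which in turn starts the Camel routes
        SpringApplication.run(IibArtefactsApplication.class, args);
    }
}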

The logging and error handling could be improved, and some unit tests added, but this was basically an attempt to use Spring Boot and Camel to create a tool that would complement an existing platform. It took me roughly 1.5 days of work to get it working properly and tidied up.

If you run into any issues or have suggestions let me know.

Camel development series 14

Hello and welcome to yet another Camel development series. In this post I will describe how to use some basic Camel concepts together with a Telegram chat bot, accessing a REST API and pushing chat messages. You can customize this for many use cases for push/pull types of apps. Finally, I will show how you can deploy this non-web app to Heroku and see the result.

Basic flow

The basic flow of the app is as follows:

  1. On startup it calls a REST API found at svenskaspel.se, which is the Swedish government's lottery site. They have a REST API where you can access various lotteries, draws and results.
  2. The app first calls an endpoint to get the draws for this week.
  3. It then retrieves the draw numbers for the Saturday and Wednesday draws.
  4. It calls another endpoint and receives the actual lottery numbers for the draws on those days.
  5. It then pushes out a friendly message to a Telegram chat bot.

Now of course, you can add additional features such as sending commands, getting more data, and even returning random numbers to suggest for playing the lottery. The final push step looks roughly like the sketch below.
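A minimal sketch of the Telegram push, assuming the camel-telegram component; the bot token and chat id are placeholders resolved from properties:

import org.apache.camel.builder.RouteBuilder;

public class NotifyRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:notify")
            // camel-telegram producer; token and chat id come from properties
            .to("telegram:bots/{{telegram.token}}?chatId={{telegram.chatId}}");
    }
}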

First the code

You can find all the code here: https://github.com/SoucianceEqdamRashti/svenskaspel . I will not go through the code in detail, since most Camel users should be familiar with it; it is pretty standard Camel functionality.

Maven setup

To get our app to deploy to Heroku we need to be able to run our pom file with the package command. To do that we need to assemble our app, and this can be done using the appassembler-maven-plugin. Simply add the plugin below and change the parts to fit your project. Pay attention to the CamelWorker program, which is assembled under target. This will be used by Heroku when starting our worker process.

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <version>1.1.1</version>
  <configuration>
    <assembleDirectory>target</assembleDirectory>
    <programs>
      <program>
        <mainClass>org.souciance.camel.lotto.MainApp</mainClass>
        <name>CamelWorker</name>
      </program>
      <program>
        <mainClass>org.souciance.camel.lotto.MainApp</mainClass>
        <name>svenskalotto</name>
      </program>
    </programs>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>assemble</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Heroku setup

Now we come to the Heroku setup. Let's follow these steps.

1. Create an account at Heroku.

2. Download the Heroku CLI and install it.

3. Before proceeding, ensure that your folder structure is as follows:
/pom.xml

/.git

That is, don't have a long folder structure leading to your pom file. This is because Heroku looks in the root folder for the pom.xml, the system.properties file and the Procfile. If your repo is not structured like this, rearrange it before proceeding.

4. Verify by opening a command prompt and typing "heroku". You should receive a prompt to log in. Enter your credentials to log in. Afterwards, just typing heroku should display the Heroku help output.

5. Then it is time to create your Heroku app. In the command prompt, type heroku apps:create <app-name>:

heroku apps:create svenskalotto
Creating svenskalotto… done
https://svenskalotto.herokuapp.com/ | https://git.heroku.com/svenskalotto.git

The link above shows the remote git repo that you need to add as a remote in your local git repository. I called mine heroku. Ensure that if you type git remote -v, the "heroku" remote is shown with the link to your app repo.

6. Now, in order for our app to work on Heroku, a couple of things are needed. Heroku by default works with web apps where you have some website to show. Our Camel app is a standalone Java app, so we instead need to use a Heroku worker process rather than a web process. This is so that Heroku understands that our app is a "backend" app. For more on this, look into Heroku processes. We also need to specify the JDK to use, add JVM arguments to ignore SSL certificate validation, and create an environment variable to store our API access key.

7. Heroku requires two files: a system.properties file and a Procfile. Create both of them and put them in the root folder of your repo. You can look at my repo https://github.com/SoucianceEqdamRashti/svenskaspel to see what values I used. Essentially, the system.properties file tells Heroku which Java runtime and parameters to use when running our app, while the Procfile contains the type of process to run on the Dyno. Their general shape is sketched below.
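As an illustration only (the real values are in the repo), the two files have roughly this shape, with the worker entry pointing at the script that the appassembler plugin generates under target/bin:

# system.properties
java.runtime.version=1.8

# Procfile
worker: sh target/bin/CamelWorker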

8. Once you have added the two files, remember to commit and push them to your repo.

9. Now it is time to push to your Heroku repo. In your command prompt, at your project folder, type:

git push heroku master

You should now see Heroku receive your code and build the app.

If the app did not start immediately you can view the logs using:

heroku logs

or

heroku logs:tail

10. Now, our app will start but will crash. Why? Because we have not created an environment variable for our access key. Log in to your app dashboard and go to Settings. There is an option to show config vars. Click it and a key-value input form will appear. Write the name of your environment variable and its value. Ensure you have referred to it in your Camel code. Once you save it, the app will restart.

11. If everything works, your Telegram bot should show some nice lotto text 😉

If you have any questions on the code or the setup, let me know. Eventually I hope to expand this into a more push-based chat.

Camel development series 13

Hello, and welcome to another post regarding development with Camel and all things related to Camel.

It's been far too long since my previous post, and although any excuse is a bad excuse, since then I have switched back to consulting and ventured back into the commercial world. The most noticeable thing is of course how far ahead Camel is, even of the commercial vendors. Already in 2015 you could dockerize Camel; with the commercial vendors these are either alpha features or only just production ready. More importantly, you can't just auto-scale as you like, as there are complicated licensing agreements to take into consideration. Another difference is that the commercial vendors have a huge obstacle: it is practically impossible right now to develop true microservices with them unless you pay loads of cash. With Camel, you just write your app, dockerize it and go. I think if the commercial vendors want to catch up they need to break apart their architecture and make things more modular. If I am writing a REST API I don't want a gazillion other parts connected to the runtime that clog resources. I think they are moving in this direction, but it will be at least some time before they are there.

Anyway, here I thought I'd show a couple of new features in Camel 2.20 which look quite cool.

JSON Schema validation

In order to use this component, add camel-json-validator to your pom file.
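A minimal dependency entry would look like the one below (the version is just an example; pick the one matching your Camel release):

    <dependency>
      <groupId>org.apache.camel</groupId>
      <artifactId>camel-json-validator</artifactId>
      <version>2.20.0</version>
    </dependency>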
Here is an example RouteBuilder class:

import org.apache.camel.ValidationException;
import org.apache.camel.builder.RouteBuilder;

public class JsonSchemaValidation extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file:resources/data.json?noop=true")
            .doTry()
                // Validate the body against the schema
                .to("json-validator:resources/schema.json")
                .log("${body}")
            .doCatch(ValidationException.class)
                .log("Failed")
            .doFinally()
                // The original snippet sent to a "finally" endpoint, which is not
                // a valid URI; a log step keeps the intent
                .log("Validation attempt finished")
            .end();
    }
}

As you can see, we grab the JSON data, send it to the validator component with the schema specified, and catch any validation error. Pretty easy, right?

Health check API

I don't have any code to show here, but it is pretty cool that Camel has started to introduce a health check API. I think that was one of the things missing previously. Once it becomes easy to check from an API whether a CamelContext is available or which routes are up, this will do wonders for monitoring.

JSONPath writeAsString

One annoying thing, which isn't Camel related per se, was that when you used jsonpath and wanted to get the value of a JSON field, it would write it as ["myvalue"] rather than "myvalue". Finally there is a writeAsString option to do this for you.
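If I read the release notes correctly, the Java DSL exposes this roughly as below; the header name and path are placeholders, so treat it as a sketch rather than gospel:

.setHeader("name").jsonpathWriteAsString("$.data.name") // yields myvalue instead of ["myvalue"]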

Support for AWS Lambda

In Camel 2.20 there is now direct support for AWS Lambda function calls. See more info and examples here: https://github.com/apache/camel/blob/master/components/camel-aws/src/main/docs/aws-lambda-component.adoc

This is pretty cool, as it means you can use Camel to call your AWS Lambda environment and have a mixture of both types of functionality. You could have Camel running in Docker on AWS, calling your Lambda functions! A call could look roughly like the sketch below.
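As a rough sketch only (check the component docs linked above for the exact options; the function name, credentials and region are placeholders):

from("direct:invokeLambda")
    // invoke the named Lambda function with the message body as payload
    .to("aws-lambda://myFunction?operation=invokeFunction&accessKey={{aws.accessKey}}&secretKey={{aws.secretKey}}&region=eu-west-1");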

Camel development series 12

Hello and welcome to another Camel development series. From now on I will paste less code into the blog and instead refer to the GitHub repo where the code is stored. It makes writing the blog easier, and that helps motivate me to write more often.

In this series I will touch upon a frequent scenario. Our problem is that we want to expose an HTTP endpoint in order to allow the client to perform an HTTP GET operation. We then want to call the backend over HTTP(S), retrieve say a picture, and allow it to be displayed in the browser. How can we accomplish this?

Well if you want to jump straight to the code go here:
https://github.com/SoucianceEqdamRashti/Integration/blob/master/CamelDemo/src/main/java/org/souciance/integration/http/RedirectHTTP.java

As you can see, we accomplish this in four lines of code. We first expose our undertow endpoint and ensure only HTTP GET operations can be performed. Then we set the Exchange.HTTP_QUERY header to our query parameter. We then call the backend HTTPS service using the http4 component. The key here is to enable the parameter bridgeEndpoint=true so that Camel understands it is acting as a proxy and doesn't mix up the endpoints. Finally, we convert the payload to bytes and return it to the original client. That's all! The shape of the route is roughly as follows.
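A minimal sketch of that shape; the backend host, path and query parameter name are placeholders, not the values from the repo:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class RedirectHttpRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Expose a GET-only endpoint and proxy the call to the backend
        from("undertow:http://0.0.0.0:8080/pictures?httpMethodRestrict=GET")
            .setHeader(Exchange.HTTP_QUERY, simple("id=${header.id}"))
            .to("https4://backend.example.com/pictures?bridgeEndpoint=true")
            .convertBodyTo(byte[].class);
    }
}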

Camel development series 11

Hello and welcome to another Camel development series.

In this series I thought we'd go through some more hints and tricks I learned after one year of working with Camel.

Avoid unnecessary object creation

Do not create unnecessary object models of your data. As a Java developer you tend to think of everything in terms of objects. But should you always create objects? The whole point of integration is to remain stateless and act as an interface for transferring data. So I would say: where possible, don't create objects. This has nothing to do with performance, since object creation is cheap, but more to do with not having to write unnecessary code for what is essentially pass-through behaviour. Let us work with an example.

So you receive a JSON message, you need to do some content-based routing, and then generate a SOAP message and send it to some web service.

Now, you could of course create an object model for the JSON message or write a bean, and the same goes for the SOAP XML. But unless you are dealing with complicated message structures or attachments, you don't need to write code that way when it comes to integration. Here is an approach that solves it in another way.


.choice()
    .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch1")))
        .to("direct:route1")
    .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch2")))
        .to("direct:route2")
    .otherwise()
        .throwException(IllegalArgumentException.class, "Unknown request command received!")
.end()

In the above you can see that I don't create any objects. I simply use the excellent JsonPath library, or rather the Camel version of it, camel-jsonpath, perform a lookup in the data, and based on that route to my individual routes. This allows for writing less and simpler code, and I stay within the Camel DSL. It also makes it easy to understand what the integration is doing, and I avoid jumping from one class to another.

Now, about creating a SOAP message. Again, you could write JAXB code, use some other XML processing, or use CXF or some other framework. But if your SOAP messages are relatively simple and do not contain attachments, then you can skip all that and use Freemarker to inject data into your SOAP message. Here is an example:

.setHeader("requestId").jsonpath("$.data.id")
.to("freemarker:request.xml")

Here is my freemarker template file:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:lan="http://test">
  <soapenv:Header/>
  <soapenv:Body>
       <request>${headers.requestId}</request>
  </soapenv:Body>
</soapenv:Envelope>

As you can see, I extract the data again using jsonpath and insert it into a header. Then I route to a Freemarker endpoint. In my Freemarker template I have written a simple expression for where the data should be injected. That's all! I don't write any XML code or worry about namespace creation and things like that. Exactly the same works in the opposite direction: you can use xpath to extract data from XML messages and use a JSON Freemarker template with a simple expression, as sketched below.
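A minimal sketch of that opposite direction, with a made-up xpath and template name:

.setHeader("requestId").xpath("/Envelope/Body/request/text()", String.class)
.to("freemarker:response.json")

where the response.json template could contain something like {"id": "${headers.requestId}"}.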

Creating environment variables in your tests

A lot of the time you need to access environment variables, but you don't want to use real values in your tests, just some fake value, and you don't want to change your code. You still want to access the same environment variable, just with a different value.

I use the excellent library called System Rules by Stefan Birkner. It works really well and is really easy to use. If you work with Maven, add this to your pom:

    <dependency>
      <groupId>com.github.stefanbirkner</groupId>
      <artifactId>system-rules</artifactId>
      <version>1.16.0</version>
      <scope>test</scope>
    </dependency>

How do you use it?

In your JUnit test class add the following:

import org.junit.Before;
import org.junit.Rule;
import org.junit.contrib.java.lang.system.EnvironmentVariables;

@Rule
public final EnvironmentVariables environmentVariables = new EnvironmentVariables();

@Before
public void setUp() throws Exception {
    environmentVariables.set("VAR1", "1");
    environmentVariables.set("VAR2", "2");
    environmentVariables.set("VAR3", "3");
}

As you can see, we add the rule and create a variable of type EnvironmentVariables. Then in your setUp method you simply create the variables and add values. As easy as that!

Testing JSON messages in your route tests

Quite often when you write route tests, you will need to check whether the JSON you produce matches some expected JSON. To do complicated JSON matches, or even simple ones, I simply use the excellent library called JSONAssert. Add this to your pom:

    <dependency>
      <groupId>org.skyscreamer</groupId>
      <artifactId>jsonassert</artifactId>
      <version>1.4.0</version>
      <scope>test</scope>
    </dependency>

To use it, in your @Test method add this:

JSONAssert.assertEquals(expectedResponse, response, false);

There are way more complicated things you can do with JSONAssert, but if you want a simple way of comparing JSON messages, this is great.

Camel development series 10

Hello everyone,

Well, as usual it has been a long time since I wrote here, but a new year has started, so a nice and simple update seemed like a good idea.

This time I will keep it simple and instead show how you can verify json messages against a given schema in your Camel route.

The aim is thus:

Given a json message and a predefined json schema, we want to validate the message against the schema and return the result of the validation.

Camel route

Assuming you have created a Camel project, here is how my (very) basic code looks:

package org.souciance.integration.validate;

import org.apache.camel.builder.RouteBuilder;
import org.apache.commons.io.IOUtils;

import java.nio.charset.Charset;

public class CamelValidateJson extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        ClassLoader classLoader = getClass().getClassLoader();
        String schema = IOUtils.toString(classLoader.getResourceAsStream("jsonvalidate/schema.json"), Charset.defaultCharset());
        String json = IOUtils.toString(classLoader.getResourceAsStream("jsonvalidate/data.json"), Charset.defaultCharset());
        from("timer://cameldemoValidateJson?repeatCount=1").routeId("ValidateJson")
            .setBody(constant(json))
            .setProperty("Schema", constant(schema))
            .bean(ValidateJson.class, "isValidJson")
            .choice()
                .when(header("isValid").isEqualTo(true))
                    .log("Valid json!")
                .when(header("isValid").isEqualTo(false))
                    .log("Invalid json!")
            .end();
    }
}

I have kept the steps very simple. The idea is to focus on the schema validation and nothing else, so I am manually loading the data and the schema file. You could of course inject the schema path via some variable or some other way.

I then start the route via a timer; again, this is to keep it simple.

I then create an exchange property called "Schema" and insert the schema into it.

Then I call a bean using .bean and as parameters give the bean class and bean method.

The bean will return the result of the validation inside a header.

I use the choice() and when() to log the result.

The bean code looks like this:

package org.souciance.integration.validate;

import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.exceptions.ProcessingException;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchemaFactory;
import com.github.fge.jsonschema.main.JsonValidator;
import org.apache.camel.Exchange;

import java.io.IOException;

/**
 * Created by moeed on 2017-01-15.
 */
public class ValidateJson {
    /**
     * Method to validate some json data based on a json schema.
     * The result is written to the "isValid" header.
     * @throws IOException
     * @throws ProcessingException
     */
    public static void isValidJson(Exchange exchange) throws IOException, ProcessingException {
        final JsonNode data = JsonLoader.fromString(exchange.getIn().getBody().toString());
        final JsonNode schema = JsonLoader.fromString(exchange.getProperty("Schema").toString());

        final JsonSchemaFactory factory = JsonSchemaFactory.byDefault();
        JsonValidator validator = factory.getValidator();

        ProcessingReport report = validator.validate(schema, data);
        System.out.println(report);
        // isSuccess() is more robust than matching on the report's string form
        exchange.getIn().setHeader("isValid", report.isSuccess());
    }
}

The method isValidJson is very simple. It receives the exchange. It extracts the json data from the body and the json schema from the exchange property.

Now to the main part. I use the json-schema library to do the actual schema validation. Whether the validation succeeds or fails, I record the result in the exchange header.

If you log the output after running it with intentionally bad data for a given schema, you will see something like this:

com.github.fge.jsonschema.core.report.ListProcessingReport: failure
--- BEGIN MESSAGES ---
error: object has missing required properties (["age"])
level: "error"
schema: {"loadingURI":"#","pointer":"/items"}
instance: {"pointer":"/7"}
domain: "validation"
keyword: "required"
required: ["_id","about","address","age","balance","company","email","eyeColor","favoriteFruit","friends","greeting","guid","index","isActive","latitude","longitude","name","phone","picture","range","registered","tags"]
missing: ["age"]
--- END MESSAGES ---

As you can see, I did not have the property "age", so the validation failed. With the age property put back in the data, the output will simply be "success".

This is just a simple way of doing JSON validation in your Camel routes. You can use it whilst doing REST calls or simple file-based data manipulation. For more info, here is the source code on my GitHub:

https://github.com/SoucianceEqdamRashti/Integration/tree/master/CamelDemo/src/main/java/org/souciance/integration/validate