A (brief) review of ARM templates in Azure

Hello again

This blog has been very quiet for the past months, but after working with the Azure Integration stack and a few other Azure components I think it is about time to write a post. Specifically, this post will simply give my thoughts after having worked with ARM templates for less than a year.

I understand Microsoft’s idea behind ARM templates: you want a universal way of describing any resource. In theory that should make it easy for you and easy for users. However, a language is only as good as the ecosystem of tools that aid in its usage and development.

Yes, you can work with ARM templates using the Azure CLI, PowerShell, Visual Studio Code and any old JSON editor, but the main problem is just that: an ARM template can describe any resource in the Azure world. This makes it hard for developers, as each ARM template will look the same but still be different. A Logic App ARM template will have the same overall structure as an API Management template, but there will still be a lot of additional fields that differ.

Consider now a tool like Ansible, which has divided each resource or component into modules. That makes it extremely easy for developers, as they simply refer to the correct module and use the variables and functionality available. A module for creating a VM in AWS will look different from a module creating Docker containers. Not to mention, it is in YAML, which is more user friendly than JSON. Here is an example of a playbook for creating a VM in AWS:


# Single instance with ssd gp2 root volume
- ec2:
    key_name: mykey
    group: webserver
    instance_type: c3.medium
    image: ami-123456
    wait: yes
    wait_timeout: 500
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 8
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
    exact_count: 1

Now an ARM template has the following structure:

    {
      "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "",
      "parameters": {  },
      "variables": {  },
      "functions": [  ],
      "resources": [  ],
      "outputs": {  }
    }
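To make the comparison concrete, here is a sketch of what a minimal template looks like with one real resource filled in (the resource type, API version and SKU here are illustrative choices, not from any particular project):

```json
{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2018-02-01",
      "name": "[parameters('storageAccountName')]",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```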

Which is easier to read? I prefer the Ansible YAML format, which tells me immediately via the ec2 tag that I am working with AWS compute. I don’t need to read any more to find out.

For me, ARM templates are good for software to parse but not for humans to manipulate. Microsoft needs to invest more in tooling so that developers can work with YAML or with third-party tools such as Terraform, to make it easier to work with ARM templates.

Another thing is that ARM templates do not seem to be idempotent. That is, if I am deploying something that already exists, and no changes have been made, the deployment should leave the resource untouched. This is standard practice in Ansible and a much needed feature in ARM templates.

Then there is the issue of knowing what was actually deployed. You don’t really have a way of knowing until you go and look manually afterwards. This is an interesting discussion, since now we are almost saying that infrastructure-as-code also means we should work test-driven with our infrastructure code. Here again there is not a lot of tooling support for writing integration tests for ARM templates. There is a PowerShell testing framework called Pester that people seem to be using, but I find it strange that Microsoft hasn’t released one themselves.

Another cool feature would be to somehow deploy an ARM template locally and get the ability to browse or view the component similar to how it would appear in the portal. Now that would be awesome!

Another aspect has to do with structuring your code and how to do it in an efficient manner. Ansible has a clear structure for how to organize your playbooks to deploy complete solutions. I don’t think ARM templates are that sophisticated yet. This means each company will have to invest in its own best practice rather than Microsoft laying down the foundation.

I think ARM templates are a good jump forward for Microsoft. New features are added and things are evolving. But as it stands, I think AWS makes devops much easier, and there is far more support in third-party tooling. I think this will change in the next couple of years though, so let’s see.


First impressions of IBM ACE 11

Hello and welcome to another blog series. Unlike the usual posts which are about Apache Camel (and related) development, this post will look at first impressions of IBM App Connect Enterprise v11.

If you are interested in trying it out for yourself you can get hold of the developer edition and test it for free. The link can be found here: https://www-01.ibm.com/marketing/iwm/iwm/web/pick.do?source=swg-wmbfd


IBM App Connect Enterprise v11 is the successor to IBM Integration Bus 10, which was released back in 2015. IIB 10 has been a stable evolution of the product series since versions 8 and 9 and has little by little added new functionality. IBM usually releases a new major version every 2–3 years, so in 2018 they have now released IBM ACE 11, a combination of two existing products: IBM IIB 10 and IBM App Connect.

I have spent a few days playing with it and getting a feel for it, so let’s go through it in a bit more detail.


Installation

Installation is as simple as in IIB 10. Just download and run the installer and you are good to go. You can install it on the same machine as IIB 10. After installation you get a new toolkit and a console. There is, however, no integration node available. We’ll look into this in the next sections.

As far as installation goes, it is really easy, quick and painless (way easier compared to some of its competitors).


Toolkit

This has in my view been one of the weaker aspects of the platform. It is still dependent on Eclipse and hardly anything new has been added tooling-wise. You are still dealing with apps, libraries and bar files. Anyone coming from IIB 10 will instantly feel at home.

In my view, there is one big part still missing from the tooling, and that is a powerful test framework, similar to JUnit in the Java world. It is a shame they haven’t added this. If anyone wants to vote for my RFE to add this please go here: http://www.ibm.com/developerworks/rfe/execute?use_case=viewRfe&CR_ID=119156

I would also have liked it if they had somehow allowed developers to use other IDEs, such as IntelliJ.

Like I said, it looks extremely similar to IIB 10 in terms of layout, nodes and components. Some extra nodes related to App Connect and Watson exist, but I doubt the majority of users will benefit from them.

You basically develop similar to before:

  1. Create an App.
  2. Build your flows.
  3. Build your bar file.
  4. Deploy.

Essentially, IIB 10 and ACE 11 are lacking in one fundamental way: as a developer I cannot write integrations the way others develop applications. I cannot write my tests first, run them, see them fail, then add code, then run the tests again and watch them go green. This style of working is something I really hope IBM looks at and adds support for.

I will discuss the policy editor in another section.

Deployment strategy and docker support

Better late than never, as they say. Docker has been around for a few years now and has almost become a de facto standard in application build and deploy pipelines. IBM decided to join the party, so they have now added container support for the runtime environment and added a new payment model. I have no idea yet what this means, but they are talking about some sort of ”pay-as-you-go” model. Maybe they will charge per message, CPU usage or some other metric, but it is at least an improvement on their current PVU/VPC model, which has sucked for a long time.

The biggest change by far in IBM ACE 11 is the absence of an integration node. This means there is no default integration server either. With this approach IBM wants to support those who want to move away from an ESB topology to a more distributed and ”lightweight” model.

In a sense, what this means is that you can create an integration server on any machine you like, connect to it, and deploy your integrations there. You can have an integration server installed on an on-prem server, on a cloud server or bundled in a Docker container, and run it anywhere you like.
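As a sketch, standing up such a standalone server looks roughly like the following (the directory path and server name are illustrative; check the ACE documentation for the exact flags):

```shell
# Create a working directory for a standalone integration server
mqsicreateworkdir /var/ace/my-server

# Start an integration server against that working directory
IntegrationServer --name my-server --work-dir /var/ace/my-server
```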

You can choose to have one integration server per app or one integration server with many apps. That is up to you. Indirectly IBM is basically saying: OK, we still support on-prem, but please note that we are moving towards containers and cloud. I wouldn’t be surprised if most of the coming fix packs focused on additional Docker support.

Is it all good? Well, it is good that they have added additional Docker support. What is not good is:

  1. Why do I even need to create an integration server if I truly want to work lightweight? In an ideal scenario the bar file would contain not only my code but my runtime as well. In fact, going even further, the bar file would help to generate a Dockerfile and allow me to create a Docker image based on it. Now, that would have been revolutionary for IBM (this already exists in the Spring Boot/Camel/Maven world) and would have made my life as a developer much easier. Instead I still need to care about integration servers. They have taken a positive step forward; I just wish they had gone all the way and added support for the bar file containing its own runtime as well.
  2. Why can’t I deploy bar files via mqsibar while the integration server is running? Well, you can, but the server won’t detect the changes until you restart it, which is just stupid. Why would I want to restart the server in production? By the way, mqsibar is the new command to install bar files into an integration server; you can of course also deploy via the toolkit or the web admin. I understand what IBM is saying: yes, you can create a new Docker container with the new app and replace the existing one. But for those who still run ACE like they did IIB, this functionality doesn’t give any benefit. mqsibar would have been great if it could deploy bar files while the server is running and have the changes detected on the fly. Instead it requires a restart! I mean, Apache Karaf had support for this years ago.

Policy editor

My biggest complaint by far is the new policy editor, which is supposed to replace what were previously configurable services.

To configure a new integration server you now change properties in the server’s server.conf.yaml file. You write YAML and you have configured the integration server. That is easy and simple. So why did they introduce a new artefact that we need to maintain and take care of? We now need to write a policy and associate that policy with a message flow or node. Couldn’t IBM just have used YAML files all the way? Not only that, but if I update my policy and want to deploy it, I cannot just overwrite the existing one. I have to delete the policy and all the flows that depend on it, then deploy the new policy and redeploy the apps. I really hope they change this in upcoming fix packs. Just let us write our YAML files, and please add support for live updates. Nobody wants to delete or remove stuff.
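For reference, configuring the server really is just editing YAML. A hedged sketch of the kind of overrides you make in server.conf.yaml (keys as I recall them from the generated template, so verify against your own file):

```yaml
# server.conf.yaml - uncomment and edit properties, then restart the server
RestAdminListener:
  port: 7600            # admin REST / web UI port
ResourceManagers:
  JVM:
    jvmMinHeapSize: 33554432
```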

Is it worth it?

If you are on IIB 10, then no, not really. At least not now. Wait a year or so until a couple of fix packs have arrived and see what new functionality they add. I don’t see anything in ACE 11 that is revolutionary or that hasn’t existed in competitors’ products. I would have hoped IBM would take a big step forward. Instead it took a few small steps to catch up, and it is still not there.

Camel development series part 15

Hello and welcome to another short Camel development series. This time we will look at a real-life use case and how we can run Camel inside Spring Boot. We will also see how Camel can be used not just for enterprise integration but as a useful tool in your toolbox.

In our case we work daily with IBM Integration Bus. We needed an automatic and easy way to generate, on a daily basis, a list of execution groups, deployed applications and REST APIs together with some of their properties. The aim was to post this JSON message to a Logic App on Azure, which then generates a SharePoint table where we can see what is available in a specific environment and when it was last deployed.

To go straight to the code: https://github.com/SoucianceEqdamRashti/Integration/tree/master/artefacts. Note there is probably a lot that can be improved; I have not included any tests and the logging is basic, to say the least, but the overall functionality should work. You can of course extend it and customize it to your IIB environment and needs. The advantage of doing it this way is that you don’t need to mess with mqsi commands and mqsi profiles; you use the API to get all the data. This means you can incorporate it as part of your CI/CD process or run it remotely. The main problem with the IIB v10 API is that there is no way to get all the properties of all deployed artefacts at once. You have to first get a list of execution groups, then for each execution group get a list of apps or REST APIs, and then for each such component get its properties. Finally you combine the whole thing into a complete JSON message.


You basically start the spring boot app by running the following command:

java -Dcom.sun.net.ssl.checkRevocation=false -Dspring.config.location=<path-to-your-application-props-file> -Diib.endpoint=<url-to-your-iib-endpoint-including-port> -Denvironment=<specify-environment> -jar iibartefacts.jar

Note that the -D system properties must come before -jar; anything after the jar file is passed to the application as program arguments instead.

Here we are providing a few JVM properties:

  1. We disable SSL certificate revocation checking. If you need that validation, of course remove this parameter; I don’t in our case, so that’s why it’s there.
  2. I provide the path to my Spring Boot application.properties file. I don’t want it bundled in my fat jar, so here I provide a full path to its location.
  3. The parameter iib.endpoint is the complete URL to your IIB endpoint, for example test-myiib.com:4414.
  4. Finally, the parameter environment is required because in the JSON message we state the type of environment, i.e. whether it is test, preprod or prod. If you don’t need this then you need to change the code as well.

The Spring Boot app uses the in-memory database H2 to store the extracted data (list of execution groups etc.). The H2 config is located in the application.properties file, and you can view the schema.sql file to see how I configured the table. Should you need further properties, simply add additional columns.
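For illustration, a schema along these lines would do the job (the column names here are hypothetical; check schema.sql in the repo for the real ones):

```sql
-- In-memory H2 table holding one row per deployed artefact
CREATE TABLE IF NOT EXISTS artefacts (
  id INT AUTO_INCREMENT PRIMARY KEY,
  execution_group VARCHAR(255),
  artefact_name   VARCHAR(255),
  artefact_type   VARCHAR(50),   -- application or restapi
  last_deployed   TIMESTAMP,
  environment     VARCHAR(50)
);
```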

Then there is the straightforward SpringApplication main class, which starts the main route. This route simply kicks off everything and calls the other routes. Finally the JSON message is generated and saved to a file. If you need to post the JSON somewhere else, simply change the endpoint.

The logging and error handling could be improved, as could the lack of unit tests, but this was basically an attempt to use Spring Boot and Camel to create a tool that complements an existing platform. It took me roughly one and a half days of work to get it working properly and tidied up.

If you run into any issues or have suggestions let me know.

Camel development series 14

Hello and welcome to yet another Camel development series. In this post I will describe how to use some basic Camel concepts together with a Telegram chat bot and a REST API to send chat messages. You can customize this for many use cases for push/pull types of apps. Finally I will show how you can deploy this non-web app to Heroku and see the result.

Basic flow

The basic flow of the app is as follows:

  1. On startup it calls a REST API found at svenskaspel.se, which is the Swedish government’s lottery site. They have a REST API where you can access various lotteries, draws and results.
  2. The app first calls an endpoint to get the draws for this week.
  3. It then retrieves the draw numbers for the Saturday and Wednesday draws.
  4. It calls another endpoint and receives the actual lottery numbers for the draws for those days.
  5. It then pushes out a friendly message to a Telegram chat bot.

Now of course, you can add additional features, such as sending commands, getting more data, and even returning random numbers to suggest for playing the lottery.

First the code

You can find all the code here: https://github.com/SoucianceEqdamRashti/svenskaspel. I will not go through the code in detail, since most Camel users should be familiar with it; it is pretty standard Camel functionality.
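The flow from the list above can be sketched as a single route. Note this is only a sketch: the svenskaspel URLs, the jsonpath expression and the Telegram URI options are placeholders, not the exact ones used in the repo.

```java
import org.apache.camel.builder.RouteBuilder;

public class LottoRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("timer:lotto?repeatCount=1")                        // run once on startup
            .to("https4://api.svenskaspel.se/draws")             // 1-2. fetch this week's draws
            .setHeader("drawNumber").jsonpath("$.draws[0].drawNumber") // 3. pick out a draw number
            .toD("https4://api.svenskaspel.se/draws/${header.drawNumber}/result") // 4. fetch the result
            .to("telegram:bots/{{telegram.token}}");             // 5. push a message to the chat bot
    }
}
```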

Maven setup

To get our app to deploy to Heroku we need to be able to run our pom file with the package command. To do that we need to assemble our app, and this can be done using the appassembler-maven-plugin. Simply add the plugin below and adapt the parts to your project. Pay attention to the CamelWorker script, which is generated under target; this will be used by Heroku when starting our worker process.
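Since the plugin snippet didn’t make it into the post, here is a hedged sketch of the configuration (the main class is a placeholder for your own application’s; CamelWorker is the script name the Procfile will point at):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <version>1.10</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>assemble</goal></goals>
      <configuration>
        <assembleDirectory>target</assembleDirectory>
        <programs>
          <program>
            <mainClass>your.package.MainApp</mainClass>
            <id>CamelWorker</id>
          </program>
        </programs>
      </configuration>
    </execution>
  </executions>
</plugin>
```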


Heroku setup

Now we come to the Heroku setup. Let’s follow these steps.

1. Create an account at Heroku.

2. Download the Heroku CLI and install it.

3. Before proceeding, ensure that your folder structure is as follows:


That is, don’t have a long folder structure leading to your pom file. This is because Heroku looks in the root folder for the pom.xml, system.properties and the Procfile. If your repo is not structured like this, rearrange it before proceeding.

4. Verify by opening a cmd window and typing ”heroku” in the command prompt. You should receive a prompt to login; enter your credentials. Afterwards, just typing heroku should display the help output.


5. Then it is time to create your Heroku app. In the command prompt, type heroku apps:create <appname>:

heroku apps:create svenskalotto
Creating svenskalotto… done
https://svenskalotto.herokuapp.com/ | https://git.heroku.com/svenskalotto.git

The output shows the remote git repo that you need to add as a remote in your local git repository. I called mine heroku. Ensure that if you type git remote -v, the ”heroku” remote is shown with the link to your app repo.

6. Now, in order for our app to work on Heroku, a couple of things are needed. Heroku by default works with web apps where you have some website to show. Our Camel app is a standalone Java app, so we instead need to use a Heroku worker process rather than a web process. This is so that Heroku understands that our app is a ”backend” app; for more on this, look into Heroku process types. We also need to specify the JDK to use, add JVM arguments to ignore SSL certificate validation, and create an environment variable to store our API access key.

7. Heroku requires two files: a system.properties file and a Procfile. Create both of them and put them in the root folder of your repo. You can look at my repo https://github.com/SoucianceEqdamRashti/svenskaspel to see what values I used. Essentially, the system.properties file tells Heroku how to run our app and which parameters to use when running it. The Procfile contains the type of process to run on the dyno.
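For reference, both files are tiny. Something along these lines, where the Java version and script name depend on your own build (the CamelWorker script is what the appassembler plugin generates under target/bin):

```properties
# system.properties - tells Heroku which JDK to use
java.runtime.version=1.8
```

```
# Procfile - run the assembled script as a worker process, not a web process
worker: sh target/bin/CamelWorker
```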

8. Once you have added the two files, remember to commit and push them to your repo.

9. Now it is time to push to your Heroku repo. In your cmd window, at your project folder, type:

git push heroku master

You should now see Heroku receive your code and build the app.

Eventually the build process finishes.

If the app did not start immediately you can view the logs using:

heroku logs

or follow them live with:

heroku logs:tail

10. Now, our app will start but then crash. Why? Because we have not created the environment variable for our API access key. Log in to your app dashboard and go to Settings. There is an option to reveal config vars; click it and a key-value input form will appear. Write the name of your environment variable and its value (ensure you have referred to it in your Camel code). Once you save it, the app will restart.

11. If everything works, your Telegram bot should show some nice lotto text 😉


If you have any questions on the code or the setup, let me know. Eventually I hope to expand this into a more push-based chat.

Camel development series 13

Hello, and welcome to another post regarding development with Camel and all things related to Camel.

It’s been far too long since my previous post, and although any excuse is a bad excuse, since then I have switched back to consulting and ventured back into the commercial world. The most noticeable thing is of course how far ahead Camel is, even of the commercial vendors. Already in 2015 you could dockerize Camel; with the commercial vendors these are either alpha features or just about production ready today. More importantly, you can’t just auto-scale as you like, as there are complicated licensing agreements to take into consideration. Another difference is that commercial vendors have a huge obstacle: it is practically impossible right now to develop true micro-services with them unless you pay loads of cash. With Camel, you just write your app, dockerize it and go. I think if the commercial vendors want to catch up they need to break apart their architecture and make things more modular. If I am writing a REST API I don’t want a gazillion other parts connected to the runtime clogging resources. I think they are moving in this direction, but it will be at least some time before they are there.

Anyway, here I thought I’d show a couple of new features in Camel 2.20 which look quite cool.

JSON Schema validation

In order to use this component, add camel-json-validator to your pom file.
Here is an example RouteBuilder class:

public class JsonSchemaValidation extends RouteBuilder {
    public void configure() throws Exception {
        onException(JsonValidationException.class).handled(true).log("Invalid json: ${exception.message}");
        from("direct:start").to("json-validator:schemas/myschema.json"); // endpoint and schema path are examples
    }
}

As you can see, we grab the JSON data, send it to the validator component specifying the schema, and catch any validation error. Pretty easy, right?

Health check API

I don’t have any code to show here, but it is pretty cool that Camel has started to introduce a health check API. I think that was one of the things missing previously. Once it becomes easy to check via an API whether a CamelContext is available or which routes are up, this will do wonders for monitoring.

JSONPath writeAsString

One annoying thing, which isn’t Camel related per se, was that when you used jsonpath and wanted to get the value of a JSON field, it would be written as [”myvalue”] rather than ”myvalue”. Finally there is a writeAsString option to do this for you.

Support for AWS Lambda

In Camel 2.20 there is now direct support for AWS Lambda function calls. See more info and examples here: https://github.com/apache/camel/blob/master/components/camel-aws/src/main/docs/aws-lambda-component.adoc

This is pretty cool, as it means you can use Camel to call your AWS Lambda environment and have a mixture of both types of functionality. You could have Camel running in Docker on AWS calling your Lambda functions!

Camel development series 12

Hello and welcome to another Camel development series. From now on I will paste less code into the blog and instead refer to the GitHub repo where the code is stored. It makes writing the blog easier, and that helps motivate me to write more often.

In this series I will touch upon a frequent scenario. Our problem is that we want to expose an HTTP endpoint in order to allow the client to perform an HTTP GET operation. We then want to call the backend over HTTP(S), retrieve say a picture, and allow it to be displayed in the browser. How can we accomplish this?

Well if you want to jump straight to the code go here:

As you can see, we accomplish this in four lines of code. We first expose our Undertow endpoint and ensure only HTTP GET operations can be performed. Then we set the Exchange.HTTP_QUERY header to our query parameter. We then call the backend HTTPS service using the http4 component. The key here is to enable the parameter bridgeEndpoint=true so that Camel understands it is acting as a proxy and doesn’t mix up the endpoints. Finally we convert the payload to bytes and return it to the original client. That’s all!
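The route described above can be sketched as follows. The host names and the query parameter are placeholders for whatever your backend expects, not the exact ones from the repo:

```java
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class ImageProxyRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("undertow:http://0.0.0.0:8080/images?httpMethodRestrict=GET")
            .setHeader(Exchange.HTTP_QUERY, simple("id=${header.id}"))
            // bridgeEndpoint=true makes http4 act as a proxy instead of
            // recomputing the destination from the incoming request URI
            .to("https4://backend.example.com/pictures?bridgeEndpoint=true")
            .convertBodyTo(byte[].class);
    }
}
```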

Camel development series 11

Hello and welcome to another Camel development series.

In this series I thought we’d go through some more hints and tricks I have learned after one year of working with Camel.

Avoid unnecessary object creation

Do not create unnecessary object models of your data. As a Java developer you tend to think of everything in terms of objects. But should you always create objects? The whole point of integration is to remain stateless and act as an interface that transfers data. So I would say: where possible, don’t create objects. This has nothing to do with performance, since object creation is cheap, but rather with not having to write unnecessary code for data that carries no behaviour. Let us work through an example.

Say you receive a JSON message, you need to do some content-based routing, and then generate a SOAP message and send it to some web service.

Now, you could of course create an object model for the JSON message or write a bean, and the same goes for the SOAP XML. But unless you are dealing with complicated message structures or attachments, you don’t need to write code that way when it comes to integration. Here is an approach that solves it another way.


.choice()
    .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch1")))
        .to("direct:handleRequest1") // example target route
    .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch2")))
        .to("direct:handleRequest2") // example target route
    .otherwise()
        .throwException(IllegalArgumentException.class, "Unknown request command received!")

In the above you can see that I don’t create any objects. I simply use the excellent JsonPath library, or rather the Camel version of it, camel-jsonpath, to perform a lookup in the data, and based on that I route to my individual routes. This allows for writing less code and simpler code, and I stay within the Camel DSL. It also makes it easy to understand what the integration is doing, and I avoid jumping from one class to another.

Now, what about creating the SOAP message? Again, you could write JAXB code, use some other XML processing, or use CXF or another framework. But if your SOAP messages are relatively simple and do not contain attachments, then you can skip all that and use Freemarker to inject data into your SOAP message. Here is an example:
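Sketching what that route can look like (the header name, jsonpath expression and template path here are illustrative):

```java
import org.apache.camel.builder.RouteBuilder;

public class SoapTemplateRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:createSoap")
            // pull the value we need out of the incoming json into a header
            .setHeader("requestValue").jsonpath("$.data.request")
            // freemarker injects the header value into the SOAP template
            .to("freemarker:templates/soap-request.ftl");
    }
}
```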


Here is my freemarker template file:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:lan="http://test">
  <soapenv:Body>
    <lan:request>${headers.requestValue}</lan:request>
  </soapenv:Body>
</soapenv:Envelope>

As you can see, I extract the data using jsonpath and insert it into a header. Then I route to a Freemarker endpoint. In my Freemarker template I have written a simple expression where the data should be injected. That’s all! I don’t write any XML code or worry about namespace creation and things like that. Exactly the same applies in the opposite direction: you can use xpath to extract data from XML messages and use a simple Freemarker JSON template to insert it.

Creating environment variables in your tests

A lot of the time you need to access environment variables, but in your tests you don’t want to use real values, just some fake value, and you don’t want to change your code. You still want to access the same environment variable, just with a different value.

I use the excellent library called System Rules by Stefan Birkner. It works really well and is really easy to use. If you work with Maven, add this to your pom:
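For reference, the dependency coordinates look like this (pick whatever the latest version is):

```xml
<dependency>
  <groupId>com.github.stefanbirkner</groupId>
  <artifactId>system-rules</artifactId>
  <version>1.19.0</version>
  <scope>test</scope>
</dependency>
```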


How do you use it?

In your JUnit test class add the following:

  @Rule
  public final EnvironmentVariables environmentVariables = new EnvironmentVariables();

  @Before
  public void setUp() throws Exception {
    environmentVariables.set("VAR1", "1");
    environmentVariables.set("VAR2", "2");
    environmentVariables.set("VAR3", "3");
  }
As you can see, we add the rule by creating a variable of type EnvironmentVariables. Then in your setUp method you simply create the variables and add their values. As easy as that!

Testing JSON messages in your route tests

Quite often when you write route tests you will need to check whether the JSON you produce matches some expected JSON. To do complicated JSON matches, or even simple ones, I use the excellent library called JSONAssert. Add this to your pom:

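For reference, the dependency coordinates look like this (again, pick the latest version):

```xml
<dependency>
  <groupId>org.skyscreamer</groupId>
  <artifactId>jsonassert</artifactId>
  <version>1.5.0</version>
  <scope>test</scope>
</dependency>
```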

To use it, in your @Test method add this:

JSONAssert.assertEquals(expectedResponse, response, false);

There are far more complicated things you can do with JSONAssert, but if you want a simple way of comparing JSON messages, this is great.