Deploy Azure storage table objects

Hello, and welcome to another post.

For the past year or so I have worked quite a bit with Microsoft Azure and although there is a lot to like, there is of course a lot to improve as well.

One area where they are improving is the inclusion of more Azure components in their ARM template suite. Previously, blob containers could not be created via ARM template, but I believe they added that support in mid-2018. That is great. Otherwise you most likely need to resort to doing it manually via the portal or, if you are trying to build a pipeline, writing some sort of PowerShell script.

Today, there is no ARM template support for Table storage. You need to use scripts to automate this process. The easiest way is to add an inline PowerShell script in your release pipeline.

You can test the PowerShell script locally before putting it in your pipeline. The prerequisite is that you are on at least PowerShell 5.0; preferably upgrade to PowerShell 6.x.

Below you can see the code.
Things to note:
1. You need to install the AzureRmStorageTable module.
2. I have just written dummy variable names. Replace them with your own values.
3. I basically set the current Azure storage account.
4. I get a SAS token for it.
5. I get the Azure storage context.
6. I create the table storage container and pass the context.
7. I create a new row by calling Add-StorageTableRow, passing the table name, the partition key and the properties.

If you run into issues, check your user permissions and that you have the modules installed. Once you have got it to work from your computer, either check it in to your git repo and refer to it from your pipeline, or put it in as an inline code task.


Install-Module AzureRmStorageTable -Force
$resourceGroup = "resourceGroup"
$storageAccount = "storageAccount"
Set-AzureRmCurrentStorageAccount -ResourceGroupName $resourceGroup -AccountName $storageAccount
# Adjust the services, resource types and permissions to what your pipeline needs
$sas = New-AzureStorageAccountSASToken -Service Table -ResourceType Service,Container,Object -Permission "racwdlup"
$ctxsas = New-AzureStorageContext -StorageAccountName $storageAccount -SasToken $sas
$tableName = "tablename1"
New-AzureStorageTable -Name $tableName
$storageTable = Get-AzureStorageTable -Name $tableName # add -Context $ctxsas to use the SAS context
Add-StorageTableRow -table $storageTable -partitionKey "key1" -rowKey "1" -property @{"prop"="prop1"}
$tableName2 = "tablename2"
New-AzureStorageTable -Name $tableName2
$storageTable2 = Get-AzureStorageTable -Name $tableName2 # add -Context $ctxsas to use the SAS context
Add-StorageTableRow -table $storageTable2 -partitionKey "key2" -rowKey "1" -property @{"prop"="prop1"}

Review of Microsoft Azure after a year

Welcome to another post, which will focus on my experience with Microsoft Azure after about a year or so of usage. I will try to cover on a high level some pitfalls to be aware of, some areas that could use improvement and some areas that have impressed me. I would like to point out that I have not used every single feature on every single service. This post covers only the parts that I have used, with the state of knowledge that I possess. A lot of the pitfalls are probably due to my lack of more in-depth experience. But anyway, here we go.

The good..

I have in the past 1.5 years worked with Azure API Management, Azure DevOps, Azure Active Directory, Azure Functions, Azure Logic Apps, Azure Service Bus and Azure App Service (specifically native Java apps).

  • One positive aspect for beginners is how easy it is to get started with a new service. In most cases there is plenty of documentation and most of the time you follow some sort of an ”install” wizard to get going. This makes it easier when you are starting off to actually get going and develop something, and not just get stuck in configuration or other details. If you are new, my advice is to just get started and try it out. Worst comes to worst, just delete the resource 😉
  • Another positive note is that the documentation in most cases is quite good and up to date, and there are good examples to show you how to get started and move from the basics to more advanced features. This is important when you start.
  • Very, very good tooling support. This was one aspect that really surprised me. Microsoft has invested heavily in the tooling for Azure. You can deploy to Azure via ARM templates, PowerShell scripts, Azure DevOps, or from your IDE, like Visual Studio Code or IntelliJ. I commend Microsoft for investing so much in providing better tooling for developers and others to integrate with Azure. The fact that I can deploy a Spring Boot app to Azure App Service via IntelliJ through a Maven plugin is just awesome! Hell, you can even connect services to code in GitHub repos and update the service based on code changes in the repo. How cool is that?!
  • Good community. I have had good experiences with the Azure community, whether it is on Stack Overflow, MSDN or with Azure experts. They have been helpful and provide good feedback.
  • Good support for Java! That’s something I didn’t expect to say about Microsoft, but I am very impressed with their Java support in terms of the number of frameworks they have developed, the support for Java in Azure Functions and the Spring Boot starters. This is really good and makes it very attractive for developers other than .NET to consider Azure.
  • Good portal. Despite the fact that the portal can sometimes feel a bit messy, I like it. Having easy access to resources and resource groups, and being able to scroll horizontally across settings, is also very cool. They are constantly updating and improving the portal so I am sure it will look different in a year’s time.
  • Pricing calculators. For a lot of the services there are pricing calculators that help you get a rough idea of the cost of running your services based on different factors. I encourage you to play around with them to compare different services and get a good feel for the different pricing levels.
  • Azure DevOps! I really like Azure DevOps! I have used GitHub, Jenkins and Jira, and those are awesome services that together have a lot of really good features. But if you are using Azure I would strongly advise you to move to Azure DevOps. There is a lot of built-in support for Azure deployment that will help you get started and set up your CI/CD pipeline in no time. For instance, the agents you configure to run your builds have support for Maven by default. No need to install Maven on your build server! There is also built-in support for deploying to Azure App Service and to different deployment slots. Really cool! Of course there are features it lacks compared to each of the mentioned services, but from what I have experienced it is really easy to get started with your repos and CI/CD pipelines.
  • Azure App Insights. I will mention this in other sections too, but for me, using App Insights inside my code to emit log events is really nice. I can emit as much or as little as I want and I can set the severity level as well. Simple and easy to get going.

There are many more positive qualities that I have forgotten, but I’ll save those for another post.

The ”could be improved” part..

  • I am missing a central repository of best practices. What I mean here is that it is hard in the beginning to know whether what you are developing follows established best practices, or whether you are building yourself into an anti-pattern or exposing yourself to limitations or risks. Sure, there is documentation that discusses certain aspects here and there, and there are plenty of other blogs out there, but I would really prefer it if Microsoft published a ”best practices” site describing how their services are to be used. Quite often you may start with a service and halfway through realize it was not what you wanted, and you have to roll back and pick another service instead.
  • Make certain resources more lightweight. Some services, like API Management, usually only have one instance per environment. This is because when you create them you get a DNS name and you need to reuse that for all your APIs. This means that as you add more and more APIs, they are all added to that API Management instance. This makes creating ARM templates for API Management much more difficult. In fact, we are forced to use a third-party PowerShell script to extract and deploy APIs to API Management. I would much prefer if you could create many API Management instances that reside in many resource groups but belong to the same DNS name. This would make creating and managing their ARM templates much easier, and the APIs would reside closer to the solution as well. Of course the developer portal would still show all the APIs. The same goes for Service Bus instances, Key Vault and others. I understand these are seen as central, shared services, but if so, there should be improvements when it comes to deploying objects to these shared resources.
  • Should I use App Service deployment slots or new resource groups? This is something I am trying to figure out right now. Deployment slots are great for quickly spinning up a new test or preprod environment and doing blue-green deployments, and ”Testing in Production” is an awesome feature! However, this also means in practice that the app service should reside in a single resource group. At the same time we have resource groups per environment: one resource group for dev, one for preprod and one for prod. So how do you work with these two concepts? I wish Microsoft made this clearer.
  • API Management policies. The language you use to code policies needs to be improved, or at least the tooling around it does. Ideally, when you add new policies you should be able to debug the policy and put breakpoints to see what has happened and what went wrong. I hope they improve the policy language and the debugging capability.
  • Logic App language support. Logic Apps are great for building business flows involving established services. However, building more complex expressions can be very frustrating and involves a lot of trial and error. There is also no way to debug Logic Apps. I hope they add better debugging support and the ability to add breakpoints. Above all, building more complicated expressions for setting variables, doing string operations or extracting parts of a JSON object should be made much easier. This alone has forced us to sometimes use Functions instead.
  • API Management lacks support for schemas. This is a strange one for me. I would expect an API Management service to support XML and JSON schemas so that it can validate requests/responses and return the appropriate HTTP response based on the outcome of the validation. I really hope they improve on this.
  • Too many different views in Azure Active Directory. In Azure Active Directory, you can view a registered app through ”Enterprise Applications”, ”Application Registrations” and the preview mode of ”Application Registrations”. I don’t really know why there are three different places where you can view the app and see different settings for it. There should simply be one page where you view your app settings and make changes. Today, you need to go to one page for one setting and another for a different setting. Consolidate all the views in one area and make it easier for users to find and view the app settings.
  • Make it more intuitive to create application roles and permissions. When you register an app in Azure AD you get a barebones app. OK, but that app is useless unless you add roles and permissions to it. These steps should come right after the app registration process, and adding roles and permissions should be more intuitive. Today, you need to already know about these things in order to add them.

I am sure there are more points to discuss but I’ll leave those for another post.

All in all, I am enjoying working with Azure. I have greatly enjoyed the Java support and the ease of creating new services and trying them out. Another important part is how to do your CI/CD, and although ARM templates are a pain, there is a lot of good tooling support to make life easier for you. Then there are more detailed features that could be improved, but Microsoft is constantly trying to change and improve their services, so I look forward to seeing how Azure will look and feel in a year’s time.

Thanks, that’s all for this time.

A (brief) review of ARM templates in Azure

Hello again

This blog has been very quiet for the past months, but after working with the Azure integration stack and a few other Azure components I think it is about time to write a post. Specifically, this post will simply give my thoughts after having worked with ARM templates for less than a year.

I understand Microsoft’s idea behind ARM templates: you want a universal way of describing any resource. In theory that should make it easy for you and easy for users. However, a language is only as good as the ecosystem of tools that aid in its usage and development.

Yes, you can work with ARM templates using the Azure CLI, PowerShell, Visual Studio Code and any old JSON editor, but the main problem is just that: an ARM template can describe any resource in the Azure world. This makes it hard for developers, as each ARM template will look the same but still be different. A Logic App ARM template will have the same overall structure as an API Management template, but there will still be a lot of additional fields that differ.

Consider now a tool like Ansible, which has divided each resource or component into modules. That makes it extremely easy for developers, as they simply refer to the correct module and use the variables and functionality available. A module for creating a VM in AWS will look different from a module creating Docker containers. Not to mention, it is in YAML, which is more user friendly than JSON. Here is an example of a playbook for creating a VM in AWS:


# Single instance with ssd gp2 root volume
- ec2:
    key_name: mykey
    group: webserver
    instance_type: c3.medium
    image: ami-123456
    wait: yes
    wait_timeout: 500
    volumes:
      - device_name: /dev/xvda
        volume_type: gp2
        volume_size: 8
    vpc_subnet_id: subnet-29e63245
    assign_public_ip: yes
    exact_count: 1

Now an ARM template has the following structure:

    "$schema": "",
    "contentVersion": "",
    "parameters": {  },
    "variables": {  },
    "functions": [  ],
    "resources": [  ],
    "outputs": {  }

Which is easier to read? I prefer the Ansible YAML format, which tells me immediately via the ec2 module key that I am working with AWS compute. I don’t need to read any more to find out.

For me, ARM templates are good for software to parse but not for humans to manipulate. Microsoft needs to invest more in tooling so that developers can work with YAML, or in third-party tools such as Terraform, to make it easier to work with ARM templates.

Another thing is that ARM templates do not seem to be idempotent. By this I mean: if I am deploying something that already exists, and no changes have been made, it should leave the resource untouched. This is standard practice in Ansible and a much needed feature in ARM templates.

Then there is the issue of knowing what was actually deployed. You don’t really have a way of knowing until you go and look manually afterwards. This is an interesting discussion, since now we are almost saying that infrastructure-as-code also means we should work test-driven with our infrastructure code. Here again there is not a lot of tooling support for writing integration tests for ARM templates. There is a PowerShell module called Pester that people seem to be using, but I find it strange that Microsoft hasn’t released one themselves.
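Pester is PowerShell-based; to sketch the underlying idea of testing a template before deploying it, here is a minimal illustration (in Python, purely for readability) that asserts on an ARM template's top-level structure. The inlined template is a dummy; a real test would load the file from your repo.

```python
import json

# Dummy ARM template skeleton -- a real test would read this from disk.
TEMPLATE = """
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {},
  "resources": []
}
"""

def missing_keys(template_text):
    # Parsing alone catches malformed JSON; then check the required keys.
    doc = json.loads(template_text)
    required = {"$schema", "contentVersion", "resources"}
    return sorted(required - doc.keys())

print(missing_keys(TEMPLATE))  # -> []
```

The same assertion style is what a Pester test would express, just in PowerShell against the deployed resources as well as the template file.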

Another cool feature would be to somehow deploy the ARM template locally and get the ability to browse or view the component similar to how it would appear in the portal. Now that would be awesome!

Another aspect has to do with structuring your code and doing it in an efficient manner. Ansible has a clear structure for how to organize your playbooks to deploy complete solutions. I don’t think ARM templates are that sophisticated yet. This means each company will have to invest in their own best practices rather than Microsoft laying down the foundation.

I think ARM templates are a good jump forward for Microsoft. New features are added and things are evolving. But as it stands, I think AWS makes devops much easier and there is far more support in third-party tooling. I think this will change in the next couple of years though, so let’s see.

First impressions of IBM ACE 11

Hello and welcome to another blog series. Unlike the usual posts which are about Apache Camel (and related) development, this post will look at first impressions of IBM App Connect Enterprise v11.

If you are interested in trying it out for yourself you can get hold of the developer edition and test it for free. The link can be found here:


IBM App Connect Enterprise v11 is the successor to IBM Integration Bus 10, which was released back in 2015. IIB 10 has been a stable evolution of the product series since versions 8 and 9 and has little by little added new functionality to the product. IBM usually releases a new version every 2-3 years, so in 2018 they have now released IBM ACE 11, which is a combination of two existing products: IBM IIB 10 and IBM App Connect.

I have spent a few days playing with it and getting a feel for it, so let’s go through it a bit more.


Installation

Installation is as simple as in IIB 10. Just download and run the installer and you are good to go. You can install it on the same machine as IIB 10. After installation you get a new toolkit and a console. There is, however, no Integration node available. We’ll look into this in the next sections.

As far as installation goes, it is really easy, quick and painless (way easier compared to some of its competitors).


Toolkit and development

The toolkit has in my view been one of the weaker aspects of the platform. It is still dependent on Eclipse and hardly anything new has been added tooling-wise. You are still dealing with Apps, Libraries and bar files. Anyone coming from IIB 10 will instantly feel at home.

In my view, there is one big part still missing from the tooling and that is a powerful test framework, similar to JUnit in the Java world. It is a shame they haven’t added this. If anyone wants to vote for my RFE to add this, please go here:

I would have liked it if they had somehow allowed developers to use other IDEs as well, such as IntelliJ.

Like I said, it looks extremely similar to IIB 10 in terms of layout, nodes and components. Some extra nodes exist that are related to App Connect and Watson, but I doubt the majority of users will benefit from them.

You basically develop similar to before:

  1. Create an App.
  2. Build your flows.
  3. Build your bar file.
  4. Deploy.

Essentially, IIB 10 and ACE 11 are lacking in one fundamental way: as a developer I cannot write integrations the way others develop applications. I cannot write my tests first, run them, see them fail, then add code, then run the tests again and watch them go green. This style of working is something I really hope IBM looks at and adds support for.

I will discuss the policy editor in another section.

Deployment strategy and docker support

Better late than never, as they say. Docker has been around for a few years now and has almost become a de facto standard in application build & deploy pipelines. IBM decided to join the party, so they have now added support for it in the runtime environment and added a new payment model. I have no idea yet what this means, but they are talking about some sort of a ”pay-as-you-go” model. Maybe they will charge per message, CPU usage or some other metric, but it is at least an improvement on their current PVU/VPC model, which has sucked for a long time.

The biggest change by far in IBM ACE 11 is the absence of an Integration node. This means there is no predefined Integration Server either. With this approach IBM wanted to add support for those who want to move away from an ESB topology to a more distributed and ”lightweight” model.

In a sense, what this means is that you can create an Integration Server on any server you like, connect to it, and deploy your integration there. You can have an Integration Server installed on an on-prem server, on a cloud server or bundled in a Docker container, and run it anywhere you like.

You can choose to have one Integration Server per App or one Integration Server with many Apps. That is up to you. Indirectly, IBM is basically saying: OK, we still support on-prem, but please note that we are making changes to move towards containers and cloud. I wouldn’t be surprised if most of the coming fix packs focused on additional Docker support.

Is it all good? Well, it is good that they have added additional Docker support. What is not good is:

  1. Why do I even need to create an Integration Server if I truly want to work lightweight? In an ideal scenario the ”bar file” would contain not only my code but my runtime as well. In fact, going even further, the bar file would help to generate a Dockerfile and allow me to create a Docker image based on that. Now, that would have been revolutionary for IBM (this already exists in the Spring Boot/Camel/Maven world) and would have made my life as a developer much easier. Instead I still need to care about Integration Servers. They have taken a positive step forward; I just wish they had gone all the way and added support for the bar file containing its own runtime as well.
  2. Why can’t I deploy bar files via mqsibar when the Integration Server is running? Well, you can, but the server won’t detect the changes until you restart it, which is just stupid. Why would I want to restart the server in production? By the way, mqsibar is the new command to install bar files to a new Integration Server. You can of course deploy via the toolkit or the web admin. I understand what IBM is saying: yeah, you can create a new Docker container with the new App and replace the existing one. But for those who still run ACE like they did IIB, this functionality doesn’t give any benefit. mqsibar would have been great if it could deploy bar files whilst the server is running so that it would detect changes on the fly. Instead it requires a restart!! I mean, Apache Karaf had support for this years ago.

Policy editor

My biggest complaint by far is the new policy editor, which is supposed to replace what were previously configurable services.

To configure a new Integration Server you change properties in the server’s server.conf.yaml file. You write YAML and you configure the Integration Server. That is easy and simple. So why did they introduce a new artefact that we need to maintain and take care of? We now need to write a policy and associate that policy with a message flow/node?? Couldn’t IBM just have used YAML files all the way?? Not only that, but if I update my policy and want to deploy it, I can’t just overwrite the existing one. I have to delete the policy and all the flows that depend on it, then deploy the new policy and redeploy the apps. I really hope they change this in upcoming fix packs. Just let us write our YAML files, and please add support for live updates. Nobody wants to delete or remove stuff.

Is it worth it?

If you are on IIB 10, then no, not really. At least not now. Wait a year or so until a couple of fix packs have arrived and see what new functionality they add. I don’t see anything in ACE 11 that is revolutionary or that hasn’t existed in competitors’ products. I would have hoped IBM would take a big step forward. Instead it took a few small steps to catch up, and it is still not there.

Camel development series part 15

Hello and welcome to another short Camel development series. This time we will look at a real-life use case and how we can run Camel inside Spring Boot. We will also see how Camel can be used not just for enterprise integration but as a useful tool in your toolbox.

In our case we work daily with IBM Integration Bus. We needed an automatic and easy way to generate, on a daily basis, a list of execution groups, deployed applications and REST APIs together with some properties. The aim was to post this JSON message to a Logic App on Azure, which then generates a SharePoint table where we could see what is available in a specific environment and when it was last deployed.

To go straight to the code, check the repo. Note that there is probably a lot that can be improved: I have not included any tests and the logging is basic to say the least, but the overall functionality should work. You can of course extend it and customize it to your IIB environment and needs. The advantage of doing it this way is that you don’t need to mess with mqsi commands and mqsi profiles. You use the API to get all the data. This means you can incorporate it as part of your CI/CD process or do it remotely. The main problem with the IIB v10 API is that there is no way to get all the properties of all deployed artefacts at once. You have to first get a list of execution groups, then for each execution group get a list of apps or REST APIs, and then for each such component get its properties. Finally, you combine the whole thing into a complete JSON message.
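In outline (a Python sketch for brevity, not the tool's actual Camel/Java code, and with hypothetical API paths), the nested lookup and the final combination look like this:

```python
import json

# Sketch of the nested lookups: execution groups -> apps -> properties,
# combined into one JSON document. `fetch` is any callable that returns
# parsed JSON for a path; in the real tool it is an HTTP call to the IIB API.
def collect_artifacts(fetch, environment):
    servers = []
    for eg in fetch("/executiongroups"):                       # hypothetical path
        apps = []
        for app in fetch(f"/executiongroups/{eg}/applications"):
            props = fetch(f"/executiongroups/{eg}/applications/{app}")
            apps.append({"name": app, "properties": props})
        servers.append({"name": eg, "applications": apps})
    return json.dumps({"environment": environment, "servers": servers})
```

The real implementation additionally stages the intermediate results in H2 between route invocations, as described below, rather than holding everything in memory in one pass.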


You start the Spring Boot app by running the following command:

java -Dspring.config.location=<path-to-your-application-props-file> -Diib.endpoint=<url-to-your-iib-endpoint-including-port> -Denvironment=<specify-environment> -jar iibartefacts.jar

Here we provide a few JVM properties:

  1. We disable SSL validation. If you need SSL validation, of course remove this parameter. I don’t need it in our case, so that’s why it’s there.
  2. I provide the path to my Spring Boot properties file. I don’t want it to be bundled in my fat jar, so here I provide a full path to its location.
  3. The parameter iib.endpoint is a complete URL to your IIB endpoint, including the port.
  4. Finally, the parameter environment is required because in the JSON message we provide the type of the environment, i.e. whether it’s test, preprod or prod. If you don’t need this then you need to change the code as well.

The Spring Boot app uses the in-memory database H2 to store the extracted data (list of execution groups etc). The H2 config is located in the properties file, and you can view the schema.sql file to see how I configured the table. Should you need further properties, simply add additional columns.

Then there is the straightforward SpringApplication main class, which starts the main route. This route simply kicks off everything and calls the other routes. Finally, the JSON message is generated and saved to a file. If you need to post the JSON somewhere else, simply change the endpoint.

The logging and error handling could be improved, and some unit tests could be added, but this was basically an attempt to use Spring Boot and Camel to create a tool that would complement an existing platform. It took me roughly 1.5 days to get it to work properly and tidy it up.

If you run into any issues or have suggestions let me know.

Camel development series 14

Hello and welcome to yet another Camel development series. In this post I will describe how to use some basic Camel concepts together with a Telegram chat bot, accessing a REST API to build chat messages. You can customize this for many use cases for push/pull types of apps. Finally, I will show how you can deploy this non-web app to Heroku and see the result.

Basic flow

The basic flow of the app is as follows:

  1. On startup it calls a REST API from the Swedish government’s lottery site. They have a REST API where you can access various lotteries, draws and results.
  2. The app first calls an endpoint to get the draws for this week.
  3. It then retrieves the draw numbers for the Saturday and Wednesday draws.
  4. It calls another endpoint and receives the actual lottery numbers for the draws for those days.
  5. It then pushes out a friendly message to a Telegram chat bot.

Now of course, you can add additional features, such as sending commands and getting more data, and even returning random numbers to suggest for playing the lottery.
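Illustratively (a Python sketch of steps 3-5 above, not the app's actual Camel code; the data shapes are made up), the message assembly boils down to:

```python
# Steps 3-5 in miniature: look up each day's draw number, fetch the
# numbers for that draw, and format the text pushed to the Telegram bot.
def build_message(draws, results):
    lines = []
    for day in ("Wednesday", "Saturday"):
        draw_id = draws[day]                       # step 3: draw number per day
        numbers = results[draw_id]                 # step 4: winning numbers
        lines.append(f"{day} draw {draw_id}: " + ", ".join(map(str, numbers)))
    return "\n".join(lines)                        # step 5: text for the bot

print(build_message({"Wednesday": 101, "Saturday": 102},
                    {101: [3, 14, 15], 102: [9, 26, 53]}))
```

In the real app the two lookups are Camel route steps against the lottery REST API, and the resulting string goes to the camel-telegram endpoint.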

First the code

You can find all the code in the GitHub repo. I will not go through the code in detail, since most Camel users should be familiar with it; it is pretty standard Camel functionality.

Maven setup

To get our app to deploy to Heroku we need to be able to run our pom file with the package command. To do that we need to assemble our app, and this can be done using the appassembler-maven-plugin. Simply add the plugin below and change the parts to fit your project. Pay attention to the CamelWorker, which is specified under target. This will be used by Heroku when starting our worker process.
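A minimal configuration sketch (the mainClass and version here are placeholders; adjust them to your own project) looks roughly like this:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <version>1.10</version>
  <configuration>
    <!-- Scripts end up under target/bin, where the Procfile expects them -->
    <assembleDirectory>target</assembleDirectory>
    <programs>
      <program>
        <mainClass>com.example.lotto.MainApp</mainClass>
        <id>CamelWorker</id>
      </program>
    </programs>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>assemble</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

The id CamelWorker is what produces the target/bin/CamelWorker launch script that the Heroku worker process will run.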


Heroku setup

Now we come to the Heroku setup. Let’s follow these steps.

1. Create an account at Heroku.

2. Download the Heroku cli and install it.

3. Before proceeding, ensure that your folder structure is as follows:


That is, don’t have a long folder structure leading to your pom file. This is because Heroku looks in the root folder for the pom.xml and the Procfile. If your repo is not structured like this, rearrange it before proceeding.

4. Verify by opening a cmd window and typing ”heroku” at the command prompt. You should receive a prompt to log in. Enter your credentials to log in. Afterwards, just typing heroku should display the following:


5. Then it is time to create your Heroku app. At the command prompt type heroku apps:create:

heroku apps:create svenskalotto
Creating svenskalotto… done

The link shown should be the remote git repo that you need to add as a remote in your local git repository. I called mine heroku. Ensure that if you type git remote -v, the ”heroku” remote is shown with the link to your app repo.

6. Now, in order for our app to work on Heroku, a couple of things are needed. Heroku by default works with web apps where you have some website to show. Our Camel app is a standalone Java app. We instead need to use a Heroku worker process rather than a web process. This is so that Heroku understands that our app is a ”backend” app. For more on this, look into Heroku processes. We also need to specify the JDK to use, add JVM arguments to ignore SSL certificate validation and create an environment variable to store our API access key.

7. Heroku requires two files: a system.properties file and a Procfile. Create both of them and put them in the root folder of your repo. You can look at my repo to see what values I used. Essentially, these files tell Heroku how to run our app and which parameters to use when running it; the Procfile contains the type of worker process to run on the dyno.
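As a sketch (the values are from my setup; yours may differ, especially the Java version and script name), the two files can look like this:

```text
# system.properties -- tells Heroku which Java runtime to use
java.runtime.version=1.8

# Procfile -- a single line, no file extension: run the
# appassembler-generated script as a worker process
worker: sh target/bin/CamelWorker
```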

8. Once you have added the two files, remember to push them to your repo.

9. Now it is time to push to your Heroku repo. In your cmd window, at your project folder, type:

git push heroku master

You should now see Heroku receive your code and build the app, as can be seen here:

Finally the build process finishes as seen here:

If the app did not start immediately you can view the logs using:

heroku logs

or, to follow them live:

heroku logs:tail

10. Now, our app will start but then crash. Why? Because we have not created the environment variable for our access key. Log in to your app dashboard and go to Settings. There is an option to reveal config vars. Click it and a key-value input form will appear. Write the name of your environment variable and its value. Ensure you have referred to it in your Camel code. Once you save it, the app will restart.

11. If everything works, your Telegram bot should show some nice lotto text 😉 For example:


If you have any questions on the code or the setup, let me know. Eventually I hope to expand this and do a more push-based chat.

Camel development series 13

Hello, and welcome to another post regarding development with Camel and all things related to Camel.

It’s been far too long since my previous post, and although any excuse is a bad excuse, since then I have switched back to consulting and ventured back into the commercial world. The most noticeable thing is of course how far ahead Camel is, even of the commercial vendors. Already in 2015 you could dockerize Camel. For the commercial vendors these days, these are either alpha features or just about production ready. More importantly, you can’t just auto-scale as you like, as there are complicated licensing agreements to take into consideration. Another difference is that commercial vendors have a huge obstacle: it is practically impossible right now to develop true micro-services with them unless you pay loads of cash. With Camel, you just write your app, dockerize it and go. I think if the commercial vendors want to catch up they need to break apart their architecture and make things more modular. If I am writing a REST API I don’t want a gazillion other parts connected to the runtime clogging resources. I think they are moving in this direction but it will be at least some time before they are there.

Anyway, here I thought I’d show a couple of new features in Camel 2.20 which look quite cool.

JSON Schema validation

In order to use this component add camel-json-validator to your pom file.
Here is an example RouteBuilder class:

public class JsonSchemaValidation extends RouteBuilder {
  public void configure() throws Exception {
    from("direct:start")
      .doTry()
        // the schema file name is an example; point this at your own schema on the classpath
        .to("json-validator:myschema.json")
        .log("Valid json!")
      .doCatch(JsonValidationException.class)
        .log("Invalid json!");
  }
}

As you can see we grab the json data, send it to the validator component specifying the schema, and catch any validation error. Pretty easy, right?

Health check API

I don’t have any code to show here but it is pretty cool that Camel has started to introduce a health check API. I think that was one of the things missing previously. Once it becomes easy to check from an API whether a CamelContext is available or which routes are up, this will do wonders for monitoring.

JSONPath writeasstring

One annoying thing, which isn’t Camel related per se, was that when you used jsonpath and wanted to get the value of a json field it would be written as ["myvalue"] rather than "myvalue". Finally there is a writeAsString option to do this for you.
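In the Java DSL this looks roughly like the sketch below; the endpoint name and jsonpath expression are assumptions, so check the jsonpath DSL methods available in your Camel version:

```java
from("direct:extract")
    // returns the raw string value rather than a single-element array
    .transform().jsonpathWriteAsString("$.data.name")
    .log("${body}");
```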

Support for AWS Lambda

In Camel 2.20 there is now direct support for AWS Lambda function calls. See more info and examples here

This is pretty cool, as it means you can use Camel to call your AWS Lambda environment and have a mixture of both types of functionality. You could have Camel running in Docker on AWS calling your Lambda functions!
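As a rough sketch of what a call might look like — the function name, operation, and client bean are all assumptions here, so consult the camel-aws documentation for the exact endpoint options:

```java
from("direct:invokeLambda")
    // invoke a deployed Lambda function; #lambdaClient refers to a
    // pre-configured AWSLambda client bean in the registry (assumed name)
    .to("aws-lambda://myFunction?operation=invokeFunction&awsLambdaClient=#lambdaClient");
```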

Camel development series 12

Hello and welcome to another Camel development series. From now on I will paste less code into the blog and instead refer to the GitHub repo where the code is stored. It makes writing the blog easier, and that helps motivate me to write more often.

In this series I will touch upon a frequent scenario. The problem is that we want to expose an HTTP endpoint in order to allow the client to perform an HTTP GET operation. We then want to call the backend over HTTP(S), retrieve say a picture, and allow it to be displayed in the browser. How can we accomplish this?

Well if you want to jump straight to the code go here:

As you can see we accomplish this in four lines of code. We first expose our undertow endpoint and ensure only HTTP GET operations can be performed. Then we set the Exchange.HTTP_QUERY header to our query parameter. Next we call the backend https service using the http4 component. The key here is to enable the parameter bridgeEndpoint=true so that Camel understands it is acting as a proxy and doesn’t mix up the endpoints. Finally we convert the payload to bytes and return it to the original client. That’s all!
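The route described above can be sketched roughly like this; the host names, port, and the `id` query parameter are placeholders, not the values from the repo:

```java
from("undertow:http://0.0.0.0:8080/image?httpMethodRestrict=GET")
    // forward the client's query parameter to the backend (parameter name assumed)
    .setHeader(Exchange.HTTP_QUERY, simple("id=${header.id}"))
    // bridgeEndpoint=true tells Camel it is proxying, not originating, the request
    .to("https4://backend.example.com/pictures?bridgeEndpoint=true")
    // hand the raw bytes back to the original caller
    .convertBodyTo(byte[].class);
```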

Camel development series 11

Hello and welcome to another Camel development series.

In this series I thought we’d go through some more hints and tricks I learned after one year of working with Camel.

Avoid unnecessary object creation

Do not create unnecessary object models of your data. As a Java developer you tend to think of everything in terms of objects. But should you always create objects? The whole point of integration is to remain stateless and act as an interface that transfers data. So I would say, where possible, don’t create objects. This has nothing to do with performance, since object creation is cheap, but with not having to write unnecessary code for behaviour-related functionality. Let us work through an example.

So you receive a json message, you need to do some content-based routing and then generate a SOAP message and send to some web service.

Now you could of course create an object model for the json message or write a bean, and the same goes for the SOAP xml. But unless you are dealing with complicated message structures or attachments you don’t need to write code that way when it comes to integration. Here is another way to solve it.


from("direct:cbr")
    .choice()
        .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch1")))
            .to("direct:handleRequest1") // target routes are placeholders
        .when(PredicateBuilder.isEqualTo(ExpressionBuilder.languageExpression("jsonpath", "$.data.request"), constant("valueToMatch2")))
            .to("direct:handleRequest2")
        .otherwise()
            .throwException(IllegalArgumentException.class, "Unknown request command received!");

In the above you can see that I don’t create any objects. I simply use the excellent json path library, or rather the Camel version of it, camel-jsonpath, perform a lookup in the data, and based on that route to the individual routes. This means less code, simpler code, and I stay within the Camel DSL. It also makes it easy to understand what the integration is doing, and I avoid jumping from one class to another.

Now, about creating a SOAP message. Again, you could write JAXB code, use some other XML processing, or use CXF or some other framework. But if your SOAP messages are relatively simple and do not contain attachments, then you can skip all that and use freemarker to inject data into your SOAP message. Here is an example:
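A minimal sketch of such a route, where the endpoint URIs, the header name, and the template path are all assumptions:

```java
from("direct:soapRequest")
    // pull the value out of the incoming json and stash it in a header (name assumed)
    .setHeader("myData").jsonpath("$.data.request")
    // let freemarker render the SOAP envelope from the template on the classpath
    .to("freemarker:templates/soapRequest.ftl")
    // send the rendered envelope to the web service (placeholder URL)
    .to("http4://backend.example.com/soapService?bridgeEndpoint=true");
```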


Here is my freemarker template file:

<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                  xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:lan="http://test">
  <soapenv:Body>
    <!-- element and header names are examples; ${headers.myData} injects the header set in the route -->
    <lan:request>${headers.myData}</lan:request>
  </soapenv:Body>
</soapenv:Envelope>

As you can see I extract the data again using json path and insert it into a header. Then I route to a freemarker endpoint. In my freemarker template I have written a simple expression where the data should be injected. That’s all! I don’t write any xml code or worry about namespace creation and things like that. Exactly the same applies in the opposite direction: you can use xpath to extract data from xml messages and a freemarker json template with a simple expression to insert it.

Creating environment variables in your tests

A lot of the time you need to access environment variables, but in your tests you don’t want to use real values, just some fake ones, and you don’t want to change your code. You still want to access the same environment variable, just with a different value.

I use the excellent library called system-rules by Stefan Birkner. It works really well and is really easy to use. If you work with maven, add this to your pom:
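The coordinates below are from memory, so double-check the latest version on Maven Central:

```xml
<dependency>
  <groupId>com.github.stefanbirkner</groupId>
  <artifactId>system-rules</artifactId>
  <version>1.19.0</version>
  <scope>test</scope>
</dependency>
```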


How do you use it?

In your JUnit test class add the following:

  @Rule
  public final EnvironmentVariables environmentVariables = new EnvironmentVariables();

  @Before
  public void setUp() throws Exception {
    environmentVariables.set("VAR1", "1");
    environmentVariables.set("VAR2", "2");
    environmentVariables.set("VAR3", "3");
  }
As you can see we add the rule and create a variable of type EnvironmentVariables. Then in your setUp method you simply create the variables and assign values. As easy as that!

Testing JSON messages in your route tests

Quite often when you write route tests you will need to check that the json you produce matches some expected json. To do complicated JSON matches, or even simple ones, I simply use the excellent library called jsonassert. Add this to your pom:
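Again the version below is an assumption; check Maven Central for the current release:

```xml
<dependency>
  <groupId>org.skyscreamer</groupId>
  <artifactId>jsonassert</artifactId>
  <version>1.5.0</version>
  <scope>test</scope>
</dependency>
```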


To use it:
In your @Test method add this:

JSONAssert.assertEquals(expectedResponse, response, false);

There are way more complicated things you can do with JSONAssert but if you want a simple way of comparing json messages this is great.

Camel development series 10

Hello everyone,

Well, as usual it has been a long time since I wrote here, but a new year has started so I thought a nice and simple update would be good.

This time I will keep it simple and show how you can validate json messages against a given schema in your Camel route.

The aim is thus:

Given a json message and a predefined json schema, we want to validate the message against the schema and return the result of the validation.

Camel route

Assuming you have created a Camel route, here is how my (very) basic code looks:

package org.souciance.integration.validate;

import org.apache.camel.builder.RouteBuilder;
import org.apache.commons.io.IOUtils;

import java.nio.charset.Charset;

public class CamelValidateJson extends RouteBuilder {

  @Override
  public void configure() throws Exception {
    ClassLoader classLoader = getClass().getClassLoader();
    String schema = IOUtils.toString(classLoader.getResourceAsStream("jsonvalidate/schema.json"), Charset.defaultCharset());
    String json = IOUtils.toString(classLoader.getResourceAsStream("jsonvalidate/data.json"), Charset.defaultCharset());

    from("timer://validateJson?repeatCount=1")
        .setBody(constant(json))
        .setProperty("Schema", constant(schema))
        .bean(ValidateJson.class, "isValidJson")
        .choice()
            .when(header("isValid").isEqualTo(true))
                .log("Valid json!")
            .otherwise()
                .log("Invalid json!");
  }
}

I have kept the steps very simple. The idea is to focus on the schema validation and nothing else, so I am manually loading the data and the schema file. You could of course inject the schema path via some variable or in some other way.

I then start the route via a timer, again to keep it simple.

I then create an exchange property called "Schema" and insert the schema into it.

Then I call a bean using .bean and as parameters give the bean class and bean method.

The bean will return the result of the validation inside a header.

I use the choice() and when() to log the result.

The bean code looks like this:

package org.souciance.integration.validate;

import com.fasterxml.jackson.databind.JsonNode;
import com.github.fge.jackson.JsonLoader;
import com.github.fge.jsonschema.core.exceptions.ProcessingException;
import com.github.fge.jsonschema.core.report.ProcessingReport;
import com.github.fge.jsonschema.main.JsonSchemaFactory;
import com.github.fge.jsonschema.main.JsonValidator;
import org.apache.camel.Exchange;

import java.io.IOException;

/**
 * Created by moeed on 2017-01-15.
 */
public class ValidateJson {

  /**
   * Method to validate some json data based on a json schema.
   * @throws IOException
   * @throws ProcessingException
   */
  public static void isValidJson(Exchange exchange) throws IOException, ProcessingException {
    final JsonNode data = JsonLoader.fromString(exchange.getIn().getBody().toString());
    final JsonNode schema = JsonLoader.fromString(exchange.getProperty("Schema").toString());

    final JsonSchemaFactory factory = JsonSchemaFactory.byDefault();
    JsonValidator validator = factory.getValidator();

    ProcessingReport report = validator.validate(schema, data);
    // store the validation result in a header for the route to branch on
    if (!report.isSuccess()) {
      exchange.getIn().setHeader("isValid", false);
    }
    else {
      exchange.getIn().setHeader("isValid", true);
    }
  }
}

The method isValidJson is very simple. It receives the exchange. It extracts the json data from the body and the json schema from the exchange property.

Now to the main part. I use the json schema library to do the actual schema validation, and store the result, true or false, in the exchange header.

If you log the output after running it with intentionally bad data for a given schema, you will see something like this:

failure
error: object has missing required properties (["age"])
level: "error"
schema: {"loadingURI":"#","pointer":"/items"}
instance: {"pointer":"/7"}
domain: "validation"
keyword: "required"
required: ["_id","about","address","age","balance","company","email","eyeColor","favoriteFruit","friends","greeting","guid","index","isActive","latitude","longitude","name","phone","picture","range","registered","tags"]
missing: ["age"]

As you can see, the data did not have the property "age" so the validation failed. With the age property put back in the data the output will simply be "success".

This is just a simple way of doing json validation in your Camel routes. You can use it whilst doing rest calls or simple file-based data manipulation. For more info, here is the source code on my github: