Azure App Service: BizTalk Server PaaS Done Right

By Nick Hauenstein

Today’s announcement from Scott Guthrie not only brought with it a re-imagined Microsoft Azure platform, but also finally gave BizTalk Server a pure PaaS home right in the center of the stack. To be fair, we’ve had clues that this was coming since Bill Staples’ keynote at the Integrate 2014 conference in December, but we’re finally on the edge of it all becoming real.

An Inevitable, Yet Happy, Re-imagination

Today's Azure App Platform

It’s a shift that was inevitable given the nature of the existing Azure services – all stand-alone with their own distinct hosting environment, extensibility points, and management experiences. Imagine that you were working with an integration involving an Azure Website, Service Bus Topic, and MABS Itinerary. In order to manage it, you may have found yourself using 3 separate versions of the management portal – with one living on top of Silverlight.


With the re-organized app platform, we’re seeing a move into a single hosting container – the Azure Website. That means that all investments in that space (hybrid connectivity, auto-deployment, language support, etc…) now serve to benefit the entire platform.

For the purposes of the rest of this article, and the interests of the community that this blog generally attracts, I’m going to focus in on two of the components of the Azure App Service – Logic Apps and API Apps.

Core Concepts of the Azure App Service Platform

Before we take a look under the hood at what it looks like to use this stuff, it would serve us well to examine some of the core concepts behind this new platform.

Logic Apps

Logic Apps are the workflow of the cloud. However, these flows are composed not of “activities” in the WF sense or “shapes” in the BizTalk sense, but instead are composed of API Apps – RESTful API calls. Logic Apps can be triggered manually, on a schedule, or by a message received by a connector – a special classification of API App. Within the Logic Apps, calls to the API Apps are referred to as Actions, and are configured graphically within a web-based Designer.
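To make the trigger/Action relationship concrete, here is a rough sketch of what a Logic App definition might look like. This is illustrative only – the trigger type, connector names, and expression syntax here are invented, not the actual definition language:

```json
{
  "triggers": {
    "hourly": { "type": "Recurrence", "frequency": "Hour", "interval": 1 }
  },
  "actions": {
    "getTweets": {
      "type": "ApiApp",
      "inputs": { "apiApp": "TwitterConnector", "operation": "SearchTweets", "query": "#BizTalk" }
    },
    "postSummary": {
      "type": "ApiApp",
      "inputs": { "apiApp": "DropboxConnector", "operation": "UploadFile", "content": "@outputs('getTweets')" },
      "dependsOn": [ "getTweets" ]
    }
  }
}
```

The key idea is visible even in a sketch this small: the flow is nothing more than an ordered set of RESTful calls to API Apps, wired together by a trigger.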

Logic App Blank Canvas

The web-based designer starts as a blank canvas. If you’ve never created one before, you will need to get some API Apps from the marketplace to use within your flow. Once you select them from the marketplace, an instance of each will provision itself within your Azure subscription – in the resource group of your choosing (think BizTalk Application for the cloud – a logical grouping mechanism).

Each API App will be locked to the version that was retrieved at the time that you originally fetched it – until you update it. Versioning of each API app is done through NuGet-style packaging.

Getting ready to create the Twitter Connector

Benefits of this model are that you are able to control the scaling and locality of each of the API Apps that you use, you have some level of control of the version in use, and you’re not subject to the demands of everyone else in the world on that same capability.

This is a nice way to handle the concept of a service marketplace – not only selling the raw capability that the service provides, but providing an isolated / personal container for your calls against that logic that can be located directly alongside your app for the lowest possible latency.

There are quite a few connectors/actions available for use in the marketplace, and this is where we start to see capabilities from BizTalk Server light up in a purely PaaS way within Azure.

API Apps in the Marketplace

API Apps

Microsoft is not the only provider of API Apps – you too will have the ability to create API Apps for use in your own Logic Apps or for use by others when published to the marketplace. The new release of the Azure Tools for Visual Studio provides project templates to make this happen, as well as a new Publish Web experience.

Azure API App Template

When creating a new Azure API App, you will quickly find that you’re really building a Web API application that has a nice SwaggerConfig.cs class within the App_Start folder for publishing metadata about the endpoint itself. You can think of Swagger as the RESTful world’s answer to WSDL:
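To give a feel for the WSDL comparison, a Swagger document describes operations, parameters, and responses in JSON rather than XML. A minimal, hypothetical fragment for a single echo operation might look like this (the API name and path are invented for illustration):

```json
{
  "swagger": "2.0",
  "info": { "title": "ContosoEchoApiApp", "version": "1.0" },
  "paths": {
    "/api/echo": {
      "get": {
        "operationId": "Echo_Get",
        "parameters": [
          { "name": "input", "in": "query", "type": "string", "required": true }
        ],
        "responses": { "200": { "description": "The echoed string" } }
      }
    }
  }
}
```

Just as WSDL lets tooling generate proxies for SOAP services, this metadata is what allows the Logic App designer to render an API App’s operations as configurable Actions.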


Other than where you’re ultimately publishing the bits, how the metadata is exposed, and some hooks you have into the Azure runtime, this is just a stock-standard Web API project. In fact, you could write it in any of the languages supported by Azure Websites.
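Since an API App is at heart just a RESTful endpoint, the equivalent in another Azure Websites language is equally unremarkable. Here’s a minimal sketch in Python using only the standard library – the endpoint path and payload shape are my own invention, standing in for whatever operation your API App exposes:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def echo_payload(text):
    """Build the JSON body for a hypothetical /api/echo operation."""
    return json.dumps({"echo": text})

class EchoHandler(BaseHTTPRequestHandler):
    """Bare-bones GET handler standing in for an API App endpoint."""
    def do_GET(self):
        # Naive query-string parsing, for illustration only.
        _, _, query = self.path.partition("?")
        text = query.partition("=")[2]
        body = echo_payload(text).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve locally:
# HTTPServer(("localhost", 8080), EchoHandler).serve_forever()
```

Add Swagger metadata on top and you have, conceptually, everything an API App needs – which is exactly the point the platform is making.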

For now, that’s all I’m going to say about custom API Apps. Keep watching this space in the coming weeks, though, as we at QuickLearn will be gearing up for a live in-person and virtual event focused on building custom API Apps and connectors.

That being said, let’s focus in on a few of the API Apps that are available right now in the marketplace – specifically the BizTalk API Apps.

BizTalk API Apps


Looking through the BizTalk API Apps, we find the ability to validate XML, execute maps (MABS-style maps), and execute Rules (among other things). When doing those tasks, additional artifacts will be required (beyond just the message itself, or a simple string parameter).

For such API Apps, the Azure portal allows you to navigate directly to the API App, and configure it further (e.g., uploading schemas for validation, creating transforms, and/or defining vocabularies and rules).
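As a toy illustration of the kind of operation the XML validation API App exposes, here is a sketch in Python. Note the hedge: the real service validates against schemas you upload; the standard library can only check well-formedness, so that’s all this stand-in does, and the response shape is invented:

```python
import xml.etree.ElementTree as ET

def validate_xml(document):
    """Toy stand-in for an XML validation API App. Only checks
    well-formedness; the real service validates against uploaded XSDs."""
    try:
        ET.fromstring(document)
        return {"isValid": True, "errors": []}
    except ET.ParseError as e:
        return {"isValid": False, "errors": [str(e)]}
```

Within a Logic App, a call like this would be just another Action, with the result routable to downstream Actions.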

Adding maps for use in the transform service

I’m happy to see that the BizTalk API Apps will allow us to re-use existing artifacts (at least for schemas and maps). For rules, the story is a little bit different. We can actually define rules within the browser itself (more on that in a later post).

API Gateway

At this point we have Logic Apps (composite flows that bring together calls to multiple API Apps), and API Apps themselves (micro services that can be called directly). They live together in a logical resource group, but how can we manage authentication / authorization for calls to these apps? The answer to that is the API Gateway – a web app that handles API management functions such as authentication for all API apps in a resource group.
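Conceptually, the gateway is one front door that authenticates every call before dispatching it to an API App in the resource group. Here’s a toy model of that idea in Python – the tokens, app names, and response shape are all invented for illustration, not the actual gateway behavior:

```python
# Toy model of the API Gateway idea: authenticate first,
# then dispatch to an API App in the resource group.
VALID_TOKENS = {"secret-token"}

API_APPS = {
    "xml-validator": lambda payload: {"valid": payload.startswith("<")},
    "transformer": lambda payload: {"result": payload.upper()},
}

def gateway_call(app_name, token, payload):
    """Authenticate the caller, then forward to the named API App."""
    if token not in VALID_TOKENS:
        return {"status": 401, "body": "Unauthorized"}
    handler = API_APPS.get(app_name)
    if handler is None:
        return {"status": 404, "body": "Unknown API App"}
    return {"status": 200, "body": handler(payload)}
```

The appeal of the model is that individual API Apps never have to re-implement authentication – it is handled once, at the resource-group boundary.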

API Gateway

Everything is New

This is a time of huge change for Microsoft, not only embracing open source and greater platform support, but also investing more deeply into their own platform and integration. It’s definitely an exciting time to be a developer.

Here at QuickLearn, we will be hard at work making sure that you have a company that you can turn to for world-class cutting edge training in this space. So stay tuned, because some of the best days are still ahead.

Some exciting changes coming to our TFS courses

By Anthony Borton

One of the things that really helps set QuickLearn apart from our competitors is the fact that our TFS courseware is continually updated to ensure it is as current as possible.

With Microsoft releasing quarterly updates for Visual Studio and Team Foundation Server, you want to make sure you don’t miss out on any new or improved features that could make your job easier.

We also review the structure of our courses to ensure they are a great fit for the market and sometimes that means we change existing courses and sometimes it means we develop brand new courses.

Here is a list of changes that will take place over the next few months.

1. Merging and refining our offering for Team Leaders, Project Managers and Scrum Masters.

Our 2-day Managing Projects with Microsoft Visual Studio TFS 2013 course is being merged with our 3-day Applied Scrum Using Visual Studio 2013 course to become Managing Agile Projects Using Visual Studio TFS 2015 (3 days).

2. Expanding our TFS Administration and Configuration offering

To help keep up with all the changes, and to allow us to add more hands-on lab exercises, we’re expanding our TFS 2013 Configuration & Administration course from 3 days to 4 days for TFS 2015. You’ll see this change occurring from July 1st, 2015.

3. An improved offering for DevOps training

Our Build, Release, and Monitor Software Using Visual Studio 2013 course is being replaced with a new DevOps Using Visual Studio ALM 2015 course from July 1st, 2015. This has been revamped to offer a broader range of topics and some great new hands-on lab exercises. Want to know about the new Build features? Perhaps you’re trying to get your head around Desired State Configuration (DSC)?

4. Developers – choose Git or Team Foundation Version Control

Our TFS 2013 Developer Fundamentals course has always focused on Team Foundation Version Control. The market has asked us for something to help teams that are using Git for their version control, and we’ve got a solution.

Our TFS 2013 Developer Fundamentals is now available in two different flavors for the 2015 release. Simply choose the course corresponding to the version control provider you and your team are using.

  • TFS 2015 Developer Fundamentals – TFVC (2 days)
  • TFS 2015 Developer Fundamentals – Git (2 days)

5. Developer Enterprise Features – NEW

Microsoft has long offered a tiered pricing/feature model for Visual Studio with versions ranging from Professional up to Ultimate. Often development teams continue to use just the “standard” Visual Studio capabilities and don’t take advantage of some of the great productivity features available in the higher level editions.

Starting with the 2015 release, we’re offering a brand new course “Visual Studio 2015 Enterprise Features” (2-days) which will focus in on the features only found in the higher level editions. Many of these features offer huge benefits to teams or enterprise developers and we want to make sure you’re using them to achieve your optimum productivity.

This new course hits the public calendar in the second half of 2015.

Visit QuickLearn at the ALM Forum May 18-22, 2015

By Anthony Borton


The ALM Forum is moving to a new venue this year. The Bell Harbor Conference Center at Pier 66 on the Seattle waterfront plays host to a great lineup of workshops, keynotes, and breakout sessions. In previous years the event has been held on the Microsoft campus and, most recently, at the Washington State Convention Center.

QuickLearn is pleased to once again be a gold sponsor of the ALM Forum. It is the perfect place for all ALM practitioners and managers to come together and learn from the best in the industry. We’ll have a number of our experts manning the QuickLearn booth over the 3 main days of the conference so drop by and say hello.

This year’s event also introduces a new track to the program.

  1. Process of Software Development (NEW)
  2. Business of Software Delivery
  3. Principles and Practices of DevOps

Lastly, don’t forget that in addition to the three main days of the conference, there are also some great pre-conference and post-conference workshops you can sign up for.

TFS/ALM Pre-conference Workshop

QuickLearn’s lead ALM trainer and curriculum developer, Anthony Borton, will be presenting an updated version of his popular pre-conference workshop titled “Enhance your Application Lifecycle using Visual Studio Online and TFS”. If you’re planning on attending the ALM Forum, make sure you look at his workshop and sign up if it sounds interesting to you. At just $495 for a full day of cutting-edge, hands-on technical training, it is great value!

Integrate 2014 – Final Thoughts

By Nick Hauenstein

The last day at Integrate 2014 started off early with Microsoft IT demonstrating the benefits of their early investment in BizTalk Services to the bottom line, and then transitioned into presentations by Microsoft Integration MVPs Michael Stephenson and Kent Weare, discussing the cloud in practice and how to choose an integration platform, respectively.

Those last two talks were especially good, and I would recommend giving them a watch once the videos are posted on the Integrate 2014 event channel on the Channel 9 site.

Integration Current/Futures Q&A Panel

At this point, I’m going to stray from the style of my previous posts on Integrate 2014, because I want to take a little bit of time to clarify some things that I have posted, as well as to correct factual errors – given that we’re all learning this stuff at the same time right now. Don’t get me wrong, I do not at all want to discount the excellent sessions for the day from Integration MVPs and partners; I just believe it more important right now to make sure that I don’t leave errors on the table and propagate misunderstanding.

It seemed like throughout the conference, the whole BizTalk team, but Guru especially, was constantly fielding questions and correcting misunderstandings about the new microservices strategy. To that end, he organized an informal ad-hoc panel of fellow team members and Microsoft representatives to put everything out on the table and to answer any question that was kicking around in the minds of attendees about all of the new stuff for the week.

I’m going to let an abbreviated transcript (the best I could manage without recording the audio) do the talking here.

Microservices Is Not the Name of the Product, It’s a Way You Can Build Stuff

Q – We heard about microservices on Wednesday, how come you (to Kannan from MS IT) are going live with MABS, when you know that there are changes coming down the pipeline?

A – (Vivek Dali): “A lot of people walked away with microservices is the name of the product, and it’s not the name of the product, it’s an architectural pattern that creates the foundation for building BizTalk on top of. There is no product called Microservices that will replace BizTalk Services. BizTalk Services v1.0 had certain functionality, it had B2B, EAI, and there was big demand for orchestration and rules engine, and what we’re doing is adding that functionality. It does not mean that BizTalk Service v1.0 is dead and we have to re-do it. MS IT is actually one of the biggest customers of BizTalk [Services], and we’re telling them to stay in it, and we’re committing to support them as well as move them as we introduce new functionality over […]. The next step in how MABS is evolving is to a microservices architecture.”

Microsoft Is Committed to a Solid Core for Long-Term Cloud Integration

Q – (Michael Stephenson): I’m kind of one of the people that gives Guru quite the hard time […] About this time last year, I did a really deep dive with you on MABS 1.0 because we were considering when we would use it, what it offered, what the use cases were. At the time, I decided that it wasn’t ready for us yet […]. When we did the SDR earlier this last year, it was quite different at that time […] We were giving the team a lot of feedback on isolation and ease of deployment, and personally my opinion is that I really like the stuff shown this week, you really fielded that feedback. What I’ve seen from where we were a year ago, and from that SDR, personally I’m really pleased.

A: Don’t worry about coming around and telling us what we’re doing wrong — we do value that feedback. We will commit to come back to you as often as we can […].

(Vivek): Here’s how I think about the product: there’s a few fundamental things that we HAVE to get right, and then there’s a feature list. I’m not worried about the feature list right now, I’m worried about what we NEED to get right to last for the next ten years. Don’t worry about how we’ll take the feedback, send us your emails, we value that feedback.

BizTalk Services Isn’t Going Away, It’s Being Aligned to a Microservices Architecture

Q: I had a conversation with Guru outside, which I think is worthwhile sharing with everybody […] I was really confused at the beginning of the session as to how microservices fits in with where we are with BizTalk Server and with MABS 1.0 and where that brings us moving forward. How do the pipelines and bridges map to where we’re going. I was really excited about the workflow concept, but I couldn’t see that link between the workflow and the microservices.

A – (Guru): The flow was that you had to have a receive port and a pipeline, and you would persist the message in a Message Box for somebody to subscribe to, and that subscriber could be a workflow or a downstream system. That was Server; that continues, and it has been there for 10+ years.

Then there’s a pattern of “I’m receiving something, transforming it, and sending it somewhere,” and in Services that was one entity — we called that a bridge. It consisted of a receiving endpoint, a sequence of activities, and then routing. If you look at it as executing a sequence of activities, then what you have is a workflow.

The difference between what we were doing then and what we’re doing now is that we’re exposing the pieces of that workflow for external pieces to access. [Paraphrased]

How do we extend those workflow capabilities outside of just a BizTalk Server application? (microservices) [Paraphrased]

I’m (Nick) going to inject this slide from Sameer’s first day presentation where he compared/contrasted EAI in MABS with the microservices model for the same, as it’s incredibly relevant to demonstrating the evolution of MABS:

Sameer compares MABS with microservices architecture for the same

You Don’t Have to Wait for vNext in 2015 to Upgrade Your BizTalk Environment

Q: We’ve got a small but critical BizTalk 2006 installation that we’re upgrading now, or in the very near future. And I was wondering if we should upgrade it to 2013 R2, or should we upgrade it to the next release, and when is the next release?

A – (Guru): This is a scenario where we’re starting from 2006? I would strongly encourage you to move to 2013 R2, for two reasons: one is the support lifecycle, and the other is compatibility with the latest Windows, SQL, SharePoint, etc.

Then, look at what the application is doing. Is it something that needs to be on-prem, or is it something that is adaptable to the cloud architecture, or even if that application is something that could be managed in the cloud? There’s nothing that is keeping you from migrating to 2013 R2 today.

To further drive home Guru’s point here, I’m (Nick) personally going to add in a slide that was shown on the first day, showing the huge investments the BizTalk team has been making into BizTalk Server over the last few versions. Quite a few people see it as not a lot of change on the surface, but this really goes to show just how much change is really there (heck, it took me ~100 pages worth of content just to lightly touch on the changes in 2013, and I’m still working on 2013 R2):

How BizTalk Server 2006 R2 stacks up to BizTalk Server 2013 R2

MABS 1.0 is Production Ready and Already Doing Great Things, You Should Feel Confident to Use It

Q: How do we reassure our customers that are moving to cloud based integration now, and are seeing MABS now, and are seeing the same tweets about the next version? Migration tools aren’t the full answer because there’s still a cost in doing a migration, so how do we convince customers to use MABS now?

A – (Guru): MABS 1.0 primary market has been EDI because that was the first workload that we targeted. That’s something that is complete in all aspects. So if you’re looking at a customer that is looking to use MABS for EDI, then I strongly encourage that because there’s nothing that changes between using EDI in MABS and whatever future implementation we have [Heavily paraphrased]

(Vivek): Remember MS IT is one of the biggest customers, and it’s not like we’re telling them a different thing than we’re telling you […]. Joking aside, the stuff they’re running is serious stuff, and we don’t want to take a risk, and if there’s not faith in that technology, I don’t want them to take a dependency on it.

Azure Resource Manager Isn’t the New Workflow – But the Engine That It Uses Is

Q: How will Azure Resource Manager fit into this picture?

A – (Vivek): [How does] Azure Resource Manager fit in? Azure Resource Manager is a product whose purpose is installing resources on Azure. It is built on an engine that can execute actions in a long-running fashion, wait for the results to come back, and run actions in parallel. Azure Resource Manager has a purpose and it will be its own thing, but we’re using the engine. We picked that engine because it’s already running at massive scale and it was built with thought for how the workload will eventually evolve. It already knows how to talk to different services. We share technologies, but those are two different products.

Microservices ALM Is Partially There and On the Radar, But Is Still a Challenge

Q: What is the ALM story?

A: Support for CI, for example? The workflow is a part that we’re still trying to figure out. For the microservices technology part of it, the host that we run on already supports it. One other piece of feedback that came in was “how do I do this for an entire workflow,” and we’ll go figure that out.

Componentizing Early Will Pay Dividends Moving Forward

Q: (Last question) As teams continue to design for the existing platform, we understand the message of “don’t worry about microservices quite yet.” As we design systems going forward, is there a better way to do it, keeping in mind how that will fit into the microservices world? For example, componentizing things more, or deciding when to use what kind of adapter. What are things that we can do to ensure a clean migration?

A – (Vivek): I think there are two kinds of decisions. One is the business decisions (do we need to have it on-premises, etc.) – what stays hybrid vs. what goes to the cloud. We want you to make that decision based on the business; we will have technology everywhere.

There are patterns that you can benefit from. I think componentizing [will be good]. There are design principles that are just common patterns that you should follow (e.g., how you take dependencies).

So that’s where we are in terms of hearing things direct from the source at this point. It’s certainly a lot of information to take in, but I’m really happy to see that the team building the product realizes that, and is actively working on clearing up misconceptions and clarifying the vision for the microservices ecosystem.

Three Shout-Outs

Before I wrap this up, I want to give three shout-outs for content that I more or less glossed over and/or omitted.

  • Stott Creations is doing great things, and I have to hand it to the HIS team for being so intimately involved in not only helping a customer, but helping a partner look good while helping that customer. In addition to that – the Informix adapter looks awesome, and I’m really digging the schema generation from the custom SQL query; that was a really nice touch.

Paul Larsen Presents Features of the BizTalk Adapter for Informix

  • Sam Vanhoutte’s session touched on a not too often discussed tension between what the cloud actually brings in terms of development challenges, and what customers are trying to get out of it. While he was presenting in terms of how Codit addresses these customer asks by dealing with the constant change and risk on their customers’ behalf, these are all still valid points in general. I think he did a great job at summing it up nicely in these two slides:

Challenges – Constant change / Multi-tenancy / Roadmap / DR Planning

  • Last, but certainly not least, I want to give a shout-out and huge thanks to Saravana and the BizTalk 360 team for making the event happen. They also really took one for the team today, as Richard Broida pointed out – ensuring that everyone would have time to share on a jam-packed day. The execution was spot-on for a really first-class event.

To Microsoft: Remember That BizTalk Server Connects Customers To The Cloud

As a final thought from the Integrate 2014 event: we’re constantly seeing Microsoft bang the drum of “Azure, Azure, Azure, cloud, cloud, cloud…” Personally, I love it; I fell in love with Azure in October of 2008 when Steve Marx showed up on stage at PDC and laid it all out. However, what we can’t forget, and what Microsoft needs to remember, is that any customer bringing their applications to the cloud is doing integration – and Microsoft’s flagship product for doing that integration, better than any other, is BizTalk Server.

BizTalk Server is how you get customers connected to the cloud – not in a wonky disruptive way – but in a way that doesn’t necessarily require that other systems bend to either how the cloud works, or how BizTalk works.

It’s a Good Day To Be a BizTalk Dev

These are good times to be a developer, and great times to be connected into the BizTalk Community as a whole. The next year is going to open up a lot of interesting opportunities, as well as empower customers to take control of their data (wherever it lives) and make it work for them.

I’m out for now. If you were at the conference and you want to stick around town a little bit longer, I will be teaching the BizTalk Server Developer Deep Dive class at QuickLearn Training headquarters in Kirkland, WA this coming week. I’d love to continue the discussion there. That being said, you can always connect live to our classes from anywhere in the world as well!

Integrate 2014 Day 2 in Review

By Nick Hauenstein

I’m going to start off today’s post with some clarifications/corrections from my previous posts.

First off – It is now my understanding that the “containers” in which the Microservices will be hosted and executed in are simply a re-branding of the Azure Websites functionality that we already have. This has interesting implications for the Hybrid Connections capability as well – inasmuch as our Microservices essentially inherit the ability to interface directly with on-premise systems as if they were local.

This also brings clarity to the “any language” remark from the first day. In reality, we’re looking at building them in any language supported by Azure Websites (.NET languages, Java, PHP, Node.js, Python) – or truly any language if we host the implementation externally but expose a facade through Azure Websites (at the expense of egress, added latency, loss of auto-load balancing and scale), but I digress.

UPDATE (05-DEC-2014): There are actually some additional clarifications now available here; please read them before continuing. Most importantly, there is no product called the Azure BizTalk Microservices Platform – it’s just a new style in which Microsoft is approaching building out and componentizing integration (and other) capabilities within the Azure platform. Second, Azure Resource Manager is a product that sits on top of an engine. The engine is what’s being shared with the new Workflow capability discussed – not the product itself. You could say it’s similar to how workflow services and TFS builds use the same underlying engine (WF).

The rest of the article remains unchanged because there are simply too many places where the name was called out as if it were a product.

Rules Engine as a (Micro)Service

After a long and exciting day yesterday, day 2 of Integrate 2014 got underway with Anurag Dalmia bringing the latest thinking around the re-implementation of the BizTalk Business Rules Engine that is designed to run as a Microservice in the Azure BizTalk Microservices Platform.

Anurag Dalmia presents the Rules Engine Design Principles 

First off, this is not the existing BizTalk Rules Engine repackaged for the cloud. This is a complete re-implementation designed for cloud execution and with the existing BRE pain points in mind. From the presentation, it sounds as if the core engine is complete, and all that remains is a new Azure Portal-based design experience (which currently only exists in storyboard form) around designing vocabularies, rules, and policies for the engine.

Currently the (XML-based, not JSON!) vocabularies support:

  • Constant & XML based vocabulary definitions
  • Single value, range and set of constants
  • XML vocabulary definitions (created from uploaded schema)
  • Bulk Generation (no details were discussed for this, but I’d be very interested in seeing what that will look like)
  • Validation
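As a rough illustration of what an XML-based vocabulary definition covering those capabilities might contain, here is a hypothetical sketch – the element names and attributes are invented, since the actual format was only shown in storyboard form:

```xml
<Vocabulary name="ClaimsVocabulary">
  <!-- Constant definitions: single value, range, and set -->
  <Constant name="AutoApprovalLimit" type="decimal" value="1000" />
  <ConstantRange name="StandardClaimRange" type="decimal" min="0" max="5000" />
  <ConstantSet name="SpecialTreatmentIDs" type="int" values="101,205,310" />
  <!-- XML definition bound to a node in an uploaded schema -->
  <XmlDefinition name="ClaimAmount" schema="Claim.xsd" xpath="/Claim/Amount" />
</Vocabulary>
```

The point of a vocabulary, as with the on-premises BRE, is to give rule authors friendly names to write rules against rather than raw XPath expressions.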

Vocabulary Design Experience in Azure BizTalk Microservices

Missing from the list above are really important things like .NET objects and database tables, but these are slated for future inclusion. That being said, I’m not sure exactly how custom .NET classes as facts are going to work in a microservices infrastructure, assuming that each microservice is an independent, isolated chunk of functionality invoked via RESTful interactions. Really, the question becomes: how does it get your .dlls so that it can Activator.CreateInstance that jazz? I guess if schema upload can be a thing there, then .dll upload can as well. But then, are these stored in private Azure blob containers, some other kind of repository, or should we even care?
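The question above is about .NET’s Activator.CreateInstance, but the underlying “upload it, then instantiate it by name” idea is language-neutral. A minimal Python sketch of the same pattern (using dynamic import in place of assembly loading; the fact types here are just standard-library classes used for demonstration):

```python
import importlib

def create_fact(module_name, class_name, *args):
    """Load a fact type by name at runtime -- a rough Python stand-in
    for .NET's Activator.CreateInstance over an uploaded assembly."""
    module = importlib.import_module(module_name)
    fact_type = getattr(module, class_name)
    return fact_type(*args)

# Instantiate standard-library types purely from string names:
amount = create_fact("decimal", "Decimal", "12.50")
ratio = create_fact("fractions", "Fraction", 3, 6)
```

Whatever the final design, the engine would need some store to resolve those names against – which is exactly the open question about blob containers vs. a dedicated repository.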

On the actual rules-creation side, things become quite a bit more interesting. Gone is the painful million-click Business Rule Composer – instead, free-flowing text takes its place. All of this still happens in a web-based editor that also provides Intellisense-like functionality, tool-tips, and color-coding of special keywords. To get a sense for what these rules look like, here’s one rule that was shown:

If (Condition)

ClaimAmount is greater than AutoApprovalLimit OR
TreatmentID is in SpecialTreatmentIDs

Then (Action)

ClaimStatus equals "Manual Approval Required"
ClaimStatusReason equals "Claim sent for Manual Approval"
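Read as a predicate, that rule could be sketched like this (field names adapted from the example above; this is purely illustrative, not the engine’s actual evaluation API):

```python
def evaluate_claim_rule(claim, auto_approval_limit, special_treatment_ids):
    """Toy evaluation of the rule shown above: route high-value or
    special-treatment claims to manual approval."""
    if (claim["ClaimAmount"] > auto_approval_limit
            or claim["TreatmentID"] in special_treatment_ids):
        claim["ClaimStatus"] = "Manual Approval Required"
        claim["ClaimStatusReason"] = "Claim sent for Manual Approval"
    return claim
```

The interesting part, of course, is everything this sketch leaves out – the forward chaining over a working set of facts that a real rules engine provides.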

Features of the Rules Engine were said to include:

  • Handling of optional XML nodes
  • Enable/Disable Rules
  • Rule prioritization through drag-and-drop
  • Support for Update / Halt Forward Chaining (No Assert?)
  • Test Policy (through Web UI, or via Test APIs)
  • Schema Management

I’m not going to lie, at that point, I got really concerned with no declared ability to Assert new facts (or to Retract facts for that matter), and I’m hoping that this was a simple omission to the slide, but I do intend to reach out for clarification there.

Storyboard for the Web-based Test Policy UI

Building Connectors and Activities

After the session on the Rules Engine, Mohit Srivastava was up to discuss Building Connectors and Activities. The session began, however, with a recap of some of the things that Bill Staples discussed yesterday morning. I’m actually really thankful for this recap, as I had missed some things along the way (namely Azure Websites as the hosting container), and I also had a chance to snap a picture of what is likely the most important slide of the entire conference (which I had missed getting a picture of the first time around).

Microservices are part of refactored App Platform with integration at the core

I’ve re-created the diagram of the “refactored” Azure App Platform with a few parenthetical annotations:

Re-factored Azure App Platform

One interesting thing about this diagram, when you really think about it, is that the entry point (for requests coming into stuff in the platform) doesn’t have to be from the top down. It can be direct to a capability, or to a process, or to a composed set of capabilities or to a full human friendly UI around any one of those things.

So what are all of the moving pieces that will make it all work?

  1. Gallery for Microservice Discovery
    • Some Microservices will be codeless (e.g., SaaS and On-premises connectors)
    • Others will be code (e.g. activities and custom logic)
  2. Hosting – Azure App Container (formerly Azure Websites)
  3. Gateway
    1. Security – identity broker, SSO, secure token store
    2. Runtime – name resolution, isolated storage, shared config, “IDispatch” on WADL/Swagger (though such metadata is technically optional)
    3. Proxy – Monitoring, governance, test pages
      • Brings all of the value of API management to the gateway out-of-the-box
  4. Developers
    • Writing RESTful services in your language of choice.
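To make the “Developers” piece of that list concrete, here’s a minimal sketch of what a codeful microservice amounts to: a small RESTful capability paired with Swagger-style metadata that a gateway could use for discovery and dispatch. This is purely illustrative – the route names, metadata shape, and dispatch function are my own assumptions, not the actual gateway contract:

```python
import json

# Hypothetical "codeful" microservice: one small capability (flat file
# decoding) plus the Swagger-style metadata the gateway would consume.
# All names here are illustrative, not a real Azure API.

SWAGGER_METADATA = {
    "swagger": "2.0",
    "info": {"title": "FlatFileDecode", "version": "1.0"},
    "paths": {
        "/decode": {
            "post": {
                "consumes": ["text/plain"],
                "produces": ["application/json"],
            }
        }
    },
}

def decode_flat_file(body: str, delimiter: str = ",") -> str:
    """Decode a delimited flat file into a JSON array of records."""
    records = [line.split(delimiter) for line in body.splitlines() if line]
    return json.dumps(records)

def handle_request(path: str, body: str) -> tuple[int, str]:
    """Minimal dispatch, standing in for the gateway's name resolution."""
    if path == "/decode":
        return 200, decode_flat_file(body)
    return 404, json.dumps({"error": "unknown capability"})
```

The point is simply that the service itself stays plain REST; it’s the attached metadata that makes it discoverable in the gallery.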

To prove exactly what a Microservice is, he demoed a sample service starting from just the raw endpoint. You can even look for yourselves here:

What’s really cool about all of this is that the tooling support for building such services is going to be baked into Visual Studio. We already have Web API for cleanly building out RESTful services, but the ability to package these with metadata and publish them to the gallery (à la NuGet) is going to be included as part of a project template and the Publish Web experience. This was all shown in storyboard form, and that’s when I had my moment of developer happiness (much like Nino’s yesterday, as he gained reprieve from crying over BizTalk development pain points when first using the productivity tool that he developed).

Publish Web Experience for BizTalk Microservices built using Web API

Finally, we’re getting low enough into the platform that we’re inside Visual Studio and can meaningfully deploy some code – one of the greatest feelings in the whole world.

The talk continued with fragments of code (that, unfortunately, were too blurry in my photos to capture here) demonstrating the direct runtime API that Microservices will have access to in order to do things like encrypted isolated storage, and a mechanism to manage and flow tokens for external SaaS products that are used within a larger workflow. There’s some really exciting stuff here. I honestly could have sat through an entire day of that session just going all the way into it.

But, alas, there were still more sessions to be had.

API Management and Mobile Services

I’m grouping these together inasmuch as they represent functionality within Azure that we have had for some time now (Mobile Services certainly longer than API Management). I’ve seen quite a bit on these already, and was mainly looking for the touchpoints with the Microservices story.

API Management sits right under Microservices in the diagram shown earlier, and it would make sense for it to become the monetization strategy for developers who want to write/expose a specific capability within Azure. However, that wasn’t explicitly stated; in fact, the only direct statement we had was the one above, where we saw that the capabilities of API Management are available within the gateway. That left me a little confused, and I honestly could have missed something obvious there. As much as Josh was fighting PowerPoint, I was fighting my Surface toward the beginning of his talk:

Fighting my Surface at the beginning of Josh Twist's talk on API Management

If you’re not familiar with API Management, it provides the ability to put a cloud-hosted wrapper around your API and project it (in the data-shaping sense) to carefully choose the exposed resources, actions, and routes through which they can be accessed. It handles packaging your APIs into saleable subscriptions and monitoring their use. That’s a gross oversimplification, and I highly recommend that you dig in right away and explore it, because there’s a lot there, and it’s super cool.

That being said, in terms of Microservices, it would be truly great if we could use that to wrap external services and then turn the Azure-hosted portion of the API into a Microservice, in such a way that we could even flow back to our external service some of the same information that we can get directly from the APIs available when writing within a proper Azure App Container. For example, to be able to request a certain value from the secure store and have it passed in a special HTTP header to our external service – which could then use that value in any way it wanted. That would really help speed adoption, as I could quite easily take any on-premise BizTalk Server capability, wrap a nice RESTful endpoint around it, and not have to worry about authorization, rate limiting, or re-implementation.
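The idea above can be sketched in a few lines. This is a hypothetical illustration only – the secure store, the header name, and the call shape are all my assumptions, not the actual Azure gateway API:

```python
# Hypothetical gateway-side proxy: pull a value from a secure token store
# and flow it to an external service as a special HTTP header. The store
# key and "X-Gateway-Token" header are invented for illustration.

SECURE_STORE = {"external-svc/api-key": "s3cret-token"}  # stand-in secure store

def proxy_call(downstream, store_key: str, request: dict) -> dict:
    """Wrap a downstream call, injecting the stored secret as a header."""
    headers = dict(request.get("headers", {}))
    headers["X-Gateway-Token"] = SECURE_STORE[store_key]  # hypothetical header
    return downstream({**request, "headers": headers})

def external_service(request: dict) -> dict:
    """Fake external service that reports whether the token arrived."""
    token = request["headers"].get("X-Gateway-Token")
    return {"status": 200 if token else 401, "saw_token": bool(token)}
```

The external service never touches the secure store directly; it just receives the flowed value and uses it however it likes – which is exactly the adoption shortcut described above.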

Next up was Kirill Gavrylyuk, rocking Xamarin Studio on a Mac to talk about Mobile Services (he even went for a hat-trick and launched an Android emulator). He actually did feature a slide toward the end of his talk showing the enterprise/non-consumer-centric Mobile Services development experience by positioning Mobile Services within the scope of the refactored Azure App Platform:

Mobile Services in light of Refactored App Platform

I’m going to let that one speak for itself for now.

Those two talks were a lot of fun, and I don’t want to sell them short by not writing as much, but there’s certainly already a lot of information out there for these ones.

Big Data With Azure Data Factory & Power BI

The day took a little bit of a shift after lunch as we saw a few talks on both Azure Data Factory and Power BI. In watching the demos, and seeing those talks, it’s clear that there’s definitely some really exciting stuff there. Sadly, I’m already out-of-date in that area, as there were quite a few things mentioned that I was entirely unaware of (e.g., Azure Data Factory itself). For now, I’ll leave any coverage of those topics to the BI and Big Data experts – which I will be the first to admit is not me. I don’t think in more than 4 dimensions at a time – though with Power BI maybe all I need to know how to do is to speak English.

For all of those out there that spend their days writing MDX queries, I salute you. You deserve a raise, no matter what you’re being paid.

HCA Rocks BizTalk Server 2013 R2

For the last talk of the day, Alan Scott from HCA and Todd Rivers from Microsoft presented on HCA’s use of BizTalk Server 2010 & 2013 R2 for processing HL7 (and MSMQ + XML) workloads. The presentation was excellent, and it’s going to be really difficult to capture it here. One of the most impressive things (besides their own web-based rules editing experience) is the sheer scale of the installation:

HCA Rocks BizTalk Server 2013 R2

Cultural Change Reaps Biggest Rewards – Value People Not Software

The presentation really highlighted not only the flexibility of the BizTalk platform, but also the power of having a leader who is able to evangelize the capability to the business – while being careful to talk not in terms of the platform, but in terms of the people and the data, and while equipping the developers with the tools they need to succeed with that platform.



Looking Forward

Looking forward beyond today, I’m getting really excited to see the direction that we’re headed. We still have a rock solid platform on-premise alongside a hyper-flexible distributed platform brewing in the cloud.

To that end, I actually want to announce today that QuickLearn Training will be hosting an Azure BizTalk Microservices Hackathon shortly after the release of the public preview. It will be a fun time to get together and look through it all together, to discuss which microservices will be valuable, and most of all to build some together that can provide value to the entire community.

If any community is up for that, I know it’s the BizTalk community. I’m just really excited that there’s going to be a proper mechanism to surface those efforts so that anyone who builds for the platform will have it at their disposal without worries.

If you want more details, or you want to join us (physically, or even remotely) when that happens, head over here:

For that matter, if you want to host one in your city at the same time and connect up with us here in Kirkland, WA via live remote feed, that would be great too ;-) Let’s build the future together.

Well, that’s all for now! Take care!

BizTalk Microservices: A Whole New World – Or Is It?

By Nick Hauenstein

NOTE: I’m still processing all of the information I took in from Day 1 at the Integrate 2014 conference here in Redmond, WA. This post represents a summary of the first day with some thoughts injected along the way.

This morning was a morning of great changes at Integrate 2014. It kicked off with Scott Guthrie presenting the keynote session without his characteristic red shirt – a strange omen indeed. He brought the latest news from the world of Azure and touted the benefits of the cloud alongside the overall strategy and roadmap.

ScottGu Blue Shirt

After his presentation concluded, Bill Staples (unfortunate owner of the email address) took the stage and presented the new vision for BizTalk Services.

Introducing Azure BizTalk Microservices

NOTE: Since there are a lot of places linking directly to this post, I have made factual changes in light of new information found here.

Microsoft Azure BizTalk Services, still very much in its 1.0 iteration, is undergoing a fundamental change. Instead of providing the idea of a bridge tied together with other bridges in an itinerary, the actual bridge stages themselves – the raw patterns – are being extracted and exposed as Azure BizTalk Microservices, aligned with a microservices-style architecture.

In reality, this re-imagination of BizTalk Services won’t really be a separate Azure offering – in fact, it’s more like the BizTalk capabilities are being exposed as first-class capabilities within the core Azure Platform. Every developer that leverages Azure in any way could choose to pull in (and pay for) only the specific BizTalk Microservices capabilities they need – at the same time, that same developer has a framework that allows them to build their own microservices and deploy them to a platform that enables automatic scaling & load balancing, and also provides monetization opportunities.

Bill Staples Presents Azure BizTalk Microservices

The following BizTalk features were presented as candidates for implementation in the form of microservices.

  • Validation
  • Batching/Debatching
  • Format Conversion (XML, JSON, FlatFile) – i.e., Translation
  • Extract
  • Transform
  • Mediation Patterns (Request Response / One Way)
  • Business Rules
  • Trading Partner Management
  • AS2 / X12 / EDIFACT

It definitely all sounds familiar. I remember a certain talk with Tony Meleg at the helm presenting a similar concept a few years back. This time, it looks like it has legs in a big way – inasmuch as it actually exists, even if only in part – with a public preview coming in Q1 2015.

So What Are Microservices Anyway?

Microservice architecture isn’t a new thing in general. Netflix is known for a very successful implementation of the pattern – see the previous link regarding Netflix’s implementation. Read it, understand it, and then you can have a prophetic voice in the future as you anticipate the specific challenges that can come up when using this architecture – although Microsoft’s adoption of Azure Websites as the hosting container can alleviate some of these concerns outright. Martin Fowler says this as his introduction to the subject:

The term “Microservice Architecture” has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.

Fowler further features a sidebar that distances microservice architecture from SOA in a sort of pedantic manner that, honestly, I’m not sure adds value. There are definitely shades of SOA there, and that’s not a bad thing. It also adds value to understand the need for different types of services and to have an ontology and taxonomy for services (I’m sure my former ESB students have all read Shy’s article, since I’ve cited it to death over the years).

Yeah, But How Are They Implemented?

Right now, it looks like microservices are going to simply be code written in any language* that exposes a RESTful endpoint that provides a small capability. They will be hosted in an automatically scaled and load balanced execution container (not in the Docker sense, but instead Azure Websites rebranded) on Azure. They can further be monetized (e.g., you pay me to use my custom microservice), and tied together to form a larger workflow.

Azure BizTalk Microservices Workflow Running in the Azure Portal

Yes, I did just use the W word, but surprisingly it’s NOT the WF word. XAML has no part in the workflows of BizTalk vFuture. Instead, we have declarative JSON workflows seemingly based on those found in Azure Resource Manager. That is, they share the same engine that Azure Resource Manager uses under the covers, because that engine was already built for cloud scale and has certain other characteristics that made it a good candidate for microservice composition and for managing long-running processes. They can be composed in the browser, and, as shown in the capture above, they can also be monitored in the browser as they execute live.

Workflow Designer

The workflow engine calls each service along the path, records execution details, and then moves along to the next service with the data required for execution (which can include the output of any previous step):

JSON Workflows -- Check the Namespace, we have Resource Manager in play

Further, the workflow engine has the following characteristics:

  • Supports sequential or conditional control flow
  • Supports long-running workflows
  • Can start, pause, resume, or cancel workflow instances
  • Provides message assurance
  • Logs rich tracking data
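The behavior described above – calling each action in order, recording execution details, and letting later steps consume earlier outputs – can be sketched with a toy engine. Everything here is an assumption for illustration: the real MABS workflow definitions are JSON, and their actual shape and condition syntax were not shown in this much detail:

```python
# Minimal sketch of a declarative workflow run by a simple engine:
# sequential + conditional control flow, outputs flowing between steps,
# and a tracking log. Action names and the "if" syntax are invented.

ACTIONS = {  # stand-ins for API App / microservice calls
    "validate": lambda data: {"valid": "order" in data},
    "transform": lambda data: {"payload": data["order"].upper()},
    "email": lambda data: {"sent": True, "body": data["payload"]},
}

WORKFLOW = [  # declarative definition, loosely JSON-template flavored
    {"name": "validate", "input": "trigger"},
    {"name": "transform", "input": "trigger", "if": ("validate", "valid")},
    {"name": "email", "input": "transform", "if": ("validate", "valid")},
]

def run(workflow, trigger: dict) -> dict:
    """Execute steps in order, with conditional flow and tracking."""
    outputs, tracking = {"trigger": trigger}, []
    for step in workflow:
        cond = step.get("if")
        if cond and not outputs[cond[0]].get(cond[1]):
            tracking.append((step["name"], "skipped"))
            continue
        outputs[step["name"]] = ACTIONS[step["name"]](outputs[step["input"]])
        tracking.append((step["name"], "succeeded"))
    return {"outputs": outputs, "tracking": tracking}
```

Because every step’s output is retained by name, a step at the end of the chain can freely reference data produced at the beginning – which is the property the real engine’s tracking and data-flow story seems to hinge on.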

I’m really keen on seeing how long-running workflow is a thing when we’re chaining RESTful calls (certainly we don’t hold the backchannel open for a month while waiting for something to happen) – but I may be missing something obvious here, since I just drank from the fire hose that is knowledge.

What does the Designer Look Like?

The designer supports the idea of pre-baked workflow templates for common integrations:

  • SurveyMonkey to Salesforce
  • Copy Dropbox files to Office 365
  • “When Facebook profile picture…”
  • Add new leads to newsletter – Salesforce + Mailchimp
  • Alert on Tweet Workflow – Twitter + Email
  • Download photos you’re tagged in – Facebook + Dropbox
  • Tweet new RSS articles
  • Twitter + Salesforce (?)

However, it also provides for custom workflows built from BizTalk microservices composed within a browser-based UI. It was presented as if it were going to be a tool that Business Analysts would ultimately use, but I’m not sure that will be the case up front, or even down the line.

Workflow Designer in BizTalk vFuture

These workflows will be triggered by something. Triggers shown in the UI and/or on slides included (but weren’t necessarily limited to):

  • FTP File Added
  • Any tweet
  • A lot of tweets
  • Recurring schedule
  • On-demand schedule
  • Any Facebook post
  • A lot of Facebook posts

In terms of the microservices actually seen in the UI, we saw the following (and likely would have seen more had the presenters scrolled down):

  • Validate
  • Rules
  • Custom filter
  • Send an Email
  • SendResponse
  • SurveyMonkey
  • Custom API
  • Custom map
  • Create digest
  • Create multi-item digest
  • XML Transform
  • XML Validate
  • Flat File Decode
  • XML XPath Extract
  • Delete FTP File
  • Send To Azure Table
  • Add Salesforce leads

The tool is definitely pretty, and it was prominently featured in demos for talks throughout the day – even though quite a few pieces of functionality were shown in the form of PowerPoint Storyboards.

So How Do We Map EAI Concepts To This New Stuff?


Well, we have special entities within this world called Connectors. They are our interface to the outside world. Everything else within the world of the original MABS 1.0 and even BizTalk Server is seen as simply a capability that could be provided by a microservice.

So That’s the Cloud, What’s on-Prem?

In the future – not yet, but at some point – we will see this functionality integrated into the Azure Pack alongside all of the other Azure goodness that it already brings to your private cloud. But remember, this is all still in the very beginning stages. We’ve yet to hear much about how really critical concerns – debugging, unit testing, team development, differencing/branching/merging/source control in general – are going to be handled in a world where you’re building declarative stuff in a browser window.

So that’s all fine and good for the future, but what about the BizTalk Server 2013 R2 stuff that I have right now? Well, keep doing great things with that, because BizTalk Server isn’t going away. There’s still going to be a new major version coming every 2 years, with minor versions every other year, and cumulative updates every 3 months.


What about my investments in Azure BizTalk Services 1.0? Well, it’s not like Microsoft is going to pull the plug on all of your great work that you’re actively paying them to host. That’s monies they are still happy to take from you in exchange for providing you a great service, and they will continue to do so under SLA – it’s a beautiful thing.

Also, if you’re moving to the new way of doing things, your schemas and maps remain unchanged; they will move forward in every way. However, you will see a new web-based mapping tool (which I simply lack the energy at the moment to discuss further in this post).

However, future investment in that model is highly unlikely based on everything announced today. I’m going to let this statement stand, because it was opinion at the time I wrote it. That being said, read this post before formulating your own.

The Old New Thing

I hate to keep coming back to patterns here, but I find myself in the same place. I will soon have yet another option available within the Microsoft stack for solving integration challenges (however, this time it’s not a separate offering, it is part of the core stack). At the same time, the problems being solved are the same, and we still can apply lessons learned moving forward. Also, integration problems are presenting themselves to a larger degree in a world of devices, APIs everywhere, and widely adopted SaaS solutions.

It’s an exciting time to be a BizTalk Developer – because after today, every developer became a BizTalk Developer – it’s part of the core Azure stack, every piece of it. For those that have been around the block a few times, the wisdom is there to do the right things with it. For those who haven’t, a whole new world has just opened up.

That’s all for now. I need some sleep before day 2. :-)

* With regard to the “any language” comment – that was the statement, but there was a slide at one point that called out very specifically “.NET, Java, node.js, PHP” as potential technologies there, so take it with a grain of salt. It looks like the reason for that is we’re hosting our microservices in an Azure Websites hosting container that has been rebranded.

** Still waiting for some additional clarification on this point, I will update if my understanding changes. Updated in red.

Integrate 2014 Starting Up

By Nick Hauenstein

Tomorrow morning, the Integrate 2014 conference revs up here in Redmond, WA with Microsoft presenting the vision for the next versions of BizTalk Server and Microsoft Azure BizTalk Services.

I’m getting pretty excited as I’m looking over the first day agenda to see what to expect from the talks and sessions like Business Process Management and Enterprise Application Integration placed in between talks about BizTalk Services and On-Premise BizTalk Server. I’m even more excited seeing the second day agenda and seeing talks like Business Rules Improvements, and especially Building Connectors and Activities.

What’s Keeping Me Busy

In anticipation of this event, I have been building out a hybrid cloud integration sample application that leverages some of the ideas laid out in my previous post regarding cloud integration patterns while also providing the content for the long overdue next piece in my new features series for BizTalk Server 2013 R2.

In the sample, I’m running requests through MABS for content-based routing based on an external SQL lookup, with the requests targeting either an on-premise SQL Server database or a BizTalk Server instance for further processing that can’t otherwise be done currently with BizTalk Services (namely Business Rules Evaluation and Process Orchestration).

I’m hoping that after the conference this week, I will be able to tear the sample apart and build out most (if not all) of the elements in the cloud.

Best Bridge Between Azure and On-Prem?

Along the way, I’ve been further playing with using Service Bus Topics/Subscriptions as the Message Box of the cloud. At the same time, it represents a nice bridge between BizTalk Services and On-Premise BizTalk Server.

Consider the itinerary below:


This was actually the first draft of prototyping out the application. What this represents is an application that is receiving requests from client applications that have been published to a service bus topic. As they are received, they are routed based on content and sent either to a database to be recorded (as an example of an on-premise storage target), or to BizTalk Server 2013 R2 for further processing through a Service Bus Relay.

Given the scenario, maybe a relay is appropriate – lower latency requirement, no requirement for durability (which we have already sacrificed by running it through an initial bridge).

However, maybe we want to take a more holistic approach, and assume that the cloud is giving us a free public endpoint and some quite powerful content-based routing, translation, and even publish-subscribe capability when we bring Azure Service Bus to the mix. Let’s further assume that we view these capabilities as simply items in our toolbox alongside everything that BizTalk Server 2013 R2 is already providing us.

Let’s make it more concrete. What if the on-premise processing ultimately ends up sending the message back to the TradesDb? Is there a potential waste in building out that portion of the process in both locations?

Service Bus is the New Hybrid Integration MessageBox

Let’s try this instead:


Here, instead of using a relay, we’re using a Service Bus Topic to represent the call out to BizTalk Server 2013 R2.

Why would we do this? While it introduces slightly more latency (in theory – though I haven’t properly tested that theory), it becomes pure, loosely coupled pub-sub. I’m going to go out on a limb here (and risk being called an architecture astronaut) and say that not only is that not a bad thing, it might even be a good idea. Feeding a topic rather than submitting directly to BizTalk Server via relay allows us to easily swap out the processing of the rules with any mechanism we want, at any time – even if it means that we will have to transform the message and submit it by a completely different transport. Maybe one day we will be able to replace this with a call to a Rules Engine capability within MABS (crossing my fingers here) if we see such a capability come.
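The pub-sub shape at play here can be sketched in memory. To be clear, this is an illustration of the pattern, not the Azure Service Bus SDK – the class, filter style, and subscription names are all invented:

```python
# In-memory sketch of topic/subscription pub-sub with content-based
# filters, standing in for Azure Service Bus Topics. Swapping out a
# downstream processor just means registering a different subscriber.

class Topic:
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name, predicate):
        """Register a filtered subscription with its own message queue."""
        self.subscriptions[name] = (predicate, [])

    def publish(self, message: dict):
        """Deliver a copy of the message to every matching subscription."""
        for predicate, queue in self.subscriptions.values():
            if predicate(message):
                queue.append(dict(message))

    def receive(self, name):
        """Pop the next queued message for a subscription, if any."""
        _, queue = self.subscriptions[name]
        return queue.pop(0) if queue else None

# Hypothetical routing for the trades scenario: small trades go straight
# to the database logger; large ones go to BizTalk Server for rules.
trades = Topic()
trades.subscribe("to-db", lambda m: m.get("amount", 0) < 10_000)
trades.subscribe("to-biztalk", lambda m: m.get("amount", 0) >= 10_000)
```

The loose coupling claimed above shows up directly: the publisher never knows whether “to-biztalk” is drained by a relay into BizTalk Server, by a future MABS rules capability, or by something else entirely.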

Further, we have broken out the logging of trades to the database into its own separate miniature process alongside the rest. This one might be fed by messages generated by BizTalk Server 2013 R2 on-premise or by the earlier processing in MABS – providing only a single implementation of the interface to manage and change.

Is It The Right Way™?

I don’t know. It feels kind of right. I can see the benefits there, but again, we are potentially making a sacrifice by introducing latency to pay for a loosely coupled architecture. Call me out on it in the comments if that makes you unhappy. Ultimately, this would have to simply become a consideration that is weighed in choosing or not choosing to do things this way.

What Could Make it Even Better?

Imagine a time when we could build out a hybrid integration using a design surface that makes it seamless – one where I could quickly discover the rest of the story. Right now there’s quite a bit happening on-premise into which we have no visibility via the itinerary – and very limited visibility within the orchestration, since most logic is in Business Rules and my maps happen just before/after port processing.


Tomorrow I will be writing a follow-up blog with a re-cap of the first day of the Integrate 2014 conference. Additionally, I will be playing with this application in my mind and seeing where the things announced this week change the possibilities.

If you’re going to be there tomorrow, be sure to stop by the QuickLearn Training table to sign-up for a chance to win some fun prizes. You can also just stop by to talk about our classes, or about any of the ideas I’ve thrown out here – I welcome both the positive and negative/nitpicky feedback.

Also, make sure you’re following @QuickLearnTrain on Twitter. We’ll be announcing an event that you won’t want to miss sometime in the next few days.

See you at the conference!

– Nick Hauenstein

Upgrading BizTalk Server

By Rob Callaway

In my experience there are two upgrade methods for BizTalk Server environments. You either (1) buy new hardware and rebuild from scratch by installing the latest versions of Windows Server, SQL Server, etc. or (2) perform an “in-place” upgrade where you simply install the new version of BizTalk Server to replace the existing version.

While I’ve personally done (and prefer) the former many times, when recently updating QuickLearn Training’s BizTalk Server classes to BizTalk Server 2013 R2 I decided to give the in-place upgrade a shot. I figured that Windows Server 2012 R2 and SQL Server 2014 weren’t bringing much to the BizTalk table, so keeping the existing SQL Server 2012 SP1 and Windows Server 2012 installations for another year or so would be fine. Additionally, since our courses utilize virtual machines, there are no hardware/software entanglements to consider.

The Plan

  1. Uninstall Visual Studio 2012
  2. Install Visual Studio 2013
  3. Update BizTalk Server 2013 to BizTalk Server 2013 R2
  4. Install all available updates for the computer via Microsoft Update

Let’s get started.

Uninstall Visual Studio 2012

Unless you’re upgrading a development system this step likely isn’t required but in my case the virtual machine is used for QuickLearn Training’s Developer Immersion and Developer Deep Dive courses. Although Visual Studio supports side-by-side installations of multiple versions I opted to remove Visual Studio 2012 since it wasn’t needed for our courses anymore.

This was pretty easy. I went to Programs and Features and chose to uninstall Visual Studio 2012.

Install Visual Studio 2013

Again, this was pretty easy. I simply accepted the default options for installation and walked away for 45 minutes. When I came back I saw this.

VS 2013 Install

Upgrade BizTalk Server 2013 to BizTalk Server 2013 R2

This is where I started feeling nervous. Would it work? Is it really going to be this easy? There was only one way to find out. Before starting the upgrade, I thought about the services used by BizTalk Server and stopped the following services:

  • All the BizTalk host instances
  • Enterprise Single Sign-On Service
  • Rule Engine Update Service
  • World Wide Web Publishing Service

I mounted the BizTalk installation ISO to the virtual machine and ran Setup.exe.

BizTalk Setup.exe

The splash screen! This might actually work!

BizTalk Splash Screen

Product key redacted to protect the innocent.

BizTalk License

Finally, it knows I’m upgrading! I guess I was wrong about needing the Enterprise Single Sign-On Service stopped.

BizTalk Summary

Start ESSO and now we are in business. Hit Upgrade!

BizTalk Upgrade

A few minutes later… boom!

BizTalk Upgrade Complete

Install Other Updates

It’s been a while since we installed updates from Windows Update, so let’s run that.


I’m going to be here forever.

Lessons Learned

This upgrading stuff is a lot easier than I thought it would be. I strongly recommend it and I’ll probably use the same method when updating the Administrator Immersion and Administrator Deep Dive courses.

Exam 70-499 MCSD:ALM Recertification exam prep

By Anthony Borton

To keep the Microsoft Certified Solution Developer: Application Lifecycle Management (MCSD:ALM) certification current you must complete a recertification exam every two years. Since the release of the MCSD:ALM certification, many of our students have taken our TFS courses to help them prepare for the exams.

As the two-year recertification deadline starts to arrive, early charter exam members are facing the task of preparing for the recertification exam. Here are some helpful resources to help you focus your study.

If you do not currently hold the MCSD:ALM certification, you will be required to complete three exams to earn this certification. QuickLearn Training offers great instructor led courses to help you prepare for these exams.

Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns

By Nick Hauenstein

We have just finished a fairly large effort in moving the QuickLearn Training Team blog over, and as a result we had a mix-up with the link for our last post.

This post has moved here: Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns