Integrate 2014 – Final Thoughts

By Nick Hauenstein

The last day at Integrate 2014 started off early with Microsoft IT demonstrating the bottom-line benefits of their early investment in BizTalk Services, and then transitioned into presentations by Microsoft Integration MVPs Michael Stephenson and Kent Weare discussing the cloud in practice and how to choose an integration platform, respectively.

Those last two talks were especially good, and I would recommend giving them a watch once the videos are posted on the Integrate 2014 event channel on the Channel 9 site.

Integration Current/Futures Q&A Panel

At this point, I’m going to stray from the style of my previous posts on Integrate 2014, because I want to take a little bit of time to clarify some things that I have posted, as well as to correct factual errors – given that we’re all learning this stuff at the same time right now. Don’t get me wrong, I don’t at all want to discount the day’s excellent sessions from Integration MVPs and partners; I just believe it’s more important right now to make sure that I don’t leave errors on the table and propagate misunderstanding.

It seemed like throughout the conference, the whole BizTalk team, but Guru especially, was constantly fielding questions and correcting misunderstandings about the new microservices strategy. To that end, he organized an informal ad-hoc panel of fellow team members and Microsoft representatives to put everything out on the table and to answer any question that was kicking around in the minds of attendees about all of the new stuff for the week.

I’m going to let an abbreviated transcript (the best I could manage without recording the audio) do the talking here.

Microservices Is Not the Name of the Product, It’s a Way You Can Build Stuff

Q – We heard about microservices on Wednesday; how come you (to Kannan from MS IT) are going live with MABS when you know that there are changes coming down the pipeline?

A – (Vivek Dali): “A lot of people walked away thinking microservices is the name of the product, and it’s not the name of the product; it’s an architectural pattern that creates the foundation for building BizTalk on top of. There is no product called Microservices that will replace BizTalk Services. BizTalk Services v1.0 had certain functionality – it had B2B, EAI – and there was big demand for orchestration and a rules engine, and what we’re doing is adding that functionality. It does not mean that BizTalk Services v1.0 is dead and we have to re-do it. MS IT is actually one of the biggest customers of BizTalk [Services], and we’re telling them to stay in it, and we’re committing to support them as well as move them as we introduce new functionality over […]. The next step in how MABS is evolving is to a microservices architecture.”

Microsoft Is Committed to a Solid Core for Long-Term Cloud Integration

Q – (Michael Stephenson): I’m kind of one of the people that gives Guru quite the hard time […] About this time last year, I did a really deep dive with you on MABS 1.0 because we were considering when we would use it, what it offered, what the use cases were. At the time, I decided that it wasn’t ready for us yet […]. When we did the SDR earlier this last year, it was quite different at that time […] We were giving the team a lot of feedback on isolation and ease of deployment, and personally my opinion is that I really like the stuff shown this week, you really fielded that feedback. What I’ve seen from where we were a year ago, and from that SDR, personally I’m really pleased.

A – Don’t worry about coming around and telling us what we’re doing wrong — we do value that feedback. We will commit to come back to you as often as we can […].

(Vivek): Here’s how I think about the product: there’s a few fundamental things that we HAVE to get right, and then there’s a feature list. I’m not worried about the feature list right now, I’m worried about what we NEED to get right to last for the next ten years. Don’t worry about how we’ll take the feedback, send us your emails, we value that feedback.

BizTalk Services Isn’t Going Away, It’s Being Aligned to a Microservices Architecture

Q: I had a conversation with Guru outside, which I think is worthwhile sharing with everybody […] I was really confused at the beginning of the session as to how microservices fit in with where we are with BizTalk Server and with MABS 1.0, and where that brings us moving forward. How do the pipelines and bridges map to where we’re going? I was really excited about the workflow concept, but I couldn’t see the link between the workflow and the microservices.

A – (Guru): The flow was that you had to have a receive port and a pipeline, and you would persist it in a Message Box for somebody to subscribe, and that subscriber could be a workflow and a downstream system. That was Server; that continues, and that has been there for 10+ years.

Then there’s a pattern of “I’m receiving something, transforming it, and sending it somewhere,” and in Services that was one entity – and we called that a bridge. It consisted of a receiving endpoint, a sequence of activities, and then routing. This concept was a bridge. If you look at it as executing a sequence of activities, then what you have is a workflow.

The difference between what we were doing then and what we’re doing now is that we’re exposing the pieces of that workflow for external pieces to access. [Paraphrased]

How do we extend those workflow capabilities outside of just a BizTalk Server application? (microservices) [Paraphrased]

I’m (Nick) going to inject this slide from Sameer’s first day presentation where he compared/contrasted EAI in MABS with the microservices model for the same, as it’s incredibly relevant to demonstrating the evolution of MABS:

Sameer compares MABS with microservices architecture for the same

You Don’t Have to Wait for vNext in 2015 to Upgrade Your BizTalk Environment

Q: We’ve got a small but critical BizTalk 2006 installation that we’re upgrading now, or in the very near future. And I was wondering if we should upgrade it to 2013 R2, or should we upgrade it to the next release, and when is the next release?

A – (Guru): This is a scenario where we’re starting from 2006? I would strongly encourage you to move to 2013 R2, for two reasons: one, the support lifecycle, and the other, compatibility with the latest Windows, SQL, SharePoint, etc.

Then, look at what the application is doing. Is it something that needs to be on-prem, or is it something that is adaptable to the cloud architecture, or even if that application is something that could be managed in the cloud? There’s nothing that is keeping you from migrating to 2013 R2 today.

To further drive home Guru’s point here, I’m (Nick) personally going to add in a slide that was shown on the first day, showing the huge investments the BizTalk team has been making into BizTalk Server over the last few versions. Quite a few people see it as not a lot of change on the surface, but this really goes to show just how much change is really there (heck, it took me ~100 pages worth of content just to lightly touch on the changes in 2013, and I’m still working on 2013 R2):

How BizTalk Server 2006 R2 stacks up to BizTalk Server 2013 R2

MABS 1.0 is Production Ready and Already Doing Great Things, You Should Feel Confident to Use It

Q: How do we reassure our customers that are moving to cloud-based integration now, and are seeing MABS now, and are also seeing all the tweets about the next version? Migration tools aren’t the full answer, because there’s still a cost in doing a migration – so how do we convince customers to use MABS now?

A – (Guru): MABS 1.0’s primary market has been EDI, because that was the first workload that we targeted. That’s something that is complete in all aspects. So if you’re looking at a customer that is looking to use MABS for EDI, then I strongly encourage that, because there’s nothing that changes between using EDI in MABS and whatever future implementation we have. [Heavily paraphrased]

(Vivek): Remember MS IT is one of the biggest customers, and it’s not like we’re telling them a different thing than we’re telling you […]. Joking aside, the stuff they’re running is serious stuff, and we don’t want to take a risk, and if there’s not faith in that technology, I don’t want them to take a dependency on it.

Azure Resource Manager Isn’t the New Workflow – But the Engine That It Uses Is

Q: How will Azure Resource Manager fit into this picture?

A – (Vivek): [How does] Azure Resource Manager fit in? Azure Resource Manager is a product whose purpose is installing resources on Azure. It is built on an engine that can execute actions in a long-running fashion, wait for the results to come, and run things in parallel. Azure Resource Manager has a purpose and it will be its own thing, but we’re using the engine. We picked that engine because it’s already running at massive scale, and it was built thinking about how the workload will evolve eventually. It already knows how to talk to different services. We share technologies, but those are two different products.

Microservices ALM Is Partially There and On the Radar, But Is Still a Challenge

Q: What is the ALM story?

A: Support for CI, for example? The workflow piece is a part that we’re still trying to figure out. For the microservices technology part of it, the host that we run on already supports it. One other piece of feedback that came in was “how do I do this for an entire workflow,” and we’ll go figure that out.

Componentizing Early Will Pay Dividends Moving Forward

Q: (Last question) As teams continue to design to the existing platform, we understand the messaging of “don’t worry about microservices quite yet.” As we design systems going forward, is there a better way to do it, keeping in mind how that will fit into the microservices world? For example, componentizing things more, or deciding when to use what kind of adapter. What are the things that we can do to ensure a clean migration?

A – (Vivek): I think there are two kinds of decisions. One is the business decisions (do we need to have it on-premise, etc.) – what stays hybrid vs. what goes to the cloud. We want you to make that decision based on the business; we will have technology everywhere.

There are patterns that you can benefit from. I think componentizing [will be good]. There are design principles that are just common patterns that you should follow (e.g., how you take dependencies).

So that’s where we are in terms of hearing things direct from the source at this point. It’s certainly a lot of information to take in, but I’m really happy to see that the team building the product realizes that, and is actively working on clearing up misconceptions and clarifying the vision for the microservices ecosystem.

Three Shout-Outs

Before I wrap this up, I want to give three shout-outs for content that I more or less glossed over and/or omitted.

  • Stott Creations is doing great things, and I have to hand it to the HIS team for being so intimately involved in not only helping a customer, but helping a partner look good while helping that customer. In addition to that – the Informix adapter looks awesome, and I’m really digging the schema generation from the custom SQL query; that was a really nice touch.

Paul Larsen Presents Features of the BizTalk Adapter for Informix

  • Sam Vanhoutte’s session touched on a not too often discussed tension between what the cloud actually brings in terms of development challenges, and what customers are trying to get out of it. While he was presenting in terms of how Codit addresses these customer asks by dealing with the constant change and risk on their customers’ behalf, these are all still valid points in general. I think he did a great job at summing it up nicely in these two slides:

Challenges – Constant Change / Multi-tenancy / Roadmap / DR Planning

  • Last, but certainly not least, I want to give a shout-out and huge thanks to Saravana and the BizTalk 360 team for making the event happen. They also really took one for the team today, as Richard Broida pointed out – ensuring that everyone would have time to share on a jam-packed day. The execution was spot-on for a really first-class event.

To Microsoft: Remember That BizTalk Server Connects Customers To The Cloud

As a final thought from the Integrate 2014 event: We’re constantly seeing Microsoft bang the drum of “Azure, azure, azure, cloud, cloud, cloud…” Personally, I love it, I fell in love with Azure in October of 2008 when Steve Marx showed up on stage at PDC and laid it all out. However, what we can’t forget, and what Microsoft needs to remember is that any customer bringing their applications to the cloud is doing integration – and Microsoft’s flagship product for doing that integration, better than any other, is BizTalk Server.

BizTalk Server is how you get customers connected to the cloud – not in a wonky disruptive way – but in a way that doesn’t necessarily require that other systems bend to either how the cloud works, or how BizTalk works.

It’s a Good Day To Be a BizTalk Dev

These are good times to be a developer, and great times to be connected into the BizTalk Community as a whole. The next year is going to open up a lot of interesting opportunities, as well as empower customers to take control of their data (wherever it lives) and make it work for them.

I’m out for now. If you were at the conference, and you want to stick around town a little bit longer, I will be teaching the BizTalk Server Developer Deep Dive class over at QuickLearn Training headquarters in Kirkland, WA this coming week. I’d love to continue the discussion there. That being said, you can always connect live to our classes from anywhere in the world as well!

Integrate 2014 Day 2 in Review

By Nick Hauenstein

I’m going to start off today’s post with some clarifications/corrections from my previous posts.

First off – it is now my understanding that the “containers” in which the Microservices will be hosted and executed are simply a re-branding of the Azure Websites functionality that we already have. This has interesting implications for the Hybrid Connections capability as well – inasmuch as our Microservices essentially inherit the ability to interface directly with on-premise systems as if they were local.

This also brings clarity to the “any language” remark from the first day. In reality, we’re looking at building them in any language supported by Azure Websites (.NET languages, Java, PHP, Node.js, Python) – or truly any language if we host the implementation externally but expose a facade through Azure Websites (at the expense of egress, added latency, loss of auto-load balancing and scale), but I digress.

UPDATE (05-DEC-2014): There are actually some additional clarifications now available here; please read them before continuing. Most importantly, there is no product called the Azure BizTalk Microservices Platform – it’s just a new style in which Microsoft is approaching building out and componentizing integration (and other) capabilities within the Azure Platform. Second, Azure Resource Manager is a product that sits on top of an engine. The engine is what’s being shared with the new Workflow capability discussed – not the product itself. You could say it’s similar to how workflow services and TFS builds use the same underlying engine (WF).

The rest of the article remains unchanged because there are simply too many places where the name was called out as if it were a product.

Rules Engine as a (Micro)Service

After a long and exciting day yesterday, day 2 of Integrate 2014 got underway with Anurag Dalmia bringing the latest thinking around the re-implementation of the BizTalk Business Rules Engine that is designed to run as a Microservice in the Azure BizTalk Microservices Platform.

Anurag Dalmia presents the Rules Engine Design Principles 

First off, this is not the existing BizTalk Rules Engine repackaged for the cloud. This is a complete re-implementation designed for cloud execution and with the existing BRE pain points in mind. From the presentation, it sounds as if the core engine is complete, and all that remains is a new Azure Portal-based design experience (which currently only exists in storyboard form) around designing vocabularies, rules, and policies for the engine.

Currently the (XML-based, not JSON!) vocabularies support:

  • Constant & XML based vocabulary definitions
  • Single value, range and set of constants
  • XML vocabulary definitions (created from uploaded schema)
  • Bulk Generation (no details were discussed for this, but I’d be very interested in seeing what that will look like)
  • Validation

Vocabulary Design Experience in Azure BizTalk Microservices

Missing from the list above are really important things like .NET objects and database tables, but these are slated for future inclusion. That being said, I’m not sure how exactly custom .NET classes as facts are going to work in a microservices infrastructure, assuming that each microservice is an independent, isolated chunk of functionality invoked via RESTful interactions. Really, the question becomes: how does it get your .dlls so that it can Activator.CreateInstance that jazz? I guess if schema upload can be a thing there, then .dll upload can as well. But then, are these stored in private Azure blob containers, some other kind of repository, or should we even care?
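For what it’s worth, the raw .NET mechanic in question is simple enough – the hard part is the hosting story, not the reflection. Here’s a minimal sketch of what I mean, with every name below being my own invention rather than anything from the product:

using System;
using System.Reflection;

// Hypothetical sketch only: how a cloud-hosted rules engine *could* hydrate
// a custom .NET fact from an uploaded assembly (e.g., pulled from blob
// storage). None of these names come from the actual product.
public static class FactLoader
{
    public static object CreateFact(byte[] uploadedAssembly, string factTypeName)
    {
        // Load the uploaded .dll from its raw bytes
        Assembly assembly = Assembly.Load(uploadedAssembly);

        // Resolve the fact type by name and instantiate it
        Type factType = assembly.GetType(factTypeName, throwOnError: true);
        return Activator.CreateInstance(factType);
    }
}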

On the actual rules creation side, things become quite a bit more interesting. Gone is the painful million-click Business Rule Composer – instead, free-flowing text takes its place. All of this is still happening in a web-based editor that also provides Intellisense-like functionality, tool-tips, and color-coding of special keywords. To get a sense for what these rules look like, here’s one rule that was shown:

If (Condition)

ClaimAmount is greater than AutoApprovalLimit OR
TreatmentID is in SpecialTreatmentIDs

Then (Action)

ClaimStatus equals "Manual Approval Required"
ClaimStatesReason equals "Claim sent for Manual Approval"
Halt

Features of the Rules Engine were said to include:

  • Handling of optional XML nodes
  • Enable/Disable Rules
  • Rule prioritization through drag-and-drop
  • Support for Update / Halt Forward Chaining (No Assert?)
  • Test Policy (through Web UI, or via Test APIs)
  • Schema Management

I’m not going to lie: at that point, I got really concerned with no declared ability to Assert new facts (or to Retract facts, for that matter), and I’m hoping that this was a simple omission from the slide, but I do intend to reach out for clarification there.

Storyboard for the Web-based Test Policy UI

Building Connectors and Activities

After the session on the Rules Engine, Mohit Srivastava was up to discuss Building Connectors and Activities. The session began, however, with a recap of some of the things that Bill Staples discussed yesterday morning. I’m actually really thankful for this recap, as I had missed some things along the way (namely Azure Websites as the hosting container), and I also had a chance to snap a picture of what is likely the most important slide of the entire conference (which I had missed getting a picture of the first time around).

Microservices are part of refactored App Platform with integration at the core

I’ve re-created the diagram of the “refactored” Azure App Platform with a few parenthetical annotations:

Re-factored Azure App Platform

One interesting thing about this diagram, when you really think about it, is that the entry point (for requests coming into stuff in the platform) doesn’t have to be from the top down. It can be direct to a capability, or to a process, or to a composed set of capabilities, or to a full human-friendly UI around any one of those things.

So what are all of the moving pieces that will make it all work?

  1. Gallery for Microservice Discovery
    • Some Microservices will be codeless (e.g., SaaS and On-premises connectors)
    • Others will be code (e.g. activities and custom logic)
  2. Hosting – Azure App Container (formerly Azure Websites)
  3. Gateway
    1. Security – identity broker, SSO, secure token store
    2. Runtime – name resolution, isolated storage, shared config, “IDispatch” on WADL/Swagger (though such metadata is technically optional)
    3. Proxy – Monitoring, governance, test pages
      • Brings all of the value of API management to the gateway out-of-the-box
  4. Developers
    • Writing RESTful services in your language of choice.

To further prove just exactly what a Microservice is, he demoed a sample service starting from just the raw endpoint. You can even look for yourselves here:

What’s really cool about all of this is that the tooling support for building such services is going to be baked into Visual Studio. We already have Web API for cleanly building out RESTful services, but the ability to package these with metadata and publish to the gallery (a la NuGet) is going to be included as part of a project template and Publish Web experience. This was all shown in storyboard form, and that’s when I had my moment of developer happiness (much like Nino’s yesterday, as he gained reprieve from crying over BizTalk development pain points when first using the productivity tool that he developed).

Publish Web Experience for BizTalk Microservices built using Web API

Finally, we’re getting low enough into the platform that we’re inside Visual Studio and can meaningfully deploy some code – one of the greatest feelings in the whole world.
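To make that concrete, here’s roughly what the raw endpoint side of such a microservice could look like with today’s Web API and OWIN self-host bits – a sketch of my own, not code from the session, hosted in a console locally where the platform would presumably supply the Azure App Container:

using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

// A minimal RESTful "microservice": one small capability behind one route.
// The names and the trivial validation capability are illustrative only.
public class ValidateController : ApiController
{
    // POST api/validate – report whether the posted value is non-empty
    [HttpPost, Route("api/validate")]
    public IHttpActionResult Post([FromBody] string value)
    {
        return Ok(new { IsValid = !string.IsNullOrWhiteSpace(value) });
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();
        config.MapHttpAttributeRoutes(); // pick up the [Route] attributes
        app.UseWebApi(config);
    }
}

public class Program
{
    public static void Main()
    {
        // Console host for local testing; in the platform the same
        // controller would run inside the rebranded Azure Websites container
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            System.Console.WriteLine("Listening on http://localhost:9000/");
            System.Console.ReadLine();
        }
    }
}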

The talk continued with fragments of code (that, unfortunately, were too blurry in my photos to capture here) demonstrating the runtime API that Microservices will have direct access to in order to do things like encrypted isolated storage, and a mechanism to manage and flow tokens for external SaaS products that are used within a larger workflow. There’s some really exciting stuff here. I honestly could have sat through an entire day of that session just going all the way into it.
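Since my photos failed me, the best I can offer is the shape of what was described – treat this as entirely hypothetical, right down to the member names:

// Entirely hypothetical sketch of the described runtime surface; the real
// API wasn't legible in my photos. It only conveys the capabilities that
// were discussed: per-microservice encrypted isolated storage, plus
// brokered tokens for external SaaS products used in a larger workflow.
public interface IMicroserviceRuntime
{
    // Encrypted, isolated storage scoped to this microservice
    void PutSecret(string key, string value);
    string GetSecret(string key);

    // A token previously granted for an external SaaS connector
    string GetSaaSToken(string connectorName);
}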

But, alas, there were still more sessions to be had.

API Management and Mobile Services

I’m grouping these together inasmuch as they represent functionality within Azure that we have had for some amount of time now (Mobile Services certainly longer than API Management). I’ve seen quite a bit on these already, and was mainly looking for the touchpoints with the Microservices story.

API Management sits right under Microservices in the diagram shown earlier, and it would make sense for it to become the monetization strategy for developers that want to write/expose a specific capability within Azure. However, that wasn’t explicitly stated; in fact, the only direct statement we had was the one above, where we saw that the capabilities of API Management are available within the gateway. That left me a little confused, and I honestly could have missed something obvious there. As much as Josh was fighting PowerPoint, I was fighting my Surface towards the beginning of his talk:

Fighting my Surface at the beginning of Josh Twist's talk on API Management

If you’re not familiar with API Management, it provides the ability to put a cloud-hosted wrapper around your API and project it (in the data shaping sense) to choose carefully the exposed resources, actions, and routes through which they can be accessed. It handles packaging your APIs into saleable subscriptions and monitoring their use. That’s a gross oversimplification, and I highly recommend that you dig in right away and explore it because there’s a lot there, and it’s super cool.

That being said, in terms of Microservices, it would be truly great if we could use API Management to wrap external services and then turn the Azure-hosted portion of the API into a Microservice, in such a way that we could even flow back to our external service some of the same information available through the APIs we would have if we were writing within a proper Azure App Container. For example, to be able to request a certain value from the secure store to be passed in a special HTTP header to our external service – which could then use that value in any way it wanted. That would really help speed adoption, as I could quite easily then take any BizTalk Server on-premise capability, wrap a nice RESTful endpoint around it, and not have to worry about authorization, rate limiting, or re-implementation.

Next up was Kirill Gavrylyuk, rocking Xamarin Studio on a Mac to talk about Mobile Services (he even went for a hat-trick and launched an Android emulator). He actually did feature a slide towards the end of his talk showing the enterprise/non-consumer-centric Mobile Services development experience by positioning Mobile Services within the scope of the refactored Azure App Platform:

Mobile Services in light of Refactored App Platform

I’m going to let that one speak for itself for now.

Those two talks were a lot of fun, and I don’t want to sell them short by not writing as much, but there’s certainly a lot of information already out there on these.

Big Data With Azure Data Factory & Power BI

The day took a little bit of a shift after lunch as we saw a few talks on both Azure Data Factory and Power BI. In watching the demos, and seeing those talks, it’s clear that there’s definitely some really exciting stuff there. Sadly, I’m already out-of-date in that area, as there were quite a few things mentioned that I was entirely unaware of (e.g., Azure Data Factory itself). For now, I’ll leave any coverage of those topics to the BI and Big Data experts – which I will be the first to admit is not me. I don’t think in more than 4 dimensions at a time – though with Power BI maybe all I need to know how to do is to speak English.

For all of those out there that spend their days writing MDX queries, I salute you. You deserve a raise, no matter what you’re being paid.

HCA Rocks BizTalk Server 2013 R2

For the last talk of the day, Alan Scott from HCA and Todd Rivers from Microsoft presented on HCA’s use of BizTalk Server 2010 & 2013 R2 for processing HL7 (and MSMQ + XML) workloads. The presentation was excellent, and it’s going to be really difficult to capture it here. One of the most impressive things (besides their own web-based rules editing experience) is the sheer scale of the installation:

HCA Rocks BizTalk Server 2013 R2

Cultural Change Reaps Biggest Rewards – Value People Not Software

The presentation really highlighted not only the flexibility of the BizTalk platform, but also the power of having a leader that is able to evangelize the capability to the business – while being careful to talk not in terms of the platform, but in terms of the people and the data, and while equipping the developers with the tools they will need to succeed with that platform.


Looking Forward

Looking forward beyond today, I’m getting really excited to see the direction that we’re headed. We still have a rock solid platform on-premise alongside a hyper-flexible distributed platform brewing in the cloud.

To that end, I actually want to announce today that QuickLearn Training will be hosting an Azure BizTalk Microservices Hackathon shortly after the release of the public preview. It will be a fun time to get together and look through it all together, to discuss which microservices will be valuable, and most of all to build some together that can provide value to the entire community.

If any community is up for that, I know it’s the BizTalk community. I’m just really excited that there’s going to be a proper mechanism to surface those efforts so that anyone who builds for the platform will have it at their disposal without worries.

If you want more details, or you want to join us (physically, or even remotely) when that happens, head over here: http://bit.ly/1AcMWIy

For that matter, if you want to host one in your city at the same time and connect up with us here in Kirkland, WA via live remote feed, that would be great too ;-) Let’s build the future together.

Well, that’s all for now! Take care!

BizTalk Microservices: A Whole New World – Or Is It?

By Nick Hauenstein

NOTE: I’m still processing all of the information I took in from Day 1 at the Integrate 2014 conference here in Redmond, WA. This post represents a summary of the first day with some thoughts injected along the way.

This morning was a morning of great changes at Integrate 2014. It kicked off with Scott Guthrie presenting the keynote session without his characteristic red shirt – a strange omen indeed. He brought the latest news from the world of Azure and touted the benefits of the cloud alongside the overall strategy and roadmap.

ScottGu Blue Shirt

After his presentation concluded, Bill Staples (unfortunate owner of the bills@microsoft.com email address) took the stage and presented the new vision for BizTalk Services.

Introducing Azure BizTalk Microservices

NOTE: Since there are a lot of places linking directly to this post, I have made factual changes in light of new information found here.

Microsoft Azure BizTalk Services, still very much in its 1.0 iteration, is undergoing a fundamental change. Instead of providing the idea of a bridge tied together with other bridges in an itinerary, the actual bridge stages themselves – the raw patterns – are being extracted out and exposed individually, in alignment with a microservices-style architecture.

In reality, this re-imagination of BizTalk Services won’t really be a separate Azure offering – in fact, it’s more like the BizTalk capabilities are being exposed as first-class capabilities within the core Azure Platform. Every developer that leverages Azure in any way could choose to pull in (and pay for) only the specific capabilities they need – and at the same time, that same developer has a framework that allows them to build their own microservices and deploy them to a platform that enables automatic scaling & load balancing, and also provides monetization opportunities.

Bill Staples Presents Microservices

The following BizTalk features were presented as candidates for implementation in the form of microservices.

  • Validation
  • Batching/Debatching
  • Format Conversion (XML, JSON, FlatFile) – i.e., Translation
  • Extract
  • Transform
  • Mediation Patterns (Request Response / One Way)
  • Business Rules
  • Trading Partner Management
  • AS2 / X12 / EDIFACT

It definitely all sounds familiar. I remember a certain talk with Tony Meleg at the helm presenting a similar concept a few years back. This time, it looks like it has legs in a big way – inasmuch as it actually exists, even if only in part – with a public preview coming in Q1 2015.

So What Are Microservices Anyway?

Microservice architecture isn’t a new thing in general. Netflix is known for a very successful implementation of the pattern. No one has commented to me about the previous link regarding Netflix’s implementation. Read it, understand it, and then you can have a prophetic voice in the future as you are able to anticipate specific challenges that can come up when using this architecture – although Microsoft’s adoption of Azure Websites as the hosting container can alleviate some of these concerns outright.  Martin Fowler says this as his introduction to the subject:

The term “Microservice Architecture” has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services. While there is no precise definition of this architectural style, there are certain common characteristics around organization around business capability, automated deployment, intelligence in the endpoints, and decentralized control of languages and data.

Fowler further features a sidebar that distances microservice architecture from SOA in a sort of pedantic manner – that honestly, I’m not sure adds value. There are definitely shades of SOA there, and that’s not a bad thing. It also adds value to understand the need for different types of services and to have an ontology and taxonomy for services (I’m sure my former ESB students have all read Shy’s article, since I’ve cited it to death over the years).

Yeah, But How Are They Implemented?

Right now, it looks like microservices are going to simply be code written in any language* that exposes a RESTful endpoint that provides a small capability. They will be hosted in an automatically scaled and load balanced execution container (not in the Docker sense, but instead Azure Websites rebranded) on Azure. They can further be monetized (e.g., you pay me to use my custom microservice), and tied together to form a larger workflow.

Azure BizTalk Microservices Workflow Running in the Azure Portal

Yes, I did just use the W word, but surprisingly it’s NOT the WF word. XAML has no part in the workflows of BizTalk vFuture. Instead, we have declarative JSON workflows seemingly based on those found in Azure Resource Manager. That is, they share the same engine that Azure Resource Manager uses under the covers, because that engine was already built for cloud scale and has certain other characteristics that made it a good candidate for microservice composition and managing long-running processes. They can be composed in the browser, and as shown in the capture above, they can also be monitored in the browser as they execute live.

Workflow Designer

The workflow engine calls each service along the path, records execution details, and then moves along to the next service with the data required for execution (which can include the output of any previous step):

JSON Workflows -- Check the Namespace, we have Resource Manager in play
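Mechanically, you can think of that execution loop as something like the following – my own illustration of the pattern, emphatically not the actual engine:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

// Bare-bones illustration of sequential composition: POST the current
// payload to each step's RESTful endpoint, record the output, and hand
// it to the next step as input.
public class SequentialWorkflow
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<string> RunAsync(IEnumerable<string> stepUrls, string input)
    {
        var trackingData = new List<string>();
        string current = input;

        foreach (string url in stepUrls)
        {
            HttpResponseMessage response =
                await Client.PostAsync(url, new StringContent(current));
            response.EnsureSuccessStatusCode();

            // Each step's output becomes the next step's input
            current = await response.Content.ReadAsStringAsync();
            trackingData.Add(url + " => " + current); // rich tracking data
        }

        return current;
    }
}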

Further, the workflow engine has the following characteristics:

  • Supports sequential or conditional control flow
  • Supports long-running workflows
  • Can start, pause, resume, or cancel workflow instances
  • Provides message assurance
  • Logs rich tracking data

I’m really keen on seeing how long-running workflow is a thing when we’re chaining RESTful calls (certainly we don’t hold the backchannel open for a month while waiting for something to happen) – but I may be missing something obvious here, since I just drank from the fire hose that is knowledge.
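If I had to guess, the answer is the standard REST one: a long-running step immediately returns 202 Accepted plus a status resource, and the engine polls (or takes a callback) with workflow state persisted in between, rather than holding any connection open. Purely my speculation, sketched:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

// Speculative sketch of the usual 202-plus-polling pattern for a
// long-running step; nothing here comes from the actual engine.
public static class LongRunningStep
{
    public static async Task<string> AwaitCompletionAsync(
        HttpClient client, string statusUrl)
    {
        while (true)
        {
            HttpResponseMessage status = await client.GetAsync(statusUrl);
            if (status.StatusCode != HttpStatusCode.Accepted)
            {
                // Anything other than 202 means the step finished (or failed)
                status.EnsureSuccessStatusCode();
                return await status.Content.ReadAsStringAsync();
            }
            await Task.Delay(TimeSpan.FromMinutes(1)); // back off, re-poll
        }
    }
}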

What does the Designer Look Like?

The designer supports the idea of pre-baked workflow templates for common integrations:

  • SurveyMonkey to Salesforce
  • Copy Dropbox files to Office 365
  • “When Facebook profile picture…”
  • Add new leads to newsletter – Salesforce + Mailchimp
  • Alert on Tweet Workflow – Twitter + Email
  • Download photos you’re tagged in – Facebook + Dropbox
  • Tweet new RSS articles
  • Twitter + Salesforce (?)

However, it also provides for custom workflows built from microservices composed within a browser-based UI. It was presented as if it were going to be a tool that Business Analysts would ultimately use, but I’m not sure if that’s going to be the case up front, or even down the line.

Workflow Designer in BizTalk vFuture

These workflows will be triggered by something. Triggers shown in the UI and/or on slides included (but weren’t necessarily limited to):

  • FTP File Added
  • Any tweet
  • A lot of tweets
  • Recurring schedule
  • On-demand schedule
  • Any Facebook post
  • A lot of Facebook posts

In terms of the microservices actually seen in the UI, we saw the following (and we likely would have seen more had the presenters scrolled down):

  • Validate
  • Rules
  • Custom filter
  • Send an Email
  • SendResponse
  • SurveyMonkey
  • Custom API
  • Custom map
  • Create digest
  • Create multi-item digest
  • XML Transform
  • XML Validate
  • Flat File Decode
  • XML XPath Extract
  • Delete FTP File
  • Send To Azure Table
  • Add Salesforce leads

The tool is definitely pretty, and it was prominently featured in demos for talks throughout the day – even though quite a few pieces of functionality were shown in the form of PowerPoint Storyboards.

So How Do We Map EAI Concepts to This New Stuff?


Well, we have special entities within this world called Connectors. They are our interface to the outside world. Everything else within the world of the original MABS 1.0 and even BizTalk Server is seen as simply a capability that could be provided by a microservice.

So That’s the Cloud, What’s On-Prem?

In the future – not yet, but at some point – we will see this functionality integrated into the Azure Pack alongside all of the other Azure goodness that it already brings to your private cloud. But remember, this is all still in the very beginning stages. We’ve yet to hear much about how really critical concerns like debugging, unit testing, or even team development, differencing / branching / merging / source control in general are going to be handled in a world where you’re building declarative stuff in a browser window.

So that’s all fine and good for the future, but what about my BizTalk Server 2013 R2 stuff that I have right now? Well keep doing great things with that, because BizTalk Server isn’t going away. There’s still going to be a new major version coming every 2 years with minor versions every other year, and cumulative updates every 3 months.


What about my investments in Azure BizTalk Services 1.0? Well it’s not like Microsoft is going to pull the plug on all of your great work that you’re actively paying them to host. That’s monies they are still happy to take from you in exchange for providing you a great service, and they will continue to do so under SLA – it’s a beautiful thing.

Also, if you’re moving to the new way of doing things, your schemas and maps remain unchanged, they will move forward in every way. However, you will see a new web-based mapping tool (which I simply lack the energy at the moment to discuss further for this post).

However, future investment in that model is highly unlikely based on everything announced today. I’m going to let this statement stand, because it was opinion at the time I wrote it. That being said, read this post before formulating your own.

The Old New Thing

I hate to keep coming back to patterns here, but I find myself in the same place. I will soon have yet another option available within the Microsoft stack for solving integration challenges (however, this time it’s not a separate offering, it is part of the core stack). At the same time, the problems being solved are the same, and we still can apply lessons learned moving forward. Also, integration problems are presenting themselves to a larger degree in a world of devices, APIs everywhere, and widely adopted SaaS solutions.

It’s an exciting time to be a BizTalk Developer – because after today, every developer became a BizTalk Developer – it’s part of the core Azure stack, every piece of it. For those that have been around the block a few times, the wisdom is there to do the right things with it. For those who haven’t, a whole new world has just opened up.

That’s all for now. I need some sleep before day 2. :-)

* With regards to the “any language” comment – that was the statement, but there was a slide at one point that called out very specifically “.NET, Java, Node.js, PHP” as potential technologies there, so take it with a grain of salt. It looks like the reason for that is that we’re hosting our microservices in an Azure Websites hosting container that has been rebranded.

** Still waiting for some additional clarification on this point, I will update if my understanding changes. Updated in red.

Integrate 2014 Starting Up

By Nick Hauenstein

Tomorrow morning, the Integrate 2014 conference revs up here in Redmond, WA with Microsoft presenting the vision for the next versions of BizTalk Server and Microsoft Azure BizTalk Services.

I’m getting pretty excited as I’m looking over the first day agenda to see what to expect from the talks and sessions like Business Process Management and Enterprise Application Integration placed in between talks about BizTalk Services and On-Premise BizTalk Server. I’m even more excited seeing the second day agenda and seeing talks like Business Rules Improvements, and especially Building Connectors and Activities.

What’s Keeping Me Busy

In anticipation of this event, I have been building out a hybrid cloud integration sample application that leverages some of the ideas laid out in my previous post regarding cloud integration patterns while also providing the content for the long overdue next piece in my new features series for BizTalk Server 2013 R2.

In the sample, I’m running requests through MABS for content-based routing based on an external SQL lookup, with the requests targeting either an on-premise SQL Server database or a BizTalk Server instance for further processing that can’t otherwise be done currently with BizTalk Services (namely Business Rules Evaluation and Process Orchestration).

I’m hoping that after the conference this week, I will be able to tear the sample apart and build out most (if not all) of the elements in the cloud.

Best Bridge Between Azure and On-Prem?

Along the way, I’ve been further playing with using Service Bus Topics/Subscriptions as the Message Box of the cloud. At the same time, it represents a nice bridge between BizTalk Services and On-Premise BizTalk Server.

Consider the itinerary below:

[Itinerary diagram: first draft – requests routed either to an on-premise database or to BizTalk Server via Service Bus Relay]

This was actually the first draft of prototyping out the application. What this represents is an application that is receiving requests from client applications that have been published to a service bus topic. As they are received, they are routed based on content and sent either to a database to be recorded (as an example of an on-premise storage target), or to BizTalk Server 2013 R2 for further processing through a Service Bus Relay.

Given the scenario, maybe a relay is appropriate – lower latency requirement, no requirement for durability (which we have already sacrificed by running it through an initial bridge).

However, maybe we want to take a more holistic approach, and assume that the cloud is giving us a free public endpoint and some quite powerful content-based routing, translation, and even publish-subscribe capability when we bring Azure Service Bus to the mix. Let’s further assume that we view these capabilities as simply items in our toolbox alongside everything that BizTalk Server 2013 R2 is already providing us.

Let’s make it more concrete. What if the on-premise processing ultimately ends up sending the message back to the TradesDb? Is there a potential waste in building out that portion of the process in both locations?

Service Bus is the New Hybrid Integration MessageBox

Let’s try this instead:

[Itinerary diagram: revised draft – a Service Bus Topic replaces the Relay for the call out to BizTalk Server]

Here, instead of using a relay, we’re using a Service Bus Topic to represent the call out to BizTalk Server 2013 R2.

Why would we do this? While it introduces slightly more latency (in theory – though I haven’t properly tested that theory), it becomes pure loosely coupled pub-sub. I’m going to go out on a limb here (and risk being called an architecture astronaut), and say that not only is that not a bad thing, but it might even be a good idea. Feeding a topic rather than directly submitting to BizTalk Server via Relay allows us to easily swap out the processing of the rules with any mechanism we want, at any time – even if it means that we will have to transform the message and submit it by a completely different transport. Maybe one day we will be able to replace this with a call to a Rules Engine capability within MABS (crossing my fingers here), if we see such a capability come.
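For the curious, wiring up that topic with a filtered subscription is only a few lines against the current Service Bus SDK (Microsoft.ServiceBus.Messaging). The names and the TradeType property below are mine, invented for illustration:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Sketch of the topic-as-MessageBox idea: a "trades" topic plus a filtered
// subscription standing in for a BizTalk-style subscription.
public class TradeTopicSample
{
    public static void PublishTrade(string connectionString)
    {
        var ns = NamespaceManager.CreateFromConnectionString(connectionString);

        // One-time setup: topic + subscription with a SQL filter over the
        // message properties (playing the role of promoted properties)
        if (!ns.TopicExists("trades"))
            ns.CreateTopic("trades");
        if (!ns.SubscriptionExists("trades", "rules-processing"))
            ns.CreateSubscription("trades", "rules-processing",
                new SqlFilter("TradeType = 'Equity'"));

        // Publish a message carrying routable properties; note that a real
        // BizTalk consumer (SB-Messaging adapter) would want the body
        // written as a raw stream rather than a serialized string
        var client = TopicClient.CreateFromConnectionString(connectionString, "trades");
        var message = new BrokeredMessage("<Trade>...</Trade>");
        message.Properties["TradeType"] = "Equity";
        client.Send(message);
    }
}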

Further, we have broken out the logging of trades to the database into its own separate miniature process alongside the rest. This one might be fed by messages generated by BizTalk Server 2013 R2 on-premise, or by the earlier processing in MABS – providing only a single implementation of the interface to manage and change.

Is It The Right Way™?

I don’t know. It feels kind of right. I can see the benefits there, but again, we are potentially making a sacrifice by introducing latency to pay for a loosely coupled architecture. Call me out on it in the comments if that makes you unhappy. Ultimately, this would have to simply become a consideration that is weighed in choosing or not choosing to do things this way.

What Could Make it Even Better?

Imagine a time when we could build out a hybrid integration using a design surface that makes it seamless – one where I could quickly discover the rest of the story. Right now there’s quite a bit happening on-premise into which we have no visibility via the itinerary – and very limited visibility within the orchestration, since most logic is in Business Rules and my maps happen just before/after port processing.

Tomorrow

Tomorrow I will be writing a follow-up post with a recap of the first day of the Integrate 2014 conference. Additionally, I will be playing with this application in my mind and seeing where the things announced this week change the possibilities.

If you’re going to be there tomorrow, be sure to stop by the QuickLearn Training table to sign-up for a chance to win some fun prizes. You can also just stop by to talk about our classes, or about any of the ideas I’ve thrown out here – I welcome both the positive and negative/nitpicky feedback.

Also, make sure you’re following @QuickLearnTrain on Twitter. We’ll be announcing an event that you won’t want to miss sometime in the next few days.

See you at the conference!

– Nick Hauenstein

Upgrading BizTalk Server

By Rob Callaway

In my experience there are two upgrade methods for BizTalk Server environments. You either (1) buy new hardware and rebuild from scratch by installing the latest versions of Windows Server, SQL Server, etc. or (2) perform an “in-place” upgrade where you simply install the new version of BizTalk Server to replace the existing version.

I’ve personally done (and prefer) the former many times, but while recently updating QuickLearn Training’s BizTalk Server classes to BizTalk Server 2013 R2, I decided to give the in-place upgrade a shot. I figured that Windows Server 2012 R2 and SQL Server 2014 weren’t bringing much to the BizTalk table, so keeping the existing SQL Server 2012 SP1 and Windows Server 2012 installations for another year or so would be fine. Additionally, since our courses utilize virtual machines, there are no hardware/software entanglements to consider.

The Plan

  1. Uninstall Visual Studio 2012
  2. Install Visual Studio 2013
  3. Update BizTalk Server 2013 to BizTalk Server 2013 R2
  4. Install all available updates for the computer via Microsoft Update

Let’s get started.

Uninstall Visual Studio 2012

Unless you’re upgrading a development system, this step likely isn’t required, but in my case the virtual machine is used for QuickLearn Training’s Developer Immersion and Developer Deep Dive courses. Although Visual Studio supports side-by-side installations of multiple versions, I opted to remove Visual Studio 2012 since it wasn’t needed for our courses anymore.

This was pretty easy. I went to Programs and Features and chose to uninstall Visual Studio 2012.

Install Visual Studio 2013

Again, this was pretty easy. I simply accepted the default options for installation and walked away for 45 minutes. When I came back I saw this.

VS 2013 Install

Upgrade BizTalk Server 2013 to BizTalk Server 2013 R2

This is where I started feeling nervous. Would it work? Is it really going to be this easy? There was only one way to find out. Before starting the upgrade, I thought about the services used by BizTalk Server and stopped the following services:

  • All the BizTalk host instances
  • Enterprise Single Sign-On Service
  • Rule Engine Update Service
  • World Wide Web Publishing Service
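If you’d rather script that step than click through services.msc, something like the following works – though verify the service names against your own environment first, since host instances show up as Windows services named “BTSSvc$<HostName>”:

using System.ServiceProcess; // reference System.ServiceProcess.dll

// Stops the BizTalk host instances plus the supporting services listed
// above. Run elevated; service names assume default installations.
class StopBizTalkServices
{
    static void Main()
    {
        foreach (ServiceController svc in ServiceController.GetServices())
        {
            bool hostInstance = svc.ServiceName.StartsWith("BTSSvc$");
            bool supporting = svc.ServiceName == "ENTSSO"                  // Enterprise Single Sign-On
                           || svc.ServiceName == "RuleEngineUpdateService" // Rule Engine Update
                           || svc.ServiceName == "W3SVC";                  // World Wide Web Publishing

            if ((hostInstance || supporting) &&
                svc.Status == ServiceControllerStatus.Running)
            {
                svc.Stop();
                svc.WaitForStatus(ServiceControllerStatus.Stopped);
            }
        }
    }
}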

I mounted the BizTalk installation ISO to the virtual machine and ran Setup.exe.

BizTalk Setup.exe

The splash screen! This might actually work!

BizTalk Splash Screen

Product key redacted to protect the innocent.

BizTalk License

Finally, it knows I’m upgrading! I guess I was wrong about needing the Enterprise Single Sign-On Service stopped.

BizTalk Summary

Start ESSO and now we are in business. Hit Upgrade!

BizTalk Upgrade

A few minutes later… boom!

BizTalk Upgrade Complete

Install Other Updates

It’s been a while since we installed updates from Windows Update, so let’s run that.

Updates

I’m going to be here forever.

Lessons Learned

This upgrading stuff is a lot easier than I thought it would be. I strongly recommend it and I’ll probably use the same method when updating the Administrator Immersion and Administrator Deep Dive courses.

Exam 70-499 MCSD:ALM Recertification Exam Prep

By Anthony Borton

To keep the Microsoft Certified Solution Developer: Application Lifecycle Management (MCSD:ALM) certification current you must complete a recertification exam every two years. Since the release of the MCSD:ALM certification, many of our students have taken our TFS courses to help them prepare for the exams.

As the two-year recertification deadline starts to arrive, early charter exam members are facing the task of preparing for the recertification exam. Here are some helpful resources to help you focus your study.

If you do not currently hold the MCSD:ALM certification, you will be required to complete three exams to earn this certification. QuickLearn Training offers great instructor led courses to help you prepare for these exams.

Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns

By Nick Hauenstein

We have just finished a fairly large effort in moving the QuickLearn Training team blog over to http://www.quicklearn.com/blog; as a result, we had a mix-up with the link for our last post.

This post has moved here: Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns

Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns

By Nick Hauenstein

Earlier this week, I started a series of discussions with the development team here at QuickLearn Training. These discussions included John Callaway, Rob Callaway, and Paul Pemberton, and centered around best practices for development of Microsoft Azure BizTalk Services integrations. The topic arose as we were working on our upcoming Microsoft Azure BizTalk Services course.

Best practices are always a strange topic to address for a technology that is relatively young, and at the same time rapidly evolving. However, the question comes up in both classes and consulting engagements – no matter what the technology.

With regards to BizTalk Server, we have years’ worth of real-world experience from which to pull both personally, and from our customers’ narratives. In addition, there are extensive writings in books, blog posts, white papers, and technical documentation. The same can’t be said, yet, of BizTalk Services.

This then represents an attempt at importing some known-good ideas and ideals into the world of MABS so that value can be realized. It is written not to codify practices and anoint them as “Best” without being proven, but instead to start a dialog on how best to use MABS both in present form, and with additional functionality that is yet to come.

NOTE: I am actively building out integrations to explore these ideas more fully, and expect to find that at least one thing I mention here either won’t work, or isn’t the best after all.

A Language of Patterns

For a brief period of my BizTalk life, I worked on the team that built the Enterprise Service Bus Toolkit 2.0. My job was to document not only the functionality as it was built, as well as sample applications, but also how existing Integration Patterns and SOA Patterns could be implemented using the Toolkit.

I explicitly recall that this last point was especially emphasized by the leader of that small team, Dmitri Ossipov. He wanted to communicate how integration patterns documented by Gregor Hohpe and SOA patterns documented by Thomas Erl could be implemented using the ESB Toolkit.

Why would we spend time linking product documentation to patterns? Because Dmitri understood something important. He understood that buzzword compliance wasn’t enough to drive adoption of the platform (i.e., being able to say “this thing uses an ESB under the covers,” or nowadays, “this is cloud-based integration,” or “hybrid integration” [isn’t all integration a hybrid of something?]). Instead adoption would be driven as developers could see it solving a problem, and solving it elegantly – where each Itinerary Service, out-of-the-box or custom, implemented a specific integration pattern and composing them could solve any integration challenge.

So which problems were the best to try and solve? Probably the most common ones (i.e., the ones that are so common as to have vendor-agnostic industry-wide documented patterns for overcoming them). So what does that have to do with MABS? Everything.

The problem space hasn’t changed. The cloud has been quite disruptive in the overall industry – likely for the best. It has leveled the playing field to the point that the established players can be publicly challenged by marketing teams of smaller players that are brand-new to the scene because the scene itself is still new. At the same time, the art and science of integrating systems is the same.

The patterns for approaching this brave new world are remixes and riffs on the patterns that we’ve already had. As a result, I’m going back to the fundamentals and using them to understand MABS.

When I’m starting on a new integration with BizTalk Server, I first sit down and think of that integration in terms of the problems I’m trying to solve and well known patterns that can be used to overcome those problems. A nice visual example of such a pattern is reproduced here (from Gregor Hohpe’s list of Enterprise Integration Patterns):

Here we’re seeing the Scatter-Gather pattern, which is actually a composite of both the Recipient List pattern (1 message being sent to multiple recipients) and the aggregation pattern (multiple messages being aggregated into a single message). This is being presented in the light of a fictional RFQ scenario.

To see it further broken down, we could pull in just the Recipient List Pattern:

Or we could pull in just the Aggregator Pattern:

Once we’ve determined the patterns involved, and how they’re composed, it’s a fairly straightforward exercise to envision the BizTalk components that would be involved and/or absolutely necessary in order to implement the solution.
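As an aside, the composite is small enough to sketch in plain C# outside of any integration product – the vendor URLs and the “pick the best quote” strategy here are made up, but the scatter (Recipient List) and gather (Aggregator) halves are plainly visible:

using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

// Scatter-Gather in miniature: broadcast one RFQ to a recipient list,
// then aggregate the replies into a single result.
public class QuoteBroker
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<string> GetBestQuoteAsync(string rfq)
    {
        string[] vendors =
        {
            "https://vendor-a.example.com/rfq",
            "https://vendor-b.example.com/rfq",
            "https://vendor-c.example.com/rfq"
        };

        // Scatter (Recipient List): send the same request to every vendor
        Task<string>[] pending = vendors
            .Select(async url =>
            {
                HttpResponseMessage response =
                    await Client.PostAsync(url, new StringContent(rfq));
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            })
            .ToArray();

        // Gather (Aggregator): wait for all replies, then reduce to one
        string[] quotes = await Task.WhenAll(pending);
        return quotes.OrderBy(q => q.Length).First(); // placeholder strategy
    }
}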

For the purposes of this post, I’m going to see how specific patterns might be implemented in Microsoft BizTalk Server, as well as Microsoft Azure BizTalk Services, and Microsoft Azure Service Bus. Specifically, I will be examining the Canonical Data Model pattern, Normalizer pattern, Content-Based Router pattern, Publish-Subscribe Channel pattern, and even the Dynamic Router pattern.

This is not to say that these are somehow the only patterns possible in MABS, but will instead form a nice basis for discussion. Let’s not get ahead of ourselves though, as there’s still some groundwork to cover.

Seeing the Itinerary Through the Bridges

So many times as I’m reading through content discussing Microsoft Azure BizTalk Services (blogs, code samples, documentation, tutorials, books), I see lots of emphasis placed on a single bridge, but rarely on the itinerary as a whole that contains it. Essentially I’m seeing a lot of this:

[Diagram: an itinerary consisting of a single bridge]

What are we looking at here? A really boring itinerary that does not showcase the value of MABS, and makes it seem impotent in the process.

Inside the bridge though, there’s a lot going on: message format translation, message validation, message enrichment, message data model transformation, with extensibility points to boot – all put together in a pipes and filters style.

But what if I want to build something bigger than data-in/data-out where not everything has a one-to-one relationship?

Bridges Revisited In Light of BizTalk Server

In our BizTalk Server classes, we ultimately caution people against building these direct point-to-point integrations. Instead, we try to leverage the Canonical Data Model pattern when possible – with the Canonical Data Model being a Canonical Schema.

What does that look like in BizTalk Server? First of all, I personally like to extend a tad beyond this base pattern and mix in a little bit of the Datatype Channel pattern as well.

With that in mind, we would start with a Receive Port for each type of message (e.g., Order, Status Request), and a Receive Location for each source system and data type we expect to see (e.g., FTP with Flat-file, vs. WCF-BasicHttp with XML).

From there, we have a canonical schema for each of those types of messages – an internal Order message, for example, or an internal Status Request message. At the end of the Receive Port, the appropriate transform would be selected so that a message conforming to the canonical schema is published to the Message Box database.
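
To make the idea concrete, here’s what a hypothetical canonical Order might look like – the namespace and fields are made up, but the point is that every source format, flat-file or XML, gets mapped to this single internal shape on receive:

```xml
<!-- Hypothetical canonical Order message; both the flat-file and the
     XML source formats are mapped to this one shape on receive. -->
<ns0:Order xmlns:ns0="http://internal.fabrikam.com/schemas/order">
  <OrderId>PO-10021</OrderId>
  <Customer>Contoso</Customer>
  <Total currency="USD">199.99</Total>
</ns0:Order>
```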

image

At the Message Box, another pattern is at work, whether or not we ever wanted it: BizTalk Server is doing publish-subscribe routing through the Message Box based on properties stored in the context of the message.

At this point, each recipient of the message has their own Send Port, through which a final transformation – to THEIR order format – is performed, along with a final pipeline, before the message is transmitted through the adapter.

What is all of this doing for us? It’s separating concerns and providing a loosely-coupled design.

I can have multiple sources for a message, and easily add a new source. Adding a source is a matter of adding a Receive Location and a Map that translates the incoming message to the internal message format. Adding a destination is just as simple and loosely coupled. It really is a beautiful place to live.
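
For instance, each recipient’s Send Port might carry a subscription expressed as a filter like the following (the message type namespace here is hypothetical):

```
BTS.MessageType == "http://internal.fabrikam.com/schemas/order#Order"
```

Any Send Port whose filter matches the context properties of a published message gets its own copy of that message – which is exactly what makes adding a new destination so cheap.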

The Tightly Coupled Nightmare We’re Building for Ourselves

So why then, when presented with the same integration problems, do we use MABS like this? Is it just that what’s being published widely isn’t representative of the real-world live production solutions that are out there right now? Or is it that we’re blinded and can’t see the itinerary as a whole because we’re caught up with the bridge? I honestly don’t know the answer to this question. All I know for sure is that single bridge itineraries are not the answer for everything.

Why? Because this is a fairly tightly coupled design, and something that we might want to reconsider.

image

So let’s do just that. Let’s start though by following a message as it would flow through this system, while comparing how BizTalk Server would handle the same.

The whole thing starts with the Channel Adapter pattern.

In BizTalk Server, that’s the Adapter configured within each Receive Location, which indicates where the message will be transported from. In MABS, it’s our Source.

Next we go through some filters:

In BizTalk Server, this is our Pipeline inside the Receive Location. In MABS, it’s most of our Bridge. WCF is also rocking this pattern (potentially earlier in the process) in the form of the channel stack, and both BizTalk Server and MABS give you hooks into deep customization of that WCF channel stack if you can’t get enough opportunities to touch the message.
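
As a toy illustration of that pipes-and-filters composition – this is not BizTalk’s or MABS’s actual API, just the shape of the pattern:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Each filter takes the message, does one job, and passes the result along,
// just as the stages of a pipeline or bridge do.
class FilterPipeline
{
    private readonly List<Func<string, string>> _filters =
        new List<Func<string, string>>();

    public FilterPipeline Then(Func<string, string> filter)
    {
        _filters.Add(filter);
        return this;
    }

    public string Execute(string message)
    {
        // Run the message through every filter, in order.
        return _filters.Aggregate(message, (msg, filter) => filter(msg));
    }
}

class Program
{
    static void Main()
    {
        var pipeline = new FilterPipeline()
            .Then(msg => msg.Trim())                      // decode / normalize
            .Then(msg =>                                  // validate
            {
                if (msg.Length == 0)
                    throw new InvalidOperationException("Empty message");
                return msg;
            })
            .Then(msg => "<Order>" + msg + "</Order>");   // transform / enrich

        Console.WriteLine(pipeline.Execute("  order-001  "));
    }
}
```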

For the BizTalk implementation of the Canonical Data Model pattern, we would then go through the Message Translator:

This is actually where things get interesting. In BizTalk Server, this is the Map that executes after the Pipeline, within the context of the Receive Port. For MABS, we’re still in the middle of the Bridge at this point. In fact, for MABS, it’s one of the “Filters” – to borrow a term from the pattern above – alongside other patterns such as message enrichment.

After this point, BizTalk enters a publish-and-subscribe step, and our simple, single-bridge MABS itinerary breaks down. We won’t have another opportunity to prepare the message for its destination beyond our Route Actions – unless we’ve already prematurely made the leap and readied the message for only a single destination, ever.

Why does it break down? Because we’ve artificially limited ourselves to a single bridge. We’ve seen that bridges are highly capable, and have a lot going on, so we try to get it all done up-front. But is that a Best Practice even for BizTalk? Do we do all of our processing on receive just because pipelines are extensible and can host any custom code our hearts desire? Thankfully the model of BizTalk Server, centered around the Message Box, forces us to be a little bit smarter than that.

Standard best practices for BizTalk are helping us “Minimize dependencies when integrating applications that use different data formats.” Shouldn’t best practices for MABS allow us to do the same?

Crossing the Next Bridge

Consider the following Itinerary:

image

Here we have a situation where, as I’m receiving my message, I don’t yet know the destination. In fact, I may have to perform all of the work in my first bridge before I know that destination. After I’m done with that first bridge, though, I will have a standard internal representation – of a “Trade”, in this case.

From there, I am using Route Filters to determine my destination, and bridges to ensure the destination gets a message it can deal with.

More specifically, there are really four stages of the overall execution:

image

  1. The first stage is my source and first bridge. These are playing a role similar to that of a Receive Port in BizTalk Server, ending in a Map. I’m not necessarily taking advantage of every stage within the bridge, but I am certainly doing some message transformation, and maybe even enrichment. The main goal of this first stage is to get the message translated FROM whatever it looked like in the source system TO our internal representation.
  2. The second stage of execution is between bridges. These are my routes, each with route filters that determine the next stage of the execution. They are playing a role similar to the Filters on my Send Ports in BizTalk, and yet at the same time they’re doing something completely different. Why? Because only one bridge will ever receive the message – even if both filters match, one will have priority. This is not true publish-subscribe; it is more akin to a Content-Based Router of sorts. We’ll address in the next section how to get publish and subscribe happening for even more loosely-coupled execution.
  3. The third stage is my second bridge, route actions, and my destination. These are playing a role similar to that of a Send Port in BizTalk Server, beginning with a Map. I’m using this bridge to take our internal representation of the message and get it ready for the destination system.
  4. Sometimes we need dynamic routing to some degree even when we know the destination; that’s what might be happening through the Route Actions in this fourth stage. In this specific case it isn’t; however, if my destination were SFTP, for example, this would be my opportunity to set a destination file name based on some value inside the message.

True Publish-Subscribe In MABS

What if I were to tell you that MABS does publish-subscribe better than the Message Box database? Well, it’s not true, so you shouldn’t believe it – like most clickbait headlines. In fact, MABS doesn’t do publish-subscribe at all. Rather, it relies on Microsoft Azure Service Bus to provide publish-subscribe capabilities – interestingly, a technology that we can also take advantage of through adapters first made available in BizTalk Server 2013.

So how do we do it? We publish to a Service Bus Topic, and receive through Service Bus Subscriptions (queue-like entities that receive messages published to a Topic, based on Filters). Consider the following itinerary:

image

We no longer have connections between the three seemingly independent sections of this itinerary. In fact, if we didn’t need to share any resources between them, they could be built as separate itineraries, in separate Visual Studio projects.

This is not new stuff; in fact, Richard Seroter mused about the possibilities that Service Bus would provide in MABS all the way back in February.
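
Here’s a minimal sketch of the topology behind an itinerary like this one, using the 2013-era Service Bus SDK – the topic, subscription, property, and filter names are all hypothetical:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class PubSubTopology
{
    static void Main()
    {
        string connectionString =
            "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";

        var ns = NamespaceManager.CreateFromConnectionString(connectionString);

        if (!ns.TopicExists("trades"))
            ns.CreateTopic("trades");

        // Each downstream bridge pulls from its own subscription; the SqlFilter
        // plays the role that a Send Port filter plays in BizTalk Server.
        if (!ns.SubscriptionExists("trades", "equity-trades"))
            ns.CreateSubscription("trades", "equity-trades",
                new SqlFilter("TradeType = 'Equity'"));

        if (!ns.SubscriptionExists("trades", "bond-trades"))
            ns.CreateSubscription("trades", "bond-trades",
                new SqlFilter("TradeType = 'Bond'"));

        // The first bridge publishes the canonical message to the topic, with
        // the routing property promoted into the message's property bag.
        var client = TopicClient.CreateFromConnectionString(connectionString, "trades");
        var message = new BrokeredMessage("<Trade>...</Trade>");
        message.Properties["TradeType"] = "Equity";
        client.Send(message);
    }
}
```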

Dynamic Sends and Receives

Another interesting scenario enabled here is the idea of a Dynamic Receive. What would that look like? Let’s say I have a subscription that I want to toggle or reconfigure at runtime. Maybe all trades for a specific company should result in a special alert message routed somewhere, based on conditions that are not known at design-time, and could change at any time. Let’s also say that I don’t want to re-deploy my existing bridge.

I can simply add another subscription to the topic for all trades related to that company, and deploy a new itinerary that handles the special alert message. But that doesn’t sound truly dynamic. In fact, it’s not bringing anything new to the table that BizTalk Server doesn’t already do.

On the other hand, what if I had this alerting itinerary already built and deployed, and I want to dynamically route messages to it with conditions that change constantly at runtime (outside the scope of the messages themselves) – e.g., toggle a subscription for a given trade based on its current Ask price and the symbol in the trade message?

In that case, my special alert itinerary might be fed by a queue that receives messages forwarded from a subscription generated on the fly at runtime (because Service Bus can do that!) – maybe even from within a custom message inspector, if I go fully insane.
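
A hypothetical sketch of that toggle follows – the topic, queue, and property names are made up, but the key Service Bus features (on-the-fly subscription creation and auto-forwarding to a queue) are real:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Create or tear down an alert subscription at runtime, without redeploying
// the already-built alert itinerary that reads from "alert-queue".
class AlertToggler
{
    private readonly NamespaceManager _ns;

    public AlertToggler(string connectionString)
    {
        _ns = NamespaceManager.CreateFromConnectionString(connectionString);
    }

    public void EnableAlert(string symbol, decimal askPriceThreshold)
    {
        var description = new SubscriptionDescription("trades", "alert-" + symbol)
        {
            // Matching trades are forwarded to the queue feeding the
            // already-deployed alert itinerary.
            ForwardTo = "alert-queue"
        };

        var filter = new SqlFilter(string.Format(
            "Symbol = '{0}' AND AskPrice > {1}", symbol, askPriceThreshold));

        if (!_ns.SubscriptionExists("trades", description.Name))
            _ns.CreateSubscription(description, filter);
    }

    public void DisableAlert(string symbol)
    {
        if (_ns.SubscriptionExists("trades", "alert-" + symbol))
            _ns.DeleteSubscription("trades", "alert-" + symbol);
    }
}
```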

Pain Points and Opportunities For Improvement

Ultimately we’re so close, and yet still so far away from having an ideal development experience with MABS. I’m happy that we have process visibility at design-time. Being able to visualize the whole message flow can be invaluable. On the other hand, I’m not happy with the cost at which this comes: sacrificing the ability to easily partition my resources on Visual Studio Project boundaries.

Separate Projects Per Interface

Ideally I would like to have a project per interface that I’m integrating with, and one common project that contains common schemas. I would also like to be able to split up the whole itinerary into smaller pieces (i.e., files) without losing the ability to visualize the itinerary as a whole.

Instance Subscription Equivalent for Two-Way Sources

In BizTalk Server, our two-way receives turn into two one-way exchanges as messages are published to the Message Box. There’s some magic built-in for correlation, but for the most part we aren’t forced into doing everything that follows in a purely request-reply message exchange pattern. Currently Microsoft Azure BizTalk Services does not offer the same capability. As a result, the nice publish/subscribe example itinerary above is only possible in the case of a one-way pattern.

Best Practices

So what are some practices that we can derive from this whole discussion? Here’s an incomplete list that I’m toying with at the moment:

  1. Build your cloud-based integrations with integration patterns in mind – don’t ignore the fundamentals, or throw away the lessons of the past
  2. Allow bridges to specialize on one portion of the process flow – don’t try to build your entire integration using a single bridge
  3. Use the Canonical Data Model pattern when integrating between multiple source and destination systems, especially when the list of those systems is subject to change
  4. Leverage the publish-subscribe capabilities of Service Bus when needed; don’t rely only on routes within MABS
  5. Push down to on-premise BizTalk Server to handle patterns that aren’t yet possible in MABS.
    • Mediating mismatched message exchange patterns through BizTalk Server’s instance subscriptions for two-way receives (e.g., an incoming synchronous request/response interface provided against an asynchronous one-way request with a one-way callback)
    • Handling aggregation of messages, which will require the use of Orchestration
    • Handling the O in the VETRO pattern (Validate, Enrich, Transform, Route, Operate), which again will require the use of Orchestration
    • Handling the execution of Business Rules within BizTalk Server on-premise

Looking Forward

I’m really looking forward to seeing how Workflow is going to fit into the picture. We’ll have to wait until Integrate 2014 in December to know for sure, but if we keep the fundamentals in mind, it shouldn’t be hard to figure out how it will fit into a concept of MABS itineraries rooted in the integration patterns that make them up.

I truly hope that you have enjoyed this post, and I thank you for reading to the end – if indeed you have, and haven’t simply skipped ahead to find out if I ever stopped typing. Remember that I want this to be a dialog: a retrospective one about what is and isn’t working with your MABS integrations, and a forward-looking one about which practices will likely emerge as generally applicable and ultimately most beneficial.

BizTalk Server 2013 R2 Pipeline Component Wizard

By Nick Hauenstein

While working on the upcoming installment in my BizTalk Server 2013 R2 New Features series, I realized that I did not yet have a version of Martijn Hoogendoorn’s excellent BizTalk Server Pipeline Component Wizard that would work happily in Visual Studio 2013.

As a result, I filed this issue, and then made the necessary modifications to make it work in Visual Studio 2013 with BizTalk Server 2013 R2. It is now available for download here if you want to give it a test run.

image
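
If you haven’t seen wizard-generated components before, here’s a stripped-down sketch of the kind of class it scaffolds – a minimal pass-through component assuming the standard pipeline component interfaces (the real generated code also implements IBaseComponent, IComponentUI, and IPersistPropertyBag, and carries the component category attributes):

```csharp
using Microsoft.BizTalk.Component.Interop;
using Microsoft.BizTalk.Message.Interop;

public class PassThroughLogger : IComponent
{
    public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
    {
        // Read a system context property, just for illustration.
        object receivePort = pInMsg.Context.Read(
            "ReceivePortName",
            "http://schemas.microsoft.com/BizTalk/2003/system-properties");

        System.Diagnostics.Trace.WriteLine(
            "Message received on port: " + receivePort);

        // Return the message unchanged so the rest of the pipeline continues.
        return pInMsg;
    }
}
```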

Unfortunately, that has consumed a significant portion of my evening, and thus the next installment in the blog series will be delayed. But fear not, there is some really cool stuff just around the corner.

Integrate 2014 Registration is Live

In other news, Integrate 2014 registration is now live, and the event is only 72 days away! It’s going to be an awesome start to a cool Redmond December as 300-400 BizTalk enthusiasts from around the globe converge on the Microsoft campus for a few days of announcements, learning, networking, and community.

I’m not going to lie, I’m pretty excited for this one! Especially seeing these session titles for the first day:

  • Understand the New Adapter Framework in BizTalk Services
  • Introducing the New Workflow Designer in BizTalk Services

Well, that’s all for now. I hope to see you there!

BizTalk Summit 2014 in December

By Paul Pemberton

Save the date for BizTalk Summit 2014, which will take place December 3-5 on the Microsoft Campus in Redmond, WA. There will be 300+ partners and customers, so bring your questions, knowledge, and business cards.

Agenda

  • Executive keynotes outlining Microsoft vision and roadmap
  • Technical deep-dives with product group and industry experts
  • Product announcements
  • Real-world demonstrations from lighthouse customers
  • Roundtable discussions + Q&A
  • Partner Showcase Sessions
  • We are also planning hands-on labs so you can roll up your sleeves and experience new capabilities
  • Networking and social activities