BizTalk Server 2016 New Features: Shared Access Signature Support for Relay Adapters

By Nick Hauenstein

At the end of last week, a few of us from QuickLearn Training hosted a webinar covering some of the new features in BizTalk Server 2016. This post serves as a proper write-up of the feature that I shared and demonstrated – Shared Access Signature Support for Relay Adapters. If you missed it, we’ve made the full recording available on YouTube here. We’ve also clipped out just the section on Shared Access Signature Support for Relay Adapters over here – which might be good to watch before reading through this post.

While that feature is not the most flashy or even the most prominent on the What’s New in BizTalk Server 2016 page within the MSDN documentation, it should come as a nice relief for developers who want to host a service in BizTalk Server while exposing it to consumers in the cloud — with the least amount of overhead possible.

Shared Access Signature (SAS) Support for Relay Adapters

Configuring SAS Security for the WCF-BasicHttpRelay Adapter

You can now use SAS authentication with the following adapters:

  • WCF-BasicHttpRelay
  • WCF-NetTcpRelay
  • WCF-BasicHttp*
  • WCF-WebHttp*

* = SAS for these adapters is used only when sending messages as a client (the adapters can still be used as receive adapters, just not to host Azure Relay enabled endpoints)

Why Use SAS Instead of ACS?

Before BizTalk Server 2016, our only security option for the BasicHttpRelay and NetTcpRelay adapters was the Microsoft Azure Access Control Service (ACS).

One of the main scenarios that the Access Control Service was designed for was Federated Identity. For simpler scenarios, wherein I don’t need claims mapping, or even the concept of a user, using ACS adds potentially unnecessary overhead to (1) the deployed resources (inasmuch as you must set up an ACS namespace alongside the resources you’re securing), and (2) the runtime communications.

Shared Access Signatures were designed more for fine-grained and time-limited delegation of authority over resources. The holder of a key can sign and distribute small string-based tokens that define which resource a client may access and the timeframe within which they are allowed to access it.


Hosting a Relay Secured by Shared Access Signatures

In order to expose a BizTalk-hosted service in the cloud via Azure Relay, you must first create a namespace for the relay – a place for the cloud endpoint to be hosted. It’s at the namespace level that you generate the keys used for signing SAS tokens: tokens that allow BizTalk Server to host a new relay, and tokens that allow clients to send messages to any of that namespace’s relays.

The generated keys are associated with policies, each of which defines the claims/rights that its keys are allowed to delegate.

Shared Access Policies for the Azure Relay Namespace

In the example above, using the key associated with the biztalkhost policy, I would be able to sign tokens that allow applications to Listen at a relay endpoint within the namespace, but I would not be able to sign tokens allowing applications to Send messages to the same relays.

Clicking a policy reveals its keys. Each policy has two keys that can be regenerated independently, allowing you to roll over to a new key while leaving a grace period during which the older key remains valid.

Shared Access Policy Keys

Either one of these keys can be provided in the BizTalk Server WCF-BasicHttpRelay adapter configuration to host a new relay.

Configuring the Security Settings for the WCF-BasicHttpRelay Adapter

When configuring the WCF-BasicHttpRelay adapter, rather than providing a pre-signed token with a pre-determined expiration date, you provide the key directly. The adapter can then sign its own tokens that will be used to authorize access to the Relay namespace and listen for incoming connections. This is configured on the Security tab of the adapter properties.

WCF-BasicHttpRelay Shared Access Signature Configuration

If you would like to require clients to authenticate with the relay before they’re allowed to send messages, you can set the Relay client authentication type to RelayAccessToken:

Enabling client authentication for relay endpoints

From there it’s a matter of choosing your service endpoint, and then you’re on your way to a functioning Relay:

Relay endpoint

Once you Enable the Receive Location, you should be able to see a new WCF Relay with the same name appear in the Azure Portal for your Relay namespace. If not, check your configuration and try again.


Most importantly, your clients can update their endpoint addresses to call your new service in the cloud.

The Larger Picture: BizTalk Hybrid Cloud APIs

One thing to note about this setup, however, is that the WCF-BasicHttpRelay adapter is actually not running in the Isolated Host. In other words, rather than running as part of a site in IIS, it runs in-process within the BizTalk Server Host Instance itself. While that means far less complexity, it also sacrifices the ability to run the request through additional processing before it hits BizTalk Server (e.g., rate limiting, blacklisting, caching, URL rewriting). If I were hosting the service on-premises I would have this ability right out of the box. So what would I do in the cloud?

Using API Management with BizTalk Server

In the cloud, we have the ability to layer on other Azure services beyond just the Azure Relay capability. One such service that might solve the dilemma described in the previous section is Azure API Management.

Rather than having our clients call the relay directly (and thus having all message processing done by BizTalk Server), we can provide API Management itself with a token to access our BizTalk-hosted service. The end users of the service wouldn’t know the relay address directly, or have the required credentials to access it. Instead, they would direct all of their calls to an endpoint in API Management.


API Management, like IIS and BizTalk Server, provides robust and customizable request and response pipelines. In the case of API Management, the definitions of what happens in these pipelines are called “policies.” There are both inbound policies and outbound policies, and they can be configured for a whole service at a time and/or for specific operations. They enable patterns like translation, transformation, caching, and rewriting.

In my case, I’ve designed a quick and dirty policy that rewrites an inbound request so that it goes from being a simple GET to being a POST with a SOAP message body. It enables caching and implements basic rate limiting for inbound requests. On the outbound side it translates the SOAP response to a JSON payload, effectively exposing our on-premises BizTalk Server-hosted SOAP service as a cloud-accessible RESTful API.
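
To make that a little more concrete, here is a rough, simplified sketch of the kind of policy being described. This is my own illustration, not the actual policy from the screenshot further down; the SOAP action, operation name, and the ServiceBusAuthorization header used to carry the SAS token are assumptions you would adjust for your own relay and service contract.

<policies>
    <inbound>
        <base />
        <!-- Basic protection and caching in front of the relay -->
        <rate-limit calls="10" renewal-period="60" />
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false" />
        <!-- Turn the incoming GET into a SOAP POST -->
        <set-method>POST</set-method>
        <set-header name="Content-Type" exists-action="override">
            <value>text/xml; charset=utf-8</value>
        </set-header>
        <set-header name="SOAPAction" exists-action="override">
            <!-- Hypothetical action; use whatever your service contract exposes -->
            <value>"http://tempuri.org/IOrderService/GetOrderStatus"</value>
        </set-header>
        <!-- SAS token that authorizes API Management to call the relay
             (value shortened; header name assumed; note the escaped ampersands) -->
        <set-header name="ServiceBusAuthorization" exists-action="override">
            <value>SharedAccessSignature sr=...&amp;sig=...&amp;se=...&amp;skn=send</value>
        </set-header>
        <!-- CDATA keeps the SOAP envelope from being parsed as part of the policy document -->
        <set-body><![CDATA[<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"><s:Body><GetOrderStatus xmlns="http://tempuri.org/" /></s:Body></s:Envelope>]]></set-body>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
        <!-- Project the SOAP/XML response as JSON and cache it briefly -->
        <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
        <cache-store duration="60" />
    </outbound>
</policies>

Note that the SOAP envelope lives inside a CDATA section and that the ampersands in the SAS token value are escaped; both of those details come up again in the tips further down.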

So what does it look like in action? Below, you can see the submission of a request from the client’s perspective:

BizTalk API from the client's perspective

How does BizTalk Server see the input message? It sees something like this (note that the adapter has stripped away the SOAP envelope at this point in processing):

Request message from BizTalk Server

What about on the outbound side? What did BizTalk Server send back through the relay? It sent an XML message resembling the following:

Response message from BizTalk Server

If you’re really keen to dig into the technical details of the policy configuration that made this possible, they’re all here in their terrifying glory (click to open in a new window, and read slowly from top to bottom):

API Management REST to SOAP policy definition

The token was generated with a quick and dirty purpose-built simple console app (the best kind).
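
In case it’s useful, here is a minimal sketch of what that kind of console app boils down to. This is my own illustration rather than the exact app I used; the namespace, relay path, policy name, and key are placeholders.

using System;
using System.Security.Cryptography;
using System.Text;

class SasTokenGenerator
{
    static void Main()
    {
        // Placeholders: substitute your own namespace, relay path, policy name, and key.
        string resourceUri = "http://contoso.servicebus.windows.net/myrelay";
        string policyName = "send";
        string key = "<shared access key copied from the policy in the portal>";

        Console.WriteLine(CreateSasToken(resourceUri, policyName, key, TimeSpan.FromDays(30)));
    }

    // Builds a Service Bus / Relay style SAS token: an HMAC-SHA256 signature over the
    // URL-encoded resource URI plus an expiry expressed in Unix time (seconds).
    static string CreateSasToken(string resourceUri, string policyName, string key, TimeSpan lifetime)
    {
        long expiry = DateTimeOffset.UtcNow.Add(lifetime).ToUnixTimeSeconds();
        string encodedUri = Uri.EscapeDataString(resourceUri);
        string stringToSign = encodedUri + "\n" + expiry;

        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)))
        {
            string signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

            return string.Format(
                "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}",
                encodedUri,
                Uri.EscapeDataString(signature),
                expiry,
                policyName);
        }
    }
}

The resulting SharedAccessSignature string is what gets handed to API Management (remembering to escape its ampersands inside the policy XML).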

Tips, Tricks, and Stumbling Blocks

Within the API Management policy shown above, you may have noticed the CDATA sections. They’re not optional: you’ll end up with some sad results if you don’t remember to escape (or wrap in CDATA) any XML input you have, or the security token itself, which contains ampersands and other characters that must be escaped in XML.

Another interesting thing about the policy above is that the WCF-BasicHttpRelay adapter might choke while creating a BizTalk message out of the SOAP message the policy constructs (which includes heaps of whitespace so as to be human readable), failing with the following error: “The adapter WCF-BasicHttpRelay raised an error message. Details ‘System.InvalidOperationException: Text cannot be written outside the root element.’”

This can be fixed quite easily by adjusting the adapter properties so that they locate the message body using a path expression of “*”.


Questions and Final Thoughts

During the webinar the following questions came up:

  • One audience member asked, “Is https supported?”
    • A: Yes, for both the relay itself and the API management endpoint.
  • Another audience member inquired, “Maximum size is 256KB; I was able to get a response about 800 KB; Is that because BizTalk and Azure apply the compression technology and after compression the 800KB response shrinks to about 56KB?”
    • A: The size limit mentioned applies to brokered messages within Service Bus (i.e., those you would receive using the SB-Messaging adapter). Azure Relay is a separate service that does not store the message for any period of time – messages are streamed to the service host. That means if BizTalk Server disconnects, the communication is terminated, but on the plus side you don’t have to worry about how much space you’re allowed to use per message in the cloud. There’s a nice article comparing the two communication styles over here.

I hope this has been both helpful and informative. Be sure to keep watching for more of QuickLearn Training’s coverage of New Features in BizTalk Server 2016, and our upcoming BizTalk Server 2016 training courses.

Integrate 2014 Day 2 in Review

By Nick Hauenstein

I’m going to start off today’s post with some clarifications/corrections from my previous posts.

First off – It is now my understanding that the “containers” in which the Microservices will be hosted and executed are simply a re-branding of the Azure Websites functionality that we already have. This has interesting implications for the Hybrid Connections capability as well – inasmuch as our Microservices essentially inherit the ability to interface directly with on-premises systems as if they were local.

This also brings clarity to the “any language” remark from the first day. In reality, we’re looking at building them in any language supported by Azure Websites (.NET languages, Java, PHP, Node.js, Python) – or truly any language if we host the implementation externally but expose a facade through Azure Websites (at the expense of egress, added latency, loss of auto-load balancing and scale), but I digress.

UPDATE (05-DEC-2014): There are actually some additional clarifications now available here; please read them before continuing. Most importantly, there is no product called the Azure BizTalk Microservices Platform – it’s just a new style in which Microsoft is approaching building out and componentizing integration (and other) capabilities within the Azure Platform. Second, Azure Resource Manager is a product that sits on top of an engine. The engine is what’s being shared with the new Workflow capability discussed – not the product itself. You could say it’s similar to how workflow services and TFS builds use the same underlying engine (WF).

The rest of the article remains unchanged because there are simply too many places where the name was called out as if it were a product.

Rules Engine as a (Micro)Service

After a long and exciting day yesterday, day 2 of Integrate 2014 got underway with Anurag Dalmia bringing the latest thinking around the re-implementation of the BizTalk Business Rules Engine that is designed to run as a Microservice in the Azure BizTalk Microservices Platform.

Anurag Dalmia presents the Rules Engine Design Principles 

First off, this is not the existing BizTalk Rules Engine repackaged for the cloud. This is a complete re-implementation designed for cloud execution and with the existing BRE pain points in mind. From the presentation, it sounds as if the core engine is complete, and all that remains is a new Azure Portal-based design experience (which currently only exists in storyboard form) around designing vocabularies, rules, and policies for the engine.

Currently the (XML-based, not JSON!) vocabularies support:

  • Constant & XML based vocabulary definitions
  • Single value, range and set of constants
  • XML vocabulary definitions (created from uploaded schema)
  • Bulk Generation (no details were discussed for this, but I’d be very interested in seeing what that will look like)
  • Validation

Vocabulary Design Experience in Azure BizTalk Microservices

Missing from the list above are really important things like .NET objects and database tables, but these are slated for future inclusion. That being said, I’m not sure how exactly custom .NET classes as facts are going to work in a Microservices infrastructure, assuming that each Microservice is an independent, isolated chunk of functionality invoked via RESTful interactions. Really, the question becomes: how does it get your .dlls so that it can Activator.CreateInstance that jazz? I guess if schema upload can be a thing there, then .dll upload can as well. But then, are these stored in private Azure blob containers, some other kind of repository, or should we even care?
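
Purely to illustrate the question (none of this is announced API or anything shown at the event), the “that jazz” part would presumably boil down to something like this:

using System;
using System.Reflection;

// Hypothetical illustration only: roughly what any engine would have to do to turn
// an uploaded .dll plus a type name into a fact instance. Not an announced API.
static class FactLoader
{
    public static object CreateFact(string assemblyPath, string typeName)
    {
        // The engine would first need to fetch the assembly from wherever uploads live
        // (private blob storage, a package repository, etc.).
        Assembly factAssembly = Assembly.LoadFrom(assemblyPath);
        Type factType = factAssembly.GetType(typeName, throwOnError: true);

        // Assumes the fact type has a public parameterless constructor.
        return Activator.CreateInstance(factType);
    }
}

Everything interesting is in where assemblyPath comes from, which is exactly the open question.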

On the actual rules-creation side, things become quite a bit more interesting. Gone is the painful million-click Business Rule Composer – in its place is free-flowing text. All of this is still happening in a web-based editor that also provides Intellisense-like functionality, tooltips, and color-coding of special keywords. To get a sense for what these rules look like, here’s one rule that was shown:

If (Condition)

ClaimAmount is greater than AutoApprovalLimit OR
TreatmentID is in SpecialTreatmentIDs

Then (Action)

ClaimStatus equals "Manual Approval Required"
ClaimStatesReason equals "Claim sent for Manual Approval"
Halt

Features of the Rules Engine were said to include:

  • Handling of optional XML nodes
  • Enable/Disable Rules
  • Rule prioritization through drag-and-drop
  • Support for Update / Halt Forward Chaining (No Assert?)
  • Test Policy (through Web UI, or via Test APIs)
  • Schema Management

I’m not going to lie: at that point, I got really concerned about the lack of any declared ability to Assert new facts (or to Retract facts, for that matter). I’m hoping that this was a simple omission from the slide, but I do intend to reach out for clarification.

Storyboard for the Web-based Test Policy UI

Building Connectors and Activities

After the session on the Rules Engine, Mohit Srivastava was up to discuss Building Connectors and Activities. The session began, however, with a recap of some of the things that Bill Staples discussed yesterday morning. I’m actually really thankful for this recap, as I had missed some things along the way (namely Azure Websites as the hosting container), and I also had a chance to snap a picture of what is likely the most important slide of the entire conference (which I had missed getting a picture of the first time around).

Microservices are part of refactored App Platform with integration at the core

I’ve re-created the diagram of the “refactored” Azure App Platform with a few parenthetical annotations:

Re-factored Azure App Platform

One interesting thing about this diagram, when you really think about it, is that the entry point (for requests coming into the platform) doesn’t have to be from the top down. A request can go directly to a capability, to a process, to a composed set of capabilities, or to a full human-friendly UI around any one of those things.

So what are all of the moving pieces that will make it all work?

  1. Gallery for Microservice Discovery
    • Some Microservices will be codeless (e.g., SaaS and On-premises connectors)
    • Others will be code (e.g. activities and custom logic)
  2. Hosting – Azure App Container (formerly Azure Websites)
  3. Gateway
    1. Security – identity broker, SSO, secure token store
    2. Runtime – name resolution, isolated storage, shared config, “IDispatch” on WADL/Swagger (though such metadata is technically optional)
    3. Proxy – Monitoring, governance, test pages
      • Brings all of the value of API management to the gateway out-of-the-box
  4. Developers
    • Writing RESTful services in your language of choice.

To further prove exactly what a Microservice is, he demoed a sample service starting from just the raw endpoint. You can even look for yourselves here:

What’s really cool about all of this is that the tooling support for building such services is going to be baked into Visual Studio. We already have Web API for cleanly building out RESTful services, but the ability to package these with metadata and publish them to the gallery (a la NuGet) is going to be included as part of a project template and the Publish Web experience. This was all shown in storyboard form, and that’s when I had my moment of developer happiness (much like Nino’s yesterday, as he gained reprieve from crying over BizTalk development pain points when first using the productivity tool that he developed).

Publish Web Experience for BizTalk Microservices built using Web API
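
For context, the kind of Web API code in play here is nothing exotic. A minimal, purely hypothetical controller might look like the following; the interesting part is the metadata and gallery publishing that the tooling layers on top.

using System.Web.Http;

// Minimal ASP.NET Web API controller of the sort a code-based microservice could be built from.
// The controller name and payload are hypothetical.
public class QuoteController : ApiController
{
    // GET api/quote/5
    public IHttpActionResult Get(int id)
    {
        return Ok(new { Id = id, Text = "Integrate all the things." });
    }
}

The project template and Publish Web changes are about wrapping that kind of controller with the metadata the gallery and gateway need.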

Finally, we’re getting low enough into the platform that we’re inside Visual Studio and can meaningfully deploy some code – one of the greatest feelings in the whole world.

The talk continued with fragments of code (that, unfortunately, were too blurry in my photos to capture here) demonstrating the direct runtime API that Microservices will have access to in order to do things like encrypted isolated storage, and a mechanism to manage and flow tokens for external SaaS products that are used within a larger workflow. There’s some really exciting stuff here. I honestly could have sat through an entire day of that session just going all the way into it.

But, alas, there were still more sessions to be had.

API Management and Mobile Services

I’m grouping these together inasmuch as they represent functionality within Azure that we have had for some time now (Mobile Services certainly longer than API Management). I’ve seen quite a bit on these already, and was mainly looking for the touchpoints with the Microservices story.

API Management sits right under Microservices in the diagram shown earlier, and it would make sense for it to become the monetization strategy for developers who want to write/expose a specific capability within Azure. However, that wasn’t explicitly stated; in fact, the only direct statement we had was the one above, where we saw that the capabilities of API Management are available within the gateway. That left me a little confused, and I honestly could have missed something obvious there. As much as Josh was fighting PowerPoint, I was fighting my Surface towards the beginning of his talk:

Fighting my Surface at the beginning of Josh Twist's talk on API Management

If you’re not familiar with API Management, it provides the ability to put a cloud-hosted wrapper around your API and project it (in the data-shaping sense), carefully choosing the exposed resources, actions, and routes through which they can be accessed. It handles packaging your APIs into saleable subscriptions and monitoring their use. That’s a gross oversimplification, and I highly recommend that you dig in right away and explore it, because there’s a lot there and it’s super cool.

That being said, in terms of Microservices, it would be truly great if we could use that to wrap external services and then turn the Azure-hosted portion of the API into a Microservice, in such a way that we could even flow back to our external service some of the same information we could get directly from the APIs available if we were writing within a proper Azure App Container. For example, to be able to request a certain value from the secure store and have it passed in a special HTTP header to our external service, which could then use that value in any way it wanted. That would really help speed adoption, as I could quite easily take any BizTalk Server on-premises capability, wrap a nice RESTful endpoint around it, and not have to worry about authorization, rate limiting, or re-implementation.

Next up was Kirill Gavrylyuk, rocking Xamarin Studio on a Mac to talk about Mobile Services (he even went for the hat trick and launched an Android emulator). He actually did feature a slide towards the end of his talk showing the enterprise/non-consumer-centric Mobile Services development experience by positioning Mobile Services within the scope of the refactored Azure App Platform:

Mobile Services in light of Refactored App Platform

I’m going to let that one speak for itself for now.

Those two talks were a lot of fun, and I don’t want to sell them short by not writing as much, but there’s certainly already a lot of information out there on these topics.

Big Data With Azure Data Factory & Power BI

The day took a little bit of a shift after lunch as we saw a few talks on both Azure Data Factory and Power BI. In watching the demos, and seeing those talks, it’s clear that there’s definitely some really exciting stuff there. Sadly, I’m already out-of-date in that area, as there were quite a few things mentioned that I was entirely unaware of (e.g., Azure Data Factory itself). For now, I’ll leave any coverage of those topics to the BI and Big Data experts – which I will be the first to admit is not me. I don’t think in more than 4 dimensions at a time – though with Power BI maybe all I need to know how to do is to speak English.

For all of those out there that spend their days writing MDX queries, I salute you. You deserve a raise, no matter what you’re being paid.

HCA Rocks BizTalk Server 2013 R2

For the last talk of the day, Alan Scott from HCA and Todd Rivers from Microsoft presented on HCA’s use of BizTalk Server 2010 & 2013 R2 for processing HL7 (and MSMQ + XML) workloads. The presentation was excellent, and it’s going to be really difficult to capture it here. One of the most impressive things (besides their own web-based rules editing experience) is the sheer scale of the installation:

HCA Rocks BizTalk Server 2013 R2

Cultural Change Reaps Biggest Rewards – Value People Not Software

The presentation really highlighted not only the flexibility of the BizTalk platform, but also the power of having a leader who is able to evangelize the capability to the business – while being careful not to talk in terms of the platform, but in terms of the people and the data, and also while equipping the developers with the tools they will need to succeed with that platform.


Looking Forward

Looking forward beyond today, I’m getting really excited about the direction we’re headed. We still have a rock-solid platform on-premises alongside a hyper-flexible distributed platform brewing in the cloud.

To that end, I actually want to announce today that QuickLearn Training will be hosting an Azure BizTalk Microservices Hackathon shortly after the release of the public preview. It will be a fun time to get together and dig through it all, to discuss which microservices will be valuable, and most of all to build some together that can provide value to the entire community.

If any community is up for that, I know it’s the BizTalk community. I’m just really excited that there’s going to be a proper mechanism to surface those efforts so that anyone who builds for the platform will have it at their disposal without worries.

If you want more details, or you want to join us (physically, or even remotely) when that happens, head over here: http://bit.ly/1AcMWIy

For that matter, if you want to host one in your city at the same time and connect up with us here in Kirkland, WA via live remote feed, that would be great too 😉 Let’s build the future together.

Well, that’s all for now! Take care!