Upgrading BizTalk Server

By Rob Callaway

In my experience there are two upgrade methods for BizTalk Server environments. You either (1) buy new hardware and rebuild from scratch by installing the latest versions of Windows Server, SQL Server, etc. or (2) perform an “in-place” upgrade where you simply install the new version of BizTalk Server to replace the existing version.

I’ve personally done (and prefer) the former many times, but while recently updating QuickLearn Training’s BizTalk Server classes to BizTalk Server 2013 R2 I decided to give the in-place upgrade a shot. I figured that Windows Server 2012 R2 and SQL Server 2014 weren’t bringing much to the BizTalk table, so keeping the existing SQL Server 2012 SP1 and Windows Server 2012 installations for another year or so would be fine. Additionally, since our courses utilize virtual machines, there are no hardware/software entanglements to consider.

The Plan

  1. Uninstall Visual Studio 2012
  2. Install Visual Studio 2013
  3. Update BizTalk Server 2013 to BizTalk Server 2013 R2
  4. Install all available updates for the computer via Microsoft Update

Let’s get started.

Uninstall Visual Studio 2012

Unless you’re upgrading a development system, this step likely isn’t required, but in my case the virtual machine is used for QuickLearn Training’s Developer Immersion and Developer Deep Dive courses. Although Visual Studio supports side-by-side installations of multiple versions, I opted to remove Visual Studio 2012 since it wasn’t needed for our courses anymore.

This was pretty easy. I went to Programs and Features and chose to uninstall Visual Studio 2012.

Install Visual Studio 2013

Again, this was pretty easy. I simply accepted the default options for installation and walked away for 45 minutes. When I came back I saw this.

VS 2013 Install

Upgrade BizTalk Server 2013 to BizTalk Server 2013 R2

This is where I started feeling nervous. Would it work? Is it really going to be this easy? There was only one way to find out. Before starting the upgrade, I thought about the services used by BizTalk Server and stopped the following services (a scripted equivalent appears after the list):

  • All the BizTalk host instances
  • Enterprise Single Sign-On Service
  • Rule Engine Update Service
  • World Wide Web Publishing Service
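If you prefer to script that step, something along these lines should work from an elevated command prompt. This is only a sketch: host instance services follow the BTSSvc$<HostName> naming convention, so substitute your own host names, and the remaining names are the standard service names for ESSO, the Rule Engine Update Service, and IIS.

net stop "BTSSvc$BizTalkServerApplication"
net stop ENTSSO
net stop RuleEngineUpdateService
net stop W3SVC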

I mounted the BizTalk installation ISO to the virtual machine and ran Setup.exe.

BizTalk Setup.exe

The splash screen! This might actually work!

BizTalk Splash Screen

Product key redacted to protect the innocent.

BizTalk License

Finally, it knows I’m upgrading! I guess I was wrong about needing the Enterprise Single Sign-On Service stopped.

BizTalk Summary

Start ESSO and now we are in business. Hit Upgrade!

BizTalk Upgrade

A few minutes later… boom!

BizTalk Upgrade Complete

Install Other Updates

It’s been a while since this machine last installed updates, so let’s run Microsoft Update.

Updates

I’m going to be here forever.

Lessons Learned

This upgrading stuff is a lot easier than I thought it would be. I strongly recommend it and I’ll probably use the same method when updating the Administrator Immersion and Administrator Deep Dive courses.

Exam 70-499 MCSD:ALM Recertification exam prep

By Anthony Borton

To keep the Microsoft Certified Solution Developer: Application Lifecycle Management (MCSD:ALM) certification current you must complete a recertification exam every two years. Since the release of the MCSD:ALM certification, many of our students have taken our TFS courses to help them prepare for the exams.

As the two-year recertification deadline approaches, early charter exam members are facing the task of preparing for the recertification exam. Here are some helpful resources to help you focus your study.

If you do not currently hold the MCSD:ALM certification, you will be required to complete three exams to earn it. QuickLearn Training offers great instructor-led courses to help you prepare for these exams.

Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns

By Nick Hauenstein

We have just finished a fairly large effort in moving the QuickLearn Training Team blog over to http://www.quicklearn.com/blog; as a result, we had a mix-up with the link for our last post.

This post has moved here: Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns

Aligning Microsoft Azure BizTalk Services Development with Enterprise Integration Patterns

By Nick Hauenstein

Earlier this week, I started a series of discussions with the development team here at QuickLearn Training. These discussions included John Callaway, Rob Callaway, and Paul Pemberton, and centered around best practices for development of Microsoft Azure BizTalk Services integrations. The topic arose as we were working on our upcoming Microsoft Azure BizTalk Services course.

Best practices are always a strange topic to address for a technology that is relatively young, and at the same time rapidly evolving. However, the question comes up in both classes and consulting engagements – no matter what the technology.

With regard to BizTalk Server, we have years’ worth of real-world experience to draw from, both personally and from our customers’ narratives. In addition, there are extensive writings in books, blog posts, white papers, and technical documentation. The same can’t be said, yet, of BizTalk Services.

This then represents an attempt at importing some known-good ideas and ideals into the world of MABS so that value can be realized. It is written not to codify practices and anoint them as “Best” without being proven, but instead to start a dialog on how best to use MABS both in present form, and with additional functionality that is yet to come.

NOTE: I am actively building out integrations to explore these ideas more fully, and expect to find that at least one thing I mention here either won’t work, or isn’t the best after all.

A Language of Patterns

For a brief period of my BizTalk life, I worked on the team that built the Enterprise Service Bus Toolkit 2.0. My job was to document not only the functionality as it was built (along with sample applications), but also how existing Integration Patterns and SOA Patterns could be implemented using the Toolkit.

I explicitly recall that this last point was especially emphasized by the leader of that small team, Dmitri Ossipov. He wanted to communicate how integration patterns documented by Gregor Hohpe and SOA patterns documented by Thomas Erl could be implemented using the ESB Toolkit.

Why would we spend time linking product documentation to patterns? Because Dmitri understood something important. He understood that buzzword compliance wasn’t enough to drive adoption of the platform (i.e., being able to say “this thing uses an ESB under the covers,” or nowadays, “this is cloud-based integration,” or “hybrid integration” [isn’t all integration a hybrid of something?]). Instead, adoption would be driven as developers saw it solving problems, and solving them elegantly – where each Itinerary Service, out-of-the-box or custom, implemented a specific integration pattern, and composing them could solve any integration challenge.

So which problems were the best to try and solve? Probably the most common ones (i.e., the ones that are so common as to have vendor-agnostic industry-wide documented patterns for overcoming them). So what does that have to do with MABS? Everything.

The problem space hasn’t changed. The cloud has been quite disruptive in the overall industry – likely for the best. It has leveled the playing field to the point that the established players can be publicly challenged by marketing teams of smaller players that are brand-new to the scene because the scene itself is still new. At the same time, the art and science of integrating systems is the same.

The patterns for approaching this brave new world are remixes and riffs on the patterns that we’ve already had. As a result, I’m going back to the fundamentals and using them to understand MABS.

When I’m starting on a new integration with BizTalk Server, I first sit down and think of that integration in terms of the problems I’m trying to solve and well known patterns that can be used to overcome those problems. A nice visual example of such a pattern is reproduced here (from Gregor Hohpe’s list of Enterprise Integration Patterns):

Here we’re seeing the Scatter-Gather pattern, which is actually a composite of the Recipient List pattern (one message being sent to multiple recipients) and the Aggregator pattern (multiple messages being aggregated into a single message). This is being presented in the light of a fictional RFQ scenario.

To see it further broken down, we could pull in just the Recipient List Pattern:

Or we could pull in just the Aggregator Pattern:

Once we’ve determined the patterns involved, and how they’re composed, it’s a fairly straightforward exercise to envision the BizTalk components that would be involved and/or absolutely necessary in order to implement the solution.

For the purposes of this post, I’m going to see how specific patterns might be implemented in Microsoft BizTalk Server, as well as Microsoft Azure BizTalk Services, and Microsoft Azure Service Bus. Specifically, I will be examining the Canonical Data Model pattern, Normalizer pattern, Content-Based Router pattern, Publish-Subscribe Channel pattern, and even the Dynamic Router pattern.

This is not to say that these are somehow the only patterns possible in MABS, but they will form a nice basis for discussion. Let’s not get ahead of ourselves though, as there’s still some groundwork to cover.

Seeing the Itinerary Through the Bridges

So many times as I’m reading through content discussing Microsoft Azure BizTalk Services (blogs, code samples, documentation, tutorials, books), I see lots of emphasis placed on a single bridge, but rarely on the itinerary as a whole that contains it. Essentially I’m seeing a lot of this:

image

What are we looking at here? A really boring itinerary that does not showcase the value of MABS, and makes it seem impotent in the process.

Inside the bridge though, there’s a lot going on: message format translation, message validation, message enrichment, message data model transformation, with extensibility points to boot – all put together in a pipes and filters style.

But what if I want to build something bigger than data-in/data-out where not everything has a one-to-one relationship?

Bridges Revisited In Light of BizTalk Server

In our BizTalk Server classes, we ultimately caution people against building these direct point-to-point integrations. Instead, we try to leverage the Canonical Data Model pattern when possible – with the Canonical Data Model being a Canonical Schema.

What does that look like in BizTalk Server? First of all, I personally like to extend a tad beyond this base pattern and mix in a little bit of the Datatype Channel pattern as well.

With that in mind, we would start with a Receive Port for each type of message (e.g., Order, Status Request), and a Receive Location for each source system and data type we expect to see (e.g., FTP with Flat-file, vs. WCF-BasicHttp with XML).

From there, we have a canonical schema for each of those types of messages. An internal Order message for example, or internal Status Request message. At the end of the Receive Port, the appropriate transform would be selected so that a message conforming to the canonical schema is published to the Message Box database.

image

At the Message Box, another pattern is at work, whether or not we ever wanted it. BizTalk Server is doing publish-subscribe routing through the Message Box based on properties stored in the context of the message.

At this point, each recipient of the message has its own Send Port, through which a final transformation to that recipient’s order format is performed, along with a final pipeline, before the message is transmitted through the adapter.
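In practice, the subscription behind each of those Send Ports is usually nothing more than a filter over promoted context properties. For example, a filter along these lines (the message type URI is hypothetical) would pull every canonical order out of the Message Box:

BTS.MessageType == "http://schemas.fabrikam.com/canonical#Order"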

What is all of this doing for us? It’s separating concerns and providing a loosely-coupled design.

I can have multiple sources for a message, and easily add a new source. To add a source is a matter of adding a Receive Location and a Map that translates the incoming message to internal message formats. To add a destination is just as simple and loosely coupled. It really is a beautiful place to live.

The Tightly Coupled Nightmare We’re Building for Ourselves

So why then, when presented with the same integration problems, do we use MABS like this? Is it just that what’s being published widely isn’t representative of the real-world live production solutions that are out there right now? Or is it that we’re blinded and can’t see the itinerary as a whole because we’re caught up with the bridge? I honestly don’t know the answer to this question. All I know for sure is that single bridge itineraries are not the answer for everything.

Why? Because this is a fairly tightly coupled design, and something that we might want to reconsider.

image

So let’s do just that. Let’s start though by following a message as it would flow through this system, while comparing how BizTalk Server would handle the same.

The whole thing starts with the Channel Adapter pattern.

In BizTalk Server, that’s the Adapter configured within each Receive Location that indicates from where the message will be transported. In MABS, it’s our Source.

Next we go through some filters:

In BizTalk Server, this is our Pipeline inside the Receive Location. In MABS, it’s most of our Bridge. WCF is also rocking this pattern (potentially earlier in the process) in the form of the channel stack, and both BizTalk Server and MABS give you hooks into deep customization of that WCF channel stack if you can’t get enough opportunities to touch the message.

For the BizTalk implementation of the Canonical Data Model pattern, we would then go through the Message Translator:

This is actually where things get interesting. In BizTalk Server, this is the Map that executes after the Pipeline, within the context of the Receive Port. For MABS, we’re still in the middle of the Bridge at this point. In fact for MABS, it’s one of the “Filters” to borrow a term from the pattern above, alongside other patterns such as message enrichment.

After this point, BizTalk enters a publish and subscribe step, and our simple, single-bridge, MABS itinerary breaks down. We won’t have another opportunity to prepare the message for its destination beyond our Route Actions – unless we’ve already prematurely made the leap and readied the message for only a single destination, ever.

Why does it break down? Because we’ve artificially limited ourselves to a single bridge. We’ve seen that bridges are highly capable, and have a lot going on, so we try to get it all done up-front. But is that a Best Practice even for BizTalk? Do we do all of our processing on receive just because pipelines are extensible and can host any custom code our hearts desire? Thankfully the model of BizTalk Server, centered around the Message Box, forces us to be a little bit smarter than that.

Standard best practices for BizTalk are helping us “Minimize dependencies when integrating applications that use different data formats.” Shouldn’t best practices for MABS allow us to do the same?

Crossing the Next Bridge

Consider the following Itinerary:

image

Here we have a situation where as I’m receiving my message, I don’t yet know the destination. In fact, I may have to perform all of the work in my first bridge before I know that destination. After I’m done with that first bridge though, I will have a standard internal representation of a “Trade” in this case.

From there, I am using Route Filters to determine my destination, and bridges to ensure the destination gets a message it can deal with.

More specifically, there are really four stages of the overall execution:

image

  1. The first stage is my source and first bridge. These are playing a role similar to that of a Receive Port in BizTalk Server, ending in a Map. I’m not necessarily taking advantage of every stage within the bridge, but I am certainly doing some message transformation, and maybe even enrichment. The main goal of this first stage is to get the message translated FROM whatever it looked like in the source system TO our internal representation.
  2. The second stage of execution is between bridges. These are my routes, each with route filters that determine the next stage of the execution. They are playing a role similar to the Filters on my Send Ports in BizTalk, and yet at the same time they’re doing something completely different. Why? Because only one bridge will ever receive the message – even if both filters match, one will have priority. This is not true publish-subscribe; it is instead more akin to a Content-Based Router of sorts. We’ll address in the next section how to get publish and subscribe happening for even more loosely-coupled execution.
  3. The third stage is my second bridge, route actions, and my destination. These are playing a role similar to that of a Send Port in BizTalk Server beginning with a Map. I’m using this bridge to take our internal representation of the message and get it ready for the destination system.
  4. Sometimes we need dynamic routing to some degree even when we know the destination. That’s what might be happening through Route Actions in step 4. In this specific case, that’s not a true statement. However, if my destination were SFTP, for example, this would be my opportunity to set a destination file name based on some value inside the message.

True Publish-Subscribe In MABS

What if I were to tell you that MABS does publish-subscribe better than the Message Box database? Well it’s not true, so you shouldn’t believe it – like most click-baiting headlines. In fact, MABS doesn’t do publish-subscribe at all. Rather, it relies on Microsoft Azure Service Bus to provide publish-subscribe capabilities – interestingly, a technology that we can also take advantage of through adapters first made available in BizTalk Server 2013.

So how do we do it? We publish to a Service Bus Topic, and receive through Service Bus Subscriptions (Queues that subscribe to messages published to Topics based on Filters). Consider the following itinerary:

image

We no longer have connections between the three seemingly independent sections of this itinerary. In fact, if we didn’t need to share any resources between them, they could be built as separate itineraries, in separate Visual Studio projects.

This is not new stuff; in fact, Richard Seroter mused about the possibilities that Service Bus would provide in MABS all the way back in February.

Dynamic Sends and Receives

Another interesting scenario enabled here is the idea of a Dynamic Receive. What would that look like? Let’s say I have a subscription that I want to toggle or reconfigure at runtime. Maybe all trades for a specific company should result in a special alert message routed somewhere, based on conditions that are not known at design-time, and could change at any time. Let’s also say that I don’t want to re-deploy my existing bridge.

I can simply add another subscription to the topic for all trades related to that company, and deploy a new itinerary that handles the special alert message. But that doesn’t sound really dynamic in nature. In fact, it’s not bringing anything new to the table that BizTalk Server doesn’t do.

On the other hand, what if I had this alerting itinerary already built and deployed, and I want to dynamically route messages to it with conditions that change constantly at runtime (outside the scope of the messages themselves) – e.g., toggle a subscription for a given trade based on its current Ask price and the symbol in the trade message?

In that case, my special alert itinerary might be fed by a queue that receives messages forwarded from a subscription generated on the fly at runtime (because Service Bus can do that!), maybe even within a custom messaging inspector if I go fully insane.
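To make that concrete, here is a rough sketch using the Microsoft.ServiceBus SDK of that era. The topic, queue, subscription, and property names are hypothetical, and the symbol and Ask price would need to be carried as BrokeredMessage properties for the SQL filter to see them.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class TradeAlertSubscriptions
{
    // Sketch: toggle a runtime subscription that forwards matching trades to the
    // queue feeding the already-deployed alerting itinerary.
    public static void EnableAlert(string connectionString, string symbol, decimal askThreshold)
    {
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
        string subscriptionName = "alert-" + symbol.ToLowerInvariant();

        if (namespaceManager.SubscriptionExists("trades", subscriptionName)) return;

        var description = new SubscriptionDescription("trades", subscriptionName)
        {
            // Auto-forward matching messages into the queue the alert itinerary reads from
            ForwardTo = "trade-alert-queue"
        };

        // Only trades for the given symbol above the interesting Ask price match
        var filter = new SqlFilter(string.Format(
            System.Globalization.CultureInfo.InvariantCulture,
            "Symbol = '{0}' AND Ask > {1}", symbol, askThreshold));

        namespaceManager.CreateSubscription(description, filter);
    }
}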

Pain Points and Opportunities For Improvement

Ultimately we’re so close, and yet still so far away from having an ideal development experience with MABS. I’m happy that we have process visibility at design-time. Being able to visualize the whole message flow can be invaluable. On the other hand, I’m not happy with the cost at which this comes: sacrificing the ability to easily partition my resources on Visual Studio Project boundaries.

Separate Projects Per Interface

Ideally I would like to have a project per interface that I’m integrating with, and one common project that contains common schemas. I would also like to be able to split up the whole itinerary into smaller pieces (i.e., files) without losing the ability to visualize the itinerary as a whole.

Instance Subscription Equivalent for Two-Way Sources

In BizTalk Server, our two-way receives turn into two one-way exchanges as messages are published to the Message Box. There’s some magic built-in for correlation, but for the most part we aren’t forced into doing everything that follows in a purely request-reply message exchange pattern. Currently Microsoft Azure BizTalk Services does not offer the same capability. As a result, the nice publish/subscribe example itinerary above is only possible in the case of a one-way pattern.

Best Practices

So what are some practices that we can derive from this whole discussion? Here’s an incomplete list that I’m toying with at the moment:

  1. Build your cloud-based integrations with integration patterns in mind – don’t ignore the fundamentals, or throw away the lessons of the past
  2. Allow bridges to specialize on one portion of the process flow – don’t try to build your entire integration using a single bridge
  3. Use the Canonical Data Model pattern when integrating between multiple source and destination systems where the list of those systems is subject to change
  4. Leverage the publish-subscribe capabilities of Service Bus when needed, don’t rely only on routes within MABS
  5. Push down to on-premise BizTalk Server to handle patterns that aren’t yet possible in MABS.
    • Mediating mismatched message exchange patterns through BizTalk Server’s instance subscriptions for two-way receives (e.g., an incoming synchronous request/response interface provided against an asynchronous one-way request with a one-way callback)
    • Handling aggregation of messages, which will require the use of Orchestration
    • Handling the O in the VETRO pattern (Validate, Enrich, Transform, Route, Operate), which again will require the use of Orchestration
    • Handling the execution of Business Rules within BizTalk Server on-premise

Looking Forward

I’m really looking forward to seeing how Workflow is going to fit into the picture. We’ll have to wait until Integrate 2014 in December to know for sure, but if we keep the fundamentals in mind, it shouldn’t be hard to figure out how they will fit into a concept of MABS itineraries rooted in the integration patterns that make them up.

I truly hope that you have enjoyed this post, and I thank you for reading to the end, if indeed you have and not simply skipped to the end to find out if I ever stopped typing. Remember that I want this to be a dialog, both retrospectively in terms of what is working and what isn’t working with your MABS integrations, as well as a forward-looking dialog as to what practices will likely emerge as generally applicable and ultimately most beneficial.

BizTalk Server 2013 R2 Pipeline Component Wizard

By Nick Hauenstein

While working on the upcoming installment in my BizTalk Server 2013 R2 New Features Series, I realized that I did not yet have a version of Martijn Hoogendoorn’s excellent BizTalk Server Pipeline Component Wizard that would work happily in Visual Studio 2013.

As a result, I filed this issue, and then made the necessary modifications to make it work in Visual Studio 2013 with BizTalk Server 2013 R2. It is now available for download here if you want to give it a test run.

image

Unfortunately, that has consumed a significant portion of my evening, and thus the next installment in the blog series will be delayed. But fear not, there is some really cool stuff just around the corner.

Integrate 2014 Registration is Live

In other news, Integrate 2014 registration is now live, and the event is only 72 days away! It’s going to be an awesome start to a cool Redmond December as 300-400 BizTalk enthusiasts from around the globe converge on the Microsoft campus for a few days of announcements, learning, networking, and community.

I’m not going to lie, I’m pretty excited for this one! Especially seeing these session titles for the first day:

  • Understand the New Adapter Framework in BizTalk Services
  • Introducing the New Workflow Designer in BizTalk Services

Well, that’s all for now. I hope to see you there!

BizTalk Summit 2014 in December

By Paul Pemberton

Save the date for BizTalk Summit 2014, which will take place December 3-5 on the Microsoft Campus in Redmond, WA. There will be 300+ partners and customers, so bring your questions, knowledge, and business cards.

Agenda

  • Executive keynotes outlining Microsoft vision and roadmap
  • Technical deep-dives with product group and industry experts
  • Product announcements
  • Real-world demonstrations from lighthouse customers
  • Roundtable discussions + Q&A
  • Partner Showcase Sessions
  • We are also planning hands-on labs so you can roll up your sleeves and experience new capabilities
  • Networking and social activities

The Upcoming End of Extended Support for BizTalk Server 2006

By Paul Pemberton

In July, Microsoft ended all support for BizTalk Server 2004. This got us to thinking about the looming end of support for other versions of BizTalk Server: mainstream support for 2006 and 2009 has already ended, and extended support for 2006 (and R2) ends in July 2016:
http://social.technet.microsoft.com/wiki/contents/articles/18709.biztalk-server-product-lifecycle.aspx

According to Microsoft, extended support differs quite a bit from mainstream support, including:

  • No non-security hotfixes
  • You will be charged for incident support requests
  • No warranty claims
  • No design or feature change requests

Our new BizTalk Server Administrator Deep Dive course covers the skills your team needs to assess and upgrade your infrastructure. View our upcoming advanced BizTalk Server administration class schedule:

Date      Duration   Location
Sep 29    5 days     Kirkland, WA (+ Remote)
Nov 03    5 days     Utrecht, Netherlands
Nov 17    5 days     Kirkland, WA (+ Remote)

Decoding JSON Data Using the BizTalk Server 2013 R2 JSONDecode Pipeline Component

By Nick Hauenstein

This is the fourth in a series of posts exploring What’s New in BizTalk Server 2013 R2. It is also the second in a series of three posts covering the enhancements to BizTalk Server’s support for RESTful services in the 2013 R2 release.

In my last post, I wrote about the support for JSON Schemas in BizTalk Server 2013 R2. I started out with a small project that included a schema generated from the Yahoo Finance API and a unit test to verify the schema model. I was going to put together a follow-up article last week, but spent the week traveling, in the hospital, and then recovering from being sick.

However, I am back and ready to tear apart the next installment that already hit the github repo a few days back.

Pipeline Support for JSON Documents

In BizTalk Server 2013 R2, Microsoft approached the problem of dealing with JSON content in a way fairly similar to the approach that we used in the previous version with custom components – performing the JSON conversion as an operation in the Decode stage of the pipeline, thus requiring the Disassemble stage to include an XMLDisassemble component for property promotion.

The official component, Microsoft.BizTalk.Component.JsonDecoder, takes in two properties, Root Node and Root Node Namespace, that help determine how the XML message will be created.

Finally, there isn’t a JSON receive pipeline included in BizTalk Server 2013 R2 – only the pipeline component. In other words, in order to work with JSON, you will need a custom pipeline.

image

 

Creating a Pipeline for Receiving JSON Messages

Ultimately, I would like to create a pipeline that is going to be reusable so that I don’t have to create a new pipeline for each and every message that will be received. Since BizTalk message types are all about the target namespace and root node name, it’s not reasonable to set that value to be the same for every message – despite having different message bodies and content. As a result, it might be best to leave the value blank at design time and set it per-instance at runtime.

This is also an interesting constraint, because if we are receiving this message not necessarily just as a service response, we might end up needing to create a fairly flexible schema (i.e., with a lot more choice groups) depending on the variety of inputs / responses that may be received – something that will not be explored within this blog post, but would be an excellent discussion to bring up during one of QuickLearn’s BizTalk Server Developer Deep Dive classes.

In order to make the pipeline behave in a way that will be consistent with typical BizTalk Server message processing, I decided to essentially take what we have in the XMLReceive pipeline and simply add a JsonDecoder in the Decode stage, with none of its properties set at design time.

image

Testing the JSONReceive Pipeline

In the same vein as my last post, I will be creating automated tests for the pipeline to verify its functionality. However, we cannot use the built-in support for testing pipelines in this case – because properties of the pipeline were left blank, and the TestablePipelineBase class does not support per-instance configuration. Luckily, the Winterdom PipelineTesting library does support per-instance configuration – and it has a nifty NuGet package as of June.

Unfortunately, the per-instance configuration is not really pretty. It requires an XML configuration file that resembles the guts of a bindings file in the section dedicated to the same purpose. In other words, it’s not as easy as simply setting properties on the class instance in code. To get around that to some degree, and to be able to reuse the configuration file with different property values, I put together a template with tokens in place of the actual property values.

image
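As a rough sketch, such a tokenized template might look something like the following. The RootNode and RootNodeNamespace element names are the ones the helper method below swaps out; the stage CategoryId and the exact component layout should be lifted from your own binding export rather than copied from this sketch.

<Root xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Stages>
    <!-- CategoryId is the GUID of the Decode stage; copy it from a binding export -->
    <Stage CategoryId="$DecodeStageCategoryId$">
      <Components>
        <Component Name="Microsoft.BizTalk.Component.JsonDecoder">
          <Properties>
            <!-- vt="8" marks the values as strings (see the note below) -->
            <RootNode vt="8">$RootNode$</RootNode>
            <RootNodeNamespace vt="8">$RootNodeNamespace$</RootNodeNamespace>
          </Properties>
        </Component>
      </Components>
    </Stage>
  </Stages>
</Root>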

NOTE: If you’re copying this approach for some other pipeline components, the vt attribute is actually very important in ensuring your properties will be read correctly. See KB884981 for details.

From there, the per-instance configuration is a matter of XML manipulation and use of the ReceivePipelineWrapper class’ ApplyInstanceConfig method:

private void configureJSONReceivePipeline(ReceivePipelineWrapper pipeline, string rootNode, string namespaceUri)
{
    string configPath = Path.Combine(TestContext.DeploymentDirectory, "pipelineconfig.xml");

    var configDoc = XDocument.Load(configPath);

    configDoc.Descendants("RootNode").First().SetValue(rootNode);
    configDoc.Descendants("RootNodeNamespace").First().SetValue(namespaceUri);

    configDoc.Save(configPath);

    pipeline.ApplyInstanceConfig(configPath);
}

The final test code includes a validation of the output against the schema from last week’s post. As a result, we’re really dealing with an integration test here rather than a unit test, but it’s a test nonetheless.

[TestMethod]
[DeploymentItem(@"Messages\sample.json")]
[DeploymentItem(@"Configuration\pipelineconfig.xml")]
public void JSONReceive_JSONMessage_CorrectValidXMLReturned()
{

    string rootNode = "ServiceResponse";
    string namespaceUri = "http://schemas.finance.yahoo.com/API/2014/08/";

    string sourceDoc = Path.Combine(TestContext.DeploymentDirectory, "sample.json");
    string schemaPath = Path.Combine(TestContext.DeploymentDirectory, "ServiceResponse.xsd");
    string outputDoc = Path.Combine(TestContext.DeploymentDirectory, "JSONReceive.out");

    var pipeline = PipelineFactory.CreateReceivePipeline(typeof(JSONReceive));

    configureJSONReceivePipeline(pipeline, rootNode, namespaceUri);

    using (var inputStream = File.OpenRead(sourceDoc))
    {
        pipeline.AddDocSpec(typeof(ServiceResponse));
        var result = pipeline.Execute(MessageHelper.CreateFromStream(inputStream));

        Assert.IsTrue(result.Count > 0, "No messages returned from pipeline.");

        using (var outputFile = File.OpenWrite(outputDoc))
        {
            result[0].BodyPart.GetOriginalDataStream().CopyTo(outputFile);
            outputFile.Flush();
        }

    }

    ServiceResponse schema = new ServiceResponse();
    Assert.IsTrue(schema.ValidateInstance(outputDoc, Microsoft.BizTalk.TestTools.Schema.OutputInstanceType.XML),
        "Output message failed validation against the schema");

    Assert.AreEqual(XDocument.Load(outputDoc).Descendants("Bid").First().Value, "44.97", "Incorrect Bid amount in output file");

}

After giving it a run, it looks like we have a winner.

image

Coming Up in the Next Installment

In the next installment of this series, I will actually put to use what we have here, and build out a more complete integration that allows us to experience sending JSON messages as well, using the new JsonEncoder component.

Take care until then!

If you would like to access sample code for this blog post, you can find it on github.

JSON Schemas in BizTalk Server 2013 R2

By Nick Hauenstein

This is the third in a series of posts exploring What’s New in BizTalk Server 2013 R2. It is also the first in a series of three posts covering the enhancements to BizTalk Server’s support for RESTful services in the 2013 R2 release.

In my blog post series covering the last release of BizTalk Server 2013, I ran a 5 post series covering the support for RESTful services, with one of those 5 discussing how one might deal with JSON data. That effort yielded three separate executable components:

  1. JSON to XML Converter for use with the WFX Schema Generation tool
  2. JSON Decoder Pipeline Component (JSON –> XML)
  3. JSON Encoder Pipeline Component (XML –> JSON)

It also yielded some good discussion of the same over on the connectedcircuits blog, wherein some glitches in my sample code were addressed – many thanks for that!

All of that having been said, similar components in one form or another are now available out of the box with BizTalk Server 2013 R2 – and I must say the integrated VS 2013 tooling blows away a 5 minute WinForms app. In this post we will begin an in-depth examination of this improved JSON support by first exploring the support for JSON Schemas within a BizTalk Server 2013 R2 project.

How Does BizTalk Server Understand My Messages?

All BizTalk Server message translation occurs at the intersection between 2 components: (1) A declarative XSD file that defines the model of a given message, with optional inline parsing/processing annotations, and (2) an executable pipeline component (usually within the disassemble stage of a receive pipeline or assemble stage of the send pipeline) that reads the XSD file and uses any inline annotations necessary to parse the source document.

This is the case for XML documents, X12, EDIFACT, Flat-file, etc… It only logically follows then that this model could be extended for JSON. In fact, that’s exactly what the BizTalk Server team has done.

JSON is an interesting beast however, as there already exists a schema format for specifying the shape of JSON data. BizTalk Server prefers working with XSD, and makes no exception for JSON. Surprisingly this XSD looks no different than any other XSD, and contains no special annotations to reflect the message being typically represented as JSON content.

What Does a JSON Schema Look Like?

Let’s consider this JSON snippet, which represents the output of the Yahoo! Finance API performing a stock quote for MSFT:

image
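A heavily trimmed sketch of that instance follows (field values are illustrative, and the real response carries many more fields):

{
  "quote": {
    "symbol": "MSFT",
    "Ask": null,
    "Bid": "44.97"
  }
}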

This is a pretty simple instance, and it is also an interesting case because it has a null property Ask, as well as a repeating record quote that does not actually repeat in this instance. I went ahead and saved this snippet to the worst place possible – my Desktop – as quote.json, and then created a new Empty BizTalk Server Project in Microsoft Visual Studio 2013 (with the recently released Update 3).

From there, I can generate a schema for this document by using the Add New Item… context-menu option for the project within Solution Explorer, and then choosing JSON Schema Wizard in the Add New Item dialog:

image

The wizard looks surprisingly like the Flat-file schema wizard, and it looks like quite a bit of that work might have been lifted and re-purposed for the JSON schema wizard. What’s nice about this wizard though, is that this is really the only page requiring input (the previous page is the obligatory Welcome screen) – so you won’t be walking through the input document while also recursively walking through the wizard.

image

Instead, the wizard makes some core assumptions about what the schema should look like (much like the WFX schema generator). In the case of this instance, it’s not looking so great. Aside from essentially every single element being made optional in the schema, the quote record was not set as having a maxOccurs of unbounded – though this should really be expected given that our input instance gave no indication of repetition. However, maybe you’re of the opinion that the wizard could have been written to infer that upon noticing it was a child of a record with a plural name – which might be an interesting option to see.

image

Next, the Ask element was typed as anyType instead of decimal – which again should be expected given that it was simply null in the input instance. However, maybe this could be an opportunity to add pages to the wizard asking for the proper type of any null items in the input instance.

Essentially, it may take some initial massaging to get everything in place and happy. After tweaking the minOccurs and maxOccurs, as well as types assigned to each node, I decided it would be a good time to ensure that my modifications would still yield a schema that would properly validate the input instance I provided to the wizard.
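As a sketch of the kind of tweaks involved (the exact structure the wizard emits will differ, and the surrounding schema is omitted), the two problem nodes go from something like this:

<xs:element minOccurs="0" name="quote">
  <xs:complexType>
    <xs:sequence>
      <xs:element minOccurs="0" name="Ask" type="xs:anyType" />
      <!-- ...remaining quote fields... -->
    </xs:sequence>
  </xs:complexType>
</xs:element>

to something like this:

<xs:element minOccurs="0" maxOccurs="unbounded" name="quote">
  <xs:complexType>
    <xs:sequence>
      <xs:element minOccurs="0" name="Ask" type="xs:decimal" />
      <!-- ...remaining quote fields... -->
    </xs:sequence>
  </xs:complexType>
</xs:element>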

How do We Test These Schemas Or Validate Our JSON Instances?

Quite simply, you don’t. At least not using the typical Validate Instance option available in the Solution Explorer context-menu for the .xsd file. Instead this will require a little bit of work in custom-code.

Where am I writing that custom code? Well right now I’m on-site in Enterprise, Alabama teaching a class that involves a lot of automated testing. As a result, I’m in the mood for writing some unit tests for the schema – which also means updating the project properties so that the class generated for the schema derives from TestableSchemaBase and adds a method we can use to quickly validate an instance against the schema.

image

It also means adding a new test project to the solution with a reference to the following assemblies:

  • System.Xml
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PublicAssemblies\Microsoft.BizTalk.TOM.dll
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PublicAssemblies\Microsoft.BizTalk.TestTools.dll
  • C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\PublicAssemblies\Microsoft.XLANGs.BaseTypes.dll
  • Newtonsoft.Json (via Nuget Package)

That’s not all the setup required, unfortunately. I still have to add a new TestSettings file to the solution, ensure that deployment is enabled, that it is deploying the Microsoft.BizTalk.TOM.dll assembly listed above, and that it is configured to run tests in a 32-bit host. From there, I need to click TEST > Test Settings > Select Test Settings File to select the added TestSettings file.

image

image

image

image

With all the references in place and the solution all set up, I’ll want to bring in the message instance(s) to validate. In order to ensure that the test has access to these items at runtime, I will add the applicable DeploymentItem attribute to each test case that requires one.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Newtonsoft.Json;
using System.IO;
using System.Xml;

namespace QuickLearn.Finance.Messaging.Test
{
    [TestClass]
    public class ServiceResponseTests
    {

        [TestMethod]
        [DeploymentItem(@"Messages\sample.json")]
        public void ServiceResponse_ValidInstanceSingleResultNullAsk_ValidationSucceeds()
        {

            // Arrange
            ServiceResponse target = new ServiceResponse();
            string rootNode = "ServiceResponse";
            string namespaceUri = "http://schemas.finance.yahoo.com/API/2014/08/";
            string sourceDoc = Path.Combine(TestContext.DeploymentDirectory, "sample.json");
            string sourceDocAsXml = Path.Combine(TestContext.DeploymentDirectory, "sample.json.xml");

            ConvertJsonToXml(sourceDoc, sourceDocAsXml, rootNode, namespaceUri);

            // Act
            bool validationResult = target.ValidateInstance(sourceDocAsXml, Microsoft.BizTalk.TestTools.Schema.OutputInstanceType.XML);

            // Assert
            Assert.IsTrue(validationResult, "Instance {0} failed validation against the schema.", sourceDoc);

        }

        public void ConvertJsonToXml(string inputFilePath, string outputFilePath,
            string rootNode = "Root", string namespaceUri = "http://tempuri.org", string namespacePrefix = "ns0")
        {
            var jsonString = File.ReadAllText(inputFilePath);
            var rawDoc = JsonConvert.DeserializeXmlNode(jsonString, rootNode, true);

            // Here we are ensuring that the custom namespace shows up on the root node
            // so that we have a nice clean message type on the request messages
            var xmlDoc = new XmlDocument();
            xmlDoc.AppendChild(xmlDoc.CreateElement(namespacePrefix, rawDoc.DocumentElement.LocalName, namespaceUri));
            xmlDoc.DocumentElement.InnerXml = rawDoc.DocumentElement.InnerXml;

            xmlDoc.Save(outputFilePath);
        }

        public TestContext TestContext { get; set; }
    }
}

What Exactly Am I Looking At Here?

Here in the code we’re converting our JSON instance first to XML using the Newtonsoft.Json library. Once it is in an XML format, it should (in theory at least) conform to the schema definition generated by the BizTalk JSON Schema Wizard. From there, we take the output XML and feed it into the ValidateInstance method of the schema to perform validation.

The nice thing about doing it this way is that you not only get a copy of the file to use within the automated test itself, but you can also use the file generated within the test in concert with the Validate Instance option of the schema for performing quick manual verifications as well.

After updating the schema, it looks like it’s going to be in a usable state for consuming the service:

Screenshot of final code

Coming Up Next Week

Next week will be part 2 of the JSON series in which we will test and then use this schema in concert with the tools that BizTalk Server 2013 R2 provides for consuming JSON content.

If you would like to access sample code for this blog post, you can find it on github.

Getting Started with BizTalk Server 2013 R2’s Built-in Health Monitoring

By Nick Hauenstein

This is the second in a series of posts exploring What’s New in BizTalk Server 2013 R2.

With the BizTalk Server 2013 R2 release, Microsoft has finally implemented a common request to have some level of built-in monitoring tool for a BizTalk Server installation. While this built-in option won’t replace things like the BizTalk Server 2013 Monitoring Management Pack for System Center Operations Manager, or come remotely close to the feature set of third-party options like BizTalk360 or AIMS for BizTalk, it does provide an out-of-the-box solution for performance monitoring, environment validation, and notifications.

Ultimately, this tool was built by the same project team that created the MsgBoxViewer tool (MBV), and represents an effort to more tightly integrate this stand-alone tool with the BizTalk Server Administration Console.

The first release supports the following features, with updates promised in the future:

  • Ability to monitor multiple BizTalk Server environments
  • MBV report generation and viewing
  • Dashboard view for overall BizTalk Server environments health
  • Scheduled report collection
  • Email notifications
  • Performance monitor integration with pre-loaded scenario-based performance counters
  • Report management

I Have BizTalk Server 2013 R2, But Where Is This Health Monitor?

Unfortunately, the Health Monitor is not registered for use out of the box, and doesn’t show up anywhere by default. Before making use of it, you’ll have to do some dirty work to get it prepared for use. The core files live under the BizTalk Server installation directory at SDK\Utilities\Support Tools\BizTalkHealthMonitor.

BizTalk Server 2013 R2 Health Monitor Files

So what do we do here? We need to run InstallUtil.exe against the MBVSnapin.dll. In order to accomplish this, we can either drop to the command line, or drag and drop MBVSnapin.dll on InstallUtil.exe.
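From the command line, that boils down to something like the following (a sketch: the paths assume a default BizTalk Server 2013 R2 installation and the 32-bit .NET Framework 4 copy of InstallUtil; run it from an elevated prompt):

cd "C:\Program Files (x86)\Microsoft BizTalk Server 2013 R2\SDK\Utilities\Support Tools\BizTalkHealthMonitor"
"%WINDIR%\Microsoft.NET\Framework\v4.0.30319\InstallUtil.exe" MBVSnapin.dll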

Register BizTalk Server 2013 R2 Health Monitor

Once it is registered, you can add it to a Management Console alongside the BizTalk Server Administration Console for an all-up management experience.

In order to do that, run mmc /32

Running MMC /32

After a nice clean and empty management console appears, press CTRL+M, and then add both the BizTalk Health Monitor and the Microsoft BizTalk Server Administration snap-ins to the console.

Adding BizTalk Server 2013 R2 Health Monitor to the Console

You end up with an Administration Console window containing the items shown in the screenshot below. This might be a good opportunity to add the Event Viewer snap-in for each of your runtime servers as well. At this point, you may want to save the console setup for later use.

BizTalk Server 2013 R2 Administration Console with Health Monitor

What Can I Do with This Thing?

If you expand the Performance node, and click Current Activity, you will be able to examine select performance counters across your BizTalk Server installations through an embedded perfmon display.

image

If you right-click each BizTalk Group within the Health Monitor, you have the ability to execute a set of rules that validate your installation while highlighting problem areas.

Running Analysis

Once you run the analysis, a node is added to the navigation pane labeled with the date and time of the analysis. This report contains the result of executing validation rules at a fixed point in time. This report can be sent via email, or opened in the browser for additional details.

Results of Analysis

Right now, it’s looking like my installation is throwing a pretty critical warning when it comes to the Jobs category. Let’s see what that might be.

Jobs Warning

It looks like the Backup BizTalk Server job hasn’t been enabled, and there isn’t any history indicating that this job has ever executed. That’s fairly concerning and problematic. It would be nice if we could have been notified about that in a more proactive manner.

Enabling Automatic Scans / Notifications

If I go back to the BizTalk Group within the Health Monitor, and click Settings, I will find myself at a screen that enables me to configure automatic analysis of my BizTalk Server Group as well as notifications of scan results.

BizTalk Health Monitor Settings Menu

Additionally, I can even configure the queries executed, rules evaluated, and the views on top of that information I want to include in each analysis.

Configuring Reporting Settings

If I want to enable notifications, I have a few different options. I can either configure email notifications, or if I want to essentially pipe these notifications into another tool that can consume event log entries, I can direct notifications to the event log instead.

Notification Settings

More to Come

As mentioned earlier, it sounds like the team is already well underway with an update to this tool, and it’s safe to say that there will likely be more to come. I would venture to guess that this will mean more features and deeper console integration (since there are still quite a few places where clicking an option launches the browser to show full details). We’ll keep this space updated.

In the meantime, if you’re just now moving to either BizTalk Server 2013, or BizTalk Server 2013 R2, and you want to keep your skills up to date, check out one of our BizTalk 2013 Developer Immersion classes or BizTalk 2013 Administrator Immersion classes. Just this last week, students in the developer class that I taught were able to see this functionality demonstrated live.

If you’re already a QuickLearn student, keep following the blog to get the latest bleeding edge information as it becomes available. The series will continue next week!