Azure App Service Logic Apps Course: Update 1

By Rob Callaway

Lessons Learned

Over the last few months, everyone here at QuickLearn Training has learned a thing or two about the Azure App Service and Logic Apps team at Microsoft. The most obvious is that the team is full of Work-a-saurus-Rexes. The number of changes and added features since Azure App Service Logic Apps went into Public Preview (on March 24th) is astounding.

Here’s another thing we’ve learned: keeping up with those changes (and, more importantly, keeping our Cloud-Based Integration Using Azure App Service course up-to-date with those changes) is going to be a fascinating process. It seems like every day we discover something new or different, and we have to decide the best way to incorporate it into our course. Honestly, with the cutting-edge technology, the always-interesting integration stories, and the awesome team that I work with, I’ve never had more fun designing a course.

Updates to Azure App Service Logic Apps

Enough with the praise! The real purpose of this entry is to provide a log of the updates that we’ve made to the course since our first run last month (May 6th – 8th).

  • Coverage of the updates and changes to Visual Studio templates introduced in the Azure SDK 2.6
  • Added coverage for the JSON encoder API App
  • Added lecture and labs on building custom API Apps that implement Push and Poll triggers
  • Added coverage of using the T-Rex Metadata Library to mark up API App objects and create custom Swagger metadata for use by the Logic App Designer
  • Restructured the course to provide a more seamless flow through the various technologies

These changes represent a month’s worth of work for the QuickLearn team, and they come in addition to all the amazing content that the course already included.

Trust Us, We’re Professionals

Azure App Service Logic Apps are the future of the Microsoft integration story. If you haven’t looked at them yet, the time to start is now. If you have looked and you’re finding it hard to keep up with the rapid evolution, don’t fret, because we have your back. It’s probably not your full-time job to stay up-to-date on these rapid changes, but it is ours. We love doing it, our team is committed to staying up-to-date on everything in the realm of Logic Apps, and we’re happy to help keep you up-to-date too. Your next chance to catch this exciting and fun class is July 13th, 2015.

As always, your purchase of our class comes with the ability to retake the course for free anytime within 6 months.

Azure App Service Logic Apps in Visual Studio 2013 with Azure SDK 2.6

By Nick Hauenstein

As shown today in Ilya Grebnov and Stephen Siciliano’s Build 2015 session, titled simply “Logic Apps”, there is now (as of the 29th, actually) a nice project template for creating a deployment project containing a Logic App with separate per-environment parameters files. The deployment project is really scoped higher than the Logic App itself; it is instead a definition for an Azure Resource Group to be provisioned by Azure Resource Manager.

Azure Resource Group Project Template

Selecting the project template (found in the Cloud category as Azure Resource Group) launches a dialog asking for the type of resource(s) that you would like the resource group project to start with. There are two resource types that include Logic Apps: Logic App, and Logic App and API App.

Logic App and API App resource selection dialog

Once created, the project (like any other Cloud Deployment Project up to this point) contains a PowerShell script to perform the actual deployment, along with a helper executable named AzCopy.exe. In addition to that, we get not only a file describing the deployment of an App Service Plan, Gateway, API App, and Logic App, but also a parameters file — initially only for a dev environment, but it is a step in the right direction and shows how to make it happen.

Resource Group Project Contents

How do we know that this parameters file will be used? Well, the parameters file itself is actually a parameter within the Deploy-AzureResourceGroup.ps1 deployment script, and the default is to use the dev file.


Inside, you will find the parameters apiAppName, gatewayName, logicAppName, and svcPlanName.

Parameters

The definition for the Logic App itself is contained deep within the LogicAppAndAPIApp.json file (starting around line 271 in my test shown here):

Logic App Definition

It consists of a recurrence trigger (polling every hour) that invokes an operation with an id of getValues on the deployed API App, and outputs an array containing the value of the readValues property on the body of the API App response. I guess that’s the “Hello World” of the Logic App world, eh?
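
For reference, here’s a rough sketch of the shape of that definition. This is paraphrased from memory rather than copied from the template, so treat the host id, schema URL, and property names as illustrative:

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2014-12-01-preview/workflowdefinition.json#",
  "triggers": {
    "recurrence": {
      "type": "Recurrence",
      "recurrence": { "frequency": "Hour", "interval": 1 }
    }
  },
  "actions": {
    "getValues": {
      "type": "ApiApp",
      "inputs": {
        "apiVersion": "2015-01-14",
        "host": { "id": "[the deployed API App's resource id]" },
        "operation": "getValues"
      }
    }
  },
  "outputs": {
    "readValues": {
      "type": "Array",
      "value": "@actions('getValues').outputs.body.readValues"
    }
  }
}
```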

Code where Code Belongs

This represents a big step in the right direction for the team building Logic Apps. It’s putting code where code belongs, in the best IDE ever made and backed by proper source control. It also cleanly separates logic and configuration, enabling multiple environments.

However, without a visual editor and/or a quick and easy way to resolve API App ids from the marketplace, it’s going to be tough to build more complex flows in this fashion. I would also like to see the deployment spread across files. Imagine a resource group with multiple Logic Apps (a receive-pipeline-style Logic App, a process-orchestration-style Logic App, and a send-pipeline-style Logic App); working with all of that in one giant file would be a little bit painful.

In theory, there is a concept of a definitionLink to the body of the workflow itself (so as to not include it directly within the deployment script), but that’s not what the project template will give you.
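
If you wanted to try that route yourself, the workflow resource in the template would swap its inline definition for something shaped roughly like this (property names per the preview-era resource schema as best I recall them, and the storage URI is made up, so verify before relying on it):

```json
"properties": {
  "definition": null,
  "definitionLink": {
    "uri": "https://mystorage.blob.core.windows.net/definitions/mylogicapp.json",
    "contentVersion": "1.0.0.0"
  }
}
```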


That’s All From Build

I know that I wrote a lot for each of the major BizTalk events over the last 6 months, but for Build 2015, I’m going to keep it short, sweet, and to the point. I’m juggling a lot of really cool things right now that I’m excited to share with you as soon as they’re ready. So stay tuned!

As a side note, BizTalk Server on-premises is going to be getting some love over the next year as well. Another major version is in the works, and you can bet that I’m going to be all over that as well.

How to Build A Polling Trigger API App

By Nick Hauenstein

In my first article recapping the BizTalk Summit 2015, I said I would revisit the topic of triggers for those of you wanting to build out custom API Apps that implement either of those patterns (push or poll).

After working through the official docs, and reviewing the code presented at the BizTalk Summit 2015 in anticipation of our upcoming Cloud-Based Integration Using Azure App Service class, I decided to take a little bit of a different direction.

Rather than doing a write-up, both here and in hands-on-lab form within the class, of writing a bunch of custom Swashbuckle operation filters to generate the appropriate metadata for a Logic App to properly consume an API App that’s trying to be a trigger, I decided to write a library that just makes it a little bit easier to create custom API Apps that are consumable from the Logic App designer.

How did I go about doing that? Well, Sameer Chabungbam had mentioned in his presentation that a more elegant solution might be to use attributes for friendly names and the like. So I went in that direction, and made attributes for all of the stuff that the Logic App designer needs to see in a certain way (really anything that involved vendor extensions in the generated Swagger). What do we do with all of those attributes? Well, we, uh, read them in custom operation/schema filters, of course! So yes, I did have to write some custom filters after all, but now you don’t have to!

Announcing QuickLearn’s T-Rex Metadata Library

I rolled all of the code into a library that I’ve named the T-Rex Metadata Library1. The library is also available as a NuGet package that you can add directly to your API App projects within Visual Studio 2013.

So how can we use that to make custom triggers? I’m glad you asked. Let’s get right into it.

Creating Custom Polling Triggers

The easiest kind of trigger to implement is a polling trigger. A polling trigger is polled by a Logic App at a set polling interval and asked for data. When the trigger has data available, it is supposed to return a 200 OK status with the data contained in the response body. When there is no data available, a 202 Accepted status should be returned with an empty response body.

You can find an example of a polling trigger over here. This polling trigger takes in a single configuration parameter named “divisor” and when polled will return data if and only if the current minute is evenly divisible by the divisor specified (kind of silly, I know).

So, how do I separate sample from reality and actually build one? The steps below walk you through it, and a consolidated code sketch follows the list.

Steps for Creating a Polling Trigger

  1. Create a new project in Visual Studio using the Web Application template
  2. Choose API App (Preview) as the type of Web Application you are creating
  3. Add the TRex NuGet package to your project
  4. In the SwaggerConfig.cs file, add a using directive for TRex.Metadata
  5. In the SwaggerConfig.cs file, just after the line showing how to use c.SingleApiVersion, add a line that reads c.ReleaseTheTRex();
  6. Add using directives for the Microsoft.Azure.AppService.ApiApps.Service, and TRex.Metadata namespaces.
    • Microsoft.Azure.AppService.ApiApps.Service provides the EventWaitPoll and EventTriggered extension methods
    • TRex.Metadata provides the T-Rex Metadata attribute, and the Trigger attribute
  7. Create an action that returns an HttpResponseMessage
  8. Decorate the action with the HttpGet attribute
  9. Decorate the action with the Metadata attribute and provide a friendly name and description for your polling action
  10. Decorate the action with the Trigger attribute, passing the argument TriggerType.Poll to the constructor, as well as the type of model that will be sent when data is available (e.g., typeof(MyModelClassHere))
  11. Make sure the action has a string parameter named triggerState
    • This is a value that you can populate and pass back whenever polling data is returned to the Logic App, and the Logic App will send it back to you on the next poll (e.g., to let you know that it is finished with the last item sent)
    • You do not need to decorate this parameter with any attributes. T-Rex looks for this property by name and automatically applies the correct metadata (friendly name, description, visibility, and default value)
  12. Optionally, add any other parameters that control how it should poll (e.g., file name mask, warning temperature, target heart rate, etc…)
    • Decorate these parameters with the Metadata attribute to control their friendly names, descriptions, and visibility settings
  13. Make sure that the action returns the value generated by calling Request.EventWaitPoll when no data is available
    • You can also provide a hint to the Logic App as to a proper polling interval for the next request (if you anticipate data available at a certain time)
    • You can also provide a triggerState value that you want the Logic App to send to you on the next poll
  14. Make sure that the action returns the value generated by calling Request.EventTriggered when data is available
    • The first argument should be the data to be returned to the Logic App, followed by the new triggerState value that you want to receive on the next poll, and optionally a new polling interval for the next request (if you anticipate data being available at a certain time, or, more likely, if you know more data is immediately available and there isn’t a need to wait).
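
Here’s what those steps boil down to in code — a minimal sketch of the divisor sample described above, not the sample’s exact source. The attribute usage follows T-Rex as described in the steps, and the EventWaitPoll/EventTriggered signatures are from memory, so double-check them against the packages you’re using:

```csharp
using System;
using System.Net.Http;
using System.Web.Http;
using Microsoft.Azure.AppService.ApiApps.Service; // EventWaitPoll / EventTriggered
using TRex.Metadata;                              // Metadata / Trigger attributes

public class MinuteInfo // model returned when the trigger fires
{
    public int Minute { get; set; }
}

public class PollingTriggerController : ApiController
{
    [HttpGet]
    [Metadata("Divisible Minute", "Fires when the current minute is evenly divisible by the divisor")]
    [Trigger(TriggerType.Poll, typeof(MinuteInfo))]
    public HttpResponseMessage Poll(string triggerState, [Metadata("Divisor")] int divisor)
    {
        var minute = DateTime.UtcNow.Minute;

        // Use triggerState to avoid firing twice within the same minute
        if (minute % divisor == 0 && triggerState != minute.ToString())
        {
            // 200 OK with data, plus the new triggerState to receive on the next poll,
            // plus a hint for when to poll next
            return Request.EventTriggered(new MinuteInfo { Minute = minute },
                                          minute.ToString(),
                                          TimeSpan.FromMinutes(1));
        }

        // 202 Accepted with no data, plus a retry hint and the unchanged triggerState
        return Request.EventWaitPoll(TimeSpan.FromSeconds(30), triggerState);
    }
}
```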

After you publish your API App, you should be able to use it as a trigger in a Logic App. Here’s what the sample looks like after being deployed:

Polling Trigger Sample

Next Up

In my next post, I’ll turn my focus to custom push triggers. For now though, I need to rest a little bit to get ready for 2 days of teaching followed immediately by Build 2015! It’s going to be a long, yet quite fun, week!

Until next week, take care!

1It was named this so that it would comply with the latest trends of fanciful library names, and so that I could justify naming a method ReleaseTheTRex. If you find the name too unprofessional, let me know and I may add a TRex.Enterprise.Extensions namespace that includes a more professional-sounding method name which simply calls into the former.

BizTalk Summit 2015 – London: Day 2

By Nick Hauenstein

After a relaxing 9-hour flight on a surprisingly empty plane, and a week to re-adjust to the UTC-07:00 time zone, I am finally back home in the Pacific Northwest of the United States and ready to record all of the happenings of Day 2 of the BizTalk Summit in London. This post has been delayed because I have been without my Surface Pro since arriving home — the power adapter was destroyed in transit.

If you haven’t already read Codit’s write-up of day 2, I highly recommend giving that a read as well so that you can get a few different perspectives.

Hybrid Integration with BizTalk Server 2013 R2

Steef-Jan Wiggers BizTalk Summit 2015 London

The morning started off with back-to-back sessions from fellow Integration MVPs, beginning with Steef-Jan Wiggers who demonstrated how BizTalk Server 2013 R2 can be used to implement hybrid integration solutions (without relying on things that are currently in preview).

To get a flavor of his demo, you can check out his sample code here, and a wiki write-up over here. Ultimately, his session was a refreshing look at how we can deal with present challenges using presently available software.

I stole this image from the BizTalk 360 photo stream on Facebook because it looked really serious and epic, until you catch the background. It just goes to show that a good sense of humor really is something to be treasured when dealing with the insane world of integration.

From 17 Seconds to Sub-Second Processing with BizTalk Server

Johan Hedberg BizTalk Summit London 2015

Johan Hedberg took the stage next to walk through how he was able to take one integration from 17s all the way down to 0.95s, and how one might apply the same optimizations.

The optimizations that he applied were:

  • Reduce MsgBox hops (Call Orchestration vs. Send shape where possible) to provide a 7s improvement
  • Consider your Level/Layer of re-use (Avoid calling BizTalk process through external interface, and round-trip through send/receive port) to provide a 5s improvement
  • Use Caching to provide a 0.6s improvement
  • Optimize your logical flow (respond to caller as soon as you can, potentially before all work is done) to provide a 0.6s improvement
  • Consider your Host Settings (specifically reducing the polling interval from 500ms to 50ms) to provide a 1.6s improvement
  • Inline Sends (using code to send the message, rather than routing to a send port) to provide a 0.3s improvement [although I’m personally not sure that it’s worth the cost of giving up everything a Send Port gives you most of the time, so use with caution]
  • Optimize instrumentation (find out where time is being spent). In this case, he mentioned that he discovered a database call being made where indexes weren’t being fully utilized. No sense doing a table scan where an index scan will do. Use the index Luke! This provided a 0.9s improvement
  • Optimize persistence points, providing a 0.05s improvement at this point.

I really enjoyed his talk, mainly because I’ve been through these things while beating integrations into shape to eke out the last little bit of raw performance, and it was nice to see the process all laid out and broken down in terms of numbers gained at each step.

If you want to see the talk for yourself, he gave the talk again as part of an #IntegrationMonday talk.

Sandro Pereira Annoys Tord

Sandro Pereira Annoys Tord

Sandro was up next with a list of tips for both BizTalk Server Administrators and BizTalk Server Developers. Here’s the list:

Tip #1 – Trying to annoy Tord: PowerShell!

Sandro started by weaving a tale involving famed commercial actor turned BizTalk Administrator Tord Glad Nordahl. In this tale, a developer unwittingly annoyed their BizTalk Administrator with mindless EventLog entries written out for every single event and action that occurred during normal processing. Rather than requiring code to change, or bad habits to die, the smart BizTalk Administrator used PowerShell to clean up and redirect the entries to a dedicated log. The script behind the legend is now available for download.

Tip #2 – What do RosettaNet, ESB, and UDDI have in common?

They’re backed by databases that aren’t backed up by default. The second tip is then to ensure that these are included in your BizTalk backups.

Tip #3 – BizTalk MarkLog Tables

By default, every 15 minutes a string is stored in these tables. Microsoft’s Terminator tool helps curb the growth within these tables, but requires stopping your entire environment. The solution here is a custom stored procedure to handle all of the clean-up work.

Tip #4 – WCF-SAP Adapter supports 64-bit?

Yes. Just make sure that you install all the pre-reqs correctly, and you’ll be set.

Tip #5 – Take control of your environment: Tracking Data

If, as an administrator, you’re receiving MSI packages from your developers with full tracking enabled all over the place within the application, don’t fret, but instead take control of your environment by automating the tracking settings (again using PowerShell).

Tip #6,7 – Single Pipeline for Debatch/XML Validation/FF Encode

To combine tips 6 and 7: if you’re doing XML Validation, XML Debatching, FF Decoding, etc., you may not need to create a pipeline per message type. Instead, creating a generic pipeline and setting the properties in the BizTalk Server Administration console (rather than inside Visual Studio) can save you some heartache and suffering.

Tip #8 – Request-Response CBR with LOB operations

You don’t need orchestrations to use LOB Adapters (but you do need to promote the BTS.Operation property for action mapping on the send side). One thing that I liked here was that, rather than creating a pipeline component that by code or configuration “hard-coded” an action into BTS.Operation, the solution presented guessed it from the root node of the message – pretty smart.

This is actually one of the things that I used to demo in the BizTalk Server Developer Immersion class if the question was ever posed, and one of the things that is now directly included as part of the core materials for the BizTalk Server Developer Deep Dive class. Yes, I am being shameless with those plugs, because I love our new Deep Dive class.

This is one of those things that provides a real Take the Red Pill moment when you show it to a BizTalk Developer who has been doing it the hard way for a few years.

Tip #9 – Creating custom pipeline components

Here is where my notes start to break down. I typed “custom pipeline components” but what follows are notes about custom functoid development. So either I missed a tip, or I mistyped. I will retain it as written here, and leave it up for you to determine.

So what was the tip in my notes? If you’re building out a custom functoid (see what I mean?), create custom inline functoids whenever possible. This will avoid the need to deploy a bunch of runtime artifacts, since the script itself is inlined in the map. Beautiful, and excellent tip with which to end the talk.

Tord Will Not Be Shaken… By Power BI

Tord will not be shaken by Power BI

Tord was next to the stage for his 30-minute tour of Power BI. I think he introduced it in the right way – at least I felt like it addressed where I was at with the technology. I’ve always been scared of BI, probably unnecessarily – ever since that defining moment as a young developer being exposed to the horror that was a raw, naked MDX query. When I saw Power BI demoed for the first time, it looked amazing and absolutely incredible – with MDX queries hidden behind plain English and pretty graphs leaping off of dashboards. It looked great. Because it looked great, and computers aren’t really magic, that must mean that there is hard work and lots of technologies to learn behind the scenes, right?

That was the angle that Tord took, as even he was looking at it skeptically, thinking there was going to be a lot of stuff to learn just to get started. However, during his talk, he proved that this wasn’t the case, and showed just how easy it really can be to get up and running with Power BI.

So where’s the BizTalk tie-in? He was using Power BI against data gathered by BAM, because it really is just data in a database at the end of the day. A really beautiful way to tie it all together (and certainly more modern looking than the BAM portal).

Since I’ve bullet-pointed everything else, here are some bullet points for you on Power BI (really selling points when it comes down to it):

  • Free access to Power BI designer
  • Easy to use web interface
  • Works on any platform
  • Loved by management
  • Like Azure Services, it’s cheap (10 USD per month, per user)

On the Origin of Microservices – Charles Young Connects Past to Present

Charles Young - The Donald Knuth of EAI

For me it’s a personal treat to be able to hear Charles Young speak, and luckily I got the opportunity at the 2015 BizTalk Summit in London. He’s the Donald Knuth of EAI, and always serves to bring the larger picture to developers (myself included) who tend to have myopic tendencies driven by the lust for shiny things.

So how would he introduce himself? “I am an EAI dinosaur. I’ve been doing it for years, and years, and years, and we either evolve, or obviously we atrophy or whatever”. Of course, we all know what whatever is. Poor dinos (except for the raptors in Jurassic Park – they had it coming).

His talk walked through the evolution from the n-tier architectures of the ’90s to Alistair Cockburn’s Hexagonal Architecture (with which BizTalk Server closely aligns and shares terminology, despite pre-dating the architecture), all the way through to Microservices Architecture.

He saw the driving forces of this evolution as the aspiration for:

  • Simplicity in addressing complex requirements
  • Velocity (speed of delivering solutions)
  • Evolution (see above)
  • Democratization

On Microservices, “[they are] essentially based very consciously on hexagonal architecture. It’s the idea of taking your application domain and ensuring that the way you implement services within the application domain are very fine grained […]. Microservice principles are based on the notion that the services that we tended to build in the middle tier have tended to be monolithic, which isn’t quite exactly what is being built, but in terms of how it is packaged and hosted, maybe. There’s a lot of chunky services that cover lots of concerns which can be problematic. The idea is to decompose the monolith into fine grained services […]. [It is] the new new fine grained SOA approach for a modern audience. But just because Microservices is the new buzzword doesn’t mean that we should leave our brain at the door and forget what we know about building complex integrations”

As he expanded out to iPaaS and Microservices, he mentioned that he has a problem with most of the iPaaS platforms today (e.g., MABS), where the full integration/mediation workflow is emitted up to the cloud as a monolithic thing that makes no proper separation of concerns (e.g., an Itinerary containing a bridge with a VETR flow). However, with App Service, he sees a change of direction in which we don’t have to use Logic Apps, for example, to use API Apps – the capabilities are no longer tightly coupled to the workflow.

I must admit, his session was so packed full of essential and incredible content that I was unable to take notes fast enough to capture it all. He doesn’t waste a lot of words, and still succeeds in using lots of them.

If you want a better view into his brain right now, as he’s processing the change in the industry, read through his latest series of posts on architecture (Part 1, Part 2, Part 3).

Announcing The Migration Factory

The Migration Factory

Next up were Jon Fancy and Dan Probert, to completely blow my mind. At first, it seemed that their talk was going to provide tips for migrating BizTalk solutions to Azure App Service. Instead they announced that they had created an automated service to do just that.

I thought it was a joke. I thought they were going to make a fake announcement, and then resolve it with, “of course, automated migration will never be possible, so here’s a checklist that you will need to manually go through”. But this was not the case. Indeed, they made the announcement, and DEMONSTRATED a working migration service that was able to take a BizTalk Server application and migrate it over to Logic App form on Azure App Service. Good show!

I was so thrown off by this that I couldn’t get my camera up fast enough and ended up with some wonderful candid pictures of my shirt, my shoes, and the floor.

Best of all, you can sign-up today. I’m still in shock over this one.

API Management with Kent Weare and Tomasso Groenendijk

The next set of talks were a coordinated effort between Kent Weare and Tomasso Groenendijk.

Kent introduced the topic of APIs by showing their power in modern business (e.g., Uber – the largest hired-ride company, yet it owns no cars; Facebook – the largest media company, yet it produces no content) to enable the exchange of data, which is where the real value lies. He likened APIs to a doorway for your business, which, with the help of API Management, gets a bouncer. Why a bouncer? Because they handle similar concerns:

  • Authentication / authorization
  • Policy enforcement
  • Analytics
  • Engagement

So what’s the Red Pill moment that API Management delivers to a BizTalk Server developer who is keen on exposing BizTalk endpoints directly to end consumers with whatever format the end consumer wants? Of course, we still have to think about security, enrollment, governance, visibility, etc…

What if I told you, you don't have to involve the entirety of IT to get this setup?

Kent followed through with a nice end to end demo of API Management being applied to a car insurance purchasing scenario with a shiny Windows Store app front-end.

Next up was Tomasso.

Tomasso showing off BizTalk 360's ESB Management Portal

I always get excited when I see Tomasso, because I know that I’m in the same room as one of the few human beings who shares the same passion for the ESB Toolkit as I do. He’s also probably the only person I know who has ever implemented IExtenderStyle (so rare, it’s not even Google-able, though Bing makes an effort) – and that shows that he really takes pride in his work.

He demonstrated what it looks like to put API Management in front of the ESB Toolkit on-premises, which adds the benefit of message repair thanks to the ESB Exception Management Framework. As an added bonus, he demonstrated the message repair being done through the BizTalk360 ESB Portal.

Power BI replacing the BAM Portal, and BizTalk360 replacing the ESB Portal, demoed on the same day to the same audience? Yes! I’m really looking forward to seeing more modern-looking front-ends for BizTalk installations in the wild as a result.

Nino Brings it Home

Nino's Evolution

Nino closed out the conference with his talk on his own personal journey as a developer that led to the thing that is currently eating up his free development time – JitGate.

So how did he get to that point? He started out riding the roller coaster of integration technologies through to the present era (shedding hair and resolve along the way), until he had a realization – what he really wanted out of integration he got from file-based integration:

  • Simple to manage
  • Fast to use
  • Polymorphic
  • Adaptable
  • Serializable
  • Fully extensible
  • Persistent
  • Multi-platform
  • Scalable
  • Reliable

From there he set out to use the tools provided by Azure to build out an integration framework that gave him those same benefits. The result was JitGate – Just-in-time integration.

So does this mean he’s announcing an App Service killer? Not necessarily. It means that Azure is an enabler. It has the power as a platform to do integration, and to handle insane loads. You don’t have to use Microsoft’s solution for doing it, you can use the same technologies that they’re using and build your own that is just as capable.

In a way, this is similar to what we had with BizTalk Server – built on top of .NET since 2004. .NET and SQL were always available to any developer who wanted to build something similar to BizTalk.

I don’t want to minimize what Nino has built here though. It is truly impressive and something that sadly, I can’t really capture here. It’s one of those things that you have to see demoed. Stay tuned to his blog for a post with full details and a video of it in action.

What’s Next?

Well, it looks like Microsoft has announced Azure Service Fabric, which appears to be a re-branding of Azure Pack and promises to bring the world of Azure App Service on-premises. There are certainly exciting times ahead.

I’ll be at Build 2015 next week in San Francisco, CA to learn all that I can about that, and then the following week will be back in Kirkland, WA teaching the first run of our Cloud-Based Integration Using Azure App Service class.

I hope to see some of you along the way, and if you’re going to the BizTalk Boot Camp 2015 event, y’all better tweet everything so I can get a glimpse of it too!

BizTalk Summit 2015 – London: Day 1

By Nick Hauenstein

The first day of the London BizTalk Summit 2015 was a fast-paced, jam-packed, highly informative, and outstanding1 way to kick off an event. I sat through all the sessions, and ended up with 25 pages of notes and nearly 200 photos. I have no idea how I’m going to pull this post together in a timely fashion, so I’m just going to dive right in and do what I can.

Service Bus Backed Code-based Orchestration using Durable Task Framework

The morning kicked off with Dan Rosanova delivering his session a little bit early, and in place of Josh Twist’s planned keynote (Josh had taken ill and was unable to make it to the conference). Dan presented the Durable Task Framework – an orchestration engine built on top of Azure Service Bus for durability and the Task Parallel Library for .NET developer happiness.

Dan Rosanova being generally awesome

While not necessarily one of the big new things in the world of BizTalk (i.e., it has been around and doing great since 2013), it is one of those things that allows you to implement some of the same patterns as BizTalk Server and solve some of the same problems that Orchestrations tackle, but with tracing and persistence provided by the cloud, and a design-time experience that feels comfortable to those averse to using mice – pure, real, unadulterated C# code.

The framework provides the following out-of-the-box capabilities:

  • Error handling & compensation
  • Versioning
  • Automatic retries
  • Durable timers
  • External events
  • Diagnostics

That last paragraph is doing the framework a great disservice, and it completely understates the value. Please understand that this is really cool stuff, and one of those things that you just have to see. I’m really happy to have seen the framework featured here, it is definitely now on my list of fun things to experiment with.
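
To make it slightly more concrete, here’s a minimal sketch of what an orchestration looks like in the framework. The type and method names are per the early DurableTask NuGet package as I recall them, so treat this as illustrative rather than copy-paste ready:

```csharp
using System.Threading.Tasks;
using DurableTask;

// An activity: the discrete unit of work that the orchestration schedules
public class EncodeActivity : TaskActivity<string, string>
{
    protected override string Execute(TaskContext context, string input)
    {
        return input.ToUpperInvariant();
    }
}

// The orchestration itself: pure C#, with durability handled by Service Bus under the covers
public class EncodeOrchestration : TaskOrchestration<string, string>
{
    public override async Task<string> RunTask(OrchestrationContext context, string input)
    {
        // Each awaited ScheduleTask call is a checkpoint; a restarted worker resumes here
        string encoded = await context.ScheduleTask<string>(typeof(EncodeActivity), input);
        return encoded;
    }
}
```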

I really love Azure Service Bus – it makes me very happy. It’s also making a good showing of Microsoft’s cloud muscle – recently crossing the 500 billion message per month barrier. That’s nuts!

On a side note, I must say that Dan is an excellent presenter, and I would recommend checking out any talk of his that you have the opportunity to hear. POW!

API Apps for IBM Connectivity

Following Dan’s talk, Paul Larsen took the stage to talk about API Apps for IBM Connectivity. Some time was spent recapping the Azure App Service announcement from a few weeks back for those in attendance who were unfamiliar, but then he dove right into it.

The first connector he discussed was the MQ Connector – an API App for connecting applications through Azure App Service to IBM WebSphere MQ server. It can reach all the way on-prem using a VPN or Hybrid Connection. The connector uses IBM’s native protocols and formats, while being implemented as a completely custom, fully managed, Microsoft-built client – something that was needed to most efficiently build out the capability given the host. That’s really impressive, especially given the compressed timeline and complete paradigm shift thrown into the mix. There are some limitations though – compatibility is only with version 8; there is no backwards compatibility yet.

Microsoft Connector for MQ

He also took some time to demo the DB2 connector and the Informix connector that are currently in preview, and to present a roadmap that included a TI Connector, TI Service (Host Initiated), DRDA Service (Host Initiated), and of course Host Integration Server v10.

Karandeep Anand Rescues the Keynote

Karandeep Anand, Partner Director of Program Management at Microsoft, flew in from Seattle at the last moment to ensure that the keynote session could still be delivered. He came fresh from the airplane, and started immediately into his talk.

One of the first things he said as he was working through the slides introducing the Azure App Service platform was, “We’re coming to a world where it doesn’t take us 3 years to come out with the next version but 3 weeks to come out with the next feature.” This is so true, and really only fully possible to do with a cloud-based application.

If that statement has you concerned, keep calm. This is a model that Microsoft has already proven on a massive scale with Team Foundation Server. Since the 2012 version of the server product, there has been a mirror of the capabilities in the cloud (first Team Foundation Service, and now Visual Studio Online) providing new features, enhanced functionality, and bug fixes on a 3-week cadence, followed up by a quarterly release cycle for on-prem.

It’s a model that can definitely work, and I’m excited to see how it might play out on the BizTalk side of things (especially in a microservices architecture where each API App that Microsoft builds is individually versioned and deployed on potentially a per-application basis — not just per-tenant).

It was a model of necessity though when the challenge was issued: “In less than a quarter, we will rebuild our entire integration stack to be aligned with the rest of the app platform.”

Overall, Karandeep did a good job of demonstrating that the Azure App Service offering isn’t just following a new fad, or achieving buzzword compliance, but is instead about learning from past mistakes and applying those learnings.

In discussing the key learnings, there were some interesting things that came out of building BizTalk Services:

  • Validated the brand – there is power in the BizTalk brand name, it is synonymous with Integration on the Microsoft platform
  • Validated cloud design patterns – MSDTC works on-prem, but doesn’t make a lot of sense in the cloud
  • Hybrid is critical and a differentiator
  • Feature and capability gaps (esp. around OOTB sources/destinations)
  • Pipeline templates, custom code support
  • Long running workflows, parallel execution
  • Needs a lot more investment

Keynote being rescued

As a result of all of the lessons learned (not just with BizTalk Services), the three guiding principles of building out Azure App Service became: (1) Democratize Integration, (2) Rich Ecosystems, (3) iPaaS Leader.

Shortly after working through the vision, Stephen Siciliano, Sr. Program Manager at Microsoft, and Prashant Kumar came up to assist in demonstrating some of the capabilities provided by the Azure App Service platform.

Stephen’s demo focused on flexing the power of the Logic App engine to compose API Apps and connect to social and SaaS providers (extracting tweets on a schedule and writing them to Dropbox, and then looping over the array of tweets rather than only grabbing the first one), while Prashant’s demo demonstrated the use of BizTalk API Apps to handle EAI scenarios in the cloud (calling rules to apply a discount to an XML-based order received at an HTTP location, and then routing the order to a SQL Server database).

App Service Intensives Featuring Stephen Siciliano, Prashant Kumar, and Sameer Chabungbam

Right after the keynote, there were three back-to-back sessions (with lunch packed somewhere in between) that provided a raw information download of things that have been out there for discovery, but not yet mentioned directly. I was really pleased to be able to see it live, because it’s going to be a while before all of the information shown can be documented and disseminated fully by the community.

We’ve been hard at work at QuickLearn on our new Azure App Service class – with the first run just 3 weeks away! For a lot of the conceptual material, we’ve relied heavily on the language specification for Logic Apps that was posted not too long ago – trying to understand the capabilities of the engine, rather than the limitations of the designer. So one of the big takeaways for me from Stephen’s session was seeing a lot of the assumptions we’ve been working from validated, and then seeing other things pointed out that weren’t as clear at first glance.

Essentially he discussed the expression language (see the language spec for full coverage), the mechanics of triggering a Logic App (see here for full coverage), and the one that stuck out to me: the three ways to introduce dependencies between actions.

Dropping some wisdom

Actions aren’t what they seem in the designer (if you follow me on Twitter, you would have seen the moment I first learned that, thanks to Daniel Probert’s post). They can have complex relationships and dependencies, which can be defined either explicitly or implicitly. The way that Stephen laid it out, I think, was done really well, and certainly sparked my imagination. So here it is, the three ways you can define dependencies for actions:

  1. Implicitly – whenever you reference the output of an action you’ll depend on that action executing first
  2. Explicit “dependsOn” condition – you can mark certain actions to run only after previous ones have completed
  3. Explicit “expression” condition – a complex function that evaluates properties of other actions (e.g., only execute if there was a failure detected / CBR scenarios)

For those that are lost at this point (confused as to why I’m so excited here): if you don’t have a dependency between two steps, they will run in parallel. You can have a fork and rejoin if the rejoin point has a dependency on the last step in both branches. Picture Nino Crudele-level happiness, and that’s what you should be feeling right now upon hearing that.
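
Here’s a rough sketch of what that looks like in the definition language (shapes per the preview-era language spec; the action names, types, and URIs are made up). branchA and branchB have no dependencies on each other, so they run in parallel, while the join action declares a dependsOn condition for each:

```json
"actions": {
  "branchA": {
    "type": "Http",
    "inputs": { "method": "GET", "uri": "https://example.org/a" }
  },
  "branchB": {
    "type": "Http",
    "inputs": { "method": "GET", "uri": "https://example.org/b" }
  },
  "join": {
    "type": "Http",
    "inputs": { "method": "POST", "uri": "https://example.org/done" },
    "conditions": [
      { "dependsOn": "branchA" },
      { "dependsOn": "branchB" }
    ]
  }
}
```

An expression condition takes the same spot in the conditions array, e.g. { "expression": "@equals(actions('branchA').status, 'Failed')" }, which is how you’d express the failure-detection / CBR scenarios mentioned above.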

After Stephen left the stage, Prashant Kumar started into his presentation covering BizTalk on App Service. He set the stage by recapping the initial customer feedback on MABS (e.g., more sources/destinations needed out of the box, etc…), and then started to show how Azure App Service capabilities map to BizTalk Server capabilities and patterns that we all know and love:

API Apps

  • Connectors: SaaS, Enterprise, Hybrid
  • Pipeline Components: VETR
  • Extensibility: Connectors and Pipeline

Logic Apps

  • Orchestration
  • Mediation Pipeline
  • Tracking

The interesting one is the dual role of Logic Apps as both Orchestration and Mediation Pipeline (with API App components). So will we build pipeline (VETR-focused) Logic Apps ending in a Service Bus connector that could potentially be subscribed to by orchestration (O-focused) Logic Apps? Maybe? Either way, that’s a solid way to start the presentation.

Prashant

After a trip through the slides, Prashant took us to the Azure Portal where all the magic happens. There he demoed a pretty nice scenario wherein we got to see all of the core BizTalk API Apps in action (complete with rules engine invocation, and EDIFACT normalization to a canonical XML model).

Sameer Chabungbam’s session was the last in the group of sessions diving deep into the world of Azure App Service. His session focused on building a custom connector/trigger (in this case, for the Azure Storage Queue). He walked through the API App project template specifics (i.e., its inclusion of required references and additional files that make it work, plus a quick walk through the helper class for Swagger metadata generation). Most of the specifics that he dealt with centered around custom metadata generation.

I’m going to take a step back for a moment, because I sense there has been some confusion based on conversations that I’ve had today. The confusion centers around Swashbuckle and Swagger – both technologies used by API Apps. Neither of these technologies was built by Microsoft; they are both community efforts attempting to solve two different problems. Swagger is trying to solve the problem of providing a uniform metadata format for describing RESTful resources. Swashbuckle is trying to solve the problem of automatically generating Swagger metadata for Web API applications. Swashbuckle can optionally provide a UI for navigating through documentation related to the API exposed by the metadata, with help from swagger-ui.

With that background in place, essentially Sameer showed how we can make the Logic App designer-UI happy with consuming the metadata in such a way that it displays a connector and trigger in a user-friendly fashion (through custom display names, UI-suppressed parameters, etc…), while also taking advantage of the state information provided by the runtime within the custom trigger.

Generation and modification of the metadata can be accomplished through a few different mechanisms within Swashbuckle (each operating at a different level within the metadata). The methods demonstrated made heavy use of OperationFilters (which control metadata generation at the operation level).
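
To give a feel for the mechanism, here’s a minimal sketch of an operation filter of the sort demonstrated, targeting the Swashbuckle 5.x object model. The x-ms-summary extension is one example of the designer-facing vendor extensions discussed, and the operation id and display text here are made up:

```csharp
using System.Web.Http.Description;
using Swashbuckle.Swagger;

// Adds a friendly display name to one operation's Swagger metadata
public class FriendlyNameOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        if (operation.operationId == "TriggerPoll")
        {
            // Vendor extensions are where the Logic App designer hints live
            operation.vendorExtensions["x-ms-summary"] = "Poll Storage Queue";
        }
    }
}
```

A filter like this gets wired up with c.OperationFilter<FriendlyNameOperationFilter>() inside SwaggerConfig.cs.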

Unfortunately, my seat during this session did not provide a clear view of the screen, so I am unable to share all of the specifics at this time in a really easy fashion, but that is something that I will be writing up in short order in a separate posting.

UPDATE: The code from his talk has now been posted as a sample.

Yes, You Did Hear Jackhammering at Kovai

To kick off the BizTalk 360 presentation, Nino Crudele provided comic relief demonstrating BizTalk NoS Ultimate – now the second offering in the BizTalk 360 family. I’ve just got to throw out a big congratulations to Nino especially, and to the team at Kovai, for pulling the tool together into a full commercial product. It has been really cool to watch it progress over time, and it must have been a special moment to be able to print out the product banners and make it real. Good work! A lot of developers are going to be very happy.

I also want to take time out to thank the BizTalk 360 team for continuing to organize these events – even though they’re not an event management company by trade – it does take a lot of work (and apparently involves walking 10km around the convention center each day).

No Zombies for Michael Stephenson and Oliver Davy

Not content to show off a web-scale, robust, enterprise-grade file copy demo, Michael Stephenson made integration look like magic by recasting the problem of application integration through the lens of Minecraft. While maybe an excuse for just playing around with fun technology (and getting in some solid cool-dad time along the way), Michael showed what it might look like when one is forced to challenge established paradigms to extend data to applications and experiences that weren’t considered or imagined in advance.

Michael Stephenson and Oliver Davy

This effort started during his work with Northumbria University, a university focused on turning integration into an enabler and not a blocker, to envision what the distant future of integration might look like. That future is one where the existence of an integration platform is assumed, and this becomes the fertile soil into which the seeds of curiosity can be sown. This future was positioned as a solution to those systems that were “developed by the business, but [haven’t] really ever been designed by the business. [They have] just grown.” (Oliver Davy – Architecture & Analysis Manager @ Northumbria University).

The approach to designing the integration platform of the future was to layer abstract logical capabilities / services within a core integration platform (e.g., Application Connector Service, Business Service, Integration Infrastructure Service, Service Gateway, API Management), and then, and only then, layer on concrete technologies with considerations for the hosting container of each. Capabilities outside of the core are grouped as extensions to the core platform (e.g., SOA, API & Services, Hybrid Integration, Industry Verticals, etc…). I felt like there were echoes of Shy Cohen’s Ontology and Taxonomy of Services2 paper.

Honestly, there’s some real wisdom in the approach – one that recognizes the additive nature of technologies in the integration space, inasmuch as it is very rare that one architectural trend replaces another. It is also an approach that seeks to apply lessons learned instead of throwing them away for shiny objects.

A common theme throughout Michael’s talk was agile integration. I’m hoping he takes time to expand on this concept further3. To be clear, I don’t see “agile integrations” as integrations that fall into the “hacking IT” zone (which limits scope and often sacrifices quality to provide lower cost), but as those that are limited in scope to provide lower cost without sacrificing quality.

Stephen Thomas’ Top 14 Integration Challenges

Stephen Thomas dropping 14 years of knowledge in 30 mins

To wrap up the day, Integration MVP Stephen Thomas shared the top 14 integration challenges that he has seen over the past 14 years (and did really well with a rough time slot):

      • Finding Skilled Resources
      • Having Too Much Production Access
      • Following a Naming Standard
      • Developing a Build and Deployment Process
      • Understanding the Data
      • Using the ESB Toolkit (Incorrectly)
      • Planning Capacity Properly
      • Creating Automated Unit Tests
      • Thinking BizTalk Development is like .NET Development
      • Having Environments Out of Sync
      • Involving Production Support Too Late
      • Allowing Operations to Drive Business Requirements
      • Over Architecting
      • Integrators Become Too Smart!

With a rich 14-year history in the space (and the ability to live up to his email address), Stephen shared some anecdotes for each of the points to address why they were included in the list.

Takeaway for the Day

I don’t have just one take-away for the day. I’ve written thousands of words here, and have 25 pages of raw unfiltered notes. Again, this is one of those times where I’m dumping information now, and will be synthesizing it in my head for another 6 months before having 1 or 2 moments of sheer clarity.

So on that note, here’s my take-away: people in the integration space are very intelligent, passionate, and driven – ultimately insanely impressive. I’m just happy to be here and I’m hoping that iron does indeed sharpen iron as we all come together and share new toys, battle stories, and proven patterns.


1 Not like @wearsy’s questions.
2 I’m sure just seeing the title of that article would give @MartinFowler heartburn.
3 Though he may have already, and I’m simply unaware.

Congratulations Nick Hauenstein – Newly minted Microsoft MVP

By Anthony Borton

Nick

I have been an MVP for the past 9 years and have met some very passionate and talented individuals during that time. It is important to remember that the MVP Award is not a technical award – it is an acknowledgement of the work an individual does for the community over the previous 12 months. Having said that, it is usually very technically knowledgeable people with a passion for sharing their knowledge who earn the MVP award.

My friend and colleague, Nick Hauenstein, has all the attributes that qualify him for this award. He is amazingly knowledgeable across many technologies, including BizTalk, TFS/ALM, and software development in general. He is passionate about sharing that knowledge with anyone wanting to learn. He is a committed individual who cares deeply about communities – both technical and non-technical.

Nick will be at these upcoming events. If you happen to bump into him, be sure to congratulate him on his award and ask him anything about BizTalk/TFS that you’ve ever wanted to know. You’ll soon learn that Nick is the ideal recipient of the Microsoft MVP Award.

  • BizTalk Summit 2015 – London
  • Microsoft Build Conference
  • ALM Forum 2015 – Seattle

Azure App Service: BizTalk Server PaaS Done Right

By Nick Hauenstein

Today’s announcement from Scott Guthrie not only brought with it a re-imagined Microsoft Azure platform, but also finally gave BizTalk Server a pure PaaS home right in the center of the stack. To be fair, we’ve had clues that this was coming since Bill Staples’ keynote at the Integrate 2014 conference in December, but we’re finally on the edge of it all becoming real.

An Inevitable, Yet Happy, Re-imagination

Today's Azure App Platform

It’s a shift that was inevitable given the nature of the existing Azure services – all stand-alone with their own distinct hosting environment, extensibility points, and management experiences. Imagine that you were working with an integration involving an Azure Website, Service Bus Topic, and MABS Itinerary. In order to manage it, you may have found yourself using 3 separate versions of the management portal – with one living on top of Silverlight.


With the re-organized app platform, we’re seeing a move into a single hosting container – the Azure Website. That means that all investments in that space (hybrid connectivity, auto-deployment, language support, etc…) now serve to benefit the entire platform.

For the purposes of the rest of this article, and in the interest of the community that this blog generally attracts, I’m going to focus in on two of the components of the Azure App Service – Logic Apps and API Apps.

Core Concepts of the Azure App Service Platform

Before we take a look under the hood at what it looks like to use this stuff, it would serve us well to examine some of the core concepts behind this new platform.

Logic Apps

Logic Apps are the workflow of the cloud. However, these flows are composed not of “activities” in the WF sense or “shapes” in the BizTalk sense, but instead are composed of API Apps – RESTful API calls. Logic Apps can be triggered manually, on a schedule, or by a message received by a connector – a special classification of API App. Within the Logic Apps, calls to the API Apps are referred to as Actions, and are configured graphically within a web-based Designer.

Logic App Blank Canvas

The web-based designer starts as a blank canvas. If you’ve never created one before, you will need to get some API Apps from the marketplace to use within your flow. Once you select them from the marketplace, an instance of each will provision itself within your Azure subscription – in the resource group of your choosing (think BizTalk Application for the cloud – a logical grouping mechanism).

Each API App will be locked to the version that was retrieved at the time that you originally fetched it – until you update it. Versioning of each API app is done through NuGet-style packaging.

Getting ready to create the Twitter Connector

The benefits of this model are that you are able to control the scaling and locality of each of the API Apps that you use, you have some level of control over the version in use, and you’re not subject to the demands of everyone else in the world on that same capability.

This is a nice way to handle the concept of a service marketplace – not only selling the raw capability that the service provides, but providing an isolated / personal container for your calls against that logic that can be located directly alongside your app for the lowest possible latency.

There are quite a few connectors/actions available for use in the marketplace, and this is where we start to see capabilities from BizTalk Server light up in a purely PaaS way within Azure.

API Apps in the Marketplace

API Apps

Microsoft is not the only provider of API Apps – you too will have the ability to create API Apps for use in your own Logic Apps or for use by others when published to the marketplace. The new release of the Azure Tools for Visual Studio provides project templates to make this happen, as well as a new Publish Web experience.

Azure API App Template

When creating a new Azure API App, you will quickly find that you’re really building a Web API application that has a nice SwaggerConfig.cs class within the App_Start folder for publishing metadata about the endpoint itself. You can think of Swagger as the answer to WSDL for the RESTful world:

SwaggerConfig.cs

Other than where you’re ultimately publishing the bits, how the metadata is exposed, and some hooks you have into the Azure runtime, this is just a stock-standard Web API project. In fact, you could write it in any of the languages supported by Azure Websites.
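
Stripped of the many commented-out options the template generates, the heart of SwaggerConfig.cs boils down to something like this (a simplified sketch, not the template’s full contents, and the title string is made up):

```csharp
using System.Web.Http;
using Swashbuckle.Application;

public class SwaggerConfig
{
    public static void Register()
    {
        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                // Metadata describing this endpoint, analogous to a WSDL header
                c.SingleApiVersion("v1", "MyApiApp");
            })
            .EnableSwaggerUi(); // optional browsable docs, courtesy of swagger-ui
    }
}
```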

For now, that’s all I’m going to say about custom API Apps. Keep watching this space in the coming weeks though, as we at QuickLearn will be gearing up for a live in-person and virtual event focused on building custom API Apps and connectors.

That being said, let’s focus in on a few of the API Apps that are available right now in the marketplace – specifically the BizTalk API Apps.

BizTalk API Apps


Looking through the BizTalk API Apps, we find the ability to validate XML, execute maps (MABS-style maps), and execute Rules (among other things). When doing those tasks, additional artifacts will be required (beyond just the message itself, or a simple string parameter).

For such API Apps, the Azure portal allows you to navigate directly to the API App, and configure it further (e.g., uploading schemas for validation, creating transforms, and/or defining vocabularies and rules).

Adding maps for use in the transform service

I’m happy to see that the BizTalk API Apps will allow us to re-use existing artifacts (at least for schemas and maps). For rules, the story is a little bit different. We can actually define rules within the browser itself (more on that in a later post).

API Gateway

At this point we have Logic Apps (composite flows that bring together calls to multiple API Apps), and API Apps themselves (microservices that can be called directly). They live together in a logical resource group, but how can we manage authentication/authorization for calls to these apps? The answer to that is the API Gateway – a web app that handles API management functions, such as authentication, for all API Apps in a resource group.

API Gateway

Everything is New

This is a time of huge change for Microsoft, not only embracing open source and greater platform support, but also investing more deeply into their own platform and integration. It’s definitely an exciting time to be a developer.

Here at QuickLearn, we will be hard at work making sure that you have a company that you can turn to for world-class cutting edge training in this space. So stay tuned, because some of the best days are still ahead.

Some exciting changes coming to our TFS courses

By Anthony Borton

One of the things that really helps set QuickLearn apart from our competitors is the fact that our TFS courseware is continually updated to ensure it is as current as possible.

With Microsoft releasing quarterly updates for Visual Studio and Team Foundation Server, you want to make sure you don’t miss out on any new or improved features that could make your job easier.

We also review the structure of our courses to ensure they are a great fit for the market and sometimes that means we change existing courses and sometimes it means we develop brand new courses.

Here is a list of the changes that will take place over the next few months.

1. Merging and refining our offering for Team Leaders, Project Managers and Scrum Masters.

Our 2-day Managing Projects with Microsoft Visual Studio TFS 2013 course is being merged with our 3-day Applied Scrum Using Visual Studio 2013 course to become Managing Agile Projects using Visual Studio TFS 2015 (3-days).

2. Expanding our TFS Administration and Configuration offering

To help keep up with all the changes, and to allow us to add more hands-on lab exercises, we’re expanding our 3-day TFS 2013 Configuration & Administration course into a 4-day course for TFS 2015. You’ll see this change take effect from July 1st, 2015.

3. An improved offering for DevOps training

Our Build, Release, and Monitor Software Using Visual Studio 2013 course is being replaced with a new DevOps Using Visual Studio ALM 2015 course from July 1st, 2015. The content has been revamped to offer a broader range of topics and some great new hands-on lab exercises. Want to know about the new Build features? Trying to get your head around Desired State Configuration (DSC)? This course has you covered.

4. Developers – choose Git or Team Foundation Version Control

Our TFS 2013 Developer Fundamentals course has always focused on Team Foundation Version Control. The market has asked us for a course that serves teams using Git for their version control, and we’ve got a solution.

For the 2015 release, our Developer Fundamentals course is available in two different flavors. Simply choose the course corresponding to the version control provider you and your team are using.

  • TFS 2015 Developer Fundamentals – TFVC (2 days)
    or
  • TFS 2015 Developer Fundamentals – Git (2 days)

5. Developer Enterprise Features – NEW

Microsoft has long offered a tiered pricing/feature model for Visual Studio with versions ranging from Professional up to Ultimate. Often development teams continue to use just the “standard” Visual Studio capabilities and don’t take advantage of some of the great productivity features available in the higher level editions.

Starting with the 2015 release, we’re offering a brand new 2-day course, “Visual Studio 2015 Enterprise Features”, which focuses on the features found only in the higher-level editions. Many of these features offer huge benefits to teams and enterprise developers, and we want to make sure you’re using them to maximize your productivity.

This new course hits the public calendar in the second half of 2015.

Visit QuickLearn at the ALM Forum May 18-22, 2015

By Anthony Borton

The ALM Forum is moving to a new venue this year. The Bell Harbor Conference Center at Pier 66 on the Seattle waterfront plays host to a great lineup of workshops, keynotes, and breakout sessions. In previous years the event was held on the Microsoft Campus, and most recently at the Washington State Convention Center.

QuickLearn is pleased to once again be a gold sponsor of the ALM Forum. It is the perfect place for ALM practitioners and managers to come together and learn from the best in the industry. We’ll have a number of our experts manning the QuickLearn booth over the three main days of the conference, so drop by and say hello.

This year’s event also introduces a new track to the program.

  1. Process of Software Development (NEW)
  2. Business of Software Delivery
  3. Principles and Practices of DevOps

Lastly, don’t forget that in addition to the three main days of the conference, there are also some great pre-conference and post-conference workshops you can sign up for.

TFS/ALM Pre-conference Workshop

QuickLearn’s lead ALM trainer and curriculum developer, Anthony Borton, will be presenting an updated version of his popular pre-conference workshop, “Enhance your Application Lifecycle using Visual Studio Online and TFS”. If you’re planning on attending the ALM Forum, take a look at his workshop and sign up if it sounds interesting to you. At just $495 for a full day of cutting-edge, hands-on technical training, it’s great value!

Integrate 2014 – Final Thoughts

By Nick Hauenstein

The last day at Integrate 2014 started off early with Microsoft IT demonstrating the bottom-line benefits of their early investment in BizTalk Services, and then transitioned into presentations by Microsoft Integration MVPs Michael Stephenson and Kent Weare discussing, respectively, the cloud in practice and how to choose an integration platform.

Those last two talks were especially good, and I recommend giving them a watch once the videos are posted to the Integrate 2014 event channel on the Channel 9 site.

Integration Current/Futures Q&A Panel

At this point, I’m going to stray from the style of my previous posts on Integrate 2014, because I want to take a little time to clarify some things I have posted, as well as to correct factual errors – we’re all learning this stuff at the same time right now. Don’t get me wrong: I don’t at all want to discount the day’s excellent sessions from Integration MVPs and partners; I just believe it’s more important right now to make sure that I don’t leave errors on the table and propagate misunderstanding.

Throughout the conference, the whole BizTalk team – but Guru especially – was constantly fielding questions and correcting misunderstandings about the new microservices strategy. To that end, he organized an informal, ad-hoc panel of fellow team members and Microsoft representatives to put everything out on the table and answer any questions still kicking around in attendees’ minds about the week’s announcements.

I’m going to let an abbreviated transcript (the best I could manage without recording the audio) do the talking here.

Microservices Is Not the Name of the Product, It’s a Way You Can Build Stuff

Q – We heard about microservices on Wednesday. How come you (to Kannan from MS IT) are going live with MABS when you know that there are changes coming down the pipeline?

A – (Vivek Dali): “A lot of people walked away [thinking] microservices is the name of the product, and it’s not the name of the product; it’s an architectural pattern that creates the foundation for building BizTalk on top of. There is no product called Microservices that will replace BizTalk Services. BizTalk Services v1.0 had certain functionality, it had B2B and EAI, and there was big demand for orchestration and a rules engine, and what we’re doing is adding that functionality. It does not mean that BizTalk Services v1.0 is dead and we have to re-do it. MS IT is actually one of the biggest customers of BizTalk [Services], and we’re telling them to stay on it, and we’re committing to support them as well as move them as we introduce new functionality over […]. The next step in how MABS is evolving is to a microservices architecture.”

Microsoft Is Committed to a Solid Core for Long-Term Cloud Integration

Q – (Michael Stephenson): I’m kind of one of the people that gives Guru quite the hard time […] About this time last year, I did a really deep dive with you on MABS 1.0 because we were considering when we would use it, what it offered, what the use cases were. At the time, I decided that it wasn’t ready for us yet […]. When we did the SDR earlier this last year, it was quite different at that time […] We were giving the team a lot of feedback on isolation and ease of deployment, and personally my opinion is that I really like the stuff shown this week, you really fielded that feedback. What I’ve seen from where we were a year ago, and from that SDR, personally I’m really pleased.

A – Don’t worry about coming around and telling us what we’re doing wrong — we do value that feedback. We will commit to come back to you as often as we can […].

(Vivek): Here’s how I think about the product: there are a few fundamental things that we HAVE to get right, and then there’s a feature list. I’m not worried about the feature list right now; I’m worried about what we NEED to get right to last for the next ten years. Don’t worry about how we’ll take the feedback, send us your emails, we value that feedback.

BizTalk Services Isn’t Going Away, It’s Being Aligned to a Microservices Architecture

Q: I had a conversation with Guru outside, which I think is worth sharing with everybody […] I was really confused at the beginning of the session as to how microservices fits in with where we are with BizTalk Server and with MABS 1.0, and where that brings us moving forward. How do the pipelines and bridges map to where we’re going? I was really excited about the workflow concept, but I couldn’t see the link between the workflow and the microservices.

A – (Guru): The flow was that you had to have a receive port and a pipeline, and you would persist [the message] in a message box for somebody to subscribe, and that subscriber could be a workflow and a downstream system. That was [BizTalk] Server; that continues, and it has been there for 10+ years.

Then there’s a pattern of receiving something, transforming it, and sending it somewhere, and in Services that was one entity — and we called that a bridge. It consisted of a receiving endpoint, a sequence of activities, and then routing. If you look at it as executing a sequence of activities, then what you have is a workflow.

The difference between what we were doing then and what we’re doing now is that we’re exposing the pieces of that workflow for external pieces to access. [Paraphrased]

How do we extend those workflow capabilities outside of just a BizTalk Server application? (microservices) [Paraphrased]

I’m (Nick) going to inject this slide from Sameer’s first-day presentation, where he compared and contrasted EAI in MABS with the microservices model for the same, as it’s incredibly relevant to demonstrating the evolution of MABS:

Sameer compares MABS with microservices architecture for the same

You Don’t Have to Wait for vNext in 2015 to Upgrade Your BizTalk Environment

Q: We’ve got a small but critical BizTalk 2006 installation that we’re upgrading now, or in the very near future. And I was wondering if we should upgrade it to 2013 R2, or should we upgrade it to the next release, and when is the next release?

A – (Guru): This is a scenario where we’re starting from 2006? I would strongly encourage you to move to 2013 R2, and not just for the support lifecycle: one reason is the lifecycle, and the other is compatibility with the latest Windows, SQL, SharePoint, etc.

Then, look at what the application is doing. Is it something that needs to be on-prem, is it something that is adaptable to the cloud architecture, or could it even be managed in the cloud? There’s nothing keeping you from migrating to 2013 R2 today.

To further drive home Guru’s point here, I’m (Nick) personally going to add in a slide that was shown on the first day, showing the huge investments the BizTalk team has been making in BizTalk Server over the last few versions. Quite a few people see not a lot of change on the surface, but this really goes to show just how much change is there (heck, it took me ~100 pages’ worth of content just to lightly touch on the changes in 2013, and I’m still working on 2013 R2):

How BizTalk Server 2006 R2 stacks up to BizTalk Server 2013 R2

MABS 1.0 is Production Ready and Already Doing Great Things, You Should Feel Confident to Use It

Q: How do we reassure our customers that are moving to cloud-based integration now, seeing MABS now, and also seeing the tweets about the next version? Migration tools aren’t the full answer, because there’s still a cost to doing a migration, so how do we convince customers to use MABS now?

A – (Guru): MABS 1.0’s primary market has been EDI, because that was the first workload that we targeted. That’s something that is complete in all aspects. So if you’re looking at a customer that wants to use MABS for EDI, then I strongly encourage that, because there’s nothing that changes between using EDI in MABS and whatever future implementation we have [Heavily paraphrased]

(Vivek): Remember MS IT is one of the biggest customers, and it’s not like we’re telling them a different thing than we’re telling you […]. Joking aside, the stuff they’re running is serious stuff, and we don’t want to take a risk, and if there’s not faith in that technology, I don’t want them to take a dependency on it.

Azure Resource Manager Isn’t the New Workflow – But the Engine That It Uses Is

Q: How will Azure Resource Manager fit into this picture?

A – (Vivek): [How does] Azure Resource Manager fit in? Azure Resource Manager is a product whose purpose is installing resources on Azure. It is built on an engine that can execute actions in a long-running fashion, wait for the results to come, and do parallel [execution]. Azure Resource Manager has a purpose and it will be its own thing, but we’re using the engine. We picked that engine because it’s already running at massive scale and it was built with how the workload will eventually evolve in mind. It already knows how to talk to different services. We share technologies, but those are two different products.

Microservices ALM Is Partially There and On the Radar, But Is Still a Challenge

Q: What is the ALM story?

A: Support for CI, for example? The workflow is a part that we’re still trying to figure out. For the microservices technology part of it, the host that we run on already supports it. One other piece of feedback was “how do I do this for an entire workflow?”, and we’ll go figure that out.

Componentizing Early Will Pay Dividends Moving Forward

Q: (Last question) As teams continue to design for the existing platform, we understand the message of “don’t worry about microservices quite yet”. As we design systems going forward, is there a better way to do it, keeping in mind how they will fit into the microservices world? For example, componentizing things more, or deciding when to use what kind of adapter. What are things we can do to ensure a clean migration?

A – (Vivek): I think there are two kinds of decisions. One is the business decisions (do we need to have it on-premises, etc.): what stays hybrid vs. what goes to the cloud. We want you to make that decision based on the business; we will have technology everywhere.

There are patterns that you can benefit from. I think componentizing [will be good]. There are design principles that are just common patterns that you should follow (e.g., how you take dependencies).

So that’s where we are in terms of hearing things straight from the source at this point. It’s certainly a lot of information to take in, but I’m really happy to see that the team building the product realizes that, and is actively working on clearing up misconceptions and clarifying the vision for the microservices ecosystem.

Three Shout-Outs

Before I wrap this up, I want to give three shout-outs for content that I more or less glossed over or omitted.

  • Stott Creations is doing great things, and I have to hand it to the HIS team for being so intimately involved in not only helping a customer, but helping a partner look good while helping that customer. In addition, the Informix adapter looks awesome, and I’m really digging the schema generation from a custom SQL query; that was a really nice touch.

Paul Larsen Presents Features of the BizTalk Adapter for Informix

  • Sam Vanhoutte’s session touched on a not-often-discussed tension between the development challenges the cloud actually brings and what customers are trying to get out of it. While he was presenting in terms of how Codit addresses these customer asks by dealing with the constant change and risk on their customers’ behalf, these are all still valid points in general. I think he summed it up nicely in these two slides:

Challenges – Constant Change / Multi-tenancy / Roadmap / DR Planning

  • Last, but certainly not least, I want to give a shout-out and huge thanks to Saravana and the BizTalk 360 team for making the event happen. As Richard Broida pointed out, they really took one for the team today as well – ensuring that everyone would have time to share on a jam-packed day. The execution was spot-on for a really first-class event.

To Microsoft: Remember That BizTalk Server Connects Customers To The Cloud

As a final thought from the Integrate 2014 event: we’re constantly seeing Microsoft bang the drum of “Azure, Azure, Azure, cloud, cloud, cloud…” Personally, I love it; I fell in love with Azure in October of 2008 when Steve Marx showed up on stage at PDC and laid it all out. However, what we can’t forget, and what Microsoft needs to remember, is that any customer bringing their applications to the cloud is doing integration – and Microsoft’s flagship product for doing that integration, better than any other, is BizTalk Server.

BizTalk Server is how you get customers connected to the cloud – not in a wonky, disruptive way, but in a way that doesn’t necessarily require other systems to bend to either how the cloud works or how BizTalk works.

It’s a Good Day To Be a BizTalk Dev

These are good times to be a developer, and great times to be connected into the BizTalk Community as a whole. The next year is going to open up a lot of interesting opportunities, as well as empower customers to take control of their data (wherever it lives) and make it work for them.

I’m out for now. If you were at the conference and want to stick around town a little longer, I will be teaching the BizTalk Server Developer Deep Dive class at QuickLearn Training headquarters in Kirkland, WA this coming week. I’d love to continue the discussion there. That being said, you can always connect live to our classes from anywhere in the world as well!