Integrate 2016 Talk In Text: API Apps 101 for BizTalk Server Developers

By Nick Hauenstein

In this post, I am going to try to capture in text the presentation that I gave at the Integrate 2016 conference over in London. Text is likely the worst medium in which to capture such a session, but, alas, I do realize that sometimes it is the best medium for proper digestion of such content. If you’d rather see it in video form, click here.

So with that, let’s pretend that you are sitting among fellow professionals in a beautiful room on the 3rd floor of ExCeL London – complete with bright colored lights to set the mood. A wild American then appears, flailing his arms and babbling about how it’s actually 3 AM, and we’ve all been deceived. Then he starts talking about food.

Getting Our Priorities Straight

The world of software development might be a better place if we approached our tasks in that world the way that we approach each meal. We don’t really start each meal with a trip to the racks or shelves that hold our appliances – thinking, “Well, I have a vegetable peeler and a fondue pot; I guess that means we’re eating some melted Gruyère and Emmentaler mixed with white wine, and carrot strings for every meal.”

Utensils vs. Cravings + Ingredients

Usually, the way it actually works out is that I’m thinking about what I’m craving, the ingredients that I have on hand and their flavor/nutritional value relative to my needs. From there, I look to proven recipes that satisfy those things, and finally, reach for the specific tools needed to do the job. If I don’t have them, I acquire them, or fashion a workable approximation.

We have to be really careful that, when approaching software development and integration, we take the same approach as we would when crafting an excellent meal: an approach that looks first to the needs and constraints, then to proven patterns/recipes, and allows the tools used to flow from the rest – even to the point of crafting/buying new tools that we haven’t used before if necessary.

Slide - Priorities - Lunch values cravings and ingredients, then proven recipes, then tools. Integration should consider business challenges + constraints, proven patterns, and then finally tools.

Business Challenges / Constraints

So, let’s imagine that we all work together now. We want to take the approach outlined above – one in which we have to consider the business need and the constraints that we may very well simply be stuck with. From there, we can consider proven patterns that might help us overcome, and then finally identify/acquire/create the tools required to get the job done.

Our company makes custom bobbleheads.

Slide - Imagine that we make custom bobbleheads. Dan Rosanova bobblehead is pictured along with his wife BizTalk Server, also a bobblehead. Next to them a T-Rex bobblehead also bobs his head in honor of Sandro Pereira's stickers from the conference.

The way that it works is that a customer uploads a 3D model of their face, and then selects a pre-built body from the gallery. The 3D print of their face starts immediately so that the order can be shipped as soon as possible. The customer is permitted to take as long as they need to select the body from the gallery of pre-built bodies. Once they select a body, we attach the printed face to the chosen body and ship the assembled doll to the customer.

So what happens behind the scenes that we can’t escape?

Slide - What happens behind the scenes (diagram - text inline describes the diagram)

Well, sadly we don’t do greenfield development here. It’s not really brownfield development either. It’s more like a house haunted by ghost IT – shadow IT that has left.

Whenever a customer uploads their 3D face model, an XML notification message is created that contains the order id and a reference to their face model. At the time it was built, our developers emulated the BizTalk Server demos of the day and built distributed, fault-tolerant XML file copy operations, whilst applying the wisdom of Chris Maden, who has been quoted as saying, “XML is like violence. If it doesn’t solve your problem, you’re not using enough of it.”

These same developers learned while attending conferences over the past few years that Dropbox is the next big thing in enterprise integration. They may have been wrong, but it’s now up to us to Make Dropbox Great Again™.

Once the customer selects a body for their doll, another XML message is created and dropped into the same folder in Dropbox with the same file naming conventions. Ultimately, we need to pair up the two components of the order – the head and body – in order to complete the processing.

Proven Patterns

It’s at this point that we would consult the great oracles of all wisdom and knowledge in the world of integration, Gregor Hohpe and Bobby Woolf. We will search through the patterns to find those that solve each piece of the puzzle.

Which Patterns (slide)

Which patterns might we find? Well, to handle the communication with Dropbox, we might utilize the adapter pattern, in hopes that one day the data will be sourced from a different system. We could apply the pipes and filters pattern and build a translation/transformation pipeline made up of reusable, independent steps organized in the proper order to provide the required translation/transformation/message enrichment for each interface.

From there, we could apply the publish and subscribe pattern to enable loosely coupled communication between the source system and any number of downstream subscribers – maybe routing the message in a content-based fashion. We could also layer on top of this a process manager to enable content-based correlation.

Tools

How would we use these patterns in concert with the tools we have/don’t have? BizTalk Server might seem like a natural fit.

BizTalk Server Architecture (Slide)

It already provides for us the concept of a port that begins with an adapter, which delivers a byte stream to a pipes and filters style pipeline responsible for translation of the message and promotion of context properties used for routing; this is followed by a transform before publishing to the message box. From there, we have process managers in the form of BizTalk Orchestrations that understand the concept of content-based correlation of published messages, allowing them to be reunited by the messaging engine. You get adapters, pipelines, maps, and orchestration out of the box – and publish-subscribe whether you want it or not!

It’s already bringing to the table everything I need. Everything but a handy Dropbox adapter. Now, I know that we could always build our own, or use out of the box adapters with ungodly amounts of WCF extensions to make some magic happen, but maybe that won’t be our best bet here.

So, let’s set that aside for a moment, and consider what might become possible if we started using Logic Apps like this. It’s really the same question I posed before about MABS.

What if we did Logic Apps like this? (Slide)

In this case, we’re marrying Logic Apps and Service Bus. We have some Logic Apps that act in a similar fashion as BizTalk Ports. They provide adaptation and message enrichment and transformation through the use of relevant API apps for those concerns. Others act more like BizTalk Server orchestrations, coordinating the sends and receives of messages and operating on the content.

The messages are routed to the “orchestration” style Logic Apps through Service Bus. Each flow is triggered by subscribing to messages that arrive on a given Service Bus topic subscription (pre-created). Correlation can then be enabled by subscriptions dynamically created mid-flow.

At this point, you may have the following thought (which I humbly share indeed):


Demo Walkthrough

This isn’t all just a pipe dream – it’s real. I’ve built it. So, let’s see how it can fit together. The flow kicks off with an XML message. For this message, I have created a BizTalk Server 2016 schema (i.e., a regular XML schema with special notations about properties that should be promoted to the message context for routing purposes). The message looks like this:

First message

The message contains a promoted OrderId property that we should be able to correlate on. In other words, the second message that will show up in Dropbox for the order should also contain the same OrderId value – which allows us to determine that they are indeed related messages. The first message also contains a reference to the head for the bobblehead doll that we will be printing.
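To make that concrete, here’s a rough sketch of what such a message might look like – the element names and namespace are invented for illustration, not copied from the demo files:

    <ns0:PrintJob xmlns:ns0="http://bobbleheads.example/schemas/printjob">
      <OrderId>31337</OrderId>
      <HeadModelRef>models/31337-face.stl</HeadModelRef>
    </ns0:PrintJob>

In the schema, OrderId would be the element flagged for promotion.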

When this message is uploaded to Dropbox, it will be picked up by our “Port” style Logic App that looks like this:

Logic App - XmlIn-FILE

The first API App after the Dropbox receive is a custom API app that essentially builds a context property bag when it is passed an XML payload. It does this by comparing the document to a BizTalk schema, and using the instructions in the schema to “promote” properties by extracting the relevant content. It takes two inputs to operate.

The first input is a URL to the root of an Azure Blob Storage container that contains BizTalk schemas. It will use these schemas to perform message type resolution and property promotion. The second input is a string containing the entirety of an XML message. Not exactly the screaming performance of a forward-only streaming pipeline component, but it gets the job done, considering we’re already taking on latency to get to the cloud in the first place.
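If you’re wondering what the “promotion” actually involves, the essence is small enough to sketch in C#. This isn’t the demo code – the real API app resolves the message type and digs the promotion instructions out of the BizTalk schema annotations held in blob storage – but once you’ve reduced those annotations to a set of property-name/XPath pairs, the rest boils down to evaluating each XPath against the document:

    using System.Collections.Generic;
    using System.Xml.Linq;
    using System.Xml.XPath;

    public static class PropertyPromoter
    {
        // Evaluates a set of promotion instructions (property name -> XPath)
        // against an XML payload and returns the resulting property bag.
        // In the real API app, the XPaths come from BizTalk schema annotations.
        public static IDictionary<string, string> Promote(
            string xmlPayload,
            IDictionary<string, string> promotions)
        {
            var document = XDocument.Parse(xmlPayload);
            var properties = new Dictionary<string, string>();

            foreach (var promotion in promotions)
            {
                // Wrapping the XPath in string() yields the text of the first match.
                properties[promotion.Key] = (string)document.XPathEvaluate(
                    string.Format("string({0})", promotion.Value));
            }

            return properties;
        }
    }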

The output of that API app looks like this (click to enlarge):

Output of the ExtractPromotedProperties API App

The next API app takes the payload, along with the property bag (which it treats as a set of brokered message properties), and publishes the message to an Azure Service Bus Topic. This is just the out of the box connector using the outputs of our custom API app. The call out to that app looks like this:

Inputs to Service Bus Publish

This published message is picked up by our Logic App that is acting like an Orchestration. That Logic App has a pre-defined subscription on the same service bus topic for any message with a Message Type of Print Job.

Logic App: Print Process (Slide)

After the message is received, the Logic App must quickly set up a subscription for any related messages that come in for that order. Unfortunately, the out of the box connector for Service Bus doesn’t yet have a way to create a new subscription – only ways to subscribe for messages on an existing subscription.

Thus, we will have to use a custom API app to create a subscription unique to this running instance of the Logic App – one that is based on the OrderId property of the received message. To provide this capability, we have a custom API app called CreateInstanceSubscription.

It requires quite a few inputs to function since we don’t yet have the capability of reading details from a stored API connection in a custom API app.

Create Instance Subscription API App (Custom)

The API takes in a Correlation Property input, which contains the name of the property shared between the message that triggered the Logic App instance and the message that will be correlated with this running instance.

It also takes in the Message Type (in the namespace + # + root node name format) of the next expected message. Both of these properties will be used to create a new subscription on the service bus topic referenced by the last two configuration properties (Service Bus Topic, and Service Bus Connection String).
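Under the covers, an API app like this is mostly a thin wrapper around the Service Bus management API. A rough approximation follows – this is my sketch, not the demo code, and a production version would want to sanitize the filter inputs rather than concatenating them:

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    public static class InstanceSubscriptions
    {
        // Creates a subscription that matches only the correlated message for
        // this Logic App instance, and returns the generated subscription name
        // so the flow can later receive from it (and ultimately delete it).
        public static string Create(
            string connectionString, string topicPath,
            string correlationProperty, string correlationValue,
            string expectedMessageType)
        {
            var manager = NamespaceManager.CreateFromConnectionString(connectionString);
            var subscriptionName = Guid.NewGuid().ToString("N");

            // e.g., OrderId = '31337' AND MessageType = 'http://...#Body'
            // (the property names here are assumptions for illustration)
            var filter = new SqlFilter(string.Format(
                "{0} = '{1}' AND MessageType = '{2}'",
                correlationProperty, correlationValue, expectedMessageType));

            manager.CreateSubscription(
                new SubscriptionDescription(topicPath, subscriptionName), filter);

            return subscriptionName;
        }
    }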

After it executes, we might expect to see a subscription like the following (click to enlarge):

Create Instance Subscription API App - Created Subscription (Slide) 

Now that we have the subscription created, we can take our time with the rest of the process until we absolutely require the second message. In this case, we’re calling another custom API which provides a visualization of the received messages. In order to read the content, we can either use the xpath() function of Logic Apps to read the XML directly, or we can convert it to JSON first using the json() function, and then simply dot into it. I decided to use the json() function since I hadn’t attempted to use it in a situation like this yet. It was okay (though pretty darn verbose). xpath() would have been a better choice here – and the more natural choice given an XML payload.
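For illustration – using the invented PrintJob/OrderId names from earlier, and glossing over exactly how the XML content arrives from the Service Bus connector – the two approaches look roughly like this:

    Reading directly with xpath():
    @xpath(xml(triggerBody()), 'string(//*[local-name()="OrderId"])')

    Converting with json() first, then digging in:
    @json(xml(triggerBody()))['ns0:PrintJob']['OrderId']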


This yields the following visualization (a body-less bobblehead awaiting its correlated message containing information about which body to use):

Bobblehead visualizer output

And we would expect that there is both an instance subscription in Service Bus and a Logic App that is still actively processing – waiting for Service Bus to re-activate it with a new message.

Running Logic App

At this point, the new message can arrive at any moment in time. It will land in the same Dropbox folder, and process through the same Logic App serving as a “Port” – with the same XML property promotion, and Service Bus publishing action. It will land in the same Service Bus topic as before, and with a matching order id to the originally submitted message.

Second Message Submitted (Slide)

This second message carries some new information, however. In this case, it contains the body that the customer selected for their custom bobblehead doll.

Once published to the topic, the message will match the instance subscription previously created by the second Logic App in the process, and will be picked up by a listening Service Bus connector.

Service Bus Connector subscribing to 2nd message (slide) 

The connector uses the topic name and instance subscription name passed to it from the Create Instance Subscription API App. The name of the subscription will be a randomly generated id for that running instance of the Logic App.

Now that we have the message, it’s time to ensure that we don’t bring the problem of zombies into the world of Logic Apps. There is a step that follows which will clean up the instance subscription for the Logic App before continuing with the final bits of the process.

Delete Instance Subscription API (Slide)

Again, since the OOTB Service Bus connector does not contain any operations for managing subscriptions, the custom API App is called to delete the instance subscription using the details returned from the original call to the API which created it.
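The delete side is even thinner than the create side – something along these lines (again my sketch, not the demo code), using the same management API that created the subscription:

    using Microsoft.ServiceBus;

    public static class InstanceSubscriptionCleanup
    {
        // Removes the per-instance subscription once its one interesting
        // message has been received, so orphaned subscriptions (the Logic
        // Apps equivalent of zombies) don't pile up on the topic.
        public static void Delete(
            string connectionString, string topicPath, string subscriptionName)
        {
            var manager = NamespaceManager.CreateFromConnectionString(connectionString);

            if (manager.SubscriptionExists(topicPath, subscriptionName))
            {
                manager.DeleteSubscription(topicPath, subscriptionName);
            }
        }
    }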

After that, it’s back off to the bobblehead visualizer with the details from the correlated message received.

Last Step (Slide)

Final Result (Slide)

Call to Action

So that’s pretty cool. We can now stand with confidence and proclaim that content-based correlation is possible with Logic Apps! However, it was built out of necessity, and required custom crafted components – as is often the case with anything worth doing.

We needed custom components (slide)

You may be wondering why this talk was titled API Apps 101 for BizTalk Developers. I didn’t really tell you how to create API apps. Instead, I showed that API apps behave in a fashion similar to different components within BizTalk Server (adapters, pipeline components, orchestration shapes, etc…). I don’t want to leave you hanging though, because we are at a point in time where there is a golden opportunity to make your mark in the foundations of this new world.

This is the ground floor of Logic Apps and API Apps. As BizTalk Server developers, we know the required ingredients of enterprise integrations. We know the recipes for success. It’s just a matter of crafting some additional tools for use in the world of Logic Apps, and for the first time we have a unified marketplace to share and even sell these components.

From working on BizTalk Server integrations, we know that we will need custom API apps that can serve as adapters, pipeline components, and pattern utility apps (e.g., content-based correlators). In fact, you may have built such things before. It’s honestly not that difficult to port those things over into this new world of integration (where it makes sense) and reap the rewards. If you need inspiration, check out the listings of such components that have already been created for BizTalk Server. Each component represents a solution to a specific integration challenge – many of which are timeless challenges.

What Now? (Slide)

We write BizTalk components and API Apps in the same languages, though with different techniques, and targeting a different runtime.

How do we make that all happen? Well, today we are providing the world with “the goods”. All of the slides from this talk, a sample module from the February 2016 version of our Cloud-based Integration Using Azure App Service course, and all of the code involved in the demo. With those combined resources, you should be set on the right track to start building custom API Apps for use in Logic Apps – leveraging skills and work you’ve already accomplished.

If you’re ready to get started, click the image below to download the resources:

Slide46

Until Next Time

That’s all for now! Again, go forth and create API apps and come visit us in our Cloud-based Integration Using Azure App Service course if you’d like to learn more.

I’ll leave Simon Young with the final word – and a dining tip!

API Apps 101 for BizTalk Developers at Integrate 2016

By Nick Hauenstein

I’m happy to announce that I’ve been asked to speak at the BizTalk 360 Integrate 2016 conference May 11th-13th in London. When brainstorming for the session topics I had lots of ideas of fun things that could be accomplished with Logic Apps and API Apps, but decided to take things in a slightly different direction.

Over the last few years, I’ve seen BizTalk Server developers who are skilled as general .NET developers, but who may not have the time or energy to keep up to date with the evolution of Logic Apps, API Apps – and all of the things that come with them: Web API, Node.js, Swagger, etc. That is perfectly understandable, because as a developer building enterprise integrations exchanging X12 or EDIFACT data using BizTalk Server, you might not have needed to interact with JSON serialization in the past.

This Will Make Your Dreams Come True

This last year, my team and I have been working hard to stay on top of changes to Microsoft’s cloud-based integration technologies. Time and again we’ve fallen – and seen others fall as well – into the trap of looking at a tool and then trying to figure out how it can solve all of our problems, even to the point of searching out problems (imagined or otherwise) that it could tackle. It happens anytime there’s a sufficiently impressive tool, or a sufficiently impressive salesperson (or both). Go ahead, click the link, I’ll wait. But if you do click that link, you will end up trying to find out how you too could buy a 10-pack of vegetable peelers. Then you would be trying to figure out how to incorporate zucchini strands into dessert.

Getting Back on Track

Whenever that happens we’ve been able to course correct by forgetting about the tool and focusing on the problem we want to solve, or even better, the ideal solution to the problem. When describing the solution using Enterprise Integration Patterns as our vocabulary, we can quickly model an answer that works without assigning a particular tool or technology. Maybe a given solution needs a content enricher, or a resequencer, or guaranteed delivery, or whatever – not necessarily a specific technology applied.

As a BizTalk Developer, I know how to implement these patterns using BizTalk; the real question that we keep running into is: can we implement these patterns using API Apps and/or Logic Apps – and, better yet, should we?

Can != Should?

My session is designed to help BizTalk Developers learn how to answer those questions for themselves. Specifically, identifying the capabilities of the new additions to our toolset, and then showing how to use them without assuming in-depth knowledge of the underpinnings. In the talk, you will see an API App that creates promoted properties from an XML document (just like BizTalk Server) so that we could then potentially reach out to other Azure capabilities and implement publish-subscribe in concert with content-based routing and correlation.

Azure App Service Logic Apps Refresh

By Nick Hauenstein

Much has happened in the world of Logic Apps and API Apps since the original announcement back in December of 2014. We have seen the continued development of SaaS connectivity within the product, along with the overall expansion of integration capabilities. We have also seen the team responding to customer feedback actively while maintaining transparency in the process, and even providing a roadmap to give insight into what is coming and when we can expect to see the sweet moment that is GA.

Sometimes, customer feedback causes fairly large shifts in the underlying product. Such is the case seen in the latest updates for the product in the form of a completely overhauled designer, new feature support for triggering flows (i.e., any action can be a trigger), and an API deployment model that is more consistent with the rest of App Service and does not require a dedicated gateway.

New Designer

One of the most obvious changes that will stick out immediately as you go out to create a Logic App is the new designer that moved over into App Service from Power Apps.

New Logic Apps Designer

The new designer supports editing workflows built using the updated workflow language (schema version: 2015-08-01-preview), and sports a vertical layout rather than a horizontal one, with conditions that appear to wrap around actions instead of being embedded inside them (though the code view demonstrates that the underlying behavior is similar).
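To make the schema change concrete, here’s a minimal hand-written skeleton of a definition targeting the updated language – illustrative only, since the designer generates this for you:

    {
      "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2015-08-01-preview/workflowdefinition.json#",
      "contentVersion": "1.0.0.0",
      "parameters": { },
      "triggers": {
        "recurrence": {
          "type": "Recurrence",
          "recurrence": { "frequency": "Hour", "interval": 1 }
        }
      },
      "actions": {
        "checkStatus": {
          "type": "Http",
          "inputs": { "method": "GET", "uri": "https://example.com/api/status" }
        }
      },
      "outputs": { }
    }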


You might also notice that the experience of adding actions is much quicker, as this act no longer provisions a new instance of an API App within your own subscription. Instead, in an interesting reversal, Microsoft hosts managed instances of out-of-the-box API Apps. The result is that configuration information is sent as part of the request instead of stored inside the API App container, and you will have far more simplified ARM deployment templates, given that your deployment will no longer need to take into account each API App used by your Logic Apps.

So how do my own custom API Apps end up in the list? Well, you can apply to have them registered in the Azure Marketplace, or you can use the Http + Swagger action in order to point to a custom API App that already exists. Of course, that brings us to the question of what it looks like to actually build a custom API App in this refresh of the preview.

New API App Development Model

In the preview refresh, the process to develop and consume a custom API using the designer is quite a bit different. You still have the ability to use swagger extensions for a clean designer experience – but there are new extensions intended to take advantage of new designer capabilities. These capabilities include things like dynamic schemas for the parameters / return types of an API (imagine a different object shape depending on the type of entity within a CRM system, or a table within a database), and dynamic values for enumerations.
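As a taste of what those extensions look like, a parameter offering dynamic values might be annotated in the API’s swagger roughly like this (the operation and field names are invented):

    {
      "name": "table",
      "in": "query",
      "required": true,
      "type": "string",
      "x-ms-dynamic-values": {
        "operationId": "GetTables",
        "value-path": "name",
        "value-title": "displayName"
      }
    }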

The biggest change here, though, is that we no longer have a gateway managing authentication, internal storage, or configuration for our APIs – we get to manage that ourselves. As a side effect, we’re no longer constrained by where our APIs live – all APIs get the same first-class experience.

I would definitely recommend taking some time to read each link within this article before starting out on building a new API. I’m working on building out updates to T-Rex to help with the metadata – while also providing a few example APIs to take advantage of all of the new capabilities – but if you want a head start, the knowledge is out there!

New Triggering Capabilities

What other changes are under the hood that you should know about? Well, you may have noticed the announcement of the availability of Webhooks for Logic Apps for one, or even saw the x-ms-trigger extension called out in the article linked above. The end result is that any action within a Logic App can have a polling trigger style behavior, or even an async push style behavior, and the Logic App itself can be triggered manually at an endpoint that isn’t tied to a specific Azure subscription.
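In swagger terms, marking an operation as a polling trigger is roughly a one-line affair. In this sketch (operation name invented), a 200 response means “here’s data, fire the flow” and a 202 means “nothing yet, poll again later”:

    "/api/messages": {
      "get": {
        "operationId": "PollForMessages",
        "x-ms-trigger": "batch",
        "responses": {
          "200": { "description": "New messages - trigger the Logic App" },
          "202": { "description": "No new data - check back later" }
        }
      }
    }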

We can see some of these changes in action as we look at actions like the Send approval email action from the Office 365 connector/API. The action sends an email, and then notifies the Logic App of the response when it becomes available – without polling.

Office 365 send approval email metadata

It even includes the shape of the notification as part of the swagger metadata that is exposed, so that the designer can support using the shape of that async output in later steps. The result is that as a developer, I can use the action to build what looks like a synchronous flow without the complexity of an async flow, and yet I’m benefitting from the performance characteristics of the async implementation (i.e., immediate notification when the event happens rather than polling at a fixed or variable interval).

What Are We Doing About It?

Reading about all of this might have you wondering what QuickLearn Training is doing in response – and what you should do, too.

Well, I (Nick Hauenstein) am hard at work on an update to QuickLearn’s T-Rex metadata library that takes into account the new way to build API Apps. I’m on target to wrap up the core code by end of week, and hopefully have some decent sample apps out there shortly thereafter.

We’re all busy learning everything we can about the new functionality so that we can rapidly integrate those changes into our Cloud-Based Integration Using Azure App Service course.

In the meantime, keep an eye out for announcements from the BizTalk 360 folks about Integrate 2016 Europe. You might be able to meet up with me (Nick Hauenstein), Rob Callaway, or John Callaway to talk about BizTalk Server, or any of the things in this post. Also watch for the next release of T-Rex on NuGet, which will include support for all of the new goodies we have available in Logic Apps.

Until then, take care, and have fun building great things!

BizTalk Server’s Road Ahead for the Next Year

By Nick Hauenstein

I’m finally settling back into the swing of things as we kick off the year 2016! It has been quite a relaxing break, spending Christmas and New Year’s with my family out in the woods of Snohomish, WA. Since getting back to the office, I’ve been catching up on quite the backlog of emails. Among them was an email that called out a file uploaded to the Microsoft download site at the end of last month – the long-awaited BizTalk Server Roadmap for 2016, or should I say the Microsoft Integration Roadmap (more on that below).

Continued Commitment to BizTalk Server

The document opens up with a bullet-pointed summary of the core takeaways (I for one appreciate that it leads with the TL;DR):

  • Continuing commitment to BizTalk Server, with our 10th release of BizTalk Server in Q4 2016.
  • Expansion of our iPaaS vision to provide a comprehensive and compelling integration offering spanning both traditional and modern integration requirements. Preview refresh in January 2016 and General Availability (GA) in April 2016.
  • Deliver our iPaaS offering on premises through Logic Apps on Azure Stack in preview around Q3 2016 and GA around end of the year.
  • Strong roadmap and significant investments to ensure we continue to be recognized as a market leader in integration.
  • The next release of Host Integration Server is planned on the same timeline as BizTalk Server below.

BizTalk Server 2016 Roadmap

That’s right; 2016 is the year where we see Microsoft’s integration investments in the cloud start to pay dividends on-premises – with two complementary offerings that each take their own approach to solving integration challenges, while still ensuring that you can build mission-critical BizTalk Server integrations on the latest Microsoft platform. Though Microsoft is expanding the integration toolbox beyond just BizTalk Server, the focus is still firmly on integration, and the tools are built on proven platforms with a proven infrastructure.

BizTalk Server 2016 New Features

So what can we expect in BizTalk Server 2016?

  • Platform alignment – SQL 2016, Windows Server 2016, Office 2016 and latest release of Visual Studio.
  • BizTalk support for SQL 2016 AlwaysOn Availability Groups both on-premises and in Azure IaaS to provide high availability (HA).
  • HA production workloads supported in Azure IaaS.
  • Tighter integration between BizTalk Server and API connectors to enable BizTalk Server to consume our cloud connectors such as Salesforce.com and Office 365 more easily.
  • Numerous enhancements including
    • Improved SFTP adapter,
    • Improved WCF NetTcpRelay adapter with SAS support
    • WCF-SAP adapter based on NCo (.NET library)
    • SHA2 support
  • Host Integration Server “2016”
    • New and improved BizTalk adapters for Informix, MQ & DB2
    • Improvements to PowerShell integration, and installation and configuration

I don’t know about you, but I’m fairly excited to see this list. With the death of SHA1 certificates this year, it’s good to see SHA2 support finally coming to BizTalk Server 2016 – if for nothing else, then for SHA2 support alone, a BizTalk Server 2016 upgrade is going to be a must.

Also, notice the tighter integration between BizTalk Server and API connectors. That’s fantastic! One thing that Logic Apps do really well is provide friendly connectivity to SaaS endpoints. One thing they don’t do as well is content-based correlation and long-running transactions. One thing that BizTalk Server doesn’t do too well is provide friendly connectivity to SaaS endpoints (there is generic REST connectivity, but you’re going to wish that you had built/bought/downloaded an adapter once you start going down that road). One thing that BizTalk Server does really well is content-based correlation and long-running transactions. Here we’re seeing the best of Azure App Service Logic Apps meeting the best of BizTalk Server. That should make anyone happy.

An Integration Taxonomy

One interesting thing found in the roadmap is a brief discussion of an integration taxonomy that makes a distinction between “Modern Integration” – which is usually SaaS and web-centric, based in the cloud, and within the realm of Web and mobile developers — and “Enterprise Integration” – which includes support for industry standards (e.g., X12, EDIFACT, etc…), targets mission critical workloads, and caters more towards enterprise integration specialists.

In a way, this sets the context for the two core integration offerings of BizTalk Server and Logic Apps – defining the persona that might gravitate towards each. However, Logic Apps will offer an Enterprise Integration Pack for the pro developer that wants the power of BizTalk Server with the elasticity of a PaaS offering.

Where Is This Going?

Well, you might be reading this because you’re passionate about Logic Apps; you might be reading this because you’ve been working with BizTalk Server since the year 2000. Either way, you’re in the business of doing integration. Microsoft isn’t interested in building up cliques of developers, but instead in catering to all, while providing an easy-to-use, location-agnostic (cloud/on-prem), rock-solid, highly scalable platform for mission-critical integration.

The focus is on evolving capabilities. It doesn’t matter what brand name is slapped on the side (whether it’s Logic Apps, Power Apps, or BizTalk Server) – Microsoft is committed to making the world of enterprise integration a better place!

A Brief History of Cloud-Based Integration in Microsoft Azure

By Rob Callaway

Mission Briefing

In conversations with students and other integration specialists, I’m discovering more and more how confused some people are about the evolution of cloud-based integration technologies. I suspect that cloud-based integration is going to be big business in the coming years, but this confusion will be an impediment to us all.

To address this I want to write a less technical, very casual, blog post explaining where we are today (November of 2015), and generally how we got here. I’ll try to refrain from passing judgement on the technologies that came before and I’ll avoid theorizing on what may come in the future. I simply want to give a timeline that anyone can use to understand this evolution, along with a high-level description of each technology.

I’ll only speak to Microsoft technologies because that’s where my expertise lies, but it’s worth acknowledging that there are alternatives in the marketplace.

If you’d like a more technical write-up of these technologies and how to use them, Richard Seroter has a good article on his blog that can be found here.

On the First Day, Microsoft Created Azure

Way, way back in October of 2008 Microsoft unveiled Windows Azure (although it wouldn’t be until February of 2010 that Azure went “live”). On that first day, Azure wasn’t nearly the monster it has become.

It provided a service platform for .NET services, SQL Services, and Live Services. Many people were still very skeptical about “the cloud” (if they even knew what that meant). As an industry we were entering a brave new world with many possibilities.

From an integration perspective, Windows Azure .NET Services offered Service Bus as a secure, standards-based messaging infrastructure.

What’s the Deal with Service Bus?

Over the years, Service Bus has been rebranded several times but the core concepts have stayed the same: reduce the barriers for building composite applications, even when their components have to communicate across organizational boundaries. Initially, Service Bus offered Topics/Subscriptions and Queues as a means for systems and services to exchange data reliably through the cloud.

Service Bus Queues are just like any other queueing technology. We have a queue to which any number of clients can post messages. These messages can be received from the queue later by some process. Transactional delivery, message expiry, and ordered delivery are all built-in features.

Sample Service Bus queue
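If you haven’t touched the .NET client before, the programming model is pleasantly small. A minimal sketch (the queue name is invented, the queue is assumed to already exist, and error handling is omitted):

    using System;
    using Microsoft.ServiceBus.Messaging;

    public static class QueueSample
    {
        public static void SendAndReceive(string connectionString)
        {
            // Any number of clients can post messages to the queue...
            var client = QueueClient.CreateFromConnectionString(connectionString, "orders");
            client.Send(new BrokeredMessage("New bobblehead order"));

            // ...and some process receives them later. Completing a message
            // removes it from the queue.
            BrokeredMessage received = client.Receive();
            if (received != null)
            {
                Console.WriteLine(received.GetBody<string>());
                received.Complete();
            }
        }
    }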

I like to call Topics/Subscriptions “smart queues.” We have concepts similar to queues with the addition of message routing logic. That is, within a Topic I can define one or more Subscription(s). Each Subscription is used to identify messages that meet certain conditions and “grab” them. Clients don’t pick up messages from the Topic, but rather from a Subscription within the Topic. A single message can be routed to multiple Subscriptions once published to the Topic.

Sample Service Bus Topic and Subscriptions
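And here’s the “smart queue” version of the same sketch – one topic, two subscriptions, with a SqlFilter doing the routing (the names and filter are invented, and the entities are assumed not to exist yet):

    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    public static class TopicSample
    {
        public static void PublishAndSubscribe(string connectionString)
        {
            var manager = NamespaceManager.CreateFromConnectionString(connectionString);

            // One topic, two subscriptions hanging off of it.
            manager.CreateTopic("orders");
            manager.CreateSubscription("orders", "all-orders");   // grabs everything
            manager.CreateSubscription("orders", "big-orders",
                new SqlFilter("Quantity > 100"));                 // grabs a subset

            // A message published to the topic...
            var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders");
            var message = new BrokeredMessage("order payload");
            message.Properties["Quantity"] = 250;
            topicClient.Send(message);

            // ...is routed to every subscription whose filter it matches, and
            // clients receive from a subscription rather than from the topic.
            var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
                connectionString, "orders", "big-orders");
            var received = subscriptionClient.Receive();
            if (received != null) received.Complete();
        }
    }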

If you have a BizTalk Server background, you can essentially think of each Service Bus Topic as a MessageBox database.

Interacting with Service Bus is easy to do across a variety of clients using the .NET or REST APIs. With the ability to connect on-premises applications to cloud-based systems and services, or even connect cloud services to each other, Service Bus offered the first real “integration” features to Azure.

Since its release, Service Bus has grown to include other messaging features such as Relays, Event Hubs, and Notification Hubs, but at its heart it has remained the same and continues to provide a rock-solid foundation for exchanging messages between systems in a reliable and programmable way. In June of 2015, Service Bus processed over 1 trillion (1,000,000,000,000) messages! (Starts at 1:20)

What About VETRO?

As integration specialists we know that integration problems are more complex than simply grabbing some data from System A and dumping it in System B.

Message transport is important but it’s not the full story. For us, and the integration applications we build, VETRO (Validate, Enrich, Transform, Route, and Operate) is a way of life. I want to validate my input data. I may need to enrich the data with alternate values or contextual information. I’ll most likely need to transform the data from one format or schema to another. Identifying and routing the message to the correct destination is certainly a requirement. Any integration solution that fails to deliver all of these capabilities probably won’t interest me much.

VETRO Diagram

So, in a world where Service Bus is the only integration tool available to me, do I have VETRO? Not really.

I have a powerful, scalable, reliable, messaging infrastructure that I can use to transport messages, but I cannot transform that data, nor can I manipulate that data in a meaningful way, so I need something more.

I need something that works in conjunction with this messaging engine.

You Got Your BizTalk in My Cloud!

Microsoft’s first attempt at providing a more traditional integration platform that provided VETRO-esque capabilities was Microsoft Azure BizTalk Services (MABS) (to confuse things further, this was originally branded as Windows Azure BizTalk Services, or WABS). You’ll notice that Azure itself has changed its name from Windows Azure to Microsoft Azure, but I digress.

MABS was announced publicly at TechEd 2013.

Despite the name, Microsoft Azure BizTalk Services DOES NOT have a common code-base with Microsoft BizTalk Server (on second thought, perhaps the EDI pieces share some code with BizTalk Server, but that’s about all). In the MABS world we could create itineraries. These itineraries contained connections to source and destination systems (on-premises & cloud) and bridges. Bridges were processing pipelines made up of stages. Each stage could be configured to provide a particular type of VETRO function. For example, the Enrich stage could be used to add properties to the context of the message travelling through the bridge/itinerary.

Stages of a MABS Bridge

Complex integration solutions could be built by chaining multiple bridges together using a single itinerary.

MABS message flow

MABS was our first real shot at building full integration solutions in the cloud, and it was pretty good, but Microsoft wasn’t fully satisfied, and the industry was changing its approach to service-based architectures. Now we want Microservices (more on that in the next section).

The MABS architecture had some shortcomings of its own. For example, there was little or no ability to incorporate custom components into the bridges, and a lack of connectors to source and destination systems.

Give Me Those Sweet, Sweet Microservices

Over the past couple of years the trending design architecture has been Microservices. For those of you who aren’t already familiar with it, or don’t want to read pages of theory, it boils down to this:

“Architect the application by applying the Scale Cube (specifically y-axis scaling) and functionally decompose the application into a set of collaborating services. Each service implements a set of narrowly related functions. For example, an application might consist of services such as the order management service, the customer management service etc.

Services communicate using either synchronous protocols such as HTTP/REST or asynchronous protocols such as AMQP.

Services are developed and deployed independently of one another.

Each service has its own database in order to be decoupled from other services. When necessary, consistency between databases is maintained using either database replication mechanisms or application-level events.”

So the shot-callers at Microsoft see this growing trend and want to ensure that the Azure platform is suited to enable this type of application design. At the same time, MABS has been in the wild for just over a year and the team needs to address the issues that exist there. MABS itineraries are deployed as one big chunk of code, and that does not align well with the Microservices way of doing things. Therefore, we need something new but familiar!

App Service, and API Apps, and Logic Apps, Oh My!

Azure App Service is a cloud platform for building powerful web and mobile apps that connect to data anywhere, in the cloud or on-premises. Under the App Service umbrella we have Web Apps, Mobile Apps, API Apps, and Logic Apps.

Azure App Service

I don’t want to get into Web and Mobile Apps. I want to get into API Apps and Logic Apps.

API Apps and Logic Apps were publicly unveiled in March of 2015, and are currently still in preview.

API Apps provide capabilities for developing, deploying, publishing, consuming, and managing RESTful web APIs. The simple, less sales-pitch sounding version of that is that I can put RESTful services in the Azure cloud so I can easily use them in other Azure App Service-hosted things, or call the API (you know, since it’s an HTTP service) from anywhere else. Not only is the service hosted in Azure and infinitely scalable, but Azure App Service also provides security and client consumption features.

So, API Apps are HTTP / RESTful services running in the cloud. These API Apps are intended to enable a Microservices architecture. Microsoft offers a bunch of API Apps in Azure App Service already and I have the ability to create my own if I want. Furthermore, to address the integration needs that exist in our application designs, there is a special set of BizTalk API Apps that provide MABS/BizTalk Server style functionality (i.e., VETRO).

What are API Apps?

This is all pretty cool, but I want more. That’s where Logic Apps come in.

Logic Apps are cloud-hosted workflows made up of API Apps. I can use Logic Apps to design workflows that start from a trigger and then execute a series of steps, each invoking an API App whilst the Logic App run-time deals with pesky things like authentication, checkpoints, and durable execution. Plus it has a cool rocket ship logo.

What are Logic Apps?

Putting the Pieces Together

What does all this mean? How can I use these Azure technologies together to build awesome things today?

Service Bus review

Service Bus provides an awesome way to get messages from one place to another using either Queues or Topics/Subscriptions.

API Apps are cloud-hosted services that do work for me. For example, hit a SaaS provider or talk to an on-premises system (we call these connectors), transform data, change an XML payload to JSON, etc.

Logic Apps are workflows composed of multiple API Apps. So I can create a composite process from a series of Microservices.

Logic App review

But if I were building an entire integration solution, breaking the process across multiple Logic Apps might make great sense. So I use Service Bus to connect the two workflows to each other in a loosely-coupled way.

Logic Apps and Service Bus working together

And as my integration solution becomes more sophisticated, perhaps I have need for more Logic Apps to manage each “step” in the process. I further use the power of Topics to control the workflow to which a message is delivered.

More Logic Apps and Service Bus Topics provide a sophisticated integration solution

In the purest of integration terms, each Logic App serves as its own VETRO (or subset of VETRO features) component. Decomposing a process into several different Logic Apps and then connecting them to each other using Service Bus gives us the ability to create durable, long-running composite processes that remain loosely-coupled.

Doing VETRO using Service Bus and Logic Apps

Summary

Today Microsoft Azure offers the most complete story to date for cloud-based integration, and it’s a story that is only getting better and better. The Azure App Service team and the BizTalk Server team are working together to deliver amazing integration technologies. As an integration specialist, you may have been able to ignore the cloud for the past few years, but in the coming years you won’t be able to get away with it.

We’ve all endeavored to eliminate those nasty data islands. We’ve worked to tear down the walls dividing our systems. Today, a new generation of technologies is emerging to solve the problems of the future. We need people like you, the seasoned integration professional, to help direct the technology, and lead the developers using it.

If any of this has gotten you at all excited to dig in and start building great things, you might want to check out QuickLearn Training’s 5-day instructor-led course detailing how to create complete integration solutions using the technologies discussed in this article. Please come join us in class so we can work together to build magical things.

Integration Monday Recap and Push-Button Push Trigger Introduction

By Nick Hauenstein

This blog post serves as a quick recap of and expansion on my October 19th Integration Monday talk titled Building Push Triggers for Logic Apps. You can view the session and look through the slides over at integrationusergroup.com.

Building Push Triggers for Logic Apps

In the talk, I explored the bare minimum requirements for building push triggers, expanding on my AzureCon 2015 talk about a specific push trigger for dealing with NFC tag reads. I showed how you could use the QuickLearn Push Trigger Tools and QuickLearn Push Trigger Client Tools to implement a simple interface for storing callbacks, and build a re-usable set of callback storage mechanisms.
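To give you a flavor of what I mean by “a simple interface for storing callbacks,” it boils down to something like this hypothetical rendering (not the literal QuickLearn interface – grab the real tools for that):

    using System.Threading.Tasks;

    // A callback store persists the callback that a Logic App registers with
    // the push trigger, so that the event source can retrieve it later and
    // invoke it when something interesting happens.
    public interface ICallbackStore<T>
    {
        Task StoreCallbackAsync(string triggerId, T callback);
        Task<T> RetrieveCallbackAsync(string triggerId);
        Task DeleteCallbackAsync(string triggerId);
    }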

I also introduced the Push-Button Push Trigger: a push trigger that responds to a button press on a Windows 10 IoT device (in this case a Raspberry Pi 2), relying on Azure Storage for callback storage. In the remainder of this post, I’m going to show you how to get your own Push-Button Push Trigger up and running.

Where Do I Get a Push-Button Push Trigger?

At the moment, there are 2 ways that you can get one. You can come out to QuickLearn’s 5-day Cloud-Based Integration using Azure App Service course (or attend remotely), or you can build one for yourself!

Push-Button Push Trigger for Logic Apps

Even if you’ve never worked with anything like this before — don’t panic. You can’t really get something simpler than this.

Essentially you’ll need a Windows 10 IoT device (Raspberry Pi 2, DragonBoard 410C, MinnowBoard Max, etc…), a momentary switch (button), some wiring to wire the button up to a GPIO port and ground, and optionally a breadboard for even more fun later. I went with a Raspberry Pi 2 for mine, but if I could do it again, I would have chosen a DragonBoard 410C given its built-in Wi-Fi capabilities that don’t require an additional accessory or external module.

To get started with developing, you will need some software on your own machine (Visual Studio, Windows 10 IoT Project Templates, etc…) and you will need to get Windows 10 IoT Core onto your device. Microsoft has provided a pretty decent write-up of that part of the procedure over here.

Assembling Your Push-Button Push Trigger

You may or may not have a case to go along with your Push-Button Push Trigger. I ended up buying the cheapest case I could find on the internet. This was likely a poor choice as it quickly disintegrated and pieces chipped off. Since then, I’ve had a lot better luck with this one. Of course, you could always print/build your own custom enclosure as well.


First things first, make sure your device has Windows IoT by inserting a prepared MicroSD card into the device.


Next, place your device in its case (if applicable). Mine has a mini-breadboard mounted on top for easier portability of the device.


Raspberry Pi 2 in Enclosure

Now, on to the wiring. We’re going to run one wire to GPIO pin 4 (chosen randomly), and another to ground. Eventually we’re going to put a momentary switch in between so that we can quickly toggle that connection between high/low.

Wiring up to GPIO 4 and Ground

Let’s get that switch up on the breadboard (and make sure we put on a nice and colorful cover).

Switch on the breadboard

You can see in the image above how the posts are reaching down into the board. To connect wires to those pins, you will simply plug in one of the jumper wires to the same row as one of the pins.

Wiring up the switch

One connection down, one to go.

All wired up

At this point, everything is wired up, and it’s time to get power and internet to the device.

Completed Push-Button Push Trigger

How Do I Make The Sample Code Work?

First of all, you can find the sample code over at https://github.com/nihaue/PushButtonPushTrigger. You can either download it as a ZIP file if you’re not comfortable with Git, or if you are comfortable with Git, you can clone it directly from here: https://github.com/nihaue/PushButtonPushTrigger.git

Once you have the sample downloaded, you should immediately Build the code to restore NuGet packages, and make all of the references happy. Next, you should take some time to look through the CallbacksController class for the Push Trigger (the part actually hosted in Azure with which the Logic App registers its interest in certain data), and the StartupTask class for the Universal Windows App (the part that actually looks for and handles the button press):

Interesting Classes within the Push-Button Push Trigger Solution
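To give you a feel for the device-side half: stripped way down, the StartupTask’s button handling looks something like this sketch (GPIO pin 4 from the wiring above; InvokeCallbacksAsync stands in for the sample’s logic that reads the stored callbacks and invokes them):

    using System;
    using System.Threading.Tasks;
    using Windows.ApplicationModel.Background;
    using Windows.Devices.Gpio;

    public sealed class StartupTask : IBackgroundTask
    {
        private BackgroundTaskDeferral _deferral;
        private GpioPin _buttonPin;

        public void Run(IBackgroundTaskInstance taskInstance)
        {
            // Keep the background task alive after Run() returns.
            _deferral = taskInstance.GetDeferral();

            // GPIO 4 is wired to the momentary switch; pressing it pulls the
            // pin low, so we watch for the falling edge.
            _buttonPin = GpioController.GetDefault().OpenPin(4);
            _buttonPin.SetDriveMode(GpioPinDriveMode.InputPullUp);
            _buttonPin.DebounceTimeout = TimeSpan.FromMilliseconds(50);
            _buttonPin.ValueChanged += OnButtonValueChanged;
        }

        private async void OnButtonValueChanged(
            GpioPin sender, GpioPinValueChangedEventArgs args)
        {
            if (args.Edge == GpioPinEdge.FallingEdge)
            {
                // Read the registered callbacks from Azure Table Storage and
                // POST to each one, firing any Logic Apps that are waiting.
                await InvokeCallbacksAsync();
            }
        }

        private Task InvokeCallbacksAsync() => Task.CompletedTask; // placeholder
    }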

After you have a decent understanding of what’s going on, you’ll realize that the CallbacksController is storing callbacks from interested Logic Apps in Azure Table Storage, and the StartupTask (think background service on the device) is reading callbacks from Azure Table Storage when the button is pressed (moving this code to initialization and caching/polling for updates would be a better choice – and something you’re free to implement). So in order to get this thing working, you’re going to need an Azure Storage account.

If you don’t already have an Azure Storage account, head over to the Azure Portal and create one.

Creating an Azure Storage Account

The only thing you need from this storage account will be the connection string, which you can find after it’s created over here:

Getting the Storage Account Credentials

With those credentials in hand, you’ll need to visit the two files in the solution responsible for storing configuration. They’re both named AzureStorageConfig.cs.

Locating AzureStorageConfig.cs

Inside that file, you will see a line of code with a TODO comment indicating that you should paste your connection string for your Azure Storage account in that location. This is indeed your next step (make sure to do it in both the code for the API App that lives in Azure, and the code for the device itself).

Configuration Location

Ultimately, this is a terrible way to handle configuration. You can get the sample working with a simple copy/paste in that file, but the intent is that you would simply decide for yourself how you’d like to manage the configuration and creation of the CloudStorageAccount instance, and make that instance available through the StorageAccount property of the AzureStorageConfig class. This instance is used in both the AzureStorageCallbackStore and the AzureStorageClientCallbackStore classes.
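For example, on the API App side you might read the connection string from app settings rather than hard-coding it – a sketch along these lines (the setting name is mine, and the UWP project on the device would need a different mechanism):

    using System.Configuration;
    using Microsoft.WindowsAzure.Storage;

    public static class AzureStorageConfig
    {
        // Reads the connection string from configuration (an app setting on
        // the Azure Web App hosting the API App) instead of embedding it.
        public static CloudStorageAccount StorageAccount
        {
            get
            {
                return CloudStorageAccount.Parse(
                    ConfigurationManager.AppSettings["StorageConnectionString"]);
            }
        }
    }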

Publishing the Code to Azure

We’re now ready to get this code all in place and running. The first step toward that goal will be to publish the API App project. You can do this by right-clicking the QuickLearn.ButtonPress.PushTrigger project, and then clicking Publish.

Publishing the QuickLearn.ButtonPress.PushTrigger project

Make sure to select Microsoft Azure API Apps (Preview) as the target.


In the Microsoft Azure API Apps window, select your Azure Subscription, and then click New… Fill out the form to create a new API App container into which you can deploy your code.

Creating a New API App Container

Once the creation of the container is complete, you will see the following status message appear in Visual Studio.

API App Provisioned

Then, you will once again right-click the project and then click Publish… This time, the form will be pre-filled with the settings from the publish profile of the Azure API App container that you just provisioned. You might find it helpful to deploy the Debug configuration of your API App (Settings > Configuration > Debug – Any CPU), but that is entirely up to you. Once you click Publish, your code will be deployed to the API App container, and the API App will be usable within a Logic App.

Publishing the API App

Next up, we will deploy code to the device, and configure it to run in the background.

Deploy the Code to the Device

First of all, you will need to edit the project properties for the QuickLearn.ButtonPress.App project so that it attempts deployment to the correct device. In this case, that will mean navigating to the Debug tab, setting the Target device to Remote machine, setting the Remote machine to the name of your Windows IoT Core device (default: minwinpc), and then unchecking the Use authentication box.

Configuring Project Properties for Deployment

You will want to make sure to save the project properties and get your device connected to the same network as your development machine (a laptop in my case). Next, you can right-click QuickLearn.ButtonPress.App, and click Deploy.


Once deployment is complete, head over to the Windows IoT Core Watcher utility that ended up on your system after installing everything you needed to get your device set up initially. If you can’t find it, reboot your system and it will be there waiting for you. The Windows IoT Core Watcher utility finds IoT Core devices on your network and provides quick links to gain access and configure them.

In the utility, right-click your device, and then click Web Browser here.

Windows IoT Core Watcher

Log in using your user name and password (the default is Administrator / p@ssw0rd).

Next, head over to the Apps tab, and verify that QuickLearn.ButtonPress shows up in the list. You will want to note the full name as it appears here because you will need it in a few minutes.

QuickLearn Button Press App Installed

Since this app was created as a Startup Task rather than as a graphical application, you will need to register it with the device to be run on startup. At the moment, this is not something that you can accomplish in the browser. Instead, you will need to fire up PowerShell for this next bit.

In PowerShell, you will need to enter a remote session on your device. You can do this using the Enter-PSSession cmdlet like this:

Enter-PSSession on Raspberry Pi 2
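If you’ve not remoted into an IoT Core device before, the commands look roughly like this (run from an administrator PowerShell session, substituting your device name if you’ve changed it from the default):

    net start WinRM
    Set-Item WSMan:\localhost\Client\TrustedHosts -Value minwinpc
    Enter-PSSession -ComputerName minwinpc -Credential minwinpc\Administrator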

The connection process will take a while. Just get a cup of coffee, and when you have it ready, the session should be connected. Once connected, you are in PowerShell on the device, and are executing commands against the device (not your own local machine).

On the device is a utility called iotstartup. This utility provides access to configure what tasks run at device startup. In this case, you want to configure the device to constantly be running the Push-Button Push Trigger code in the background.

iotstartup usage

At the prompt, type iotstartup add headless “QuickLearn.ButtonPress.*”

Add Headless Startup Task

Verify that what the command added matches exactly what appeared in the list on the device web page that you examined earlier. Then, at the prompt, type shutdown /r /t 0

This will cause the device to reboot and your application to start up. It may take 60-90 seconds for the reboot to complete.

Building and Testing a Logic App using the Push-Button Push Trigger

In the Azure Portal, create a new empty Logic App in the same resource group in which you deployed the Push Button Push Trigger API App (otherwise the API App won’t be available to select in the designer). In the Logic App designer, in the API Apps pane (which you may have to expand in order to see), click QuickLearn.ButtonPress.PushTrigger.

Push Trigger in the Designer

Configure the Push Trigger as shown, and then click the green check mark to save the settings.

Push Trigger Configuration

After the push trigger, add any other actions to your Logic App that you wish. Maybe this triggers a build in TFS, maybe it connects to a device that opens a door, maybe it brews you coffee remotely, maybe it posts a message in a chat service, maybe it closes out the latest support ticket that you were working on in your help desk system – it’s up to you. For me, I’m going to add a simple HTTP action (since it’s built into the runtime), and have it POST a message to a requestb.in indicating that the button has been pressed.

Completed Logic App

Save the Logic App, and it’s all ready to go!

PUSH THE BUTTON

There’s only one thing left to do – push the button. If everything has been set up correctly, the Logic App’s callback should be invoked and magic should happen in the cloud.

End Result

What If It Didn’t Work?

Well, there are a few troubleshooting things you can do. Using the Cloud Explorer window (part of the Azure SDK) in Visual Studio, you can navigate to your API App, right-click, and then click Attach Debugger. You can set breakpoints within the callback registration method of the controller class, and step through looking for problems as the Logic App registers the callback. This only happens when the trigger is first added to the Logic App (after clicking Save), and then every hour or so after that (assuming it worked on the first try).

You can see past registrations of the callback by navigating to the trigger history for the Logic App. If you see a string of failures there, it’s likely a bug in the callback registration code, or a problem with your storage account credentials.


Clicking any one of those line items will bring up the details (inputs/outputs) for debugging. If you want to attempt a callback registration manually (so that you can do it on demand), you can use the Swagger UI page for the API App, and manually fire the callback registration method.

Using Swagger UI to debug Logic App Push Trigger

The above screenshots were generated by replacing the configuration details for the Cloud Storage account with completely invalid data.

If everything looks good as far as the API App in Azure is concerned, you may want to debug the Windows IoT Core task from within Visual Studio. This can be done by right-clicking the project and then clicking Start Debugging (nothing special there).

The End

That’s all for now. Stay tuned for more samples and course updates!

Azure App Services Training is Awesome!

By John Callaway

If you weren’t one of the students that attended the recent Cloud-Based Integration Using Azure App Service class offered by QuickLearn, you really missed out. I was able to attend and found the experience very informative.

Some technologies lend themselves to simply picking up a book and reading about how they work. BizTalk Server has never been one of those products, and it looks like, for the foreseeable future at least, Azure App Service and Logic Apps are going to fall into that same category. It’s a good thing that Rob Callaway and Nick Hauenstein are braving the front lines to create and deliver quality training for all of us that are too busy to keep up with the rapid changes in Azure.

About the Class

This class was delivered by Rob Callaway, one of the best BizTalk and now Azure App Service instructors in the world! This three-day class had an eclectic international audience with people traveling from Canada and Europe to attend the course.

As you can tell from the overview, this class is jam-packed with everything that you need to prepare to build integration solutions using Microsoft’s newest addition to Azure App Service, Logic Apps. Since there is so much that goes into creating a Logic App, the class feels a bit like a snowball rolling downhill: it starts small, but as it progresses the knowledge you gain becomes almost overwhelming. The labs ensure that you don’t get lost in the cloud (pun IS intended) by providing a rich hands-on experience to match the excellent lecture.

As an integration specialist, I felt very comfortable with the early concepts. By the time we got into day two, everything was new as we built first simple and then more complex Logic Apps.

For the uninitiated, a Logic App is composed of triggers and actions, which are themselves API Apps. These API Apps are in turn Web Apps that perform some simple function. This whole thing is hosted in Azure. When strung together, these pieces make a Logic App very powerful, providing capabilities similar to BizTalk orchestrations.

We didn’t just explore Microsoft Azure App Service – we also learned how to integrate with Microsoft Azure Service Bus as a reliable and persistent store for inbound and outbound data, and identified the role that Microsoft Azure BizTalk Services (MABS) plays in cloud-based integration. By the end of the course the participants were even able to build their own custom API Apps – no small feat!

The goal of the course, one that I think all the participants would agree was achieved, is to provide the best training possible on these evolving technologies. QuickLearn Training is able to deliver on this goal because we have spent the last two years digging through the sometimes scant documentation and Microsoft presentations to find the golden acorns of knowledge that we happily share with our customers.

Special Guests

One of the benefits of our close relationship with the product team and our proximity to the Microsoft campus is that from time to time we have special visitors. We appreciated Mark Mortimore and Jeff Hollan taking time out of their busy schedules to drop by on Thursday evening for a meet and greet with the students. Students provided Jeff with some great feedback on features in Logic Apps, and they even convinced Jeff to take a BizTalk Server course. While we don’t always get our friends at Microsoft to visit, when we do it’s exciting and fun.

To wrap up the class on Friday we had a special guest appearance by our own Nick Hauenstein, where he previewed his Creating a Push Trigger API App to Process NFC Tag Reads demonstration that he will be delivering at the upcoming AzureCon 2015 on September 29th.

What The Participants Are Saying

Rob did a great job helping the attendees navigate the minefield (or maybe, given the mental challenge, that should read MIND field) of creating and configuring Microsoft Azure Logic Apps.

Some of the feedback that we received from the attendees of this class:

…the QuickLearn materials were flawless and perfectly adapted to the objective of the course…I think that the learning environment is close to perfect and I’m having a hard time thinking of anything that should be changed.

The pace of his speech is very easy and pleasant to follow. Important points are made and repeated, often with humor, which is yet another demonstration that Rob masters his topic and enjoys sharing his knowledge.

This class probably needs to be at least 4 days if not a week.  Need more time to complete labs.

Announcement

With an evolving set of technologies such as this, additions were inevitable between the initial deliveries of this course and the most recent one. With this new content, the class simply will not fit into the three-day time-box that we initially allotted. As a result of these additions, and the feedback that we got from the recent class, we are excited to announce that the Cloud-Based Integration Using Azure App Service class is being extended to five days!

Your first opportunity to attend this new expanded version of the class is November 30th. As with all QuickLearn Training classes, it is offered for remote attendance if you prefer, but of course you are all invited to attend at our state-of-the-art facilities in Kirkland, Washington as well.

I predict that within one year, your customers will be asking you about cloud-based integration. Wouldn’t you rather be the one that already knows the answers, with several months’ experience under your belt?

If you are worried that new features will be added that you miss out on by being an early adopter, QuickLearn Training always offers the opportunity for students to retake any class within six months, thus future-proofing your learning. As new features are added to these technologies you can bet that we will do our best to stay on top of the changes so that we can share that knowledge with you.

So indeed, if you weren’t one of the students that attended the recent Cloud-Based Integration Using Azure App Service class offered by QuickLearn, you really missed out – but you have another chance to mend your ways and get an improved and lengthened version of the course. Don’t miss out on this great opportunity.

Creating a Push Trigger API App to Process NFC Tag Reads

By Nick Hauenstein

This post is designed to serve as a companion to my AzureCon 2015 talk titled Processing NFC Tag Reads in a Logic App. As a result, I will be recapping the talk and digging a little bit deeper into the code behind the talk. If you’d like to jump straight into the code, you can find the completed source code here, or head down to the section titled Writing the Device Code.

By the time we’re through, we’ll see what it would look like to build a solution that reads vCard data (virtual business cards) from conference attendee badges, and uses those tag read events to trigger a Logic App which imports that information as Sales Lead records into both SugarCRM and Salesforce. Are you ready?

Let’s Start with a Story

Let’s imagine for a moment that you are working for a hot tech start-up and you’re showing off the full capabilities of your products at a trade show. You’re one of many booths on the exhibition floor hoping to have the opportunity to connect with customers so that you can tell them how their lives could be better with your products. What might those customer interactions look like?

Typical Tech Conference Exhibitor Experience

You’re going to have attendees approaching the booth, not quite knowing what to expect – maybe hoping to grab some free swag. You might make eye contact, strike up a conversation, and mutually discover that they would benefit greatly from having your product in their lives. So you say “Hey can I scan your conference badge? Not only will I be able to get in touch with you later this week, but you’ll also be entered to win our super awesome contest!” At this point, everything looks really slick as you quickly tap their badge with a mobile device and their data is zapped into position.

But what really happens?

Technical conference exhibitor experience

Well, in a lot of cases, after the conference, the data is removed from the scanning devices, loaded up into a CSV file, and returned to the exhibitors via email and/or through a dedicated purpose-built website. From there, it is up to mere mortal human beings to forget that data for a few days, before eventually downloading the attachment, navigating to the CRM website of choice (Dynamics, Salesforce, SugarCRM, etc…), and finally uploading the data where it actually belongs — just in time for it to be too late to make the sale. Your customer has gone with the competitors and will lead a far less fulfilling life.

There’s something a little barbaric in all of this. There’s a lot of ceremony going on over not a lot of bytes of data. While it looks really cool to the customer on the trade show floor, the experience doesn’t translate the second you’ve crammed your life back into two suitcases waiting for the moment that you can be home so that the trees can just stand still for a while.

There must be a better way.

A few years back, at the ALM Summit 3 conference in Redmond, WA, James Whittaker asserted that a new era of computing had begun – the Know and Do Era.

Three eras of computing

Before this era, we had the Store and Compute era, which drove how we interacted with data all the way up through the late ’90s. In the Store and Compute era, we focused our efforts as developers on building Applications that worked with Files. The early 2000s through 2012 saw the dominance of the Search and Browse era, in which we built Web Pages, Web Services, and even Apps to extend the reach of that data, and make it more discoverable. The Know and Do era is one that is and will be focused on building Experiences. Experiences allow interactions with data that are painted on a canvas of time and space, agnostic of the device you have at your disposal. It’s an era where our devices become the agents of our will and use available signals to make things happen.

That sounds really cool, but seriously, how do I build it? Do I create an application? A web page? An app? Where’s the Visual Studio project template for “Experience”?

Logic Apps enable experiences

Well, here’s the thing about that – experiences don’t live in one place. They don’t deal with a small slice of data that lives in one place. They are integrations between smart things, with action brokers like Logic Apps, with data normalizers, and enrichers, and rules-based processing. They’re distributed applications that take smart devices with sensors and signals available to them, join them with data repositories (no matter how narrowly focused or curated) and make those devices seem magical.

Connecting signals, sensors, devices, and apps

So, what makes Logic Apps a good fit? Well, for one, they’re hosted in the cloud. I’m not going to write code for every device to be able to integrate with Salesforce, SugarCRM, Dropbox, SharePoint, Oracle databases, and a random file share back at my office – nor am I necessarily going to be able to anticipate the future integration needs of code already deployed. Instead of putting the burden on the device, I’m shifting those concerns to the cloud (you can argue amongst yourselves whether that’s for better or worse – in this scenario, I think it’s a good fit).

Second, with Logic Apps, I have the entire Azure Marketplace, full of connectors and actions, at my disposal, and if I don’t find something there that meets my needs, then I’m free to write my own.

Logic App tech conference exhibitor experience

So, what does the same story we started earlier look like with a Logic App in play? We’re going to throw away all of the ceremony around manually juggling CSV files and instead focus directly on getting those hot sales leads into our CRM system directly. As the badges are scanned, the device that reads them will trigger a Logic App. That Logic App will do the work of parsing out the information read from the tag, and it will then pass that data along to the CRM system using the out of the box connectors.

How Does the Data Get to the Logic App as it Becomes Available?

That’s an excellent question! Logic Apps start processing whenever they’re triggered. They support manual invocation, invocation through a webhook, and through polling and push trigger API Apps (and yes, I have written about how to build a Polling Trigger API App). In this case, a push-style trigger would be a decent choice (not to discount something like an Event Hub or Service Bus in general, depending on application load).

Triggering a Logic App with an NFC tag read

A Push Trigger API App essentially sits waiting for a Logic App to indicate its interest in some event. The way that it does this is through an HTTP PUT request that includes both the URI and credentials of where to POST data, should it become available, as well as the configuration for the trigger (e.g., don’t send me the same tag read twice in a row, don’t send me tag reads if your GPS location is within the main company office, etc.). As the author of a Push Trigger API App, you get to decide the shape of that configuration data, and you define it simply as a .NET class. The Logic App designer will display this configuration data on the card for your trigger.

The Push Trigger API App will then take this information and store it somewhere, so that when data is available, it (or some other app) can use that URL and credentials to call the Logic App back. In the case of our NFC conference badge scenario, it’s going to be a device that will trigger the Logic App.

What about the storage mechanism? What shall we use for that? Well, in the spirit of using as much of Azure App Service as possible in a single demo, I decided to go with an Azure Mobile App as my backend for both the Push Trigger API App and the Universal Windows app that will be running across devices.

Writing the Device Code

Let’s take a look at what we’re going to start with. We’re going to start with a set of projects that looks like this:


From top to bottom we have an API App project titled NfcPushTrigger, a Universal Windows 10 app titled NfcClientApp, and a shared portable class library called DataModels that will contain classes representing the shape of data in our Azure Mobile App, and the configuration / output of the trigger (currently empty).

Let’s crack open that device app and build out a really compelling UI:

Compelling UI

Well, that’s enough XAML for me, let’s go write some C# instead. I’m going to switch over to the code, while you gaze upon this work of art.


Okay. Over in the code, things are going well. I have all the standard default stuff and I’m ready to start interacting with the NFC reader attached to my laptop. The class that I use to get access to the NFC Reader on my laptop – or really any device that supports it – is the ProximityDevice class that lives in the Windows.Networking.Proximity namespace. That class has a method called GetDefault that I can use to get an instance of the default NFC Reader.

From there, I can subscribe for messages (tag reads) by calling SubscribeForMessage, passing it a parameter indicating the type of message and a parameter that serves as a callback for whenever a message (tag) arrives. The type of messages we’re dealing with are NDEF messages where the first record of the message is a vCard record containing conference attendees’ Given (first) name, Family (last) name, and Email Address. In this case, that means we want to use NDEF as the type of message.

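In sketch form, that subscription looks something like this (the method and field names here are mine, not necessarily those in the completed source):

    using Windows.Networking.Proximity;
    using Windows.UI.Xaml.Controls;

    public sealed partial class MainPage : Page
    {
        private long subscriptionId;

        private void StartListeningForTags()
        {
            // GetDefault() returns null when the device has no NFC reader
            var device = ProximityDevice.GetDefault();
            if (device == null) return;

            // "NDEF" restricts the subscription to NDEF-formatted messages; the
            // returned id can later be passed to StopSubscribingForMessage
            subscriptionId = device.SubscribeForMessage("NDEF", (sender, message) =>
            {
                // message.Data is an IBuffer holding the raw tag bytes
                // (conversion and duplicate detection shown below)
            });
        }
    }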

So we have our callback ready (we’ll eventually be doing some async work, hence the async lambda), but what are we going to do when we get a tag read?

The raw tag data is available as an IBuffer member named Data on the second callback parameter. I’m going to write some code to convert that in a few different ways: (1) human-readable ASCII text for my own benefit, so that I can see the name on the badge that was scanned while debugging, and (2) a base64 encoded string that can be passed cleanly to a Logic App, and passed around that Logic App further. While I’m at it, I’m also going to write some code to determine if we’re seeing the same tag twice in a row.

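Here’s a minimal sketch of those conversions, assuming the bytes have already been copied out of the IBuffer (the ToArray() extension for that lives in System.Runtime.InteropServices.WindowsRuntime). UTF8 decoding stands in for raw ASCII here; the two agree for the printable characters we care about:

    using System;
    using System.Text;

    public sealed partial class MainPage
    {
        private string lastTagReadBase64;  // previous read, for duplicate detection

        private void ProcessTagBytes(byte[] tagBytes)
        {
            // (1) Human-readable text for debugging; the raw NDEF bytes will
            //     also include a few non-printable header characters
            var readableText = Encoding.UTF8.GetString(tagBytes, 0, tagBytes.Length);

            // (2) Base64 string that can be passed cleanly to (and around) a Logic App
            var base64Data = Convert.ToBase64String(tagBytes);

            // (3) Same bytes twice in a row?
            var isDuplicateRead = (base64Data == lastTagReadBase64);
            lastTagReadBase64 = base64Data;
        }
    }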

So, let’s set a breakpoint, deploy, and launch this application so that we can see what we have so far when I scan a tag. After launching it and scanning my test conference badge, we hit the breakpoint.

NFC tag read

Looking good so far! However, there’s not yet a way for a Logic App to declare its interest in this data collected by the Universal Windows app. As a result, it is time to switch over to the NfcPushTrigger API App, so that we can enable Logic Apps to register their interest in the data, and provide callback details for use in this client app.

NfcPushTrigger API App

Building the Push Trigger API App

In the Push Trigger API App, I’m going to start by adding a few package references. I’m going to add a reference to the latest pre-release of the Mobile App SDK, so that we will have access to the MobileServiceClient class for interaction with our data store. I’m also going to add a package reference to both T-Rex and the QuickLearn Push Trigger Tools.

T-Rex provides us a painless way to decorate properties, methods, and parameters with attributes that help our API Apps look pretty within the Logic App designer without resorting to manually writing a bunch of Swashbuckle filters. The QuickLearn Push Trigger Tools on the server side really only provides us with a single interface, ICallbackStore, which you can see here. Using that, we can write our push trigger code against the interface to make sure we’re doing all the necessary things within the callback registration and then simply implement that interface for our callback storage mechanism.
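I won’t reproduce the library here, but the core idea can be sketched roughly like this (the member names below are my approximation, not the package’s exact signatures):

    using System;
    using System.Threading.Tasks;

    // Approximate shape only - see the linked source for the real interface,
    // which also supports reading the stored callbacks back out
    public interface ICallbackStore<TConfiguration>
    {
        // Persist a Logic App's callback URI (credentials embedded)
        // along with the configuration from the trigger's card
        Task WriteCallbackAsync(string triggerId, Uri callbackUri, TConfiguration configuration);
    }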

Once I have those package references added, I’m going to start clearing out the ValuesController, so that it doesn’t contain code that I don’t need. Then, I’m going to write comments to remind myself what it is I’m trying to do:

Put method in the default values controller

Let’s start by renaming the ValuesController class to something more meaningful, like CallbacksController. Also, we have attribute routing available, so I don’t need to name my method “Put” when its purpose is to register a callback. Let’s adjust that a little bit.

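In sketch form, the reshaped controller starts out something like this (the route template is illustrative):

    using System.Web.Http;

    [RoutePrefix("api/callbacks")]
    public class CallbacksController : ApiController
    {
        [HttpPut, Route("{triggerId}")]
        public void RegisterCallback(string triggerId)
        {
            // TODO: store the Logic App's callback (fleshed out below)
        }
    }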

Looks good so far. However, in the Logic App designer, this will show up as an action titled something awful like “CallbacksController_RegisterCallback” – and who will even know that has anything to do with starting the Logic App when an NFC tag is read on a device? We’ll want to use some of those attributes in the T-Rex Metadata Library to address that (also a good opportunity to add the c.ReleaseTheTRex() statement back in the SwaggerConfig.cs file).

T-Rex Attributes on Push Trigger
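Roughly, the two pieces involved look like this (the friendly names are illustrative; the Metadata attribute and the ReleaseTheTRex() extension both come from the TRex.Metadata namespace):

    // On the action, in CallbacksController:
    [Metadata("NFC Tag Read", "Fires whenever an NFC tag is read on the device")]
    [HttpPut, Route("{triggerId}")]
    public void RegisterCallback(string triggerId) { /* ... */ }

    // And in SwaggerConfig.cs (usings: System.Web.Http,
    // Swashbuckle.Application, TRex.Metadata):
    public class SwaggerConfig
    {
        public static void Register()
        {
            GlobalConfiguration.Configuration
                .EnableSwagger(c =>
                {
                    c.SingleApiVersion("v1", "NfcPushTrigger");
                    c.ReleaseTheTRex();  // applies the T-Rex metadata filters
                })
                .EnableSwaggerUi();
        }
    }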

You might be looking at that code, and thinking to yourself, “wait a minute, what’s a PushTriggerOutput?” That’s a fair question – we haven’t seen that class yet. It’s one that we actually still need to define. This just needs to be some class that represents the shape of the output that our trigger returns (or rather, the shape of the input into the Logic App). In our case, it’s going to be a string that contains base64 encoded NFC tag read data. So, something like this might suffice (T-Rex metadata added for clarity).

PushTriggerOutput
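Here’s a minimal version (the property name is mine):

    using TRex.Metadata;

    public class PushTriggerOutput
    {
        [Metadata("Tag Data", "Base64 encoded NFC tag read data")]
        public string TagData { get; set; }
    }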

Let’s go back to the RegisterCallback method now. This method is ultimately going to be receiving information about a URI to call back to the Logic App (with embedded credentials). Right now, that’s not represented in the parameters of the method.

In this case, we have a special class that comes with the Azure App Service SDK called TriggerInput. TriggerInput is actually a generic class with two type parameters: the shape of the configuration data that we want to use for the Logic App, and the shape of the output we want the trigger to return. We already have the output, but what about the configuration? Let’s do something like this, so that we can make use of that duplication detection code that we wrote:

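A sketch of that configuration class (again, the property name is illustrative):

    using TRex.Metadata;

    public class PushTriggerConfiguration
    {
        [Metadata("Suppress Duplicate Reads", "Don't trigger when the same tag is read twice in a row")]
        public bool SuppressDuplicateReads { get; set; }
    }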

Now that we know the shape of both the configuration settings that we will be able to set for the trigger, as well as the shape of the output that the trigger can return, we’re in a good position to actually finish out the method signature of our RegisterCallback method and move on to implementation.

RegisterCallback Final Method Signature

I’m going to go ahead and start writing against that ICallbackStore interface to store the callback data (we’ll worry about implementation a little bit later).

Using the ICallbackStore interface

There might be a little bit of mystery at this point, particularly if you have never interacted with the TriggerInput class. TriggerInput looks something like this:

TriggerInput Class

The TriggerInput class is passed into our RegisterCallback method as the parameter named parameters. That means when I’m typing parameters.inputs, it’s giving me an instance of my PushTriggerConfiguration class (the instance that describes how the trigger’s card was configured in the Logic App designer). GetCallback() is a little more interesting. The object returned by that method looks like this:

ClientTriggerCallback

The CallbackUri member contains not only the Uri to the Logic App, but also the credentials to send a request. Further, if I decided to invoke this callback directly from an application that had a package reference to the App Service SDK, then I could invoke the callback through this class as well. In this case, I want to avoid adding such a heavy dependency for such a small task.

Once the callback is stored, the only other thing the Push Trigger has to do is to report back to the Logic App that the callback was stored successfully. In this case, it’s going to be pretty straightforward boilerplate code.

Boilerplate return for push triggers

At this point, there are a few things sticking out. First, we’re returning an HttpResponseMessage through this boilerplate code (via an extension method on the Request object added by the App Service SDK) – but our RegisterCallback method doesn’t specify a return type. Second, we’ve called a method with Async in the name, but haven’t awaited it. We’re going to solve both at once by changing the method signature for the last time, and adding that missing await.

Async conversion of RegisterCallbacks method

We still have a null callback store, but if we take a step back and look at this code for what it is, it’s demonstrating that we can write ANY push trigger in 3 lines of code (with heaps of attributes). In fact, the only scenario-specific items are the class names, and metadata providing the friendly name for the method. The hard part, then, lies in actually storing the callbacks.

So, let’s make the hard part easy, by just giving it to you directly over here. With that, the final implementation of the RegisterCallback method can be seen below:

Final implementation of Register Callback method
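In sketch form – assuming a callbackStore field holding an implementation of the approximated ICallbackStore from earlier, and the usual usings for the App Service SDK (Microsoft.Azure.AppService.ApiApps.Service) and T-Rex – the finished method looks something like this:

    [Trigger(TriggerType.Push, typeof(PushTriggerOutput))]
    [Metadata("NFC Tag Read", "Fires whenever an NFC tag is read on the device")]
    [HttpPut, Route("{triggerId}")]
    public async Task<HttpResponseMessage> RegisterCallback(string triggerId,
        [FromBody] TriggerInput<PushTriggerConfiguration, PushTriggerOutput> parameters)
    {
        // 1. Store the callback URI (credentials embedded) plus the card configuration
        await callbackStore.WriteCallbackAsync(triggerId,
            parameters.GetCallback().CallbackUri, parameters.inputs);

        // 2. Boilerplate: report back to the Logic App that registration succeeded
        return Request.PushTriggerRegistered(parameters.GetCallback());
    }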

I’m going to go ahead and publish that to Azure, and switch back to the device code, so that we can read in the callback and act on it.

Wrapping Up the Device Code

On the device, I’m also going to be adding a package reference to the Mobile App SDK because we will be interacting directly with that same Mobile App we used to store the callbacks in the push trigger API App. I’m also going to add a package reference to the QuickLearn Push Trigger Client Tools package.

That package provides us a Callback class that can be instantiated by providing the callback URI. Once instantiated, it can be used to invoke the Logic App from nearly any .NET app. As an aside, it does add some shape to the output by wrapping the content in a body object to be consistent with other triggers in the gallery, so you will want to take that into account when testing and writing expressions against its output.
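If you’re curious what an invocation amounts to under the hood, calling a Logic App back is ultimately just an authenticated HTTP POST to the stored callback URI. A bare-bones sketch using nothing but HttpClient and Json.NET (and honoring the body wrapper just mentioned):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;
    using Newtonsoft.Json;

    public static class LogicAppCallback
    {
        public static async Task InvokeAsync(Uri callbackUri, string base64Data)
        {
            // The callback URI already embeds the credentials the Logic App handed us
            using (var client = new HttpClient())
            {
                var payload = JsonConvert.SerializeObject(
                    new { body = new PushTriggerOutput { TagData = base64Data } });

                await client.PostAsync(callbackUri,
                    new StringContent(payload, Encoding.UTF8, "application/json"));
            }
        }
    }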

That package also provides a very similar client-side interface for interacting with a storage location for push trigger callbacks. Feel free to use those classes directly and/or modify them for your purposes if you’d rather not take an external dependency over a few interfaces and classes.

With both those packages in place, I’m going to do the same thing that I did before, and provide an implementation of the callback store, so that I can go and write my code. The code I’m going to write will retrieve the callbacks from the store (wastefully, on every single tag read, without caching), loop through those callbacks (Logic Apps awaiting tag reads), and check the associated configuration. If they’re configured to suppress duplicates, and I have a duplicate read, I’ll move on to the next awaiting Logic App. Otherwise, I’ll invoke the callback with a new PushTriggerOutput object:

Final device code
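Paraphrased with the helpers sketched above (ReadCallbacksAsync and the registration’s shape are illustrative, standing in for the package’s actual client classes):

    // Fetch every registered callback - wastefully, on every read, as noted above
    var registrations = await callbackStore.ReadCallbacksAsync();

    foreach (var registration in registrations)
    {
        // Honor each Logic App's duplicate suppression setting
        if (registration.Configuration.SuppressDuplicateReads && isDuplicateRead)
            continue;

        // Call the Logic App back with the freshly read tag data
        await LogicAppCallback.InvokeAsync(registration.CallbackUri, base64Data);
    }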

At this point, all of the C# coding is done, and it’s off to build the Logic App.

Defining the Logic App

In my Logic App, I already have a few steps defined, but I don’t yet have any triggers defined.

Start of the Logic App

Looking at that screenshot, you might be wondering, “where did the ndefparser API App come from?” Well, that’s a custom purpose-built API App for parsing vCard data out of NDEF formatted NFC tag reads – exactly what we have in the case of our conference badges. You can find a copy of that API App here, complete with a Deploy to Azure button, so that you can provision it directly into your Azure subscription without a fuss.

Anyway, the ndefparser API App starts the process. It is then followed by the built-in SugarCRM connector and Salesforce connector – each of which binds to the outputs of the ndefparser (Given Name, Family Name, and Email Address) – for various fields related to Create Lead(s) actions.

Let’s get the custom trigger in place and wire up the ndefparser to its output:

Logic App Modifications for Final

That’s much better. Now our Logic App is set up to be triggered by NFC tag reads from our app.

Testing the Process

Before we scan the tag, how do we know that the callback registration worked? Well, one good clue can be found in the Trigger History for our Logic App.

Trigger History Tile

The Trigger History shows every call out to the push trigger to perform the callback registration, as well as the response from the push trigger API App for each of those calls. In fact, you’ll notice that the Logic App continually makes it known that it is still interested in that tag read data (calling into question whether or not we really need something as durable as an Azure Mobile App for storing callbacks).

Trigger History Detail

Scanning The Final Badge

Now I’ve set aside a special conference badge for a moment such as this. One that I imagine Scott Guthrie might wear if he were at a conference wearing a disguise (i.e., something other than a red polo).


Already, as I investigate the most recent run of the Logic App, I’m seeing good signs. I’m seeing on the outputs link (outputs of our push trigger) that the Logic App has received raw tag data from the tag. I can also see that all of the API Apps have succeeded (including those that talk to SugarCRM and Salesforce):

Test Results

Further, as I look in both SugarCRM and Salesforce, I have a new Scott Guthrie lead:

Salesforce Results

SugarCRM Results

Where Do We Go From Here?

We’ve seen quite a bit over the course of this post, but where do we take this solution from here if we want to make it even more of a Know and Do experience?

Maybe we look to enriching data with other signals (e.g., the GPS location of the tag read to correlate with a specific conference, or shakiness of the hand as read by the gyro inside the device, which could be indicative of nerves induced by meeting a VIP).

Maybe we enrich the data with data from other systems we already have. For example, reaching out to look up company metrics for the conference attendee to potentially disqualify them as a lead if it turns out the cost of our product offering exceeds gross revenues of their organization, or our product offering isn’t suited to an enterprise as large as their organization.

Where do we go from here?

Maybe we start long-running processes when we read their badge, like a drip marketing campaign. Maybe we use the power of BizTalk’s rule-based processing to make an intelligent decision about the lead based on all of the signals and data we have.

Above all, though, I want you to remember that even if you don’t care about lead generation or sales, this solution can be generalized. In my case, lead generation does not thrill me, but playing with NFC tags does. NFC is found everywhere. Tags are used in transit passes, room keys, loyalty cards, authentication mechanisms – I even have one in my ring to unlock my door. Think of all of the data that those types of scenarios need access to, and ask whether it really makes more sense to build all of that connectivity by hand, or to use an integration framework that does lots of the heavy lifting for you.

Remember, this solution can be generalized

Maybe you don’t care about NFC, but you do see the value in triggering processes in the cloud whenever a certain social event happens (e.g., someone tweeting something nasty about your organization should open a case in CRM). Imagine you’re writing code for a logistics company and you want to trigger some actions in the cloud whenever a truck gets close to its destination. Imagine you have temperature, humidity, or other sensors that should trigger actions in the cloud when certain thresholds are met.

All of those are excellent opportunities to generalize this.

That’s All Folks!

If you’ve actually read this whole blog post, I salute you! This was a really long one, but it was designed to capture the entirety of my AzureCon 2015 session for those that prefer text over video, and prefer to take things at their own pace.

If you’d like to see even more, QuickLearn Training provides live instructor-led training all over the world (with remote connectivity available) on Logic Apps, BizTalk Server, and Team Foundation Server. We would love to see you in class.

I hope you found this post valuable and that you can go build great things!

As always, remember, this is a sample app – don’t use the provided sample code directly in production. Shortcuts have been taken: storing credentials inline, not caching, ignoring the option of using Event Hubs or Service Bus (which might be better depending on anticipated load), not following the MVVM pattern while building our Universal app, etc.

Need TFS 2015 training? We’ve got you covered.

By Anthony Borton

Now that Visual Studio 2015 has been officially released and TFS 2015 is only weeks away, it’s time to look at your training plan to ensure your team is able to maximize the benefits this new version offers.

We’ve been busy over the past few months updating our existing range of TFS training courses for the 2015 version. We’ve also built two completely new courses from scratch to meet the demands of our clients.

So what sets our courses apart from others?

  • Proven track record. We’ve been delivering TFS courses internationally for many years and have thousands of knowledgeable and productive students to show for it.
  • Built by training professionals. Our courses have been written not only by leading subject matter experts but by experienced technical trainers that know the best way to present technical content to a range of audiences.
  • Role based training. Our courses focus on specific roles in a team so that people can get training focused on exactly what they need to do on their job.
  • Completely up to date. Our courses are constantly being updated to ensure we’re current with all Microsoft product updates.

We have a full list of our TFS 2015 courses and scheduled courses on our website at http://www.quicklearn.com/tfs-training.aspx

Visual Studio 2015 is officially released!

By Anthony Borton

Monday 20th July was a big day for Visual Studio, with the official release of Visual Studio 2015, .NET 4.6, and much more. There are a number of compelling features that will likely mean that many organizations will choose to install this update sooner rather than later.

Here’s just a few of my favorite things in the new version.

  • A completely new Build automation system. This is not only easier, faster, and more powerful, but also now cross-platform.
  • Cross-platform. Build for Windows, Android, and iOS!
  • More features for less money. With the removal of the “Premium” edition of Visual Studio in 2015, anyone with Visual Studio 2013 Premium with MSDN is now upgraded automatically to Visual Studio 2015 Enterprise edition. Now you get ALL the Visual Studio features.

Naturally there are many, many new features, and the Visual Studio site goes through the list in detail. You can even watch recorded sessions from the launch event, including some great Q&A.