BizTalk Server 2013 beta announced

Tonight Microsoft announced the public availability of the BizTalk Server 2013 beta download.

I’m not going to relist the features, because they are perfectly described here on the blog of the Microsoft BizTalk Server team.

Formerly this release was known as BizTalk Server 2010 R2, but it has been rebranded with a more future-proof name.

For a few months it has already been available for testing purposes on the Windows Azure platform as a virtual machine. Testing with Azure Virtual Machines is still possible and remains the fastest and easiest way to try out the bits.

For our on-premise friends: you can download BizTalk Server 2013 beta here.

The system requirements below are taken from the download page and show platform alignment with the new 2012 series of products.

Supported operating systems: Windows Server 2008 R2 SP1, Windows Server 2012

To run BizTalk Server 2013 Beta you need:

  • 32-bit (x86) platforms: Computer with an Intel Pentium-compatible CPU that is 1 GHz or faster for single processors; 900 MHz or faster for double processors; or 700 MHz or faster for quad processors
  • 64-bit (x64) platforms: Computer with a CPU that is compatible with the AMD64 and Extended Memory 64-bit Technology (EM64T); 1.7 GHz or faster processor recommended for BizTalk Server 2013 Beta
  • 2 GB of RAM minimum (more recommended)
  • 10 GB of available hard-disk space
  • VGA monitor (1024 x 768) or higher-resolution monitor
  • Microsoft Mouse or compatible pointing device

To use BizTalk Server 2013 Beta you need the following software:

  • SQL Server 2012 or SQL Server 2008 R2 SP1
  • Microsoft .NET Framework 4.5
  • Microsoft Visual Studio 2012 [Required for selected features only]
  • Microsoft Office Excel 2010 [Required for selected features only]
  • SQL Server 2005 Notification Services [Required for selected features only]
  • SQLXML 4.0 with Service Pack 1 [Required for selected features only]
  • Internet Information Services (IIS) Version 7.5 and 7.0 [Required for selected features only]

Interesting to see that you can install BizTalk on SQL Server 2012, but still need the 2005 Notification Services. Also, there is a new enhanced SharePoint adapter (for SharePoint 2013?), but BAM still needs Excel 2010. I assume this will be aligned in the RTM version.

Another interesting thing is that in previous versions you could install BizTalk on a client version of Windows like Windows XP or Windows 7. Although not recommended, it was possible. In this list only server products are mentioned; does that mean there won’t be support for Windows 8? The installation documentation, which can be found here, states that the following Windows versions are supported: Windows Server 2008 R2 SP1, Windows Server 2012, Windows 7 SP1 and Windows 8.

If you need a Windows Server 2012 machine to install BizTalk on, you can pick up a 180-day trial version as a VHD from here. An evaluation version of SQL Server 2012 can be found here.

Have fun!

Didago IT Consultancy

Don’t use the BizTalk 2010 Assembly Checker (BTSAssemblyChecker.exe)!

I’m working on some system engineering documentation for my current customer, and of course I’m using the BizTalk Server 2010 Operations Guide as a reference. One of the tools it mentions is the BTSAssemblyChecker tool, referenced here in “Checklist: Performing Monthly Maintenance Checks”.

One of the steps in the checklist is this one:

“Ensure that the correct version of a set of assemblies is installed on each BizTalk machine (integrity check).”

“Use the BizTalk Assembly Checker and Remote GAC tool (BTSAssemblyChecker.exe) to check the versions of assemblies deployed to the BizTalk Management database and to verify that they are correctly registered in the GAC on all BizTalk Server computers. You can use this tool to verify that all the assemblies containing the artifacts of a certain BizTalk application are installed on all BizTalk nodes. The tool is particularly useful in conjunction with a solid versioning strategy to verify that the correct version of a set of assemblies is installed on each BizTalk machine, especially when side-by-side deployment approach is used. The tool is available with the BizTalk Server 2010 installation media at Support\Tools\x86\BTSAssemblyChecker.exe.”

If you use the tool, it looks in the wrong GAC (as mentioned in this MSDN forum thread) and is therefore pretty much useless, since the job of this tool is to verify that the assemblies deployed to the management database are also in the GAC. Besides that, the readme documentation still mentions BizTalk 2006 in several places!

What would be the reason for Microsoft to ship a tool that doesn’t work, with outdated documentation?

The risk of having assemblies in the management database but not in the GAC still exists, but I’m not aware of an alternative tool to check this.
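Lacking a working tool, the essence of such an integrity check can be scripted. Below is a minimal Python sketch; the assembly list and the GAC folder layout shown here are assumptions for illustration only, and in a real script the deployed list would come from the management database (for example via BtsCatalogExplorer or a query on BizTalkMgmtDb):

```python
import os

# Hypothetical inventory of assemblies deployed to the BizTalk management
# database, as (assembly name, version) pairs.
deployed = [
    ("Acme.Orders.Schemas", "1.0.0.0"),
    ("Acme.Orders.Orchestrations", "1.1.0.0"),
]

def missing_from_gac(assemblies, gac_root):
    """Return the assemblies with no matching folder under the .NET 4 GAC.

    For .NET 4 (which BizTalk 2010 uses) the GAC lives under
    C:\\Windows\\Microsoft.NET\\assembly\\GAC_MSIL\\<name>\\v4.0_<version>__<token>.
    This sketch only matches on name and version and ignores the public key token.
    """
    missing = []
    for name, version in assemblies:
        asm_dir = os.path.join(gac_root, "GAC_MSIL", name)
        found = os.path.isdir(asm_dir) and any(
            version in entry for entry in os.listdir(asm_dir)
        )
        if not found:
            missing.append((name, version))
    return missing
```

On a BizTalk node you would point `gac_root` at `C:\Windows\Microsoft.NET\assembly` and report anything the function returns.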

Anyone?

 

Didago IT Consultancy

Interesting rendez-vous with NDepend

This story starts in December 2008, but I found out only last June…..

Back in 2008 Patrick Smacchia of NDepend contacted me via my blog to ask if I was interested in testing NDepend, but unfortunately I wasn’t notified of his message. Almost 4 years later (!) I came across his message by accident and decided to contact him, and the offer still stood.

I already knew NDepend from its early releases. Back then I ran into the famous “File or assembly name or one of its dependencies not found” exception, and it was sometimes a challenge to find out which dependency was missing, especially because the missing dependencies might have missing dependencies themselves, and so on. A first look at the current NDepend made me happy, because I saw the tool has evolved tremendously.

One of the reasons I started developing the BizTalk Software Factory is the fact that I think developers should be assisted in their work to build consistent, reliable and maintainable software. While looking at the features of NDepend I got enthusiastic about that part of the capabilities of NDepend.

This blog post is a review of some of the capabilities of NDepend. It is capable of much much more, so I’d like to refer to the website for additional information, documentation and instruction videos.

From the list of NDepend features, I’d like to focus on Code Quality, Code Query, Code Metrics, Explore Architecture, Detecting Dependency Cycles, Complexity and Diagrams.

If you’re involved in BizTalk development you probably know that tools like FxCop (now called Visual Studio Code Analysis) and things like Visual Studio Code Coverage are almost meaningless, because most of the code is generated. But this doesn’t apply to custom components like libraries, helper components, custom pipelines, custom functoids etc. It is still valuable to have tools check the quality of your code and detect unwanted dependencies. And of course, the more custom-written code you have, the more valuable such tools become.

So let’s start with some of the features I like most from a code quality perspective. For this blog post I use NDepend v4.0.2 Professional, which I installed as an add-in for Visual Studio 2010. For my website I use the open source Orchard CMS, so I took that code to do some testing.

In Visual Studio a new NDepend menu option shows up. With the option “Attach a new NDepend Project to VS Solution” you can add NDepend analysis to a project and it starts analyzing right away (if you leave the checkbox checked).

In the lower right corner of Visual Studio a circle-shaped indicator is visible after installing NDepend, and after analyzing Orchard CMS the indicator is red. The results of the analysis are in the image below. Clicking on the indicator displays the “Queries and Rules Explorer”, which shows which rules are actually violated. This is a nice and quick overview of the violations.

 

In this example the “Methods too complex” violation is clicked, resulting in the details being displayed in the “Queries and Rules Edit” dialog. By hovering over method “indexPOST”, a small dialog appears on the right side of the explorer with details about the file and code metrics. Double-clicking “indexPOST” takes you directly to the actual code. It is so easy to navigate to problem locations this way.

Because so many rules are checked, it is also valuable to just look through the violations to learn from best practices. Besides the built-in rules, custom queries over code are also possible. All queries are written in a LINQ dialect called CQLinq. All code metrics are written using these LINQ queries, so it is easy to add your own (or take advantage of queries written by others).

 

Before NDepend I would have had no idea how to find the number of methods in my code with more than 6 parameters, more than 10 declared variables or more than 50 lines of code. Or a combination of those. With this LINQ syntax it is a piece of cake.

For example, a query to find all public methods where the number of variables is greater than 40(!):

from m in Methods where m.NbVariables > 40 && m.IsPublic select m

It is amazing to see what’s possible by just trying, assisted by IntelliSense!

Another really (and I mean really, really) awesome feature is the Metric View, which is a visual representation of the code by module and size.

The size of every rectangle indicates the size of the method.

A second view again uses LINQ queries, in this case to visualize the top 50 methods by number of lines of code. Visualizing this in the Metric View makes it immediately clear where the problems are located. Of course, other metrics can be visualized this way as well.

I’d like to end with another nice feature: the dependency graph and dependency matrix. The matrix shows an overview of the dependencies modules have on each other. Not only dependencies between project components, but also dependencies on .NET system libraries are shown, which gives you insight into which assemblies are actually used in your project.

This view will definitely give you a better understanding of your code, and it might raise an eyebrow if you find out you’re depending on an assembly you’re not supposed to depend on. Especially if you’re responsible for keeping the architecture clean in a large development team, you need to verify that the code doesn’t violate your architecture.

This tool is so comprehensive that I couldn’t possibly cover all its features, but I’ve touched on the ones I’m most interested in. I’m really impressed by NDepend and I’m sure I’ll use it in upcoming projects. Being able to query the quality of your code is just awesome!

 

Didago IT Consultancy

SSO Config Cmd Tool

Every experienced BizTalk developer knows that the SSO store is a good place to safely store configuration information. It is secure, distributed and can be updated without having to dive into configuration files (too bad, though, that the host instances have to be restarted before a change is picked up). Accessing the SSO used to be difficult until Richard Seroter wrote a tool for it.

In 2009 Microsoft acknowledged the problems and introduced an MMC snap-in that basically does the same thing, but with a nice user interface. However, so far no command-line version appears to be available to support automated deployment. Although the Microsoft package contains MSBuild support, I couldn’t find a command-line version, so I decided to write one myself.

The tool is not rocket science; most of the code is borrowed from the code the MMC snap-in executes to perform its tasks (long live ILSpy).

The tool supports importing, exporting, deleting, retrieving details and listing SSO applications.

With the MMC snap-in there are now two SSO tools available, because the good old ENTSSO Administration tool is also still present. The weird thing is that the ENTSSO Administration tool displays more SSO applications than the MMC snap-in does. The reason for this is the strange fact that the MMC snap-in only retrieves SSO applications where the contact information equals “BizTalkAdmin@<company_specified_in_sso>.com”. This is not always an issue, because creating an SSO application automatically sets “BizTalkAdmin@<yourcompany>.com” as the contact email address. It becomes an issue if you have to change the email address, for example because you don’t have a .com domain…

Filtering still serves a purpose, because there are some default SSO applications that shouldn’t show up in the tool, like adapter configuration settings. The SSO Config Cmd Tool solves this potential problem by taking the opposite approach, namely excluding the contact email addresses “someone@microsoft.com” and “someone@companyname.com”, because they are related to the built-in SSO applications.
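The exclusion approach is only a few lines of logic. A minimal sketch in Python (the tool itself is written in C#; the names and data below are illustrative):

```python
# Contacts used by the built-in SSO applications, which should be hidden.
BUILT_IN_CONTACTS = {"someone@microsoft.com", "someone@companyname.com"}

def visible_applications(apps):
    """Filter out built-in SSO applications by contact email, case-insensitively.

    `apps` is a list of (application name, contact email) pairs, as they would
    be read from the SSO store.
    """
    return [
        (name, contact)
        for name, contact in apps
        if contact.lower() not in BUILT_IN_CONTACTS
    ]
```

Everything not matching the built-in contacts is shown, regardless of which company name was configured in SSO.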

Let’s take a look at the tool and go over the actions one by one.

If you open the SSO snap-in, you’ll get this user interface. For demo purposes I already created a ‘TestApp2’ with two keys.

If you run the SSOConfigCmdTool without arguments, you’ll get help about the usage of the tool.

To list all SSO applications, where the contact info is not “someone@microsoft.com” or “someone@companyname.com”:

To retrieve the details of this ‘TestApp2’ application (as you can see the tool is case insensitive):

To export an SSO application you need an encryption/decryption key and, of course, a file to export to. If you omit the extension, ‘.sso’ is appended automatically. After the export a file is present at the specified location.

To be able to demonstrate the import, first the delete:

After delete, as expected the SSO snap-in doesn’t display the SSO application anymore:

Next we run the import to reinstall the SSO application. The nice thing is that you can supply a different name for it (here ‘ImportedTestApp’ is used).

Now we run into the issue mentioned above. The SSO snap-in doesn’t display the newly created application, while the good old ENTSSO Administration tool does. This is caused by the filter I previously discussed. The SSOConfigCmdTool doesn’t know your SSO company name, so it imports (creates) the SSO application with contact information ‘biztalkadmin@YourCompany.com’, which will be filtered out by the SSO snap-in.

How to solve this?

One option is to not solve it at all, because the SSO application is actually there, despite not being displayed.

Another option is to go into the ENTSSO Administration tool and change the contact info to “BizTalkAdmin@<company_specified_in_sso>.com”, but that would involve manual intervention.

The easiest and most sustainable way is to grab the source code of this tool and change “YourCompany” to <company_specified_in_sso>, which is in fact just a single line of code in the Program class.

Just compile it after the change and you’re good to go.

Anyhow, after setting the contact information correctly, the imported SSO application shows up.

 

The executable and the source code of the tool can be downloaded here at CodePlex.

If you run into an issue, please let me know so I can improve the tool. And of course I’m curious whether this tool already existed…

 

Didago IT Consultancy

Deploying WCF Services using Web Deploy

More than two years ago I wrote a blog post about deploying WCF services using a Web Setup project. Back then it wasn’t an easy task to get services deployed with it. Especially the UI customization of the installation wizard dialogs gave me quite a headache, also because they were difficult to test and debug.

As also mentioned in the comments on that post, we now have Web Deploy! So I decided it was about time to rewrite the blog post using the same requirements, but this time with Web Deploy. Let’s see how much easier it has become.

A small revisit of the requirements as formulated in the other blog post:

  1. support deployment of the WCF artifacts
  2. be able to be deployed to whatever web site is available on the server
  3. be able to set ASP.NET security to NTLM
  4. be able to gather service settings during setup
  5. be able to change custom AppSettings in the web.config according to the settings

Web Deploy is available for Visual Studio 2005/2008, but I’ll be focusing on Visual Studio 2010. The approaches of Web Deploy and Web Setup are different. With Web Setup the output was an MSI that could be run by a system administrator. The Web Deploy approach is focused on IIS as the host of the web application or service. This means the output is no longer an MSI, but a (ZIP) package that can be imported into IIS or IIS Express, or installed using the MSDeploy command-line tool.

What are the typical steps to take to deploy a web application or WCF service using Web Deploy from Visual Studio?

First, select the ‘Publish’ option on the project context menu.

A profile page will show up in which you can specify which publish method to use. The options are:

  • Web Deploy
  • FTP (File Transfer Protocol)
  • File system
  • FPSE (FrontPage Server Extensions)

Also define a profile name, the server to deploy to and the name of the web application or WCF service. Next click ‘Publish’ and the artifacts are deployed to the target IIS server and site/application.

This covers requirement 1: support deployment of the WCF artifacts

As a side step: if you want to deploy to a remote server, you might run into a few challenges. Web Deploy is a client/server installation, so you need to set up your (IIS) server correctly in order for it to receive the deployment request. If you want to know more, take a look at these posts:

Requirement 2 is: be able to be deployed to whatever web site is available on the server

In the previous step the target web site was defined in Visual Studio, but it should be dynamically changeable when a system administrator runs the package on, for example, a production environment.

First the package must be made portable so it can be installed on a different server; the next step is enabling the system administrator to pick the web site to deploy the service to.

To create a package there are two options:

  • Visual Studio
  • IIS (use the Export Application functionality, which is a topic by itself and not covered here)

In Visual Studio, choose ‘Package/Publish Settings’ from the project context menu to define what the package looks like, and then ‘Build Deployment Package’ to actually build the package.

Important settings here are the location of the package, which is a ZIP file, and the target location of the WCF service on the IIS server. Next select ‘Build Deployment Package’ to create it.

Now that the package has been built, it is interesting to see what’s in it. Browse to the folder specified in the ‘Package/Publish Settings’ dialog to find the files below.

The package consists of:

  • CMD file
    • This batch file can be used to deploy the package to a server without having to access IIS via the administration console. It uses the command line tool MSDeploy to deploy the package using Web Deploy. Suitable for unattended and/or bulk installations
  • Readme file
    • Describes all the options for the CMD file. A particularly useful one is the /T option, which calls msdeploy.exe with the “-whatif” flag to simulate deployment. This does not deploy the package; instead, it creates a report of what will happen when you actually deploy it. Very useful for testing purposes!
  • Parameter file
    • This file contains the parameters that are defined for this service and are extracted from the project file (DeployIisAppPath element) during packaging. This file contains by default only one item, see below. Here we immediately notice that this is the value we need to cover requirement 2.
  • Manifest file
    • This file is not used during deployment, but during generating the package. It contains provider settings and like you can see below some security related entries for the IisApp and setAcl provider.
  • ZIP package
    • Contains the service to deploy, including configuration, binaries and other artifacts. If you open the ZIP you’ll see all kinds of configuration files that are used during import of the application

For now, let’s see how to install the package onto a different server. First open IIS and select the web site you want to import the web application or WCF service into.

It is out of scope to describe all the wizard dialogs, but at a certain point you’re asked to specify the application:

Here you can define the destination application; if it doesn’t exist, it will be created for you. This is out-of-the-box functionality and no different from the Web Setup approach. Anyhow, this is what we need to fulfill requirement 2.

Requirement 3 is: be able to set ASP.NET security to NTLM

By default the security is set to anonymous access, so we need to change it to Windows authentication.

At first this didn’t seem like a big deal, because I expected it to be some parameter or manifest setting, but the more I read on the internet, the more worried I got. There is a lot to find about Web Deploy and Windows authentication, but it mostly relates to using Windows authentication to connect to the Web Deploy server (up to adding keys to the registry), not to changing the authentication of a deployed WCF service or web application.

So far I haven’t found an easy way to change the authentication of a deployed WCF service or web application. What I did find was a thread on the IIS forums about specifying the authentication in the web.config, but for that to work the applicationHost.config must be changed to allow overrides. That config file is maintained by the system administrator, and I can understand why the system administrator should know about deployment scripts that change authentication. On the other hand, having a WCF service with Windows authentication is not that uncommon.
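As a sketch of what that web.config approach amounts to (note: this only takes effect if the administrator has unlocked the authentication sections in applicationHost.config, for example with overrideModeDefault="Allow"):

```xml
<system.webServer>
  <security>
    <authentication>
      <!-- Requires the authentication sections to be unlocked in applicationHost.config -->
      <anonymousAuthentication enabled="false" />
      <windowsAuthentication enabled="true" />
    </authentication>
  </security>
</system.webServer>
```

Without the unlock, IIS rejects the site with a configuration error instead of applying these settings.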

Another option mentioned in this thread is running a command file, which can be added to the manifest file. This command file would probably contain some VBScript to change the authentication after the package has been installed. The difficult thing here is knowing in which web site the WCF service has been deployed…

I really hope I overlooked something, so if somebody knows how to do this please leave a comment! Thanks in advance!

Requirements 4 and 5 will be discussed together: “be able to gather service settings during setup” and “be able to change custom AppSettings in the web.config according to the settings”.

As we’ve seen before, there is a parameters file. This file can be customized from within Visual Studio to change the UI the system administrator will see. For this example I’ll demonstrate what it takes to change the value of a custom AppSetting in the web.config, from within the UI of IIS. The value to change is a key named ‘Test’ in the AppSettings section of the web.config.

First, add an XML file called “Parameters.xml” to your project; this will contain the parameters the UI will show. This is a different file from the ‘SetParameters.xml’ file we saw earlier: that one is used with the command-line option, this one with the IIS import.

Next add this piece of XML. By the way, this is a good resource to learn more about parameters in Web Deploy.

This piece describes two things:

  • It defines a parameter with a description to show up in the UI and a default value which will be filled in.
  • It defines where the value, read from the UI, should be applied. In this case it is targeted at the web.config file and the XPath points to the AppSettings key ‘Test’.
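The XML itself is shown as a screenshot in the original post, so as an illustration only (the description and default value below are my own assumptions), a Parameters.xml targeting the ‘Test’ AppSettings key could look roughly like this:

```xml
<parameters>
  <parameter name="Test"
             description="Value for the 'Test' application setting"
             defaultValue="SomeDefault">
    <!-- Apply the entered value to web.config via an XPath match -->
    <parameterEntry kind="XmlFile"
                    scope="\\web.config$"
                    match="/configuration/appSettings/add[@key='Test']/@value" />
  </parameter>
</parameters>
```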

Building a package from this and importing it in IIS results in the following UI:

As you’ll notice, the name and description are displayed nicely, together with the default value. When the value is changed, it will be updated in the web.config when the wizard finishes.

You can imagine the possibilities of this approach. It is very easy to change the UI and set values, for example in the web.config, without having to write code for it.

So, did it actually become easier to deploy WCF services? Yes and no. Obviously it is much easier to customize the UI and perform all kinds of actions while deploying, but on the other hand it seems very difficult to perform a common task like changing the authentication of a web site. As mentioned before, I hope someone will leave a comment with a clarification on the authentication issue.

Anyhow, there is much more you can do with Web Deploy than is covered in this blog post. Please find below some useful ways to spend your spare time.

Didago IT Consultancy

Installing HL7 Accelerator for BizTalk Server 2010

Application integration is all about exchanging messages and messages can only be exchanged if both the sender and receiver know what to expect from each other.

In some branches the standardization of messages and their formats has been going on for a while. Examples are:

  • Financial (SWIFT)
  • Automotive (EDI)
  • Supply chain (RosettaNet)
  • Medical (HL7)

If plain BizTalk were used in those environments, it would take quite some time to define the message types according to the existing branch standard. In the case of EDI these schemas are already supplied by default when installing BizTalk. The other standardized schemas are available as separate accelerator installs.

In versions prior to BizTalk 2010 the accelerators were distributed as a separate download (from the MSDN subscriber downloads). In BizTalk 2010 they are no longer a separate download, but part of the BizTalk developer or enterprise edition. This blog post focuses on the installation of the HL7 accelerator, which contains the standards used by many medical companies and hospitals to exchange messages.

Want to know more about HL7?

The installation of the HL7 accelerator results in one or more of these items, depending on the selection during installation:

  • Schemas
    • Contain the XSD representation of HL7 messages, which in version 2.x are in flat file format
  • Pipelines
    • Convert HL7 messages from flat file format into XML on receive, and from XML into flat file format when sending messages
    • Validate the HL7 message
  • Adapter
    • Minimal Lower Layer Protocol (MLLP) adapter enables BizTalk to receive or send HL7-based messages, which BizTalk Server typically transports using the MLLP protocol. The MLLP adapter ensures that BizTalk Server and BTAHL7 are interoperable with HL7-based messaging applications.
    • Generates acknowledgements for received messages
  • Tools and Utilities
    • Configuration Explorer
    • MLLP Test Tool
    • SDK
    • Logging framework (for auditing and logging purposes)
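To give an idea of what those pipelines deal with: HL7 v2.x messages are pipe-delimited flat files, with segments on separate lines, fields separated by ‘|’ and components by ‘^’. A toy Python sketch of the flat-file side (the real pipelines produce the accelerator’s XSD-based XML and also handle escaping, repetition and acknowledgements; the sample segment is made up):

```python
def parse_segment(line):
    """Split one HL7 v2.x segment into (segment id, list of fields).

    The first '|'-delimited token is the segment id (e.g. PID); the rest
    are the fields, which may themselves contain '^'-separated components.
    """
    fields = line.rstrip("\r\n").split("|")
    return fields[0], fields[1:]

# A fabricated patient identification segment for illustration.
msg = "PID|1||123456^^^Hospital^MR||Doe^John"
seg_id, fields = parse_segment(msg)
```

Here `seg_id` is "PID" and `fields[4]` holds the patient name components "Doe^John".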

If you unpack the BizTalk Server install package (developer edition in this case), you’ll see the BizTalk installer and the accelerator items.

The accelerators folder contains the installers for SWIFT, RosettaNet (RN) and HL7.

Double click ‘Setup.exe’ to start the installation procedure.

The account you’re using during installation must be a member of the BizTalk Server Administrators group; even if you’re using the administrator account, you have to add it to this group. Otherwise you’ll get the error message below (which is pretty clear, by the way).

Once that is fixed, you can move on to the next screens, which show the license agreement and user information. Next, the choice between a typical and a custom installation has to be made.

In the installation guide from Microsoft, there is an overview of which features get installed for which option. The table below is copied from that document.

The custom install type shows this dialog, with only the items selected according to the table above.

For demo and development purposes I selected all items, including the ‘Logging Framework’. If that is selected, the next dialog will ask for an account to run the logging service under.

After specifying an account, the “log on as a service” right will be granted to it so it can work unattended.

The Logging Framework installs a Windows service named ‘HL7 Logging Service’ (called ‘Auditing and Logging Service’ in previous accelerator versions), which runs “<install folder>\Microsoft BizTalk 2010 Accelerator for HL7\Bin\AuditingLoggingService.exe”. HL7 data flowing through BizTalk is usually sensitive medical and/or private information. The logging framework takes care of logging errors, events and messages for auditing purposes. Logging data can be written to the Windows Event Log, WMI or a SQL Server database. After the summary dialog and the install path dialog have been passed, the SQL Server must be specified.

This is the last input dialog of the installation procedure. The next dialog shows a button to start the installation, which will install and configure the accelerator.

After a few minutes the installation is finished and the artifacts, tools and utilities are installed.

There is one interesting button on this dialog: “Launch Tutorial”.

This button launches a script that prepares your machine for an end-to-end tutorial by installing schemas and orchestrations, including configuring send and receive ports. Of course you won’t use this button when installing the accelerator on a production machine, but for development purposes it is quite handy to get a head start.

 

At the end of the installation you’ll find a set of files at the chosen installation location (by default C:\Program Files (x86)\Microsoft BizTalk 2010 Accelerator for HL7).

 

Have fun with HL7!

 

Didago IT Consultancy

BizTalk Server and the ESB SOA design pattern

In January this year I participated in a SOA architect workshop, and after 5 exams I officially became a certified SOA architect. The certification was issued by SOA School, an initiative of the well-known SOA author Thomas Erl.

It was a very interesting course that discussed concepts I already knew, but wasn’t really aware of. During the course I was more and more able to map the concepts of SOA onto Microsoft technology. One of the SOA characteristics is vendor neutrality, so the course didn’t really discuss implementations.

Two course modules discussed design patterns (http://www.soapatterns.org/), one of them being the ESB compound design pattern. It is called a compound pattern because it is built up of other design patterns.

I started to recognize and map parts of BizTalk Server onto the ESB design pattern, but I didn’t have the full picture yet. In this blog post I take the ESB design pattern as a starting point to find out how, and whether, BizTalk Server can be called an ESB based on this design pattern.

 

The ESB compound design pattern consists of the base pattern with these design patterns:

  • Asynchronous Queuing
  • Service Broker
    • Data Model Transformation
    • Data Format Transformation
    • Protocol Bridging
  • Intermediate Routing

and the extended version adds these patterns:

  • Reliable Messaging
  • Policy Centralization
  • Rules Centralization
  • Event-Driven Messaging

BizTalk Server (together with the ESB Toolkit) is Microsoft’s product for implementing an ESB. To what extent does the vendor-neutral design pattern map to the functionality in BizTalk Server?

Let’s start with the base pattern.

Asynchronous Queuing

In the SOA world this means that a service should be able to exchange messages with its consumers via an intermediary buffer, allowing service and consumers to process messages independently by remaining temporally decoupled.

BizTalk Server is asynchronous by nature, because all messages flowing through BizTalk are stored in the SQL Server message box, thereby decoupling consumers from services.

This SOA design pattern is explained here.
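The pattern fits in a few lines of code. In this Python sketch a queue plays the role the message box plays in BizTalk, temporally decoupling the consumer from the service:

```python
import queue
import threading

# The intermediary buffer: the consumer puts messages here and moves on,
# while the service drains it at its own pace on another thread.
buffer = queue.Queue()
processed = []

def service():
    while True:
        msg = buffer.get()
        if msg is None:          # sentinel to stop the worker
            break
        processed.append(msg.upper())

worker = threading.Thread(target=service)
worker.start()
buffer.put("order-1")            # the consumer is never blocked by the service
buffer.put("order-2")
buffer.put(None)
worker.join()
```

If the service is slow or briefly unavailable, messages simply pile up in the buffer instead of failing the consumer, which is exactly the temporal decoupling the pattern describes.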

Service Broker

The service broker design pattern covers some of the most basic concepts necessary for application integration. This is about conversion of messages and the capability to receive messages via various types of protocols.

  • Data Model Transformation
    • A data transformation technology should be incorporated to convert data between disparate schema structures.
  • Data Format Transformation
    • Intermediary data format transformation logic needs to be introduced in order to dynamically translate one data format into another.
  • Protocol Bridging
    • Services using different communication protocols or different versions of the same protocol should be able to exchange data.

Both types of transformations are solved in BizTalk Server using maps. Model transformation is about transforming one XML Schema to another and format transformation is transforming one message format to another, for example flat file or CSV to an XML Schema.
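
The difference between the two can be illustrated with a small Python sketch. The element and field names are made up, and this mimics what maps do, not how BizTalk implements them:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Illustrative sketch: first a data *format* transformation
# (flat file text -> XML), then a data *model* transformation
# (one XML structure -> another). Schema names are invented.

def csv_to_xml(flat_file: str) -> ET.Element:
    """Format transformation: CSV flat file to an XML document."""
    root = ET.Element("Persons")
    for row in csv.DictReader(io.StringIO(flat_file)):
        person = ET.SubElement(root, "Person")
        for field, value in row.items():
            ET.SubElement(person, field).text = value
    return root

def to_target_model(source: ET.Element) -> ET.Element:
    """Model transformation: map the source schema onto a target schema."""
    target = ET.Element("Contacts")
    for person in source.findall("Person"):
        contact = ET.SubElement(target, "Contact")
        full = f'{person.findtext("First")} {person.findtext("Last")}'
        ET.SubElement(contact, "FullName").text = full
    return target

flat = "First,Last\nJohn,Doe\n"
result = to_target_model(csv_to_xml(flat))
print(ET.tostring(result, encoding="unicode"))
```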

Protocol bridging is implemented in BizTalk Server using adapters and is about enabling communication between different communication protocols, like different versions of SOAP, or something totally different like FTP.

Intermediate Routing

From a SOA perspective this is about the ability to dynamically determine message paths through the use of routing logic. Intermediate routing is typically implemented using service agents. A service agent is an event-driven program that doesn’t require explicit invocation; it simply intercepts messages. There are two types of service agents: active and passive. Active service agents not only read a message but also modify its contents. Passive service agents only read the message and may or may not take action based on its content, but don’t change it.

In BizTalk Server intermediate routing is implemented by means of a message agent. This agent is monitoring the message box to handle subscriptions to messages. In BizTalk everything is routed via the message box, so it can be implemented with only promoted properties or there can be complete routing logic in orchestrations or via business rules.
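
A toy version of this subscription mechanism might look as follows. The predicate plays the role of a filter expression over promoted properties; none of this is actual BizTalk API:

```python
# Toy sketch of subscription-based routing: subscribers register a
# predicate over promoted properties, and the "message box" hands each
# published message to every matching subscription.
class MessageBox:
    def __init__(self):
        self.subscriptions = []  # (predicate, handler) pairs

    def subscribe(self, predicate, handler):
        self.subscriptions.append((predicate, handler))

    def publish(self, properties, body):
        delivered = 0
        for predicate, handler in self.subscriptions:
            if predicate(properties):
                handler(body)
                delivered += 1
        return delivered

box = MessageBox()
invoices = []
# Subscribe on a promoted property, much like a BizTalk filter expression.
box.subscribe(lambda p: p.get("MessageType") == "Invoice", invoices.append)
box.publish({"MessageType": "Invoice"}, "<Invoice/>")
box.publish({"MessageType": "Order"}, "<Order/>")
print(invoices)  # -> ['<Invoice/>']
```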

Reliable Messaging

This is about assuring that a message is actually delivered to the service. In the SOA world this is a technology concept to enable reliable communication with services. Commonly it is solved by a combination of the WS-ReliableMessaging standard and guaranteed-delivery extensions like a persistent repository.

In BizTalk Server this is called guaranteed delivery, which assures the caller that once a message is received by BizTalk Server it will never be lost. This corresponds to the persistent repository mentioned above. The WS-ReliableMessaging standard is not supported by BizTalk due to its architecture.
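
The idea of the persistent repository can be sketched like this. A directory of files stands in for the durable store; BizTalk of course uses the SQL Server message box with transactional semantics:

```python
import json
import tempfile
from pathlib import Path

# Sketch of guaranteed delivery via a persistent repository: the message
# is durably stored *before* the sender gets an acknowledgement, so a
# crash after receipt does not lose it.
class DurableReceiver:
    def __init__(self, store: Path):
        self.store = store

    def receive(self, msg_id: str, body: dict) -> bool:
        # Persist first; only then acknowledge receipt to the caller.
        (self.store / f"{msg_id}.json").write_text(json.dumps(body))
        return True  # acknowledgement

    def recover(self):
        # After a restart, unprocessed messages are still on disk.
        return sorted(p.stem for p in self.store.glob("*.json"))

store = Path(tempfile.mkdtemp())
rx = DurableReceiver(store)
rx.receive("msg-1", {"order": 42})
print(rx.recover())  # -> ['msg-1']
```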

Policy Centralization

In the context of SOA a service contract can be composed of one or more XML Schemas, a WSDL, one or more policies and an SLA. A policy adds information to the service, commonly applied using the WS-Policy standard. This information can, for example, describe the service window of a service, or state that it can only handle a certain number of requests per hour. Policy centralization is about applying centrally maintained policies to services.

Looking at BizTalk Server it seems like the ESB Toolkit supports some kind of policy centralization, but that appears to be related to itinerary centralization. Policy centralization would probably have to be handled by a third-party product; it cannot be maintained in BizTalk Server or with the ESB Toolkit.

Rules Centralization

Business rules are critical to companies and having the rules spread across a company is not a sustainable solution. The rules centralization design pattern is about centrally accessing, storing and managing business rules.

Business rules are part of BizTalk Server by means of the Business Rules Engine (BRE). This is even a separate component that can be used without other BizTalk components. Rules can be centrally accessed, stored and maintained there. In a SOA approach the BRE would be wrapped in a service so its functionality is accessible in a standardized way from technologies other than BizTalk Server. Wrapping such a component in a service raises other issues, like introducing a single point of failure and capacity constraints. These issues can be solved by applying other SOA design patterns.
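
Conceptually, a centralized rule store looks something like this sketch. It is illustrative only; the real BRE works with policies and vocabularies rather than Python lambdas:

```python
# Minimal sketch of Rules Centralization: rules live in one shared
# store and callers evaluate facts against them, instead of each
# application hard-coding its own business rules.
class RuleStore:
    def __init__(self):
        self._rules = {}

    def add_rule(self, name, condition, action):
        self._rules[name] = (condition, action)

    def evaluate(self, facts: dict) -> dict:
        # Apply every rule whose condition matches the facts.
        for condition, action in self._rules.values():
            if condition(facts):
                action(facts)
        return facts

store = RuleStore()
store.add_rule(
    "bulk-discount",
    condition=lambda f: f["quantity"] >= 100,
    action=lambda f: f.update(discount=0.1),
)
facts = store.evaluate({"quantity": 150})
print(facts)  # -> {'quantity': 150, 'discount': 0.1}
```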

Event-Driven Messaging

From a SOA perspective this is about notifying consumers of events. Commonly this is solved by subscribing consumers to service events via one-way, asynchronous data transfer.

This is one of the best-known concepts in BizTalk Server, because it refers to the publish/subscribe pattern as used in BizTalk Server’s messaging architecture, with the SQL Server message box as the heart of the system.

Conclusion

In this blog post we’ve discussed the official ESB compound design pattern as defined by Thomas Erl. At least one of its requirements, Policy Centralization, cannot be fulfilled by BizTalk Server. So officially BizTalk Server is not an ESB product according to these requirements.

On the other hand, policy centralization is currently one of the least-used requirements in ESB implementations. And besides that, BizTalk Server implements many more design patterns than just the ones mentioned in the ESB design pattern.

One of the most recognizable design patterns for BizTalk Server is the Orchestration compound design pattern.

Although this compound pattern is targeted at synchronous processing, it does consist of interesting design patterns we can also find in BizTalk Server:

  • Process Abstraction
    • This is about abstracting logic into parent processes (task services). In BizTalk Server processes can be constructed from combined (agnostic) services using orchestrations. An orchestration is an abstraction of such a parent process, which can be called by services without knowledge of the underlying logic.
  • Process Centralization
    • This is about preventing process logic from being spread across the company. All processes in BizTalk Server are centralized in one repository, thereby fulfilling this requirement.
  • State Repository
    • This is about offloading state to a repository to release resources when they are not needed at that time. BizTalk Server supports this by means of hydration and dehydration of processes.
  • Compensating Service Transaction
    • To prevent system locks during long-running processes, compensating measures can be taken in case of an exception. BizTalk Server supports compensating service transactions as part of its orchestration building blocks.
  • Atomic Service Transaction
    • To be able to roll back changes if one of the services in a chain of service calls fails, atomic service transactions can be used. BizTalk Server supports atomic service transactions in its WCF adapter.
  • Rules Centralization
    • Also part of the ESB compound design pattern.
  • Data Model Transformation
    • Also part of the ESB compound design pattern.
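
The compensating service transaction bullet, for example, can be sketched in a few lines. This illustrates the rollback idea only, not orchestration syntax:

```python
# Sketch of a compensating service transaction: each completed step
# registers an undo action, and on failure the compensations run in
# reverse order (rollback semantics for long-running processes).
def run_process(steps):
    compensations = []
    try:
        for do, undo in steps:
            do()
            compensations.append(undo)
    except Exception:
        for undo in reversed(compensations):
            undo()
        return "compensated"
    return "committed"

def fail():
    raise RuntimeError("payment failed")

log = []
steps = [
    (lambda: log.append("reserve stock"), lambda: log.append("release stock")),
    (fail, lambda: log.append("refund")),
]
outcome = run_process(steps)
print(outcome, log)  # -> compensated ['reserve stock', 'release stock']
```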

Didago IT Consultancy

Azure Service Bus EAI/EDI April 2012 Refresh, What’s new?

Back in December 2011 Microsoft released the first CTP of the Azure Service Bus EAI/EDI. It showed the direction Microsoft is heading in bringing BizTalk-like functionality to the cloud. Or to quote Microsoft:

“Windows Azure Service Bus EAI and EDI Labs provides integration capabilities for the Windows Azure Platform to extend on-premises applications to the cloud, provides rich messaging endpoints on the cloud to process and transform the messages, and helps organizations integrate with disparate applications, both on cloud and on-premises. In other words, Service Bus EAI and EDI Labs provides common integration capabilities (e.g. bridges, transforms, B2B messaging) on Windows Azure Service Bus”

If you want to read more about this, please take a look at my two previous blog posts about this topic:

BizTalk in the Cloud, one step closer (part 1)

BizTalk in the Cloud, one step closer (part 2)

Now Microsoft has released the second preview:

Service Bus EAI and EDI Labs – April 2012 Release

Download the April 2012 Release

So what’s new in this April 2012 release?

Regarding the EAI bridges, we’ve got the following enhancements. BizTalk developers will recognize parts that are currently only available in BizTalk.

• FTP Source Component. In the December 2011 release it was already possible to connect WCF endpoints via the publish/subscribe pattern. In this refresh another endpoint type has been added: FTP, which can be used as a source component on the configuration bridge. In the BizTalk world this would be the FTP adapter, though so far it is read-only.
• Flat File Support. This is also a familiar concept in the BizTalk world. Combined with the FTP source component, flat files can now, for example, be read and converted into XML. Flat files can also be read and processed via HTTP. The available flat file wizard looks much the same as the one currently found in BizTalk Server.
• Schema Editor. Previously only available in BizTalk, now also part of the Service Bus EAI/EDI release. It is used to create and edit XSD and flat file schemas.
• Service Consuming Wizard. Also straight from BizTalk Server: the possibility to generate the artifacts necessary to consume WCF services. The difference with EAI is that it only generates an XSD and no other output files like binding info and an orchestration.
• Operational Tracking of messages processed by the bridge. This is also something from BizTalk Server. Within a bridge, a message undergoes processing in various stages and can be routed to configured endpoints. Specific details of the message, such as transport properties and message properties, can be tracked and queried separately by the bridge developers to keep track of message processing.

For Service Bus Connect, there are the following enhancements.

• Support for Windows Authentication while connecting to on-premise SQL Servers
• User Interface enhancements
• Support for SSL

The transforms have also been improved.

• New map operations. In BizTalk these are called ‘functoids’, but here they are named operations. Added are:
  • Number format operation, to format an input number using a given format
  • Date/Time operation, to manipulate dates/times
  • Generate ID operation, to generate a unique ID of a given length
• Runtime behavior configuration. This is also interesting for BizTalk developers, because as Microsoft says: “In this release, users can configure the runtime behavior for how the transform handles exceptions, errors, null values, and empty data input. This configuration can be done for each transform”. This sounds exciting and is probably something we’ll be seeing in the next version of BizTalk Server.

Finally there is EDI, or Business to Business Messaging, which now has support for:

• Processing batched EDI messages. You can now use the TPM portal to configure agreements that can process batched messages coming from trading partners.
• Tracking and archiving EDI messages. You can now use the TPM portal to track messages as they are processed using the agreements.

I’m looking forward to diving into the details! BizTalk Server in the cloud is again one step closer!

Didago IT Consultancy

BizTalk in the Cloud, One Step Closer Part 2

In my previous post I discussed a part of the recently released CTP of Windows Azure Service Bus EAI and EDI. I went over the installation and the steps to take to create schemas and mappings (in this post I use different schemas and mappings; because of Visual Studio issues I had to start over).

This post is about how the schemas and mappings are used in Windows Azure. As we’re used to with BizTalk, we need the schemas to define the message types and the mappings to transform one message type into another. But how does this all fit together?

When you create a new ‘ServiceBus’ project, you’ll notice a new “BridgeConfiguration.bcs” file:

This file will contain the definition of the ‘Bridge’ between two systems connected to Azure. If you open the “bcs” file you’ll get an empty canvas, and the toolbox offers many options to fill the bridge configuration with:

For example, pick the ‘Xml One-Way Bridge’. This is a one-way WCF service endpoint, as opposed to the ‘Xml Request-Reply Bridge’, which, as you probably guessed, is a request/response WCF service endpoint. The canvas looks like below:

In the solution explorer you’ll find the two schemas (SourceSchema.xsd and TargetSchema.xsd) and a mapping (Source_to_Target.trfm) I use to test with. The mapping, by the way, is a simple one with a concatenation and the new if-then-else functoid (which isn’t called a functoid anymore).

Also a configuration file has been added to the solution: “XmlOneWayBridge1.BridgeConfig”. In the end this is just an XML configuration file, but a so-called ‘Bridge Editor’ has been created for ease of editing. Double-clicking the file results in this:

In this we recognize the BizTalk pipeline! In the message types module you specify the schemas you want this endpoint to accept. The validator is just an XML validator, which I assume can be replaced by a custom component in the future. However, keep in mind that this is running in the cloud, so Microsoft will be careful with custom components that might impact the performance and stability of Azure.

Enriching is interesting; it feels like property promotion in BizTalk. It opens the opportunity of content-based routing. You can enrich from SOAP, HTTP, (SQL) Lookup and XPath. In the picture below I created XPath properties in the SourceSchema.xsd.
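
Conceptually, the XPath enrichment amounts to something like this sketch. The `promote` helper is hypothetical, namespaces are omitted, and the element names follow the sample schema used later in this post:

```python
import xml.etree.ElementTree as ET

# Hedged sketch of the Enrich stage: pull values out of the message with
# XPath-style expressions and promote them as routing properties,
# similar to property promotion in BizTalk.
message = """\
<SourceSchema>
  <FirstName>FirstName1</FirstName>
  <Country>Country1</Country>
</SourceSchema>"""

def promote(xml_text: str, promotions: dict) -> dict:
    root = ET.fromstring(xml_text)
    # Each promotion maps a property name to a simple XPath expression.
    return {name: root.findtext(path) for name, path in promotions.items()}

props = promote(message, {"Country": "./Country"})
print(props)  # -> {'Country': 'Country1'}
```

Once promoted, a property like Country can drive content-based routing decisions without parsing the whole message again.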

       

The transform module also looks familiar to BizTalk developers. Obviously only the maps whose schemas are defined as message types in the bridge show up here:

If you build the solution now, you’ll get errors. The errors indicate that without a so-called ‘Connected Entity’ the message sent to this endpoint cannot be processed. So we need to add one to the bridge configuration. In this case I’ll write the message to a queue, but it could also be an external service that the message is relayed to. When the Queue is dropped on the canvas, it can be connected to the XmlOneWayBridge by using a ‘Connection’. Be aware that you can only connect on the red dots, and that dragging the queue shape doesn’t create a queue on the Service Bus.

Building this still results in an error about a filtering expression. This is caused by the fact that by default a filter expression is specified on the connection to the Queue. To fix this, click on the connection, select ‘Filter Condition’ and set the filter to ‘Match All’ (or specify a filter).

Now the project will build, and the next step is to deploy to Azure!

Because Azure EAI is still in a lab phase, you need to go to a different portal than the regular Azure Management portal: https://portal.appfabriclabs.com/Default.aspx

Go there and create a so-called “labsubscription”. If you haven’t created such a subscription before deployment, you’ll receive an error: “The remote name could not be resolved: ‘<servicenamespace>-sb.accesscontrol.appfabriclabs.com’”

From the project file menu select ‘Deploy’ and the regular Azure deploy dialog shows up. Specify the Shared Secret and the project will be deployed:

Now it is time for some fun: testing the bridge with the tools that come with the SDK. The set of tools contains the ‘MessageSender’ tool, which takes the account credentials, bridge URL, sample message and content type. This message was sent to the bridge:

<?xml version="1.0" encoding="utf-8"?>
<SourceSchema xmlns="http://MyFirstAzureEAI.SourceSchema">
  <FirstName xmlns="">FirstName1</FirstName>
  <LastName xmlns="">LastName1</LastName>
  <Age xmlns="">1</Age>
  <Country xmlns="">Country1</Country>
  <BirthDate xmlns="">1900-01-01</BirthDate>
</SourceSchema>

The message was sent to the bridge and went through the pipeline, where it was transformed from ‘SourceSchema’ to ‘TargetSchema’. After the pipeline processing was done, the message was written to the queue, TestEAIQueue in my case.

With another tool from the SDK, ‘MessageReceiver’, you can read messages from the queue.

In the message read from the queue you can see that the FirstName and LastName fields from the SourceSchema have been concatenated in the TargetSchema. It has also been derived that someone with an age of 1 is not an adult (IsAdult=false).
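
Put together, the map applied in this round trip behaves roughly like this sketch. The target field names and the age cutoff of 18 are my assumptions based on the observed output:

```python
# Sketch of the Source_to_Target map's behavior (assumed field names):
# FullName is a concatenation, IsAdult comes from an if-then-else on Age.
def source_to_target(source: dict) -> dict:
    return {
        "FullName": f'{source["FirstName"]} {source["LastName"]}',
        "IsAdult": source["Age"] >= 18,  # the if-then-else operation
    }

target = source_to_target(
    {"FirstName": "FirstName1", "LastName": "LastName1", "Age": 1}
)
print(target)  # -> {'FullName': 'FirstName1 LastName1', 'IsAdult': False}
```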

Wow, my first BizTalk-in-the-cloud round trip is working!

There is a lot more to discover, and it is a CTP, but this is already working quite nicely. One thing we’re lacking in BizTalk today is the possibility to debug by pressing F5. I wonder whether that will change with Azure EAI in the future.

Didago IT Consultancy

BizTalk in the Cloud, One Step Closer

We all know Microsoft is putting an enormous amount of effort into its cloud initiative called Azure. It started with compute and storage, but in the meantime it has grown, and more and more products are released in a cloud version before the on-premise version is released. An example is Microsoft Dynamics CRM Online.

The Service Bus has been part of Azure since the September 2011 release. In the Service Bus we find things we know from BizTalk. The most well-known ones are Queues, Topics and Subscriptions. This so-called ‘Brokered Messaging’ introduces message queuing and durable publish/subscribe messaging, which is the basis of BizTalk messaging.

Of course BizTalk is capable of a lot more, but today Microsoft released the CTP of Windows Azure Service Bus EAI and EDI. If we take a look at this release we immediately recognize the BizTalk influences. It is a CTP, but it shows the direction Microsoft is going.

Let’s focus on EAI for now. To install the SDK you just need a regular development machine with Windows 7/2008, .NET 4 and VS2010. Download the SDK here. Installing the SDK will install some new VS templates under the section ‘ServiceBus’.

When the EAI project is created, the Solution Explorer looks like this:

And new toolbox items are also present, in which you’ll recognize the Service Bus features Queues and Topics:

So the first step in BizTalk development would typically be to add a schema. This can be done via the regular ‘Add Item’ option in the Solution Explorer. There the first real BizTalk items show up! We all know the Maps and Schemas!

One thing that caught my eye was the extension of the mapping file: not BTM but TRFM. I can only speculate what the extension stands for.

Adding the schema is nothing new, since adding XSDs has been part of Visual Studio 2010 from the beginning.

The next step is creating a map; after adding the item another familiar screen shows up, as does the dialog for selecting the source and destination schema. Unfortunately it isn’t possible to drag the schemas onto the mapping canvas.

And of course the toolbox is also familiar, although some items are missing, new, or named differently from BizTalk.

This is definitely a preview of the new mapper as it will show up in the next version of BizTalk. Although the mapper improved in BizTalk 2010, this one is even more advanced (and worth a blog post by itself!). For example, I put the ‘If-Then-Else’ expression on the canvas:

This is really awesome!

Testing the map works much the same way as in BizTalk; however, it isn’t yet possible to generate a sample message from the schema, so you need to create one yourself.

Typically you would also create artifacts like orchestrations, pipelines, etc., but they are missing in this CTP. However, it won’t be a surprise if at least orchestrations show up, based on WF4, in the near future.

So now we have a schema and a mapping; what do we do with them? Keep in mind that this is all Azure stuff, so we need to deploy it into the cloud.

I’ll discuss these steps in my next blog post, but we’re definitely moving towards BizTalk in the cloud! Exciting stuff!

Didago IT Consultancy