April 10, 2014 at 3:48 PM

Recently I wrote a post about advanced orchestration monitoring for BizTalk Server using System Center Operations Manager (SCOM).

 

There, I wrote about some of the shortcomings of the default monitors. In less critical environments the default monitoring for suspended orchestrations will be sufficient in most cases. I have bumped into the same issue a couple of times now and I would like to share it with you, hoping it saves you some troubleshooting. For us, Codit Managed Services, it's very important to receive the right alerts when instances get suspended.

 

This blog post concerns the default monitor for suspended orchestrations and the alerts it generates.
I often hear the following question: "My orchestration has been suspended for 4 hours now and still I didn't receive any alerts about it?!". At first I could not find an explanation for this either. Let's take a deeper look at this monitor.

 

If you currently have suspended orchestrations in your environment, you should see them in the SCOM console with a critical or a warning state:

 

If you open the Health Explorer for this Orchestration you can see some history concerning the health state:

 

Now the question arises: "Why am I not receiving an alert even though my orchestration has a warning state?"

If we take a look at the properties of the monitor, at first sight everything seems to be correctly enabled:

I missed it several times as well, until I took a closer look at the first property: "Alert on State".

When you check the possible values of this property you will see 2 options: generate an alert on a critical state only, or on a critical or warning state. Choose the second one! By default, a warning state triggers no alerts!

 

The warning limit (the last property) has a default value of 10, so more than 10 orchestrations of the same type must be suspended before an alert is triggered. If you change the "Alert on State" property you will always receive an alert when your environment contains a suspended orchestration instance, no matter how many instances.

 

But remember: as soon as the monitor is in a critical state, you will no longer receive alerts when new instances get suspended! If you want an alert per instance, check out my previous post about advanced orchestration monitoring.

Posted in: BizTalk | Monitoring



April 1, 2014 at 11:50 AM

bootcamp

Introduction

Last Saturday the Global Windows Azure Bootcamp was organized all around the world. As we announced in our introduction to GWAB post, Codit sponsored 925 cores for the charity lab!


The event was organized at 2 locations in Belgium by AZUG: Genk and Kortrijk.

 

Genk

This event was organized at the Microsoft Innovation Center in Genk, located on the C-Mine site.

Before starting we were all treated to some “koffiekoeken” – for those who do not understand Flemish, the picture below illustrates it pretty nicely:

clip_image002

With a full stomach and a full agenda ahead, the event kicked off punctually at 9 A.M.

It started with an introduction by Yves describing what the event was about (providing compute time for diabetes research).
The starter video was shown as well. The same video was actually shown at all GWAB locations all over the world.

You can watch it here: https://www.youtube.com/watch?feature=player_embedded&v=gZrxDl03W-A

So let’s take a look at the presentations given in Genk...

 

Visual Studio Online – Kris Van Der Mast

After having some trouble finding the Innovation Center, Kris kicked off his presentation with the question "who has already used Visual Studio Online?" – Only a few people had, so it was a great presentation to start the day.

Kris built a site from the ground up, showing off features such as keyboard shortcuts, drag and drop, the online console, IntelliSense, unzip functionality, and TFS and Git integration.

I was impressed by the speed; even though it's still in beta, it already looked very mature. Of course the local Visual Studio & TFS will always be the bigger brother, but in my opinion Visual Studio Online has a lot of potential.

 

Internet of things by Yves Goeleven

First of all, what is meant by the hyped term “Internet of Things”? Everyday devices equipped with sensors and connectivity that work together – or, to make it simpler: your fridge ordering more beer when you’re out, or your doctor checking your health status remotely while you’re wearing a wearable (did I already mention Hyped with the big H?).

Cisco recently made some predictions about future revenue and they looked good: right now there are already more devices than people on the planet, and by the year 2020 this should exceed 50 billion devices!

Yves showed us a setup with a Netduino to which he attached a temperature sensor and a Bluetooth chip. Combined with his mobile phone, he could activate the sensor over Bluetooth and start reading the temperature in the room. Additionally, the temperature was visualized via an application on his laptop. Communication between the sensor and the application was done over the phone and Azure.

This is another great showcase where Windows Azure can be helpful in real-life.

 

Customer cases

During the customer cases we got an overview of an application made by QFrame, where they explained their architecture, which should be able to handle large amounts of data. They had chosen Azure for its easy scalability.

Xavier De Coster gave an overview of his project MyGet – www.myget.org, a tool for friction-free dependency management. Unlike NuGet, MyGet offers the ability to create private feeds, assign privileges, create packages from your own sources, Git integration, symbol support, and different feeds (QA, IT, PROD, …).

Yves showed his personal project MessageHandler – he surprised us with the quote "you’ve already seen MessageHandler in action but you did not know it". In his Internet of Things presentation, the communication between his phone and his application was done by MessageHandler.

Kristof surprised us by using the blackboard instead of PowerPoint. With this "old-school" presentation, he walked us through the pitfalls of moving an existing project to the cloud.

 

Introduction of Windows Azure BizTalk Services by Nico Debeuckelaere

Nico gave us a brief introduction to Windows Azure BizTalk Services (WABS): a simple, powerful, and extensible cloud-based integration service that provides Business-to-Business (B2B) and Enterprise Application Integration (EAI) capabilities for delivering cloud and hybrid integration solutions. The service runs in a dedicated environment.

 

Deep dive architecture of Azure Storage by Yves Goeleven

Before he started his session, Yves gave us the choice between 2 sessions – an easy-to-understand presentation or his famous hardcore “666” session, in which you would reach a certain mental point where you would want to "kill yourself" or throw something at Yves.

As real Azure heroes we chose the die-hard “666” session.

Let this be a tip for you: if an Azure MVP warns you that he has prepared a die-hard session, it probably is one and you should not doubt it!

 

Infrastructure as a service by Kristof Rennen

Kristof closed the day with a session on Infrastructure as a Service (IaaS), where he shared some of his personal experiences with creating our own IaaS in Windows Azure.

Some tips:

· You do not pay for a stopped server in Azure, only for the storage it consumes.

· You do not pay for bandwidth within the Azure datacenter.

· For a production server: only use instances with at least a full core, not the XS (shared core) size.

· If you need more disk space than the 1 TB maximum, you can stripe multiple disks on your Azure server.

· A storage account cannot hold more than 200 TB; if you need more, you should create multiple storage accounts.

 

Kortrijk

The 2nd location was organized in cooperation with the Microsoft Innovation Center as well, and was located on the Kulak site.

 

Hop on the Service Bus by Tom Kerkhove

Tom gave us an introduction on how he started using Microsoft Azure Service Bus for the first time a few months ago. He briefly explained the different possibilities of Service Bus and the architecture, so that you can choose between Service Bus queues and Storage queues.

Continuing with a short demo of the basic functionality of Service Bus queues, we were even able to tackle more advanced techniques, like dead-letter queues and duplicate message detection (see the sketch below).
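
For readers who have not used these features before, here is a minimal sketch of both concepts, assuming the 2014-era Microsoft.ServiceBus.Messaging API; the queue name and connection string are placeholders:

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class ServiceBusQueueSketch
{
    static void Main()
    {
        // Placeholder connection string - replace with your own namespace.
        var connStr = "Endpoint=sb://yournamespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...";

        // Duplicate detection is an opt-in property of the queue itself;
        // Service Bus then drops messages with a repeated MessageId inside the time window.
        var ns = NamespaceManager.CreateFromConnectionString(connStr);
        if (!ns.QueueExists("demoqueue"))
        {
            ns.CreateQueue(new QueueDescription("demoqueue")
            {
                RequiresDuplicateDetection = true,
                DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
            });
        }

        // Messages that expire or repeatedly fail processing land in the dead-letter
        // sub-queue, which is read like any other queue via its special path.
        var dlqPath = QueueClient.FormatDeadLetterPath("demoqueue");
        var dlqClient = QueueClient.CreateFromConnectionString(connStr, dlqPath);
        BrokeredMessage poisoned = dlqClient.Receive(TimeSpan.FromSeconds(5));
        if (poisoned != null)
        {
            Console.WriteLine("Dead-lettered message: " + poisoned.MessageId);
            poisoned.Complete();
        }
    }
}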

It is nice to hear about user experiences and to see pitfalls and how they can be tackled.

You can review his presentation here.

 

Kinecting in the cloud by Tom Kerkhove

As a Microsoft Kinect MVP, Tom gave us a demo of how Kinect can be used together with Azure Notification Hubs. It was quite interesting to learn a bit more about another Microsoft product, the Kinect for Windows. Tom's demo showed us a futuristic application for exhibitions.

His presentation can be found here.

 

Windows Azure, an open platform on your terms by Sam Vanhoutte

As a Microsoft Integration MVP and V-TSP, Sam has built up a nice stack of experience. His session focused on his own experience and the different application building blocks in the cloud.

I was very surprised to see such a nice demo, which included using REST calls to control the domotics at his home. By combining Azure VPN, Service Bus and TeleTask domotics at home, he made it possible to open his curtains from anywhere using his REST API.

 

Customer cases by Nick Trogh

Nick works as a Microsoft Evangelist and showed us some nice customer cases implemented in Belgium. It is nice to see the landscape changing from in-house hardware to cloud or hybrid scenarios, for existing companies as well as for start-ups, who gain even more benefits from using cloud technology.

 

Integrating the Cloud by Glenn Colpaert

After his Codit Integration Cloud customer case it was time for the real deal – Glenn's very first session in the community! He introduced the crowd to Codit Integration Cloud and the capabilities it has to offer. The most interesting part to me was how Codit Integration Cloud works under the hood and how all the pieces form one nice product! After a demo Glenn wrapped up with the lessons learned during the architecting & development of the platform!

 

Windows Azure BizTalk Services by Glenn Colpaert

Glenn kicked off with the evolution of enterprise integration: from brokers and service bus towards Windows Azure BizTalk Services.

Glenn gave a really good introduction to WABS for EAI & B2B scenarios. He ended with connecting his (already famous) SAP-in-a-box to a WPF client by using Service Bus Relay, WABS & the WABS Adapter Service! If Glenn is doing this session again, don’t hesitate and attend!!

 

Small country, BIG compute!

Next to the sessions we also participated in the global charity lab, where we deployed a cloud service that did research for diabetes.
We also deployed the global lab in 4 different regions around the globe, on the 925 cores that Codit donated!

The lab was a great success: the scientists are happy with the results and the stats are awesome!

  • Top 10 Countries – Belgium ended up as #2 worldwide with 192 days 0 hours 7 minutes (!!!) of processing time
  • Top 10 Locations – Kortrijk was #1 (!) with 105 days 19 hours 44 minutes and Genk #9 with 86 days 4 hours 31 minutes
  • Top 10 Companies – Codit came, rendered and conquered! We made it to 7th best company with 50 days 20 hours 4 minutes of processing time and a total of 38,160 datasets processed!

You can find all the results here!

 

Thanks for reading,

 

Dieter – Jonathan - Glenn – Sam - Tom


March 28, 2014 at 3:45 PM

Recently, I needed to send a message from BizTalk to an external WCF service with Windows Authentication.

For easy testing, I created a Windows console application to act as the client. I used basicHttpBinding with the security mode set to TransportCredentialOnly. In the transport node I chose Windows as the clientCredentialType.

The binding configuration looks like this:

<basicHttpBinding>
	<binding name="DemoWebService_Binding"  textEncoding="utf-16">
		<security mode="TransportCredentialOnly">
			<transport clientCredentialType="Windows" />
		</security>
	</binding>
</basicHttpBinding>

Before sending the test message, I needed to authenticate myself and insert the Windows credentials:

proxy.ClientCredentials.Windows.ClientCredential.Domain = "xxx";
proxy.ClientCredentials.Windows.ClientCredential.UserName = "xxx";
proxy.ClientCredentials.Windows.ClientCredential.Password = "xxx";

This works, so now to get it right in BizTalk!

Problem:

Nothing special, just a Send Port, WCF-Custom with basicHttpBinding as Binding Type and the same binding configuration as in the console application:

 

I thought I just needed to add the credentials to the Credentials-tab in BizTalk to be able to do proper authentication.

Unfortunately, this does not work!

Apparently, when "Windows" is chosen as clientCredentialType, the Credentials-tab is ignored and the credentials of the Host-Instance running the Send Port are used instead.

Solution:

After some searching, I found the answer thanks to Patrick Wellink's blog post on the Axon Olympos blog: http://axonolympus.nl/?page_id=186&post_id=1852&cat_id=6.

 

The credentials of the Host Instance can't be used here, because the web service belongs to an external party.

To use specific Windows credentials, a custom endpoint behavior has to be created.

So I've created a new Class Library in Visual Studio with a class that inherits from both BehaviorExtensionElement and IEndpointBehavior:

public class WindowsCredentialsBehaviour : BehaviorExtensionElement, IEndpointBehavior

To make these settings configurable, the class needs the following public properties:

[ConfigurationProperty("Username", DefaultValue = "xxx")]
public string Username
{
	get { return (string)base["Username"]; }
	set { base["Username"] = value; }
}

[ConfigurationProperty("Password", DefaultValue = "xxx")]
public string Password
{
	get { return (string)base["Password"]; }
	set { base["Password"] = value; }
}

[ConfigurationProperty("Domain", DefaultValue = "xxx")]
public string Domain
{
	get { return (string)base["Domain"]; }
	set { base["Domain"] = value; }
}
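
Besides these properties, BehaviorExtensionElement also expects you to override two members that tell WCF which behavior type this element produces and how to create it. The post doesn't show them, but a minimal sketch could look like this:

public override Type BehaviorType
{
	get { return typeof(WindowsCredentialsBehaviour); }
}

protected override object CreateBehavior()
{
	// Copy the configured values onto a fresh behavior instance.
	return new WindowsCredentialsBehaviour
	{
		Username = this.Username,
		Password = this.Password,
		Domain = this.Domain
	};
}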

In the "AddBindingParameters" method, I've added this piece of code that sets the Windows credentials:

public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
{
	if (bindingParameters != null)
	{
		// Check whether client credentials were already added to the binding parameters
		SecurityCredentialsManager manager = bindingParameters.Find<ClientCredentials>();

		// Overwrite the endpoint's client credentials with the configured values
		var cc = endpoint.Behaviors.Find<ClientCredentials>();
		cc.UserName.UserName = this.Domain + @"\" + this.Username;
		cc.UserName.Password = this.Password;
		cc.Windows.ClientCredential.UserName = this.Username;
		cc.Windows.ClientCredential.Password = this.Password;
		cc.Windows.ClientCredential.Domain = this.Domain;

		if (manager == null)
			bindingParameters.Add(this);
	}
	else
	{
		throw new ArgumentNullException("bindingParameters cannot be null.");
	}
}
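
IEndpointBehavior defines three more members; since all the work here happens in AddBindingParameters, they can simply be left empty:

public void ApplyClientBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.ClientRuntime clientRuntime)
{
}

public void ApplyDispatchBehavior(ServiceEndpoint endpoint, System.ServiceModel.Dispatcher.EndpointDispatcher endpointDispatcher)
{
}

public void Validate(ServiceEndpoint endpoint)
{
}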

Now, after building the assembly and installing it in the GAC, we need to let BizTalk know that it can use this custom endpoint behavior:


We need to add the line below to the behaviorExtensions (system.serviceModel – extensions) in the machine.config (both 32-bit and 64-bit):

<add name="WindowsCredentialsBehaviour" type="BizTalk.WCF.WindowsCredentials.WindowsCredentialsBehaviour, BizTalk.WCF.WindowsCredentials, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1de22c2808f4ac2e" />

Restart the host instance that runs the send port, and you will be able to select the custom endpoint behavior:

 


March 12, 2014 at 4:00 PM

On new environments, Codit always runs benchmark tests to check whether the new environment behaves as expected and to find any anomalies.
This week, these simple tests proved their value yet again:

One of the components of the benchmark test is a simple WCF echo service. A client WCF console application calls this service (also hosted in a console application). The service simply echoes back the request message.

When launching both console applications, we experienced extreme delays.
The service host took more than 8 minutes to launch and the client console application more than 3 minutes.
Because launching these console applications typically takes under one second, this demanded a deeper look.

To get a clearer picture of where the time was lost, I enabled WCF tracing.
The traces showed that constructing the ServiceHost and ChannelFactory took a lot of time:



Trace when launching the Service

clip_image001



Trace when launching the Client

clip_image002

 

Digging deeper – and using my favorite search engine – revealed that this is probably due to assembly load times.

By default, .NET 3.5 generates so-called ‘publisher evidence’ for code access security (CAS). Verifying the assembly publisher's signature can be very costly, and it was indeed very costly on the new environment.
Luckily, the generation of this publisher evidence can be disabled, so let's disable it and measure the startup time again.

Publisher evidence generation is disabled by adding this XML snippet to your application configuration file, or to the machine.config file (whatever suits your needs):

<runtime>
    <generatePublisherEvidence enabled="false"/>
</runtime>

After I applied this change, I started my 2 console applications again and – eureka – they were done in less than one second!

Please note that this change will only apply to .NET 3.5!
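
If you want to reproduce the measurement yourself, a stopwatch around the host startup is all it takes. A minimal sketch, where the echo contract and the endpoint address are assumptions standing in for the actual benchmark service:

using System;
using System.Diagnostics;
using System.ServiceModel;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEcho
{
    public string Echo(string text) { return text; }
}

class StartupTimer
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        // The address is an arbitrary choice for this sketch.
        var host = new ServiceHost(typeof(EchoService), new Uri("http://localhost:8731/echo"));
        host.AddServiceEndpoint(typeof(IEcho), new BasicHttpBinding(), "");
        host.Open();
        sw.Stop();
        Console.WriteLine("ServiceHost opened in {0} ms", sw.ElapsedMilliseconds);
        host.Close();
    }
}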



Trace when launching the Service

clip_image001[6]



Trace when launching the Client

clip_image001[8]

 

This MSDN link on the generatePublisherEvidence element contains more info about publisher evidence.

The moral of the story: if you set up a new environment, test it well!

 

 

Peter Borremans


March 10, 2014 at 3:45 PM


“A free one-day global training event on Azure, from the community to the community…”

 

 

What?

The Global Windows Azure Bootcamp is a free one-day training event taking place on the 29th of March 2014 in over 50 countries worldwide.

This event is driven by all the local Windows Azure community enthusiasts and experts. It consists of a full day of Windows Azure sessions and labs based on the Windows Azure Training Kit.

GWAB is an initiative of Windows Azure MVPs Maarten Balliauw, Magnus Mårtensson, Mike Martin, Alan Smith & Michael Wood.

 

Charity?

This year, the organizers decided to put all this computing power to work in support of medical research on diabetes, by hosting a globally distributed lab in which attendees of the event deploy virtual machines in Windows Azure. Those machines help analyse data related to specific sugars called glycans, which are being studied as an early marker for Type 2 diabetes.

More details can be found here: http://global.windowsazurebootcamp.com/charity/

 

Where?

There are over 135 locations registered worldwide. You can find an overview of all locations on the following website: http://global.windowsazurebootcamp.com/locations/

 

Belgium?

In Belgium there will be 2 locations for this global bootcamp: Genk and Kortrijk. Each location will feature two parallel tracks: a development track and regular sessions.

If you need a place to work on the research lab or on your own Windows Azure project, there will be Windows Azure experts available to answer tough questions or to give you deep-dive explanations of certain technologies.

Feel free to join any (or both) of these tracks!

The Kortrijk venue will be hosted by 2 community lads of Codit with the help of AZUG: Tom Kerkhove and Glenn Colpaert.

 

You can register for the Belgian venues on the following websites:

Kortrijk - Genk

 

Codit?

In the context of the 2222 campaign we ran at the beginning of the year, Codit again decided to support this event by donating 222,2€ to the Global Windows Azure Bootcamp research lab.
This donation allows us to run 925 cores for the entire duration of the Global Windows Azure Bootcamp event in Kortrijk.

 

Cheers,

 

Colpaert Glenn & Kerkhove Tom

Posted in: Azure | Community | WABS



March 7, 2014 at 5:01 PM

Early March, I gave a demo at the BizTalk Summit in London. The video can be downloaded through the following blog post by Glenn Colpaert: Taking a look back at the BizTalk Summit London – Day 2. After my talk, some attendees asked for the code or wanted a deeper explanation, so I decided to write this blog post.

The coolness of the demo comes mostly from the use of Twilio, a cloud-based voice and messaging platform that can easily be used to build applications through its easy-to-understand web API.

In this blog post, I will describe the scenario, the architecture and explain some parts of the code that were specific to this demo.

The source code of the demo has been uploaded to the following location: http://code.msdn.microsoft.com/State-machine-workflow-68a078f3 

The scenario

In the scenario, I wanted to build and orchestrate a phone handling system for an airline company. A customer can call a phone number provided by that airline company (and hosted by Twilio). He then gets a typical phone menu (press 1 to check in, press 2 to upgrade your flight, press 3 to cancel your flight). Based on his input, the user gets new options, until the actual action occurs.

The following picture shows the slide that gives the phone menu hierarchy.

wf01

During the demo, the attendees could call the (toll-free) number and enter their options.  One of the options in level 3 resulted in a win.  The first user that ended up there, won a prize.

 

The architecture

The following slide shows the architectural overview of the solution I demonstrated:

wf02

Twilio: the online service for phone and text messaging.

I used Twilio to make the link between the phone and my hosted service. On Twilio, you can register a phone number, for which you pay a monthly fee. Every phone number is then linked to a RESTful URI. Whenever someone calls that phone number, Twilio calls the specified URI and passes the CallerId along with some other arguments. It's then up to the specified REST API to return a TWIML document: an actual XML document that can specify that Twilio should Say something, Play a sound, Gather information from the user, Dial another phone number to join the call, Record the user's voice, etc.

Flight Check Web API

The Web API is where most of the custom code for this demo went. In this API, I implemented one specific operation that gets called every time. I used it to either start a new workflow instance (when the call comes from a new phone number), or to send new events to an existing workflow (when the user has an active call and sent digits). After the events are passed to the workflow, the API collects data from that workflow instance and returns a TWIML document based on that data.

The Flight call state machine workflow

This is where the logic of the phone call is modeled. A state machine is a very interesting concept: a workflow is in a certain state and listens for external input (published events, etc.). When a specific event is received, it can trigger a transition to a new (or the same) state. Every transition can have some actions. This is a much more natural model than flowcharts or sequential workflows, especially for event-driven workflows. In my state machine, I had some custom activities that either set the next user options, set the goodbye message or listened for specific digits from a user.

The notification components.

I also used the Service Bus workflow topic to subscribe to all the tracking events. Whenever a tracking event is received from the subscription, the event gets parsed (the phone number is extracted, along with the options, goodbye message and the workflow state). These events are then sent to an HTML-based front end, using SignalR. This way, all calls could be monitored online. This pattern will not be discussed in detail in this post.

Configuring Twilio

When you have an account on Twilio, you can log in and buy one or more phone numbers. For the number I bought, I configured the following URI (with the GET verb) to be called (host name removed from the screenshot):

wf03 

So, as soon as someone calls my Twilio number, Twilio comes and asks my API what to do. That's all I had to do on the Twilio side. This is what I like about “things as a service” these days: if I had wanted to do this myself 10 years ago, it would have taken me a lot of money, external experts and testing. Now I just take this service in a matter of minutes and I can focus on what I do best…

The State Machine workflow

I love state machines. I am convinced they provide a much more natural way of representing a workflow than a sequential workflow or a flow chart. So I also used the state machine concept for the definition of my Twilio phone menu.

A state machine has different concepts:

  • State: a workflow instance is always in a certain state. Every state has to have one or more transitions to another state (except for the final state). A state can have an entry point and an exit point, where specific activities can be configured and defined.
  • Transition: a transition is the change from one state to another (or to the same state). A transition happens when the configured trigger for that transition is fired, or when the condition for the transition is met.

For the phone menu, I used the following workflow (FlightMenuStateMachine.xaml). In the image below, I selected the different states that would be followed when a user pressed 2 (to upgrade) and then 2 again (to upgrade to 1st class). Between all these states I have transitions, triggered by the input of the caller.

wf04

In the 2 screenshots below, you can see what a State and a Transition look like:

  • The state shows that I set the next user options on entry of the state, using the custom activities I created. The exit step is empty.
  • You can also see the different transitions and the state they will move the workflow to.
  • In the second screenshot, you can see that I use the ReceiveCallerInput activity to listen for a specific entry.

wf07 wf08

Listening to external triggers, with the Pub/Sub workflow manager activities

When the caller enters a specific digit, this digit has to be routed to the right workflow instance as an input event. For this, I use the ReceiveNotification activity that comes with the Workflow Manager activities (in the ReceiveCallerInput.xaml activity).

I am using the following Pub/Sub activities in my custom workflow activity:

  • BuildMatchAllFilter activity: in this activity, I define the actual filter I will be using. This filter will result in a new subscription on the workflow topic. For the filter, I use the CallerId and the Digit I want to listen for. This means that the filter will be triggered when my API sends a notification for this specific caller with the specific digit.
    wf05
  • Subscribe activity: this activity takes the filter created in the previous activity and uses it to create a Subscription on the WFTopic, used by our specific Workflow scope.
  • ReceiveNotification activity: this activity blocks this branch of the workflow until a notification is received through the subscription that was just created. So if we actually receive a notification that matches our filter, the transition where our activity is used will be fired and executed.
  • Unsubscribe activity: this activity will explicitly remove the subscription from the topic.

Providing external data to the Web API, using external variables

Through the workflow client, it is possible to pass external variables to a workflow instance. These external variables can then be updated by the workflow, so that the workflow client can inspect their values. There is one ‘hardcoded’, system-provided external variable: the UserStatus variable. It can be set using the SetUserStatus activity that comes with Workflow Manager and is a nice and easy way to expose state from the workflow.

These variables can only be set by editing the XAML directly, as they are not visualized in the workflow designer. This can be done with the following code:

<Assign sap2010:Annotation.AnnotationText="External variable GoodbyeStatement is set here" DisplayName="Set GoodbyeStatement to external variable">
      <Assign.To>
        <OutArgument x:TypeArguments="x:String">
          <p:ExternalVariableReference x:TypeArguments="x:String" Name="GoodbyeStatement" />
        </OutArgument>
      </Assign.To>
      <Assign.Value>
        <InArgument x:TypeArguments="x:String">
          <mca:CSharpValue x:TypeArguments="x:String">GoodbyeStatement</mca:CSharpValue>
        </InArgument>
      </Assign.Value>
      <sap2010:WorkflowViewState.IdRef>Assign_1</sap2010:WorkflowViewState.IdRef>
    </Assign>

There were 3 variables I used to provide state and data from my workflow instance to my Web API:

  • UserOptions: this variable could be set through the SetNextOptions.xaml activity. It would contain a CRLF-separated string with the different options for the current state of the phone call.
  • GoodbyeStatement: this variable could be set through the SayGoodbye.xaml activity. This activity would typically be called in the last state, when the user call had to be terminated.
  • UserStatus: as described, this is the standard variable that I used mostly for debugging and visibility through the Workflow Explorer.

Custom activity to insert data in a database, using the Http activities

In the presentation I gave, there was one spot in the phone menu where I inserted the caller id of the user into a database. The first entry in my database table was the winner. For this, I used an Http post activity that just posted the CallerId to my Web API, so that it could be inserted in the database.

The Web API as glue between the Twilio input and the Workflow instances

Now, the actual magic happens in the Web API I wrote. There I get the input from Twilio in my controller and I interpret the different workflow instances and their state. Based on the state of the instances, I return the specific TWIML document.

Custom serialization for TWIML.

The response I have to return to Twilio is TWIML, which is an XML structure. But the TWIML cannot be generated using the standard DataContractSerializer. Therefore, I had to add an XML serializer in the global.asax.cs file, using the following lines:

var twilioFormatter = new TwilioMLXmlMediaTypeFormatter();
twilioFormatter.MediaTypeMappings.Add(new QueryStringMapping("format", "twiml", "application/xml"));
GlobalConfiguration.Configuration.Formatters.Add(twilioFormatter);

By adding this, I allow my API clients to request the response in TWIML by adding a query string variable format=twiml. In the screenshot of my Twilio configuration you can see that I added that parameter to the API call. The actual TypeFormatter class is copied below. As you can see, I just use the standard XmlSerializer there to serialize my objects to the corresponding XML structure, based on the attributes I added to the different classes. (You can see this in the code in Models\TwillioResponse.cs of the Web API.)

public class TwilioMLXmlMediaTypeFormatter : XmlMediaTypeFormatter
{
    public override Task WriteToStreamAsync(Type type, object value, Stream writeStream, HttpContent content, TransportContext transportContext)
    {
        try
        {
            var task = Task.Factory.StartNew(() =>
            {
                var xns = new XmlSerializerNamespaces();
                var serializer = new XmlSerializer(type);
                xns.Add(string.Empty, string.Empty);
                serializer.Serialize(writeStream, value, xns);
            });

            return task;
        }
        catch (Exception)
        {
            return base.WriteToStreamAsync(type, value, writeStream, content, transportContext);
        }
    }
}
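
For reference, here is a hedged sketch of what such attributed model classes could look like. TwilioResponse and TwilioReject (with its Reason property) appear in the controller code further down; the Say property is an assumption for illustration:

using System.Xml.Serialization;

[XmlRoot("Response")]
public class TwilioResponse
{
    // Renders as <Say>...</Say>, the TWIML verb that reads text to the caller.
    [XmlElement("Say")]
    public string Say { get; set; }

    // Renders as <Reject reason="..." />.
    [XmlElement("Reject")]
    public TwilioReject Reject { get; set; }
}

public class TwilioReject
{
    [XmlAttribute("reason")]
    public string Reason { get; set; }
}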

State machine interpretation

The main logic of the API is to either create a new workflow instance (when there is no active instance linked with the caller) or to send notifications (the entered digits) to the active workflow instance. This all happens through the WorkflowManagementClient class.

Starting a new workflow.

To start a new workflow, we just create a new workflow instance and pass the CallerID as an input argument. The instance ID is then kept in a Dictionary to keep track of our active workflows (and to avoid asking Workflow Manager on every call).

var startParameters = new WorkflowStartParameters();
startParameters.Content.Add("CallerID", CallerId);
var instanceId = WFClient.Workflows.Start(_workflowName, startParameters);
_workflows.Add(CallerId, instanceId);

Sending digits as notification to an active workflow instance

When digits are received and we see there is an active workflow instance for the caller, we send a notification with the CallerId and the entered digits to the instance. Behind the scenes, this sends a message to the Service Bus topic that our workflow scope is using. These notifications are then picked up by the Pub/Sub activities described above.

WFClient.PublishNotification(new WorkflowNotification()
{
    Properties =
    {
        { "CallerId", CallerId },
        { "ReceivedDigit", Digit }
    },
    Content = new Dictionary<string, object>()
});

Get next steps from the workflow instance

In this code extract, we look up the workflow instance, verify that the instance is started and then check for the UserOptions variable, which we use to build the TwilioResponse object returned from our API.

// Here we read the external variable 'UserOptions' from the workflow instance.
// This is a string with multiple lines that we will use to return the different options.
string optionsVariable = "{http://schemas.microsoft.com/workflow/2012/xaml/activities/externalvariable}UserOptions";
string goodbyeVariable = "{http://schemas.microsoft.com/workflow/2012/xaml/activities/externalvariable}GoodbyeStatement";
if (_workflows.ContainsKey(callerId))
{
    string existingWorkflowId = _workflows[callerId];
    var wfInstance = WFClient.Instances.Get(_workflowName, existingWorkflowId);
    if (wfInstance != null)
    {
        if (wfInstance.WorkflowStatus == Microsoft.Activities.WorkflowInstanceStatus.Started)
        {
            if (wfInstance.MappedVariables.ContainsKey(optionsVariable))
            {
                var options = wfInstance.MappedVariables[optionsVariable];
                return parseOptions(options);
            }
        }
    }
}

The actual API controller action

In this code extract, you can see that I either create a new instance or use the active workflow instance, using my StateMachineInterpreter class (of which I have included some extracts above).

As you can see, Twilio will send me the Caller and Digits querystring variables, when they are available.  These are the only input parameters I need, as I correlate my workflow instances on the CallerId and use the Digits to trigger the right transitions.

There is one thing I don’t like that much in this design. I am sending the notification over the Service Bus topic, and this usually results in the state transition very quickly. But in a few cases, I inspected external variables that were not yet set (because the notification was still being handled). This resulted in the same menu choice being sent back to my caller. Therefore, I do a short sleep between the notification and the inspection of the external variables. Sync over async is not always ideal…

// GET api/flight
public TwilioResponse Get([FromUri]string Caller = null, [FromUri]string Digits = null)
{
    // If caller is unknown > reject
    if (Caller == null)
    {
        return new TwilioResponse { Reject = new TwilioReject { Reason = "rejected" } };
    }

    // Check if there is an active call process for this phone number
    if (!StateMachineInterpreter.HasActiveWorkflow(Caller))
    {
        DebugWriter.Write(Caller, "Starting new workflow");
        StateMachineInterpreter.StartNewWorkflow(Caller);
    }
    else
    {
        DebugWriter.Write(Caller, "Sending input to existing workflow");
        StateMachineInterpreter.SendInputToExistingWorkflow(Caller, Digits);
    }

    // I don't like doing this, but I will anyway - taking the time to have the published event being handled
    Thread.Sleep(450);

    // At this step, we have an active workflow, linked with the Caller.
    // Now get the next input
    string goodbyeStatement = null;
    var nextOptions = StateMachineInterpreter.GetNextSteps(Caller, Digits, out goodbyeStatement);
    return getTwilioResponse(Caller, nextOptions, Digits == null, goodbyeStatement);
}
 

Conclusion

This demo showed very clearly how Workflow Manager can receive external input and give feedback. It also showed how multiple workflow instances can run in parallel and how events can be correlated to the right workflow. Together with the magic of Twilio, this resulted in a nice demo that allowed different people from the audience to participate interactively.

Sam Vanhoutte


March 4, 2014 at 10:58 PM

As promised in my first post of this series (Day 1), I will try to give an overview of all the sessions from both days of the London BizTalk Summit and point out their key takeaways.

Below you will find the list of all the sessions of day 2. When slides and videos come online, I will update this post to point you to the right location of all that amazing content.

 

Exposing Operational data to Mobile devices using Windows Azure (Kent Weare)

In the first session of the day, Kent took us into the magical world of mobile services and showed how we, as integration people, can benefit from its unique challenges.

For mobile developers, integration is not the primary target, and bridging that gap is a challenge we as integration people need to tackle.

This is where BizTalk Server / Services and Azure Mobile Services come into play.

Enterprise Application Integration challenges did not disappear with mobility; they just got bigger and bigger.
Kent took a real-world business scenario from the power generation industry and showed us how we can use Mobile Services, Service Bus, SQL Azure and BizTalk Services to glue it all together into a very nice hybrid application.

Another important thing to mention is that both the Mobile and BizTalk team are working together to create a tighter integration between both platforms.

 

Manageability of Windows Azure BizTalk Services (Steef-Jan Wiggers)

After Kent it was Steef-Jan's turn to take the stage, talking about the manageability of Windows Azure BizTalk Services.

Fully dressed – in suit and tie – Steef-Jan started by giving a very good overview of all key components and benefits of WABS, demoing them one by one.

The talk was primarily focused on the tools that are available for WABS and all the possibilities the current Windows Azure portal offers for Windows Azure BizTalk Services.

Be sure to check out the entire session when the video comes online, because this was an excellent overview of what is currently out there.

 

Real world Business Activity Monitoring (Dan Rosanova)

How do you take a less popular topic like BAM and bring it to the next level? You just invite Dan to speak at your event.

As we all know, BizTalk is a black box for users and it’s our task to inform them about what has happened on the system or in their business processes. People care about what they can see; we have never seen business people get impressed by an integration project, have we?

Dan showed us how we can overcome this pain point and give people what they want to see, where they want to see it.

This talk gave us a general overview of all the key features and possibilities of BAM. Dan made a demo available on his website for you to play with (http://danrosanova.wordpress.com/rwb).

Last, but not least, was a very small demo of how we can use BAM tracking data in amazing tools like Power View and Power Map; this might open up all new possibilities to get rid of the ‘sexy’ BizTalk 2004 BAM portal.

 

Examining Master Data integration using BizTalk Server and SQL Server Master Data Services (Johan Hedberg)

This session of Johan gave us an introduction on how to use BizTalk Server together with SQL Server Master Data Services.

First giving us an overview of all aspects and architecture of SQL Server Master Data Services he quickly started demonstrating all features of this service.

Showing off the portal, excel tools and how to create models and views we received a good, high level overview of the power of MDS.

Johan's recommendation is to start small with some of your own business data and use BizTalk as the "enabler".

However, we have to be a bit sceptical: there have been no new feature announcements since last year, which makes the future of MDS a bit unclear.

 

Thinking like an Integration Person (Nino Crudele)

It’s hard to find the right words to describe Nino's session.

It was probably one of the most interactive and animated sessions of the entire summit. As an integration person, this is a must-see session.

Nino gave us a quick overview of the perception, challenges and strategies we as BizTalk developers need to deal with on a daily basis.

Next came the most important topic of his talk: ‘BTSG NoS Addon’ for Visual Studio.

This is an add-in for Visual Studio that Nino has been working on for over 4 months (!!). I will not dive into the deep details of the tool because there is limited space on this blog… but here are some key features:

  • Register/Unregister from GAC
  • Jackhammering artifacts
  • Dependencies checking
  • Reflection
  • Pipeline testing
  • Pipeline component testing
  • ….

And all this from Visual Studio! A must-have tool for all our BizTalk developers. Nino will release this tool soon, so be sure to keep an eye on his blog (http://ninocrudele.me).

 

BizTalk Mapping Patterns and an Introduction to WABS maps (Sandro Pereira)

Maps and transformations are among the most common components in an integration process, and who better than Sandro to invite to talk about this topic?

For a couple of months now, Sandro has been working on a free eBook about ‘BizTalk Mapping Patterns and Best Practices’, which will be released to the community very soon.

The BizTalk Summit was an excellent opportunity for Sandro to come and promote his (probably) amazing book.

In his session he showed us some of the most important mapping patterns and tricks: what approach do you use for which type of message, and how can you increase your mapping performance? All these topics were tackled by Sandro.

Be sure to check out his book when it’s released. Sandro also has a set of custom functoids available on CodePlex, free for the community to use!

 

Workflow manager: Running durable workflows in the cloud and on prem (Sam Vanhoutte)

It’s always a bit of a thankless task to give the last session of a major event, but Sam managed to amuse the crowd with an amazing hybrid demo application using Twilio.

He talked about workflow history and predicted what will happen with workflow on the Windows Azure platform.

In Workflow 4.5, some new features were introduced, like support for C#, state machines, side-by-side versioning and performance enhancements.

Workflow Manager is a workflow host that uses the on-premises Service Bus and the Workflow 4 programming model; it's installed in IIS and has its own management site and several Windows services.

After guiding us through the theoretical stuff, a cloud-based workflow solution was demonstrated, showing integration between Windows Azure Service Bus, Workflow Manager, Windows Azure BizTalk Services and on-premises systems.

If you want to play around with Workflow Manager, it is available through the Web Platform Installer, so be sure to check it out.

 

 

Thank you all for reading both blog posts, and be sure to keep an eye on them in the coming days because they will get updated with all the videos from the event.

 

Cheers,

 

Glenn Colpaert

Posted in: Azure | BizTalk | BizTalk Services | WABS



March 4, 2014 at 1:06 AM

Today was the first day of the long-awaited BizTalk Summit in London. BizTalk360 has put together an amazing set of presenters and arranged a splendid venue.

As Tord Glad Nordahl recently put it: “We will never get this many technical community leaders in one room and on stage ever again”.

200 attendees from 19 countries, 100+ companies, 12 Microsoft Integration MVPs and 4 Microsoft product group members, all in one single room: this might be one of the biggest integration-focused events ever conducted in Europe.

 

In the coming days I will try to give an overview of all the sessions and point out their key takeaways. In addition, when the slides and videos of a specific session become available online, I will update this post to point you to the complete content.

 

Ready? Let’s kick off…

 

BizTalk @ Microsoft and upcoming Server Releases (Guru Venkataraman)

‘On prem is real and here to stay’ – those were the words Guru used to kick off his talk at the BizTalk Summit. Guru talked about key trends in integration these days and what Microsoft is currently focusing on.

BizTalk Server 2013 R2 will be released in H2 2014 and can be seen as a ‘compatibility release’; actually, all the R2 releases will be more about compatibility than about major new features.

BizTalk Server 2013 R2 will ship with JSON support (Guru gave us a demo of the new JSON schema wizard), SFTP and authentication improvements for Service Bus, and upgrades for the Healthcare Accelerator.

Another thing to remember is the release cadence for BizTalk Server: a major BizTalk release will come every 2 years, with minor releases in the alternate years.

After his talk, Guru gave the stage to Harish Kumar Agarwal to talk about the latest news in Windows Azure BizTalk Services.

 

Windows Azure BizTalk Services – Latest Updates (Harish Kumar Agarwal)

In this session, Harish Kumar gave us a closer look at all the feature updates that went into the February release of Windows Azure BizTalk Services.

After breaking the world record of showing over 4 slides in one minute, he quickly dived into demoing the all-new EDI features of Windows Azure BizTalk Services. The goal of the new EDI features is to make EDI simpler.

Also added in the February release were more integration support for Service Bus (pull from Service Bus) and operation log support.

On the roadmap for the quarterly updates of Windows Azure BizTalk Services are the following features: AAD integration, pull from LOB adapters, JSON support, BPMN, adapter extensibility, AS2 enhancements and custom EDI codes…

 

How to move to BizTalk Services (Jon Fancey)

This great session by Jon took us into the world of Windows Azure BizTalk Services, talking about the challenges and opportunities in moving from BizTalk Server to WABS.

The advantages of moving to the cloud are very simple: lower cost, less management overhead, more flexibility and better scalability. One of the most important challenges is that WABS and on-premises BizTalk Server are very different, yet offer the same capabilities.

Jon gave some nice demos on how to migrate maps, pipelines, parties and agreements.

He has also been working on a community tool, to be released later this year, that migrates orchestrations to workflows – and if you have ever seen the raw XML of an orchestration, you know this tool is pretty amazing.

 

BizTalk Server Operations and Monitoring using BizTalk360 (Saravana Kumar)

In this session, Saravana talked about the challenges of monitoring and supporting one or more BizTalk environments.
Last year BizTalk360 released a new version, and Saravana took us through the most important features and options.

Saravana did a good job of guiding us through and demoing the entire BizTalk360 product.

 

When to use What: A look at choosing Integration Technology (Richard Seroter)

When the Applied Architecture Patterns on the Microsoft Platform (http://www.amazon.com/Applied-Architecture-Patterns-Microsoft-Platform/dp/184968054X) book was released 3 years ago, there were about 10 integration tools for you to choose from.

Now, with the growing cloud offering, there are more than 14 tools available out there.

This great talk by Richard gave us an overview of the choices we have to make as BizTalk developers or architects; choosing the right integration tool for the right scenario was the key takeaway of this session.

Richard created a simple (yet very impressive) decision framework for down-selecting the choices. The framework and choices were based on requirements, strategy, design, operations and important questions you should ask your customer…

According to Richard, the future is hybrid: a combination of Service Bus with BizTalk and Workflow might be the next best thing when designing an integration strategy for your application.

Be sure to check out this session when the video comes online, because Richard will blow your mind!

 

What if you mess up the configuration (Tord Glad Nordahl)

The standard was set; after Richard it was Tord's turn to take the stage and talk about what happens if you mess up your BizTalk configuration.

In this very interactive session, Tord gave us some very neat tips and tricks to fix things when you as a developer (or admin) have messed up.

As a BizTalk admin, Tord has done lots of health checks and learned many things in the field. We received a ton of best practices, lessons learned and tips on all important topics of BizTalk Server.

Don’t forget to check the video when it comes online!

 

BizTalk 2013 in Windows Azure IaaS (Stephen Thomas)

In the last session of the day, Stephen gave us an overview of Windows Azure VMs and how to configure them through the portal.

He quickly moved on to explain everything we ever wanted to know about BizTalk 2013 IaaS.

How do you set up single- and multi-server configurations in the cloud with PowerShell and XML configuration files? Stephen talked about and demoed all of that.

Good to know: all of the scripts Stephen used will be made available to the community, so be sure to keep an eye on http://www.biztalkgurus.com/

 

Cheers and see you tomorrow,

 

Glenn Colpaert

Posted in: Azure | BizTalk | BizTalk Services | WABS



February 14, 2014 at 4:35 PM

When using SCOM as a monitoring tool in a BizTalk environment, you will find some shortcomings in the BizTalk Server management pack from time to time.

We have already solved many of these shortcomings in our own Codit BizTalk management pack (containing custom rules, overrides,…).

An example of such a shortcoming is the default monitoring of suspended orchestrations. The out-of-the-box management pack contains a monitor called “Orchestration Suspended Instances Availability Monitor”.

Assuming you know the difference between alerts and monitors, the biggest disadvantage of this monitor is its nature: being a monitor. When an orchestration gets suspended for some reason, you will receive an alert. However, when other orchestrations of the same type get suspended while the previous instance has not yet been terminated or resumed, no new alerts will be fired.

It can be fatal in a production environment if the support team doesn’t receive an alert. Also, in a scenario where an orchestration gets suspended by code for some seconds (waiting for another orchestration to finish first, sequencing,…), a false alert could be triggered.

 

We created an alert that will notify us every time an orchestration gets suspended and stays suspended longer than a certain amount of time.

 

This is how to create such an alert:

 

Create a new rule to execute a script every 15 minutes:

image

Script for this rule:

' ---------------------------------------------------------
' SQL Database Query Check
' ---------------------------------------------------------
' Param 0: The SQL connection string for the server 
' Param 1: The Database to use
' Param 2: SQL Query
' Author:  Brecht Vancauwenberghe
' Date:    05-02-2014
' ---------------------------------------------------------
Option Explicit

Sub Main()

    Dim oAPI, strServer, strDatabase, iThresholdHours, objBag, strErrDescription, objArgs, I, Param

    Const EVENT_TYPE_ERROR = 1
    Const EVENT_TYPE_WARNING = 2
    Const EVENT_TYPE_INFORMATION = 4

      ' Initialize SCOM Script object
      Set oAPI = CreateObject("MOM.ScriptAPI")

      ' Write Parameters to eventlog
      ' Enable for debugging.
      Set objArgs = Wscript.Arguments
      For I = 0 to objArgs.Count -1
          Param = objArgs(I)
           'strErrDescription = strErrDescription & ", " & Param
      Next
          'call oAPI.LogScriptEvent("SQL Database Query Check.vbs", 1313, EVENT_TYPE_INFORMATION, strErrDescription)     

      If WScript.Arguments.Count = 3 then

      
            ' Retrieve parameters
            strServer = CStr(WScript.Arguments(0))
            strDatabase = CStr(WScript.Arguments(1))

                  'Connect to the database
                  Dim cnADOConnection 
                  Set cnADOConnection = CreateObject("ADODB.Connection") 
                  cnADOConnection.Provider = "sqloledb" 
                  cnADOConnection.ConnectionTimeout = 60
                  Dim ConnString
                  ConnString = "Server=" & strServer & ";Database=" & strDatabase & ";Integrated Security=SSPI" 
                  cnADOConnection.Open ConnString
                  
                  'Connection established, now run the code
                  Dim rst 
                  Set rst = cnADOConnection.Execute(CStr(WScript.Arguments(2)))

                  ' should be just one record
                  ' oResults.MoveFirst

                  'Set objBag = oAPI.CreateTypedPropertyBag(1)
                  'Call objBag.AddValue("Count", CInt(oResults(0)))
                  
                  'oAPI.AddItem(objBag)

                  Do While Not rst.EOF
                      Call oAPI.LogScriptEvent("check_suspended_orchestrations.vbs", 12223, EVENT_TYPE_ERROR, "An orchestration is suspended with following error :" & rst.fields.item(0) & " and is active for longer than 5 minutes. Check the BizTalk environment!")
                      rst.MoveNext
                  Loop
                  cnADOConnection.Close
                  
            'return the property bag objects
            'Call oAPI.ReturnItems

      End If 
            
End Sub

Call Main()

Parameters for this rule:

$Target/Property[Type="Microsoft.BizTalk.Server.2010.BizTalkGroup"]/MgmtDbServerName$ BizTalkDTADb "DECLARE @msgbox nvarchar (500)Set @msgbox = (select top(1) DBserverName from [BizTalkMgmtDb].[dbo].[adm_MessageBox] with (nolock))exec('SELECT * FROM [BizTalkDTADb].[dbo].[dtav_ServiceFacts] with (nolock) INNER JOIN [' + @msgbox + '].BizTalkMsgBoxDb.dbo.InstancesSuspended as msgboxdbinstancesSuspended ON [BizTalkDTADb].[dbo].[dtav_ServiceFacts].[ServiceInstance/InstanceId] = msgboxdbinstancesSuspended.uidInstanceID where [Service/Type] = ''Orchestration'' and datediff(minute,[ServiceInstance/StartTime],getutcdate()) between 5 and 15')"

The VBScript fires the following SQL query, defined as a parameter, against the MsgBox and DTA databases:

DECLARE @msgbox nvarchar (500)Set @msgbox = (select top(1) DBserverName from [BizTalkMgmtDb].[dbo].[adm_MessageBox] with(nolock))exec('SELECT * FROM [BizTalkDTADb].[dbo].[dtav_ServiceFacts] with(nolock) INNER JOIN [' + @msgbox + '].BizTalkMsgBoxDb.dbo.InstancesSuspended as msgboxdbinstancesSuspended  ON [BizTalkDTADb].[dbo].[dtav_ServiceFacts].[ServiceInstance/InstanceId] = msgboxdbinstancesSuspended.uidInstanceID  where [Service/Type] = ''Orchestration'' and datediff(minute,[ServiceInstance/StartTime],getutcdate()) between 5 and 15')

This query returns the type of the suspended orchestration, among other things.

Another example is a query that only uses the MsgBox:

SELECT nvcErrorDescription 
FROM [BizTalkMsgBoxDb].[dbo].[InstancesSuspended] with (nolock) 
JOIN [BizTalkMsgBoxDb].[dbo].[ServiceClasses] as serviceclasses 
ON uidClassID = serviceclasses.uidServiceClassID 
where serviceclasses.nvcName = 'Orchestration' and datediff(minute,dtSuspendTimeStamp,getutcdate()) between 5 and 15

 

This returns the error description explaining why the orchestration got suspended. You can choose the query you prefer, or do much more with it.
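
If you want to test such a query outside SCOM first, a small C# harness does the trick. A sketch, assuming the MsgBox variant above and integrated security (the server name is an assumption; point it at your own BizTalk SQL Server instance):

using System;
using System.Data.SqlClient;

class SuspendedOrchestrationCheck
{
    static void Main()
    {
        // Server name is an assumption - adjust to your environment.
        var connStr = "Server=BTSSQLSERVER;Database=BizTalkMsgBoxDb;Integrated Security=SSPI";
        var sql = @"SELECT nvcErrorDescription
                    FROM [BizTalkMsgBoxDb].[dbo].[InstancesSuspended] WITH (NOLOCK)
                    JOIN [BizTalkMsgBoxDb].[dbo].[ServiceClasses] AS serviceclasses
                      ON uidClassID = serviceclasses.uidServiceClassID
                    WHERE serviceclasses.nvcName = 'Orchestration'
                      AND DATEDIFF(minute, dtSuspendTimeStamp, GETUTCDATE()) BETWEEN 5 AND 15";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine("Suspended: " + reader.GetString(0));
            }
        }
    }
}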

Remark: for the MsgBox example you will need to use the following parameters (the query is executed against the MsgBox):

$Target/Property[Type="Microsoft.BizTalk.Server.2010.BizTalkRuntimeRole"]/MsgBoxDbServerName$ BizTalkMsgBoxDb "SELECT nvcErrorDescription FROM [BizTalkMsgBoxDb].[dbo].[InstancesSuspended] with(nolock) JOIN [BizTalkMsgBoxDb].[dbo].[ServiceClasses] as serviceclasses ON uidClassID = serviceclasses.uidServiceClassID where serviceclasses.nvcName = 'Orchestration' and datediff(minute,dtSuspendTimeStamp,getutcdate()) between 5 and 15"

Also, when using the MsgBox example, don’t forget to change your alert target to BizTalk Run-Time Role:

image

Now you need to create a second rule that checks for this event ID in the event log, so that an alert is triggered per suspended orchestration. We have one rule that retrieves all custom alerts:

image

You can find lots of examples of how to do this on the internet: http://technet.microsoft.com/en-us/library/bb309568.aspx

This is the line in the script that writes to the event log; the second rule needs to check for the event ID written by the script. You can also include some result information from your query in the alert:

Call oAPI.LogScriptEvent("check_suspended_orchestrations.vbs",12223, EVENT_TYPE_Error, "An orchestration is suspended with following error :" & rst.fields.item(0) & " and is active for longer than 5 minutes. Check the BizTalk environment!")

 

This is the result, an emailed alert per suspended orchestration containing the error:

Alert: Check Eventlog for Custom SCOM Rules [Check alert content for more details]

Source: BizTalkMgmtDb.BELANSQLBTSPRDS\BTS

Path: BELANBTSPRD1.Ghent.corp.mycompany.com

Last modified by: System

Last modified time: 06/02/2014 15:48:52

Alert description: Event Description: check_suspended_orchestrations.vbs : An orchestration is suspended with following error :Uncaught exception (see the 'inner exception' below) has suspended an instance of service 'Mycompany.MyApplication.Ghent.Processes.CreateTimeOutAlerts(8b2addb3-0531-9078-786b-a96d77224319)'.

The service instance will remain suspended until administratively resumed or terminated.

If resumed the instance will continue from its last persisted state and may re-throw the same unexpected exception.

InstanceId: ee15bbad-e4a8-40fd-ad16-46e3cc6f1f81

Shape name: Save Alert

ShapeId: 2652d27b-61fc-47b9-84ad-eca188d776bf

Exception thrown from: segment 1, progress 62 Inner exception: Exception in SaveAlert

Exception type: ApplicationException

Source: MyCompany.MyApplication.Helpers

Target Site: Void Add(Int32, System.String, System.String) The following is a stack trace that identifies the location where the exception occured

at MyCompany.MyApplication.Helpers.AlertHelper.Add(Int32 stateId, String type, String status)

at MyCompany.MyApplication.Ghent.Processes.CreateTimeOutAlerts.segment1(StopCond and is active for longer than 5 minutes. Check the BizTalk environment!

 

This was tested on a SCOM 2012 SP1 environment using BizTalk Server 2010. We expect this to work on older/newer versions of SCOM and BizTalk, but at this moment it hasn’t been validated by Codit yet. Also note that you should be using an unsealed management pack.

Posted in: BizTalk | Monitoring



January 10, 2014 at 3:40 PM

It’s always nice to be able to look back on things and see what was good and what needs some work or can be done better in the future.

The beginning of a new year is the perfect time to do this, so we're listing the top 5 posts from our blog, based on the number of visits.

Be sure to check them out if you haven't already!

 

Our most popular posts of 2013

This list shows our most visited blog posts, added in 2013:

 

  1. Troubleshooting SSL client certificate issue on IIS (Toon Vanhoutte)
  2. Using ACS and WAAD with JWT Tokens for Web and Store Applications (Part 2) (Jonas Van der Biest)
  3. Windows Azure BizTalk Services – getting started (Sam Vanhoutte)
  4. Rule “FusionActive Template Library (ATL)” failed at SQL Server 2008 R2 Installation (Henry Houdmont)
  5. Windows Azure BizTalk Services & BizTalk 2013 - comparing the mapper (Glenn Colpaert)

 

Our most popular posts (All-time)

This list shows our most visited blog posts of all-time, regardless of when they were posted.
Note that older posts are likely to rank higher in this list, simply because they have had more time to accumulate visitors.

 

  1. Best practices for consuming web services within BizTalk Server (Toon Vanhoutte, 2012)
  2. Service Bus for Windows Server (Sam Vanhoutte, 2012)
  3. Hosting WCF HTTP receive location inside BizTalk, without using IIS (Sam Vanhoutte, 2010)
  4. Creating your own virtual machine on Azure: Introducing VM Role (Sam Vanhoutte, 2010)
  5. Troubleshooting SSL client certificate issue on IIS (Toon Vanhoutte, 2013)

 

On average, we increased traffic to our blog by 119%, which is a very nice achievement!

Thank you to all our visitors, we hope to be as active in 2014 as in 2013 with even better content.

 

Have a great 2014!

Posted in: Community
