August 28, 2014 at 3:55 PM

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points, and through the Sentinet API interfaces.
In the last post we saw how to build a custom alert handler for SLA violation notifications. In this fourth post I want to continue the Sentinet extensibility series by exploring another possible customization: routing.

         

Routing

 

The routing feature makes it possible to deliver the messages received on the virtual inbound endpoint to one of several alternative endpoints of the same backend service. When the backend service exposes multiple endpoints, some of them (or all) can be included in message routing by marking them in the Sentinet Administrative Console. Notice that at least two backend endpoints must be selected to activate routing.

 

The Sentinet routing feature improves API availability with automatic failover: in case of a communication failure, the Sentinet Node falls back to the next available endpoint (this does not happen for a SOAP fault, because it is considered a valid response).

 

Sentinet supports four router types:

  • Round-Robin with priority or equal distribution of the load. This is the default routing mode; the fallback is automatic.
  • Simple Fault Tolerance. The routing mechanism always hits the endpoint with the highest priority and, in case of communication failure, falls back to the endpoint with the lower priority.
  • Multicast. A copy of the incoming message is delivered to all the endpoints.
  • Custom. The routing rules are defined in a custom .NET component.

 

Scenario

       

Here are the requirements for this scenario:

  • Sentinet is deployed behind a network load balancer, and the customer doesn't want the routed traffic to pass through the NLB again.
  • The virtualized backend service has nine endpoints (three per continent), and the load should be routed depending on which continent the request comes from.
  • The load routed to Europe and North America should be equally distributed among the endpoints (simple round robin).
  • The load routed to Asia should always hit a specific endpoint and, in case of error, must fall back to the others (simple fault tolerance).

 

In short, we want to build a geography-based custom router that merges the Round-Robin and Simple Fault Tolerance types. To build the GeoRouter, I started from the example I found in the Sentinet SDK.

[Image: worldwide scenario]

 

Build the custom Router

 

A Sentinet custom router is a regular .NET component that implements the IRouter interface (ref. Nevatech.Vbs.Repository.dll) and extends the MessageFilter abstract class.

The IRouter interface contains three methods:
  • GetRoutes – Defines the routing rules.
  • ImportConfiguration – Reads (and applies) the component's configuration.
  • ExportConfiguration – Saves the component's configuration.

 

The custom router reads the component configuration, which defines which endpoints belong to which region (continent) and the type of routing to be applied. Based on this XML, the GetRoutes method creates the Route objects that are responsible for the message delivery.

<Regions>
  <Region code="NA" roundRobin="true">
    <!-- North America -->
    <Endpoint>net.tcp://northamerica1/CustomerSearch/4</Endpoint>
    <Endpoint>net.tcp://northamerica2/CustomerSearch/5</Endpoint>
    <Endpoint>net.tcp://northamerica3/CustomerSearch/6</Endpoint>
  </Region>
  <Region code="AS" roundRobin="false">
    <!-- Asia -->
    <Endpoint>net.tcp://asia1/CustomerSearch/7</Endpoint>
    <Endpoint>net.tcp://asia2/CustomerSearch/8</Endpoint>
    <Endpoint>net.tcp://asia3/CustomerSearch/9</Endpoint>
  </Region>
  <Region code="EU" roundRobin="true">
    <!-- Europe -->
    <Endpoint>net.tcp://europe1/CustomerSearch/1</Endpoint>
    <Endpoint>net.tcp://europe2/CustomerSearch/2</Endpoint>
    <Endpoint>net.tcp://europe3/CustomerSearch/3</Endpoint>
  </Region>
</Regions>
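The ImportConfiguration and ExportConfiguration methods are responsible for persisting this XML. As a rough illustration only (the Region class and the parsing helper below are my own assumptions, not the SDK or demo code), the configuration could be materialized like this:

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Hypothetical configuration classes: one Region per <Region> element.
public class Region
{
    public string Code { get; set; }
    public bool EnableRoundRobin { get; set; }
    public List<string> Endpoints { get; set; }
}

public static class RegionConfigParser
{
    // Parses the <Regions> XML shown above into the objects used by GetRoutes.
    public static List<Region> Parse(string xml)
    {
        return XDocument.Parse(xml)
            .Root
            .Elements("Region")
            .Select(r => new Region
            {
                Code = (string)r.Attribute("code"),
                EnableRoundRobin = (bool)r.Attribute("roundRobin"),
                Endpoints = r.Elements("Endpoint").Select(e => e.Value).ToList()
            })
            .ToList();
    }
}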

 

The GetRoutes method returns a collection of Route objects. A Route is composed of a filter expression, an EndpointCollection and a Priority.


 

How does the Sentinet engine process the Collection<Route> object?

The Sentinet engine processes the routes one by one, in the order defined by the priority field, until the first match occurs. When the filter criteria are matched, the request message is sent to the first endpoint in the EndpointCollection. If the current endpoint throws an exception, Sentinet falls back to the next endpoint in the collection.

 

How do we populate the Collection<Route> to achieve our goals?

The fallback is automatically implemented by Sentinet whenever a Route's endpoint collection contains more than one endpoint. Consequently, creating a route that contains one single endpoint disables the fallback mechanism.

 

The round-robin mechanism implemented in this demo is very simple. Basically, the distribution of the load between the endpoints is achieved by:

- Creating a number of routes equal to the number of endpoints in that region (e.g. Europe has three endpoints, so three routes are created and added to the collection).

- Giving every route a different filter expression based on a random number.

- Sorting the items in every route's endpoint collection in a different order, to prioritize a different endpoint at every iteration.

 

Here is a visual representation of the Routes collection for each mode:

[Image: Routes collection for round robin + automatic fallback]

[Image: Routes collection for automatic fallback without round robin]

[Image: Routes collection for round robin without automatic fallback (not implemented in this example)]

 

So what does the code do? Basically, it reads the collection of endpoints that we checkmarked during the virtual service design and, if an endpoint is contained in the XML configuration, adds it to the continent-based route object.

 

Here is the GetRoutes code:

        public IEnumerable<Route> GetRoutes(IEnumerable<EndpointDefinition> backendEndpoints)
        {
            if (backendEndpoints == null) throw new ArgumentNullException("backendEndpoints");

            // Validate router configuration
            if (!Validate()) throw new ValidationException(ErrorMessage);

            // Collection of routes to be returned
            Collection<Route> routes = new Collection<Route>();

            // Ordered collection of outbound endpoints used in a single route
            Collection<EndpointDefinition> routeEndpoints = new Collection<EndpointDefinition>();

            // The order of a route in a routing table 
            byte priority = Byte.MaxValue;

            foreach (Region region in Regions)
            {
                // Collection can be reused as endpoints are copied in Route() constructor
                routeEndpoints.Clear();

                // collection of the backend endpoint per region 
                foreach (string endpointUri in region.Endpoints)
                {
                    // Find outbound endpoint by its AbsoluteURI
                    EndpointDefinition endpoint = backendEndpoints.FirstOrDefault(e => String.Equals(e.LogicalAddress.AbsoluteUri, endpointUri, StringComparison.OrdinalIgnoreCase));
                    if (endpoint == null) throw new InvalidOperationException(String.Format(CultureInfo.CurrentCulture, InvalidRouterConfiguration, endpointUri));
                    routeEndpoints.Add(endpoint);
                }

                if (region.EnableRoundRobin)
                {
                    // build a route for each endpoint in the region
                    var iEndpointIndex = 0;
                    foreach (string endpointUri in region.Endpoints)
                    {
                        // change the backend's endpoint order 
                        if (iEndpointIndex > 0) SortEndpoints(routeEndpoints, iEndpointIndex - 1);

                        // Configure message filter for the current route
                        var rrFilter = new GeoMessageFilter
                        {
                            ContinentCode = region.Code,
                            RoundRobin = region.EnableRoundRobin,
                            BalanceFactor = GetBalancingFactor(iEndpointIndex)
                        };

                        routes.Add(new Route(rrFilter, routeEndpoints, priority));
                        iEndpointIndex++;
                        priority--;
                    }
                }
                else
                {
                    // build a route for each region
                    var filter = new GeoMessageFilter
                    {
                        ContinentCode = region.Code,
                        RoundRobin = false
                    };
                    // endpoint Fallback scenario
                    routes.Add(new Route(filter, routeEndpoints, priority));
                }
                priority--;
            }

            return routes;
        }

And the GeoMessageFilter class:

    public sealed class GeoMessageFilter : MessageFilter
    {
        #region Properties

        public String ContinentCode { get; set; }
        public bool RoundRobin { get; set; }
        public double BalanceFactor { get; set; }

        private static Random random = new Random(); 
        #endregion

        #region Methods

        public override bool Match(Message message)
        {
            var remoteProps = (RemoteEndpointMessageProperty) message.Properties[RemoteEndpointMessageProperty.Name];
            return Match(remoteProps.Address, ContinentCode);
        }


        private bool Match(string ipAddress, string continentCode)
        {
            var requestCountryCode = GeoLocation.GetCountryCode(ipAddress);
            var matchTrue = (CountryMap.GetContinentByCountryCode(requestCountryCode) == continentCode.ToUpperInvariant());

            if (matchTrue && RoundRobin)
            {
                if (random.Next(0, 100) > BalanceFactor) return false;
            }
            return matchTrue;
        }

        #endregion
    }
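The SortEndpoints and GetBalancingFactor helpers referenced by GetRoutes are not shown in the SDK excerpt. Here is one possible implementation, assuming three endpoints per region as in this scenario. Because a region's routes are evaluated in priority order, the first of three routes must match roughly one third of the requests, the second roughly half of the remaining ones, and the last one all of them, which yields balancing factors of about 33, 50 and 100:

    // Hypothetical helper implementations (assumptions, not the actual demo code).

    // Rotates the endpoint collection left by one position, so that every
    // round-robin route prioritizes a different endpoint. The index parameter
    // from the call site is not needed for a simple rotation.
    private static void SortEndpoints(Collection<EndpointDefinition> endpoints, int index)
    {
        EndpointDefinition first = endpoints[0];
        endpoints.RemoveAt(0);
        endpoints.Add(first);
    }

    // Returns the percentage threshold compared against random.Next(0, 100)
    // in GeoMessageFilter.Match, assuming three endpoints per region.
    private static double GetBalancingFactor(int endpointIndex)
    {
        switch (endpointIndex)
        {
            case 0: return 33;   // first route: ~1/3 of the requests
            case 1: return 50;   // second route: ~1/2 of the remainder
            default: return 100; // last route: everything that is left
        }
    }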

 

Register and configure

 

The custom component can be registered and graphically configured using the Sentinet Administrative Console. Go to the Design tab of the virtual service and click Modify, then select the endpoints you want to be managed by the routing component. On the endpoint tree node, click the ellipsis button.

[Image: include endpoints and set up the router]

Add a new Custom Router, specifying a few parameters:

  • Name. The friendly name of the custom router (GeoRouter).
  • Assembly. The fully qualified name of the assembly that contains the router implementation (Codit.Demo.Sentinet.Router, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null).
  • Type. The .NET class that implements the IRouter interface (Codit.Demo.Sentinet.Router.Geo.GeoRouter).
  • Default Configuration. In this example I left it blank; I will specify the parameters when the router is applied to the virtual service.

 

Select the router and set the custom configuration.

[Image: custom router configuration]

Save the configuration and wait for the next heartbeat so that the modifications are applied.

 

Test

To test the virtual service with the brand new custom router, this time I tried WcfStorm.Rest.

Test case #1 – All nine endpoints were available.

The messages were routed to the correct continent and the load was distributed among the backend services as expected.

The image below combines the backend services monitor (top left) with a map that displays the sources of the service calls.

As you can see, this basic load balancer is not bulletproof, but the load is spread almost equally, which is acceptable for this proof of concept.

[Image: test dashboard and IP maps]

 

Test case #2 – Fallback test on the European region.

I shut down the europe1 and europe2 services, so only the europe3 service was active.

Thanks to the fallback mechanism, the virtual service always responded. In the monitor tab you can see the fallback in action.


 

Test case #3 - All the European backend services were stopped.

This means that a route had a matching filter and Sentinet tried to contact all the endpoints in its endpoint collection, but every attempt failed. The error message we got is reported below. Notice that the reported address differs depending on which route has been hit.

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
The message could not be dispatched because the service at the endpoint address 
'net.tcp://europe3/CustomerSearch/3' is unavailable for the protocol of the address.
</string>

Test case #4 – No matching rules.

If there are no matching rules (e.g. when sending messages from South America), the following error message is returned.

<string xmlns="http://schemas.microsoft.com/2003/10/Serialization/">
No matching MessageFilter was found for the given Message.</string>

 

Conclusion

Sentinet is designed to be extensible in multiple areas of the product. In this post I've demonstrated how to create a geography-based custom router that combines the round-robin and fault-tolerance features. In the next post I will discuss the Sentinet management APIs.

 

Cheers,

Massimo

Posted in: Sentinet | SOA | WCF



July 8, 2014 at 9:18 AM

In Sentinet, authorization and access to any virtual service is defined using an Access Rule, which is a combination of authorization expressions and logical conditions. Sentinet provides an out-of-the-box access rule composer with a set of common Access Rule Expressions like X509 certificate, claim and user validation, etc.

 

Running out of built-in tools to cover all the business scenarios is almost inevitable; extensibility is the way to fill this gap. Extensibility is one of the key features of every successful product, and it particularly shines in Sentinet, where you can work with several extensibility points.

 

In this blog post I will go through the steps involved in creating a custom access rule expression, registering it and testing it.

 

Create a Custom Rule Expression

A custom access rule expression is a regular .NET component that implements the IMessageEvaluator interface (ref. Nevatech.Vbs.Repository.dll). This interface contains three methods:

  • Evaluate – Contains the access rule logic.
  • ImportConfiguration – Reads (and applies) the component's configuration.
  • ExportConfiguration – Saves the component's configuration.

 

In this example I'm going to define a component for evaluating an APIKey sent in a custom header (for SOAP services) or as part of the service URL (for REST services).

As shown in the figure below, the implementation is pretty straightforward:

  • the getSecurityContext method inspects the System.ServiceModel.Channels.Message object to read the APIKey;
  • the isValidKey method validates the key;
  • the ServiceName and isRest properties are set with the values specified in the component configuration.

 

[Image: ApiKeyValidator implementation]
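Since the implementation above is only shown as a screenshot, here is a minimal sketch of what the evaluation logic could look like. The header name, namespace and query string parameter are my own assumptions, and the exact Evaluate signature is defined by the IMessageEvaluator interface in the SDK:

// Illustrative sketch only, not the actual demo code.
public bool Evaluate(System.ServiceModel.Channels.Message message)
{
    string apiKey = GetSecurityContext(message);
    return !String.IsNullOrEmpty(apiKey) && IsValidKey(apiKey, ServiceName);
}

private string GetSecurityContext(System.ServiceModel.Channels.Message message)
{
    if (IsRest)
    {
        // REST: the API key travels as part of the service URL,
        // e.g. .../Offer?key=ABC123 (parameter name assumed).
        var query = message.Headers.To.Query;
        return System.Web.HttpUtility.ParseQueryString(query)["key"];
    }

    // SOAP: the API key travels in a custom header (name/namespace assumed).
    int index = message.Headers.FindHeader("APIKey", "http://codit.eu/demo");
    return index >= 0 ? message.Headers.GetHeader<string>(index) : null;
}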

 

Here is the simple implementation for reading the component configuration.

[Image: configuration import/export implementation]

 

In this example the API Key is validated against an SQL table. A stored procedure with two parameters evaluates whether the key has access to the specific service.

[Image: SQL implementation]
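The key validation itself could be sketched as follows; the stored procedure name, its parameters and the connection string field are assumptions based on the screenshot, not the actual demo code:

using System.Data;
using System.Data.SqlClient;

// Hypothetical sketch: asks a stored procedure whether the key
// may access the given service.
private bool IsValidKey(string apiKey, string serviceName)
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("dbo.ValidateApiKey", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@ApiKey", apiKey);
        command.Parameters.AddWithValue("@ServiceName", serviceName);

        connection.Open();
        // The procedure is assumed to return a single bit value.
        return Convert.ToBoolean(command.ExecuteScalar());
    }
}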

 

Register

The first step is to copy the DLL(s) to the Sentinet node(s). In this case, notice that it's not mandatory to sign the assembly and register it in the GAC. I simply created a bin folder on my Sentinet node and copied the DLLs there.

[Image: bin folder on the Sentinet node]

 

The custom component can be registered and graphically configured using the Sentinet Administrative Console. Click on the Access Rules node, add a new Access Rule, then at the bottom of the rule designer press Add.

 

Five parameters need to be specified:

  • Name. The friendly name of the custom rule expression (APIKey).
  • Assembly. The fully qualified name of the assembly that contains the custom rule expression (Codit.Demo.Sentinet.AccessControl, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null).
  • Type. The .NET class that implements the IMessageEvaluator interface (Codit.Demo.Sentinet.AccessControl.APIKeyValidator).
  • IsReusable. Set it to true if your component is thread-safe. The Sentinet Authorization Engine will then create and use a single instance of this component.
  • Default Configuration. In this example I left it blank; I will specify the parameters when the expression is used in a specific Access Rule.

 

[Image: custom expression registration]

 

Test

First I created a SOAP and a REST virtual service that virtualize the Offer backend service (SOAP). Then I defined a couple of access rules using the new APIKey custom Access Rule Expression, and I applied those rules to the virtual services using the Access Control tab.

In this example the service name is passed to the expression through a configuration parameter, but a better solution would be to extract it from the Message class.

 

[Image: access rules]

 

For this example I defined a few API keys in the SQL table.

[Image: API keys in the SQL table]

 

The SOAP scenario has been tested with soapUI. Depending on the value returned by the Evaluate method, the virtual service returns different responses:

  • True => Access granted; the response message is returned.
  • False => Generic access denied via SOAP fault.
  • Exception => Custom exception via SOAP fault.

 

[Image: SOAP test in soapUI]

 

The REST scenario has been tested with Fiddler. Depending on the value returned by the Evaluate method, the virtual service returns different HTTP codes:

  • True => HTTP 200
  • False => HTTP 403
  • Exception => HTTP 500

[Image: REST test in Fiddler]

 

Finally, below, you can see how the test results are presented in the Sentinet Monitoring tab.

[Image: Sentinet monitoring tab]

 

Conclusion

The Sentinet extensibility model is intended to support custom scenarios by enabling you to modify the platform behavior at different levels. In this post I have discussed how to extend the Access Rule engine with an additional authorization component.

 

Cheers,

Massimo

Posted in: .NET | Monitoring | Sentinet | WCF



June 24, 2014 at 9:52 AM

Now that BizTalk 2013 R2 is released on MSDN, it’s time to take a first look at the new features and possibilities of this release.

As Guru already clarified at the last BizTalk Summit (BizTalk Summit Summary), the R2 releases of the BizTalk products focus more on 'compatibility/platform' alignment and less on shipping new (major) features/add-ons to the platform.

To give you an overview, the following features were added in the new BizTalk Server 2013 R2 release:

  • Platform Alignment with Visual Studio, SQL Server,…
  • Updates to the SB-Messaging Adapter
  • Updates to the WCF-WebHttp Adapter
  • Updates to the SFTP Adapter
  • Updates to the HL7 Accelerator

This blog post will focus on the updates to the WCF-WebHttp adapter that were shipped with this new release of Microsoft BizTalk Server.

 

WCF-WebHttp Adapter enhancements

With this new version of BizTalk Server, the WCF-WebHttp adapter now also supports sending and receiving JSON messages. This new version of BizTalk ships with a wizard to generate an XSD schema from a JSON instance and two new pipelines (and pipeline components) for processing JSON messages in a messaging scenario.

 

Out with the old, in with the new

Let us take a quick glance at the new components that are shipped with this new BizTalk release. First of all, we have 2 new pipelines and components for encoding and decoding JSON messages in our BizTalk ports.


Configuring these 2 new components is very straightforward. On the encoder side there is one component property that specifies whether the XML root element should be ignored while encoding the XML message to JSON. On the decoder side there are 2 properties to specify: the root node and the root node namespace to be used by the decoder in the generated XML message.
You can find a screenshot of the properties of both components below.

 

[image: JSON encoder properties]

[image: JSON decoder properties]
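To make the decoder settings concrete, here is a hand-written illustration (not the output of the actual components): given the JSON body below and a decoder configured with root node SearchResponse and root node namespace http://demo/itunes (both names assumed), the decoder would produce roughly the following XML message:

{ "resultCount": 1 }

<SearchResponse xmlns="http://demo/itunes">
  <resultCount>1</resultCount>
</SearchResponse>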

 

Next to the 2 new components for parsing JSON messages, there is also the new JSON Schema Wizard, which allows you to generate XSD files based on a JSON instance. You can find this new wizard in the "Add -> New Item" menu of Visual Studio.


 

JSON, do you speak it?

To demonstrate the new features and possibilities of the enhanced WCF-WebHttp adapter, I created a little POC that uses the iTunes API (http://itunes.apple.com).

First of all, I downloaded a JSON instance from the API at the following URL: http://itunes.apple.com/search?term=metallica.


Next, I used this JSON instance to generate the XSD with the new JSON Schema Wizard. The wizard itself is pretty straightforward: you simply specify the 'Instance File', choose your 'Root Node Name' and 'Target Namespace' and press 'Finish' to get your XSD generated.


When using the 'Metallica' instance, this results in the following schema.

[image: generated XSD schema]

 

 

After generating the schema, I started configuring the Send Port to actually consume the service from BizTalk.

Below you can see the configuration of my 2-Way Send Port that sends requests to the API. The send pipeline is a PassThruTransmit pipeline, and the receive pipeline is a custom pipeline with the JSON decoder inside it.

[image: 2-Way Send Port configuration]

 


 

 

When sending the request to the API, we get the following message in the BizTalk MessageBox (the parsed JSON instance).

[image: parsed JSON instance in the MessageBox]

This sums up the basic demo of the new enhancements to the WCF-WebHttp adapter.

If you have questions regarding this scenario, please share them in the comments section of this blog post and I’ll be happy to answer them.

 

Cheers,

Glenn Colpaert

Posted in: BizTalk | Schemas | WCF



June 2, 2014 at 4:00 PM

With “Mobile-First, Cloud-First” being the new trending mantra, the communication between devices, on-premises services and the cloud is growing tremendously. Such a scenario drives the need for a means that provides a high-level perspective and complete control of all the services, irrespective of their hosting model, and that can aggregate, secure and tune them for business efficiency.

 

Sentinet by Nevatech

 

Sentinet is a lightweight and scalable SOA and API management platform that helps you define, secure and evolve your API program.

It delivers runtime SOA management by enforcing design-time decisions using policies and remote declarative configurations. These capabilities apply to both SOA and REST implementations in a completely non-intrusive manner.
Based on the concepts of service virtualization and service brokerage, it allows you to transparently manage solutions that run on a diverse SOA infrastructure and to quickly adapt to changes.

 

In this blog post I want to give you an overview of the components and the main features.

 

The components

 

A) Sentinet Nodes. A high-performance, low-latency, scalable hosting model that can dynamically and non-intrusively extend and modify the behavior of existing services.

B) Sentinet Console. A web-based interactive application that allows SOA administrators and IT operators to manage and monitor the APIs and SOA services.

C) Sentinet Management API. An API that developers can leverage and extend to build their own management extensions and applications.

D) SOA Repository. A centralized and secured repository of all managed SOA assets, like services, policies, authorization rules, service level agreements and metrics.

           

[Image: Sentinet components]

 

Explore the main features

 

Virtualization

The Sentinet Nodes hosting model makes it possible to aggregate and compose multiple business services into a single virtual service. Thanks to the fine-grained virtualization, it is possible to configure details like which operations are virtualized, the URI templates, versions, routing criteria, etc.

 

In this sample, two different services (one SOAP and one REST) have been virtualized in a single REST service; two operations have been renamed and included, and two have been excluded from the virtualization.

 

[Image: virtualization example]

 

Mediation

Business services can be developed and deployed in the application layer with a unified communication and security pattern, while aspects like protocols, security, authorization and versioning are delegated to the Sentinet platform.

 

In this example, a service is exposed as netTcp with integrated security, and the security configuration is delegated to the virtualized service, which has multiple endpoints with different bindings and different security models, like TransportWithMessageCredential or Message security with a client certificate. In other words: a protocol mapping and a security mapping have been applied.

[Image: mediation example]

 

Security and Access Control

Sentinet Nodes dynamically implement and enforce SOA solutions’ security via managed authentication, authorization and access control.

Sentinet security models enable SOA services with Single-Sign-On and Federated Security scenarios and extend implementations with industry standard Security Token Services.

 

In this example I applied a custom access control rule that implements a rate limit of 7000 requests per 10 minutes, an IP filter and a time-range filter. An access rule can be applied to different scopes (service, operation, endpoint); it's also possible to apply multiple rules to the same scope to create a chained access control.

[Image: access control rule]

Then I ran a quick load test to test the rule I created.

[Image: load test run]

When the rate limit is hit an HTTP 403 status code is returned.

[Image: access denied response]

 

 

Monitoring and Reporting

Sentinet provides real-time and historical monitoring, auditing and message recording.

The image below reports the real-time graph related to the load test run for the access control rule. At a glance we can see the performance trend and other metrics like the number of successful/failed calls, the maximum message size and the response times.

 

In this particular case, the real-time view helped me quickly notice that the test had been run in a scenario with high network latency. Indeed, the summary box reports an average duration of 10 ms, while the average response time measured by the test client was 413 ms.

[Image: real-time monitoring report]

Switching to the Logs tab, we can find the list of transactions that occurred, with additional details like the operation and the triggered access rule. It's also possible to record the message content, or to disable recording completely, by changing the monitoring settings.

[Image: transaction logs with details]

Other reports with aggregated metrics are available. For more details, visit the Nevatech website.

 

SLA management

Sentinet Service Agreements help to monitor services and keep them reliable and scalable. A service agreement can cover multiple services and different service scopes (interface, operation, endpoint), and it is validated against multiple performance metrics.

During the definition of the scope to be monitored, you can choose which message will be targeted specifying an access rule.

SLA violations can trigger alerts and custom actions.

 

In this example, I created a new service agreement that covers two versions of the same service. The SLA is applied to different services and the SLA violation is calculated every 5/10 minutes.

[Image: service agreement definition]

Then I created another SLA for the maximum duration. Positioning on the Service Agreements node, you can monitor all the agreements merged together in real time. This is very helpful, especially when you define different groups of SLAs.

 

[Image: service agreements monitoring]

Finally, in the Logs tab you can find all the violation details within the agreement: the violated metric and the metric value at the time the violation occurred.

 

Conclusion

There are many SOA management products out there. Sentinet is the one we've chosen to enrich our integration offer because it fits perfectly with solutions that leverage the Microsoft technology stack, has a small footprint, and is highly extensible with remarkable performance.

 

Cheers,

Massimo

Posted in: Monitoring | REST | Sentinet | SOA | WCF



May 28, 2014 at 10:05 PM

 

 

 

 

As we promised yesterday in our summary blog post of Day 1, we are back with a summary of the second day of this amazing event.

Below you will find an outline of some of the sessions. If slides and/or videos come online, we will update this and the first blog post with links to the session content.

 

Game Services & Telemetry Processing by Alan Smith

 

In this very interesting session Alan talked about different scenarios in the gaming industry, IoT, Formula One, ... where cloud computing and telemetry processing are used. Starting off with some cloud computing usage patterns and design patterns, he quickly gave some real-life examples where the cloud is used to process telemetry/big data.

One of these examples is Halo. Halo uses Microsoft Service Bus and was actually used to stress test the complete infrastructure of Service Bus; be sure to check out the following video: http://channel9.msdn.com/Blogs/Subscribe/How-Halo-4-is-using-Windows-Azure-Service-Bus

Another one of those real-life examples is the Lotus F1 Team, which uses Microsoft Dynamics and Microsoft Azure to process their data and measurements; you can find the video here: http://www.youtube.com/watch?v=g3b4xC0FIRU

After these real-life examples, Alan started demoing his own game RedDog Racing (http://reddog.azurewebsites.net), explaining the entire architecture and the pitfalls encountered during this project.

Alan announced that he will come over again to Belgium to give a RedDog Racing workshop during an Azug event! Be sure to join!!

- Glenn Colpaert

 

What’s new in ASP.NET and VS2013 for Web Developers by Jon Galloway

Jon Galloway did an impressive job of showing us all the new features in Visual Studio 2013 and ASP.NET, all within the limited time of one hour. The key point of all the features is that they offer the user options through extensions and/or NuGet packages. When you choose Web API as your main development framework, you can always choose to add MVC to your project simply by getting the NuGet package(s).

Here's a list of the new things I picked up during the talk:

  • Options when creating a web project (WebApi, MVC or forms or a combination)
  • Authentication options
  • Scaffolding! Even for WebForms which now supports dynamic model binding
  • New HTML editor
  • Browser Link
  • AngularJS Intellisense
  • More LESS support :)
  • JSON editor
  • A working(!) phone emulator
  • Easy SSL
  • Side waffle templates
  • Enhanced sprites support (you can generate or update a sprite simply by selecting multiple images)
  • Grunt integration in the IDE
  • Bootstrap
  • OWIN
  • Identity

- Wouter Seye

 

 

Application Insights by Marcel De Vries

The continuous delivery cycle is missing a part on the operations side: telemetry.

Application Insights works in the cloud (Visual Studio Online), so your app needs an internet connection to benefit from its features; currently it's in preview.

It allows you to gather lots of data about your application and the way people use it, just by adding a few short lines of code. All the results are shown in a nice, live dashboard.

You can download Intellitrace files to start debugging your code whenever an error is reported in Application Insights.

- Henry Houdmont

 

Patterns for Parallel Programming by Tiberiu Covaci

The Romanian chef 'Tibi', better known as Tiberiu Covaci, told us more about how to increase productivity in the kitchen by optimizing sequential code to run in parallel. Obviously, he was not hosting a cooking class, but was using a very clear example of how the use of parallelism can speed up a process.

It goes without saying that software does not automatically run quicker just because the number of processors is continuously increasing. Software needs to be optimized to actually use the multiple cores that are available.

Using demos on this subject throughout the session, Tiberiu illustrated very clearly how to easily implement parallelism.

- Maxim Braekman

 

A queue by any other name would still work…. or would it? by Mike Wood

 

One of my favorite sessions was the one by Mike Wood about the most used queueing services in the Azure landscape: Azure Storage Queues and Azure Service Bus Queues & Topics. He started very low-level by explaining what a queue is and what options you have, and showed us some demos of how we can send/receive to a storage queue, a service bus queue and a service bus topic.

Although I already have some experience with Azure Storage and Azure Service Bus, this session was still very interesting, since it highlighted some pitfalls I have also run into. The session also made me aware of several small details and features that can help you create better applications.

He ended his session with a small comparison between the two types of queues, telling us that neither is really better than the other; it all depends on the requirements and the situation you are in. A very fascinating and inspiring talk on a cool topic!

The takeaway here is that there is no queue to rule them all, you need to pick the correct queue for the job.

During the session Mike also pointed out that Clemens Vasters has done a good deep-dive into Service Bus that you can watch here or a new blog about Azure called JustAzure.

Last but not least: I recently found this article that compares the Storage queue with the Service Bus queue and could help you in choosing the correct queue for the job.

- Tom Kerkhove

 

 

Building great HTTP-based APIs using ASP.NET WebAPI 2 by Chris Klug

Chris Klug took us on a tour of ASP.NET WebApi 2 by comparing it to ASP.NET MVC. He briefly explained the differences and similarities between the two technologies.

He then went on to introduce some best practices for creating your API, for instance using the RouteAttribute and optionally the RoutePrefixAttribute.

Other best practices, like returning an HttpResponseMessage or IHttpActionResult instead of void or string, were covered, as well as how you can use the base class methods.

In this session you will see exactly how easy it is to build HTTP-based APIs using ASP.NET WebApi, including handling things like data formatting, response codes, authentication and error handling.

At the end he concluded by demonstrating OWIN and how you can use the self-hosting capabilities in your unit tests.

- Wouter Seye

 

 

Keynote – The history of programming by Mark Rendle

 

This must have been one of the most amusing sessions of the conference. Mark Rendle should really consider a job in stand-up comedy.

Mark took us on a tour through the history of programming and programming languages. He showed us how to write a ‘Hello World’ application in all of those different languages. Did you know that the first ever ‘Hello World’ application was written by a woman?

Be sure to check out the slide deck when it comes online. This is a must see overview for all developers out there!

During Mark's session, Kurt De Vocht came on stage to talk a bit about the future of programming, and that is of course: our kids! Kurt is one of the trainers of CoderDojo in Belgium, an organization that brings kids together to get them in touch with programming and technology. They are constantly looking for additional trainers and venues. Be sure to get in touch with Kurt if you are interested.

- Glenn Colpaert


Building BIG data solutions in the cloud (Highway to the information zone) by Andy Cross

 

The goal of this session was to solve 3 key challenges of building Big Data solutions in the cloud. Andy started off by dispelling the myth of Big Data: according to him, Big Data is nothing more than a marketing term, while it's really about achieving higher throughput when you hit I/O bottlenecks while processing raw data.

The main part of this session was about setting up and provisioning an HD Insight cluster in Windows Azure. Unluckily, the demo gods ruined the party and the internet connection broke down. For the rest of the session, Andy gave us a theoretical overview of all the key features and challenges when setting up an HD Insight cluster.

To round up: the challenges in creating an HD Insight/Hadoop cluster are provisioning, data ingress and running queries…

Andy will put all the scripts and slides online so you can provision your own HD Insight cluster in under one hour. Keep an eye on his Twitter account.

- Glenn Colpaert

 

The Toolshed: Inside Windows Azure Tools by Mike Martin

For the last session of this conference, Mike gave us a talk on Microsoft Azure tools.

As you all know, there are some very good basic tools available, but sometimes you just want that little bit extra when managing/developing your Windows Azure entities.

When it comes to tooling, it's all about evolving and improving the tools that you are using or want to use, until you finally get the ultimate tool (think of the Sonic Screwdriver of Dr. Who).

Mike gave us a very nice overview of some of the most commonly used tools out there in the following areas: editors, monitoring, asset management,…

The takeaway from this session is that there are many tools out there that will fit your needs, and if all else fails call the #GWAB team!!

-Glenn Colpaert


 

This rounds up our 2-day adventure at Techorama. First of all, we want to thank everybody for reading our two blog posts, and of course a big thank you to the organization of Techorama for creating such an amazing event!!

 

Cheers,

Maxim, Henry, Tom, Glenn


May 26, 2014 at 11:23 PM


Today was the first day of a brand new Belgian conference called Techorama.
After TechDays called it quits, some community techies decided to join forces and organise a community-driven alternative.

They have the ambition to make Techorama THE annual tech conference for the developer community in Belgium.
More details on the people behind Techorama can be found here: http://www.techorama.be/about/

Some people from Codit attended this great event; we have a set of reviews with some takeaways ready for those who were not able to attend this great initiative!

 

Keynote - 'The Real Deal' by Bruno Segers

To be honest, the keynote was my least favorite session, for the simple reason that it was not a technical one. However, Bruno was able to tell his story in a very amusing way.
He pointed out that we, as developers, know the risk of exposing our data on the internet, but that the majority of users do not!

Innovation is really great, but we should not pay the price with our identity and data; we need better laws that protect the users.

Big companies are selling our data - Never forget this.

- Tom Kerkhove

 

What’s new in Windows Phone 8.1? by Gitte Vermeiren

In this session Gitte gave us an overview on what’s new in Windows Phone 8.1 and what’s in it for us as developers. This session was inspired by the BUILD session of a couple of weeks ago.

Gitte talked about the convergence story Microsoft is bringing to make apps and applications easily available/convertible cross-device. Write it once, use it everywhere!!

This session was full of code examples and tons of tips and tricks on how to use the new WP 8.1 SDK and how to build better cross-device apps.

- Glenn Colpaert


Lean ALM: making software delivery flow and learning from software innovators by Dave West

This session was all about the following best practices:

Autonomy

Successful teams have a few things in common:

  • They have smart people with a clear mission
  • They have a mandate to change whatever is necessary to improve their productivity
  • They automate the hell out of everything

Adaptability

Allow teams to plan, to do and to learn with the right tools and practices at the team level, and roll that feedback back into more traditional planning and operational processes.

Transparency

Encourage teams to put in place a transparent, visual process in real time that searches for the truth, whilst ensuring that team measures roll up into a broader view of the delivery.

Collaboration

E-mail = Evil

Enable high-performance teams to focus on the currencies of collaboration within the context of the work they are doing, even when some of the team are not in the scrum team using the same tools and practices.

- Henry Houdmont


Intro to the Google cloud by Lynn Langit

Okay, so most .NET developers know about Microsoft Azure, right? But what about the Google Cloud? The actual difference was explained to us by Lynn Langit, a former Microsoft employee. Of course, the overall concept is quite similar to the known cloud platforms, such as Microsoft Azure. So you do have Google equivalents to the Azure VMs or worker roles, called the Compute and App engines, but the main idea behind the Google cloud is high performance at low cost.

Instead of allowing the customer to set a limit on the scalability of the cloud engine, Google allows you to set a limit on the maximum price. The engine will then scale automatically according to the amount of resources needed, limited by the total cost.

The idea of maximum performance at low cost can also be found in BigQuery, the equivalent of SQL Azure storage. For this type of storage, you do not pay for the amount of data you collect, but for the number of queries executed.

In short, the Google cloud is definitely worth taking a look at.

Maxim Braekman


Service Virtualization & API management on the Microsoft platform by Sam Vanhoutte

Imagine a landscape of web services, each with their own authentication, monitoring, ... how do we keep track of all this?
With web service virtualization you can control all these features in one place by exposing the physical web services through a virtualized one.

Sam introduced me to Windows Azure API Management, which has recently been announced as a public preview!
It enables you to easily virtualize REST APIs in a controlled way, using a publisher portal where you create new APIs with operations, products and policies, and a developer portal where developers can apply to use them.

Next to the new Microsoft Azure API Management in the cloud, Sam also talked about an on-premise alternative called Sentinet, a product that Codit is an exclusive reseller of (more info here).
Once again: with this tool we can virtualize physical web services to the consumer with our own access control, load balancing (round-robin/fail-over/...), etc.
It even has a test section where you can tell the virtualized service to send out test messages. Imagine the physical service is not built yet, but you need to test it in the consuming application?
This is no longer a problem!

Both API Management & Sentinet were new to me, but Sam was able to explain them very clearly, show me the big benefits of both platforms and illustrate how easy they are to use.

- Tom Kerkhove

(If you want to know more about API Management, read Massimo's post here.)

 

Zone out, check in, move on by Mark Seeman

Most programmers desire to be ‘in the zone’ as much as possible; they see it as a prerequisite to being productive. However, the reality is often one of interruptions.
As it turns out, being in the zone is a drug, and as you build up tolerance, getting the next ‘high’ becomes more and more difficult. This may explain why programmers move on to management or other pastures as they get older.

However, it’s possible to stay productive as a programmer, even in the face of frequent interruptions.
Forget the zone and learn to work in small increments of time; this is where a distributed version control system can help greatly.

To be more productive, we need to stay focused, but how do we stay focused? As it turns out, we stay focused when we get (immediate) feedback. Mark believes that unit tests provide us the feedback we need to stay focused on the code we're writing.

A tip for avoiding interruptions: the headphone rule (as long as the headphones are on, nobody may interrupt me).
Problem: we spend too much time reading code instead of writing code, because we get interrupted and need to start over and find out where we were.

Solutions:

  • Write less code
  • Build modules instead of monolithic systems
  • Work from home
  • Use a distributed version control system (Git, Mercurial) and work with branches
  • Check in every five minutes (compilable code)
  • Integrate often: get in, get out quickly, to avoid working concurrently with a colleague on the same files and prevent merge conflicts.
  • Use feature toggles to be able to check in and ship incomplete code

One other way to stay focused is to keep your code small (modular) and clear, so that when you are interrupted it takes less time to pick up the thread again later.

- Wouter Seye & Henry Houdmont

 

Windows Azure Web Jobs – the new way to run your workloads in the Cloud by Magnus Mårtensson

Magnus, who is very much available online, started his session by setting a kitchen timer to one hour, because once he starts talking cloud he's unstoppable.
His talk today was about “Azure Web Jobs”, which recently got released as a public preview.

You can think of Azure Web Jobs as background processes that are hosted under the Azure Website they are deployed to, and that therefore share the same advantages.
Magnus gave us some very interactive demos showing us all the features and possibilities of Azure Web Jobs. It's amazing how easily you can set this up.
Web Jobs uses Azure storage behind the scenes, but as a developer you literally require zero knowledge of storage coding when using Azure Web Jobs.
Web Jobs supports many languages like C#, Python, PHP,…
Be sure to check out this new feature, as it is a very powerful and cool addition to the Microsoft Azure platform.

- Glenn Colpaert


Dude, where's my data? by Mark Rendle

What type of (cloud) storage should be used for different kinds of applications and situations? Mark Rendle enlightened us about some of the possibilities, such as several types of relational databases, NoSQL, queues and messages. Although there are many options to choose from, not all are suitable for every type of application you might build.

Some of the best practices for choosing the type of storage that were given during this session can be found below.

  • Store as little data in SQL as possible; the more data you store, the harder it becomes to manage it all.
  • If you do need storage, use the simplest storage that works; do not make it more complicated. If the data can be stored in a text file, stick to the text file.
  • Sufficiently test your choice in Azure! Do not test on local hardware or the emulator; it is never exactly the same as Azure.

Last, but definitely not least: experiment, learn and keep an open mind. Do not always stick to the familiar, known option.

Maxim Braekman

 

Managing your Cloud Services + Continuous Integration solutions by Kurt Claeys

Imagine you have a cloud service running in production: how do we manage & monitor it? This was the main topic of this talk by Kurt.
He showed us how we can auto-scale our cloud service based on the number of messages in the queue, what the Microsoft Service Management API has to offer, the SQL database management views, how to restore/backup SQL databases to blob storage, etc.

Kurt also introduced us to the several ways of deploying your cloud service and how you can use continuous integration with Visual Studio Online linked to your cloud service.
This will automatically deploy a new version to the staging phase every time we check in our code and everything passes the build!

Very interesting topics that will ease the development & afterlife of a cloud service!

- Tom Kerkhove


#IoT: Privacy and security considerations by Yves Goeleven

We all know that we are at the dawn of what is called the Internet of Things, and we are already bumping into the first serious issues: how about our privacy and securing the use of millions of sensors?
Yves opened his session by stating that everything is new for everyone, and that he has some ideas about those issues.

We will need to fight this on two fronts, physically & virtually: we need to secure ourselves against physical and virtual tampering, think about our data, etc.
For example, small devices have little memory & CPU; how will we encrypt data, and where will we do this? Are we communicating directly with the cloud, or are we using gateways?

While I'm not really active in the IoT world, this discussion session was still very interesting to me, and I really liked Yves' approach with the gateway principle, where devices communicate with one gateway per location, after which the gateway communicates with one backend. It was also good to hear Yves' vision on this topic and which very important issues need to be tackled as soon as possible! I know that this next big thing is happening and we need to take our precautions.

I think we can say that Yves is becoming an IoT guru in the Belgian community, and I'm very curious about his next talk!

- Tom Kerkhove


That was it for day one, stay tuned for more Techorama action tomorrow!!

 

Thanks for reading,

Henry, Glenn, Maxim, Wouter & Tom


March 28, 2014 at 3:45 PM

Recently, I needed to send a message from BizTalk to an external WCF service with Windows Authentication.

For easy testing, I created a Windows console application to act as the client. I used basicHttpBinding with the security mode set to TransportCredentialOnly. In the transport node, I chose Windows as the clientCredentialType.

The configuration file looks like this:

<basicHttpBinding>
	<binding name="DemoWebService_Binding"  textEncoding="utf-16">
		<security mode="TransportCredentialOnly">
			<transport clientCredentialType="Windows" />
		</security>
	</binding>
</basicHttpBinding>

Before sending the test message, I needed to authenticate myself and insert the Windows credentials:

proxy.ClientCredentials.Windows.ClientCredential.Domain = "xxx";
proxy.ClientCredentials.Windows.ClientCredential.UserName = "xxx";
proxy.ClientCredentials.Windows.ClientCredential.Password = "xxx";

This works, so now to get it right in BizTalk!

Problem:

Nothing special, just a Send Port, WCF-Custom with basicHttpBinding as Binding Type and the same binding configuration as in the console application:

 

I thought I just needed to add the credentials to the Credentials-tab in BizTalk to be able to do proper authentication.

Unfortunately, this does not work!

Apparently, when "Windows" is chosen as clientCredentialType, the Credentials-tab is ignored and the credentials of the Host-Instance running the Send Port are used instead.

Solution:

After some searching, I found the answer thanks to Patrick Wellink's blog post on the Axon Olympos blog: http://axonolympus.nl/?page_id=186&post_id=1852&cat_id=6.

 

The credentials of the Host Instance can't be right, because the web-service is from an external party.

To use the Windows Credentials, a custom Endpoint Behaviour has to be created.

So I've created a new Class Library in Visual Studio with a class that inherits from both BehaviorExtensionElement and IEndpointBehavior:

public class WindowsCredentialsBehaviour : BehaviorExtensionElement, IEndpointBehavior
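BehaviorExtensionElement also requires overriding the BehaviorType property and the CreateBehavior method (they are not shown in the snippets here); a minimal version could look like this:

public override Type BehaviorType
{
	get { return typeof(WindowsCredentialsBehaviour); }
}

protected override object CreateBehavior()
{
	// Hand the configured values over to the endpoint behavior instance.
	return new WindowsCredentialsBehaviour
	{
		Username = this.Username,
		Password = this.Password,
		Domain = this.Domain
	};
}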

For extensibility, the class needs to have these public properties:

[ConfigurationProperty("Username", DefaultValue = "xxx")]
public string Username
{
	get { return (string)base["Username"]; }
	set { base["Username"] = value; }
}

[ConfigurationProperty("Password", DefaultValue = "xxx")]
public string Password
{
	get { return (string)base["Password"]; }
	set { base["Password"] = value; }
}

[ConfigurationProperty("Domain", DefaultValue = "xxx")]
public string Domain
{
	get { return (string)base["Domain"]; }
	set { base["Domain"] = value; }
}

In the function "AddBindingParameters", I've added this piece of code that sets the Windows Credentials:

public void AddBindingParameters(ServiceEndpoint endpoint, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
{
	if (bindingParameters != null)
	{
		SecurityCredentialsManager manager = bindingParameters.Find<ClientCredentials>();

		var cc = endpoint.Behaviors.Find<ClientCredentials>();
		cc.UserName.UserName = this.Domain + @"\" + this.Username;
		cc.UserName.Password = this.Password;
		cc.Windows.ClientCredential.UserName = this.Username;
		cc.Windows.ClientCredential.Password = this.Password;
		cc.Windows.ClientCredential.Domain = this.Domain;

		if (manager == null)
			bindingParameters.Add(this);
	}
	else
	{
		throw new ArgumentNullException("bindingParameters");
	}
}

Now after building and putting the assembly in the GAC, we need to let BizTalk know that it can use this custom Endpoint Behavior:


We need to add this line below to the behaviorExtensions (system.serviceModel - extensions) in the machine.config (32-bit and 64-bit):

<add name="WindowsCredentialsBehaviour" type="BizTalk.WCF.WindowsCredentials.WindowsCredentialsBehaviour, BizTalk.WCF.WindowsCredentials, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1de22c2808f4ac2e" />

Restart the host instance that runs the send port and you will be able to select the custom EndpointBehavior:

 


March 12, 2014 at 4:00 PM

On new environments, Codit always runs benchmark tests to check whether the new environment behaves as expected and to find any anomalies.
This week, these simple tests proved their value yet again:

One of the components of the benchmark test is a simple WCF echo service. A client WCF console application calls this service (also hosted in a console application). The service simply echoes back the request message.

When launching both console applications, we experienced extreme delays.
The service host took more than 8 minutes to launch, and the client console application took more than 3 minutes.
Because the launch of these console applications typically takes under one second, this demanded a deeper look.

To get a clearer picture of where the time was lost, I enabled WCF tracing.
This showed that constructing the ServiceHost and the ChannelFactory takes a lot of time:



Trace when launching the Service

[image]



Trace when launching the Client

[image]
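For reference, WCF tracing of this kind is typically enabled with a system.diagnostics section like the one below in the application configuration file (the listener name and log path are placeholders):

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\wcftrace.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>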

 

Digging deeper - and using my favorite search engine - revealed that this is probably due to assembly load times.

By default, .NET 3.5 generates so-called ‘publisher evidence’ for code access security (CAS). Verifying the assembly publisher's signature can be very costly, and it is indeed very costly on the new environment.
The generation of this publisher evidence can be disabled: so, let's disable publisher evidence generation and measure the startup time again.

Publisher evidence generation is disabled by adding this Xml snippet to your application configuration file, or to the machine.config file (whatever suits your needs):

<runtime>
    <generatePublisherEvidence enabled="false"/>
</runtime>

After I applied this change, I started my 2 console applications again and, ‘eureka’, they were done in less than one second!

Please note that this change only applies to .NET 3.5!



Trace when launching the Service

[image]



Trace when launching the Client

[image]

 

This MSDN link on the generatePublisherEvidence element contains more info about publisher evidence.

The moral of the story is: if you set up a new environment, test it well!

 

 

Peter Borremans


November 21, 2013 at 4:00 PM

According to MSDN, the WCF Adapter copies SOAP headers to the ‘InboundHeaders’ context properties:
“The WCF adapters copy custom SOAP headers and standard SOAP headers in the inbound messages to the WCF.InboundHeaders property.”

This is true in most cases but, as I learned during the past days, it doesn't work as described in all cases.

Let’s try a very basic scenario – a client application sends a custom header with each call:

using (var scope = new OperationContextScope(proxy.InnerChannel)) 
{
    var header = MessageHeader.CreateHeader("peterheader", "http://peter.com", "myvalue", false);
                        
    OperationContext.Current.OutgoingMessageHeaders.Add(header);


    BTSServcieHttp.Request req = new BTSServcieHttp.Request();
    req.Param1 = "1";

    var resp = proxy.GetInfo(req);

    MessageBox.Show(resp.Result);
}

In BizTalk, the incoming messages are tracked to monitor the content of 'InboundHeaders' as it is set by the WCF adapter.
A call from the client with the code shown above results in the following content for 'InboundHeaders':

<headers>
	<a:Action>GetInfo</a:Action>
	<peterheader xmlns="http://peter.com">myvalue</peterheader>
	<a:MessageID>urn:uuid:bb6d87cc-90b5-4ea7-8995-c5cd9060f74b</a:MessageID>
	<ActivityId CorrelationId="507d4c75-9d64-49f5-a095-f227437251fd">bbb9f44e-65b5-4d95-b238-493f4b54b0b3</ActivityId>
	<a:ReplyTo><a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address></a:ReplyTo>
	<a:To>replyaddress</a:To>
</headers>

*** Note: For the sake of readability, I removed all namespaces and security headers from the 'headers' section.

The behavior above is exactly as expected: the WCF adapter writes our header to the InboundHeaders property.
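As a side note: to inspect this property programmatically, one could read it from the message context, for example in a custom pipeline component. A minimal sketch (the 'inMsg' variable name is illustrative; it stands for the IBaseMessage handed to the component):

// Read the WCF.InboundHeaders context property written by the WCF adapter
string inboundHeaders = (string)inMsg.Context.Read(
    "InboundHeaders",
    "http://schemas.microsoft.com/BizTalk/2006/01/Adapters/WCF-properties");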

Let's slightly change our basic scenario to this:

using (var scope = new OperationContextScope(proxy.InnerChannel))
{
    // Same header as before, but now with an Actor ("myActor") specified
    var header = MessageHeader.CreateHeader("peterheader", "http://peter.com", "myvalue", false, "myActor");

    OperationContext.Current.OutgoingMessageHeaders.Add(header);

    BTSServcieHttp.Request req = new BTSServcieHttp.Request();
    req.Param1 = "1";

    var resp = proxy.GetInfo(req);

    MessageBox.Show(resp.Result);
}

The only thing I added is an extra parameter to the 'CreateHeader' method: the 'Actor' for the SOAP header.

This is the content of 'InboundHeaders' for the call with an Actor set:

<headers>
	<a:Action>GetInfo</a:Action>
	<a:MessageID>urn:uuid:5c4c50f8-76dd-4bf9-83dc-7af7cc9af164</a:MessageID>
	<ActivityId CorrelationId="15cf87f2-6bb0-4b5e-aefb-a644d55836b7">f548f1b7-bc29-4102-a986-2f9dacb92202</ActivityId>
	<a:ReplyTo><a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address></a:ReplyTo>
	<a:To>replyaddress</a:To>
</headers>

As you can see, the custom header is now not written to the 'InboundHeaders' property.
This is not exactly what MSDN documents for 'InboundHeaders': all standard and custom headers should be written to it. Simply adding an 'Actor' to our header breaks this behavior.

This observation made me very curious to find out why exactly this is happening.
To find the exact reason, I started digging into the WCF adapter by decompiling the 'Microsoft.BizTalk.Adapter.Wcf.Runtime' assembly.

In this assembly I located the 'CopyHeadersToContext' method, which is responsible for copying the SOAP headers to the 'InboundHeaders' property.
This is the 'CopyHeadersToContext' method implementation:

private static void CopyHeadersToContext(Message wcfMessage, IBaseMessageContext btsMessageContext)
{
    StringBuilder stringBuilder = new StringBuilder();
    stringBuilder.Append("<headers>");
    foreach (MessageHeaderInfo messageHeaderInfo in wcfMessage.Headers)
    {
        // Look the header up again by name and namespace;
        // headers that are not found (index < 0) are silently skipped
        int header = wcfMessage.Headers.FindHeader(messageHeaderInfo.Name, messageHeaderInfo.Namespace);
        if (header >= 0)
        {
            using (XmlReader xmlReader = (XmlReader)wcfMessage.Headers.GetReaderAtHeader(header))
            {
                string str = xmlReader.ReadOuterXml();
                btsMessageContext.Write(messageHeaderInfo.Name, messageHeaderInfo.Namespace, (object)str);
                stringBuilder.Append(str);
            }
        }
    }
    stringBuilder.Append("</headers>");
    // The concatenated headers end up in the WCF.InboundHeaders context property
    string str1 = ((object)stringBuilder).ToString();
    btsMessageContext.Write(WcfMarshaller.inboundHeadersProp.Name.Name, WcfMarshaller.inboundHeadersProp.Name.Namespace, (object)str1);
}

As you can see in the code above, the WCF adapter loops over the wcfMessage.Headers collection and appends each header to a StringBuilder, which is finally written to the 'InboundHeaders' property.
The key line here is the wcfMessage.Headers.FindHeader() method call: only headers for which it returns a non-negative index are added to 'InboundHeaders'.

Let's have a look at the implementation of FindHeader:

public int FindHeader(string name, string ns)
{
    //lines removed for readability
    return this.FindNonAddressingHeader(name, ns, this.version.Envelope.UltimateDestinationActorValues);
}

FindHeader calls FindNonAddressingHeader, which is implemented like this:

private int FindNonAddressingHeader(string name, string ns, string[] actors)
{
    int num = -1;
    for (int i = 0; i < this.headerCount; i++)
    {
        if (this.headers[i].HeaderKind == MessageHeaders.HeaderKind.Unknown)
        {
            MessageHeaderInfo headerInfo = this.headers[i].HeaderInfo;
            if (headerInfo.Name == name && headerInfo.Namespace == ns)
            {
                for (int j = 0; j < actors.Length; j++)
                {
                    if (actors[j] == headerInfo.Actor)
                    {
                        if (num >= 0)
                        {
                            if (actors.Length == 1)
                            {
                                throw DiagnosticUtility.ExceptionUtility.ThrowHelperError(
                                    new MessageHeaderException(
                                        SR.GetString("MultipleMessageHeadersWithActor", new object[] { name, ns, actors[0] }),
                                        name, ns, true));
                            }
                            throw DiagnosticUtility.ExceptionUtility.ThrowHelperError(
                                new MessageHeaderException(
                                    SR.GetString("MultipleMessageHeaders", new object[] { name, ns }),
                                    name, ns, true));
                        }
                        else
                        {
                            num = i;
                        }
                    }
                }
            }
        }
    }
    return num;
}

What this function basically does is find a header based on its name, namespace AND actor! The header's actor is compared with a collection of actors that is passed into the function as a parameter (string[] actors). Only headers whose actor matches an entry in that collection give a positive result.
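The effect is easy to reproduce outside BizTalk with a plain WCF Message. A minimal sketch (the action URI is illustrative):

using System.ServiceModel.Channels;

Message msg = Message.CreateMessage(MessageVersion.Soap11, "http://peter.com/GetInfo");
msg.Headers.Add(MessageHeader.CreateHeader("peterheader", "http://peter.com", "myvalue", false, "myActor"));

// Returns -1: the header carries an Actor that FindHeader does not match
int index = msg.Headers.FindHeader("peterheader", "http://peter.com");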

So the next step is finding out what this collection of actors contains.
The parameter that was passed in by the 'FindHeader' method was 'this.version.Envelope.UltimateDestinationActorValues'.
So what is the content of that property? The only location where it is assigned is the constructor of EnvelopeVersion:

private EnvelopeVersion(string ultimateReceiverActor, string nextDestinationActorValue, string ns, XmlDictionaryString dictionaryNs, string actor, XmlDictionaryString dictionaryActor, string toStringFormat, string senderFaultName, string receiverFaultName)
{
    // code removed for readability
    this.ultimateDestinationActorValues = new string[]
    {
        "",
        ultimateReceiverActor,
        nextDestinationActorValue
    };
}

Here we find the reason for what is happening in our two basic scenarios. The actors collection that is passed to the FindHeader function contains three strings: "", ultimateReceiverActor and nextDestinationActorValue.

Because "" is in this collection, a header without an actor is always passed on to 'InboundHeaders'.
But when an actor is set and it matches neither ultimateReceiverActor nor nextDestinationActorValue, the 'FindNonAddressingHeader' method returns -1! This results in a lost SOAP header.

*** Note: ultimateReceiverActor and nextDestinationActorValue are assigned fixed values by the parameterless constructor of MessageVersion.
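A possible workaround follows from the decompiled code above: a header whose actor equals one of the UltimateDestinationActorValues will still be matched. For SOAP 1.1, the standard 'next' actor is one of those values, so a sketch like this should keep the header in 'InboundHeaders' (an assumption based on the code shown here, not verified against every binding):

// SOAP 1.1 'next' actor; part of UltimateDestinationActorValues,
// so FindHeader will still match this header
var header = MessageHeader.CreateHeader(
    "peterheader", "http://peter.com", "myvalue", false,
    "http://schemas.xmlsoap.org/soap/actor/next");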

Conclusion

After digging into the implementation of the WCF adapter, we can perfectly explain why SOAP headers containing an actor are not written to 'InboundHeaders'. However, this strange behavior doesn't seem intended. I would love to get Microsoft's feedback on this issue.

Peter Borremans

Posted in: .NET | BizTalk | WCF



August 30, 2013 at 4:00 PM

In my current environment, we have a service router built in BizTalk providing generic endpoints for client applications and central security between clients and services.

A new service was added to the router that performs some complex tasks and only replies after 30 minutes. I know that long-running services like this should be avoided as much as possible, but in this case we needed to add this one to the service router as well.

 

After adding the new service to the service router configuration, shorter calls worked out fine, but longer calls resulted in this error:

 

A request-response for the "CustomRLConfig" adapter at receive location "***service.svc" has timed out before a response could be delivered.

 

After having a look at the timestamps, I saw that the timeout being hit was 20 minutes. This timeout immediately rang a bell: it is the default idle timeout on AppPools and BizTalk AppDomains.

So I decided to test the impact of these timeouts.


 

After setting these timeouts to 60 minutes (or disabling them on the AppPool), the same error occurred again. I double-checked the receive timeout and send timeout on the receive location and send port: both were set to 60 minutes and were fine.

 

After doing some research, I learned that the WCF adapter itself has another built-in timeout that is set to 20 minutes by default!

This timeout determines when BizTalk sends a NACK to the WCF adapter: if a request-response doesn't receive any response within 20 minutes (the default), a NACK is sent to the WCF adapter instead of the response message.

The default of 20 minutes is not enough for my scenario, and luckily this timeout can be changed by adding a registry value (DWORD).

 

For in-process adapters, use the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc{Host Guid}\MessagingReqRespTTL

 

For isolated adapters, use the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc.3.0\MessagingReqRespTTL

 

The values of both keys should be expressed in minutes (decimal type).
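As an illustration, a .reg file for the in-process case could look like this. This is a sketch only: it assumes MessagingReqRespTTL is a DWORD value directly under the service key, as the paths above suggest, and {Host Guid} is a placeholder you must replace with your own host's GUID. Note that dword values in .reg files are hexadecimal, so 0x3c equals 60 minutes:

Windows Registry Editor Version 5.00

; Replace {Host Guid} with the GUID of your BizTalk host service.
; 0x3c = 60 decimal, i.e. a 60-minute timeout.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\BTSSvc{Host Guid}]
"MessagingReqRespTTL"=dword:0000003c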

When using BizTalk 2006 R2, you need at least this hotfix to change the default timeout (or a more recent version of the WCF runtime DLL, delivered via a Service Pack or a Cumulative Update newer than the one specified in the hotfix).

 

After applying the registry modification and restarting the host instance hosting the service, the long running call succeeded! No more timeouts.

 

Peter Borremans

Posted in: BizTalk | IIS | WCF
