August 28, 2014 at 3:55 PM

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points, and through the Sentinet API interfaces.
In the last post we saw how to build a custom alert handler for SLA violation notifications. In this fourth post I want to continue the Sentinet Extensibility series by exploring another possible customization: routing.




The Routing feature allows the messages received on the virtual inbound endpoint to be delivered to one of several alternative endpoints of the same backend service. When the backend service exposes multiple endpoints, some (or all) of them can be included in message routing by marking them in the Sentinet Administrative Console. Notice that, to activate routing, at least two backend endpoints must be selected.


The Sentinet routing feature improves API availability with automatic failover. This means that, in case of a communication failure, the Sentinet Node falls back to the next available endpoint (this does not happen for a SOAP fault, because a fault is considered a valid response).


Sentinet supports four router types:

  • Round-Robin, with priority or equal distribution of the load. This is the default routing mode; the fallback is automatic.
  • Simple Fault Tolerance. The routing mechanism always hits the endpoint with the highest priority and, in case of communication failure, falls back to the endpoint with the next-lower priority.
  • Multicast. A copy of the incoming message is delivered to all the endpoints.
  • Custom. The routing rules are defined in a custom .NET component.




Here are the requirements for this scenario:

  • Sentinet is deployed behind a network load balancer, and the customer doesn't want the routed traffic to pass through the NLB again.
  • The virtualized backend service has nine endpoints (three per continent) and the load should be routed depending on which continent the request comes from.
  • The load routed to Europe and North America should be equally distributed between the endpoints (simple round robin).
  • The load routed to Asia should always hit a specific endpoint and, in case of error, must fall back to the others (simple fault tolerance).


In short, we want to build a geography-based custom router that merges the Round-Robin and Fault-Tolerance types. To build the GeoRouter I started from the example I found in the Sentinet SDK.



Build the custom Router


A Sentinet custom router is a regular .NET component that implements the IRouter interface (ref. Nevatech.Vsb.Repository.dll) and the MessageFilter abstract class.

The IRouter interface contains three methods:
- GetRoutes – Defines the routing rules.
- ImportConfiguration – Reads and applies the component's configuration.
- ExportConfiguration – Saves the component's configuration.


The custom router reads the component configuration, where we define which endpoint belongs to which region (continent) and the type of routing to be applied. Based on this XML, the GetRoutes method creates the Route objects that are responsible for the message delivery.

  <Region code="NA" roundRobin="true">
    <!-- North America -->
  <Region code="AS" roundRobin="false">
    <!-- Asia -->
  <Region code="EU" roundRobin="true">
    <!-- Europe -->
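ImportConfiguration has to turn an XML fragment like this into region objects. Here is a minimal, language-agnostic sketch of that parsing step (shown in Python for brevity; the `endpoint` child elements and their URIs are assumptions, since the sample above only shows the Region attributes):

```python
import xml.etree.ElementTree as ET

CONFIG = """
<Regions>
  <Region code="EU" roundRobin="true">
    <endpoint>net.tcp://europe1/CustomerSearch/1</endpoint>
    <endpoint>net.tcp://europe2/CustomerSearch/2</endpoint>
  </Region>
  <Region code="AS" roundRobin="false">
    <endpoint>net.tcp://asia1/CustomerSearch/1</endpoint>
  </Region>
</Regions>
"""

def parse_regions(xml_string):
    """Parse the router configuration into a list of region dictionaries."""
    regions = []
    for node in ET.fromstring(xml_string).findall("Region"):
        regions.append({
            "code": node.get("code"),
            "round_robin": node.get("roundRobin") == "true",
            "endpoints": [e.text for e in node.findall("endpoint")],
        })
    return regions

regions = parse_regions(CONFIG)
print([r["code"] for r in regions])  # ['EU', 'AS']
```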


The GetRoutes method returns a collection of Route objects. A Route is composed of a filter expression, an EndpointCollection and a priority.



How does the Sentinet engine process the Collection&lt;Route&gt; object?

The Sentinet engine processes the routes one by one, according to the order defined by the priority field, until a first match occurs. When the filter criteria are matched, the request message is sent to the first endpoint in the EndpointCollection. If the current endpoint throws an exception, Sentinet falls back to the next endpoint in the collection.
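That processing loop can be sketched in a few lines (a language-agnostic illustration in Python; `Route` here is a stand-in tuple and `send` a hypothetical transport call, not the Sentinet classes):

```python
from collections import namedtuple

Route = namedtuple("Route", "filter endpoints priority")

def process(routes, message, send):
    """Evaluate routes in priority order; on the first filter match,
    deliver to that route's endpoints with automatic fallback."""
    for route in sorted(routes, key=lambda r: r.priority, reverse=True):
        if not route.filter(message):
            continue  # no match: evaluate the next route
        for endpoint in route.endpoints:
            try:
                return send(endpoint, message)  # first healthy endpoint wins
            except IOError:
                pass  # communication failure: fall back to the next endpoint
        raise IOError("all endpoints of the matched route failed")
    raise LookupError("No matching MessageFilter was found for the given Message.")

def flaky_send(endpoint, message):
    """Stub transport: the first endpoint is down, the second one answers."""
    if endpoint == "europe1":
        raise IOError("europe1 is down")
    return "handled by " + endpoint

route = Route(filter=lambda m: True, endpoints=["europe1", "europe2"], priority=255)
print(process([route], "request", flaky_send))  # handled by europe2
```

Note that the two error messages mirror the two failure modes shown in the test cases later in this post.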


How do we populate the Collection&lt;Route&gt; to achieve our goals?

The fallback is automatically implemented by Sentinet every time a Route's endpoint collection contains more than one endpoint. Consequently, creating a route that contains one single endpoint disables the fallback mechanism.


The round-robin mechanism implemented in this demo is very simple. Basically, the distribution of the load between the endpoints is achieved by:

- Creating a number of routes equal to the number of endpoints in that region (e.g. in Europe we have three endpoints, so three routes are created and added to the collection).

- Giving every route a different filter expression based on a random number.

- Sorting the items in every route's endpoint collection in a different order, to prioritize a different endpoint at every iteration.
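The three bullets above can be sketched as follows (Python for brevity; the rotation and the 1/(n-i) balancing factors are assumptions about what SortEndpoints and GetBalancingFactor effectively compute, since the SDK sample does not show them here):

```python
def build_round_robin_routes(endpoints):
    """One route per endpoint: each route puts a different endpoint first
    and keeps the others, in order, as fallbacks."""
    routes = []
    n = len(endpoints)
    for i in range(n):
        rotated = endpoints[i:] + endpoints[:i]  # a different head on every route
        routes.append({
            "endpoints": rotated,
            # Route i should catch 1/(n-i) of the traffic still unmatched,
            # which yields an overall 1/n share per endpoint.
            "balancing_factor": 100.0 / (n - i),
        })
    return routes

routes = build_round_robin_routes(["europe1", "europe2", "europe3"])
print([r["endpoints"] for r in routes])
```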


Here is a visual representation of the Routes collection that achieves round robin + automatic fallback.


Automatic fallback without round robin


Round robin without the automatic fallback (not implemented in this example)



So what does the code do? Basically, it reads the collection of endpoints that we checkmarked during the virtual service design and, if an endpoint is contained in the XML configuration, it is added to the continent-based route object.


Here is the GetRoutes code:

public IEnumerable<Route> GetRoutes(IEnumerable<EndpointDefinition> backendEndpoints)
{
    if (backendEndpoints == null) throw new ArgumentNullException("backendEndpoints");

    // Validate router configuration
    if (!Validate()) throw new ValidationException(ErrorMessage);

    // Collection of routes to be returned
    Collection<Route> routes = new Collection<Route>();

    // Ordered collection of outbound endpoints used in a single route
    Collection<EndpointDefinition> routeEndpoints = new Collection<EndpointDefinition>();

    // The order of a route in a routing table
    byte priority = Byte.MaxValue;

    foreach (Region region in Regions)
    {
        // Collection can be reused as endpoints are copied in Route() constructor
        routeEndpoints.Clear();

        // Collect the backend endpoints configured for this region
        foreach (string endpointUri in region.Endpoints)
        {
            // Find outbound endpoint by its AbsoluteUri
            EndpointDefinition endpoint = backendEndpoints.FirstOrDefault(e => String.Equals(e.LogicalAddress.AbsoluteUri, endpointUri, StringComparison.OrdinalIgnoreCase));
            if (endpoint == null) throw new InvalidOperationException(String.Format(CultureInfo.CurrentCulture, InvalidRouterConfiguration, endpointUri));
            routeEndpoints.Add(endpoint);
        }

        if (region.EnableRoundRobin)
        {
            // Build a route for each endpoint in the region
            var iEndpointIndex = 0;
            foreach (string endpointUri in region.Endpoints)
            {
                // Change the backend's endpoint order so each route prioritizes a different endpoint
                if (iEndpointIndex > 0) SortEndpoints(routeEndpoints, iEndpointIndex - 1);

                // Configure message filter for the current route
                var rrFilter = new GeoMessageFilter
                {
                    ContinentCode = region.Code,
                    RoundRobin = region.EnableRoundRobin,
                    BalancingFactor = GetBalancingFactor(iEndpointIndex)
                };

                routes.Add(new Route(rrFilter, routeEndpoints, priority--));
                iEndpointIndex++;
            }
        }
        else
        {
            // Build a single route for the whole region (endpoint fallback scenario)
            var filter = new GeoMessageFilter
            {
                ContinentCode = region.Code,
                RoundRobin = false
            };

            routes.Add(new Route(filter, routeEndpoints, priority--));
        }
    }

    return routes;
}

And the GeoMessageFilter class:

public sealed class GeoMessageFilter : MessageFilter
{
    #region Properties

    public String ContinentCode { get; set; }
    public bool RoundRobin { get; set; }
    public double BalancingFactor { get; set; }

    private static Random random = new Random();

    #endregion

    #region Methods

    public override bool Match(Message message)
    {
        var remoteProps = (RemoteEndpointMessageProperty)message.Properties[RemoteEndpointMessageProperty.Name];
        return Match(remoteProps.Address, ContinentCode);
    }

    private bool Match(string ipAddress, string continentCode)
    {
        var requestCountryCode = GeoLocation.GetCountryCode(ipAddress);
        var isMatch = (CountryMap.GetContinentByCountryCode(requestCountryCode) == continentCode.ToUpperInvariant());

        // For round-robin routes, let the balancing factor decide
        // whether this particular route handles the request
        if (isMatch && RoundRobin)
        {
            if (random.Next(0, 100) > BalancingFactor) return false;
        }
        return isMatch;
    }

    #endregion
}
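To see why a cascade of such filters spreads the load evenly, here is a quick simulation (Python; the factor values 100/3, 50 and 100 are assumptions consistent with an equal three-way split, since GetBalancingFactor is not shown):

```python
import random

def pick_route(factors, rnd):
    """Filters are evaluated in order; route i matches when the random
    draw falls at or below its balancing factor (cf. GeoMessageFilter.Match)."""
    for i, factor in enumerate(factors):
        if rnd() * 100 <= factor:
            return i
    return len(factors) - 1  # the last factor is 100, so we never get here

random.seed(42)
counts = [0, 0, 0]
for _ in range(30000):
    counts[pick_route([100 / 3, 50, 100], random.random)] += 1

# Each route ends up with roughly a third of the 30000 requests:
# route 0 takes 1/3, route 1 takes 1/2 of the remaining 2/3, route 2 takes the rest.
print(counts)
```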



Register and configure


The custom component can be registered and graphically configured using the Sentinet Administrative Console. Go to the Design tab of the virtual service and click Modify, then select the endpoints you want to be managed by the routing component. On the endpoint tree node, click the ellipsis button.


Add a new Custom Router, specifying a few parameters:

  • Name. The friendly name of the custom router (GeoRouter).
  • Assembly. The fully qualified assembly name that contains the router implementation (Codit.Demo.Sentinet.Router,Version=,Culture=neutral,PublicKeyToken=null).
  • Type. The .NET class that implements the IRouter interface (Codit.Demo.Sentinet.Router.Geo.GeoRouter).
  • Default Configuration. In this example I left it blank; the parameters will be specified when the router is assigned to the virtual service.


Select the router and set the custom configuration.


Save the process and wait for the next heartbeat so that the modifications are applied.



To test the virtual service with the brand-new custom router, this time I tried WcfStorm.Rest.

Test case #1 - All nine endpoints were available.

The messages were routed to the correct continent and the load was distributed among the backend services as expected.

The image below collects the backend services monitor (top left); the map displays the sources of the service calls.

As you can see, this basic load balancer is not bulletproof, but the load is spread almost equally, which is acceptable for this proof of concept.



Test case #2 - Fallback test on the European region.

I shut down the europe1 and europe2 services, so only the europe3 service was active.

Thanks to the fallback mechanism, the virtual service always responded. In the Monitor tab you can see the fallback in action.



Test case #3 - All the European backend services were stopped.

This means that a route had a valid match filter and Sentinet tried to contact all the endpoints in its endpoint collection, without success on any attempt. The error message we got is reported below. Notice that the address reported will differ depending on which route has been hit.

<string xmlns="">
The message could not be dispatched because the service at the endpoint address 
'net.tcp://europe3/CustomerSearch/3' is unavailable for the protocol of the address.

Test case #4 – No matching rules.

If there are no matching rules (e.g. sending messages from South America), the following error message is returned.

<string xmlns="">
No matching MessageFilter was found for the given Message.</string>



Sentinet is designed to be extensible in multiple areas of the product. In this post I’ve demonstrated how to create a geography-based custom router that combines the round-robin and fault-tolerance features. In the next post I will discuss the Sentinet management APIs.




Posted in: Sentinet | SOA | WCF


August 21, 2014 at 7:00 PM

Note: Azure Search is currently in preview (at the time of writing); you might need to request access first using your Azure subscription in order to view/use this functionality. To test this service, you can use one of the free packages.


What is Azure Search

Azure Search is an indexing service where you can look for content in documents. You could compare it with a personal Bing search engine for your own indexed documents. You can configure Azure Search and query your documents by using the Azure Search REST API.


Typical usage flow:

  1. Create an Azure Search Service using the portal
  2. Create an index schema
  3. Upload documents to the index
  4. Query Azure Search for results



Configuring the Azure Search Service using the preview portal

In order to start using Azure Search, you will need:


Creating the search service

Browse to and follow the instructions as shown in the following screenshots.





Using Fiddler to configure your search

Once you have set up your Azure Search Service, we can start using the API. Note the api-version query URL parameter, which you might need to change as the service evolves.


Creating an index


An index defines the skeleton of your documents using a schema that includes a number of data fields. To create an index, issue a POST or PUT request with at least one valid "field". Each POST request uses a JSON payload to specify the request body.




POST Example



Content-Type: application/json
Content-Length: 579

  "name": "products",
  "fields": [
    {"name": "productId", "type": "Edm.String", "key": true, "searchable": false, "filterable": true},
    {"name": "title", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "category", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "description", "type": "Edm.String", "searchable": true, "filterable": true},
    {"name": "releaseDate", "type": "Edm.DateTimeOffset" },
    {"name": "isPromotion", "type": "Edm.Boolean" },
    {"name": "price", "type": "Edm.Double" }
You should receive a 201 Created response.
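Since the index definition is plain JSON, you can also generate and sanity-check it programmatically before issuing the POST. A small sketch (Python; the helper names are hypothetical and the field list is trimmed from the example above; the rule that an index has exactly one key field follows the API's schema requirements):

```python
import json

def make_field(name, type_="Edm.String", **flags):
    """Build one field definition like those in the payload above."""
    return dict({"name": name, "type": type_}, **flags)

index = {
    "name": "products",
    "fields": [
        make_field("productId", key=True, searchable=False, filterable=True),
        make_field("title", searchable=True, filterable=True),
        make_field("price", "Edm.Double"),
    ],
}

def key_field(index):
    """An index needs exactly one field flagged as the document key."""
    keys = [f["name"] for f in index["fields"] if f.get("key")]
    if len(keys) != 1:
        raise ValueError("expected exactly one key field, got %r" % keys)
    return keys[0]

body = json.dumps(index)  # request body for the POST shown above
print(key_field(index))   # productId
```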

Modifying your index

Whenever you want to edit your index, just issue a new POST request with the updated index. For now, only added fields are processed; changes to existing fields are not possible (you have to delete and recreate the index).

Deleting your index



You should receive a 204 No Content response.


Retrieving existing indexes configured on your Azure Search Service




Index new blob files


We previously created a new Azure Search index. It's time to populate the index with some documents. Each document needs to be uploaded to the Search Service API as JSON, using the following structure:

  "value": [
      "@search.action": "upload (default) | merge | delete",
      "key_field_name": "unique_key_of_document", (key/value pair for key field from index schema)
      "field_name": field_value (key/value pairs matching index schema)

You could of course use Fiddler to upload your documents but let's write some code to do this.


Hands-on: Index new blob files using Azure WebJobs


Because "we" (developers) are lazy and try to automate as much as possible, we could use Azure WebJobs as a blob polling service and then index each new blob file on the fly. For the sake of simplicity, we will only send 1 file to the Azure Search API service, you could however send over a batch to the indexing service (~ max 1000 documents at once and size should be < 16MB).


The following example is available for download on GitHub.


Creating the Azure WebJob


More information on how to create a WebJob can be found here


Creating a WebJob is fairly easy; you just need the following static method to pick up new blob messages:

public static void ProcessAzureSearchBlob([BlobTrigger("azure-search-files/{name}")] TextReader inputReader) { }
The blob connection string is specified in the config, and the WebJob will poll the blob container "azure-search-files". When you drop a new file in the blob container, the method is executed. Simple as that, and half of the work is already done! We created an index before, so we should follow the index schema. Let's drop the following XML file:
<Product>
  <ProductId>B00FFJ0HUE</ProductId>
  <Title>ASUS EeePC 1016PXW 11-Inch Netbook</Title>
  <Category>Computers \ Tablets</Category>
  <Description>
    Graphics: Intel HD Graphics Gen7 4EU
    Cameras: 1.2MP
    Operating System: Windows 8.1
  </Description>
  <ReleaseDate>2013-11-12T00:00:01.001Z</ReleaseDate>
  <IsPromotion>false</IsPromotion>
  <Price>362.9</Price>
</Product>

The last and most interesting task is to parse the blob message and send it to the API. First we deserialize the XML to an object:
var deserializer = new XmlSerializer(typeof(Product), new XmlRootAttribute("Product"));
var product = (Product)deserializer.Deserialize(inputReader);
Then we convert it to JSON using the Json.NET library:
var jsonString = JsonConvert.SerializeObject(product,Formatting.Indented, new JsonSerializerSettings { ContractResolver = new CamelCasePropertyNamesContractResolver() });
message.Content = JObject.Parse(jsonString);
Finally, we need to adjust the JSON to the structure the Azure Search API expects:
var indexObject = new JObject();
var indexObjectArray = new JArray();
var itemChild = new JObject { { "@search.action", "upload" } };
itemChild.Merge(message.Content);
indexObjectArray.Add(itemChild);
indexObject.Add("value", indexObjectArray);
We send the JSON using an HttpClient. This is the request that was sent:
Content-Type: application/json; charset=utf-8
Content-Length: 431

  "value": [
      "@search.action": "upload",
      "productId": "B00FFJ0HUE",
      "title": "ASUS EeePC 1016PXW 11-Inch Netbook",
      "category": "Computers \\ Tablets",
      "description": "\n    Graphics: Intel HD Graphics Gen7 4EU\n    Cameras: 1.2MP\n    Operating System: Windows 8.1\n  ",
      "releaseDate": "2013-11-12T00:00:01.001Z",
      "isPromotion": false,
      "price": 362.9

And the response:

HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; odata.metadata=minimal
Expires: -1
request-id: 1abe3320-de0e-47aa-ac65-c6aa15f303c3
elapsed-time: 15
OData-Version: 4.0
Preference-Applied: odata.include-annotations="*"
Date: Thu, 21 Aug 2014 07:20:29 GMT
Content-Length: 221


To make things easy, I've created a test client that sends the message to blob storage for you.





Querying the Azure Search Service

Querying is done using the Query API Key.


You can do a simple lookup by the key of your index schema. Here we look up the previously uploaded document with productId B00FFJ0HUE.



Simple query syntax

Querying is fairly simple. For a complete overview, you should check the complete API reference. For this example, the service has five product documents; four of them have a price lower than 500. We can execute the following query to retrieve those four products.



GET$filter=(price+lt+500)&api-version=2014-07-31-Preview HTTP/1.1



Azure Search also supports paging, so we can also $skip and $top a number of results.

More info on the syntax to create specific queries: MSDN



Additionally, the following characters can be used to fine-tune the query:

  • +: AND operator. E.g. wifi+luxury will search for documents containing both "wifi" and "luxury"
  • |: OR operator. E.g. wifi|luxury will search for documents containing either "wifi" or "luxury" or both
  • -: NOT operator. E.g. wifi -luxury will search for documents that have the "wifi" term and/or do not have "luxury" (and/or is controlled by searchMode)
  • *: suffix operator. E.g. lux* will search for documents that have a term that starts with "lux", ignoring case.
  • ": phrase search operator. E.g. while Roach Motel would search for documents containing Roach and/or Motel anywhere in any order, "Roach Motel" will only match documents that contains that whole phrase together and in that order (text analysis still applies).
  • ( ): precedence operator. E.g. motel+(wifi|luxury) will search for documents containing the "motel" term and either "wifi" or "luxury" (or both).
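These operators, together with the OData parameters, end up in the query string; here is a hypothetical helper to compose such URLs (Python; the service and index names are placeholders, and the `$`-prefixed parameter names come from the OData query syntax):

```python
from urllib.parse import urlencode

API_VERSION = "2014-07-31-Preview"

def build_query_url(service, index, search=None, filter_=None, top=None, skip=None):
    """Compose a search request URL for the given index."""
    params = {"api-version": API_VERSION}
    if search is not None:
        params["search"] = search
    if filter_ is not None:
        params["$filter"] = filter_
    if top is not None:
        params["$top"] = top
    if skip is not None:
        params["$skip"] = skip
    # urlencode takes care of escaping operators like +, | and parentheses
    return "https://%s.search.windows.net/indexes/%s/docs?%s" % (
        service, index, urlencode(params))

url = build_query_url("myservice", "products",
                      search="wifi+luxury", filter_="(price lt 500)", top=10)
print(url)
```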


There is more than this…

Time to mention some extra features that are out of scope for this blogpost:

CORS Support

When using JavaScript to query the service, you will expose your API query key to the end user. If you have a public website, someone might just steal your key to use on their own website. CORS prevents this by checking the HTTP headers to verify where the original request came from.


Hit highlighting

You could issue a search and get a fragment of the document where the keyword is highlighted. The Search API will respond with HTML markup (<em />).



Suggestions

Provides the ability to retrieve suggestions based on partial search input. Typically used in search boxes to provide type-ahead suggestions as users are entering text. More info: MSDN


Scoring Profiles

Scores matching documents at query time to position them higher / lower in ranking. More info: MSDN



Performance and pricing

When you are interested in using the service in production, you can pay for a dedicated Search Unit. Search Units allow for scaling of QPS (queries per second), document count, document size and high availability. You can also use replicas in order to load balance your documents.


And more…



Download Sample

Sample code can be found here:

August 8, 2014 at 3:00 PM

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points. The Sentinet API interfaces can also be used to extend Sentinet's possibilities.

In some previous posts released on this blog we saw how to build a custom access rule expression and how to leverage the WCF extensibility by setting up a virtual service with a custom endpoint behavior.


In this post I would like to explain another extensibility point of Sentinet: Custom Alert Handlers.

Alerts, or Violation Alerts, are triggered by the Sentinet Service Agreements when certain configured SLA violations occur. More details about Sentinet Service Agreements can be found here.



The main scenario for this blog post is very simple. We will create our own custom alert handler class by implementing certain interfaces, register the custom alert handler in Sentinet, and use it as an alert when a certain SLA is violated.

Creating the Custom Alert Handler

We start our implementation of the custom alert handler by creating a new class that implements the IAlertHandler interface. This interface is available through the Nevatech.Vsb.Repository.dll.

This interface contains one single method, ProcessAlerts, where you put your logic to handle the alert after the SLA violation has occurred.


One more thing to do before we can start our implementation of the ProcessAlerts method is to add a reference to the Twilio REST API through NuGet. More information about Twilio can be found here.



The final implementation of our custom alert handler looks like below. We start by initializing some variables needed for Twilio. After that, everything is pretty straightforward. We read our handler configuration; here we chose a CSV configuration string, but you could perfectly well go for an XML configuration and parse it into an XmlDocument or XDocument.

When we've read the receivers from the configuration, we create an alert message by concatenating the alert descriptions; after that we send an SMS message by using the Twilio REST API.



The first step in registering your custom alert is to add your DLL(s) to the Sentinet installation folder. This way the DLL(s) can be accessed by the "Nevatech.Vsb.Agent" process, which is responsible for generating the alert. There is no need to add your DLL(s) to the GAC.

The next step is to register our custom alert in Sentinet itself and assign it to the desired Service Agreement.

In the Sentinet Repository window, click on the Service Agreement, navigate to Alerts and then choose Add Alert. The following screen will pop up.


The next step is clicking the 'Add Custom Alert Action' button as shown below.


In the following screen we add all the parameters necessary to configure our custom alert.

  • Name: The friendly name of the Custom Alert
  • Assembly: The fully qualified assembly name that contains the custom alert
  • Type: The .NET class that implements the IAlertHandler interface
  • Default Configuration: The optional default configuration. In this example I specified a CSV value of different phone numbers. You can access this value inside your class that implements the IAlertHandler interface.


Confirm the configuration by clicking 'OK'. In the next screen be sure to select your newly configured Custom Alert.

You will end up with following Configured Violation Alerts.




To test this alert I modified my Service Agreement metrics to a very low value (e.g. 'only 1 call is allowed per minute') so I could easily trigger the alert. After I called my virtual service multiple times per minute, I received the following SMS:

Service agreement "CoditBlogAgreement"  has been violated one or more times. The most recent violation is reported for the time period started on 2014-07-24 18:15:00 (Romance Standard Time).


Sentinet is designed to be extensible in multiple areas of the product. In this post I’ve demonstrated how to create a Custom Alert Handler that will send an SMS when an SLA has been violated.



Glenn Colpaert

Posted in: .NET | BizTalk | Sentinet


August 4, 2014 at 4:00 PM

There are a lot of posts available teaching you how to create a single-box FTP server on Microsoft Azure, using FileZilla Server or Internet Information Services... About a year ago, at the announcement of the General Availability of Infrastructure as a Service, I tested a single-box FTP server using FileZilla Server. The main benefit of FileZilla is its easy installation and configuration.


Single-box configuration:

The following posts will guide you while creating a single-box server using FileZilla:


Azure Virtual Machines need maintenance from time to time, and you should always avoid a single point of failure... Reasons enough for a highly available configuration. This post does not consist of a step-by-step guide showing you how to create a VNET, VMs, DFS, and so on.
You need some experience with the Azure platform and Windows Server; I will help you put the pieces together for a highly available FTP server running on the Azure platform.


Highly available topology:


Topology remarks:
  • I'm using DFS (Distributed File System) as a highly available network share. I tried a new Windows Azure feature that is currently in preview, Azure File Services; it's very useful for shared storage between Virtual Machines, but IIS and FileZilla are not able to work with it, so it's not useful for our purposes.
Virtual Network and Virtual Machines:
  • Create a new Virtual Network, choose a region, create an affinity group,... 
  • When your Virtual Network has been provisioned, create two new Virtual Machines and add them both to the VNET.
  • When creating the first VM, create an availability set. When creating the second VM, join the availability set you just created.  
  • Use the same cloud service name for the second VM as the one you defined at creation time of the first VM.
  • When creating a new VM, the first thing you should do is change the Windows Update and UAC settings.
  • Attach an empty data disk to both Virtual Machines and format it (it will be used for DFS and FTP file storage).
Active Directory:

FTP and IIS:

  • Install IIS and FTP service on both servers.
  • Configure the FTP services (publish FTP services).
  • Create a DFS share and set up shared IIS config (you can use a shared config when doing the initial setup; when you go live you will need to disable it due to the port settings).
You will find all the information to do this on the following sites:
FTP users and folders:
Should you have problems remembering how to configure user access in IIS, the following posts will guide you.
If you work with a domain instead of, for example, local users, you need to create a folder with the domain name in IIS. Don't forget this!



Azure Load Balancer:
Open port 21 and load balance it between the two machines. Don't forget to join the load-balanced port on the second virtual machine!



Now here is where the magic happens to enable passive FTP. I was not able to find any solution for this on the internet, but the following did the trick. (You could use the Public Instance IP (PIP), but then your Windows Explorer clients would not be able to connect.)

You open a specific range of passive FTP ports on the first VM, and another specific range of ports on the second server. This way FTP traffic will always be routed to the right server.
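Splitting the passive port range between the two VMs can be computed rather than hand-picked; a small sketch (Python; the 6051+ range matches the PowerShell below, and the two-server split is this post's scenario):

```python
def split_port_range(start, end, servers):
    """Divide [start, end] into contiguous, non-overlapping chunks, one per server."""
    total = end - start + 1
    size, extra = divmod(total, servers)
    ranges, cursor = [], start
    for i in range(servers):
        chunk = size + (1 if i < extra else 0)  # spread any remainder over the first servers
        ranges.append((cursor, cursor + chunk - 1))
        cursor += chunk
    return ranges

print(split_port_range(6051, 6150, 2))  # [(6051, 6100), (6101, 6150)]
```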

To avoid a lot of manual work, you can use PowerShell to open a range of ports:

Import-Module azure

Select-AzureSubscription "yoursubscription"
$vm = Get-AzureVM -ServiceName "yourvmservicename" -Name "yourvm"

for ($i = 6051; $i -le 6100; $i++)
{
	$name = "FTP-Dynamic-" + $i
	Write-Host -Fore Green "Adding: $name"
	Add-AzureEndpoint -VM $vm -Name $name -Protocol "tcp" -PublicPort $i -LocalPort $i
}
# Update VM.
Write-Host -Fore Green "Updating VM..."
$vm | Update-AzureVM 
Write-Host -Fore Green "Done."


Now you can specify the machine-specific range in IIS on each machine; secondly, you need to specify the public IP of your cloud service in IIS. Note: deallocating both Virtual Machines will make you lose your public IP. (Since the latest Azure announcements it's possible to reserve your IP in Azure.)

Don't forget to allow FTP through your Windows Firewall!

July 25, 2014 at 3:30 PM

Sentinet is highly extensible through standard Microsoft .NET, WCF and WIF extensibility points, and through the Sentinet API interfaces.

In the previous post we saw how to build a custom access rule expression (a standard .NET component). With this post I would like to continue the series about Sentinet extensibility and demonstrate how to leverage the WCF extensibility by setting up a virtual service with a custom endpoint behavior.



The scenario is pretty simple. A third-party REST service, GetCalledNumbers, that provides the list of recently called numbers needs to be exposed. To meet the customer's privacy policies, we are required to mask the dialed phone numbers.


To achieve these requirements I created a REST façade service that virtualizes the third-party one. Then I built a custom endpoint behavior that replaces the phone numbers with asterisks, except for the last four digits.
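The masking rule itself is simple to state: every digit except the last four becomes an asterisk. Here is a sketch of the transformation the behavior applies (Python for brevity; the real component does this inside a WCF message inspector over the JSON payload, and the sample numbers are made up):

```python
def mask_phone_number(number, visible=4):
    """Replace every digit except the last `visible` ones with an asterisk;
    non-digit characters (spaces, '+', '-') are kept as-is."""
    total = sum(ch.isdigit() for ch in number)
    masked, seen = [], 0
    for ch in number:
        if ch.isdigit():
            seen += 1
            masked.append(ch if seen > total - visible else "*")
        else:
            masked.append(ch)
    return "".join(masked)

print(mask_phone_number("0476123456"))  # ******3456
```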


In this demo the inspector has been positioned at the client side, so the IClientMessageInspector interface has been implemented. It's not the intent of this post to describe how to build a message inspector; here you can find a complete example.





To use the custom component, it first has to be registered.

The registration procedure is pretty straightforward:


1) Drop Codit.Demo.Sentinet.Extensions.dll in the bin directory of both the Repository and Node applications (for Sentinet Nodes you might need to create an empty bin directory first).


2) Register/add behavior extension in web.config files of both Repository and Nodes applications

<add name="jsonMask" type="Codit.Demo.Sentinet.Extensions.Formatting.JsonFormatterEndpointBehaviorExtension, Codit.Demo.Sentinet.Extensions, Version=, Culture=neutral, PublicKeyToken=9fc8943ef10c782f"/>

3) Then you can add this endpoint behavior to the Sentinet Console shared behaviors (for future re-use of this shared component between virtual services):

<behavior name="JsonFieldMask"><jsonMask toBeBlurred="PhoneNumber" /></behavior>

4) Now you can apply this shared endpoint behavior to the inbound endpoint(s) of the virtual service.

Using the Sentinet Administration console, first create a new behavior.


Then apply it to the virtual service.




To test the virtual service, I used Postman to send a GET request to my virtual API.



During this test I enabled full transaction recording so that, in the Monitor tab, I can access the transmitted content and check the result of the message inspector.




Message inspectors are among the most used extensibility points of the WCF runtime. In this article we saw how to set up and use custom/third-party components in Sentinet. In the next post I will discuss Sentinet's routing feature and how to build a custom router.




Posted in: REST | Sentinet | SOA | WCF


July 22, 2014 at 4:00 PM

Recently I noticed some odd behavior when I was troubleshooting a SQL integration scenario with BizTalk 2010. I was using WCF-Custom adapter to perform Typed-Polling that executed a stored procedure. This stored procedure was using dynamic SQL to fetch the data because it is targeting multiple tables with one generic stored procedure.

In this blog post I will tell you how we implemented this scenario, as well as where the adapter was failing when polling.
Next to that I will talk about some "problems" with the receive location and the adapter.
I will finish with some small hints that made our development easier.



In this simplified scenario we have an application that polls two tables called ‘tbl_Shipment_XXX’, where XXX is the name of a warehouse. Each warehouse has a corresponding receive location that polls the data that is marked as ready to be processed.

This is performed by a stored procedure called ‘ExportShipments’, which takes the name of the target warehouse and proceeds in the following steps: lock the data as being processed, export the data to BizTalk and mark it as successfully processed.



Creating our polling statement

In our polling statement we will execute our generic stored procedure. This procedure marks our data as being processed, calls a function that returns a SELECT statement as NVARCHAR(MAX), executes that statement and finally marks the data as processed.

CREATE PROCEDURE [dbo].[ExportShipments]
	@Warehouse nvarchar(10)
AS
	DECLARE @lockData NVARCHAR(MAX), @exportData NVARCHAR(MAX), @markProcessed NVARCHAR(MAX);
	-- Lock the data as being processed
	SET @lockData = N'UPDATE [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] SET [STATUS_ID] = 7 WHERE [STATUS_ID] = 1';
	EXEC sp_executesql @lockData;
	-- Compose and execute the export SELECT (returns the result set to BizTalk)
	SET @exportData = [ShopDB].[dbo].[ComposeExportShipmentSelect] (@Warehouse);
	EXEC sp_executesql @exportData;
	-- Mark the data as successfully processed
	SET @markProcessed = N'UPDATE [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] SET [STATUS_ID] = 10 WHERE [STATUS_ID] = 7';
	EXEC sp_executesql @markProcessed;

In our function we will compose a simple SELECT-statement where we fill in the name of our warehouse.

CREATE FUNCTION ComposeExportShipmentSelect
(
	@Warehouse nvarchar(10)
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
	DECLARE @result NVARCHAR(MAX);
	SET @result = 'SELECT [ID], [ORDER_ID], [WAREHOUSE_FROM], [WAREHOUSE_TO], [QUANTITY] FROM [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] WHERE [STATUS_ID] = 7';
	RETURN @result
END

I chose to separate the composition of the SELECT statement into a function because we were using a lot of JOINs and UNIONs, and I wanted to keep this logic out of the stored procedure for the sake of code readability.

Although our scenario is very simple, the function suffices to illustrate the problem.

Generating the Typed-Polling schema

Now that everything is set up in our database, we are ready to add the Typed-Polling schema for the WarehouseA polling to our BizTalk project.

1. Right-click on your project and select Add > Add generated items... > Consume Adapter Service


2. Select the SqlBinding and click "Configure"

3. Fill in the Server & InitialCatalog and a unique InboundId. The InboundId will be used in the namespace of your schema and in the URI of your receive location. Each receive location requires its own schema. I used "WarehouseA" since I am creating the schema for polling on "tbl_Shipments_WarehouseA".

4. Configure the binding by selecting "TypedPolling" as the InboundOperationType and executing our stored procedure as the PollingStatement with parameter "WarehouseA". (Note that we are not using ambient transactions.)


5. Click Connect, select "Service" as the contract type and select "/" as the category. If everything is configured correctly you can select TypedPolling and click Properties, which will show you the metadata. If this is the case, click Add and a schema will be generated.


The wizard will create the following schema with a sample configuration for your receive location.


It's all about metadata

The problem here is that executing the dynamic SQL doesn't provide the required metadata to BizTalk in order to successfully generate a schema for that result set.

I solved this by replacing the function 'ComposeExportShipmentSelect' with a new stored procedure called 'ExportShipment_AcquireResults'.
This stored procedure executes the dynamic SQL, inserts the result set into a table variable and returns it to the caller. This tells BizTalk what columns the result set will contain and of what type they are.

CREATE PROCEDURE ExportShipment_AcquireResults
	@Warehouse nvarchar(10)
AS
	DECLARE @result TABLE (
		[ID] [INT] NOT NULL,
		[ORDER_ID] [INT] NOT NULL,
		[WAREHOUSE_FROM] [NVARCHAR](10) NOT NULL,
		[WAREHOUSE_TO] [NVARCHAR](10) NOT NULL,
		[QUANTITY] [INT] NOT NULL	-- column types must match tbl_Shipment_*
	);
	DECLARE @select NVARCHAR(MAX);
	SET @select = 'SELECT [ID], [ORDER_ID], [WAREHOUSE_FROM], [WAREHOUSE_TO], [QUANTITY] FROM [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] WHERE [STATUS_ID] = 7';
	INSERT INTO @result
	EXEC sp_executesql @select;
	SELECT * FROM @result;

Our generic polling stored procedure now simply executes our new stored procedure.

	DECLARE @lockData NVARCHAR(MAX), @markProcessed NVARCHAR(MAX);
	SET @lockData = N'UPDATE [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] SET [STATUS_ID] = 7 WHERE [STATUS_ID] = 1';
	EXEC sp_executesql @lockData;
	EXEC [ShopDB].[dbo].[ExportShipment_AcquireResults] @Warehouse;
	SET @markProcessed = N'UPDATE [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] SET [STATUS_ID] = 10 WHERE [STATUS_ID] = 7';
	EXEC sp_executesql @markProcessed;

When we regenerate our schema the problem should be resolved and your schema looks like this -
(Note that the new filename includes your InboundId.)


Why not use a #tempTable?

You can also achieve this with SET FMTONLY OFF and a temporary table, but this will not work in every scenario.

CREATE PROCEDURE [dbo].[ExportShipments]
	@Warehouse nvarchar(10)
AS
	SET FMTONLY OFF;
	DECLARE @lockData NVARCHAR(MAX), @exportData NVARCHAR(MAX), @markProcessed NVARCHAR(MAX);
	SET @lockData = N'UPDATE [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] SET [STATUS_ID] = 7 WHERE [STATUS_ID] = 1';
	EXEC sp_executesql @lockData;
	SET @exportData = [ShopDB].[dbo].[ComposeExportShipmentSelect] (@Warehouse);
	CREATE TABLE #tempTable (
		[ID] [INT] NOT NULL,
		[ORDER_ID] [INT] NOT NULL,
		[WAREHOUSE_FROM] [NVARCHAR](10) NOT NULL,
		[WAREHOUSE_TO] [NVARCHAR](10) NOT NULL,
		[QUANTITY] [INT] NOT NULL
	);
	INSERT INTO #tempTable
	EXEC sp_executesql @exportData;
	SET @markProcessed = N'UPDATE [ShopDB].[dbo].[tbl_Shipment_' + @Warehouse + '] SET [STATUS_ID] = 10 WHERE [STATUS_ID] = 7';
	EXEC sp_executesql @markProcessed;
	SELECT * FROM #tempTable;

We were using transactions at SQL level and TRY/CATCH statements, but this conflicted with the FMTONLY setting and resulted in a severe error in the event log. It also didn't make much sense for us, since we separated the SELECT composition into a separate function/stored procedure and the definition of the result set should be defined there.

If you want to read more about #tempTables, I recommend this post.


PollingDataAvailableStatement & no ambient transactions

With our schema generated, I deployed my application to my machine and started creating the receive locations for the polling on my database. The configuration is pretty easy: no ambient transactions, Typed Polling, our stored procedure as the polling statement, and a SELECT in the PollDataAvailableStatement to check whether we need to run the stored procedure.

Apparently the PollDataAvailableStatement is only used when you enable ambient transactions, according to this article.

The problem is that when you clear out the PollDataAvailableStatement and start your receive location, it will be disabled automatically with the following error -

PollDataAvailableStatement seems to be a mandatory field although it is not being used. I easily fixed it with "SELECT 0".


Empty result sets & TimeOutExceptions

Later on in the project we had to move from transactions at SQL level to ambient transactions, and our PollDataAvailableStatement was consulted before the stored procedure ran.

In our stored procedure we used the polled data to perform cross-references and only returned a set of data when required, so it is possible that no result set is returned at all.

We started experiencing locks on our tables, and TimeOutExceptions occurred without any obvious reason.
The adapter turned out to have a bug: once it has found data, the polling statement is required to return a result set to BizTalk; if it doesn't, the adapter keeps SQL resources locked and the tables stay locked.

This issue can be resolved by installing the standalone fix (link) or BizTalk Adapter Pack 2010 CU1 (link).


Tips & Tricks

During the development of this scenario I noticed some irregularities when using the wizard. Here are some of the things I noticed -

  • It is good practice to validate the metadata of the TypedPolling operation before adding it. If something is misconfigured or your stored procedure is incorrect, you will receive an error with more information.
    If this is the case you should click OK instead of clicking the X, because otherwise the wizard will close. The same error might occur when you finalize the wizard; then it is important to click Cancel instead of OK, or the wizard will close automatically.
  • If your wizard closes for some reason, it will remember the configuration of the previous run when you restart it: you can simply connect, select the requested operation and request the metadata or finish the wizard.
    It might occur that this results in a MetadataException saying that the PollingStatement is empty and therefore invalid -

    This is a bug in the wizard: you always need to re-open the URI configuration, even though it still remembers your settings. You don't need to change anything; just open and close it and the exception is gone.


In this blog post I highlighted the problem where I was unable to generate a schema for my stored procedure because it only executed the result of my function. I also illustrated how I fixed it and why I didn't use FMTONLY with a temporary table.

Next to the generation of our schemas, I talked about the problems with the receive location, which required a PollDataAvailableStatement even when it was not used, and about the locking of our tables because our stored procedure wasn't returning a result set. Last but not least, I gave two examples of common irregularities when using the wizard and how you can bypass them.

For me it is important to write your SQL scripts like you write your code: use decent comments and separate your procedures into sub-procedures and functions according to the separation of concerns.
While writing your scripts everything might seem obvious, but will it still be in a couple of months? And how about for your colleagues?

All the scripts for this post, incl. DB generation, can be found here.

Thank you for reading,


July 8, 2014 at 9:18 AM

In Sentinet, authorization and access to any virtual service is defined using an Access Rule, which is a combination of authorization expressions and logical conditions. Sentinet provides an out-of-the-box access rule composer with a set of common Access Rule Expressions like X509 certificate, Claim and User validation, etc.


It is almost inevitable to run out of built-in tools before all business scenarios are covered; extensibility is the way to fill this gap. Extensibility is one of the key features of every successful product, and it particularly shines in Sentinet, where you can deal with different extensibility points.


In this blog post I will go through the steps involved in creating a custom access rule expression, registering it and testing it.


Create a Custom Rule Expression

A custom access rule expression is a regular .NET component that implements the IMessageEvaluator interface (ref. Nevatech.Vbs.Repository.dll). This interface contains three methods:

  • Evaluate – Where to put the access rule logic.
  • ImportConfiguration – Read (and apply) the component’s configuration.
  • ExportConfiguration – Save the component’s configuration.


In this example I’m going to define a component that evaluates an APIKey sent in a custom header (for SOAP services) or as part of the service URL (for REST services).

As shown in the figure below, the implementation is pretty straightforward.

  • The getSecurityContext method evaluates the System.ServiceModel.Channels.Message object to read the APIKey.
  • The isValidKey method evaluates the key.
  • The properties ServiceName and isRest are set with the values specified in the component configuration.
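
Since the original figures are not reproduced here, a skeleton of such a component might look as follows. This is a hedged sketch, not the actual Codit implementation: IMessageEvaluator and its three methods come from Nevatech.Vbs.Repository.dll as described above, but the exact Evaluate signature and the configuration XML shape shown here are assumptions.

```csharp
using System.ServiceModel.Channels;
using System.Xml.Linq;
using Nevatech.Vbs.Repository; // defines IMessageEvaluator

public class APIKeyValidator : IMessageEvaluator
{
    // Set from the component configuration (see ImportConfiguration)
    public string ServiceName { get; set; }
    public bool IsRest { get; set; }

    public bool Evaluate(Message message)
    {
        // Read the APIKey from a custom header (SOAP) or from the URL (REST)
        string apiKey = GetSecurityContext(message);
        // Validate the key, e.g. via a stored procedure lookup
        return IsValidKey(apiKey);
    }

    public void ImportConfiguration(string configuration)
    {
        // Assumed XML shape: <apiKeyValidator serviceName="..." isRest="..." />
        var xml = XElement.Parse(configuration);
        ServiceName = (string)xml.Attribute("serviceName");
        IsRest = (bool?)xml.Attribute("isRest") ?? false;
    }

    public string ExportConfiguration()
    {
        return new XElement("apiKeyValidator",
            new XAttribute("serviceName", ServiceName ?? string.Empty),
            new XAttribute("isRest", IsRest)).ToString();
    }

    private string GetSecurityContext(Message message)
    {
        // SOAP: read the APIKey from a custom header;
        // REST: extract it from message.Headers.To (the request URL). Omitted here.
        throw new System.NotImplementedException();
    }

    private bool IsValidKey(string apiKey)
    {
        // Execute the stored procedure with (apiKey, ServiceName). Omitted here.
        throw new System.NotImplementedException();
    }
}
```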




Here is the simple implementation for reading the component configuration.



In this example the API key is validated against a SQL table. A stored procedure with two parameters evaluates whether the key has access to the specific service.




The first step is to copy the DLL(s) to the Sentinet node(s). In this case, notice that it’s not mandatory to sign the assembly and register it in the GAC. I simply created a bin folder in my Sentinet node and copied the DLLs.



The custom component can be registered and graphically configured using the Sentinet Administrative Console. Click on the AccessRule node, add a new Access Rule, then at the bottom of the rule designer press Add...


Five parameters need to be specified:

  • Name. The friendly name of the custom rule expression (APIKey).
  • Assembly. The fully qualified name of the assembly that contains the custom rule expression (Codit.Demo.Sentinet.AccessControl, Version=, Culture=neutral, PublicKeyToken=null).
  • Type. The .NET class that implements the IMessageEvaluator interface (Codit.Demo.Sentinet.AccessControl.APIKeyValidator).
  • IsReusable. Set it to true if your component is thread-safe. The Sentinet Authorization Engine will then
    create and use a single instance of this component.
  • Default Configuration. The default configuration. In this example I left it blank and will specify the parameters when I use the expression in a specific Access Rule.





First I created a SOAP and a REST virtual service that virtualize the Offer backend service (SOAP). Then I defined a couple of access rules using the new APIKey custom Access Rule Expression and applied those rules to the virtual services using the Access Control tab.

In this example the service name is passed to the expression through a configuration parameter, but a better solution would be to extract it from the Message class.




For this example I defined a few API keys in the SQL table.



The SOAP scenario has been tested with soapUI. Depending on the value returned by the Evaluate method, the virtual service returns different responses:

  • True => Access granted, the response message is returned.
  • False => Generic Access denied via Soap Fault.
  • Exception => Custom Exception via Soap Fault.




The REST scenario has been tested with Fiddler. Depending on the value returned by the Evaluate method, the virtual service returns different HTTP codes:

  • True => HTTP 200
  • False => HTTP 403
  • Exception => HTTP 500



Finally, below, you can see how the test results are presented in the Sentinet Monitoring tab.




The Sentinet extensibility model is intended to support custom scenarios by enabling you to modify the platform behavior at different levels. In this post I discussed how to extend the Access Rule engine with an additional authorization component.




Posted in: .NET | Monitoring | Sentinet | WCF


July 3, 2014 at 4:03 PM

With some luck your environment was installed by a BizTalk professional and the most critical maintenance tasks were configured at installation. When I say critical maintenance tasks, I mean the BizTalk backup job, DTA Purge & Archive,... If these tasks are not configured, I'm sure your production BizTalk environment will only run smoothly for a couple of days or months, but definitely not for years!


I'm not going to write about basic configuration tasks as most BizTalk professionals should be aware of these. 

At Codit Managed Services our goal is to detect, avoid and solve big and small issues before it's too late. This way we often detect smaller missing configuration tasks. Not strictly necessary at first, but definitely necessary for the BizTalk environment to keep running smoothly through the years!


Purging/archiving the BAM databases:

I'm going to explain how you should maintain the BizTalk BAM databases:


In most environments these maintenance tasks are missing. You will find the necessary information on the internet, but probably only when it's urgent and you really need it.


My goal is to provide you with this information in time, so you are able to configure the BAM maintenance tasks without putting your SQL Server or yourself under heavy stress. When it comes to BAM and data purging, it's rather simple: by default, nothing will be purged or archived. Yes, even if you have the BizTalk backup and all the other jobs successfully running!


I have seen a production environment, running for only two years where the BAMPrimaryImport database had a size of 90GB! It takes a lot of time and processing power to purge such a database. To maintain and purge the BAM databases you will need to configure how long you want to keep the BAM data, configure archiving, trigger several SQL SSIS packages,...


The problem is: purging is configured per activity, so this is a task for the developer and not for the person who installed the BizTalk environment. You will find all the information to do this on the following sites:


BAM data not being purged immediately?

Something very important to be aware of: if you, for example, want to keep a year of data, you will have to wait another year for the data to be purged/archived in a supported way!

That's why it's so important to configure these jobs in time! It's not like the DTA Purge & Archive job, where the data is purged immediately.

You can find more information about this on the following blog:


Purging the BAMAlertsApplication database:

I'm rather sure the following maintenance task is not scheduled to clean your BAMAlertsApplication database. I only discovered this myself a couple of days ago! Probably not a lot of people notice this database because it's rather small. After two years running in production with a small load it had a size of 8GB. But that's 8GB of wasted disk space!


If you search the internet on how to clean this database, you will find nothing official from Microsoft...

Credits on how to purge the BAMAlertsApplication go to Patrick Wellink and his blogpost:

If you wonder what the NSVacuum stored procedure looks like, you can find it below:

USE [BAMAlertsApplication]
GO

CREATE PROCEDURE [dbo].[NSVacuum]
	@SecondsToRun INT
AS
BEGIN
	DECLARE @StartTime			DATETIME
	DECLARE @CutoffTime			DATETIME
	DECLARE @QuantumEndTime		DATETIME
	DECLARE @QuantumsVacuumed	INT
	DECLARE @QuantumsRemaining	INT
	DECLARE @VacuumStatus		INT

	SET @QuantumsVacuumed = 0
	SET @QuantumsRemaining = 0

	SET @StartTime = GETUTCDATE()

	-- Volunteer to be a deadlock victim
	SET DEADLOCK_PRIORITY LOW

	EXEC @VacuumStatus = [dbo].[NSVacuumCheckTimeAndState] @StartTime, @SecondsToRun

	IF (0 != @VacuumStatus)			-- VacuumStatus 0 == Running
		GOTO CountAndExit

	DECLARE @RetentionAge		INT
	DECLARE @VacuumedAllClasses	BIT

	-- Remember the last run time and null counts
	UPDATE [dbo].[NSVacuumState] SET LastVacuumTime = @StartTime, LastTimeVacuumEventCount = 0, LastTimeVacuumNotificationCount = 0

	-- Get the retention age from the configuration table (there should only be 1 row)
	SELECT TOP 1 @RetentionAge = RetentionAge FROM [dbo].[NSApplicationConfig]

	SET @CutoffTime = DATEADD(second, -@RetentionAge, GETUTCDATE())

	-- Vacuum incomplete event batches
	EXEC [dbo].[NSVacuumEventClasses] @CutoffTime, 1

	-- Mark expired quantums as 'being vacuumed'
	UPDATE	[dbo].[NSQuantum1] SET QuantumStatusCode = 32
	WHERE	(QuantumStatusCode & 64) > 0 AND		-- Marked completed
			(EndTime < @CutoffTime)					-- Old

	DECLARE @QuantumId			INT

	DECLARE QuantumsCursor CURSOR LOCAL FORWARD_ONLY READ_ONLY FOR
	SELECT	QuantumId, EndTime
	FROM	[dbo].[NSQuantum1]
	WHERE	QuantumStatusCode = 32

	OPEN QuantumsCursor

	-- Do until told otherwise or the time limit expires
	WHILE (1=1)
	BEGIN
		EXEC @VacuumStatus = [dbo].[NSVacuumCheckTimeAndState] @StartTime, @SecondsToRun

		IF (0 != @VacuumStatus)			-- VacuumStatus 0 == Running
			GOTO CloseCursorAndExit

		FETCH NEXT FROM QuantumsCursor INTO @QuantumId, @QuantumEndTime

		IF (@@FETCH_STATUS != 0)
		BEGIN
			SET @VacuumStatus = 2		-- VacuumStatus 2 == Completed
			SET @QuantumsRemaining = 0
			GOTO CloseCursorAndExit
		END

		-- Vacuum the Notifications
		EXEC [dbo].[NSVacuumNotificationClasses] @QuantumId, @VacuumedAllClasses OUTPUT

		EXEC @VacuumStatus = [dbo].[NSVacuumCheckTimeAndState] @StartTime, @SecondsToRun

		IF (0 != @VacuumStatus)
			GOTO CloseCursorAndExit

		-- Vacuum the Events in this quantum
		EXEC [dbo].[NSVacuumEventClasses] @QuantumEndTime, 0

		-- Delete this Quantum from NSQuantums1 if its related records were also deleted
		IF (1 = @VacuumedAllClasses)
		BEGIN
			DELETE [dbo].[NSQuantum1] WHERE QuantumId = @QuantumId

			-- Update the count of quantums vacuumed
			SET @QuantumsVacuumed = @QuantumsVacuumed + 1
		END

		EXEC @VacuumStatus = [dbo].[NSVacuumCheckTimeAndState] @StartTime, @SecondsToRun

		IF (0 != @VacuumStatus)
			GOTO CloseCursorAndExit

	END	-- Main WHILE loop

CloseCursorAndExit:

	CLOSE QuantumsCursor
	DEALLOCATE QuantumsCursor

CountAndExit:

	-- Report progress
	SET @QuantumsRemaining = (SELECT COUNT(*) FROM [dbo].[NSQuantum1] WITH (READUNCOMMITTED) WHERE QuantumStatusCode = 32)

	SELECT	@VacuumStatus AS Status, @QuantumsVacuumed AS QuantumsVacuumed,
			@QuantumsRemaining AS QuantumsRemaining

END -- NSVacuum

You need to schedule the NSVacuum command on your environment. Take this step by step, as it puts a lot of stress on your SQL server.


The content of this post is something every BizTalk professional should add to his BizTalk Server installation/deployment procedure!

June 25, 2014 at 11:00 AM

As discussed yesterday in my blog post on the enhancements to the WCF-WebHttp adapter, the R2 releases of the BizTalk products focus more on 'compatibility and platform' alignment than on shipping major new features/add-ons to the platform.

To give you another overview, the following features were added in the new BizTalk Server 2013 R2 release:

  • Platform Alignment with Visual Studio, SQL Server,…
  • Updates to the SB-Messaging Adapter
  • Updates to the WCF-WebHttp Adapter
  • Updates to the SFTP Adapter
  • Updates to the HL7 Accelerator

This blog post will focus on the updates to the SB-Messaging adapter that were shipped with this new release of Microsoft BizTalk Server.


SB-Messaging Adapter enhancements

With this new version of BizTalk Server the SB-Messaging adapter now also supports SAS (Shared Access Signature) authentication, in addition to ACS (Access Control Service).

Due to this improvement BizTalk Server can now also interact with the on-premise edition of Service Bus that is available through the Windows Azure Pack.
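
To illustrate what SAS authentication amounts to, here is a hedged sketch in plain .NET code using the Service Bus SDK (Microsoft.ServiceBus); the key name, key, namespace address and queue name are placeholders, and the SB-Messaging adapter of course handles all of this for you through its port configuration:

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class SasDemo
{
    static void Main()
    {
        // SAS credentials: the same key name/key pair you paste into the
        // SB-Messaging adapter's new SAS authentication properties
        var tokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider(
            "RootManageSharedAccessKey",   // SAS key name (placeholder)
            "<shared-access-key>");        // SAS key (placeholder)

        // With Windows Azure Pack, this would be your on-premise namespace address
        var factory = MessagingFactory.Create(
            new Uri("sb://mynamespace.servicebus.windows.net/"), tokenProvider);

        var sender = factory.CreateMessageSender("myqueue");
        sender.Send(new BrokeredMessage("Hello via SAS"));
    }
}
```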


More information on SAS Authentication can be found here:

More information on the Windows Azure Pack can be found here:


The image below is a head-to-head comparison between the ‘old’ version of the SB-Messaging adapter authentication properties window and the ‘new’ version that ships with BizTalk Server 2013 R2.





Out with the old, in with the new

When we compare the 'Access connection information' from both the Windows Azure Portal (cloud) and the Windows Azure Pack Portal (on-premise), we already see a clear difference in the ways to authenticate to the Service Bus.

You also notice why the update to include SAS was really a "must have" for BizTalk Server.

On the left side you see the cloud portal, which supports SAS and ACS; on the right side you see the on-premise version of Service Bus, which only supports SAS and Windows authentication.





This ‘small’ addition to the SB-Messaging adapter has a great impact on interacting with Service Bus, as we can finally use the full potential of the on-premise version of Service Bus.

We have started implementing this feature at one of our customers, where one of the requirements is to use the on-premise version of Service Bus. The first tests are looking good, and the behavior and way of implementing are the same as when connecting to the Service Bus in the cloud.

However, one downside in my opinion is that the adapter still lacks the Windows Authentication option.


Happy Service Bus’ing !!

Glenn Colpaert

June 24, 2014 at 9:52 AM

Now that BizTalk 2013 R2 is released on MSDN, it’s time to take a first look at the new features and possibilities of this release.

As Guru already clarified at the last BizTalk Summit (BizTalk Summit Summary), the R2 releases of the BizTalk products focus more on 'compatibility/platform' alignment and less on shipping (major) new features/add-ons to the platform.

To give you an overview, the following features were added in the new BizTalk Server 2013 R2 release:

  • Platform Alignment with Visual Studio, SQL Server,…
  • Updates to the SB-Messaging Adapter
  • Updates to the WCF-WebHttp Adapter
  • Updates to the SFTP Adapter
  • Updates to the HL7 Accelerator

This blog post will focus on the updates to the WCF-WebHttp adapter that were shipped with this new release of Microsoft BizTalk Server.


WCF-WebHttp Adapter enhancements

With this new version of BizTalk Server the WCF-WebHttp adapter now also supports sending and receiving JSON messages. This new version of BizTalk ships with a wizard to generate an XSD schema from a JSON instance, and with two new pipelines (and pipeline components) for processing JSON messages in a messaging scenario.


Out with the old, in with the new

Let us take a quick glance at the new components that ship with this BizTalk release. First of all, we have 2 new pipelines and pipeline components for encoding and decoding JSON messages in our BizTalk ports.


Configuring these 2 new components is very straightforward. On the encoder side there is one component property, specifying whether the XML root element should be ignored while encoding the XML message to JSON. On the decoder side there are 2 properties to specify: the root node and the root node namespace to be used by the decoder in the generated XML message.
You can find a screenshot of the properties of both components below.
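
Conceptually, the decoder turns a JSON payload into XML under the configured root node. As an illustration only (an assumption about the behavior, not BizTalk's actual component), Json.NET can mimic the conversion; namespace handling is omitted here:

```csharp
using System;
using Newtonsoft.Json;

class JsonDecodeDemo
{
    static void Main()
    {
        string json = "{ \"resultCount\": 1, \"artistName\": \"Metallica\" }";

        // Convert JSON to XML with an explicit root node, similar to what the
        // decoder's RootNode property controls (root node name is a placeholder)
        var doc = JsonConvert.DeserializeXmlNode(json, "iTunesResult");
        Console.WriteLine(doc.OuterXml);
        // e.g. <iTunesResult><resultCount>1</resultCount><artistName>Metallica</artistName></iTunesResult>
    }
}
```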





Next to the 2 new components for parsing JSON messages, there is also the new JSON Schema Wizard, which allows you to generate XSD files based on a JSON instance. You can find this new wizard in the "Add -> New Item" menu of Visual Studio.



JSON, do you speak it?

To demonstrate the new features and possibilities of the enhanced WCF-WebHttp adapter, I created a little POC that uses the iTunes API.

First of all I downloaded a JSON instance from the API from the following URL:


Next, I used this JSON instance to generate the XSD with the new JSON Schema Wizard. The wizard itself is pretty straightforward: you simply specify the ‘Instance File’, choose your ‘Root Node Name’ and ‘Target Namespace’ and press ‘Finish’ to get your XSD generated.


When using the ‘Metallica’ instance, this results in the following schema.




After generating the schema, I started configuring the Send Port to actually consume the service from BizTalk.

Below you can see the configuration of my 2-Way Send Port that sends requests to the API. It has a PassThruTransmit send pipeline and, as receive pipeline, a custom pipeline with the JSON decoder inside it.






When sending the request to the API, we get the following message in the BizTalk MessageBox (parsed JSON instance).


This sums up the basic demo of the new enhancements of the WCF-WebHttp adapter.

If you have questions regarding this scenario, please share them in the comments section of this blog post and I’ll be happy to answer them.



Glenn Colpaert

Posted in: BizTalk | Schemas | WCF
