
Chris Dickson's Blog

Throttling BizTalk service instances

Most of the time, performance tuning of our software is about speeding things up... why would you ever want to slow things down? Well, as it happens, there are scenarios when working with BizTalk Server where slowing things down a bit is a desirable goal.

A common one in BizTalk Server 2004 arises when using the SOAP send adapter which comes in the box to make calls to SOAP web services from within orchestrations. The SOAP send adapter uses the standard CLR thread pool to dispatch SOAP requests across the network, and process the responses received from the web service. Unfortunately, if you drive it too hard, at peak throughput you will start to see the send adapter logging exceptions like this and suspending the request messages for retry:

Event ID 5740 - The adapter "SOAP" raised an error message. Details "There were not enough free threads in the ThreadPool object to complete the operation."

This is actually a symptom of a problem in the ASP.NET HTTP stack rather than in the BizTalk product itself. The issue is explained in this Microsoft Knowledge Base article: briefly, each SOAP request uses a worker thread to make the request, an IO completion thread to service the response, and additional threads to perform authentication handshakes; if the available thread pool threads are tied up on requests and there are no free threads which can be used to service responses, these errors occur. Changes were made in BizTalk Server 2004 SP1 to alleviate this problem, and the throughput limit before the problem manifests can be increased by careful tuning of the CLR thread pool parameters as described in the KB article. The problem doesn't go away completely, however. In a typical BizTalk 2004 installation with optimal tuning of the CLR parameters, spikes in throughput of more than about 150 SOAP requests per second will encounter this issue. (I'm told that the situation is much better in BizTalk Server 2006 - the ASP.NET 2.0 stack was rewritten to considerably reduce the possibility of thread pool starvation - but I do not have first-hand experience or confirmation of this).
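
As an aside, it is easy to watch how close a process is getting to this kind of starvation. The little helper below is not part of BizTalk or of the KB article's tuning advice - the class and method names are invented for illustration - but something like it can be called from a test harness or a trace point to log the remaining CLR thread pool headroom in the current process:

using System;
using System.Threading;

namespace Charteris.ChrisDicksonBlog.Samples
{
    /// <summary>
    /// Hypothetical diagnostic helper (not part of the BizTalk product): reports how
    /// much CLR thread pool headroom remains in the current process, which is useful
    /// for spotting impending thread pool starvation in a test harness.
    /// </summary>
    public class ThreadPoolHeadroom
    {
        public static void Report()
        {
            int maxWorker, maxIo, freeWorker, freeIo;
            ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
            ThreadPool.GetAvailableThreads(out freeWorker, out freeIo);

            Console.WriteLine(
                "Worker threads free: {0}/{1}; IO completion threads free: {2}/{3}",
                freeWorker, maxWorker, freeIo, maxIo);
        }
    }
}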

Unfortunately, even if your normal peak throughput is considerably less than this limit, spikes can still occur due to temporary outages of one sort or another. BizTalk's automatic retry functionality then serves to exacerbate the problem: because the retry interval is fixed, messages which previously failed close together in time are resubmitted in a spike. Also, once the problem starts to occur, the affected thread pool threads remain tied up for a fairly lengthy timeout period (100 seconds, I think) before being freed, so the problem tends to escalate. If you need high throughput rates it is highly desirable to avoid the problem ever occurring.

To this end, what we would like to be able to do is smooth out any spikes in the rate at which our orchestrations feed web service requests to the SOAP send adapter. Unfortunately, the knobs provided by BizTalk 2004 for controlling the work rate of orchestration hosts are much too blunt an instrument for doing this effectively - host throttling parameters are global to the BizTalk Group in BizTalk 2004. A different approach which I have used with some success is a fairly direct regulation of the rate at which the orchestrations execute the Send shape to the Web Port. I use a C# helper type, which I have called ExecutionBrake, that the orchestration invokes just before it initiates the SOAP request and which acts as a governor on the peak rate at which concurrent instances can execute within that host instance.

The idea is to use thread synchronisation primitives within this helper type to identify when the rate of executing requests is approaching the desired limit, and apply as light a touch as possible to delay execution of just enough threads to keep the rate from peaking above the limit. Unless there is a sustained spike, short waits using System.Threading.Thread.Sleep() are sufficient. If a sustained load in excess of the limit is experienced, a fallback method is used, whereby the helper type indicates to the orchestration that it should enter a longer delay loop using a Delay shape.

The following sample code should illustrate the idea:

using System;
using System.Collections;
using System.Threading;

namespace Charteris.ChrisDicksonBlog.Samples
{
    public class ExecutionBrake
    {
        /// <summary>
        /// Maintains a register of the named instances which are active in the
        /// current AppDomain. For any unique name, the braking is implemented by
        /// an internal Singleton object.
        /// </summary>
        /// <param name="uniqueName">Unique brake name to register</param>
        private static void RegisterBrake(string uniqueName)
        {
            if (!_brakingImplementations.ContainsKey(uniqueName))
            {
                lock (_sync)
                {
                    if (!_brakingImplementations.ContainsKey(uniqueName))
                    {
                        ExecutionBrakeImpl impl = new ExecutionBrakeImpl();
                        Thread.MemoryBarrier();
                        _brakingImplementations.Add(uniqueName, impl);
                    }
                }
            }
        }

        /// <summary>
        /// Collection of singleton implementation objects, keyed on unique name
        /// </summary>
        private static Hashtable _brakingImplementations = new Hashtable();

        private static object _sync = new object();

        public ExecutionBrake(string uniqueName)
        {
            _uniqueName = uniqueName;
            RegisterBrake(uniqueName);
        }

        /// <summary>
        /// Key method called by the orchestration. If the return value is zero, the orchestration
        /// continues to make the SOAP request. If non-zero, the orchestration should loop via a
        /// Delay shape and call this method again before proceeding. The return value can be used
        /// to seed the Delay shape's configuration, so that retries are spread randomly.
        /// </summary>
        public int ThrottleExecution()
        {
            return ((ExecutionBrakeImpl)_brakingImplementations[_uniqueName]).ThrottleExecution();
        }

        private string _uniqueName;

        private class ExecutionBrakeImpl
        {
            public int ThrottleExecution()
            {
                // Maintain a count of threads currently executing this method. The corresponding
                // decrement is in the finally block
                int threadsInThisMethod = Interlocked.Increment(ref _threadsUnderControlCount);
                try
                {
                    if (threadsInThisMethod >= _deferThreshold)
                    {
                        // We have more than enough threads already so defer this one immediately
                        return NextRandom(1, _maximumDeferralDurationHint);
                    }

                    int numberOfSleeps = 0;
                    if (_threadsReleasedThisIntervalCount >= _brakingThreshold)
                    {
                        // We have already reached the limit of threads which can be released in
                        // the current reference interval, so this thread must wait
                        ++numberOfSleeps;
                        Thread.Sleep(NextRandom(0, _maximumThreadSleepDuration));
                    }

                    // Keep checking for a release window, then sleeping, alternately until this
                    // thread has either been released or has used the maximum number of sleeps
                    while (numberOfSleeps < _maximumThreadSleeps)
                    {
                        lock (_sync)
                        {
                            // If the reference period has ended, we can start a new one and
                            // be the first thread released in the new period
                            if (0 > DateTime.Compare(_endOfControlInterval, DateTime.Now))
                            {
                                _endOfControlInterval = DateTime.Now + _thresholdInterval;
                                _threadsReleasedThisIntervalCount = 1;
                                return 0;
                            }

                            // Otherwise we can go if the count for the current interval
                            // hasn't been exceeded
                            if (_threadsReleasedThisIntervalCount < _brakingThreshold)
                            {
                                ++_threadsReleasedThisIntervalCount;
                                return 0;
                            }

                            // Otherwise we'll need to sleep and loop again
                        }
                        ++numberOfSleeps;
                        Thread.Sleep(NextRandom(0, _maximumThreadSleepDuration));
                    }
                }
                finally
                {
                    Interlocked.Decrement(ref _threadsUnderControlCount);
                }

                // We were not able to release the thread, so return a non-zero deferral hint
                return NextRandom(1, _maximumDeferralDurationHint);
            }

            // System.Random is not thread-safe, so serialise access to the shared instance.
            // Deferral hints are requested with a minimum of 1 so that a deferred thread
            // never returns zero, which would wrongly signal release to the orchestration.
            private int NextRandom(int minValue, int maxValue)
            {
                lock (_random)
                {
                    return _random.Next(minValue, maxValue);
                }
            }

            private DateTime _endOfControlInterval = DateTime.Now;
            private int _threadsUnderControlCount;
            private int _threadsReleasedThisIntervalCount;
            private object _sync = new object();
            private Random _random = new Random();

            // Configuration parameters.
            // For the purposes of this sample these are constants, but in practice they
            // would need some configuration mechanism to tune the braking for any
            // particular named brake.
            private const int _brakingThreshold = 100;
            private const int _deferThreshold = 200;
            private TimeSpan _thresholdInterval = new TimeSpan(0, 0, 0, 1);
            private const int _maximumThreadSleepDuration = 150;
            private const int _maximumThreadSleeps = 3;
            private const int _maximumDeferralDurationHint = 500;
        }
    }
}
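
To show how the brake is meant to be driven, here is a rough, hypothetical console harness. It is not a BizTalk artefact, and the names BrakeHarness, SimulatedOrchestrationInstance and "SoapSendBrake", along with the thread count, are invented for illustration. Each worker thread plays the part of one orchestration instance about to execute the Send shape, and Thread.Sleep stands in for the Delay shape:

using System;
using System.Threading;

namespace Charteris.ChrisDicksonBlog.Samples
{
    // Hypothetical test harness, not part of the orchestration itself
    internal class BrakeHarness
    {
        private static int _released;

        private static void SimulatedOrchestrationInstance()
        {
            // Each simulated orchestration instance constructs its own ExecutionBrake;
            // all instances sharing the same name share one internal singleton
            ExecutionBrake brake = new ExecutionBrake("SoapSendBrake");

            int delayHint = brake.ThrottleExecution();
            while (delayHint != 0)
            {
                // In the orchestration this sleep would be a Delay shape seeded
                // from the returned hint, followed by another call to the brake
                Thread.Sleep(delayHint);
                delayHint = brake.ThrottleExecution();
            }

            // A zero return value means the brake has released this instance,
            // so the SOAP request would be sent at this point
            Interlocked.Increment(ref _released);
        }

        private static void Main()
        {
            Thread[] threads = new Thread[300];
            for (int i = 0; i < threads.Length; i++)
            {
                threads[i] = new Thread(new ThreadStart(SimulatedOrchestrationInstance));
                threads[i].Start();
            }
            foreach (Thread t in threads)
            {
                t.Join();
            }
            Console.WriteLine("All {0} simulated instances were released", _released);
        }
    }
}

In the real orchestration the same loop is expressed with an Expression shape that calls ThrottleExecution(), a Decide shape testing the returned value, and a Delay shape inside the loop seeded from the hint, exactly as described in the comments on ThrottleExecution above.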

Naturally, if there are multiple host instances executing the orchestration, this braking mechanism smooths the rate of execution of each one independently, and the parameters need to be configured with this in mind: with two host instances and the sample's _brakingThreshold of 100 per one-second interval, for example, the aggregate peak across the group is roughly 200 requests per second.

Don't expect such a mechanism to enable very precise regulation of execution rates, particularly with multiple host instances, but it can be used effectively to prevent abnormal spikes in message volumes from causing the sort of problems described above with the SOAP adapter.

Published Mar 06 2007, 01:56 PM by chrisdi

Comments

 

Craig D. said:

Hi Chris, Do you know if it is possible to force an orchestration to be scheduled to run on a specific host instance? In other words, I have a BizTalk host with N host instances and I want my orchestration to run on the host instance which I (somehow?) choose at run-time. Any suggestions will be hugely appreciated. Thanks. Craig
April 27, 2007 4:54 PM
 

chrisdi said:

@Craig D.   As far as I know, there is  no way to force an orchestration to be activated on a specific host instance if you have more than one host instance for the orchestration's host configured and running. Why would you want to do that anyway? I'd be interested to hear more about your requirement, as I suspect you have a problem which can be better solved in other ways.

The only mechanism I can think of, which would guarantee routing a message to be handled by a service instance on a specific host instance, is a message box subscription created by a correlated Receive shape in an already running instance. For instance there are patterns which create singleton Sequential Convoy orchestration processes which will only ever run on one host instance at a time, with all messages routed to that same instance. But that isn't quite what you said you needed...

May 1, 2007 4:14 PM
 

Biztalk Rookie said:

Hello Chris, I have a similar problem. In my case I am using BizTalk to import XML files. They are first processed then inserted into a database. During the processing a key is generated which is the primary key. I am pooling the object and multiple executions are possible. Some of the records are executing simultaneously and end up with the same primary key, which in turn gives an error. As a solution I am generating a random number and using this number to make that thread sleep for that duration. But for some reason the orchestration is being suspended. Can I not use Thread.Sleep in an orchestration?

January 21, 2008 7:27 AM
 

