10 February 2006

BizTalk re-sequencing of Functoid parameters when schema changes


I came across some strange behavior in BizTalk 2004 the other day.
I have a solution set up to demonstrate this behavior.
The solution contains two projects.
The first is a BizTalk project which simply contains two schemas, one incoming and one outgoing, as well as a map to transform between the two.
The second project is a class library that contains an external class to which we will be mapping our scripting functoid.
The case this map attempts to address is that of a target requiring input data to be truncated to a certain number of characters.
For the purpose of this example, we’ll use 100 characters coming into the source schema, and we only need to pass along the first 20 characters of the string to the target schema.
Now there are string functoids in BizTalk’s toolbox that you could use to do this, but I’ve always preferred to create an external class library.
You can then do all your custom code in the class library and simply map the scripting functoid to the functions.
This has an advantage when you wish to update the function logic in that you simply have to:
  1. Change the class library code.
  2. Recompile.
  3. ReGAC the DLL.
There is almost zero downtime save for the fraction of a second it takes to unGAC and reGAC the DLL, which is much easier to maintain than having to:
  1. Update the functoid code in the orchestration.
  2. Recompile the BizTalk project.
  3. Stop the orchestration.
  4. Unenlist the orchestration.
  5. Undeploy the BizTalk assembly.
  6. Redeploy the new BizTalk assembly.
  7. Rebind the orchestration.
  8. Re-enlist the orchestration.
  9. Start the orchestration.
So given that setup, the solution would look something like this:

Figure 1 – BizTalk Solution Setup

In our class we will add a "HeadString" method that will parse out the given string returning only the specified number of characters from the beginning of the string.  Of course we’d do some null checking and error handling as well.  The method looks like this:

public static string HeadString(string strInput, int intLength)
{
  try
  {
    if (strInput != null)
    {
      // Only truncate when the input is longer than the requested length.
      if (strInput.Length > intLength)
      {
        return strInput.Substring(0, intLength);
      }
      else
      {
        return strInput;
      }
    }
    else
    {
      // Treat a null input as an empty string rather than failing the map.
      return "";
    }
  }
  catch (Exception ex)
  {
    throw new Exception(ex.ToString());
  }
}
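As a quick sanity check, the truncation logic can be exercised outside of BizTalk. Here is a minimal stand-alone sketch; the class name StringHelpers and the console harness are my own additions for illustration, but the HeadString signature and behavior match the method above:

```csharp
using System;

public static class StringHelpers
{
    // Same truncation logic as the HeadString method above,
    // reproduced here so this sample compiles on its own.
    public static string HeadString(string strInput, int intLength)
    {
        if (strInput == null) return "";
        return strInput.Length > intLength ? strInput.Substring(0, intLength) : strInput;
    }

    public static void Main()
    {
        string longValue = new string('x', 100);   // simulates FieldString100 data

        Console.WriteLine(StringHelpers.HeadString(longValue, 20).Length); // 20
        Console.WriteLine(StringHelpers.HeadString("short", 20));          // short
        Console.WriteLine(StringHelpers.HeadString(null, 20));             // (empty, null-safe)
    }
}
```

Note that inputs already at or below the limit pass through unchanged, and a null input yields an empty string rather than an exception.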

We use the map to convert the FieldString100 data in Incoming.xsd into FieldString20 data in Outgoing.xsd as follows:
  1. Select Incoming.xsd as the Source Schema.
  2. Select Outgoing.xsd as the Destination Schema.
  3. Drag a Scripting functoid from the Toolbox onto the map grid.
  4. Connect FieldString100 from the Source Schema with the Scripting functoid.
  5. Connect the Scripting functoid with FieldString20 in the Destination Schema.
The map should now look like this:

 Figure 2 – Map Configured

If you study the code of the HeadString(…) method, you will notice that we now also have to provide the length of the required return value to the method as the second parameter. 
We will need to strong name the class library and GAC it before we can map the Scripting functoid to it as an external assembly.
  1. Select the Scripting functoid.
  2. In the Properties pane, click the Input Parameters ellipsis button.
  3. There are five buttons at the top of the Configure Functoid Inputs dialog window. Click the New Parameter button, second from the left.
  4. Enter "20" as the value.
The dialog should look like this:

Figure 3 – Configure Functoid Inputs

Then click the OK button to close the dialog.
At this point you can compile and deploy your project and it should perform as expected.  Now here is where things get a little more interesting…
Suppose the Source Schema changes.  When you work with integration technologies, you know that the key is to keep both sides of the integration point as stable as possible.  If you’ve been dealing with integration for a while, you also know that this almost never happens.  Fact is… things change… and schemas even more so!
For the purpose of our discussion, let’s assume Incoming.xsd gets a node added and the FieldString100 element is moved to that node, leaving the schema looking like this:

Figure 4 – Schema changed

Now BizTalk does a good job of trying to maintain mapping links.  As long as the element you linked remains in the same node of the schema, BizTalk will not break the link.  If the element is moved to another node of the schema, BizTalk will delete the connecting link and you will need to relink it manually.
After saving the schema and reloading the map, BizTalk leaves our map looking like this:

Figure 5 – Broken map

We now have to manually reconnect the FieldString100 element with the Scripting functoid.  This is easily done leaving our map looking like this:

Figure 6 - New map

This all looks fine, but if you compile and deploy this solution, it WILL FAIL!
Why???
Quite simply because the parameter order has been switched.  When you have a constant value defined in the functoid input, removing the link to the functoid will in essence "promote" the second parameter, the constant value, to be the first parameter.
When we then relink the element to the functoid, the element input simply becomes the second parameter, as in this screenshot:

Figure 7 - Parameter order swapped
Because our method expects a string as the first parameter and an integer as the second parameter, it will fail.
Now you could just reorder the parameters, but when the schema changes again, you will need to repeat the process.

RECOMMENDATION
I recommend that when you have to use constant values as parameters to functoids, the dependent methods be developed in such a way as to expect the constant values first, followed by the variable values, i.e. changing the method declaration to read like this instead:

public static string HeadString(int intLength, string strInput)
{
  …
}
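For completeness, here is a stand-alone sketch of the constant-first variant. The truncation logic is the same as before and only the parameter order changes, so the constant "20" configured in the Configure Functoid Inputs dialog stays mapped to the first parameter even after a relink. The class name StringHelpers is again my own, for illustration:

```csharp
using System;

public static class StringHelpers
{
    // Constant-first variant: the functoid's constant input ("20") maps to
    // intLength, and the relinked schema element maps to strInput.
    public static string HeadString(int intLength, string strInput)
    {
        if (strInput == null) return "";
        return strInput.Length > intLength ? strInput.Substring(0, intLength) : strInput;
    }
}
```

With this ordering, a schema change that breaks and re-creates the element link leaves the parameters in the order the method expects.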

Later
C
