Connecting BizTalk 2010 to a GVC-LC Vending Machine

Posted: January 31, 2011 | Categories: BizTalk, Uncategorized

I am writing about this because it is an interesting example of the new host tuning features in BizTalk 2010 and of how to use the WCF LOB Adapter SDK.

The GVC-LC vending machine reads magnetic cards and requests authorization via a TCP/IP socket before dispensing a vend. BizTalk takes a request from a vending machine, validates the card against a set of business rules and data retrieved from a database, and returns the result of the validation to the vending machine. The latency, measured as the request/response time between the vending machine and BizTalk, had to be less than one second 99% of the time to ensure an acceptable customer experience.

Custom WCF LOB Adapter for Vending Machine Communications

There is no out-of-the-box BizTalk adapter that will communicate with the GVCLC vending machines. Communication with the vending machine uses a simple request/response protocol: the custom BizTalk LOB adapter sends a poll command to the vending machine and waits for a response. The adapter connects to the machine over TCP/IP and starts a communications session, which can last any length of time.
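To make the exchange concrete, here is a minimal sketch of the poll/response session over a raw socket. The endpoint, the single-byte poll command and the response framing are all assumptions for illustration, not the real GVC-LC wire format:

```csharp
using System;
using System.Net.Sockets;

// Minimal poll/response session sketch; the poll command byte, endpoint and
// response framing are assumptions, not the real GVC-LC wire format.
class GvclcPollSession
{
    const byte PollCommand = 0x05; // hypothetical poll command

    static void Main()
    {
        using (var client = new TcpClient("192.168.0.50", 5000)) // hypothetical endpoint
        using (NetworkStream stream = client.GetStream())
        {
            // The session stays open indefinitely; the adapter keeps polling.
            while (true)
            {
                stream.WriteByte(PollCommand);

                // Block until the machine answers, e.g. with a card-swipe event.
                var buffer = new byte[256];
                int read = stream.Read(buffer, 0, buffer.Length);
                if (read == 0) break; // the machine closed the session

                Console.WriteLine("Received {0} byte(s) from the vending machine.", read);
                // A real adapter would turn this payload into a WCF message and
                // submit it to the BizTalk message box for authorization.
            }
        }
    }
}
```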

A custom WCF adapter was created to connect to the vending machine (GVCLC) socket and handle all the communications before sending the messages to the BizTalk message box. I have summarised the GVCLC adapter development here.
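The WCF LOB Adapter SDK pools connection objects and hands them to inbound and outbound handlers. Below is a rough sketch of how the socket session could sit behind the SDK's IConnection contract; in a real adapter the connection factory would supply the host and port from the ConnectionUri, and the inbound handler that drives the poll loop is omitted here:

```csharp
using System;
using System.Net.Sockets;
using Microsoft.ServiceModel.Channels.Common;

// Sketch of wrapping the GVC-LC socket session in a WCF LOB Adapter SDK
// connection; the inbound handler that drives the poll loop is omitted.
public class GvclcConnection : IConnection
{
    private readonly string host;
    private readonly int port;
    private readonly string connectionId = Guid.NewGuid().ToString();
    private TcpClient client;

    // A real adapter gets these from the ConnectionUri via its ConnectionFactory.
    public GvclcConnection(string host, int port)
    {
        this.host = host;
        this.port = port;
    }

    public string ConnectionId { get { return connectionId; } }

    public void Open(TimeSpan timeout)
    {
        // One TCP session per SDK connection; the SDK pools and reuses these.
        client = new TcpClient(host, port);
    }

    public bool IsValid(TimeSpan timeout)
    {
        return client != null && client.Connected;
    }

    public void Close(TimeSpan timeout)
    {
        if (client != null) client.Close();
    }

    public void Abort()
    {
        Close(TimeSpan.Zero);
    }

    public void ClearContext()
    {
        // No per-use state to reset in this sketch.
    }

    public TConnectionHandler BuildHandler<TConnectionHandler>(MetadataLookup metadataLookup)
        where TConnectionHandler : class, IConnectionHandler
    {
        // The real adapter returns an IInboundHandler here that polls the
        // machine and hands completed exchanges to BizTalk as WCF messages.
        throw new NotImplementedException("Handler omitted from this sketch.");
    }
}
```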

Tuning for Low Request/Response Latency

The table of latency tests below shows that sub-second request/response latency can be achieved after the hosts have been tuned.

With the custom GVCLC WCF LOB adapter in hand, an orchestration was built that persists the information on the card from the first card use until the end of the day. Particular attention was paid during the creation of the orchestration to minimising the number of persistence points: Start Orchestration shapes were used rather than Call Orchestration shapes, and only one transactional scope was used, around the database request/response. The number of persistence points per card swipe ranges from two to three.

In this way the data is retrieved from the source database only once during the day, and subsequent card swipes should have decreased latency. The latency of this orchestration, measured for about 900 unique card swipes, ranges from 0.9 to 2.3 seconds. If a card that has already been swiped is swiped again, the latency is about 0.3-0.6 seconds because the data is already persisted. Surprisingly, this time was not affected by the number of cards that were persisted: the latency with only ten card orchestrations dehydrated was almost identical to that with over 900 dehydrated. The latency was also unaffected by the number of vending machines connected via the GVCLC adapter; measurements using one machine or ten machines were identical.

If no cards are swiped for a long time (> 30 minutes), the request/response latency for the next card can be high (> 2 seconds). This is due to the BizTalk assemblies being unloaded from memory when they are not being used. It was mitigated by setting up a custom application domain to contain all the assemblies for this interchange and then setting the SecondsEmptyBeforeShutdown and SecondsIdleBeforeShutdown properties to -1, as described by Ben Simmonds.
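The assemblies are kept loaded by assigning them to a dedicated XLANG application domain in the BTSNTSvc.exe.config file. Below is a sketch of the relevant fragment, assuming a hypothetical domain name and a GVCLC.* assembly name pattern; adjust both to match your own assemblies:

```xml
<!-- BTSNTSvc.exe.config fragment; domain name and assembly pattern are assumptions -->
<configuration>
  <configSections>
    <section name="xlangs"
             type="Microsoft.XLANGs.BizTalk.CrossProcess.XmlSerializationConfigurationSectionHandler, Microsoft.XLANGs.BizTalk.CrossProcess" />
  </configSections>
  <xlangs>
    <Configuration>
      <AppDomains>
        <AppDomainSpecs>
          <!-- -1 stops the app domain from being unloaded when idle or empty -->
          <AppDomainSpec Name="VendingDomain"
                         SecondsIdleBeforeShutdown="-1"
                         SecondsEmptyBeforeShutdown="-1" />
        </AppDomainSpecs>
        <PatternAssignmentRules>
          <!-- route this interchange's assemblies into the dedicated domain -->
          <PatternAssignmentRule AssemblyNamePattern="GVCLC.*"
                                 AppDomainName="VendingDomain" />
        </PatternAssignmentRules>
      </AppDomains>
    </Configuration>
  </xlangs>
</configuration>
```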

To decrease the request/response latency, three new "high priority" hosts were created: one for receiving messages, one for XLANG processing and one for sending messages. Additionally, a dedicated tracking-only host was created and tracking was disabled on all other hosts. After all the BizTalk artifacts had been assigned to the correct hosts, the request/response latency for about 400 unique card swipes decreased to a range of 1.0 to 1.4 seconds.

Finally, we used the new host settings feature of BizTalk 2010 to change some of the settings on these three hosts only (the sketch after the list shows one way to script these changes):

  • The messaging polling interval was decreased from 500 ms to 100 ms on the high-priority receiving and sending hosts.
  • The internal message queue size was increased from 100 to 1000 on all high-priority hosts.
  • The orchestration polling interval was decreased from 500 ms to 100 ms on the high-priority XLANG processing host.
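These settings are normally changed in the BizTalk Settings Dashboard, but they can also be scripted. The sketch below uses the MSBTS_HostSetting WMI class; the host names are assumptions, and the property names used here (MessagingMaxReceiveInterval, XlangMaxReceiveInterval and DeliveryQueueSize for the internal message queue size) should be verified against your installation before use:

```csharp
using System;
using System.Management;

// Sketch of scripting the host tuning via WMI; host names are assumptions.
class TuneHighPriorityHosts
{
    static void Main()
    {
        // Messaging polling interval: 500 ms -> 100 ms on receive/send hosts.
        SetHostSetting("HighPriorityReceive", "MessagingMaxReceiveInterval", (uint)100);
        SetHostSetting("HighPrioritySend", "MessagingMaxReceiveInterval", (uint)100);

        // Orchestration polling interval: 500 ms -> 100 ms on the XLANG host.
        SetHostSetting("HighPriorityProcess", "XlangMaxReceiveInterval", (uint)100);

        // Internal message queue size: 100 -> 1000 on all high-priority hosts.
        foreach (var host in new[] { "HighPriorityReceive", "HighPriorityProcess", "HighPrioritySend" })
            SetHostSetting(host, "DeliveryQueueSize", (uint)1000);
    }

    static void SetHostSetting(string hostName, string property, object value)
    {
        var scope = new ManagementScope(@"root\MicrosoftBizTalkServer");
        var query = new SelectQuery("MSBTS_HostSetting",
            string.Format("Name='{0}'", hostName));

        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject host in searcher.Get())
            {
                host[property] = value;
                host.Put(); // restart the host instances for the change to take effect
            }
        }
    }
}
```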

The measurement for 400 unique card swipes was now less than 0.7 seconds all of the time, and most of the time it was less than 0.4 seconds.

In summary, the new host settings feature of BizTalk 2010 has been used to good advantage to tune this BizTalk interchange. This highlights the power of tuning your BizTalk Server for either low latency or high throughput, depending on the exact situation. These results are all from my 32-bit Windows 7 development VM; we would expect much better figures from a production server.

Test Condition | No. of Card Swipes | Latency Range (seconds) | Average (seconds)
Initial card swipes; one host; one vending machine; orchestration does not contain a send port to the database | 6 | 0.5-0.6 | n/a
Initial card swipes; one host; one vending machine; orchestration contains a send port to the database | 6 | 1.2-1.8 | 1.4
Initial card swipes; one host; ten vending machines; ten simultaneous card swipes on each machine; orchestration contains a send port to the database | 10+ | 0.9-1.8 | n/a
Initial card swipes; one host; one vending machine; orchestration contains a send port to the database | 400 | 1.0-1.9 | n/a
Initial card swipes; one host; ten vending machines; sequential card swipes on each machine; orchestration contains a send port to the database | 852 | 1.0-2.0 | n/a
Initial card swipes; dedicated hosts; ten vending machines; sequential card swipes on each machine; orchestration contains a send port to the database | 852 | 0.5-1.6 | n/a
Initial card swipes; dedicated and tuned hosts; ten vending machines; sequential card swipes on each machine; orchestration contains a send port to the database | 852 | 0.3-0.7 | 0.5
