Friday, August 31, 2007

Dynamic Endpoints in Cape Clear BPEL

Creating dynamic endpoints in BPEL using Cape Clear is pretty easy: simply overwrite the PartnerLink using an Assign/Copy activity, where the From value is the URL and the To is the PartnerLink. In a previous implementation I defined these URLs statically in the BPEL itself. I could have developed a web service that returned the dynamic endpoint, but that seemed like more than what was needed at the time; the URLs were static and would not change. Well, six months later the URLs are changing as new destinations are added to the BPEL, in fact double the destinations.
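A minimal sketch of that Assign/Copy, using a WS-Addressing endpoint reference (the partner link name and address here are illustrative, and Cape Clear's exact syntax may differ slightly):

```xml
<assign>
  <copy>
    <from>
      <!-- endpoint reference carrying the new destination URL -->
      <wsa:EndpointReference xmlns:wsa="http://schemas.xmlsoap.org/ws/2003/03/addressing">
        <wsa:Address>http://hosta:8080/services/SystemA</wsa:Address>
      </wsa:EndpointReference>
    </from>
    <to partnerLink="destinationPartnerLink"/>
  </copy>
</assign>
```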

I am now refactoring the current BPEL to use a new feature in Cape Clear BPEL, the XPath function getInitValueParam(). The getInitValueParam() function allows me to send in a property name as a string and get back its value. These properties are defined in the initParams section of the Cape Clear deployment descriptor (ws-application.xml).
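As a rough sketch of what this looks like in the deployment descriptor (the parameter names and exact element layout here are my own invention; check the Cape Clear documentation for the precise syntax):

```xml
<!-- ws-application.xml fragment (illustrative) -->
<initParams>
  <param name="systema.url" value="http://hosta:8080/services/SystemA"/>
  <param name="systemb.url" value="http://hostb:8080/services/SystemB"/>
</initParams>
```

Inside the BPEL, a call such as getInitValueParam('systema.url') would then return the configured URL.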

I am also employing a while loop to examine the inbound XML for destination data and, in combination with the above function, to parameterize the destinations externally to the BPEL.

Here is the scenario: an inbound XML message defines the names of the various intermediaries in the overall process. This is represented in an element called system, as shown below:

<system>systema</system>
<system>systemb</system>

The system element is unbounded but is constrained to an enumeration of valid destinations. Within the new BPEL, a counter is initialized and a while loop determines whether the counter is less than or equal to the number of recipients. For each system, I wanted to extract the system element value (the SOAP intermediary name) and then use the getInitValueParam() function to return the destination's URL. I then overwrite the PartnerLink value with this destination, and I have performed dynamic endpoint assignment.
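In outline, the loop looks something like this (the variable names are mine; the condition syntax is BPEL 1.1):

```xml
<while condition="bpws:getVariableData('counter') &lt;= bpws:getVariableData('numSystems')">
  <sequence>
    <!-- 1. extract the current system name from the inbound message -->
    <!-- 2. look up its URL via getInitValueParam() -->
    <!-- 3. overwrite the PartnerLink and invoke the destination -->
    <assign>
      <copy>
        <!-- increment the counter for the next iteration -->
        <from expression="bpws:getVariableData('counter') + 1"/>
        <to variable="counter"/>
      </copy>
    </assign>
  </sequence>
</while>
```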

Sounds good, right? Well, I finally got it working with the help of a fellow BPEL blogger. The problem I had was accessing the system element via array notation. My array index was held in a variable called counter. I tried using an XPath expression that would get me down to the system node, combined with bpws:getVariableData() on the counter value. No luck. I found this link, which got me through the issue quickly:

http://covarm.tvu.ac.uk/blog/?p=4

Basically, it was necessary to use the concat function in a Copy activity to build an XPath query string containing the actual expression I was interested in, in this case something like:

concat('.../.../ns1:system[', bpws:getVariableData('counter'),']' )

This value was then assigned to a variable called xpath. In the next Copy activity, I refer to the new variable xpath:

bpws:getVariableData('...', 'parameters', bpws:getVariableData('xpath') )
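Putting the two copies together, the assign looks roughly like this (the variable names, message name, and namespace prefixes are illustrative):

```xml
<assign>
  <!-- first copy: build the indexed XPath expression as a string -->
  <copy>
    <from expression="concat('/ns0:request/ns1:system[', bpws:getVariableData('counter'), ']')"/>
    <to variable="xpath"/>
  </copy>
  <!-- second copy: evaluate the stored expression against the inbound message -->
  <copy>
    <from expression="bpws:getVariableData('inboundMessage', 'parameters', bpws:getVariableData('xpath'))"/>
    <to variable="systemName"/>
  </copy>
</assign>
```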

A lot of work to do something that would be so simple in Java. Oh well, the combination of these features is dramatically simplifying my original BPEL and lets it scale to new systems through configuration instead of coding.

Saturday, August 25, 2007

Applying JUnit/XMLUnit to Web Services

As part of developing an agile SOA environment for a couple of clients, testing becomes a critical component. So, perusing through cyberspace, I came upon an excellent and simple approach towards this goal. Check out http://www.ibm.com/developerworks/webservices/edu/ws-dw-ws-soa-autotest2.html. This article brings Apache Commons HttpClient, JUnit, and XMLUnit to bear. The result is a reusable client that posts a SOAP request and writes the SOAP response it receives to the file system. XMLUnit methods are then used to compare the actual response against the expected response (stored in another file). This approach works well for Request/Response services.
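To show the core comparison idea in miniature, here is a self-contained toy version that ignores insignificant whitespace when comparing an expected and an actual response. A real test suite would use XMLUnit's Diff/assertXMLEqual (which also handles attribute order, namespaces, and so on); this sketch just illustrates the shape of the check:

```java
// Toy stand-in for the XMLUnit comparison step: compare an actual SOAP
// response against an expected one, ignoring whitespace between tags.
public class SoapResponseCheck {

    // Collapse whitespace between tags so formatting differences don't fail the comparison
    static String normalize(String xml) {
        return xml.replaceAll(">\\s+<", "><").trim();
    }

    static boolean sameXml(String expected, String actual) {
        return normalize(expected).equals(normalize(actual));
    }

    public static void main(String[] args) {
        String expected = "<Envelope><Body><echo>hi</echo></Body></Envelope>";
        String actual   = "<Envelope>\n  <Body>\n    <echo>hi</echo>\n  </Body>\n</Envelope>";
        System.out.println(sameXml(expected, actual)); // prints true
    }
}
```

In the article's setup, `actual` would be read from the file the reusable client wrote, and `expected` from a checked-in baseline file.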

The challenge I have is that my most important client does not have this style of service. Rather, they use one-way/JMS-style communications. My thoughts here center on the use of a centrally available file system or database. Each of the SOAP intermediaries would need to store its inbound and outbound requests at this storage point. Once the overall cycle is complete, the same XMLUnit features could be used to compare the entire request lifecycle.

Thursday, August 16, 2007

Rules Engines / Rule Based Services

How many people have actually implemented Rules Engines? I first ran into Expert Learning Systems in the '90s when working in the Mechanical Computer Aided Engineering (MCAE) space. They were being used to electronically document engineers' and designers' knowledge for the development of mechanical components. The idea at the time was to use this knowledge to optimize part development and reduce lead time. The infrastructure behind this was a Rules Engine.

I don't know the current state of Rules Engine adoption, but for the right problem the technology appears to provide a compelling solution. I am currently investigating this sector to identify whether the technology is pluggable into a SOA and what types of business problems are best served. Logic that is dynamic appears to be the best candidate: the ability to update an external configuration, without going through the typical release cycle required when the logic is embedded in a bean, is definitely attractive. The rule languages differ considerably between vendors, some using natural-language approaches while others use a proprietary scripting language. The only constant is support for the JSR-94 specification.

Most of the vendors generate a Java implementation or even web services, so inclusion into a Web Services infrastructure is pretty straightforward.

So far I have looked at Jess and OpenRules and found the open source community to provide a very good starting point.

Monday, August 13, 2007

Evolutionary SOA / Agile Development

About two years ago, my former company undertook a major effort to implement an agile development process based on Scrum. With a team of 20 developers, the company slowly but surely transformed its development process, using incremental/agile techniques to create a better-quality product. The result after 2+ years is evident in the stability and consistency of new-feature releases.

One of my current customers is also striving to implement a similar process. Currently, development occurs in one major release per year and centers on a monolithic EJB infrastructure. Unlike at the previous company, the switch from a traditional waterfall process to a more agile/incremental environment has several challenges:

  • History (the current organization was built around a single major release and has done so for many years)
  • Platforms (the current infrastructure is comprised of very heavyweight EJB applications that are not flexible/agile)

The organization is tackling this on several fronts:

  • Executive Support (most important is to get executive support for the use of an incremental/agile development approach, which in turn will directly correlate to performance factors for the business)
  • Organizational Changes (re-organizing the development team to focus on services)
  • Simplification/Refactoring/Replacement (simplifying the existing infrastructure for immediate gains and replacing the monolith with a series of simpler services for long-term benefit)
  • Putting out to pasture (placing the existing monolith into a legacy category thus limiting its scope/usage)
  • Training (bringing in external consultants to guide the existing development staff on the best practices and fast track the simplification/replacement strategies).

Overall, I have been trying to do the same with the current ESB infrastructure. Through consistent interaction with operations and a continued drive to refactor and simplify the production solution, improvements in reliability and customer support have been made. Incrementally I am removing and improving parts of the production solution to make it more extensible and reliable for the large volume of customers that interact with it. This process will never end and thus, like the Toyota commercials, will hopefully result in the Corolla of ESBs (400,000 miles and still going!).

Wednesday, August 8, 2007

A Bad Foundation makes a Bad House

If you have worked with BPEL, you know how dependent it is on its foundation. In this case the foundation is XML Schema and the sub-floor is WSDL. Problems introduced in the XML Schema or WSDL ripple their way into the BPEL definition. Here are a few novice and advanced issues that have bitten me over the past several years when dealing with BPEL variables:

- Unique namespaces are critical/mandatory in XML Schema/WSDL as they are used to uniquely identify variables and their content.

- Schemas with imports should utilize xsd:ref; otherwise bpel:assign activities become a nightmare. BPEL variables generated from XML Schema/WSDL that import other schemas will each have their own namespaces. This makes copying root elements between two variables with different namespaces impossible using the bpel:copy activity. The alternative is to copy each element individually and watch the BPEL script grow exponentially. BPEL 2.0, with its support for inline XSLT, will eliminate this issue.
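For example, referencing a shared element from an imported schema rather than redeclaring it keeps that element in a single namespace (the names here are illustrative):

```xml
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:cmn="urn:example:common"
            targetNamespace="urn:example:order">
  <xsd:import namespace="urn:example:common" schemaLocation="common.xsd"/>
  <xsd:element name="order">
    <xsd:complexType>
      <xsd:sequence>
        <!-- ref reuses the imported element rather than redeclaring it locally -->
        <xsd:element ref="cmn:address"/>
      </xsd:sequence>
    </xsd:complexType>
  </xsd:element>
</xsd:schema>
```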

- Some Web Services are RPC Encoded rather than Document Literal, and the bpws:getVariableData syntax differs when extracting values from an RPC Encoded BPEL variable versus a Document Literal one. BPEL editors eliminate this issue; however, when editing by hand, the latter requires the variable, part, and an XPath to the element, while RPC requires only the variable and part.
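Side by side, the two forms look like this (the variable, part, and element names are illustrative):

```
Document Literal: variable, part, and an XPath down to the element
  bpws:getVariableData('request', 'parameters', '/ns1:order/ns1:id')

RPC Encoded: variable and part only
  bpws:getVariableData('request', 'id')
```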

- One of the simplest but most common mistakes is failing to initialize BPEL variables with a literal assignment when using Document Literal services. This will cause run-time failures or other errors, depending on the BPEL engine you are using.
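A sketch of such a literal initialization, giving later copies concrete nodes to target (the element, namespace, and variable names are mine):

```xml
<assign>
  <copy>
    <from>
      <!-- literal skeleton of the message payload -->
      <ns1:order xmlns:ns1="urn:example:order">
        <ns1:id/>
      </ns1:order>
    </from>
    <to variable="request" part="parameters"/>
  </copy>
</assign>
```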