Posts

Showing posts from March, 2008

SOA Readiness

I'm wondering if my company is ready for SOA. Right now the various business units in the company support many pieces of custom-built technology that provide essentially the same functionality. Consolidating this functionality into a common, shared set of services is what SOA is all about, right? Not only will the footprint of what we need to support be smaller, but we'll be able to meet future business needs faster (getting a jump start by reusing this prebuilt, prepackaged, pretested functionality). Isn't SOA a no-brainer in this instance? I'm not so sure. Building successful SOA components (even ones designed for internal use) requires a product development approach. Code before SOA is essentially "set it and forget it". Failures in a particular component have a relatively small impact. In an SOA solution, things like scalability, stability, support, and release management are much more important. The significance of these traits increases proportionat…

Change Control Gone Wrong

Change Control & Release Management are two of the most fundamental elements of software engineering. You write code. You release code to the world. All the while, you track changes to the code in case something goes horribly wrong and you need to revert to a previous version. It's pretty simple. Modern change control systems like CVS provide additional functionality. When checking in changes, you can include comments describing the changes. Tagging the repository ties particular versions of source files together (e.g., for a release). Branching splits the repository so parallel development can take place (e.g., to support bug fixes to a release while development of the next version continues on the trunk). Changes on the branch can be merged into the trunk, eliminating the need to apply fixes twice. These features are fairly standard in most of the change control systems software development teams use today. The point of this post is to ask why some organizations are rel…
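
As a rough sketch of what those features look like in practice, here is the tag/branch/merge cycle in CVS. The module, tag, and branch names are made up for illustration:

    # tag the working copy for the 1.0 release
    cvs tag RELEASE_1_0

    # create a branch so bug fixes can proceed in parallel with trunk work
    cvs tag -b RELEASE_1_0_BRANCH

    # switch the working copy onto the branch, fix the bug, commit it there
    cvs update -r RELEASE_1_0_BRANCH
    cvs commit -m "fix order lookup bug on the release branch"

    # later, back on the trunk, merge in the branch changes so the fix
    # doesn't have to be applied twice
    cvs update -A
    cvs update -j RELEASE_1_0_BRANCH
    cvs commit -m "merge release branch fixes onto the trunk"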

JCAPS Unit Testing - Part 3

Now that we can test jcdTargets from a jcdUnitTester, the final step is to use a combination of JUnit and HTTPUnit to execute and verify the tests. Basically, we turn jcdUnitTester into a simple RESTful web service to which we post the tests. The first set of changes is to jcdUnitTester. My first pass at jcdUnitTester hard-coded the test into the JCD. Since we'd like to change the message per test, it'd be better to pass it in as a parameter on the HTTP request. Use the getRequest().getParameterInfo().getWebParameterList() method of JCAPS' HttpServer class to get these values. Other values we'll need, in addition to the test itself, are the queue/topic name where the test will be placed and the queue/topic where jcdUnitTester will listen for the response. Use the same technique described in the previous post to set the replyTo field in the JMS message. This should be the minimum set of values we'll want to send to jcdUnitTester. Some other optional t…
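
To make the client side concrete, here's a minimal sketch of what a JUnit/HTTPUnit test posting to jcdUnitTester might look like. The URL and the parameter names (test, sendTo, replyTo) are placeholders for whatever the JCD actually exposes:

    import com.meterware.httpunit.PostMethodWebRequest;
    import com.meterware.httpunit.WebConversation;
    import com.meterware.httpunit.WebRequest;
    import com.meterware.httpunit.WebResponse;
    import junit.framework.TestCase;

    public class OrderJcdTest extends TestCase {

        public void testOrderIsAcknowledged() throws Exception {
            WebConversation wc = new WebConversation();

            // post the test message plus the queues jcdUnitTester should use
            WebRequest request =
                new PostMethodWebRequest("http://jcapsdev:18001/jcdUnitTester");
            request.setParameter("test", "<order id=\"42\"/>");   // message under test
            request.setParameter("sendTo", "qOrderIn");           // where jcdTarget listens
            request.setParameter("replyTo", "qUnitTestResponse"); // where the response comes back

            // jcdUnitTester waits for jcdTarget's response and returns it here
            WebResponse response = wc.getResponse(request);

            assertEquals(200, response.getResponseCode());
            assertTrue(response.getText().indexOf("<ack") >= 0);
        }
    }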

JCAPS Unit Testing - Part 2

Last time I explained how to trigger a JCAPS JCD from a web browser. This entry will hopefully clarify some of the items in that post before building on that functionality to execute JUnit tests using HTTPUnit. First, the JCD created to listen for HTTP requests (let's call it jcdUnitTester) will drop messages on a queue or topic where the JCD we want to test is listening (call it jcdTarget). The jcdTarget normally processes the incoming message, then passes the result to the next process via a queue, topic, or some other transport mechanism (call it targetResponse). For jcdUnitTester to return the message to the browser in the HTTP response, jcdUnitTester must be listening on targetResponse (creating a loop between jcdUnitTester and jcdTarget). In my code, most of my jcdTargets involved JCDs sending responses back on a topic or queue. To make the JMS jcdTargets dynamic, I've taken advantage of the JMS eGate class's sendTo method. The targetResponse…
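
For anyone who hasn't used the eGate JMS OTD, the underlying pattern is plain JMS. The sketch below uses the standard javax.jms API to show the two ideas involved: setting JMSReplyTo and sending to a destination chosen at runtime (the eGate sendTo call wraps the equivalent behavior). The queue names and connection setup are placeholders:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class DynamicReplyToExample {

        // the ConnectionFactory would normally come from JNDI; it's passed in here
        public void sendTest(ConnectionFactory factory, String targetQueue,
                             String replyQueue, String payload) throws Exception {
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // destinations are chosen at runtime, the same idea as eGate's sendTo
                Queue target = session.createQueue(targetQueue);
                Queue replyTo = session.createQueue(replyQueue);

                TextMessage message = session.createTextMessage(payload);
                // tells jcdTarget where jcdUnitTester is listening for the response
                message.setJMSReplyTo(replyTo);

                MessageProducer producer = session.createProducer(target);
                producer.send(message);
            } finally {
                connection.close();
            }
        }
    }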

JCAPS Unit Testing - Part 1

Since starting with JCAPS last November, I've looked for a better, automated way to test my code. During JCAPS training, the test exercises were kicked off by placing files into a "hot folder" monitored by a File eWay. It's very cumbersome to monitor the directory after placing the file: constantly refreshing the window, wondering why the file isn't being picked up or what is taking so long. In my company's environment, the JCAPS development server ran on a shared machine that was cumbersome to access, adding to the difficulty of dropping the file. Since most of our components listened on JMS queues or topics, dropping messages directly onto those queues would've been an option, but they were not accessible from outside the machine. I knew there had to be a simpler, more elegant way to trigger our tests. What I needed was a way to drop a JMS message into the system and trigger my JCD. What I wanted was a servlet-like mechanism that I could trigger at will…
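
Stripping away the JCAPS specifics, the "servlet-like mechanism" I had in mind looks roughly like the sketch below: a plain servlet that takes a posted message and drops it onto the queue a JCD listens on. The JNDI names are placeholders, and this isn't how the series ends up implementing it (Parts 2 and 3 do it with an HTTP-triggered JCD instead), but it shows the shape of the idea:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // drops the posted "message" parameter onto a JMS queue so the JCD picks it up
    public class TestTriggerServlet extends HttpServlet {

        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws java.io.IOException {
            try {
                InitialContext jndi = new InitialContext();
                // JNDI names are placeholders for whatever the app server exposes
                ConnectionFactory factory =
                    (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
                Queue queue = (Queue) jndi.lookup("jms/qOrderIn");

                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    session.createProducer(queue)
                           .send(session.createTextMessage(req.getParameter("message")));
                } finally {
                    connection.close();
                }
                resp.getWriter().println("message queued");
            } catch (Exception e) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.toString());
            }
        }
    }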