Showing posts from 2008

Performance Tuning JCAPS - Part 2

It's been close to 2 months since my first Performance Tuning JCAPS post. Since then, we've noticed our servers running out of TCP connections under heavy load. Researching this problem, we learned that the JCAPS JMS server makes heavy use of TCP in implementing Request/Reply queues. We approached Sun for guidance and they assured us that they've had success implementing high-throughput applications using the JMS request/reply solution... but how? It appears a multi-step solution is needed. First, running out of TCP connections in a scenario like ours is a JCAPS bug addressed in an ESR (110348) and rolled into JCAPS 5.1.3 Update Release 3. We installed this update and noticed a marked improvement. Running my simple 75-user simulated test, transactions took 2521 ms to round trip (compared to the previous time of 20128 ms). While this is a great improvement, an average of 2.5 seconds is still really slow for this simple transaction. We reported our findings to Su

Side Effects May Include...

Software development is complex. A lot of communication is needed to coordinate with users and between teams to synchronize efforts. On large teams developing distributed systems, effective communication is exponentially harder. In addition to the users and development teams, you also need to deal with system admins, DBAs, network, and change control specialists. In many environments, this latter group of people (the system admins, DBAs, etc.) does not get involved with the project until late stages. The development team might have free rein over development databases, for instance, but when the code migrates to a certification environment things are suddenly very different. Teams move from surroundings where they have total control of machine settings, databases, and resources to an environment where they have none. During the migration to a test or production environment, things are often "forgotten". Things that often don't make it into an application's ch

Secret Sauce

Good things always have that "secret sauce" - the element that sets it apart from the pack and makes it better than anything else. The "secret sauce" takes something ordinary and makes it extraordinary. It's a Mac vs. a laptop. Disney World vs. a theme park. A Big Mac vs. a hamburger. When I hear company execs talk about what sets their companies apart, it seems there's no real "secret" to the sauce after all. It boils down to hard work and a commitment to your customers. A commitment to a better user experience. A commitment to treating every guest interaction "special". A commitment to a good hamburger every time. There's not a magic switch you can flip... it's a commitment to excellence through hard work. Looking for a shortcut to this kind of success is a waste of time. All too often I've seen software development teams look for magic switches rather than commit to the hard work of fixing the underlying problem.

Right Tool for the Job

Business transactions can vary widely in their definitions, but they can basically be classified into two types - those where I need an immediate response so I can continue with my work, and those where I'm going to tell you what to do and trust that you'll get it done while I move on to something else. These classifications are otherwise known, respectively, as synchronous and asynchronous transactions. When designing an architecture for solving business problems, it's helpful to figure out which type of problem you're solving and then pick an appropriate technology to support it. For example, HTTP might be a good choice for synchronous applications since it is request/reply by nature. JMS might be a better fit for asynchronous solutions since these interactions are typically one-way. Blurring the lines is the JCAPS offering of a request/reply mechanism for JMS messaging - called, appropriately enough, JMS RequestReply. A colleague called this mechanism "synchronou

Performance Tuning JCAPS

We're getting close to (finally) deploying our JCAPS rewrite to production. Our final task is to load test and "tune" the application - making sure it will handle the expected production transaction load. Prior to executing the load test, we added logging statements to report on the amount of processing time spent in each JCD (or processing unit, for those non-JCAPpers out there). Using this info, we can determine the slow-running processes and optimize them. For this optimization we could follow a traditional approach and profile the code, or we could take an easier route and configure JCAPS to throw more resources at the bottlenecks. This second option is done in the JCAPS connectivity map by increasing the maximum number of threads allocated to run the process. Clicking the input line to the JCD opens its properties, where you can set the "Server session pool size" in the Advanced tab (NOTE: this setting is only for JCDs listening to JMS queues or topics).
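The elapsed-time logging we added around each JCD can be sketched in plain Java. This is only an illustration - `processMessage` and `JcdTimer` are hypothetical names, and a real collaboration would log through its own logging facility rather than `System.out`:

```java
// Minimal sketch of wrapping a unit of work with elapsed-time logging.
// processMessage() is a hypothetical stand-in for a JCD's business logic.
class JcdTimer {
    static String processMessage(String input) {
        // ... the collaboration's real work would happen here ...
        return input.trim();
    }

    static String timed(String jcdName, String input) {
        long start = System.currentTimeMillis();
        String result = processMessage(input);
        long elapsed = System.currentTimeMillis() - start;
        // Reporting per-JCD time lets you rank processes by cost under load.
        System.out.println(jcdName + " took " + elapsed + " ms");
        return result;
    }

    public static void main(String[] args) {
        timed("jcdExample", "  hello  ");
    }
}
```

Collecting these numbers across a load-test run is what makes it possible to decide which JCDs deserve a bigger server session pool.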

Contractors (ugh)

Contractors - programming contractors, general home contractors like builders, plumbers or electricians, and even mechanics - have bad reputations. While everyone needs these skilled professionals to complete work where they have little expertise, there's an inherent distrust in these relationships. Distrust - where total trust is needed. Many times it's unwarranted, because contractors are as honest and hard working as anyone else. So why is it so? I think a number of relatively minor, easily correctable behaviors lead to this perception. Personally, I have changed mechanics more times than I care to admit. Not because I felt they were necessarily screwing me, but mostly because they didn't explain the work they were performing to my satisfaction. This led me to doubt their abilities. The doubt and distrust led me to look for another mechanic who I'd have more confidence in. It's too bad. I hate shopping around for new mechanics. At home, we recently added a room to ou

Ideal Software Development

I'm the type of person who constantly looks for ways to improve. No matter how well things go, I'm always striving to "plus" an experience - smooth out the rougher edges to make the next time even better. Sometimes the need to do this drives me a little nuts. It drives my wife really nuts! When things don't go well, this task can be a little overwhelming (and depressing). There are so many areas for improvement, it's hard to know where to begin. I've spent the last year or so reexamining my software development experience - looking for trends in the tools and ideas that helped the teams I was on to be highly productive (and conversely, where the lack of some practices led to a lack of productivity and frustration). Combining this with many of the things I've read by Spolsky, Fowler, and the Poppendiecks, I've created a list of criteria I'd consider essential in an ideal software development environment. Talk to the target user

Random Thoughts

I haven't been doing too much development as I wait (and wait and wait) for our acceptance testing to complete. Here's a quick rundown of what I've been thinking about in the meantime.... I've been reading and participating in some of the discussions on StackOverflow. It's a great way to get a pulse on what others are thinking about and working on. I've also been able to pick up some things I never knew existed before. I've taken part in other question-and-answer sites before and had my doubts how helpful this would really be... I've been pleasantly surprised. I listened to these great podcasts on alternative energy. Grass as fuel?? Pretty cool. I've been thinking about maybe getting some solar panels for my house. I would never have thought that the Northeast would be a good place for these until I saw a cool episode of Nova some time back where someone in Mass. had some installed on their house. Now maybe I'

"Nobody knows the trouble I've seen..."

I haven't had a lot to write about this month. I'm in the middle of this seemingly endless, painful cycle where the more we regression test, the more unimplemented requirements we uncover. That's right, regression testing is defining our requirement set. It's a symptom of this big rewrite I'm working on. The system has been around so long that no one can remember every intricacy in the existing application. Regression testing is the only way to uncover the missing functionality. It's a nightmare! Compounding the problem is the difficulty in regression testing. There is a "certification" environment that is supposed to be an exact replica of production. But when we try to run production requests through our new application, the results are different. Our business users review these inconsistencies and often tell us that our responses are actually better than the production version. This is a good news/bad news type of thing. While it's

Cheating on Tests

I just listened to Kent Beck's presentation from this year's RailsConf for the second time (something I almost never do). One of the gems I loved about his presentation was when he described test driven development as "cheating" (at the 27:08 mark). Exactly! I feel a little foolish for not thinking about it this way before. It is cheating! How many times in school did I wish I had the answers to tests before I took them? Or wished I even knew the questions? Answer: Every time. Why? Because it would've been so easy! Imagine all the time I would've saved, with assurance that my work would always be correct and complete. Extending this idea to software development should be a no-brainer - figure out what the software should do ahead of time (and write tests for it). This way you know exactly what to code, when you are done, and that your solution is complete and correct. It's so easy (it feels like cheating).
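The "answers first" idea can be sketched in plain Java. The `Invoice` class here is a made-up example (plain checks stand in for a JUnit test runner) - the point is that the assertion is written before `total()` exists, so the test defines what "done" means:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example of test-first: the expected answer is stated up front,
// then just enough code is written to make it true.
class Invoice {
    private final List<Double> lines = new ArrayList<>();

    void addLine(double amount) {
        lines.add(amount);
    }

    // Written only after the check below pinned down the expected behavior.
    double total() {
        double sum = 0;
        for (double amount : lines) sum += amount;
        return sum;
    }

    public static void main(String[] args) {
        // Step 1 (before coding total()): we already "have the answers".
        Invoice invoice = new Invoice();
        invoice.addLine(19.99);
        invoice.addLine(5.01);
        if (Math.abs(invoice.total() - 25.00) > 1e-9) {
            throw new AssertionError("total should be 25.00");
        }
        System.out.println("all tests pass");
    }
}
```

When the check passes, you know you're done - which is exactly the "cheating" feeling Beck describes.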


I'm usually not in charge of scheduling. I've contributed to schedules, but someone else - usually someone from management - drives schedule creation. I also can't remember a situation where the schedule actually worked. What I mean is that a lot of time was spent creating schedules for 2-3 months into the future - then they weren't really revisited (adjusted) for a month or more. By that time something invariably happens (priorities change, initial tasks take longer than expected, vacations/time off weren't accounted for, etc.). This throws the entire thing out of whack and quickly makes it obsolete. A month or more will pass, then the whole process starts again with the creation of another schedule. For the project I'm currently on we've spent a lot of time creating 2 schedules from scratch and are currently creating a third. The first two have been complete failures and I have low expectations for the third. It's frustrating to spend a lot of time and


Steve Yegge has this great post today about business requirements and building software that you'd personally use, in a domain that you actually know. Great stuff! I need to figure out a way to print wallet-sized copies to hand out at requirement meetings.

Second Thoughts on SOA Architecture

When I first joined this SOA team, I questioned the architecture we were implementing. The application flow is basically linear and uses JMS to pass messages between the various functional components. The components themselves were not designed generically enough to be shared by multiple applications. Like I mentioned earlier, the main driver for rewriting this application was speed. The whole thing seemed very unSOA-like to me. Since none of the components were reusable outside this application, why not code the entire project in a "traditional" way? This approach would improve performance, replacing the overhead of the multiple marshal/unmarshal operations needed for JMS with direct function calls. My perspective changed (a little) recently when we needed to add new functionality to the system. We could have modified one of the existing components to implement the new feature. Functionally, it made sense to keep this related logic together in a single module. The main p

Top Down or Bottom Up

When developing Java RPC-style web services, I'm torn whether to follow a top-down or bottom-up approach. Using the bottom-up approach, you create your Java classes first and use the @WebService and @WebMethod annotations to specify that this is your web service interface. The WSDL for the service is generated automatically during the build (pretty nice). This approach has its advantages in that you never have to leave Java to create the WSDL. Creating a WSDL file is not trivial, and using this approach means you never need to think about the WSDL's <service>, <binding>, or <portType> elements and how they're linked together. Conversely, a top-down approach means you create your WSDL first, then use a utility (either your IDE or a command line program) to generate the Java skeleton. Using this approach you need to modify the generated Java stubs to call your business logic. If the interface (the WSDL) changes, you'll need to regenerate t

JCAPS 6 Impressions

I've started to use JCAPS 6 a little over the past month and thought I'd share some of my initial impressions. Finally a JCAPS editor I can use! I found the previous version of the JCAPS IDE hard to develop with. It was based on Netbeans 3 and didn't have a lot of the features I had grown accustomed to using (refactoring tools and a local history, to name two). It was just plain old, clunky, and slow. The new version is based on the latest Netbeans and all the goodness is back. And it's fast. No more repository. Well, almost. JCAPS 6 has a few different modes to work in - repository based or non-repository. A repository-based project allows JCAPS developers to work in essentially the same way they did with JCAPS 5 - JCDs, OTDs, connectivity maps, deployment profiles and yes - the repository. It's also the way to migrate legacy JCAPS projects to the new platform - through a simple export and import between the repositories. I haven't used t

Process Over Technology

As the struggles continue with the application rewrite I'm currently involved with, I've read and reread these articles by Joel Spolsky, James Shore, and Chad Fowler on why this path generally leads to failure. After I finish reading articles like these I often ask myself: since many of the industry thought leaders think these projects are a bad idea, why are they still so common? I mean, Joel's article was written in 2000. Eight years later (an eternity in technology) that article is still extremely relevant, but its lesson has failed to penetrate mainstream thought. There's not enough industry focus on how best to approach legacy system enhancements. Many of the software development articles that garner the most attention focus on quality improvements delivered through innovations in technology - new programming languages, frameworks, APIs, architectures, and hardware. While that stuff's definitely important, it's of relatively little help to applicat

Head Scratcher

I wish I had been on board at the very beginning of the project I'm working on. Maybe it would shed some light on the direction the client is currently taking. This project is a rewrite of an existing, poorly performing, data processing system. The rewrite is being done in two parts. Phase 1 replaces the front half of the system - responsible for receiving messages, routing the transactions to the appropriate business logic, then finally, transmitting a response. Analysis (performed before I got here) identified this logic as the application bottleneck. This portion of the application was rewritten in JCAPS and it's currently being tested (I haven't seen any performance data for the rewrite yet). The second phase of the project replaces the business logic, currently written in C++, with BPEL. Let me say that again... the second phase of this project replaces compiled C++ with interpreted XML. For speed. Anyone else confused? Now maybe implementing the business

Documenting Requirements

Almost every place I've worked struggles (in varying degrees) with documenting business requirements. Maybe not the initial requirements, but the new requirements/enhancements that emerge as releases are delivered. They usually stem from a conversation that starts out, "I really like this, but it would be really cool if it could also do this other thing I had not thought of until now." This is a bigger-than-normal issue at the place I'm currently working. Mostly it's because new requirements are not captured in the same documents as the original specification. Instead, a new document is created and placed into a shared folder on the network (or sometimes a series of different folders). Over time, there are a lot of distributed requirement documents and it's really difficult to figure out what the system is supposed to do (and where to look for this information). Please, place all your application requirements in a single coherent "document"! Make

Recent SOA questions/thoughts

We've had some interesting SOA design questions lately. Here's a quick rundown with my thoughts... Is it a good idea to wrap DB calls in a web service? Some argue that this is a waste for internal applications. That these services don't provide any business value - query results are simply regurgitated back to the caller. While I agree that these calls don't add much business value, the benefit lies in abstracting the DB calls from the caller. In simplest terms, wrapping access means that callers need not concern themselves with managing database connections. Additionally, changes/enhancements to the schema can now be handled in a single place rather than mandating enhancements to every DB client - even DB implementation, location, and authentication updates are transparent to users. To me, services like this - services that simplify interactions - are a big part of what SOA is all about. As SOA systems evolve/extend, is it a better idea to plug in new functionality or
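The abstraction argument can be sketched in a few lines of Java. The names here (`CustomerDirectory`, `findName`) are hypothetical, and an in-memory map stands in for the real database so the sketch stays self-contained - the point is that callers see only the interface, never a connection:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// What callers see: no connection handling, no schema knowledge.
interface CustomerDirectory {
    Optional<String> findName(int customerId);
}

// The one place to change when the schema, DB location, or credentials move.
// A real implementation would open a JDBC connection here; the map below
// stands in for the database to keep this sketch runnable.
class MapBackedCustomerDirectory implements CustomerDirectory {
    private final Map<Integer, String> table = new HashMap<>();

    MapBackedCustomerDirectory() {
        table.put(42, "Acme Corp"); // sample row
    }

    @Override
    public Optional<String> findName(int customerId) {
        return Optional.ofNullable(table.get(customerId));
    }
}
```

Swapping `MapBackedCustomerDirectory` for a JDBC-backed (or web-service-backed) implementation changes nothing for callers - which is exactly the transparency the post argues for.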

Something old, something new

In another discussion recently about SOA, it still surprises me a little that people - technical people - don't get that there's not much new here. For years it's been standard practice to separate functionality into functions, classes, and modules. The idea has always been that these smaller, highly specialized components are easier to share and maintain than monolithic blocks of code. Functionally, SOA is not much different. The goals are the same - reusability and easy maintenance. The biggest difference - in the case of a web service SOA - is that the shared library included in your application is replaced with an HTTP call. If you're already supporting message-oriented applications using MQ or JMS, these messages are now HTTP requests. The process of decomposing the application into reusable services is essentially the same. The real point here is, if you already have a modularized architecture and you're having problems with application crashes, a swi

Targeted Content

I'm fascinated by all the context-sensitive/targeted media that is making its way into the mainstream. The other day I was reading an email from my wife having to do with bringing some milk home and something about a recent trip we had taken. The Google ads on this page were some that you'd expect - one on how to save gas, another for a baking product. The one that really surprised me was an ad for silica gel?!? Silica gel is a moisture absorber, usually found in the little packets that say "Do Not Eat" included when you buy a new pair of sneakers and many other products. This was intriguing to me because there was no mention of anything related to this in the body of the message. However, my wife works for a company that produces these products and sent the email from her work address. Not only was AdSense smart enough to pick up on the from address and trace it back to an actual company, it knew what the company produces, and suggested an approp

Cognitive Dissonance?

In this week's StackOverflow podcast, Joel Spolsky describes cognitive dissonance - and it may help explain the JCAPS love fest at my current client. This conversation happens around the 11-minute mark. I knew there had to be a name for this ...

More Nonsense

I know I need to let this go, but I can't stop thinking about the nonsense of this BPMN -> JCAPS -> .Net web service architecture proposed by management at the client I'm working with. The main reason for the base web services to be implemented in .Net rather than directly in JCAPS is that this organization has 10x as many .Net developers as it has Java/JCAPS developers. The thinking is that this ratio will make it easier to find available bodies to maintain and enhance these services. While this makes sense, any guess how many people know BPMN? A small handful... all of them contractors. There's not even a BPMN modeling tool in place at the company. Yet they're convinced this is the way to go. Is it reasonable to expect business users to create BPMN models? While I realize the GUI makes it feel less like "coding", I think it'd be helpful to have a basic understanding of boolean logic, parallelism, and exception processing in cre

JCAPS = Nonsense (at least here)

Come hell or high water - this company is hell-bent on including JCAPS as part of its enterprise architecture, even though it doesn't plan to use any of the features that might set it apart from its competitors or open source alternatives. The long-term plan here is for the business users to use a BPMN-compliant tool (not the one from JCAPS) to create the business processes. These processes will integrate web service calls, creating a sort of business mashup of these services. The services will be written in .Net, not JCAPS. The only place JCAPS enters the equation is as the platform to run the BPEL generated from the BPMN. Make any sense? Not to me. Isn't a BPEL engine included in Glassfish? I'm sure this is just one of several free or low-cost alternatives to execute BPEL. What I can't figure out is this company's infatuation with JCAPS. Given the shortcomings they've encountered already, why they are so intent on looking for more places to impleme

Observations in Poor Management

Yesterday two people at the place I'm working gave notice that they'll be leaving the company. There has been a lot of turnover in the six months I've been here. Based on what I've seen, I'm not really surprised. Still, there always seems to be some head scratching by management and some longer tenured employees on why these people are leaving. Here's some of what I've seen. I've already mentioned the poor tooling and processes in place here - starting with Lotus Notes - so I won't rehash that here other than to mention the tooling shortfall has been brought to management's attention many times with no real action. Some people I sit near are in a constant state of emergency. Their production systems break daily (even nights and weekends) because the company's trading partners send messages that don't conform to their messaging API. Instead of rejecting these transactions, management's approach is to ignore the problem, askin

Links to Some Useful Resources

I've been a fan of Joel on Software for some time. Recently Joel (Spolsky) started a new enterprise, Stack Overflow, with Jeff Atwood. They've been releasing some entertaining and informative podcasts as part of this new endeavor. Good stuff. Who knew you could wash your electronics?? ***** Joel has written a few books. One I haven't read yet is titled Smart and Gets Things Done: Joel Spolsky's Concise Guide to Finding the Best Technical Talent. While I'm sure this book is helpful, what I really need is a book that can help locate progressive companies which allow employees to work smartly and get things done. I don't know if it's endemic to the region I live in or more widespread than I imagine, but none of the companies I encounter here are doing anything close to Agile development, few are working with Ruby, most are afraid of open source, none have heard of REST web services, and most are reactive problem solvers rather than proactive.

Unit Test Saga

It's been almost a month since I last posted! I think it's partially due to the fact that I haven't had much interesting work to talk about. Mostly I've been frustrated in trying to convince my teammates why automated unit testing is important (and how to better use our horrible source control system). Most of the objections are the familiar "we'll be writing more test code than application code", "how can we test what we haven't written yet", "it takes too long", etc. I think I've finally convinced them of the benefits though, and we're starting to write automated tests! I'm hoping I can now introduce Test Driven Development and convince them to start writing their tests before the code. I came across a podcast last week from Net Objectives that does a much better job of explaining the benefits of Test Driven Development than I ever could. I thought it was great and I hope it's useful to others as well.

Silver Bullet Syndrome

I've noticed a trend in my current organization which I'm sure is common to many companies. I'm having a hard time defining it, but let's call it "Silver Bullet Syndrome"... where, when faced with some business problem, companies immediately look for an off-the-shelf (OTS) technology solution - many times without understanding the full scope of the problem. The OTS solution is viewed as a "Silver Bullet" that will immediately solve all of the organization's woes. Now I'm not saying all off-the-shelf software is bad, but often these business problems are not generic enough to be addressed by a general solution. The lack of flexibility of many OTS solutions compounds the problem, forcing companies to modify business processes or requirements to meet the capabilities of the tool instead of the other way around. Once the tool is in place, however, another problem manifests itself. This often starts with a management mandate like, "OK n

SOA Readiness

I'm wondering if my company is ready for SOA. Right now the various business units in the company support many pieces of custom-built technology which provide essentially the same functionality. Consolidating this functionality into a common, shared set of services is what SOA is all about, right? Not only will the footprint of what we need to support be smaller, but we'll be able to meet future business needs faster (getting a jump start by reusing this prebuilt, prepackaged, pretested functionality). Isn't SOA a no-brainer in this instance? I'm not so sure. Building successful SOA components (even ones designed for internal use) needs a product development approach. Code before SOA is essentially "set it and forget it". Failures in a particular component have a relatively small impact. In a SOA solution, things like scalability, stability, support, and release management are much more important. The significance of these traits increases proportionat

Change Control Gone Wrong

Change Control & Release Management are two of the most fundamental elements in software engineering. You write code. You release code to the world. All the while, you track changes to the code in case something goes horribly wrong and you need to revert to a previous version. It's pretty simple. Modern change control systems like CVS provide additional functionality. When checking in changes, you can include comments to describe the changes. Tagging the repository ties various versions of source files together (e.g. for a release). Branching involves splitting the repository so parallel development can take place (e.g. to support bug fixes to a release while the next version of development continues on the trunk). Changes on the branch can be merged into the trunk, eliminating the need to apply fixes twice. These features are fairly standard in most of the change control systems software development teams use today. The point of this post is to ask why some organizations are rel

JCAPS Unit Testing - Part 3

Now that we can test jcdTargets from a jcdUnitTester, the final step is to use a combination of JUnit and HTTPUnit to execute and verify the tests. Basically we turn jcdUnitTester into a simple RESTful web service where we post the tests. The first set of changes are to jcdUnitTester. My first pass at jcdUnitTester hard-coded the test into the JCD. Since we'd like to change the message per test, it'd be better to pass this in as a parameter on the HTTP request. Use the getRequest().getParameterInfo().getWebParameterList() method of JCAPS' HttpServer class to get these values. Other values we'll need, in addition to the test itself, are the queue/topic name where the test will be placed and the queue/topic where jcdUnitTester will listen for the response. Use the same technique described in the previous post to set the replyTo field in the JMS message. This should be the minimum set of values we'll want to send to jcdUnitTester. Some other optional t
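The shape of such a test can be sketched with the JDK's own HTTP classes standing in for HTTPUnit on the client side and for the JCAPS HTTP eWay on the server side. Everything here is illustrative - the endpoint path, the `test`/`sendTo`/`replyTo` parameter names, and the echo behavior are assumptions, not the actual JCAPS API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

class JcdHttpTestSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for jcdUnitTester: echoes the posted body back, the way the
        // real JCD would return the message it read from targetResponse.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/jcdUnitTester", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            // A real jcdUnitTester would parse test/sendTo/replyTo here,
            // drop the test message on sendTo, and wait on replyTo.
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();

        // The JUnit side of the idea: post the test message, verify the round trip.
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/jcdUnitTester");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true); // makes this a POST
        byte[] payload = "test=HELLO&sendTo=qTarget&replyTo=qResponse".getBytes(StandardCharsets.UTF_8);
        try (OutputStream os = conn.getOutputStream()) { os.write(payload); }
        String response = new String(conn.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
        server.stop(0);

        if (!response.contains("test=HELLO")) throw new AssertionError("round trip failed");
        System.out.println("round trip ok");
    }
}
```

In the real setup, the assertion at the end would compare the jcdTarget's output message against the expected result, giving a fully automated, repeatable JCD test.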

JCAPS Unit Testing - Part 2

Last time I explained how to trigger a JCAPS JCD from a web browser. This entry will hopefully clarify some of the items in that post, before building on that functionality to execute JUnit tests using HTTPUnit. First, the JCD created to listen for HTTP requests (let's call it jcdUnitTester) will drop messages on a queue or topic where the JCD we want to test is listening (call it the jcdTarget). The jcdTarget normally processes the incoming message, then passes the result to the next process via a queue, topic, or some other transport mechanism (call it targetResponse). For jcdUnitTester to return the message to the browser in the HTTP response, jcdUnitTester must be listening on targetResponse (creating a loop between jcdUnitTester and jcdTarget). In my code, most of my jcdTargets involved JCDs sending responses back on a topic or queue. To make the JMS jcdTargets dynamic, I've taken advantage of the JMS eGate class's sendTo method. The targetRespons

JCAPS Unit Testing - Part 1

Since starting with JCAPS last November, I've looked for a better, automated way to test my code. During JCAPS training, the test exercises are kicked off by placing files into a "hot folder" monitored by a File eWay. It's very cumbersome to monitor the directory after placing the file - constantly refreshing the window, wondering why the file is not being picked up - or what is taking so long. In my company's environment, the JCAPS development server ran on a shared machine that was awkward to access - adding to the difficulty in dropping this file. Since most of our components listened on JMS queues or topics, dropping messages directly to these queues would've been an option, but these were not accessible outside the machine. I knew there had to be a simpler, more elegant way to trigger our tests. What I needed was a way to drop a JMS message into the system and trigger my JCD. What I wanted was a servlet-like mechanism that I could trigger at will

Crazy Lately

Things have been really hairy at work lately. I haven't had much chance to write. Here's what I've been up to. JCAPS One of my goals this year has been to make JCAPS more usable. A tall order indeed. I'm going to hit on the details of some of these items in later posts, but here's a bullet list of some of the issues I've been working through. My organization uses some non-standard X12 EDI messages. The way to get JCAPS to process these messages is by creating an OTD from an SEF file. I had no luck finding someone to help me create such a file, and there are surprisingly few Java libraries that support these X12 messages. I wound up writing my own parser. It works pretty well and can be extended to support other message types. I've developed an approach that provides an automated way to test my JCAPS collaborations using JUnit. I'll describe this approach in future posts. The company I'm working for has a change control process th
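The core idea behind a hand-rolled X12 parser is small: split the interchange into segments, then split each segment into elements. Here's a minimal sketch (not my actual parser) with the common "~" segment terminator and "*" element separator hard-coded - a real parser reads the separators from the ISA header, since X12 lets them vary per interchange:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal X12 tokenizer sketch: an interchange becomes a list of segments,
// each segment a list of elements. Separators are hard-coded for brevity.
class X12Sketch {
    static List<List<String>> parse(String interchange) {
        List<List<String>> segments = new ArrayList<>();
        for (String raw : interchange.split("~")) {        // "~" ends a segment
            String segment = raw.trim();
            if (segment.isEmpty()) continue;
            List<String> elements = new ArrayList<>();
            for (String element : segment.split("\\*", -1)) { // "*" separates elements
                elements.add(element);
            }
            segments.add(elements);
        }
        return segments;
    }
}
```

Parsing `"ST*837*0001~SE*2*0001~"` with this sketch yields two segments, the first with elements `ST`, `837`, `0001`. Supporting another message type is then mostly a matter of interpreting segments by their leading identifier.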

Heavy Air

I downloaded some of the Air samples and the first thing it wants to do is install the application on my machine. I expected an Air application would be fairly lightweight (like a client-side Java application), and as long as I had the Air runtime on my machine the application would run - no administrative rights required. Not so. And too bad. Installing Eclipse is simply an unzip. There is no need to add entries to the registry or provide administrative rights. It's simple and that's the way it should be. I wonder why Adobe didn't follow this model?

Podcatcher Prototyping

I started this podcast project so I could learn some things I haven't had time to experiment with in my "real" job. Things like playing more with Ruby, trying out Behavior-Driven Development, and experimenting with Agile practices (plus I really wanted a better podcatcher program and I needed something to do over the winter). While I haven't produced much real code yet, I've created some simple prototypes and wanted to comment on some things I've been looking at. Reading an RSS feed My first prototype was a very simple Ruby program to read and parse an RSS feed. I had found some code here using the standard Ruby RSS Parser so that's where I started. At first I thought it was broken, but after about a minute and a half it returned. This wasn't going to cut it, so I started to look for alternatives and quickly found the feed-normalizer gem. Some quick coding and the total time to access the RSS feed was reduced to about 10 seconds. Not bad

JCAPS Training Notes

The JCAPS training I attended (Foundations of Java CAPS II) was excellent. We screamed through the course material in 3 days and had discussions on other JCAPS topics the other 2 days. Here were my impressions. I enjoyed learning about the eInsight Business Process Manager. The tools and steps involved in creating a business process are very similar to those used to create a JCD. There are a lot of interesting constructs like correlation and the "flow" element, which easily enable parallel processing. One frequent topic of discussion was when to use a business process (eInsight) rather than a JCD. There's a certain flexibility to creating a business process, gained at the cost of speed. Since you can do virtually everything in a JCD that can be done by a business process, we questioned when each was a better fit. We didn't reach a consensus and I'm sure we'll be talking about this again. We explored the shortcomings of the repository and the be