It’s About Process (or the Ability to be Responsive) — Part III

To that end, Webcom Inc. has leveraged its vast expertise earned while addressing many complex sales quote-to-order (Q2O) process issues (i.e., channel quote approvals, special pricing approvals, special non-standard product feature request approvals, etc.) and has created a brand new workflow engine, which can be (and is already) used for many generic business processes.

Examples of such processes include RMA (Return Material/Merchandise Authorization), NFR (New Feature Request), ECN (Engineering Change Notice), NPR (New Product Release), Bug Tracking, Engineering Change Request, and many other business processes that require approval steps.

The Ability to Respond, On-demand

In May 2008, Webcom announced the availability of ResponsAbility, its newest offering addressing the case management and workflow processing areas. ResponsAbility is designed to speed the “time-to-resolution” process, eliminate unnecessary time delays and improve overall value chain communications and productivity through improved transparency and collaboration.

The idea behind this case management and workflow solution was to help organizations keep their projects on track and their employees on the same page, thereby making the lives of internal and external team members much less complicated (and more productive and enjoyable).

This straightforward application provides a central location (repository) for managing the key aspects of many types of cases, including product and service defects, customer and supplier complaints, non-conformance issues, health and safety incidents, and RMAs. Separate tabs keep key information within easy reach, so that team members can log issues as they arise, prioritize them, and update their status as appropriate.

Built-in reports let users see open issues by project, projects by stage, and many other categories. On the proactive side, the tool can be leveraged by companies to create and implement corrective and preventive actions (CAPA) and to support a plethora of regulatory and compliance requirements. All in all, users who have always had the responsibility now have the “ability to respond,” as required.

This case management software may not currently have all the bells and whistles associated with full-fledged BPM packages, such as programmatically driving a workflow engine, visual process modeling, process monitoring and optimization, or automatic task allocation based on workload. Still, it seems well suited for small and medium-size companies, which can leverage such a software tool with an intuitive user interface (UI) for handling many, if not all, of their processes in an incremental manner.

The design and enforcement of processes is enabled because both administrators and end users are able to design workflows, notifications, and data collection forms, as well as set up permissions accordingly. The system manages cases by ushering each case through the resolution process, and by tracking the progress of each case throughout the entire process.

The multi-tenant software as a service (SaaS) delivery model ensures that a customer can be up and running quickly with all of the selected critical processes being modeled and functional. No onsite deployment is necessary and the software only requires a Web browser and some modest to minimal data and process setup to be up and running.

Brethren Software Vendors as Likely ResponsAbility Users?

For example, a software development company can deploy this tool within a day or two and allow its customers to report bugs. This information can then be internally routed according to a customized workflow to the support department, then to the engineering and testing staff, and then back to the customer for approval and case closure.

To elaborate, the Software Bug workflow logically starts with the customer reporting a software bug. Then a default assignee at the software vendor reviews it, and then either resolves it on the spot (hopefully) or assigns it to the software engineering staff by providing a test case. Then the software engineering team determines a cause for the bug and either provides a workaround, fully fixes the bug, or determines that the software behaves as designed after all.
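The bug workflow just described can be pictured as a simple state machine. The state and transition names below are paraphrases of the steps in the text, not ResponsAbility's actual data model, and the code is a generic sketch rather than Webcom's implementation:

```python
# Illustrative sketch of the software-bug workflow described above.
# State names paraphrase the article's steps; they are assumptions,
# not ResponsAbility's actual schema.

BUG_WORKFLOW = {
    "reported": ["in_review"],                        # customer reports a bug
    "in_review": ["resolved", "in_engineering"],      # default assignee triages
    "in_engineering": ["workaround", "fixed", "as_designed"],
    "workaround": ["closed"],
    "fixed": ["closed"],
    "as_designed": ["closed"],                        # may become a feature request
    "resolved": ["closed"],
    "closed": [],
}

def advance(state: str, target: str) -> str:
    """Move a case to `target` only if the workflow permits the transition."""
    if target not in BUG_WORKFLOW.get(state, []):
        raise ValueError(f"illegal transition: {state} -> {target}")
    return target

state = advance("reported", "in_review")
state = advance(state, "in_engineering")
state = advance(state, "fixed")
```

The point of the table-driven design is that an administrator can add or remove states without touching any code, which mirrors the configurability the article attributes to the product.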

At the same time, ResponsAbility can be used to allow customers to create new feature requests, which are then routed via a different customized workflow starting from project management, via development, release scheduling, back to development, quality assurance (QA), documentation (technical writers), product management, and finally to marketing teams.

Again, if the bug can be fixed, the case is assigned to the testing staff, back to the support team, and finally back to the customer for approval and case closure. But, if the issue turns out not to be the bug after all, the case is then converted to a new feature request and follows an entirely different workflow.

To that end, the New Product Feature Request process starts with customers, sales & service people, channels and/or product managers requesting a new feature. Often, the existing users (install base special interest groups [SIGs]) are allowed to vote on it, and based on the number of votes and other factors, some new features are assigned to the engineering department to estimate the effort entailed to implement the requested feature.

Based on the estimate and other criteria, some new features are then assigned to the engineering or research and development (R&D) departments for implementation. Upon implementation, the new feature is assigned to the QA department for testing and approvals. Finally, based on the QA results, a new feature is returned back to engineering for a rework or is scheduled for production (or general availability).
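The feature-request triage described in the last two paragraphs, where votes and an effort estimate decide what gets implemented, can be sketched as a simple prioritization rule. The thresholds below are invented for illustration, not Webcom's or any customer's actual cutoffs:

```python
# Illustrative triage for new feature requests: enough votes sends a
# request to engineering for an estimate; a small enough estimate gets
# it scheduled. Both thresholds are invented for this sketch.

VOTE_THRESHOLD = 10       # assumed minimum votes to send to engineering
MAX_EFFORT_DAYS = 30      # assumed maximum effort to schedule for release

def triage(votes, effort_days=None):
    """Route a feature request based on votes and an optional effort estimate."""
    if votes < VOTE_THRESHOLD:
        return "backlog"
    if effort_days is None:
        return "awaiting_estimate"   # assigned to engineering to estimate
    return "scheduled" if effort_days <= MAX_EFFORT_DAYS else "deferred"
```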

Apparently, various instances of a process (called cases) can be changed midstream. For example, something that was initially entered as a bug may, upon investigation, be classified as expected behavior. The customer who did not expect such behavior from the software can then change the case type of this instance from a bug to a new feature request without having to re-enter any information, and the case will then follow the prescribed new feature workflow process.

Also, a built-in notification and permissions engine ensures that all communication and collaboration happens within ResponsAbility, so everybody is aware of anything that anybody ever stated about the case via comments, file attachments, etc.

Unlike some of the simple issue tracking software packages mentioned in Part II, ResponsAbility can be used not only for tracking things, but also for enforcing a process in order to ensure that things get done correctly. For example, a workflow engine can be set up to make sure that a process status cannot be changed from “bug fixed” to “in testing” until a concrete test case scenario is provided by a user via customizable online forms.
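The enforcement described above amounts to a guard on the status transition. The status and field names below are taken from the article's example; the code itself is a minimal generic sketch, not Webcom's implementation:

```python
# Sketch of a guarded status transition: moving from "bug fixed" to
# "in testing" is blocked until the mandatory "test case" field is
# filled in (status and field names come from the article's example).

REQUIRED_FIELDS = {
    ("bug fixed", "in testing"): ["test case"],  # per-transition mandatory fields
}

def change_status(case: dict, new_status: str) -> dict:
    """Apply a status change, rejecting it if mandatory fields are empty."""
    transition = (case["status"], new_status)
    for field in REQUIRED_FIELDS.get(transition, []):
        if not case.get("fields", {}).get(field):
            raise ValueError(f"'{field}' is required before moving to '{new_status}'")
    case["status"] = new_status
    return case

case = {"status": "bug fixed", "fields": {}}
try:
    change_status(case, "in testing")          # rejected: no test case yet
except ValueError:
    case["fields"]["test case"] = "open form, submit empty input, observe crash"
change_status(case, "in testing")              # now accepted
```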

Webcom — “Eating Own Dog Food”

It might be interesting to note that Webcom, as a software developer itself, has since late 2006 been using ResponsAbility internally for bug tracking and new feature introduction in its older sibling product, WebSource CPQ.

The traditional model, whereby the dedicated product/project manager and support staff were the only bidirectional conduit between the client’s team (i.e., WebSource CPQ users and administrators, local project manager, application owners, stakeholders, etc.) and Webcom’s team (i.e., developers, modelers, QA, consultants, product managers, etc.), has over time been shown to have many disadvantages.

Namely, despite the dedicated project manager’s intimate knowledge of the individual client’s installation, and the established relationship and hand-holding comfort level, the recurring challenge has been the bottleneck nature of the dedicated project management and support team, with no significant value being added by this additional layer of communication.

Other disadvantages include the all-too-frequent “black hole” syndrome due to the lack of a single project/client/tasks/issues repository, whereby priorities are often managed on an inefficient (and often redundant or conflicting) one-to-one basis.

The advantages of the new support model, with ResponsAbility providing a single repository of all cases (in a hub-and-spoke manner), start with collaboration and the ability for all parties to both instantly contribute to the case/task/issue and have instant visibility into the case status. Also, new resources (clients, Webcom employees, and third-party partners alike) can all immediately participate and be notified, with an advanced search capability within the system serving as an enabler for everyone.

The Webcom Q2O clients’ adoption was initially somewhat tepid, due to the ingrained human habit of emailing or calling the preferred contact directly, or due to the clients having their own issue tracking systems. Of course, there is always the need for a human touch and for chatting (as a “bonus”) with Webcom associates about “critical” issues like the “lovely” winter weather in Wisconsin or the Green Bay Packers’ revival.

Nonetheless, joking aside, since the end of 2007 ResponsAbility has been the sole vehicle for communicating, tracking, and managing tasks and cases at Webcom. Prior to that, Webcom had used the JIRA issue tracking system, which at the time allowed users to create a workflow based on a set of offered statuses.

However, at the time (things might have changed since), users could not create statuses and workflows at will. For instance, the offered statuses were “open,” “in progress,” “closed,” etc., but the user could not create a custom status like “material returned,” “in engineering,” or “being analyzed.”

Further, users could add custom fields, but they could not design forms in a drag-and-drop fashion. There was no way to specify forms and fields for each action (task) either, so that, e.g., when the process passes from the “bug fixed” into the “in testing” phase, the user could not create a mandatory field named “test case.” While administrators had ample controls, the end users had very little control over what fields they could see on the screen, and so on.

Key ResponsAbility Design Tenets

In contrast, ResponsAbility was built with several design concepts in mind, starting with scalability in terms of users’ ability to create an unlimited number of cases, processes, statuses, status transitions, custom fields, users, user types, departments, etc.

There is also flexibility in terms of creating permissions (e.g., by project, by process, by custom fields, etc.) and the assigning of rules and permissions is visible system-wide. As for data flexibility, there are custom fields and forms and process-related fields and forms, while at each process point (step) fields can be assigned as read-only (viewable), editable, and/or required (mandatory). There is also a flexible definition of assignments, notifications, and recipients, whereby conditional actions drive implicit and explicit notifications.
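The per-step field behavior just described, whereby a field can be read-only, editable, or required at each process point, can be sketched as a small permission table. The step and field names below are hypothetical, chosen only to illustrate the idea:

```python
# Illustrative per-step field modes: at each process step a field is
# read-only, editable, or required. Step and field names are hypothetical,
# not ResponsAbility's actual configuration.

FIELD_MODES = {
    "triage":     {"summary": "editable", "severity": "required", "fix_notes": "readonly"},
    "in_testing": {"summary": "readonly", "severity": "readonly", "fix_notes": "required"},
}

def validate_step(step: str, data: dict) -> list:
    """Return the list of required fields still missing at this step."""
    modes = FIELD_MODES[step]
    return [f for f, mode in modes.items() if mode == "required" and not data.get(f)]

missing = validate_step("triage", {"summary": "UI freezes"})  # ["severity"]
```

Keeping the modes in data rather than code is what lets an administrator change a field from editable to mandatory without a software release.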

Furthermore, the ease-of-use concept translates into hardly any training being required, the idea being for the tool to be perceived by users as an enabler for getting things done, rather than as a mandatory tracking tool imposed by the “ivory tower.” Some examples of the ease-of-use features are:

* An intuitive drag-and-drop interface for administrators to design and preview online forms;
* Instant system feedback regarding field size, informing users how many characters they have left or by how many they have exceeded the maximum field size, all dynamically while they are still typing;
* When looking at the list of cases, hovering the mouse over a case brings up additional fields (a so-called “mouseover”), so that a user can find out more about each case while browsing a list, without having to open each case (thereby saving valuable time); and
* Each list of cases can be customized (personalized) by users to show fields as columns based on what that user is interested in or considers important. If, e.g., a case type has 100 fields, it is impractical to put them all as columns in a list of cases on the screen. It is also impossible to universally select the 10 most important fields, because their importance depends on individual user needs. Therefore, each user can select, in a drag-and-drop manner, which fields are truly important to them.

Last but not least, the ease-of-setup tenet starts with a pre-built library of processes, but companies can certainly create their own processes with an intuitive and flexible setup of forms, workflows, notifications, and permissions. In addition to the abovementioned advanced search capability, users can add unlimited comments and upload attachments.

It’s About Process (or Ability to be Responsive) — Part II

BPM Suite Components

Full-fledged BPM system components thus include visual process modeling: a graphical depiction of a process that becomes a part of the application and governs how the business process performs when companies run the application.

They also feature Web and systems integration (SI) technologies, which include displaying and retrieving data via a Web browser and which enable companies to orchestrate the necessary people and legacy applications into their processes.

Another important BPM component is what’s been termed business activity monitoring (BAM), which gives reports on exactly how (and how well) the business processes and flows are working (for more information, see TEC’s article entitled “Business Activity Monitoring - Watching The Store For You”).

Optimizing processes that involve people and dynamic change has been traditionally difficult, and one barrier to optimization has been the lack of visibility and ownership for processes that span functional departments or business units, let alone different enterprises. In addition, the industry often changes faster than information technology (IT) departments can update the applications set that the business relies on to do its work, thus stifling innovation, growth, performance and so on.

But today, the pervasiveness of Web browsers and the emergence of simpler application integration technologies such as Web services, simple object access protocol (SOAP), extensible markup language (XML), business process execution language (BPEL), etc. have enabled IT staff to deploy technology that supports the business process across functional, technical, and organizational silos.

In the broadest sense, BPM components address the issues of the following: process modeling, documentation, certification, collaboration, compliance, optimization, and automation (i.e., via a workflow engine that is rule-based).

Again, highly functional, top-of-the-range BPM suites use graphical (visual) process modeling tools that enable business users and business analysts (i.e., those people that are most familiar with the process) to implement and manage the process definition. To complete any transaction, the BPM suite must also call on various siloed legacy applications that hold necessary information, for example, customer, inventory or logistics data.

But to the ordinary user the complex process that runs over many enterprises and various systems should appear seamless. End-users should be spared the effort of hunting down the scattered information themselves, since the underlying BPM platform provides tools for:

* Business analysts to model (and change) the business processes and define the business rules that control how those processes behave;
* IT departments to integrate the necessary legacy systems;
* Joint teams to build applications for the end user that enforce the processes and rules; and
* Management to review process performance (e.g., the required time to resolve client return exceptions) and even adjust process parameters in real-time (e.g., increasing the dollar value threshold during peak periods to trigger management review and approvals of client returns).
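The last bullet, adjusting a process parameter such as a dollar threshold in real time, can be illustrated by a business rule whose threshold lives in data rather than code. The parameter and function names below are invented for this sketch:

```python
# Sketch: a business rule whose threshold is data, so management can
# adjust it at runtime without redeploying the process. Names and
# values are invented for illustration.

process_params = {"review_threshold_usd": 500.0}

def needs_management_review(return_value_usd: float) -> bool:
    """Route a client return to management review when it exceeds the threshold."""
    return return_value_usd > process_params["review_threshold_usd"]

assert needs_management_review(750.0)             # above threshold: route to manager
process_params["review_threshold_usd"] = 1000.0   # peak period: raise the bar
assert not needs_management_review(750.0)         # same return now passes through
```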

Therefore, the most vital BPM attributes would be the following: being event-driven, orchestrated, intended for both internal and external processes/customers, and leveraging human-centric workflow and business analytics.

With the leading BPM platforms/suites, everyone in the company will be working on the same shared data and process model, so changes to the process can be put into action very quickly. This is because these sophisticated platforms provide integrated process modeling, real-time process monitoring, and Web-based management reporting — all working in unison to support rapid process innovation.

BPM — Much More than Integration

BPM is often used to integrate multiple enterprise applications and various internal and external users into a new process, but it goes way beyond mere integration. Whereas traditional enterprise application integration (EAI) products help companies to move data between applications, BPM adds interaction with people and the ability to support processes, which then become as manageable as data.

BPM integrates existing applications, Web services, and people so that companies can quickly change, dismantle, or construct processes as required. Again, BPM enables a company to more cost-effectively and quickly model and change its business processes to meet the specific requirements of a particular business. Via BPM, people can be involved in two ways:

1. From a rank-and-file employee point of view — BPM represents units of work from the business process as tasks, whereby each task contains work instructions, status, priority, due date and other attributes. Workers use BPM to monitor and execute the tasks that are assigned to them or to the workgroup to which they belong; and
2. From a manager or executive point of view — Managers and executives use BPM to monitor process performance by viewing graphical reports that summarize task status and alert them to process bottlenecks. They also frequently get involved with tasks by participating in approval or escalation process steps.

Thus, many BPM products provide real-time monitoring and insight into the process operation. The process flow model of BPM allows management not only to easily identify bottlenecks and inefficiencies in the process, but also to more easily modify the process to improve productivity.

For instance, with industrial (plant-level) BPM deployments, companies can digitize their work processes and close the loop on performance with actual execution data. By applying BPM in manufacturing plants, companies can manage and audit their production more effectively and consistently thus improving their conformance, compliance, throughput, and ability to deliver. They can also empower their workforce by integrating people and their roles and by customizing individuals’ work styles and decision-making processes.

Astute BPM suites that focus on manufacturing can enable companies to close the loop on production process improvement, digitize good manufacturing practice (GMP) tasks, standard operating procedures (SOPs) and work instructions. They can also enable corrective action/exception management, Hazard Analysis and Critical Control Point (HACCP) monitoring procedures, and also orchestrate high-level processes and manage data between various disparate systems and empower domain experts to solve production problems immediately on the shop floor.

For more information on BPM, see TEC’s earlier articles entitled “Business Process Management: How to Orchestrate Your Business” , “Giving a Business Process Management Edge to Enterprise Resource Planning” and “Business Process Analysis versus Business Process Management.”

Special credit also goes to CIO Magazine’s articles entitled “ABC: An Introduction to Business Process Management (BPM)” and “Making Workflow Work and Flow for You.” All of the above articles were heavily leveraged for this blog series thus far.

What’s the User’s Choice Then?

As said in Part I, the BPM market remains quite stratified, with a number of powerful and full-fledged BPM software packages (e.g., from IDS Scheer, Appian, Tibco, Lombardi, Ultimus, Fujitsu, Oracle-BEA Systems, Metastorm, etc.), many of which can be found in TEC’s BPM Evaluation Center.

BPM is considered one of the most overlooked trends in enterprise applications today. In fact, it is increasingly becoming a native part of the IBM WebSphere (best shown by the recent acquisition of ILOG), SAP NetWeaver and Oracle Fusion Middleware platforms and applications, which could be a glimpse into the future of modeling, workflow, re-engineering, and continuous change, all around ERP.

For a typical implementation that leverages a comprehensive on-premise (which is still a dominant deployment model) BPM suite, companies should count on forking out up to US$500,000 to address a few meaningful processes in their organization. Moreover, potential hidden costs include (all on top of already hefty investments in existing enterprise applications):

* Having to license and deploy multiple development, test and/or production environments to support multiple BPM initiatives;
* Additional application and database server licenses;
* Additional staff to provide the care and feeding of these servers; and
* Internal cost of direct involvement from business users to participate in process modeling, business rule definition, user interface (UI) design, testing and rollout activities.

At the lower end of the market there is a slew of workflow-based software packages addressing specific processes, such as bug or issue tracking systems. While upper-range BPM packages address complex business processes and issue tracking systems typically deal with one simple workflow, a number of workflow (possibly BPM wannabe) vendors like FloWare, Skelta, Red Maple, Web and Flo, Quask, XALT Technologies, ZyLAB Technologies, etc. are addressing the space in between.

How About Workflow (and Eventually BPM) On-demand?

But again, not many of these solutions are delivered in true no-frills software as a service (SaaS) fashion, as they still require significant hardware, software and professional service resources to be deployed on the customer’s site. Also, some business processes, although mission-critical for the company, are not transactional in nature and do not necessarily need to be part of the back-office database.

In fact, trying to capture every step and status of every little case (e.g., a customer’s product complaint or improvement suggestion that needs to be investigated by several employees) would only unnecessarily encumber the ERP or customer relationship management (CRM) database.

It’s About Process (or the Ability to be Responsive) — Part IV

Other Real Life ResponsAbility Use Examples

In addition to the examples described in Part III, another example of the ResponsAbility software in use can be found at Grayhill, Inc., an electronics manufacturer from La Grange, Illinois (US) serving industrial and government customers. While the company has long been a WebSource CPQ user for sales configuration purposes, the ResponsAbility sibling was later introduced for managing several processes, among them product returns, or return merchandise authorizations (RMAs).

Customer return requests are either imported from the company’s enterprise resource planning (ERP) system or directly entered by customers and/or Grayhill associates into ResponsAbility as a “request for material return.” Based on the entered data via a customized form, the return is authorized or denied. Namely, a default assignee reviews a request and approves it, rejects it, or asks the customer for additional clarifications.

Upon authorization, when the goods are received a case gets assigned to the quality assurance (QA) team. This is another “gate review” step in the process where the quality team determines if the failure is due to a product defect or misuse (user-induced damage). If a case is determined to be a defect, then the part is repaired at no cost or a new part is sent to a customer.

The defective part is also sent to the engineering department for analysis to determine the root cause and future corrective actions. Namely, in order to ensure the highest quality for which Grayhill is known, the case cannot be closed until all the corrective and preventive action (CAPA) requirements are fulfilled. To that end, the following outputs must be generated: a detailed explanation of the root cause of the problem, a short-term fix, a long-term fix, and a final report sent to the customer.

If it is not a defective part case, the case is closed and the goods are returned to the customer, who may in turn elect to convert it to a special service request case type. Logically then, another workflow process is followed, consisting of steps such as creating a service estimate, approval, service fulfillment (repair), invoicing, etc.

In other words, in case of misuse, the customer is asked to authorize a repair for a fee. If and when an approval is received, the product is repaired and the case is closed. Similar to the new feature request vs. bug software example from Part III, a repair-for-fee service follows its own workflow via the repair department and QA, after which the product is shipped back to the customer.
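The RMA routing described over the last few paragraphs can be condensed into a single transition table. The state names below paraphrase the text and are not Grayhill's actual configuration:

```python
# Condensed sketch of the RMA workflow described above. State names
# paraphrase the article; they are not Grayhill's actual configuration.

RMA_WORKFLOW = {
    "requested":        ["authorized", "denied", "clarification"],
    "clarification":    ["requested"],           # customer adds details, resubmits
    "authorized":       ["qa_review"],           # goods received, QA gate review
    "qa_review":        ["defect", "misuse"],
    "defect":           ["engineering_capa"],    # repair/replace, then root-cause work
    "engineering_capa": ["closed"],              # closed only once CAPA outputs exist
    "misuse":           ["service_request", "closed"],
    "service_request":  ["repaired"],            # estimate, approval, repair for a fee
    "repaired":         ["closed"],
    "denied":           ["closed"],
    "closed":           [],
}

def path_exists(workflow: dict, start: str, goal: str) -> bool:
    """Depth-first check that `goal` is reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        state = stack.pop()
        if state == goal:
            return True
        if state not in seen:
            seen.add(state)
            stack.extend(workflow.get(state, []))
    return False
```

A reachability check like `path_exists` is the kind of sanity test an administrator could run after editing a workflow, to confirm every case can still eventually reach closure.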

Ken Hoving, Grayhill’s vice president (VP) of corporate quality, said:

“The Webcom solution allowed us to consolidate all of our customer corrective actions in one system and enable web access across the entire organization, including our customers, resulting in cycle time improvements and increased customer satisfaction.”

Also, the company asserts that due to all the system’s nifty drag-and-drop Web 2.0 personalization capabilities for both users and administrators, the BPM tool is not something that users feel forced to use, but they truly want to use it because it helps them to do a better job. They do not have to worry about forgetting to do something or missing a step in a rush, since ResponsAbility ensures that the process is thorough and consistent each time.

Another important process that ResponsAbility enables at Grayhill is SDPR (Special Design Pricing Request).

Namely, when a prospective customer inquires about a product that Grayhill does not currently manufacture as a standard, such a request gets routed via a number of departments, starting with sales, which captures the detailed inquiry/request. Then the engineering team estimates the cost/time to complete the special request, while the marketing and accounting staff analyze the economic viability of the special job (it is still expected to be batch/series production rather than a one-off engineer-to-order [ETO] product) and create a catalog number and its price (quote).

Before that happens and the sales department can communicate Grayhill’s interest and official price (quote) back to the customer, several collaborative iterations have to take place between the customer, Grayhill, and its vendors (e.g., discussions of the cost and lead time of special tooling and fixtures).

Product Information Management Example

Broan-NuTone, based in Hartford, Wisconsin (US), is North America’s leading manufacturer and distributor of residential ventilation products and another combined WebSource CPQ and ResponsAbility user. Its products include range hoods, ventilation fans, heater/fan/light combination units, Indoor Air Quality (IAQ) Fresh Air Systems, built-in heaters, whole-house fans, attic ventilators, paddle fans, and trash compactors.

The company has thousands of products, each with a slew of attributes such as length, width, material, standards to comply with (e.g., the UL Safety Standard, Canadian Standards Association [CSA], CE-Marking, etc.), voltage, power, air flow, and so on. The goal is to publish all that vast catalog data electronically via WebSource CPQ.

However, that cannot happen without consolidating all of the above data for all of the company’s products. ResponsAbility comes into the picture here, whereby each product will go through a special product information management (PIM) workflow.

Namely, the engineering team will have to fill in over a hundred data points for each product, the marketing staff will add their pertinent data, and product management will then have to fill in the various product prices (list price, distributor price, wholesale price, etc.). Once the PIM case is closed, a prepared Microsoft Excel document with all of the required data about all the products in a product family can be imported into WebSource CPQ.

“After months of review and the evaluation of numerous vendors to help implement a Product Information Management system, we chose ResponsAbility from Webcom”, stated Mark Hughes, Internet Marketing Manager at Broan-NuTone. “Having several thousand products to manage from conception to obsolescence, we wanted to have stability out of the box. We feel that ResponsAbility is the perfect fit,” added Hughes.

Underlying ResponsAbility Technology

With some research indicating customer acquisition costing multiple times more than customer retention, ResponsAbility complements Webcom’s quote-to-order (Q2O) solution, WebSource CPQ, and continues the company’s focus on simplifying complex business processes.

“Attaining your goals and objectives requires not only a focus on obtaining new business through a quote-to-order solution such as WebSource CPQ, but just as rigorous a focus on retaining your most treasured asset, your customers”, commented Aleksandar Ivanovic, Webcom’s chief executive officer (CEO) and founder.

“ResponsAbility is just the type of solution needed to help drive customer satisfaction, innovation and repeat business”, added Ivanovic. “Especially in today’s uncertain economy, driving productivity through repeatable and reliable processes is crucial to success, and ResponsAbility could be a valuable tool helping companies improve customer service through nimbleness and implement process control.”

However, in order not to create internal competition for research and development (R&D) resources, WebSource CPQ and ResponsAbility, although both offered on-demand, have intentionally been developed on two different technologies: the Microsoft .NET Framework and Java 2 Enterprise Edition (J2EE), respectively. For more information, see TEC’s earlier article entitled “Understand J2EE and .NET Environments Before You Choose.”

Some best-practices sharing between the two teams could still be possible on the user interface (UI) side, since both products leverage Asynchronous JavaScript and XML (AJAX) for rich-client enablement and Web 2.0 gadgets. Although the two products are currently English-only, a common translation mechanism for other languages is being developed, which both products will be able to leverage for deployments in several languages. However, the decision on which languages to tackle first has yet to be made.

But, in contrast to WebSource CPQ, ResponsAbility is enabled for Hibernate’s database-independent object/relational persistence and query service. The product features full audit trail and archiving capabilities, and the ability to export data in the CSV (comma-separated values), Microsoft Excel, extensible markup language (XML), Adobe PDF (portable document format), and RTF (rich text format) file formats.

KISS IT or Leave IT

Webcom’s main challenge with the new workflow/BPM product will be to balance its “keep it straight and simple (KISS)” mantra with the complexity of full-fledged BPM application deployments. On the one hand, the vendor positions ResponsAbility as a “lite BPM” product, given that it features many more capabilities than a mere workflow product; on the other hand, its functional footprint is at this stage far more limited than that of any notable BPM suite.

To be fair, some BPM functional requirements are rendered moot in the on-demand model. Product versioning, acceptance testing, and whether workflow notification mechanisms can integrate with desktop products or interact via e-mail are all capabilities that are a “big deal” for client/server on-premise BPM deployments, but are virtually irrelevant in software as a service (SaaS) subscription-based deployments.

The same goes for integration with third-party integrated development environments (IDEs), given the Web-based workflow modeling environment within ResponsAbility. Indeed, IDEs like Microsoft Visual Studio are relevant for on-premise programming development, i.e., for writing source code, compiling it, and producing executable code. In contrast, workflow modeling within ResponsAbility does not require coding, compiling, server deployment, etc. Furthermore, the SaaS deployment model completely obviates the need to buy and install an IDE.

It might be interesting to note here that Salesforce.com, when it started several years ago (and likely even still today), had only a fraction of the customer relationship management (CRM) functionality that Oracle Siebel has had (and still has today). Still, this functional deficiency did not stop the on-demand CRM pioneer from succeeding.

The goal is not necessarily to out-feature other software packages, since most of them already have so much functionality that much thereof is never implemented or used (as can be seen in TEC’s article entitled Application Erosion: Eating Away at Your Hard Earned Value).

Thus, Webcom’s main goal is to make ResponsAbility so easy to set up and so easy to use that there will never be a failed implementation or a disgruntled customer. The goal is to quickly and simply help people to get their respective jobs done in a way that they get almost addicted to the tool, so much so that they cannot even imagine doing it any other way.

For what it’s worth, getting back to the “eating one’s own dog food” mantra from Part III, Webcom’s staff admits to being addicted to ResponsAbility. The statistics available in the application show that each Webcom employee has personally performed thousands of transactions therein.

In the next product release, due in the fall of 2008 (frequent releases being another advantage of SaaS development), Webcom will be adding several new features, such as a visual workflow/process designer, rules and conditions, escalations, service level agreement (SLA) tiers, field dependencies, scheduled events, and analytics (graphs, charts, trends). Features like a Web services application programming interface (API), support for personal digital assistants (PDAs) and other mobile devices, and case and task interdependencies might come in future product releases.

While the vendor strongly believes that ease-of-use and ease-of-setup are far more important than a long list of out-of-the-box supported features, it is necessary to have some of those in the request for information (RFI)/request for proposal (RFP) phase of any selection project to avoid outright elimination.

Even though some of the capabilities which are often marked as a “must have” will likely never be implemented by prospective clients, the selection team wants to make a safe decision and get all of their bases covered. Without those capabilities on paper, ResponsAbility may get eliminated before users ever get a chance to fall in love with the application.

Webcom also strongly believes that if users need to be trained extensively on how to use the application, the product will have failed. We concur that no one can expect customers and partners (channel and supply) to take additional classes on how to collaborate with the company using its applications.

The software needs to be as intuitive as going to the Amazon.com web site and buying a book or a CD, or going to Google and doing a search. It is Webcom’s approach that until it figures out how to make each feature that intuitive, it will not introduce it in the application.

Taming the SOA Beast – Part 1

Certainly, I admit to not being a programmer or a techie expert (not to use somewhat derogatory words like “geek” or “nerd”) per se. Still, my engineering background and years of experience as a functional consultant should suffice for understanding the advantages and possible perils of service oriented architecture (SOA).

On one hand, SOA’s advantages of flexibility (agility), component reusability, and standards-based interoperability have been well publicized. On the other hand, these benefits come at a price: the difficulty of governing and managing all these mushrooming “software components without borders,” which stem from different origins yet are able to “talk to each other” and exchange data and process steps, while being constantly updated by their respective originators (authors, owners, etc.).

At least one good (or comforting) fact about the traditional approach to application development was that old monolithic applications had a defined beginning and end, and there was always clear control over the source code.

By contrast, the new SOA paradigm entails composite applications assembled from diverse Web services (components) that can be written in different languages, and whose source code is hardly ever accessible by the consuming parties (other services). In fact, each component exposes itself only in terms of what data and processes it needs as input and what it will return as output; what goes on “under the hood” remains largely a “black box,” or someone’s educated guess at best.
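This contract-only exposure can be illustrated with a plain Java interface: the consumer sees only the declared inputs and outputs, while the implementation behind the contract stays opaque. The service and method names below are hypothetical:

```java
// A service contract exposes only its inputs and outputs; the implementation
// behind it remains a "black box" to consumers. Names are illustrative.
public class ServiceContractDemo {

    // The published contract: what goes in, what comes out.
    interface ExchangeRateService {
        double rate(String fromCurrency, String toCurrency);
    }

    // A consumer (composite application) codes only against the interface.
    static double convert(ExchangeRateService svc, double amount,
                          String from, String to) {
        return amount * svc.rate(from, to);
    }

    public static void main(String[] args) {
        // Any implementation can be plugged in; its internals are opaque
        // to the consumer, here a stub with a fixed hypothetical rate.
        ExchangeRateService stub = (from, to) -> 1.5;
        System.out.println(convert(stub, 100.0, "USD", "EUR")); // prints 150.0
    }
}
```

The consumer keeps working whether the implementation is a local stub or a remote Web service whose internals were never seen.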

Consequently, SOA causes radical changes in the well-established borders (if not their complete blurring) of software testing, since runtime (production) issues are melding with design-time (coding) issues, and the traditional silos between developers, software architects and their quality assurance (QA) peers appear to be diminishing when it comes to Web services.

Transparency is therefore crucial to eliminating the potential chaos and complexity of SOA. Otherwise, the introduction of SOA will simply have moved the problem area from a low level (coding) to a higher level (cross-enterprise processes), without reducing the number of problems. In fact, problems will only multiply in a distributed, heterogeneous, multi-enterprise environment.

Then and Now

Back to the traditional practices and mindset: the software world considers design as development-centric (i.e., a “sandbox” scenario), and runtime as operation-centric (i.e., a part of a real-life customer scenario). But with SOA that distinction blurs, since Web services are being updated on an ongoing basis, thus magnifying the issues of recurring operations testing and management.

Namely, companies still have to do component-based software testing (to ascertain whether the code is behaving as expected) at the micro (individual component) level, but there is also application development at the macro (business process) level, since composite applications are, well, composed of many disparate Web services. In other words, programmers are still doing traditional development work, but now that development work becomes involved in infrastructure issues too.

For instance, what if a Web service (e.g., obtaining exchange rates, weather information, street maps information, air flight information, corporate credit rating information, transportation carrier rates, etc.), which is part of a long chain (composite application), gets significantly modified or even goes out of commission? To that end, companies should have the option of restricting the service’s possibly negative influence in the chain (process) until a signaling mechanism is in place, which can highlight changes that may compromise the ultimate composite application.
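One common way to restrict a failing service’s influence on the chain is a circuit-breaker-style guard: after a threshold of consecutive failures, the composite application stops calling the service and substitutes a fallback value until the problem is resolved. The sketch below is a generic illustration of that pattern, not any vendor’s mechanism:

```java
import java.util.function.Supplier;

// Minimal circuit-breaker-style guard: after a threshold of consecutive
// failures, stop calling the downstream Web service and use a fallback,
// so one broken link cannot stall the whole composite application.
public class ServiceGuard<T> {
    private final int threshold;
    private int consecutiveFailures = 0;

    public ServiceGuard(int threshold) { this.threshold = threshold; }

    public boolean isOpen() { return consecutiveFailures >= threshold; }

    public T call(Supplier<T> service, T fallback) {
        if (isOpen()) return fallback;          // circuit open: skip the service
        try {
            T result = service.get();
            consecutiveFailures = 0;            // success resets the counter
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;              // failure counts toward the trip
            return fallback;
        }
    }
}
```

A production guard would also time out and periodically re-probe the service; this sketch keeps only the tripping logic.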

Functional testing in such environments is a challenge because, by nature, Web services are not visual like conventional, user-facing software applications. In place of a front-end or user interface (UI), some astute testing software can overlay a form that allows team members to see the underlying schema (data structure) of the Web service being tested.

Furthermore, testing SOA applications is problematic since it is not only difficult for a company to know if a particular Web service will deliver on its “contract”, but also, even if it does, whether it will maintain the company’s adopted standards of performance (e.g., under increased loads) and security while complying with its adopted regulatory policies.

Thus, modern SOA software testing tools increasingly provide support for multiple roles, whereby architects can codify policies and rules, developers check for compliance during the test cycle, and support and operations staff can check for compliance issues when problems occur. The new crop of SOA testing tools also increasingly support a range of tests, including functional and regression testing, interoperability testing, and policy conformance. Contrary to traditional software testing tools that inspect code, Web services testing tools deal with the quality of the extensible markup language (XML) messaging layer.

And although both traditional and Web services testing tools deal with syntax, for Web services, team members require a higher-level awareness of business rules and service policies. This is owing to the highly distributed SOA environment, which makes keeping track of changes difficult and underscores the new SOA management complexity.

In fact, change management in pre- and post-application development is essential to filter out redundant changes, prioritize changes, and resolve conflicting changes. Moreover, if a certain message between points A and B doesn’t pass in a real-life scenario, there has to be awareness of what needs to be done to rectify it, now and in the future.

The abovementioned problems inherent in SOA have caused the previously siloed areas to come much closer to each other: software lifecycle management, application performance management, and information technology (IT) governance, with change management acting as a core information source on all changes in the environment. This union should enable companies to discover which Web services and components exist, who the owners are, and which services and components are actually consumed, and by which applications/business processes.

Progress Software Nabs Mindreef

To be better positioned to deliver testing and governance products geared towards continuous testing and validation (to ensure the high reliability and quality of multi-tier, composite SOA applications), Progress Software Corporation recently acquired Mindreef. It is interesting to note the quietness of the event, which was reported only briefly by ZDNet bloggers Joe McKendrick and Dana Gardner.

Mindreef was a privately held firm founded in 2002 by Frank Grossman and Jim Moskun, who leveraged their deep expertise in Microsoft Windows, Java, and device driver debugging and testing to create the Mindreef SOAPscope products for SOA testing and validation. Mindreef was acquired by Progress Software and included in the Progress Actional product group in June 2008.

Prior to being acquired by Progress Software in early 2006, Actional Corporation was an independent leading provider of Web services management (WSM) software for visibility and run-time governance of distributed IT systems in a SOA. Actional’s SOA management products were incorporated under the product name Progress Actional within Progress’ Enterprise Infrastructure Division, and are now a major element of the Progress SOA Portfolio.

In a nutshell, Mindreef has already been wrapped into the Progress Actional product group, since it addresses SOA management at the design and testing phase, while Actional primarily addresses SOA management at the production (run-time) phase (e.g., tracing transactional tables). Thus, Progress now has an expanded solution that addresses the quality and management of the full SOA lifecycle, from early concept and design through go-live implementation, on-boarding of new Web services, and overall SOA production management.

Frank Grossman, former chief executive officer (CEO) and founder of Mindreef, is now vice president (VP) of Technology for Progress Actional, reporting to Dan Foody, who is in charge of Progress Actional. For more information on the acquisition’s rationale, see the frequently asked questions (FAQ) page here.

Since so much product integration is in the planning stages at this point, soon after the announcement of the two recent acquisitions (the other one being of Iona Technologies), Progress hopes to have new slide decks to accompany analyst briefings on virtually all of its products over the next several months. Look for follow-up blog posts from me at that time.

Zooming Into SOAPscope

Designed for easy use by architects, service and support personnel as well as SOA operations managers, the Mindreef SOAPscope product family comprises SOAPscope Server, SOAPscope Architect, SOAPscope Tester, and SOAPscope Developer.

Essentially, Mindreef products collect information about Simple Object Access Protocol (SOAP) transactions and use it to shed light on Web services communications. But while most of such logging tools store data in pesky flat files, SOAPscope stores it in a relational database for ease of use even by the folks who are not necessarily XML and SOAP experts.

Mindreef SOAPscope Server was initially called Mindreef Coral, and was re-released under the current name in mid-2006. Like many software testing tools, this collaborative testing product includes a “play” button whereby Web services are exercised based on specific scenarios. If services for some steps of the process scenario are not available, SOAPscope Server can even simulate them.

The collaborative team lifecycle support comes by means of a “playback” feature that shows what happened at each step along the way, so that different members of the team can inspect for their respective areas of concern. For instance, developers can check for syntax errors, while architects can test if a service that has been invoked many times could still eventually trigger a scenario that violates company policies.

Taming the SOA Beast – Part 2

Mindreef joined the Progress Actional SOA Management product family that provides policy-based visibility, security, and control for services, middleware, and business processes. This acquisition continues Progress’ expansion of its burgeoning SOA portfolio and strengthens the company’s position as a leader in independent, standards-based, heterogeneous, distributed SOA enterprise infrastructures.

Prior to being acquired, Mindreef decoupled some plug-in features from its previously all-in-one SOAPscope Server suite.

One capability was the SOAPscope Policy Rules Manager, which tests compliance with rules such as whether the Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL) headers comply with the WS-I Basic Profile for Web services interoperability. The feature also checks whether the extensible markup language (XML) schema was formed properly, and whether the “contracts” between Web services are valid, so that companies can ensure they won’t break at run-time because of faulty logic.
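Schema conformance checks of this kind can be performed with the standard Java XML validation API (JAXP). The sketch below validates an XML instance against an XSD; it is a generic illustration of the technique, not SOAPscope’s internal implementation:

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

// Generic schema validation via the standard JDK API (JAXP); this is an
// illustration of the technique, not SOAPscope's internal mechanism.
public class SchemaCheck {

    // Returns true if the XML instance validates against the XSD.
    public static boolean isValid(String xsd, String xml) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {   // SAXException is thrown for invalid documents
            return false;
        }
    }
}
```

A WS-I style check layers additional profile rules on top of this basic well-formedness and schema validity test.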

Another plug-in, called Load Check, provides a pre-test simulation of the system’s performance. The underlying idea was to mitigate the bad practice that, when developing Web services-based applications, the load or performance testing tends to be an afterthought that is often compensated for by purchasing extra hardware after the fact and at a hefty price.

Progress Actional + Mindreef

Like its parent, Mindreef has always designed its products as a good fit for third-party IT governance solutions, with the ability to check on whether Web services are well formed and remain consistent with business policies.

Progress does not release the number of customers it has for specific products or as a corporation, although it admits to gaining access to more than 3,000 of Mindreef’s customers at more than 1,200 organizations worldwide. The ideal customers for the combination of Progress Actional and Mindreef SOAPscope are those seeking full life-cycle quality management of their SOA environments, ranging from design through operational deployment.

Mindreef SOAPscope is a recognized testing and validation software product for SOA services at the design stage, while Actional is the market leading SOA management, validation and monitoring software for operational SOA. Thus, the combination of the two provides a solution that is likely to be the first in the market to address the entire SOA lifecycle with SOA quality, validation, and runtime governance.

Progress Actional and Mindreef provide a deep level of SOA management, testing, validation, and run-time governance functionality, but not all organizations that have begun implementing SOA environments yet recognize the need for that functionality. As a result, companies that have felt the significant pain of diagnosing why SOA composite applications have failed (in order to get them rapidly back up and running), or that have discovered rogue Web services within their environments into which they have no visibility, should see the benefit of deploying Progress Actional and Mindreef.

Progress Actional and Mindreef are sold worldwide from offices in North America, Latin America, Europe, and Asia. A complete list of Progress Software offices is available here.

While hardly any player in the market currently has lifecycle SOA quality capabilities equal to those of the Actional and Mindreef combination, traditional competitors for Actional include AmberPoint, SOA Software, IBM, Hewlett-Packard (HP), Layer 7 Technologies, and Computer Associates (CA).

As for Mindreef, while it can also be hard to find a single product that competes head to head with SOAPscope functionally, some other vendors’ functionality is comparable to that found in SOAPscope. Namely, in sales situations, Mindreef sometimes runs across IBM Rational Software and HP/Mercury, and occasionally some of the smaller niche players like Parasoft, iTKO LISA, PushToTest, and Crosscheck Networks.

Forget Not about Oracle Fusion Either

The recent acquisition of former middleware competitor BEA Systems has made Oracle the middleware market leader, at least in the Java world. The idea behind the ambitiously broad Oracle Fusion Middleware (OFM) suite is the following:

* to enable the enterprise applications’ architecture shift to SOA
* to become a comprehensive platform for developing and deploying service-oriented enterprise applications
* to form the foundation for modernizing and integrating the burgeoning Oracle Applications portfolio

Oracle’s middleware product strategy is foremost to provide a complete (unified) and pre-integrated middleware suite that is also modular, standards-based, open, and thus “hot pluggable.” Furthermore, the strategy is to develop and deploy enterprise applications on the Internet via unifying SOA Management, business process management (BPM), business intelligence (BI), enterprise content management (ECM), and enterprise 2.0 capabilities.

The third part of the strategy, achieving the lowest total cost of ownership (TCO) by managing systems, applications, and user identities on low-cost hardware and storage systems, has been too overplayed by virtually all vendors to really ring as differentiating, but it is certainly a worthwhile attempt by Oracle.

Asserting SOA Governance Competitiveness

As for the product strategy for the Oracle SOA Governance suite, as a subset of OFM, it starts with offering an integrated and complete lifecycle SOA governance platform entailing tools, service registry and repository, policy manager, monitoring console, and so on.

Additionally, the goal is to enable visibility into an organization’s service portfolio via the ability to discover, categorize, manage change, audit usage, and monitor Web services. Last but not least, as discussed in Part 1, the ultimate goal is to provide better control over the lifecycle of services by enforcing policy compliance from software development to operations.

But what really impressed me post-acquisition was Oracle’s due diligence and even (atypical) humility in admitting BEA’s advantages (e.g., in terms of Enterprise Service Bus [ESB] and service mediation capabilities) and bundling them with Oracle’s established capabilities of workflow management and Web services orchestration. Other specific areas where BEA had superior technologies were Java virtual machines, transaction processing monitors, and certain security products. Conversely, Oracle has products like BI, ECM, and identity management, for which BEA had no counterparts.

Accordingly, Oracle has stratified the combined Oracle and BEA middleware products into the following three groups:

1. Strategic products – BEA products that are being adopted immediately with limited re-design into OFM, since in most cases no corresponding Oracle products exist. Where corresponding Oracle products do exist, they will converge with the BEA products through rapid integration over the next 12 to 18 months;
2. Continued and converged products – BEA products that are being incrementally re-designed to integrate with OFM. There is gradual integration with existing OFM technology to broaden features, with automated upgrades. Oracle hereby grants continued development and maintenance for at least nine years; and
3. Maintenance (a.k.a. “stabilized”) products – products that even the formerly independent BEA had marked as end-of-life (EOL) due to limited adoption prior to Oracle’s acquisition. Oracle hereby promises continued maintenance with appropriate fixes for five years.

Translating this into the product offerings for Oracle SOA Governance, most of the Oracle and BEA products will end up in the strategic category, starting with BEA AquaLogic Enterprise Repository at the core. It is a repository to capture, share, and change-manage SOA artifacts across the lifecycle, with capabilities like audit trail and metrics, service level agreement (SLA) and policy management, rules and standards definition, WSDL and XML Schema Definition (XSD) schemas, capturing and modeling of business requirements, and dependency management.

For its part, Oracle offers Oracle Service Registry, a standards-based Universal Description, Discovery and Integration (UDDI) v3.0 registry to publish and discover Web services. Furthermore, Oracle Web Services Manager is a policy manager to define and manage security, auditing, and quality of service (QoS) policies on Web services.

Moreover, Oracle Enterprise Manager (EM) takes care of services deployment, staging and approval, change management, retirement and removal. To that end, the Oracle EM Service Level management pack is a management console to monitor service level response times and availability, while the Oracle EM SOA management pack is a management console to monitor, trace and change manage SOA.

The above-mentioned strategic Oracle SOA Governance suite can be bolstered optionally with the business process analysis (BPA), design and modeling capabilities via the partnering IDS Scheer’s ARIS tool, and Oracle’s JDeveloper or open-source Eclipse tool. The latter two tools can also be leveraged for Web services implementation purposes.

The only product slated for maintenance is BEA AquaLogic Services Manager, which is an embedded third-party product (i.e., AmberPoint technology) and is deemed redundant with Oracle EM.

At this stage, Oracle cannot really compete with Progress Actional in highly distributed and diverse IT environments, and thus must continue to develop and market its slated SOA governance technology as heterogeneous. Based on the availability of the Oracle database for Microsoft Windows, I could imagine a similar decoupling of OFM products to also include non-Oracle/Java technologies in the future.

It remains to be seen how the impending acquisition of ClearApp and its planned rolling into Oracle EM will help in this regard. In addition to not limiting the sale of middleware to Oracle or BEA Java-oriented environments, the giant can gain further credibility for interoperability in this market by expanding its SOA governance partner ecosystem.

For its part, Actional’s openness and scalability were recently endorsed by another SOA powerhouse. Namely, Software AG announced in early September that it will embed Progress Actional on an original equipment manufacturer (OEM) basis and rename it webMethods Insight. This means that Software AG will now be selling Progress Actional to its own substantial SOA customer base. This is a huge opportunity for Progress, one it did not have prior to this partnership.

Managing the Aches and Pains of Long Cycle Times: Automating Controls for Pharmaceutical Manufacturers

One of the biggest challenges (or business pain points) for pharmaceutical manufacturers (or life sciences companies) is the long cycles that are required for research and development (R&D) and product approval. This is particularly a challenge for manufacturers of generic drugs, for which cycle times can average 20 months or more (and the full time-to-market period upwards of 12 years).

Why are long cycles a problem?

Simply put, it comes down to the familiar equation that “time = money.” More time needed means more capital spent, and manufacturers watch their bottom lines slip farther and farther away. To begin to formulate a plan to address the issue of long cycle times, it’s important to understand the factors that contribute to this challenge.

Long R&D cycles happen for a number of reasons. One is that there has been increasing need to comply with regulations, including the Food and Drug Administration’s (FDA’s) Title 21 Code of Federal Regulations (CFR) Part 11, for pharmaceutical manufacturers that are employing methods for electronic record-keeping and electronic and digital signatures.

This increasing need often means that additional administrative time must be spent on ensuring that the technical and procedural protocols are set up correctly and doing what they are supposed to do.

Another reason for long cycle times has to do with the need to ensure that all stages of product development are adequately documented for audits. Whether a manufacturer is using paper or electronic methods of data storage, there must be a reliable, consistent, secure, and accessible method of storing all documents related to the research, development, manufacture, and release of all drugs.

Every change to a document must be retained, and the integrity of the versions kept intact. For manufacturers straddling the line between paper-based and electronic methods, all paper-based documents need to be transferred and saved in digital form, a process that can require considerable time for scanning or manually entering data.
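An append-only version store with a digest per version is one simple way to retain every change while keeping the integrity of each version verifiable. The following sketch is an illustrative design, not any particular product’s implementation:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.ArrayList;
import java.util.List;

// Sketch of an append-only document version store: every change is retained
// as a new version, and a SHA-256 digest guards each version's integrity.
// This is an illustrative design, not a specific vendor's implementation.
public class VersionedDocument {
    private final List<String> versions = new ArrayList<>();
    private final List<String> digests = new ArrayList<>();

    static String sha256(String content) {
        try {
            byte[] hash = MessageDigest.getInstance("SHA-256")
                    .digest(content.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : hash) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // SHA-256 is always available
        }
    }

    // Saving never overwrites: each change becomes a new, checksummed version.
    public int save(String content) {
        versions.add(content);
        digests.add(sha256(content));
        return versions.size() - 1;              // the new version number
    }

    public String retrieve(int version) { return versions.get(version); }

    // Verify that a stored version has not been altered since it was saved.
    public boolean verify(int version) {
        return digests.get(version).equals(sha256(versions.get(version)));
    }
}
```

Scanned paper documents would enter this store the same way: each scan is simply the content of a new version.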

What are the business risks involved in longer R&D cycles and product approval?

Fewer products can be developed or manufactured concurrently, which means fewer products get to market. And fewer products to market can mean a decrease in the company’s incoming cash flow (i.e., decreased profits). Additional worry may come from the fact that, with this increase in time-to-market, other competing manufacturers may develop a similar drug and release it sooner, thereby further diminishing profits due to lost market share and a shortened product life cycle. A delayed or lengthened cycle time can seriously affect the return on investment (ROI) for a given new drug or product.

What can help?
A software solution that implements automated controls that address compliance issues, including 21 CFR Part 11.

How does 21 CFR Part 11 relate to product R&D and approvals?

For all of the processes involved in getting a drug to market, strict policies must be established and followed by a company regarding the use of electronic records. Each step of the product R&D and approval processes must be, according to the dictates of 21 CFR Part 11, consistent, reliable, and repeatable; in other words, each version of every document must be archived and easily retrieved for the purposes of inspection or auditing.

But this thorough documentation means that the approval process can be streamlined with automated functionality, as the time needed to send documents to the approving individual(s) will be reduced (with a centralized system, all users may have access to documents, providing they are authorized to do so according to level-specific electronic signatures; also, the system can be configured to send automatic notifications). Consequently, document turnaround time can be reduced, while the authenticity, integrity, non-repudiation, and confidentiality of documents is assured.
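The routing logic behind such automated approvals can be sketched as a simple ordered chain: each electronic signature is recorded in an audit trail, and the identity of the next approver to notify is returned. All names below are hypothetical, and a real 21 CFR Part 11 deployment would need validated procedural and administrative controls on top of any such code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of automated approval routing: each signature is
// recorded in an audit trail and the next approver is identified for
// notification. Names and flow are hypothetical; real compliance also
// requires validated procedural controls around the software.
public class ApprovalRoute {
    private final List<String> approvers;            // ordered approval levels
    private final List<String> auditTrail = new ArrayList<>();
    private int nextLevel = 0;

    public ApprovalRoute(List<String> approvers) { this.approvers = approvers; }

    // Record a signature; returns who to notify next, or null when complete.
    public String sign(String approver) {
        if (!approver.equals(approvers.get(nextLevel))) {
            throw new IllegalStateException("out-of-order signature: " + approver);
        }
        auditTrail.add("signed by " + approver);     // retained for auditing
        nextLevel++;
        return isComplete() ? null : approvers.get(nextLevel);
    }

    public boolean isComplete() { return nextLevel == approvers.size(); }

    public List<String> auditTrail() { return auditTrail; }
}
```

The returned approver name is where an automatic notification (e.g., an e-mail alert) would be triggered, which is what shrinks document turnaround time.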

Furthermore, for the purposes of an audit, the automated system can aid a company by streamlining document retrieval. With a system that helps you organize and maintain accurate records of all processes, time isn’t wasted on following a lengthy paper trail of documents to ensure that changes have been authorized and tracked, and that all paper versions are now available.

However, it is very important to realize that using a software application off the shelf to automate all processes involved in electronic signatures, document archiving and change management, and tracking and auditing, will not automatically render your company compliant with 21 CFR Part 11.

You must also ensure that you configure the system so it provides you with the validation you need to be compliant—you must establish rules and policies for the application that are consistently followed so you can be assured your processes for electronic signatures and data management are compliant. Both procedural and administrative controls must be in place to ensure process compliance.

Software applications that can help pharmaceutical manufacturers with the issues described above include

Ask the Experts: Data Purging and System Migration

* migrating between ERP systems,
* purging ERP data,
* integrating data from acquired companies into ERP and PLM

We thought we’d take this on ourselves—but see the bottom of this post for more resources!

Introduction

When migrating between systems, it is crucial to define the scope of implementation, as well as to outline each stage of the project and the resources that will be needed. A failed implementation will paralyze the operational capabilities of an organization, but the right methodology will help ensure a successful implementation.

In covering the issues related to ERP and PLM integration, we’ll highlight relevant areas of consideration. Furthermore, you’ll learn what steps can be taken to safeguard purging and data retention, which is a legal and mandatory business consideration.

We’ll assume for the purposes of this blog post that a new system exists, and that we are migrating data from an existing legacy system to a new ERP/PLM system. This can be viewed as an in-house system upgrade, or as migration of data from a purchased company.

Purging and Data Retention

When production databases become too large, they impact productivity by slowing access to information, and by extending the time required for system backups or for system restores.

Depending upon the industry (for example, medical, government, etc.), the need for data retention varies based on regulatory compliance. Some industries have long-duration product guarantees, which result in the necessity to retain data.

Archiving has evolved into a discipline known as information lifecycle management (ILM), which helps organizations manage stored data, from creation to disposal, for maximum business value. Management is understandably reluctant to perform data purges due to the unknown operational risks, and purging is therefore often done in stages.

Unstructured data populates file servers and typically includes e-mails, drawings, and user- and application-generated files in hundreds of unique formats. Purging can be by date, by type (internal or external), and by inbound or outbound status. Even so, many IT shops that archive e-mail to a less expensive tier of storage are still unwilling to permanently purge it, for legal or operational reasons.
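To make the purge criteria concrete, a policy like the one described (by date, by type, and by direction) can be sketched as a simple filter. This is a minimal illustration only; the field names, age threshold, and rules are hypothetical, not any vendor's actual policy engine:

```python
from datetime import datetime, timedelta

# Hypothetical purge policy: threshold and rules are illustrative only.
PURGE_AGE = timedelta(days=7 * 365)   # e.g., purge items older than ~7 years
PURGEABLE_TYPES = {"internal"}        # external mail may be under legal hold

def is_purgeable(item, now=None):
    """Decide whether an unstructured item (e.g., an e-mail) may be purged.

    `item` is a dict with 'date' (datetime), 'type' ('internal'/'external'),
    and 'direction' ('inbound'/'outbound').
    """
    now = now or datetime.now()
    old_enough = (now - item["date"]) > PURGE_AGE
    type_ok = item["type"] in PURGEABLE_TYPES
    direction_ok = item["direction"] == "outbound"  # example rule only
    return old_enough and type_ok and direction_ok

# An old internal outbound message qualifies; a recent external one does not.
msg = {"date": datetime(2000, 1, 1), "type": "internal", "direction": "outbound"}
```

In practice each rule would be driven by the retention schedule legal has signed off on, not hard-coded constants.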

The usual approach is to transfer data from active tables to online historical backups on a monthly basis. Since historical data is essentially invariant for long periods, it does not need to be backed up again if it has not changed. The backup facility may also write a second copy to non-rewritable storage. While creating archives, an accompanying step is often taken to load summary data into a data-warehousing product for business intelligence studies. Summary data allows a look at a product’s sales figures for a given time period by examining a single entry in a table rather than summing up individual sales order lines.
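The summary-data step can be illustrated with a small sketch that rolls individual sales order lines up into one row per product per month, so a BI query reads a single entry instead of summing order lines. The field layout here is hypothetical:

```python
from collections import defaultdict

# Hypothetical order lines: (product, month, quantity, amount).
order_lines = [
    ("hammer", "2008-01", 12, 60.0),
    ("hammer", "2008-01", 24, 120.0),
    ("saw",    "2008-01",  5, 75.0),
]

# Build one summary entry per (product, month) so BI queries read a
# single row instead of summing individual sales order lines.
summary = defaultdict(lambda: {"qty": 0, "amount": 0.0})
for product, month, qty, amount in order_lines:
    row = summary[(product, month)]
    row["qty"] += qty
    row["amount"] += amount
```

A real warehouse load would do this aggregation in the database or ETL tool, but the principle is the same: pre-compute the totals once, query them many times.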

System Migration

Defining Your Needs
A migration project often starts with a feasibility study, which takes place before the implementation project gets off the ground. The approval process can involve the board of directors or a high-ranking officer who sponsored the feasibility study. A lengthy process then follows to build a cutover plan. This is necessary because errors or oversights at the beginning of an implementation project can be very costly to the organization down the line.

The feasibility study will address the following considerations:

System Upgrades
Upgrades are the migration from an older version of a product to a newer one. The vendor will usually provide a set of utilities (for example, to move from version X to X+1), but not always. A “not-always” situation arises when migrating from version X to version X+3 or X+4, where intervening versions must be jumped over.

Interfacing with a new system requires maintaining the old system alongside major testing of the new one. Don’t neglect the importance of ensuring your web interfaces are also up-to-date. Remember that the old system will hold valid data for some time.

The switchover may be done at midnight, with the decision taken to remain on the new live system rather than fall back. In this case, you will need to address what to do with your web site interfaces if something goes wrong. For this reason, you should update these interfaces only once you are sure the implementation is a success.

(Yes, some go-lives fail the first time. This happens mainly with large systems, because of their more complex design.)

Historical Data Migration
Normally, ERP systems have a built-in provision for creating historical data. When is data transferred to historical archives? Some businesses archive data once it is 24 months old; others wait until 36 months.

The move to historical tables reduces the number of records in “active tables” so that system response times remain reasonable and system backups can complete within the window of time reserved for the operation.
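A monthly job that moves records past the retention cutoff from an active table to a history table might look like the sketch below, here using SQLite for a self-contained demonstration. The table names, columns, and schema are hypothetical; a production ERP would do this with vendor-supplied archiving utilities:

```python
import sqlite3

def archive_old_orders(conn, cutoff_date):
    """Copy orders older than cutoff_date to the history table, then
    delete them from the active table, in one transaction."""
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "INSERT INTO orders_history SELECT * FROM orders WHERE order_date < ?",
            (cutoff_date,))
        conn.execute("DELETE FROM orders WHERE order_date < ?", (cutoff_date,))

# Demonstration with an in-memory database (hypothetical schema).
# ISO-8601 date strings compare correctly as text.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, order_date TEXT)")
conn.execute("CREATE TABLE orders_history (id INTEGER, order_date TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "2005-06-01"), (2, "2008-12-01")])
archive_old_orders(conn, "2007-01-01")  # e.g., a 24-month cutoff
```

Doing the insert and delete in a single transaction matters: a crash between the two steps must not leave a record in both tables or in neither.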

Documentation and Training
This is a critical part of the migration or upgrade process. The usual practice is to make the key users responsible for the training of their respective groups. Any vendor-based courses deemed necessary are held a week or two before going live, sometimes on-site and at other times off-site.

The last stage prior to “going live” is user acceptance testing (UAT) where the client tries out the system to ensure that everything is working properly and that the developer has fixed all the application bugs.

Once the cutover has been completed, employees begin to work with the new system, performing their day-to-day tasks. If problems arise at this point, the key users/champions will be informed immediately. It is likely to take some time for the solution to work properly; in the meantime, workarounds will be put in place until each problem is resolved.


Post-migration Review
After go-live, a post-project assessment is performed. This assessment is a checkpoint to determine if the system’s performance aligns with the project charter, as well as a way to audit the vendor. Did the vendor meet the requirements stated in the request for proposal (RFP)? If not, additional enhancements may be recommended as a secondary phase of the project. (Often it is not possible to deliver all software in the migration process.) The priority is given to completing migration rapidly and with essential software only, to mitigate the financial drain surrounding migration activities. In the post-implementation period, the applications or reports that are not mission-critical are evaluated, and coding can be scheduled for these deliverables.

The finance department examines the costs; if the migration is amortized over several years, implementation costs are accrued and the amortization life begins. There may be a period of time when transactions such as sales order entry are “frozen” (i.e., orders are manually held for entry into the new system after the implementation).

Data Integration Considerations
Integrated system configuration is an area common to both ERP and PLM. Data compatibility with the combined PLM/ERP system requires common units of measure (UoM) and decimal precisions.

A UoM converter identifies how items are stored and how quantities are converted. Converters are defined as pairs of units with multipliers (e.g., hammers are purchased in packages of 12, but sold individually). Sample UoM conversions are “dozen to each,” “gross to dozen,” “box24 to ea,” etc.

In the migration process, the source quantity is converted to the target quantity, with a corresponding change to costs and re-order amounts. If the target quantity is not defined, the item will be rejected and have to be reprocessed after corrections are made.
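A minimal sketch of such a converter, assuming a simple lookup table of (source, target) pairs with multipliers; the unit names and values are illustrative, and an undefined pair is rejected so the item can be corrected and reprocessed, as described above:

```python
# Hypothetical UoM conversion table: (source, target) -> multiplier.
UOM_CONVERSIONS = {
    ("dozen", "each"): 12,
    ("gross", "dozen"): 12,
    ("box24", "ea"): 24,
}

def convert_quantity(qty, source_uom, target_uom):
    """Convert a source quantity to the target UoM, or raise if the
    conversion pair is undefined (the item must then be corrected
    and reprocessed after the table is fixed)."""
    if source_uom == target_uom:
        return qty
    try:
        return qty * UOM_CONVERSIONS[(source_uom, target_uom)]
    except KeyError:
        raise ValueError(
            f"No conversion defined from {source_uom!r} to {target_uom!r}")

# One dozen hammers, stored as individual units:
result = convert_quantity(1, "dozen", "each")
```

Costs and re-order amounts would be scaled by the inverse multiplier in the same pass, since a price per dozen is not a price per each.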

Other preparatory work is required before importing the source data: part numbers, customer details, bills of materials, and finance mappings from source-system values to target-system standards. The system has to be able to incorporate these changes as part of its functionality.
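The mapping work from source-system values to target-system standards can likewise be sketched as lookup tables, with unmapped values collected for correction before the import proceeds. All account codes and field names here are hypothetical:

```python
# Hypothetical finance-account mapping from legacy codes to target codes.
ACCOUNT_MAP = {"4000-SALES": "40100", "5000-COGS": "50100"}

def map_records(records, mapping):
    """Translate legacy codes to target codes; return (mapped, rejected)
    so rejected records can be corrected and re-imported."""
    mapped, rejected = [], []
    for rec in records:
        code = mapping.get(rec["account"])
        if code is None:
            rejected.append(rec)
        else:
            mapped.append({**rec, "account": code})
    return mapped, rejected

legacy = [{"account": "4000-SALES", "amount": 100.0},
          {"account": "9999-UNKNOWN", "amount": 5.0}]
mapped, rejected = map_records(legacy, ACCOUNT_MAP)
```

The same reject-and-reprocess pattern applies to part numbers and customer details: nothing enters the target system until every source value has an agreed-upon mapping.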

A number of departments are affected by the system implementation. Coordination of open sales orders, open purchase orders, and open production orders must be completed prior to go-live.

Open production orders are works in process; existing open production orders are allowed to complete on the old system in order to maintain a level of operational control for the business. New material requirements planning (MRP) runs may be done in the new system to ensure inventory is correct, taking into account the open production orders active on the shop floor.

Sage ERP X3 Version 6: A Sneak Peek

We recently got a sneak peek of the new version of Sage ERP X3, which is scheduled to be released in October 2009. We were given a detailed demonstration of some of its core functional changes and advancements, and each of us has summarized our findings separately below.

Gabriel Gheorghiu’s Take on Version 6

Version 6 includes features that are new to X3 and quite rare for an enterprise resource planning (ERP) solution for midsized companies.

Extended and Integrated Business Functionality Coverage

We saw it. It’s all in there—from purchasing to sales, to inventory and accounting, to customer relationship management (CRM) and production. Of course, it would take hours to see how it all works. To track performance and key performance indicators (KPIs), X3 uses the Crystal Reports generator, which houses a library of 400 reports and Business Objects technology for dashboards and Web access.

New Technologies for Collaboration and Monitoring

Integration with Microsoft Office allows users to create, edit, and save files without leaving the system. This also makes collaboration between users easier since all files are saved in the database. Another interesting option is that customers and vendors can interact with the system.

Complex but Reliable Security and Workflows

Workflows are really easy to define and modify: a click or drag-and-drop will suffice for simple processes. The Visual Process Designer comes with over 100 predefined processes. The level of complexity supported is said to be very high; this cannot easily be demonstrated, but it is worth mentioning. Security is essential when processes become complex, and X3 offers Web encryption as well as access control by user, group, and role.

Flexible for Growth and Changing Needs

The modularity of the system allows companies to implement only the modules that they need (others can be added later). The Sage ERP X3 fourth generation language integrated development environment (4GL IDE) can be used to create custom applications, which can be integrated within the product. This is another feature that cannot be demonstrated during an online demo, but it’s worth mentioning since it’s one of the features that sets X3 apart from its competitors.

Finally, here are the major enhancements in Version 6 compared to Version 5 (see graph below):
- Finance has been entirely rebuilt, and the module’s functionality is now complete. The main changes relate to multi-country, multicurrency, and multi-legislation compliance, as well as advanced fixed assets and advanced budgeting.
- E-commerce functionality has been greatly improved, as has the functionality for distribution (shipment preparation and landed cost), work orders (WO), and manufacturing (make-to-order and subcontracting).
- Functionality such as sales, purchases, stock management, material requirements planning (MRP), business intelligence (BI), and finance is now fully covered.

functional-scope-x3.PNG

Khudsiya Quadri’s Take on Version 6

New versions of existing software are often not highly regarded. Most organizations fail to see the benefits of software upgrades because there are no magical answers to these questions: Is changing to a new system beneficial? Will it meet the business requirements?

The following should be considered by users and potential Sage ERP X3 system buyers:

Multi design/Functionality Changes

Sage ERP X3’s financial module has been revamped in Version 6 to provide international coverage. Companies will benefit from the following suites: budget and cost accounting; fixed assets management; financials; commitment management; and personal accounting. Version 6 is capable of handling transfers and reporting information internationally between companies and subsidiaries. It makes multinational management operations seamless by combining decentralized information with the ability to handle multiple currencies, languages, companies, sites, legislations, and country-specific accounting rules and standards, without losing audit functionality.

Sage’s demonstration team also highlighted that Version 6 has built-in international capabilities, designed especially for midsized organizations. Their purpose is to lower the complexity and bring global operations together within the system. The pre-built settings containing country-specific standards and regulations make sharing common data and processes between different locations and sites easier. This speeds up the implementation process across multinational organizations with reduced IT costs.

Sage X3 Version 6 has very rich ergonomics and visual processes. The look and feel across all functional areas is the same, regardless of whether it is Web client- or server-enabled. The user interface (UI) has multiple graphical presentations with drag-and-drop capability for hierarchical datasets. Users can see agenda, spider-web, and Gantt presentations of data elements in real time, and can drill from the presentation straight into the transactional view. The Sage Application Framework for Enterprise Technology (SAFE X3) visual process comes with over 100 standard work procedures in graphical view, with one-click access to the underlying system functions, showing how each process is set up, with its transactions and entities linked together for a complete visual of the process. Organizations can build their business processes in the process designer tool and customize them without having to change the source code.

Sage X3 Version 6 is compatible with Windows, Linux, and Unix, with SQL Server or Oracle, and with client/server or Web access, regardless of the architecture the organization currently has in place.

Scalability/Integration/Web Power

As an organization’s business grows, its systems need to grow with the company’s business needs (without increasing cost). Sage ERP X3 Version 6 doesn’t require add-ons; the ERP software comes with all the functionality needed by a midsized company (workflow, batch server, database administration, reporting tools, BI, and security management).

The application allows companies to systematize and prioritize their global development by business areas and regions. Companies can scale up from 10 to 1,000 users because they can use the system from any location (the ERP system can be accessed remotely or via a Web browser).

Collaboration between partners, suppliers, and customers happens in real time, which helps manage end-to-end processes more cost-effectively. With enhanced communication methods and real-time information availability, decision-makers can respond more quickly to customers, suppliers, and vendors. In the demonstration, Sage presented a variety of scenarios. In one, suppliers or authorized vendors had secure access to the system in order to find information about upcoming requirements or changes reflecting demand fluctuation from customers. This business process-enabled capability creates end-to-end visibility for all parties involved, reducing waste and eliminating gaps in the process.

Throughout the presentation, multiple business processes were integrated within Sage X3 Version 6 (order entry, shipment delivery, etc.), and all tasks were seamlessly tracked in real time. Alerts were automatically generated if any issue arose, which helped resolve the problem without jeopardizing business performance. The embedded workflow functions enable organizations to automate information flow inside (and outside) the organization based on business process requirements.

Application Platform

Sage ERP X3 is built on SAFE X3 technology, a common development platform for all Sage applications for midsized enterprises. The main features of this application platform are Web service-enabled applications (.NET; extensible markup language [XML]; universal description, discovery, and integration [UDDI]; Web Services Description Language [WSDL]; simple object access protocol [SOAP]; etc.), along with an enhanced BI engine, reporting tools, workflow engines, database requesters, import/export capability, and a collaborative UI and portal (either Web or Windows). SAFE X3’s key feature is the 4GL IDE, which is used both for constructing application modules and for customizing Sage ERP X3 Version 6 to each organization’s unique requirements during the implementation phase, without needing to change the source code. This development method supports progressive application deployment and is provided at no extra cost to companies using Sage ERP X3 Version 6. The IDE comes with documentation and delivery tools, and it is independent of the execution environment and UI; therefore, changes made within the 4GL are tested and vetted before being released into the live environment. Sage ERP X3 is moving towards an Eclipse-based 4GL editor, providing customers with a more enhanced development platform. Below is a picture of the SAFE X3 framework:

safe-x3.PNG