Is Papyrus ECM, BPM, CRM, EAI or a Mashup?

In my post ‘Redefining BPM? Who wants that?’ I discussed the problem of fragmented market definitions by analysts. To shorten my posts and to separate opinion from product-related discussions, I want to add the following here.

Until today, if a product does not offer a flowcharting tool it is clearly not considered BPM. The Papyrus Platform has offered state/event-driven and tool/material-controlled processes, mostly focused on content, since 2001: that’s not BPM, I was told. It was not yet Design-by-Doing (adaptive in Jim Sinur’s (Gartner) diction) then, but processes could be dynamically changed at runtime. We added user-definable business rules in 2003, but no, that was not BPM either, even though it clearly was Design-by-Doing. In 2007 we introduced the User-Trained Agent that kicks off activities based on a machine-learning principle, and that is Design-by-Doing ALL THE WAY. Nope, we were told by analysts and customers – no flowcharts means it is not BPM. So now that we also offer the BPMN designer as one option to define structured processes, are we suddenly BPM? Nothing else has changed. Is that now good or bad? Should we not provide the designer so that we can be ACM? Maybe someone will now consider us ‘Pure-Play BPM’ as well? Oh my god, the implications of that. Seriously, that whole game is utterly senseless.

Question: As soon as you empower the process owner and his team to execute any way they feel works and you get the most efficient execution, does anyone care if they use a flowchart or if it is called workflow, BPM or collaboration? Absolutely not. BPM today is mostly bureaucracy, linked to inhumane Measure-to-Manage management paradigms such as Six Sigma and the Balanced Scorecard. If you focus on errors and numbers, that’s what you get. No more. By what means would that improve outcomes for people – employees and customers? Well, it doesn’t.

So why is everyone trying to expand BPM now? They do not want to admit that BPM is possibly not the final wisdom it was proposed to be for so long. Now that there is a movement that they know in their guts will kill old-style BPM, they at least want to retain the name, because then they won’t have to admit to having been wrong. I see history repeating itself. When we were the first to introduce printer-independent, graphically designed, dynamic document formatting in 1994, a customer got up really upset: “Why are you doing this? Forms worked fine, and as soon as our competitors pick this up, we will have to do it as well!” The same thing is happening now. I actually had someone ask me at the process.gov conference in Washington: “Why are you rocking the BPM boat? Once someone starts to do Adaptive Processes, we will have to follow along and all the money we spent on BPM will be wasted.” Sorry, guys – I told you so for a long time. Now the time has come.

I for my part don’t really care whether the solutions we offer with the Papyrus Platform are considered BPM, ACM, ECM, CRM, EAI or Mashups. And in fact, it should not matter to our customers either. Analysts do not make our life easier, but there are those highlights that make my day. While stuck in Washington due to the ash cloud over Europe last week, I used the time to give a two-hour LIVE DEMO of our Papyrus Platform to Mike Gilpin and John Rymer of Forrester Research. If you look them up you will note that they cover APPLICATION DEVELOPMENT and not BPM. While they do not endorse products this way, I still want to share what they said: “Max, you told us for two years that you have implemented what Forrester calls ‘Dynamic Business Applications’ and finally this demo has convinced us that what you do is unique and very powerful and matches with our concept.”

So what do I do now? Dump all the other TLAs and jump onto that bandwagon? I guess not. We will simply continue to spend our money on developing what our customers need and not on advertising or bandwagons. I am pretty sure that our customers will appreciate it in the long run. Yup, I am that naive …

Mastering The Unpredictable

Recently I co-authored a book on ACM (Adaptive Case Management).

Many current implementations of process and case management solutions are at odds with modern management concepts. While that applies to all workers, it is especially relevant for highly skilled knowledge workers. Motivation is achieved by empowering people to be valuable team members rather than through command-and-control-oriented process implementations. Adaptive case management sits at the center of gravity for process, content, and customer relationship management and therefore plays a key role for effective execution toward business goals.

While ACM is about bringing the benefits of adaptability to existing knowledge workers, I propose to expand that into an “Adaptive Process” that, combined with an empowerment management paradigm, turns more production workers into knowledge workers rather than just automating the production workers’ work.

There is an obvious need for dynamic processes that BPMS vendors are already addressing. The reality of BPM shows that it is very difficult to analyze and simulate business processes top-down and link them to KPIs in a continuous improvement cycle. Measure-to-manage optimization is counterproductive to improvement and innovation. Only empowered actors can use their intuition and experience for sensible action. The dynamics of the economy require a self-organizing structure that is resilient to fast changes through its ability to adapt.

Agility cannot be enforced by methodology, and it is not a product feature. It can only be achieved through the agile mindset of management who will put the right technology in place that empowers agile employees. Process maturity is not about how well processes control employees, but how much process control is given to employees to achieve goals and outcomes.

Adaptive process technology exposes structured (business data) and unstructured (content) information to the members of structured (business) and unstructured (social) organizations to securely execute—and continuously adapt with knowledge interactively gathered during execution—structured (process) and unstructured (case) work in a transparent and auditable manner.

You can find out all about it here: ‘Mastering The Unpredictable’

ISIS Papyrus WebArchive Client for iPhone

We are very excited to announce the availability of the ISIS Papyrus WebArchive Client for iPhone on the Apple App Store, built using our Papyrus EYE Mobile technology. It enables mobile access to documents stored on a remote Papyrus WebArchive. These documents can be generated by any kind of application or can be part of a business process or case file. Users can access business documents, add remarks (stickers) and locate the related person’s address. The user can upload any file from his iPhone to the WebArchive, including pictures taken with the iPhone. If the user is properly authorized, he can change the state of viewed documents and thus take part in business processes without the need for a particular workflow client! ISIS Papyrus also announces its plans to make all Papyrus EYE Mobile applications available for Windows Mobile 7 and Android.

Papyrus WebArchive opens the world of mass customer document distribution from mainframe or Unix servers to the corporate intranet and also to the MOBILE world. Corporations that regularly address a large number of customers with mass printing now have the option to offer company-wide document access, as well as direct customer access to those same documents via Internet technology as a value-added service. This functionality is an essential feature for CRM (Customer Relationship Management).

Applications for Banking, Insurance and Telecoms:

The WebArchive can be utilized to distribute customer documents and bank statements for collection by the customer. Customer folders can be defined on a number of WebArchive servers, which can be accessed by internal and agent staff. Any customer query can be answered immediately. Customer documents can be printed, faxed or e-mailed as a copy of the original document file. Customer billing information is not only available to company personnel, but also to the customer himself via the Internet and through mobile devices.

Features and Benefits

  • Completely secure access protocol without a browser!
  • Access to document inbox from mobile
  • Change document state and thus case state
  • Alternative or value added services
  • No duplicate document generation
  • Constant quality for print and Web presentation
  • Reduced print and mail costs
  • Link to other Web based services
  • View AFP documents via lossless conversion to PDF

Papyrus WebArchive enables the use of mainframe and client/server mass-produced documents in Web-based intranets or on the Internet, and now on mobile devices, starting with the iPhone. Supported input formats are AFP documents or line-mode files with mixed Xerox DJDE or AFP controls, SAP R/2 and R/3, and any other format supported by Papyrus DocEXEC. The z/OS JES2/3-connected Papyrus Host component provides transparent document transfer from the print queue to the WebArchive. Processing of the documents can be performed on the mainframe or the server as required.

ISIS Papyrus WebArchive Client for iPad available in May 2010!

ISIS Papyrus becomes OASIS Foundational Sponsor

Recently, ISIS Papyrus Software has decided to join OASIS, the Organization for the Advancement of Structured Information Standards, as a Foundational Sponsor.

OASIS is a not-for-profit consortium that drives the development, convergence and adoption of open standards for the global information society. The consortium produces more Web services standards than any other organization, along with standards for security, e-business, and standardization efforts in the public sector and for application-specific markets. Founded in 1993, OASIS has more than 5,000 participants representing over 600 organizations and individual members in 100 countries.

For many this may come as a surprise. I have been fairly outspoken when discussing the benefits of standards and made it clear that standards are relevant to us only when they produce a substantial benefit for the business user. Otherwise they just cost money and hold back innovation. One of the reasons to join was the creation of OASIS CMIS (Content Management Interoperability Services), a standardized Web services interface specification that will enable greater interoperability of Enterprise Content Management (ECM) systems. CMIS uses Web services and Web 2.0 interfaces to enable rich information to be shared across Internet protocols in vendor-neutral formats, among document systems, publishers and archives, within one enterprise and between companies.
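To make the interoperability point concrete, here is a minimal sketch of talking to a CMIS-compliant repository from the client side, using the open-source Apache Chemistry cmislib library rather than any Papyrus component; the endpoint URL and credentials are placeholders, and the exact import path may differ between cmislib versions.

```python
# Illustrative sketch: querying a CMIS-compliant repository with the
# Apache Chemistry cmislib client. The URL and credentials below are
# placeholders, not an actual Papyrus WebArchive endpoint.
from cmislib import CmisClient

client = CmisClient('http://example.com/cmis/atom', 'admin', 'secret')
repo = client.defaultRepository

# CMIS defines a SQL-like query language over standardized metadata,
# so the same statement works against any compliant ECM system.
results = repo.query(
    "SELECT cmis:name, cmis:objectId FROM cmis:document "
    "WHERE cmis:name LIKE 'Invoice%'")

for doc in results:
    print(doc.properties['cmis:name'], doc.properties['cmis:objectId'])
```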

We announced our participation in the CMIS standard over a year ago and are well advanced with its implementation and testing. In this process we found that we should take a stronger shaping role in its creation, as otherwise it is just the large vendors who dominate such standards to fit their own purposes, however democratic the OASIS process may be. Democracy only works if you go to vote. So here we are!

We also announced some time ago that we will put more effort into supporting XML formats, despite all their drawbacks and problems. We will only do this for external interfaces, because our own internal formats are up to TWENTY TIMES more efficient than any XML format would be.

This is a substantial step for our business, giving our customers the assurance that we do not only support market standards such as AFP (for 22 years) and PDF, but also Open Standards!

NoSQL and Elastic Caching in Papyrus

Mike Gualtieri posted on his Forrester Research blog on Application Development about NoSQL and Elastic Caching. Quote: ‘The NoSQL idea is pretty simple: Not all applications need a traditional relational database management system (RDBMS) that uses SQL to perform operations on data. Rather, data can be stored and retrieved using a single key. The NoSQL products that store data using keys are called Key-Value stores (aka KV stores).’ Mike sees the difference as: ‘Ultimately, the real difference between NoSQL and elastic caching may be in-memory versus persistent storage on disk.’
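To make the key-value idea tangible, here is a deliberately trivial sketch of the access pattern Mike describes – a plain in-process store with optional dumping to disk; it has nothing to do with the Papyrus kernel and is for illustration only.

```python
# Illustrative only: the key-value access pattern described above,
# reduced to an in-process dictionary with optional persistence.
import json

class KeyValueStore:
    """Store and retrieve opaque values by a single key."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

    def dump(self, path):
        # Persist to disk; an elastic cache would instead keep the data
        # in memory, replicated across many nodes.
        with open(path, 'w') as f:
            json.dump(self._data, f)

store = KeyValueStore()
store.put('customer:4711', {'name': 'Jane Doe', 'segment': 'retail'})
print(store.get('customer:4711'))
```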

I already posted about the powerful clustering and caching algorithms of the Papyrus Platform some time back. It was now interesting to read about combining NoSQL and Elastic Caching. The Papyrus Platform uses both concepts at its lowest layer to support the metadata repository, the rule engine, and the distributed, object-relational database and transaction engine. Even the strict security layer and the easy-to-use thick- and thin-client GUI frontends benefit from the powerful object replication and caching.

  • Reliability and Scaling: Papyrus offers the benefits of reliability and scaling through replication. Persistence and storage management concepts are defined per object type and node type. Data can be spread across thousands of nodes. User PCs can also have their own local node and storage. That will even be true for mobile phone users once our mobile kernel becomes available later this year for iPhone, WinMobile, and Symbian.
  • Fast Key-Value Access: Papyrus supports straight key-value access but also PapyrusQL object-relational access (similar to XPath), offering query and search across data in widely distributed KV storage nodes. Those nodes can also be offline (dumped to tape or DVD).
  • Distributed execution: Papyrus executes object state engines, methods (implemented in PQL), events, and rules. Application deployment is automatic, either to the local node where the data resides or to any other chosen node. It does not take developers (clever or not) to distribute the load across multiple servers.
  • Change of data structures: Thanks to Papyrus WebRepository and its class versioning, we can add fields to objects without the need to restructure database tables. New instances will simply have the new fields. Data storage IS NOT in XML format because its parsing performance is dreadful. Papyrus uses field-length-keyed, hex-codepaged strings that can be parsed 20 times faster (see the sketch after this list).
  • Latency: Papyrus can use transient objects that are not saved to disk when the data does not have to be persisted. This significantly reduces the latency of data operations. In-memory operation is thus not a downside for large or persistent objects, because it can be chosen per object type (class or template).
  • Reliability: Papyrus provides distributed caching with data replication algorithms to store the data on multiple nodes. If one of the nodes goes down, the load balancer in V7 will move the user session to another node and continue with the proxy objects there. A more efficient object distribution for an HA cluster will be available in Q4 2010.
  • Scale-out: With Papyrus you can add and remove nodes during operation. Currently the application can choose how the objects are distributed across nodes. The next release in Q4 2010 will provide this distribution at the system level as part of the backup and recovery procedure.
  • Execute in data location: Using distributed code execution, developers can distribute the workload to where the data resides rather than moving the data to the application. Execution of methods on the owner node of the tool is the basic functionality. Full distribution is no problem with PQL.
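As a rough illustration of why a length-prefixed field encoding parses so much faster than XML: the reader jumps from field to field using the length headers instead of scanning for tags and delimiters. The actual Papyrus wire format is not public, so the layout below is invented purely for illustration.

```python
# Conceptual sketch only: a length-prefixed field encoding. The layout
# is made up for illustration and is NOT the Papyrus internal format.
def encode(fields):
    # Each field is written as <4-digit length><value>.
    return ''.join('%04d%s' % (len(v), v) for v in fields)

def decode(buf):
    fields, pos = [], 0
    while pos < len(buf):
        length = int(buf[pos:pos + 4])        # fixed-size length header
        fields.append(buf[pos + 4:pos + 4 + length])
        pos += 4 + length                     # jump straight to the next field
    return fields

record = encode(['4711', 'Jane Doe', 'retail'])
print(record)          # 000447110008Jane Doe0006retail
print(decode(record))  # ['4711', 'Jane Doe', 'retail']
```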

Enterprise application developers and architects do not need to create architectures with the above features themselves, as they are embedded in the Papyrus Platform’s peer-to-peer kernel engine. Papyrus thus provides all the benefits of NoSQL and Elastic Caching without the technical complexity:

  • Achieve savings by reducing RDBMS licenses and maintenance.
  • Add a scaling layer in front of databases, SOA or MQ messaging.
  • Build Web applications with shared session and application data.

BPMN/XPDL Execution in Papyrus

Those who follow my blogs might already be bored with my frequent bickering about process management. That I am not the only one to criticize the BPM market can be seen in Terry Schurter’s paper on the BPM State of the Nation 2009.

The worst thing I could do is to complain about something that I do not know much about. Therefore I would like to show you that we at ISIS Papyrus are no strangers to process management concepts, as easily proven by this announcement of the BPMN/XPDL Editor of our Papyrus Platform. I will not go into the details of either BPMN or XPDL here.

Keith Swenson is the authority on XPDL and BPMN and covers the relationship in this post: The Diagram is the meaning.

Let me just say that BPMN is a modeling notation for designing processes, and XPDL is a superset that also contains the graphical features of the actual drawing. Therefore we decided to cover both in the Papyrus Platform. Why did we do that if I am so opposed to BPM? As you might know, my opposition is mostly related to the huge, disconnected analysis and optimization process bureaucracy. Therefore we defined standard BPMN/XPDL to be used by our execution engine:

BPMN/XPDL in the Papyrus Platform can be created and edited while you work and executed AS-IS.

It is fully executable by linking it with the UML data models, content artifacts and Natural Language Rules defined in our Papyrus WebRepository. BPMN/XPDL is also stored in the WebRepository, using Papyrus’ change management and automatic, distributed deployment capability. All additional logic necessary is cleanly encapsulated in those classes and is not created by converting BPMN into BPEL or by expansion with Java. BPMN in Papyrus is mostly used to define sub-processes in our Adaptive Process concept. The user interface is handled by our Papyrus EYE widgets, so there is no XSD/XSLT mapping and no Ajax forms programming. Business data is simply accessed through the UML classes linked to our Service Adapters (SOA and others). Finally, BPMN can be used in an Adaptive Process environment and allows 100% runtime editing.
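For readers who have never looked inside an XPDL file, the sketch below lists the activities and transitions of each process definition using only the Python standard library. The element and attribute names (WorkflowProcess, Activity, Transition, From, To) come from the XPDL schema; the file name is hypothetical, and this is of course only an inspection tool, not the Papyrus execution engine.

```python
# Illustrative sketch: reading process structure from an XPDL file.
import xml.etree.ElementTree as ET

def local(tag):
    # Drop the XML namespace so the code works across XPDL versions.
    return tag.split('}')[-1]

def summarize(xpdl_path):
    root = ET.parse(xpdl_path).getroot()
    for process in (e for e in root.iter() if local(e.tag) == 'WorkflowProcess'):
        print('Process:', process.get('Name') or process.get('Id'))
        for elem in process.iter():
            if local(elem.tag) == 'Activity':
                print('  Activity:', elem.get('Name') or elem.get('Id'))
            elif local(elem.tag) == 'Transition':
                print('  Transition:', elem.get('From'), '->', elem.get('To'))

# summarize('claims_process.xpdl')   # hypothetical example file
```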

Here is the Papyrus Platform BPMN/XPDL 2.0 Designer:

AFP versus PDF/VT

Immediately after we published our free Papyrus AFP Viewer, the question arose why we are still so committed to AFP and do not, for example, rather use PDF. While we do support PDF in all its variants, let me explain our reasons for supporting AFP:

There are currently five different ISO standards for PDF in different stages of coordination. Until a few years ago there was no official PDF documentation; Adobe Acrobat defined what was acceptable PDF functionality. AFP is not an ISO standard, but it was fully published by IBM over 20 years ago. AFP evolved from transaction printing needs and PDF from Postscript rendering for offset printing. AFP was designed to create high-volume variable documents, and Postscript/PDF was designed to produce the highest print quality for an offset print press. AFP will never be as flexible for the highest quality of graphic arts, and PDF will never be ideal for variable data printing. The question is not one of document rendering language abilities but one of the overall needs of a business.

Task Force 3 of ISO TC130 Working Group 2 is the committee responsible for specifying and advancing PDF requirements and developing the ISO 16612-2 (PDF/VT) standard. The standard is entering the DIS balloting stage. In mid-2008 Adobe announced the PDF Print Engine 2, which will support it. PDF/VT will support the graphics model of PDF 1.6, which includes transparency, ICC-based color management, and layers.

In a PDF/VT workflow, conversion to Postscript is no longer required, as the variable-data-printing software generates output in PDF/VT format, and a Raster Image Processor (RIP) that is capable of interpreting the PDF/VT format has to be used for print production. It means that a VERY specific printer is required. Once you are in PDF/VT, you have to print to such a printer. From AFP it is very easy to print to any printer on the market, including PDF.

Variable-data-printing workflows based on PDF/VT are HOPED to produce output that is more predictable than output generated by the variable-data-printing workflows in use currently. Current workflows require that PDF transparency be flattened, fonts be converted to outlines, device-independent colors be converted to device-dependent colors, and spot colors be converted to process colors. Converting an RGB digital photograph to CMYK constrains the color for output to a device.

So PDF/VT solves some problems of PDF/Postscript, but what about the complex variable-data issues? Like VPS and VIPP, PDF/VT will enable one-time rendering and caching of the static text & graphics in the RIP for a variable-data print run. This allows documents to be produced faster than would be possible if the code for the static text & graphics were sent to the RIP/DFE over and over, once for each document in the print run.

PDF/VT is not practical for high-volume variable data printing for financial institutions, with or without Transpromo, because one has to generate an archive copy of the document at the same time. I have not been able to find out exactly what kind of positioning logic is available in PDF/VT, but I am pretty sure that it is in the area of VPS and VIPP, which are basically just form fillers and do not support dynamic page breaking with complex tables. Therefore the complex document has to be created in the PDF print file and the variable Transpromo elements have to be stored in the printer. That means that a full document formatting run is required before print rendering, just as for AFP. No advantage there.

One further key requirement of a financial institution is Records Management, and that requires that the SAME document that is sent to the customer is kept in the electronic archive. Courts require that businesses prove that IT processes guarantee that it is the same. Using a PDF/VT print workflow with rendering inside the printer and PDF/A rendering outside would make that VERY difficult. PDF/VT is necessary to reduce the problems that PDF/Postscript create. It does not provide more features or substantially higher quality than AFP Transpromo does. PDF/VT enables a few more options around object and layer transparency, and that is all.

Using Papyrus with AFP output, a large business can create any layout formatting quality needed without the limitations of PDF/VT formatting – including the highest print quality of embedded Transpromo elements – and store the same file to the archive. With the Papyrus AFP Viewer, the same document print file can now even be sent to the customer on the Web, which previously required a conversion to PDF. In case the business chooses to send PDF to the customer, Papyrus DocEXEC produces the AFP and PDF files in the SAME PRODUCTION RUN from the internal formatted page structure. Therefore documents printed, sent or archived are guaranteed to be the SAME!

We are working with the AFP Consortium to standardize document encryption and digital signing for AFP documents, which is important for archiving and Web distribution.

All in all, I do not see PDF/VT making inroads into high-volume Transpromo or variable data printing beyond direct marketing applications in the near future.