If you’re a workflow user, workflow tips, success stories and best practices can be informative and even spark inspiration.
But what’s even more helpful: cautionary tales of workflows gone wrong.
In our decades of experience helping organizations achieve their digital transformation goals using workflow automation, we’ve seen the good, the bad, and the ugly. While the latter two fates are unpleasant, they’re also powerful learning experiences.
We consulted Naviant architects and compiled a list of workflow “don’ts” and advice on how to avoid them. Plus, stay tuned to the very end to get a checklist of critical questions to ask yourself before designing a workflow for best results.
The software we reference in this article is Hyland’s OnBase. However, the lessons from these disastrous workflows apply equally to other content services platforms and workflow tools.
Inside Disastrous Workflows
- The Workflow Based on Assumptions
- The Misinformed Workflow
- The Workflow That Overlooks Document Retention
- The Workflow Without High Availability
- The Workflow That Lacks the Resources or Training for Long-Term Support
- The “Just Because You Can Doesn’t Mean You Should” Workflow
- The Workflow that Disregards Volume and Probability
- The Workflow with E-Forms that Do Too Much
- The Workflow with Forgotten Items Left in Queue / “Soft” Decisions
- The Workflow That Has Too Much Work on a Timer
- The Workflow where System Work Does Everything
- The Workflow that Transitions in and Out of Queues
- The Workflow with “Default” Values Set on Keywords
- The Workflow with Too Much Complexity in Logic
- The “Mystery” Workflow
15 Disastrous Workflows & How to Avoid Them
1. The Workflow Based on Assumptions
One company needed a workflow to automate check processing requests. The workflow creator, having previously worked in accounting at another organization, set up the workflow to mirror the process they had seen in their former position. It turned out the needs for approvals and coding were VERY different from their previous experience.
The Solution: The entire workflow had to be scrapped and remade. From the initial workflow’s creation to completing the remake, close to four months of work was put into getting the workflow right.
How to Avoid: Every organization has unique processes. Always check with ALL parties to verify needs, no matter how confident you are in your assumptions. You will save yourself significant time and effort.
2. The Misinformed Workflow
A group of employees who used OnBase worked with an IT admin to design an OnBase Unity Form. Together, they compiled all the rules for what would be visible on the form and when. The workflow was built; everything looked and functioned perfectly…at first. Unfortunately, every field on the form had been configured as a form field rather than a keyword field. This meant the IT team couldn’t search for the document on the back end of OnBase once it was saved – a critical step.
The Solution: They had to recreate the form, designating the keywords necessary to search for and locate the document. Making matters worse, once this patch was in place, the management team requested reports using additional values on the form. Because form fields are not saved in the database, they could not be reported on in SSMS. The form had to be recreated AGAIN so that the elements needed for reporting were saved as keywords instead of form fields.
How to Avoid: Know the end goal and what is required to ensure your workflow can fully function and best serve the workflow users.
3. The Workflow That Overlooks Document Retention
Often, the data necessary to apply document retention and holds isn’t stored in the content services system, which can lead to problems. For example, an HR department might need to keep documents for ten years after termination, but they may only record the termination date in their HR system and never move it to their OnBase system. As a result, their OnBase system would be unable to use those critical data values or apply retention at all.
The Solution: To add the termination date to the system after the fact, you will need to update every HR document. If your content services system is small, you can run these kinds of updates relatively fast, so it’s more of a minor inconvenience. But if you have a larger system, running these updates can take weeks if not years.
How to Avoid: Before you start designing your workflow, ask: what do we need for retention?
4. The Workflow Without High Availability
Most organizations want a highly available system so if a server goes down, the system can continue running on backup servers. However, overlooking high availability is a common mistake that can surface in several ways.
a. We’ve seen systems with two to six application servers set up in load balancing, only to have the disk groups configured to rely on a single server rather than redundant storage.
The Solution: To solve this problem, you need to remove single points of failure by setting up additional backup servers. This way, if any one machine goes down, the users won’t be affected, and the business can proceed with business as usual.
b. To restrict admins from viewing document images, an organization configured the disk groups with a UNC path pointing at \\localhost\DiskGroups. At first, everything went smoothly for the user scanning with the OnBase Thick Client, but other users couldn’t retrieve the images once scanning was complete. Because \\localhost resolves to whichever machine makes the request, only the scanning workstation could actually see the stored files.
The Solution: As in example A, this problem can be solved by hosting the disk groups on redundant servers rather than a single machine.
c. An organization using OnBase had all Document Import Processor (DIP) processes running on a single computer. When the computer crashed, so did their ability to execute the scheduled process requests. To fix this, a new computer had to be installed from scratch (and all of these scheduled processes were down until the new machine was up).
Possible Solutions: For DIP and other thick client-scheduled processes, you must build in redundancy. This can be done in a few ways:
- One solution is to set up the thick client services in a cluster and let the cluster keep only one service active at a time.
- You could also set up the services to run on a VM and configure the VM’s backups and migration so that if the host fails, you can bring in the backup or move the VM in a matter of minutes. The trick is that the moved/restored VM needs to have the same machine ID and MAC address as the original VM. If either is different, OnBase will detect it and require manual intervention on the new machine.
- Another option is setting up two stations to run the scheduled processes, either with only one running and the other’s service turned off, or with both running simultaneously. Your uptime SLAs will determine your best option.
- Workflow timers can also be helpful. The Unity scheduler will automatically assign a timer to run on one of the available services. Too often, our architects have seen admins install the timer service on only a single computer. In one case involving multiple processing servers, the timer group ‘General’ only ran on processing server #1. When processing server #1 went down, all the workflows that depended on it stopped, which can cause SLAs to be missed. If the server is not recoverable when the problem is discovered, users must carry out an emergency installation and set up a new processing server. You can avoid this by having multiple servers run services for the timer group ‘General.’
How to Avoid: Overlooking high availability wastes a considerable amount of time and resources. That’s why asking yourself, “do I need high availability?” from the beginning is necessary.
5. The Workflow That Lacks the Resources or Training for Long-Term Support
Another common way to set yourself up for failure is to build a workflow that you don’t have the resources to support. Here are some specific examples our architects have encountered:
a. An organization sets up a workflow and brings in a developer who has the proper API training but no workflow training to help build it. Without being proficient in both, the developer is more comfortable in code than in the workflow-building interface. This often leads to lengthy, complex scripts that accomplish things that could easily be done in workflow. As a result, the organization ends up with solutions that support team members without API training can’t maintain.
The Solution: In cases where you need to make scripts, make them as simple as possible so all support team members can support them.
b. A more recent common mistake involves connections and password changes. Organizations require password changes at different intervals. For some, you can set a password and leave it, while others require a new password every 30 days. The frequency of password changes largely determines supportability. If you only have a dozen locations where password changes must be applied, updating them is relatively easy. But if you have, for example, 100 external datasets, 20 external auto-fills, and multiple application processing servers, what was once a few minutes of work becomes a few hours. Manually updating large quantities of passwords can also lead to outages caused by human error.
The Solution: To avoid this problem, see if you can move to NT authentication instead of SQL authentication, or move ODBC connections to scripts that use connection strings. This change consolidates multiple ODBC settings that must be updated individually into a single update with a built-in test button. One little trick: if the ODBC connection is set up for Windows integrated authentication, it will ignore the username and password OnBase passes it and instead use the Windows account making the call. While several password-update options are available, first consider what your support team can realistically support.
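To make the connection-string idea concrete, here’s a minimal Python sketch of the underlying design: keep one shared connection definition so a password rotation touches a single file instead of dozens of individually configured ODBC settings. The file name and function names below are invented for illustration; they are not OnBase APIs.

```python
# Illustrative sketch: one shared connection definition instead of many
# separately stored credentials. All names here are hypothetical.
import json
from pathlib import Path

CONN_FILE = Path("db_connection.json")  # the single place updated on rotation

def build_connection_string(cfg: dict) -> str:
    """Assemble an ODBC-style connection string from one shared config."""
    if cfg.get("integrated_security"):
        # Windows-integrated auth: no stored password to rotate at all.
        return (f"Driver={cfg['driver']};Server={cfg['server']};"
                f"Database={cfg['database']};Trusted_Connection=yes;")
    return (f"Driver={cfg['driver']};Server={cfg['server']};"
            f"Database={cfg['database']};UID={cfg['user']};PWD={cfg['password']};")

def load_connection_string() -> str:
    """Every script calls this instead of holding its own credentials."""
    return build_connection_string(json.loads(CONN_FILE.read_text()))
```

With this pattern, rotating the SQL password means editing one JSON file (or, better, switching `integrated_security` on and eliminating the stored password entirely), rather than touching every dataset, auto-fill, and server configuration by hand.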
How to Avoid: Ensure the workflows you build can be managed by others on your team. It is essential to consider what your team can realistically support. Additionally, your staff needs proper training.
6. The “Just Because You Can Doesn’t Mean You Should” Workflow
Another trap our architects often see organizations fall into is building a workflow using a method that isn’t the most effective option. This mistake can be applied to many scenarios, including the following examples:
a. Avoiding load balancing. A workflow queue was created for each department group, along with rules to send each document to the correct queue. This technically worked, but it caused significant problems, like slowing down the time it took to open OnBase Studio and creating unnecessarily long queues within the invoice approval workflow.
The Solution: To make the workflow more efficient, load balancing was applied, which decreased the queues from 240 to only five. Plus, the time to open OnBase Studio was reduced from five minutes to two.
b. Text extraction methods. You may set up a workflow that sends a document through a scan queue, so it creates a text rendition. Alternatively, you could use OCR engines alone without the assistance of a capture system to generate the document’s full text. With the full text option, you may apply a rule to extract a keyword or find trigger words to classify a document. While technically possible, it’s not the most effective method.
The Solution: Bringing in a capture system is a far more efficient solution. If you use OCR alone without a capture system assisting, you often lose the accuracy OCR can produce. A capture system like ABBYY FlexiCapture interacts with the OCR engine to report its confidence in the accuracy of the characters it just read. It can even tell you what the unconfident characters are and offer other options for what the character might be. You don’t get any of this assistance when looking at a text file, with or without OCR. This feature is highly valuable because character confusion is bound to happen, whether a document is worn out from being over-scanned or contains a series that even a human would struggle to read, like (o0O〇⚪ Il|†t+!¦).
Another reason a capture system is a better alternative is that if you don’t get a match to your search, you can put the document into an indexing queue for the user to index. Capture systems can then assist the user with document indexing and can apply rules, which is much more reliable than taking an “I hope this works” approach.
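As a rough illustration of the confidence-driven routing a capture system enables, here’s a hedged Python sketch. The data shapes, threshold, and function name are invented for illustration; this is not the ABBYY or OnBase API.

```python
# Hypothetical sketch: route a document based on per-character OCR
# confidence instead of trusting the raw text blindly.
def route_document(ocr_chars, threshold=0.90):
    """ocr_chars: list of (character, confidence) pairs from an OCR engine.
    Characters below the threshold send the document to a human indexing
    queue; otherwise it can be auto-indexed."""
    low = [(c, conf) for c, conf in ocr_chars if conf < threshold]
    if low:
        return ("indexing_queue", low)  # a person verifies the shaky characters
    return ("auto_index", [])

# An ambiguous 'O' (letter? zero?) drops below threshold and flags the doc:
decision, flagged = route_document([("1", 0.99), ("O", 0.55), ("7", 0.97)])
```

The point of the sketch is the branch itself: with a plain text file you never see the confidence values, so there is nothing to branch on and every misread character slips through silently.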
How to Avoid: Always question whether the route you have planned is the best option. Just because you can doesn’t mean you should. Knowing whether you’re choosing the most effective option will come with experience and ongoing workflow training. However, if you’re newer to workflow or feel unsure whether you’re taking the ideal route, it can be helpful to get a second opinion from an experienced peer or expert/consultant.
7. The Workflow that Disregards Volume and Probability
Volume matters. It’s essential to know the approximate number of items that you will be processing in a workflow at any one time. Too large of a queue count often leads to time-outs and an inability to display all items for the user to view, which wastes time and leads to unnecessary frustration.
How to Avoid: Always consider probability when building task lists in your workflow, especially when working with high-volume queues. Failure to do so adds significant inefficiency to your process. If a workflow has a very long list of nested rules that aren’t ordered to execute the highest-probability check first, then the second, and so on, the server incurs major overhead and overall processing slows, since it keeps asking questions that rarely succeed. Multiply this inefficient design by a high-volume queue, and you have a disastrous workflow that’ll make you cringe.
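To see why ordering matters, here’s a small Python sketch of the expected number of rule evaluations per item. It assumes, for simplicity, that each item matches exactly one rule and evaluation stops at the first match; the probabilities are invented.

```python
# Illustrative sketch (not OnBase rule syntax): the cost of a rule list
# depends on the order in which rules are checked.
def expected_checks(match_probabilities):
    """Expected rule evaluations per item, assuming each item matches
    exactly one rule and evaluation stops at the first match."""
    return sum(i * p for i, p in enumerate(match_probabilities, start=1))

# The same three rules, two orderings:
worst = expected_checks([0.05, 0.15, 0.80])  # rare cases checked first
best  = expected_checks([0.80, 0.15, 0.05])  # likely cases checked first
```

With the likely rule first, the average item needs 1.25 checks instead of 2.75, a difference that compounds into real server time once a queue holds tens of thousands of items.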
8. The Workflow with E-Forms that Do Too Much
Most workflows begin with an electronic form. Far too often, disaster strikes when too many validation rules, external lookups, and scripts cause the form to perform terribly. These performance issues are often exacerbated when the form is available externally.
How to Avoid: Don’t be tempted to overload e-forms; consider whether workflow and other tools can better accomplish the job in harmony. In cases like these, validations and scripts should be performed in workflow instead and, if necessary, the item sent to an exception queue.
9. The Workflow with Forgotten Items Left in Queue / “Soft” Decisions
Another common mistake organizations make is ending up with forgotten items left in their queues, which can lead to an unmanageable pileup. OnBase can detect whether documents with the same keyword values already exist. In AP invoice processing, for instance, this capability is critical because vendors tend to send invoices multiple times. These are almost always true duplicates that can be deleted. Putting these items into a queue so an employee can process them “when they have time” can result in items piling up to the point where it’s impossible to review them all. Here, you need to decide whether the cost of missing a “false” duplicate is greater than the cost of reviewing items that turn out to be genuine duplicates, which is the outcome 99% of the time. Additionally, ensure that the automated rules are always followed.
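That cost trade-off can be made concrete with some back-of-the-envelope arithmetic. All the numbers below (volumes, rates, costs) are invented purely for illustration:

```python
# Hypothetical cost comparison: review every flagged duplicate by hand,
# or auto-delete and accept a small risk of losing a "false" duplicate.
def annual_cost_review(flagged_per_year, minutes_per_review, hourly_rate):
    """Labor cost of manually reviewing every flagged item."""
    return flagged_per_year * (minutes_per_review / 60) * hourly_rate

def annual_cost_auto_delete(flagged_per_year, false_duplicate_rate, cost_per_miss):
    """Expected cost of auto-deleting, given a small false-flag rate."""
    return flagged_per_year * false_duplicate_rate * cost_per_miss

review = annual_cost_review(10_000, 2, 30)          # 10k flags, 2 min each, $30/hr
auto   = annual_cost_auto_delete(10_000, 0.01, 50)  # 1% false flags, $50 per miss
```

Under these assumed numbers, auto-deleting wins; under yours it may not. The point is to run the comparison deliberately rather than defaulting to a review queue that silently piles up.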
How to Avoid: Double-check to ensure that your design won’t cause forgotten items to be left in queues before you build your workflow. And, if human review is required, do it immediately and force a decision on each item as efficiently as possible.
10. The Workflow That Has Too Much Work on a Timer
Another common mistake is the workflow that has too much work on a timer. If you don’t proactively ensure that your timer runs as efficiently as possible, the timer’s processing may lag.
How to Avoid: You need to measure the performance of the timer work periodically and adjust as needed, especially if you’re processing high volumes of long-running tasks. If a timer takes too long to process, review what it is doing. As you monitor, be aware that timers shouldn’t run again before the previous instance has finished. Additionally, you should assess whether the timer could be made more efficient or if you could benefit from breaking the process into multiple steps.
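As a sketch of this monitoring advice, the following Python snippet times each timer run and skips a tick if the previous run hasn’t finished. The class and names are illustrative, not OnBase functionality.

```python
# Hypothetical sketch: measure each timer run and prevent overlapping runs.
import threading
import time

class TimerGuard:
    def __init__(self, name, warn_after_seconds):
        self.name = name
        self.warn_after = warn_after_seconds
        self._lock = threading.Lock()
        self.last_duration = None

    def run(self, work):
        # Non-blocking acquire: if the previous run is still in flight,
        # skip this tick instead of stacking overlapping executions.
        if not self._lock.acquire(blocking=False):
            return False  # previous instance still running
        try:
            start = time.monotonic()
            work()
            self.last_duration = time.monotonic() - start
            if self.last_duration > self.warn_after:
                # A run this slow is the signal to review the task list
                # or split the process into multiple steps.
                print(f"{self.name}: ran {self.last_duration:.1f}s - review its work")
            return True
        finally:
            self._lock.release()
```

The two behaviors in the sketch mirror the advice above: record how long each run takes so slowdowns are visible, and never let a timer fire again before the previous instance completes.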
11. The Workflow where System Work Does Everything
We have also seen many cases where organizations use System Work for tasks that could be better accomplished with a timer. System work seems simpler and faster, but it is not always appropriate. When an item enters a queue, the tasks in System Work are performed. This work is executed by the machine that puts the item in the queue at the time it enters the queue. If it moves to another queue, the System Work in the next queue is performed on that item before the next item starts. For batch processes like scanning or document import processes, this can delay processing during import or during commit. Instead of System Work, you may want to consider using a timer.
How to Avoid: Design workflows to run in defined, timed processes that complete all automated tasks as cleanly as possible before moving to the next “waiting” step.
12. The Workflow that Transitions in and Out of Queues
For timed processes that check whether an item is ready to process, or those with a “retry” action, items do not move out of their current queue until the action completes. This design can lead to significant performance problems.
In the case of waiting for a goods receipt within AP invoices, a common design is to build a queue for “Check for receipt” and “Hold for receipt.” The “Hold for receipt” queue will typically have a timer that puts items back into the “Check for receipt” queue to perform the check again. This setup results in additional log entries on the document history and the workflow queue history for every item in the queue each time the queue executes. It may not seem like a large overhead, but if the scenario involves high volume and high frequency, you can end up with millions of extra rows on several tables. Additionally, your performance will degrade as table indexes continue to grow.
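A quick, assumption-laden calculation shows how fast those extra history rows accumulate. The volumes below are invented for illustration:

```python
# Rough sketch of how retry-loop logging compounds. Each "check again"
# transition writes entries to both the document history and the
# workflow queue history, hence tables_written=2 by default.
def extra_log_rows(items_waiting, checks_per_day, days_waiting, tables_written=2):
    """History rows generated purely by retry transitions."""
    return items_waiting * checks_per_day * days_waiting * tables_written

# e.g. 5,000 invoices waiting on receipts, re-checked hourly, ~10 days each:
rows = extra_log_rows(5_000, 24, 10)  # 2,400,000 extra rows
```

None of those rows carry new business information; they exist only because the item bounced between queues, and they keep bloating the table indexes.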
How to Avoid: Have the timer in the “Hold for receipt” queue perform the check with as little overhead as possible. Do not update keywords or transition items unless there is new data that can be acted upon.
13. The Workflow with “Default” Values Set on Keywords
Setting default values on keywords instead of leaving them blank can waste substantial time. For example, an organization imported items into a workflow that would later set a transaction number on a document. At the beginning of the process, the transaction number keyword had been set to “NONE” rather than being left blank.
This was a problem because the database index for the keyword no longer worked. Millions of rows with the value “NONE” ended up on the keyword table. As a result, whenever a user wanted to search by transaction number, the system took several minutes to scan the entire table when it should have only taken one second. Workflow timers would also take hours to complete when checking to see if the value was “NONE” rather than checking to see if the keyword had a value at all.
The Solution: Fixing it required changing the process and going back through all of the documents and removing the “default” value using workflow, which took a great deal of time and effort that wasn’t necessary in the first place.
How to Avoid: Leave the keyword empty. Empty keywords don’t create rows on the tables. The workflow rule “keyword exists” can instead check to see if there’s a value, which takes very little time. The database indexes will thank you.
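You can see the effect of a sentinel value in a small, self-contained sqlite3 demo standing in for the real keyword table (the schema and row counts here are invented for illustration):

```python
# Illustrative demo: a "NONE" sentinel fills the keyword table with
# near-useless rows, while leaving the keyword unset creates no row at all.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE keyword_values (doc_id INTEGER, value TEXT)")
conn.execute("CREATE INDEX idx_value ON keyword_values(value)")

# Sentinel design: every document gets a row, almost all of them "NONE".
rows = [(i, "NONE") for i in range(10_000)]
rows += [(i, f"TXN-{i}") for i in range(10_000, 10_100)]
conn.executemany("INSERT INTO keyword_values VALUES (?, ?)", rows)

# The index can't help here: the filter matches 10,000 of 10,100 rows,
# so a "find unprocessed documents" check touches nearly the whole table.
pending = conn.execute(
    "SELECT COUNT(*) FROM keyword_values WHERE value = 'NONE'").fetchone()[0]

# Empty-keyword design: no row until a real value exists, so the table
# holds only the 100 real transaction numbers and lookups stay selective.
conn.execute("DELETE FROM keyword_values WHERE value = 'NONE'")
real = conn.execute("SELECT COUNT(*) FROM keyword_values").fetchone()[0]
```

The same principle drove the disaster above: a filter that matches almost every row defeats the index, while a table containing only genuine values stays small and fast.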
14. The Workflow with Too Much Complexity in Logic
When designing workflows, it’s common to create a task list with hundreds of lines of nested logic. However, doing so makes the process very difficult to troubleshoot and nearly impossible to change in the future. You want your workflow logic to be easy to follow if possible. Before you build your workflow rules in a queue, map them out in a flowchart and divide them into clear, easy-to-follow sections.
How to Avoid: Break the logic into separate task lists and clearly document each one. Planning this in advance won’t just help you create cleaner code, it will help you better understand your business processes and ensure they’re as efficient as possible.
15. The “Mystery” Workflow
Trying to remember what a particular process does after a few years can be a problem. Staff turnover leads to a loss of knowledge. When a workflow doesn’t clearly describe what it does, it becomes tough to maintain.
How to Avoid: Use the “Documentation” features in OnBase Studio. Every life cycle and queue (at a minimum) should have notes that indicate why it is there and what it does. If task lists are complex or have external dependencies, note them within the configuration. The “Help Text” feature should also be used to provide information to users. This will save a lot of time and confusion.
Workflow Tips: 7 Key Questions to Ask Before Designing a Workflow
This workflow tips checklist will help you cover all your bases to ensure that you’re designing an impactful, efficient workflow – not one you will regret.
- What is needed and required – am I operating under any assumptions or biases, or do I need more context about the process to proceed?
- How is retention affected?
- What is needed for reporting?
- Do we need high availability?
- How might probability and volume affect this workflow?
- Who will need to support this workflow, and do they have the skills and time to do so?
- What’s the best strategy to accomplish the goal at hand? (Remember: just because you can doesn’t mean you should.)
Want to Expand your OnBase Knowledge?
Bookmark our Question Corner playlist on YouTube. Every other week, we add a new OnBase demo led by experts from the Naviant Customer Success Team. Each demo is 3-10 minutes long and answers questions frequently asked by OnBase users like you.
Want More Content Like This?
Subscribe to the Naviant Blog. Each Thursday, we’ll send you a recap of our latest info-packed blog so you can be among the first to access the latest trends and expert tips on workflow, intelligent automation, the cloud, and more.