Greg Council's blog

Navigating Hype, Myths and Realities of Cognitive RPA

Excitement continues to grow around applying automation to business processes, particularly through robotic process automation (RPA). The enthusiasm is warranted: early initiatives to automate rote, low-level tasks have achieved high levels of automation with very positive results, freeing staff to spend time on higher-value, more complex work.

Low-variance, rote, simple tasks have been the primary focus of most RPA projects because they are easy to define and allow teams to avoid the complexity of handling different types of exceptions. According to AIIM’s 2018 report, “Enhancing Your RPA Implementation with Intelligent Information,” the top processes across different functional areas are well-defined processes that operate on structured data. The report highlights processes such as inventory management, payroll, order management and records processing, all of which benefit from standardized data and straightforward tasks. The result is close to 100% automation.

Automating Key Activities

As most organizations become more adept at process automation using these tools, attention starts to turn to processes that involve key activities within an organization. Processes involving customers need to be sped up and made more convenient. Processes involving the delivery of products and services need to be better controlled and accelerated. It is not just about automating tasks to lower costs. The same AIIM report found that organizations see RPA technologies as a way to reduce errors within processes while improving data quality and customer service.

Greg Council, Vice President of Marketing and Product Management

Cognitive Automation: Reading the Tea Leaves

As more enterprises and service providers adopt cognitive automation to improve their manual processes, reading the tea leaves, or better yet, examining case studies, suggests a new job landscape with some fairly drastic improvements in efficiency.

Harvard Business Review (HBR) provides a useful summary article explaining how to deconstruct work into tasks that can be automated. It uses three characteristics to assess work:

  • Repetitive vs. variable work;
  • Dependent vs. interdependent work; and
  • Physical work vs. mental work. 
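The three characteristics above can be sketched as a simple scoring model. This is a hypothetical illustration, not part of the HBR framework: the field names, the 0.0–1.0 scale, and the equal weighting are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One unit of work scored on the three HBR characteristics.

    Each field ranges from 0.0 to 1.0. Field names, scale, and
    weighting are illustrative assumptions.
    """
    repetitive: float   # repetitive (1.0) vs. variable (0.0)
    independent: float  # independent (1.0) vs. interdependent (0.0)
    physical: float     # physical (1.0) vs. mental (0.0)

def automation_suitability(task: Task) -> float:
    """Naive equal-weight average: higher scores suggest a better
    candidate for basic automation."""
    return (task.repetitive + task.independent + task.physical) / 3

# A rote back-office job scores high; judgment-heavy work scores low.
backup_job = Task(repetitive=1.0, independent=0.9, physical=0.8)
contract_review = Task(repetitive=0.2, independent=0.4, physical=0.0)

assert automation_suitability(backup_job) > automation_suitability(contract_review)
```

In practice an assessment model would weight the characteristics differently per domain; the equal average here only makes the ranking idea concrete.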

Any automation assessment model should also consider the nature and complexity of both the inputs and the outputs. Overlaying the three characteristics onto that analysis makes it possible to assess the impact of automation based on the nature of the work itself.

Basic automation has arguably had the highest level of impact so far. It is applied to rote, highly repeatable and low variance tasks. For example, basic automation supports the IT back office such as regular back-ups of data or automated provisioning of software resources (such as email accounts and CRM access).

These activities can be highly automated because of the nature of the work and the low probability of exceptions to workflows. The inputs are highly structured with very little variability, and the outputs are often binary: either a successful completion or an exception. The tasks are largely independent, with interactions typically limited to application interfaces, and they require very little mental effort.
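The binary-outcome pattern described above can be sketched as a minimal task wrapper (all names hypothetical): the job either completes, or it becomes an exception routed to a human work queue.

```python
import logging

def run_rote_task(task_fn, *args) -> str:
    """Run a low-variance automated task. The outcome is binary:
    the task completes ('success') or it raises and becomes an
    exception to be handled by a human queue ('exception')."""
    try:
        task_fn(*args)
        return "success"
    except Exception as exc:
        logging.warning("Routing to exception queue: %s", exc)
        return "exception"

# Hypothetical rote task: provisioning an email account for a user id.
def provision_email(user_id: str) -> None:
    if "@" in user_id:
        raise ValueError("user id must not contain '@'")

assert run_rote_task(provision_email, "jsmith") == "success"
assert run_rote_task(provision_email, "j@smith") == "exception"
```

Real RPA platforms add retries, audit trails, and exception queues, but the success-or-exception contract is the core of why such tasks automate so cleanly.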

Greg Council, Vice President of Marketing and Product Management

Sidestep High Failure Rates in Digital Transformation

Adopting digital transformation (DX) leads to significant growth for organizations compared to their lagging peers, according to McKinsey and Company research. McKinsey suggests five approaches to plan for and incorporate into any DX project: ensuring lean process design, digitizing the customer experience, selective process outsourcing, incorporating analytics to aid decision-making, and using intelligent automation for non-core human tasks.

These five approaches make sense; however, there are many speed bumps along the way that will amplify the risks of any DX undertaking. The reality is that few organizations are ready to attempt such an endeavor. The obstacles are enormous. Mapping and documenting processes, culture and change management, access to data science skills, access to the data itself, and managing many moving parts of an implementation are just a few of the complex tasks that an organization must tackle.

As a result, these capability problems have led to a change of thinking both on the part of enterprises and by the organizations that provide services to them. It is critical to examine the key challenges along with potential strategies to resolve these problems.

Greg Council, Vice President of Marketing and Product Management

When the Future You Expect Never Arrives

When the future you expect never arrives and business predictions fall short of their mark, the culprit is—more often than not—bad or missing data. Procurement staff must ensure their data is accurate from start to finish so that their forecasts have the desired outcomes. 
 
Idioms abound about how to tackle future challenges such as “past results do not guarantee future performance” or conversely, “those who do not learn history are doomed to repeat it.” We have all seen or heard these intuitive phrases. On the surface, they would seem to be at odds. In reality, they address two different concepts associated with using the past (data) to understand, predict and influence the future.  
 
Similarly, when it comes to projects that involve the need for data, whether it is to predict sales to manage inventories or to train a system to automate a process, success hinges on having the right set of data as input to the decision-making process. Today, where machines are often making decisions, the notion of the “right set of data” becomes a lot harder to pin down. This is because machines learn in a different way, and the rationale for the output they produce is difficult to reconstruct.
 
Machines do not have the intuition or critical reasoning that can help to elevate or discount one data point over another. Input data must be accurate, representative, and free from bias. Here are some key guidelines about your data to help ensure successful projects:
 
1. Accurate Data. Having accurate data is essential because a machine can learn from both accurate and inaccurate data, but only accurate data produces the desired result: a machine whose output is reliable.
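The effect of inaccurate training data can be made concrete with a toy example. This is a deliberately simple sketch using a 1-nearest-neighbor "learner" on made-up numbers, not any particular product's method: corrupting a couple of labels in the training set directly degrades the model's reliability.

```python
def nearest_neighbor_predict(train, x):
    """Predict with 1-NN: return the label of the closest training point."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

def accuracy(train, test):
    """Fraction of test points the model labels correctly."""
    hits = sum(nearest_neighbor_predict(train, x) == y for x, y in test)
    return hits / len(test)

# Toy ground truth: values below 5 are class 0, values above are class 1.
truth = [(1, 0), (2, 0), (3, 0), (4, 0), (6, 1), (7, 1), (8, 1), (9, 1)]

clean = list(truth)
noisy = list(truth)
noisy[1] = (2, 1)  # two mislabeled points ...
noisy[6] = (8, 0)  # ... corrupt the training data

assert accuracy(clean, truth) == 1.0   # accurate data: reliable output
assert accuracy(noisy, truth) < 1.0    # inaccurate data: unreliable output
```

The machine "learns" equally happily from either training set; only the one built on accurate labels yields trustworthy predictions, which is the point of this guideline.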
 
Greg Council, Vice President of Product Management, Parascript

Artificial Intelligence Spawns the Next Largest Divide

Artificial Intelligence (AI) is creating the next largest divide, not only between people but also between organizations. Taking full advantage of AI requires a two-pronged approach by any enterprise: first, identify the business processes that can gain the most from the introduction of AI; second, treat AI as a key component of any reengineering effort, with quality data as one of the highest priorities.

Since one key beneficial attribute of AI is that it can replace tedious, low-value human tasks, it is important to target processes that enable staff to focus on other higher-value areas. The perspective of pragmatically tackling routine processes first is echoed in research presented by Harvard Business Review, which provides a useful construct by defining three types of AI: one applied for automation, another for delivering insight, and a third for customer engagement.  

Data Science: the Key to Successful AI 

Greg Council, Vice President of Product Management, Parascript