[Progress News] [Progress OpenEdge ABL] How to Win Over Stakeholders with Automated Decisioning

By John Iwuozor
Automated decisioning is transforming the way organizations make decisions—if you can get stakeholders on board. Here are some tips.

The impact of automation today can hardly be overstated: it has helped 90% of workers improve their work life and has reduced manual labor and costs for organizations.

This effect has reached the field of decision-making, resulting in automated decisioning: the use of artificial intelligence (AI) and machine learning (ML) algorithms, together with data, business rules and criteria, to make better real-time decisions. It gives organizations an intelligent way to streamline and optimize their processes and improve overall efficiency and performance.
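The combination of business rules, criteria and data described above can be sketched as a tiny decision service. The loan-approval scenario, field names and thresholds below are illustrative assumptions, not something the article specifies:

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int
    annual_income: float
    requested_amount: float

def decide(applicant: Applicant) -> str:
    """Apply ordered business rules; the first matching rule wins."""
    if applicant.credit_score < 580:
        return "decline"
    if applicant.requested_amount > applicant.annual_income * 0.5:
        return "refer"  # route edge cases to a human underwriter
    return "approve"

# Routine cases are decided instantly; unusual ones are referred to people.
print(decide(Applicant(credit_score=720, annual_income=60000, requested_amount=10000)))  # approve
```

Note that the rules deliberately include a "refer" outcome: keeping a human-in-the-loop path is exactly the reassurance stakeholders tend to ask for.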

Despite 80% of executives agreeing that automation can be applied to any business decision, the story is different for stakeholders. They have a number of genuine concerns: security, privacy, cost, ethics, system failure and more. These concerns can make any stakeholder hesitant to adopt automated decisioning.

Common Challenges Associated with Adopting Automated Decisioning


Stakeholders are largely responsible for influencing complex business processes, and every significant outcome affects them.

A switch to AI systems implies a shift in decision-making power. That is what stakeholders fear, alongside concerns about data protection, ethics and more. The goal is to reassure them that they retain an active part in the process (i.e., human intervention), build their confidence and earn their trust.

Showing the Value of Automated Decisioning


“To err is human.” We have a natural tendency to make mistakes, and some of them come from emotional bias. While emotion has its place, it can produce inconsistent, short-sighted decisions that lead to unintended errors.

Decision fatigue is another point to consider. Imagine having to make so many decisions that you become mentally exhausted and start making bad ones. Worse still, the person required to make a decision may be unavailable, straining the workflow and slowing business processes.

The first step in winning over stakeholders is to show them these problems and the importance of automation, especially for routine issues that require repetitive decisions. Alan Young, chief product officer at InRule Technology, describes decision automation as automating the application of business rules and logic in enterprise operations, allowing organizations to make real-time, repeatable and complex operational and customer-engagement decisions at scale.

By automating routine decisions, employees can focus on higher-level tasks that require human judgment and expertise. In addition, automated decisioning can provide insights that may be difficult to uncover manually.

How to Solve the Problem of Bias


In AI/ML, bias is a critical factor. When training data is missing or unrepresentative, algorithms can unintentionally make biased decisions that favor certain groups and ignore other factors. In practice, automated decisioning systems are only as good as the data they are trained on.

James Beattie, small business owner and the founder of We Tested points this out:

"You have to be aware of possible biases, as automated decisions based on algorithms can influence decision-making without considering complexities, history, personal context, and preferences. What's worse is that these models might not always be transparent, making it difficult to understand why particular decisions were taken."

In this Progress piece, some ways of handling the topic of bias include adopting a policy-driven solution, prioritizing ethical decision-making, and promoting transparency and traceability.
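Promoting transparency and traceability can start with a simple audit of outcomes by group. The sketch below, using made-up groups and decisions, computes per-group approval rates and the lowest-to-highest ratio sometimes called the "80% rule" heuristic; the data and threshold are illustrative assumptions:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs. Returns per-group approval rate."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group approval rate.
    Values well below 0.8 are a common red flag worth investigating."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (group label, decision)
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
print(rates, disparate_impact(rates))
```

A recurring audit like this does not remove bias by itself, but it makes disparities visible early, which is the precondition for the policy-driven fixes the article mentions.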

Prioritize Explainability and Interpretability


Explainability refers to the ability to understand why a model makes certain predictions or decisions, while interpretability refers to the ability to understand how a model works and how it arrives at its predictions or decisions. These features help to identify biases or errors in machine learning models.

The Future of Privacy Forum (FPF) recently provided independent policy recommendations to the California Privacy Protection Agency regarding the California Privacy Rights Act in relation to automated decision making (sometimes abbreviated ADM). During the public stakeholder sessions, FPF emphasized the importance of explainability in developing trustworthy automated systems.

According to the National Institute of Standards and Technology’s (NIST) principles for explainable artificial intelligence, explainable systems should:

  • Provide an explanation
  • Be understandable to their intended end users
  • Be accurate
  • Operate within their knowledge limits, or the conditions for which they were designed
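For rules-based decisioning, explainability can be as direct as returning the rules that fired alongside the decision. This sketch uses hypothetical credit rules and thresholds of my own invention:

```python
def decide_with_explanation(credit_score, debt_ratio):
    """Return a decision together with the rules that produced it, so end
    users can see why (explainability) and auditors can trace how
    (interpretability) the outcome was reached."""
    fired = []
    if credit_score < 580:
        fired.append("credit_score below 580")
    if debt_ratio > 0.4:
        fired.append("debt-to-income ratio above 40%")
    decision = "decline" if fired else "approve"
    return {"decision": decision, "because": fired or ["all rules passed"]}

print(decide_with_explanation(550, 0.5))
```

Because every response carries its own justification, the output satisfies the first two principles above by construction: an explanation is always provided, phrased in the business terms end users already know.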

Invite Stakeholders to be an Active Part of the Process


Show, don’t tell. It’s nice to talk about the benefits of automated decisioning but more convincing to present data demonstrating how other organizations have successfully implemented automated decisioning.

According to this report, published by Gartner last year, you should work closely with business stakeholders to identify and prioritize decision-making moments and use cases, then determine where improvements are needed, with emphasis on speed, accuracy, scalability, personalization or optimality.

Businesses should also assess the feasibility of decision automation by considering timing, complexity and risk. Timing refers to the duration between the occurrence and detection of an event and the decision on how to respond to the situation or threat at hand. Complexity is characterized as being elementary, structured, interdependent or dynamic, while risk involves assessing the actual risks and the possible mitigations of automating decision-making.
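The timing/complexity/risk screen described above can be sketched as a small feasibility check. The complexity categories follow the article's framing; the one-minute timing threshold and the return labels are illustrative assumptions:

```python
def automation_feasibility(detection_to_decision_secs: float,
                           complexity: str, risk: str) -> str:
    """Rough screen: decisions that must be made fast, are low in
    complexity and carry low risk are the best automation candidates."""
    complexity_ok = complexity in {"elementary", "structured"}
    timing_pressure = detection_to_decision_secs < 60  # must respond within a minute
    if risk == "high":
        return "keep human in the loop"
    if timing_pressure and complexity_ok:
        return "strong candidate"
    return "pilot with oversight"

print(automation_feasibility(5, "elementary", "low"))  # strong candidate
```

Walking stakeholders through a matrix like this, use case by use case, keeps the feasibility discussion concrete instead of abstract.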

Kamales Lardi, award-winning global tech leader and author of “The Human Side of Digital Business Transformation,” explains how she dealt with stakeholders skeptical or resistant to change:

"Identifying key stakeholders early enough, ensuring that they are included in the development process. Stakeholders may include business users, but also management team members, people with influence in the company and data owners. Early engagement will bring a sense of advocacy and ownership to the system that is being developed.

This also includes organization-wide communication to ensure that teams understand the business case and objectives for the change, what is being done, how it will benefit teams and the company, etc."

Transparency Has a Major Role in Building Trust


Be up front and realistic with expectations. You don’t want to sell over-optimism to stakeholders and be held responsible for huge errors. This is key to building trust.

Lardi describes “overestimation of technology abilities” and “trust” as key challenges when dealing with leadership teams.

On the former, she says:

"Technology, even AI-based platforms, are not silver bullet solutions for issues and challenges faced by the business. In implementing decisioning solutions, companies will need to identify clear objectives (aligned with business strategy) and develop an implementation approach that produces the intended outcome.

This is a complex process that would require experience and understanding of the business, as well as the true capabilities of the system being implemented. Additionally, companies will need to have prerequisites available to maximize the results, e.g., data sources that are accessible, reliable, and available in usable formats, analytics capabilities, etc."

On trust, she ties it to the lack of transparency:

"Another key challenge is trust—oftentimes teams (employees or management) do not trust decisions made by the new system. This could be due to the lack of transparency on how the systems produce outcomes, the complexity of the systems, the lack of capabilities (e.g., data analytics), etc. However, it would be important that data and decisioning audits may be conducted, ensuring that teams are not fully reliant on the outcomes of the systems. Companies will need to strike a balance between efficiency/optimization and reliability of the data and system."

You also want to ensure that stakeholders can see how their expertise and judgment inform the automated decision-making process. This keeps them invested in the system, helps ensure it meets their needs and lets them see the value it provides.

Implementing automated decisioning is not a one-time event. It requires ongoing monitoring and improvement to ensure that the system continues to meet the needs of the organization and its stakeholders. This means using data to track the performance of the system and identify areas for improvement.
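One concrete monitoring signal is how often humans override the system's decisions, echoing the decisioning audits Lardi recommends. This sliding-window tracker is a minimal sketch; the window size and alert threshold are illustrative assumptions:

```python
from collections import deque

class DecisionMonitor:
    """Track how often humans override automated decisions in a sliding
    window; a rising override rate signals the system is drifting away
    from what the organization and its stakeholders need."""

    def __init__(self, window: int = 100, alert_threshold: float = 0.2):
        self.recent = deque(maxlen=window)  # True = decision was overridden
        self.alert_threshold = alert_threshold

    def record(self, overridden: bool) -> None:
        self.recent.append(overridden)

    def override_rate(self) -> float:
        return sum(self.recent) / len(self.recent) if self.recent else 0.0

    def needs_review(self) -> bool:
        return self.override_rate() > self.alert_threshold

monitor = DecisionMonitor(window=10, alert_threshold=0.2)
for overridden in [False, False, True, False, True, True]:
    monitor.record(overridden)
print(monitor.override_rate(), monitor.needs_review())  # 0.5 True
```

When `needs_review()` trips, that is the cue to bring stakeholders back in and revisit the rules, closing the monitor-and-improve loop.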

Concluding Thoughts


Automated decisioning is transforming the way organizations make decisions. However, winning over stakeholders with automated decisioning requires a careful approach. Showing the value of automated decisioning, involving stakeholders in the process, addressing concerns about bias, providing training and support, and continuously monitoring and improving the system are key strategies for success.

Progress Corticon Can Help


Progress Corticon is a business rules management system (BRMS) that enables organizations to automate decision-making processes. It uses natural language to capture and model business rules, allowing non-technical users to create and manage decision services. It offers built-in features to detect and address bias and supports fast, real-time decision-making.

Get started here: Corticon Trial - Progress.

Start Your Free Trial
