How AI agents will help us make better decisions
When people tell the story about how AI changed the world, language models will be remembered as an important precursor to what came next: AI agents.
Automation and the AI systems developed and deployed over the past several decades have created tremendous value by taking precise, automated action on decisions that have already been made.
Until now, however, building AI systems that can make decisions on their own has been limited to a handful of organizations with significant resources and top talent.
As the ability to create AI agents becomes ubiquitous through open-source tools and platform offerings from startups and cloud providers, this dynamic is beginning to flip. AI systems can now be taught how we make decisions.
But how exactly do we make decisions?
How we make decisions
The decision-making process typically flows through seven steps: new information, research, reasoning, decision, action, monitoring, and learning.
When the average knowledge worker gets new information through an email or message from a colleague, they research internal and external documents (or ask team members) to get more context. Then they reason, or think through, what should or shouldn’t be done based on that new information together with the context they’ve uncovered in their research. The next step is to make a decision and take action, then monitor the result and learn from it. For most organizations, this process is not recorded anywhere, varies significantly from person to person, and is not carefully examined on a regular basis (if ever).
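The seven-step loop above can be sketched in code, written so that each step leaves a written trace. This is a minimal illustration only: the function parameters and record keys are assumptions for the sketch, not any real framework's API.

```python
# A minimal sketch of the decision-making loop, where each stage is a
# pluggable function and every step is recorded for later review.

def decision_loop(new_information, research, reason, decide, act, monitor):
    """Run one pass of the decision-making process, recording every step."""
    record = {"information": new_information}
    record["context"] = research(new_information)
    record["options"] = reason(new_information, record["context"])
    record["decision"] = decide(record["options"])
    record["outcome"] = act(record["decision"])
    record["learning"] = monitor(record["outcome"])
    return record  # the written trace most organizations never keep
```

Even a trivial pass through the loop yields a record that can be examined later, which is precisely the documentation step most organizations skip.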
Improving how we make decisions
When developing an automated workflow, a cross-functional team carefully considers each step of that workflow and all of the factors and checkpoints the automation should account for as it executes each predetermined decision.
Developing AI agents is going to do the same thing for decisions that are not predetermined, by bringing cross-functional teams together to carefully consider how the AI agent should receive and prioritize new information, how it should research, how it should reason, and at what thresholds of confidence it should seek help from human experts or make decisions and take action on its own.
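The confidence thresholds described above can be made concrete with a small routing sketch. The cutoff values and tier names here are illustrative assumptions, not drawn from any particular agent framework:

```python
# Illustrative routing of an agent's decision based on its confidence.
# Thresholds are assumed values a team would tune for their own risk tolerance.

ACT_THRESHOLD = 0.90        # confident enough to act autonomously
RECOMMEND_THRESHOLD = 0.60  # confident enough to propose, but not to act

def route(confidence: float) -> str:
    """Decide whether the agent acts, recommends, or asks a human expert."""
    if confidence >= ACT_THRESHOLD:
        return "act"         # make the decision and take action on its own
    if confidence >= RECOMMEND_THRESHOLD:
        return "recommend"   # draft a recommendation for human sign-off
    return "escalate"        # hand the decision to a human expert
```

The design choice the thresholds encode is the one the cross-functional team must debate: where autonomy ends and human judgment begins.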
Consider how two colleagues preparing for an event would treat the same inbound email from a keynote speaker offering to present at the event. One of the colleagues might double-check the agenda, go to that speaker’s website, watch a speaker reel, consider how that speaker’s topic and energy would fit into the event, and decide to reply and offer to schedule time to connect. The other colleague might not like how the speaker reached out and choose not to reply. Despite being on the same team receiving the same information, two different colleagues might have wildly different criteria and processes for making decisions.
AI agents’ two promises for improving decision-making
AI agents have the potential to transform and augment how we receive new information, how we research, how we reason, and how we monitor and learn from the outcome of the decisions that we make.
In the event speaker example above, an AI agent connected to the team member’s inbox could double-check the agenda, research the speaker, and make a recommendation before the team member has even read the initial email, with the speaker reel embedded in its message and a confirmation that the speaker’s fee is within the remaining event budget.
The process of developing AI agents will challenge and reconstitute our decision-making capabilities as experts and professionals. By documenting and examining how we research, reason, and make decisions together with our peers, we will discover our own weaknesses and strengths across each step of the decision-making process.
From this starting point, organizations can develop cross-organizational sharing of best practices for each step in the decision-making process, invest in books and training focused on decision making, and create individualized decision-making growth plans for team members that include learnings from risks taken, risks not taken, and ways to improve each step in the process.
This will form a foundation for discussion and learning about arguably the most important, least invested-in skill across our organizations and society: our ability to make good decisions.
Philippe De Ridder is founder and CEO of BOI (Board of Innovation), and Brian Evergreen is CEO of The Future Solving Company and author of Autonomous Transformation: Creating a More Human Future in the Era of Artificial Intelligence.