Summary 

AI models don’t go off track by accident. They follow what they’re told to value. In this article, Bill Schmarzo outlines a practical approach for enterprise leaders to take control of that logic through the utility function—the part of the system that handles trade-offs and shapes decisions. If teams skip this step or leave it to just the builders, key priorities like ethics, trust, and long-term value often fall out. TechLeader breaks down a clear four-step method to design AI systems that stay aligned with what the business—and its stakeholders—actually care about.  

Enterprise systems do what they’re set up to do. The outcomes they generate are the result of what they were told to prioritize. 

Bill Schmarzo, Customer AI and Data Innovation Strategist at Dell Technologies, spends little time on hypotheticals. His focus is practical: AI models act according to their utility functions. If those functions don’t reflect what matters to the business, the model won't either. 

“AI will only do what you train it to do. And at the core of every AI model is a utility function.” 

This is the decision engine where tradeoffs are handled and performance is defined. Most utility functions today are shaped by the entity that owns the model build.  

Schmarzo thinks that’s too narrow. His view is that enterprise leaders should take an active role in setting what the system optimizes for and how it does that over time. 

TechLeader breaks down exclusive insights shared by Schmarzo to give CTOs, CAIOs, and decision-makers a guide to what a responsible utility chain looks like and how to build it. 

When Outcomes Don’t Match Intent 

AI projects often begin with speed. Teams use what’s on hand, like metrics from internal dashboards, logs from customer systems and product KPIs already tracked. Since these metrics are easy to plug in, they become the inputs for most AI pilot projects. 

But Schmarzo points to a pattern: models built this way tend to lock into existing behavior. They reinforce what the business is already doing, rather than where it wants to go. 

“We end up talking to the same customers about the same things… and our total addressable market shrinks.” 

This doesn’t mean the model is broken. It’s just doing what it was told.  

What’s missing is direction. Teams skip the step where success is defined across users, departments, and outcomes. Instead, they rely on proxies. If a value isn’t measured, it doesn’t make it into the model. And if it’s not in the model, it doesn’t get optimized. 

This is where misalignment begins. The system performs but the results don’t reflect what the business needs. It keeps optimizing for activity that looks familiar, not activity that creates growth, fairness, or trust. 

Schmarzo’s solution is simple: If you want a different outcome, change what the model is being asked to value. 

The Utility Function Makes Every Call 

There is no AI output without a utility calculation. This is where the model evaluates the value of each possible action it could take, based on what it has been told to care about. 

“It’s the model’s beating heart… deciding what to do given the metrics around which it is trying to optimize.” 
- Bill Schmarzo

Every choice flows through this core. For example, a system handling ride assignments might weigh travel time, traffic flow, location, and driver rating. While in a claims processing model, it could include risk score, incident severity, user history, and queue time. These inputs work together to determine the best possible action in context. 
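To make the idea concrete, here is a minimal sketch of that kind of utility calculation for the ride-assignment example. The field names, weights, and normalization are illustrative assumptions, not taken from any real dispatch system.

```python
# Hypothetical sketch of a utility function for ride assignment.
# All inputs are assumed to be normalized to [0, 1]; weights are invented.

def utility(candidate, weights):
    """Combine weighted inputs into a single score for one candidate action."""
    return sum(weights[k] * candidate[k] for k in weights)

weights = {
    "travel_time": -0.4,   # shorter is better, so this weight is negative
    "traffic_flow": 0.2,
    "proximity": 0.3,
    "driver_rating": 0.1,
}

candidates = [
    {"travel_time": 0.8, "traffic_flow": 0.5, "proximity": 0.9, "driver_rating": 0.7},
    {"travel_time": 0.3, "traffic_flow": 0.6, "proximity": 0.4, "driver_rating": 0.9},
]

# The model's "decision" is simply the candidate with the highest utility.
best = max(candidates, key=lambda c: utility(c, weights))
```

Every input and every weight here is a choice someone made upstream; the `max` at the end only executes that choice.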

The math matters, but the bigger decision is upstream: who defines which inputs count and how much they should matter. 

Schmarzo stresses how important it is to have the right people at the table when shaping a utility function. If business leaders aren’t involved at this step, critical priorities like ethics or long-term value often don’t show up at all. 

That’s why he treats the utility function as more than a technical step. It’s a way to translate intent into action and when designed with care, it becomes the part of the system that carries stakeholder goals forward in every decision it makes. 

The Responsible Utility Chain 

Schmarzo lays out a method for building systems that reflect business intent. It starts before modeling and continues through validation. This sequence is what turns ethical goals and strategic direction into actual optimization logic. 

How to Build a Responsible Utility Chain by TechLeader

1. Define Stakeholder Success 

Begin by identifying the people the system will impact. That includes internal users and external participants. In a single workflow, there could be six or seven distinct groups: patients, service reps, supervisors, partners, regulators, community advocates, and more. 

Each group brings different needs and definitions of value. Some focus on speed. Others focus on fairness, accuracy, or clarity. These priorities have to be gathered and made explicit. 

“We must understand what the user’s intent is. What does good look like to them?” 
- Bill Schmarzo

These shouldn’t be guesses. They’re surfaced through direct engagement. Teams can use interview formats or experience mapping to uncover where friction exists, what outcomes are desirable, and how different groups measure impact. 

2. Translate Intent into Measurable Variables 

Once stakeholder priorities are clear, they need to be translated into something the system can track. These may be direct KPIs or behavioral indicators. For example, system usage can suggest ease of use, while ticket volume by region may reflect service disparities. 

“AI allows us to define the variables and metrics around which the aspirations are that we wish to become.” 

Each input must represent a real-world outcome. Including a variable just because it’s available weakens the model’s purpose. A well-structured utility function includes only those signals that reflect stakeholder needs. 

This step often reveals gaps. For example, if a desired outcome isn’t being measured anywhere, that’s a flag. It suggests the organization may be missing feedback channels essential to model accuracy and trust. 
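A simple way to run this step is to map each stated intent to the metrics that could stand in for it, then flag intents with no signal at all. The intents and metric names below are invented for illustration.

```python
# Illustrative mapping from stakeholder intents to candidate metrics.
# An intent with an empty metric list is a gap: a desired outcome the
# organization isn't measuring anywhere yet.

intent_to_metrics = {
    "ease of use": ["weekly_active_sessions", "task_completion_rate"],
    "regional equity": ["ticket_volume_by_region"],
    "long-term trust": [],  # no feedback channel exists for this yet
}

gaps = [intent for intent, metrics in intent_to_metrics.items() if not metrics]
```

Anything that lands in `gaps` can't make it into the utility function until a feedback channel is built for it.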

3. Assign Weights and Tradeoffs 

Once the system has access to the right variables, it needs direction on how to prioritize them. The weighting process tells the model which outcomes to protect and where flexibility is acceptable. 

“You must be thoughtful...AI does not magically optimize for values you never included.” 

Schmarzo reiterates that this is not a solo exercise; it requires collaboration across product, engineering, compliance, and operations, because each team has visibility into different types of risk. Some tradeoffs might be acceptable at one stage of a workflow but unacceptable later. That nuance can only be captured if the weighting process brings in more than one perspective. 

The utility function serves as the integration point for those priorities. Once set, it enables models to make rapid decisions in dynamic settings without drifting from the organization’s goals. 
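One way to capture that stage-dependent nuance is to give each workflow stage its own weight set, so the same signals rank differently as the stakes change. The stage names, signals, and weights here are assumptions for the sketch.

```python
# Sketch of stage-dependent weighting: speed dominates at triage, while
# accuracy and fairness dominate at payout. All values are invented.

stage_weights = {
    "triage": {"speed": 0.6, "accuracy": 0.3, "fairness": 0.1},
    "payout": {"speed": 0.1, "accuracy": 0.5, "fairness": 0.4},
}

def score(signals, stage):
    """Score one candidate decision under the priorities of a given stage."""
    w = stage_weights[stage]
    return sum(w[k] * signals[k] for k in w)

# A fast but mediocre option looks good at triage and poor at payout.
signals = {"speed": 0.9, "accuracy": 0.5, "fairness": 0.2}
```

Encoding the weights per stage makes the tradeoff explicit and reviewable, rather than buried in one global number.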

4. Stress-Test the Model with Edge Cases 

Before deployment, the utility function should be tested across scenarios that push its logic. Edge cases reveal how the model behaves when signals are noisy, incomplete, or in tension. 

This testing is practical. Teams can simulate inputs from users in underrepresented segments, review model decisions when key variables are missing, or observe how the system handles uncertainty. This is to see how the model balances priorities when conditions aren’t ideal. 

The results offer visibility into the model’s judgment. If outcomes surprise the team, the utility function likely needs refinement. 
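A stress test of this kind can be as simple as re-scoring a decision with a key input removed and measuring how far the score moves. The variable names and weights below are illustrative, not from a real claims system.

```python
# Sketch of an edge-case stress test: score the same case with and
# without a key input, and flag decisions that hinge on that one signal.
# All names and weights are invented for the example.

def utility(signals, weights, default=0.0):
    # Missing signals contribute the default instead of raising an error,
    # so we can observe how scores shift under incomplete data.
    return sum(w * signals.get(k, default) for k, w in weights.items())

weights = {"risk_score": -0.5, "severity": -0.3, "queue_time": -0.2}

complete = {"risk_score": 0.2, "severity": 0.4, "queue_time": 0.1}
missing = {"severity": 0.4, "queue_time": 0.1}  # risk_score unavailable

# A large gap between the two scores is a signal the team should review
# before trusting the model on noisy or incomplete inputs.
gap = abs(utility(complete, weights) - utility(missing, weights))
```

Running the same comparison across simulated underrepresented segments extends this into the broader edge-case suite the step describes.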

Turn Values into Model Inputs 

Schmarzo, who is also a professor at Menlo College and NUI Galway, teaches utility design as a way to operationalize values. His students have taken ethical principles and expressed them mathematically. One group codified the Golden Rule into a working utility function. The most familiar version of the rule says, “Do unto others as you would have them do unto you.” 

Schmarzo stresses that this wasn’t just a philosophical exercise: it resulted in a model that could account for fairness in its decisions. 

“You can codify ethics… as a series of KPIs and metrics inside the utility function.” 
- Bill Schmarzo

Enterprise teams can follow the same process. Privacy, safety, inclusion, and accountability can be modeled if they are translated into measurable inputs. Those variables then shape model decisions the same way cost and speed do. 

When these values are excluded, systems default to easier targets. This exposes a limitation in the design process. Schmarzo’s work shows that ethical intent can be modelled, but it requires discipline during development. 
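As one hedged illustration of what "codifying ethics as KPIs" can look like, a fairness term can be added to the utility function as a penalty on outcome disparity across groups. The penalty form and weight below are assumptions for the sketch, not Schmarzo's or his students' actual formulation.

```python
# Illustrative fairness term inside a utility function: penalize the
# spread between the best- and worst-served groups. The functional form
# and the weight are invented assumptions.

def fairness_penalty(outcomes_by_group):
    """Gap between the best- and worst-served groups (0 means parity)."""
    vals = list(outcomes_by_group.values())
    return max(vals) - min(vals)

def total_utility(business_value, outcomes_by_group, fairness_weight=0.5):
    # Fairness competes with business value the same way cost or speed would.
    return business_value - fairness_weight * fairness_penalty(outcomes_by_group)
```

Once expressed this way, fairness is no longer an aspiration outside the model; it is a measurable input the optimizer must trade off against everything else.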

Utility Functions Should Not Be Built in Isolation 

The team building the model cannot carry all responsibility for how it behaves. Schmarzo sees a clear failure pattern when utility design is left to one group. 

“Organizations will fail if they think AI is only for the high priesthood.” 

If product, legal, and leadership are not involved in setting the model’s priorities, they have no way to shape its outcomes. That disconnect can surface months later as poor alignment, limited adoption, or trust issues. 

The utility function is not a technical artifact. It’s a strategic asset. It should be owned collectively by the teams responsible for performance and impact. 

Conclusion 

A utility function is not just a mathematical step. It’s where intent is shaped into action. If left vague, it becomes a liability. If built carefully, it becomes the model’s guide. 

For enterprise teams looking to scale AI systems with confidence, the Responsible Utility Chain offers a sequence they can follow. It requires clarity, ownership, and design discipline. 

“No AI tool is going to replace you—unless you let it.”
- Bill Schmarzo

The model will do what it’s told. The utility function is how the organization tells it what to do. 

If you're making calls on AI systems that impact revenue, risk, or customer trust, TechLeader is built for you. We’ve designed a sharp, signal-rich platform for enterprise tech executives who need more than surface-level takes. 

Echo Reports: Unfiltered, deep-dive research

TechLeader Voices: Straight-from-the-source expert interviews

TechLeader Events: Where people building the future meet

Check out the newest issue of TechLeader Voices to understand why AI teams often stall after scaling beyond 50 engineers.

If you're leading that transformation, TechLeader Voices is made with you in mind. Subscribe to our free newsletter for clear, strategic insight into what’s driving enterprise tech forward.