Facilities in 2026: Tech-Forward Leadership & Execution

Insights for facilities leaders across retail, restaurant, grocery, and c-store operations.


Operations Leaders Calibrate AI Governance Between Startup Speed And Enterprise Caution

Facilities News Desk
Published May 5, 2026

Graciela Chadwick, COO of Seventh Wave Refreshments and former COO of Crumbl Cookies, shares the five-pillar governance framework she uses to bring AI into operations without waiting for regulators to catch up.


Startups move fast. Legacy enterprises build moats. For many operations leaders, figuring out how to bring AI into daily workflows often comes down to navigating the friction between those two extremes. Move too quickly, and you invite risk. Wrap every tool in red tape, and you squash the exact innovation you want to foster.

Enter Graciela Chadwick. As COO of Seventh Wave Refreshments, and previously COO of Crumbl Cookies, she has built AI governance into operations from the ground up. Before stepping into the startup world, she spent nearly 14 years at Chick-fil-A, rising to Director of Strategy and Insights for field operations. That career path gives Chadwick a practical blueprint for finding the middle ground between startup speed and enterprise caution.

Chadwick readily admits that no one has completely mastered AI governance yet. Because the playbook is still being written, she focuses on simple, pragmatic boundaries. In her experience, oversight often starts at the individual user level rather than with thick policy binders.

"We don’t have thousands of employees, so we can ask the team how they’re using AI and learn from that in real-time," says Chadwick. Seventh Wave encourages its team to use tools like ChatGPT for routine administrative tasks, as long as they respect a few hard lines about what not to upload. Employees learn how to spot low-stakes versus higher-stakes use cases, keeping the barrier to entry low while protecting sensitive information.

  • Walled gardens: That emphasis on enablement is paired with structured access controls. “We have specific organizational accounts for AI usage that people get added to,” Chadwick says. “So it's much more intentional rather than a free-for-all. We tell them to use it, take full advantage of it, but we also tell them, here’s what you should never upload and how to think about the output depending on the risk."

  • Cloning the tone: Chadwick says their account managers and service account team have seen success with AI for generating follow-ups with clients. “We've taught them how to go in and set it up to mimic their tone of voice versus what ChatGPT will recommend, so that it matches who they truly are.”

To keep experimentation grounded, Chadwick relies on short, recurring conversations where her team talks through the good, the bad, and the ugly of their early experiments. That organic feedback loop turns promising one-off uses into repeatable patterns. But zoom out to the enterprise level, and the playbook changes entirely.

  • Skip the generic: For Chadwick, a natural turning point arrives when decisions involve high-cost assets or sensitive work, such as fleet management and facilities tech adoption. There, she hesitates to rely on generic model outputs, especially when questions touch on work order data quality or how vendors are integrating AI into existing systems. When evaluating fleet performance or the staffing impact of automation, she validates general insights against domain-specific partners who live and breathe AI in facilities management. She tests the top use cases in these functions carefully before letting teams lean on AI for high-stakes operational calls. “I can go and ask ChatGPT what the average spend is, and I can filter for Georgia, but sometimes I don't know the data behind it. So it could be really good or bad,” she says, “versus if I go to a vendor that already has the data on fleet management and use their data to better gauge if we're doing well or not based on our specific scenario.”
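The low-stakes versus high-stakes split Chadwick describes could be pictured as a simple routing rule. This is a hedged sketch, not Seventh Wave's actual policy: the task names, tier lists, and messages below are hypothetical illustrations of the idea.

```python
# Hypothetical illustration of routing AI tasks by risk tier.
# Task names and tiers are invented for the example, not taken
# from any real policy.

LOW_STAKES = {"draft_follow_up_email", "summarize_meeting_notes"}
HIGH_STAKES = {"fleet_spend_benchmark", "facilities_tech_selection"}


def route_ai_task(task: str, contains_sensitive_data: bool) -> str:
    """Decide how an AI request should be handled."""
    if contains_sensitive_data:
        # Hard line: sensitive data never goes to a general model.
        return "blocked: do not upload sensitive data to a general model"
    if task in HIGH_STAKES:
        # High-cost or sensitive decisions get validated against
        # domain-specific vendor data before anyone acts on them.
        return "validate against domain-specific vendor data first"
    if task in LOW_STAKES:
        # Routine admin work can use the organizational account.
        return "general-purpose model OK via the org account"
    # Anything unclassified surfaces in the team's feedback loop.
    return "escalate: unclassified task, review with the team"
```

The point of a sketch like this is not the code itself but that the hard lines (what never gets uploaded, what needs vendor validation) are written down somewhere explicit instead of living in individual judgment calls.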

When workflows get messy and the stakes get high, Chadwick relies on a simple framework. Her team evaluates new use cases or vendor capabilities against a five-question checklist to manage exposure. Chadwick observes that established public companies tend to rely heavily on this kind of documentation. With more stakeholders and higher liability, legacy organizations frequently put policy and structure in place before rolling tools out widely. Many organizations are finding that waiting for federal lawmakers to catch up isn't a viable strategy. As companies work through the gray areas of new agentic AI capabilities and the impact of AI integration, Chadwick suggests documenting an internal approach grounded in fairness, accountability, transparency, safety, and data privacy.

  • The five-piece armor: “When you're talking about AI governance, you're talking about fairness, accountability, transparency, safety, and data privacy,” Chadwick says. “When we're doing something, is it really fair to us or to whoever for the decision that we're trying to make? Will this allow us to have accountability to ourselves, our teams, and external people? Are we being transparent on how this information is being used? And then, is it going to create any safety issues for anyone?”

  • Beat the subpoena: Chadwick says that organizations need to invest the time to establish their own governance structure, instead of waiting for government regulation to eventually come. “If you wait for the government, you are going to be late,” Chadwick says. “If you don't have a way to demonstrate you understand what was happening behind this, and your parameters on how you made this decision, something bad is likely to happen, and there's going to be a case.”

  • Cover your assets: “If you proactively, as a company, know what your guidelines are and how you made that decision, then your chances of that lack of governance being seen as putting the company at risk are not going to be there, because you can explain it well,” she says.
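One way to picture the five-question checklist is as a gate that a use case must clear on every pillar before rollout. The pillar names come from Chadwick's framing above; the pass/fail mechanics, function name, and example answers are assumptions made for illustration.

```python
# Sketch of a five-pillar governance gate. Pillar names are from the
# article; the review mechanics are assumed for this example.

PILLARS = ["fairness", "accountability", "transparency", "safety", "data_privacy"]


def governance_review(answers: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (approved, unmet_pillars).

    A use case is approved only if every pillar is explicitly
    answered True; missing answers count as unmet.
    """
    unmet = [p for p in PILLARS if not answers.get(p, False)]
    return (not unmet, unmet)


# Example: a use case reviewed before rollout (answers are invented).
approved, gaps = governance_review({
    "fairness": True,
    "accountability": True,
    "transparency": True,
    "safety": False,        # unresolved safety question
    "data_privacy": True,
})
```

The value of treating the checklist this way is the paper trail: whether the answer is yes or no, the organization can later show which questions were asked and what was decided, which is exactly the "beat the subpoena" point above.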

Chadwick’s perspective aligns with a growing focus on oversight in the workplace, where surveys of AI adoption frequently point to trust and risk allocation as central questions. For her, the five pillars act as a shared language that helps operations, IT, legal, and frontline teams talk about the same thing in practical terms. In her view, a governance model's success frequently comes down to the people leading it.

Chadwick notes that hesitancy at the executive level often creates a bottleneck for organizational progress. Her advice to peers navigating these hurdles is straightforward: spend some time using the tools yourself, even if only on a throwaway side project. “Even if you're at the highest level and even if your organization is big, if you don't try to get involved at some level in using it yourself, it's going to be very difficult to govern it,” she concludes. “You have to get your hands dirty. If you're hesitant to do something with it, even if it's a side project of building something that doesn't matter at all, you're not going to be able to understand the nuances and speak and be able to challenge it in the best way when it comes to governance.”