AI-By-Default Models Contribute to Ethical and Operational Risks Inside the Enterprise
StratEdge Founder Barbara Cresti sees AI-by-default as a forced adoption model that introduces ethical, financial, and governance risk faster than organizations can manage it.

Key Points
Microsoft's strategy to bundle Copilot across its 365 suite is a flawed attempt to fix lagging adoption, one that jeopardizes trust and creates significant operational risk.
Barbara Cresti, Founder of StratEdge, cautions that imposing tools without transparency may harm long-term adoption and expose a lack of internal governance.
Cresti advocates for a top-down playbook that includes clear policies, structured training led by expert users, and careful management of hidden operational costs.

While Microsoft’s recent decision to bundle its Copilot AI across the Microsoft 365 suite is meant to accelerate usage, it introduces major ethical and operational risks. Industry experts warn that embedding these systems by default, often without informed consent or clear opt-out paths, threatens to undermine the very trust required for long-term adoption and ROI.
To better understand the liability created by AI-by-default models, we reached out to Barbara Cresti, founder of strategic advisory firm StratEdge. Cresti has more than 20 years of global leadership experience in business strategy and transformation. With a background at major corporations like Amazon and Orange, she specializes in driving growth across technology-centric and regulated industries such as AI, Cloud, and SaaS. She views AI-by-default as a strategy that introduces risk before earning confidence.
"This move is not pushing people to AI. It’s not accelerating adoption. It’s creating issues because trust is harmed when tools are imposed without transparency," says Cresti. As she sees it, the Copilot rollout isn't an act of generosity from Microsoft, but a direct response to lagging adoption across markets. She believes the execution is ethically flawed.
The AI trap: For many users, especially within small and medium-sized businesses, AI tools are thrust upon them without clear communication. "I think that they will find themselves trapped in this AI that maybe they don't want to use, but that will retrieve all their data," she says. While savvier users may know how to disable these features, the default opt-in model may leave non-technical users feeling stuck.
Discreet data grab: Microsoft's default opt-in model, she notes, is reminiscent of LinkedIn’s quiet move to automatically use the data of certain users to train AI models. When news of the decision came to light, the company faced negative PR and litigation. "They were not transparent with users," Cresti recalls.
Aside from lost trust, AI-by-default strategies can backfire when users encounter the practical limitations of the technology. "I’ve been using ChatGPT for a long time, and I feel like it’s regressing, not progressing," Cresti notes, pointing out the tool's often-inconsistent outputs as an example. In her view, users are discerning and are likely to notice when an imposed tool introduces errors or makes their work harder. That kind of negative experience, particularly in risk-averse cultures, can trigger a real backlash that further harms adoption.
External adoption failures are often compounded by deep internal governance problems. Cresti notes that only a quarter of organizations have fully implemented AI governance programs, observing that AI adoption often happens chaotically from the bottom up. "Employees start using AI tools without clear guidance from the executive team or the board on what the rules are and what they should be using it for." This exposes companies to risks from shadow AI, where confidential data is input into non-corporate tools. That kind of bottom-up chaos, Cresti explains, exposes a lack of foundational controls that has existed all along.
Structure over chaos: To avoid governance issues and messy rollouts, Cresti advocates for a top-down playbook and an emphasis on data readiness, which 43% of leaders cite as the most significant obstacle to aligning AI with business objectives. She draws directly from her positive experience at AWS when OpenAI tools were introduced. "What happened at AWS is that everything was framed. There were lots of policies around AI integration into the business. They gave us exact tools to experiment with and try." This model requires leadership to outline specific parameters for which tools to use and to regularly audit usage.
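To make that playbook concrete, here is a minimal sketch in Python of the guardrail pattern Cresti describes: an explicit allowlist of approved tools plus an append-only usage log that leadership can audit. The tool names and data classifications are hypothetical illustrations, not any company's actual policy.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical allowlist: AI tools leadership has approved, mapped to
# the data classifications each tool is cleared to handle.
APPROVED_TOOLS = {
    "copilot-enterprise": {"public", "internal"},
    "internal-llm": {"public", "internal", "confidential"},
}

@dataclass
class UsageLog:
    """Append-only record of AI tool usage for periodic leadership audits."""
    entries: list = field(default_factory=list)

    def record(self, user: str, tool: str, classification: str, allowed: bool):
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "tool": tool,
            "classification": classification,
            "allowed": allowed,
        })

def check_usage(user: str, tool: str, classification: str, log: UsageLog) -> bool:
    """Permit a request only if the tool is approved for this data class."""
    allowed = classification in APPROVED_TOOLS.get(tool, set())
    log.record(user, tool, classification, allowed)
    return allowed

log = UsageLog()
# A request against an unapproved "shadow AI" tool is denied but still logged,
# so the audit trail surfaces the gap instead of hiding it.
assert not check_usage("analyst1", "random-chatbot", "confidential", log)
assert check_usage("analyst1", "internal-llm", "confidential", log)
```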
Another major failure point in many current strategies, Cresti asserts, is a cascading literacy gap. "Take prompt engineering, for example. I've spoken with a lot of CXOs and senior managers and they've never had a prompt engineering class." That gap in understanding often results in a failure to invest in proper training. Frequently, she explains, "IT has been tasked to do AI without knowing AI." It's a model that's fundamentally flawed. Her remedy is a "super user" within each team who receives consistent training on how to best use the tool and cascades that knowledge down to colleagues in a functional way.
Costly queries: Beyond training, Cresti highlights the hidden operational costs of AI as a major blind spot. Much like the early days of the cloud, many organizations are using AI without a clear understanding of its full financial impact. "Corporations are using AI without really thinking about how much each prompt costs. It's a black box in terms of pricing."
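To see why the pricing feels like a black box, consider a back-of-the-envelope estimate. The Python sketch below uses entirely assumed per-token rates (real provider pricing varies by model and changes often) to show how trivially cheap single prompts compound at enterprise scale.

```python
# Assumed per-token rates in USD; actual provider pricing differs by
# model and changes frequently.
INPUT_RATE = 0.000003   # $ per input token (assumption)
OUTPUT_RATE = 0.000015  # $ per output token (assumption)

def prompt_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one prompt/response pair."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A single mid-sized prompt looks negligible...
single = prompt_cost(input_tokens=1_500, output_tokens=500)

# ...but 5,000 employees running 20 prompts a day is another story.
daily = single * 5_000 * 20
print(f"per prompt: ${single:.4f}")
print(f"per day:    ${daily:,.2f}")
print(f"per year:   ${daily * 260:,.2f}")  # ~260 working days
```

Under these assumed rates, a 1.2-cent prompt becomes roughly $1,200 a day and over $300,000 a year, exactly the kind of line item that stays invisible without deliberate cost tracking.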
Energy drain: Compounding this are the substantial energy impacts of data centers, with forecasts already suggesting record-high power consumption in the coming years. "This is also about the energy impact," she notes. "There's an energy cost behind that as well." Cresti suggests that unlocking AI's ROI will require a disciplined strategy to manage its true operational cost.
To counter these risks, some companies are taking a more sophisticated approach, particularly in Europe. Rather than relying on a single provider, Cresti sees organizations moving toward an "ecosystem of models," selecting different tools based on the risk and regulatory requirements of a given workload. This is fueling a quick pivot toward sovereign AI models and clouds that give organizations more control over their data. The demand for secure, sovereign data environments further supports the argument that one-size-fits-all rollouts are unrealistic.
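The workload-routing idea behind such an ecosystem can be sketched in a few lines of Python. The model names below are hypothetical, and real routing policies would encode far richer regulatory and residency rules than this single sensitivity dimension.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    REGULATED = 3  # e.g. data subject to EU residency requirements

# Hypothetical endpoints, ordered by preference; names are illustrative only.
MODELS = {
    "public-frontier-model": {Sensitivity.PUBLIC},
    "eu-sovereign-model": {Sensitivity.PUBLIC, Sensitivity.INTERNAL,
                           Sensitivity.REGULATED},
}

def route(sensitivity: Sensitivity) -> str:
    """Return the first model cleared for this workload's sensitivity."""
    for name, cleared in MODELS.items():
        if sensitivity in cleared:
            return name
    raise ValueError("no model cleared for this sensitivity level")

# Low-risk work goes to the default provider; regulated data stays sovereign.
assert route(Sensitivity.PUBLIC) == "public-frontier-model"
assert route(Sensitivity.REGULATED) == "eu-sovereign-model"
```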
Ultimately, Cresti sees this fragmentation as a symptom of a larger, more profound change. "Something that is getting bigger and bigger in Europe is the interconnection between geopolitics and technology," she says. In her view, an erosion of trust between nations is fundamentally reshaping the global business world, making a tech strategy that's consolidated in the United States a growing liability. With AI regulation and culture varying greatly by region or country, companies must reexamine what it means to be a "global" business. "The complexity for global businesses is, how global can you be?"




