DBA Research Proposal

February 2026 Submission

Document Version: 1.7

Revision Date: February 14, 2026

Supervisor: Dr. Isabella Seeber

Student: Joseph Noujaim

Institution: Grenoble Ecole de Management (GEM)

Program: Doctor of Business Administration (DBA)

This proposal examines a governance problem created by autonomous AI agents: organizations can delegate authority to systems that act with discretion, yet struggle to translate organizational culture into controls that remain enforceable in production. Specifically, it addresses the lack of a structured governance methodology for managing delegated AI authority under organizational cultural mandate.

1. Introduction

1.1 Practical Motivation and Key Phenomenon

Enterprises are deploying autonomous AI agents that pursue goals, adapt over time, and act with limited oversight. Unlike deterministic software, these systems rely on learned models and optimization logic, enabling discretionary behavior across uncertain contexts. As authority is delegated to such agents, governance shifts from rule enforcement to mandate calibration.

This shift creates a structural problem: agents may achieve measurable objectives while diverging from institutional intent that was never translated into enforceable delegation terms. The risk is not malfunction but competent misalignment: achieving the metric while violating the mandate.

Organizational Control Theory (Ouchi, 1979; Eisenhardt, 1985) shows that under uncertainty and discretionary judgment, cultural control becomes central because behavior cannot be exhaustively specified and outputs capture only partial objectives. Autonomous agents operate under precisely these conditions. Yet current governance approaches remain insufficient. Model-centric governance validates systems pre-deployment but does not govern runtime behavior. Rule-based controls cannot enumerate acceptable conduct under uncertainty. Human oversight does not scale. What is missing is a systematic method for translating organizational culture into enforceable delegation terms and for monitoring alignment in production.
1.2 Literature and Research Gap

Three theoretical traditions illuminate this problem.

Organizational Control Theory (Ouchi, 1979; Eisenhardt, 1985) establishes that cultural control dominates under high uncertainty. Yet OCT assumes socially embedded human actors. Artificial agents lack socialization; cultural control must therefore be rendered explicit and interpretable.

Algorithmic control research (Kellogg et al., 2020; Wood, 2021) demonstrates how socio-technical systems extend behavioral control through monitoring and restriction. These mechanisms function because human workers interpret directives within shared norms. Artificial agents lack that shared interpretive ground and therefore require structured mechanisms for interpretive alignment.