Organizations deploying consequential AI need more than a policy specialist, more than an ethicist, and more than a narrow technical reviewer. They need leadership that can move between model behavior, executive judgment, institutional design, and real-world consequence. That is the work I have been preparing for across more than two decades.
The messy reality of human-AI teaming (the ways real organizations adopt, resist, misuse, over-rely on, and adapt to capable AI) is not separable from the safety problem. At deployment scale, it is the safety problem.
Mission-critical technical and operational leadership
I have spent more than 20 years leading technical and operational teams across government, defense, academia, nonprofit, and industry. That includes senior leadership on Department of Defense and Department of Veterans Affairs programs within funding environments exceeding $100 billion, among the largest defense and federal health program environments in the country, spanning enterprise architecture, decision-support systems, security and compliance, electronic health records, cloud systems, and data strategy.
Defense, security, and international systems experience
At the Naval Postgraduate School, I served as a Department of Defense civilian program leader and Principal Investigator responsible for programs focused on defense and security technology, modeling and simulation, collaboration platforms, and decision-support systems. My work included counterterrorism, counterinsurgency, peacekeeping operations, and international collaboration across highly complex stakeholder environments.
Recognized leadership in high-consequence environments
Undersecretary of Defense Michael G. Vickers recognized my technical leadership with an official letter of commendation for creating a "ground-breaking tool that will benefit the U.S. government and our allies as we continue to combat terror." I have led programs with multimillion-dollar budgets, helped expand R&D investment, and worked directly with senior leaders across defense, government, and institutional settings.
Deep research foundation in complex systems
My research background spans modeling and simulation, agent-based modeling, network theory, human behavior forecasting, sociotechnical systems, cognitive neuroscience, and human-computer interaction. I have published and presented dozens of scholarly works in these domains internationally, with an h-index of 7, more than 100 citations, and 8 highly influential citations (Semantic Scholar).
Human systems as core variables in AI safety
Most AI safety frameworks treat human systems as context rather than as a core variable. That framing misses something consequential. The organizational dynamics, institutional incentives, social structures, and cultural conditions surrounding consequential AI are not peripheral to the safety problem. In many real deployments, they determine whether safety holds or fails. Safety interventions that ignore these dimensions do not simply miss a variable. They create new failure modes by assuming a stability that does not exist in practice.
Sociotechnical and human-centered disciplinary grounding
My approach draws on sociotechnical systems theory, organizational behavior, industrial psychology, human-centered design, industrial engineering, and macro social work and social welfare policy. Sociotechnical systems theory holds that technical and social systems co-evolve and cannot be governed in isolation. Organizational behavior and industrial psychology illuminate how people actually act inside institutions under real pressure and competing incentives. Macro social work frames intervention at the level of systems, institutions, and structural conditions, which is precisely the level at which consequential AI governance must operate.
The Grand Challenge to Harness Technology for Social Good
I am currently advancing AI safety research through the Doctor of Social Work program at the University of Southern California, where my work supports the Grand Challenge to Harness Technology for Social Good. The DSW is a practice-focused doctorate designed to situate research inside real institutional and community contexts rather than purely theoretical frameworks, and it is an unconventional path for an AI safety leader for a deliberate reason. The most significant gaps in consequential AI governance are not purely technical. They are organizational, institutional, and deeply human. My work focuses on the intersection of AI complexity, organizational strategy, regulatory policy, and human systems.
Executive leadership that is operational, not theoretical
I have served as a strategic and operational executive across technically demanding organizations since 2005. As President and Executive Director of an innovative California technology nonprofit, I led the organization through a decade of sustained growth, directing strategy, governance, board leadership, fundraising, and operations with direct executive accountability throughout. As CEO of Youbilee, an advisory firm, and as a C-suite operational executive at a pioneering internet video company, I built and ran organizations under real constraints with real accountability. As a leader in quantitative financial analysis and trading operations, I specialized in high-dimensional risk modeling and the execution of US equity index derivatives, operating in an environment where the cost of being wrong was immediate, measurable, and unforgiving. That reasoning structure transfers directly to consequential AI safety: how tail risks behave in nonlinear systems, how correlated failures propagate, and how to act decisively when the full distribution of outcomes is not knowable in advance.
The background spans defense programs within $100B+ DoD and VA funding environments, counterterrorism and stability operations modeling trusted by U.S. military commanders in Iraq and Afghanistan, a global collaboration platform connecting security professionals across more than 100 countries, and eight years of consistently profitable quantitative trading including through the extreme volatility of March 2020. The track record is not theoretical.