
About Stephen Lieberman

AI Safety and Alignment Leadership


Stephen Lieberman

Available for fractional and remote executive roles through 1023AI.

Start a Confidential Conversation

Stephen Lieberman is an AI safety and alignment leader whose work sits at the intersection of consequential AI, emergent complex systems, institutional governance, and human consequence. Through 1023AI, he works with organizations as capable AI systems move from controlled research environments into the messy, high-stakes realities of real-world deployment.

He focuses on emergent capabilities and emergent risks, emergent misalignment, the robustness gap between evaluated safety and deployed safety, and the institutional and human conditions under which consequential AI safety holds or fails. His core view is that capable AI cannot be governed as if it were ordinary software. As systems scale, they become emergent complex systems, shaped not only by model architecture but also by interaction effects, organizational structure, human-AI teaming dynamics, and downstream social consequence.

A background built for high-consequence systems

Stephen brings more than 20 years of experience leading technical and operational teams across government, defense, academia, nonprofit, and industry. His career has included executive leadership, software and systems development, security and compliance, emergency management, decision-support systems, organizational resilience, nonprofit governance, privacy, strategic change management, and research leadership.

As a senior manager at Northrop Grumman, he served in programmatic leadership roles responsible for Department of Defense and Department of Veterans Affairs programs within funding environments exceeding $100 billion – spanning some of the most complex defense and federal health systems in the country – in coordination with the Defense Manpower Data Center. His work spanned enterprise architecture, electronic health record systems, security and compliance, access management, decision-support systems, data strategy, cloud platforms, and the coordination of technical and operational teams across military, civilian, contractor, and executive stakeholders.

Before that, at the Naval Postgraduate School, he served as a Department of Defense civilian program leader and Principal Investigator responsible for programs in defense and security technology, modeling and simulation, web and collaboration systems, and decision-support tools. He developed and led programs with budgets exceeding $10 million in 2026-adjusted dollars across his NPS tenure, more than doubling the Institute's R&D program budget following his first-year delivery, and worked directly with stakeholders across academic, military, government, and international settings.

His defense and national security work included counterterrorism, counterinsurgency, peacekeeping operations, and social network analysis, drawing on direct collaboration with military, government, and civilian personnel from more than 100 countries. One of his programs was described as "the Facebook of Defense" for providing global collaboration tools to defense and security professionals, and Under Secretary of Defense Michael G. Vickers recognized his technical leadership with an official letter of commendation for creating "a ground-breaking tool that will benefit the U.S. government and our allies as we continue to combat terror." He was also the sole recipient of the National Security Institute Scholar Fellowship Award in 2008, and his contributions to civil affairs operations were recognized with an endorsed Certificate of Outstanding Support from the 358th Civil Affairs Brigade.

He has served in peer leadership roles within the international simulation and network science communities, including as Program Chair of the Agent-Based Models and Multi-Agent Systems tracks at the International Network for Social Network Analysis Conference, as a Program Committee Member for the Winter Simulation Conference, and as a Selected Reviewer for multiple peer-reviewed technology journals.

Research depth in complexity, simulation, and human systems

Stephen's intellectual foundation is unusually well matched to the realities of consequential AI.

His research has long focused on how complex sociotechnical systems behave under pressure, how human and institutional dynamics interact with technical architectures, and how leaders can reason under uncertainty when system behavior is nonlinear and difficult to predict.

At the Naval Postgraduate School's MOVES Institute, he led research programs that placed him at the frontier of computational social science while completing all doctoral coursework and dissertation work toward a PhD in Computer Science (Modeling, Virtual Environments, and Simulation). His research applied machine learning, agent-based simulation, network-theoretic, and high-dimensional optimization techniques to forecasting the behavior of complex systems. He pioneered in silico research techniques that laid the groundwork for digital twin systems, including approaches to constructing artificial populations from real survey data to model and forecast human behavior at scale. He lectured and served on doctoral committees in machine learning, computer vision, human-computer interaction, and simulation. His work contributed to foundational AI research methods underlying modern machine learning. He was recruited by Northrop Grumman into a senior management position before completing his dissertation defense.
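To make that concrete, here is a minimal, illustrative sketch of the artificial-population pattern: agents are sampled to match hypothetical survey-derived proportions, a simple interaction rule is applied, and forecasts are read off the aggregate rather than any individual. The categories, rates, and function names below are invented for illustration and are not drawn from his published models.

    # Toy agent-based model: seed an artificial population from survey-style
    # marginals, then simulate simple peer influence. All numbers are hypothetical.
    import random

    random.seed(42)

    # Hypothetical survey marginals: share of respondents in each age group,
    # and each group's baseline rate of adopting some behavior of interest.
    AGE_GROUPS = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}
    BASELINE_ADOPTION = {"18-34": 0.30, "35-54": 0.20, "55+": 0.10}

    def sample_category(marginals):
        """Draw one category in proportion to its survey share."""
        r, cumulative = random.random(), 0.0
        for category, share in marginals.items():
            cumulative += share
            if r < cumulative:
                return category
        return category  # guard against floating-point rounding at the boundary

    def build_population(n):
        """Create n synthetic agents whose attributes match the marginals."""
        agents = []
        for _ in range(n):
            age = sample_category(AGE_GROUPS)
            adopted = random.random() < BASELINE_ADOPTION[age]
            agents.append({"age": age, "adopted": adopted})
        return agents

    def step(agents, influence=0.05):
        """One round of peer influence: non-adopters convert in proportion to prevalence."""
        prevalence = sum(a["adopted"] for a in agents) / len(agents)
        for agent in agents:
            if not agent["adopted"] and random.random() < influence * prevalence:
                agent["adopted"] = True

    population = build_population(10_000)
    for _ in range(20):
        step(population)
    print("final adoption rate:", sum(a["adopted"] for a in population) / len(population))

The point of the pattern is that the forecast emerges from many simple, locally plausible rules interacting, which is also why such systems have to be evaluated at the population level rather than one agent at a time.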

His master's work at the University of Connecticut focused on organizational behavior, industrial psychology, continuity modeling, operational resilience, network science, security, and disaster preparedness. His thesis developed a science-based framework for business continuity and resilience using network theory, statistics, and organizational science.

His undergraduate work at Simon's Rock and Bard College combined psychology, biology, music, and cognitive neuroscience, including laboratory research on perception, cognition, neural systems, and human subjects experimentation.

Across these domains, Stephen has published and presented more than 20 peer-reviewed works in neural networks, modeling and simulation, network science, agent-based systems, and sociotechnical analysis, with an h-index of 7, more than 100 citations, and 8 highly influential citations (Semantic Scholar).

Why his approach to AI safety is different

Stephen approaches consequential AI from a premise that many organizations still underestimate: safety problems change qualitatively at scale.

As systems become more capable and more embedded in high-stakes environments, they begin to exhibit the properties of emergent complex systems. Behavior arises through interaction, context, incentives, and feedback effects that were not present during evaluation. Uncertainty becomes epistemic, not merely informational. Local interventions produce nonlocal consequences. Safety mechanisms generate harms of their own. Alignment that held at one capability level quietly breaks down as capability grows. This is emergent misalignment, and it is the most serious expression of the robustness gap in real-world deployment.

That is why Stephen's work centers on emergent capabilities and emergent risks, the robustness gap, AI iatrogenics, emergent misalignment, emergence foresight, and the broader frame of emergent AI safety.

His approach integrates complexity science, agent-based modeling, network theory, cognitive neuroscience, human-AI teaming, community-based participatory practice, and person-in-environment systems theory. The goal is not to make a model appear safer in isolation. The goal is to make consequential AI safety hold inside the organizational, institutional, and human systems where advanced AI actually operates and actually causes harm or benefit.

Human systems as core variables in AI safety

Most AI safety frameworks treat human systems as context rather than as a core variable. That framing misses something consequential. The organizational dynamics, institutional incentives, social structures, and cultural conditions surrounding consequential AI are not peripheral to the safety problem. In many real deployments, they determine whether safety holds or fails.

This part of Stephen's background is not a departure from technical work. It is a necessary extension of it. Consequential AI will reshape institutions, incentives, public systems, and human lives. Safety interventions that ignore these dimensions do not simply miss a variable. They create new failure modes by assuming a stability that does not exist in practice.

Sociotechnical and human-centered disciplinary grounding

Stephen's approach draws on sociotechnical systems theory, organizational behavior, industrial psychology, human-centered design, industrial engineering, and macro social work and social welfare policy. Sociotechnical systems theory holds that technical and social systems co-evolve and cannot be governed in isolation.

Organizational behavior and industrial psychology illuminate how people actually act inside institutions under real pressure and competing incentives. Macro social work frames intervention at the level of systems, institutions, and structural conditions, which is precisely the level at which consequential AI governance must operate. That human-centered dimension strengthens his ability to help organizations design safety strategies that are not only technically literate, but operationally and socially durable.

The Grand Challenge to Harness Technology for Social Good

Stephen is currently advancing AI safety research through the Doctor of Social Work program at the University of Southern California, where his work supports the Grand Challenge to Harness Technology for Social Good. The DSW is a practice-focused doctorate designed to situate research inside real institutional and community contexts rather than purely theoretical frameworks.

It is an unconventional path for an AI safety leader, and that is exactly the point. The most significant gaps in consequential AI governance are not purely technical. They are organizational, institutional, and deeply human. His work focuses on the intersection of AI complexity, organizational strategy, regulatory policy, and human systems.

Executive leadership in real organizations

Stephen's work is not confined to research or advisory contexts. He has led organizations directly, across nonprofit, technology, media, and quantitative finance.

As Board President and Executive Director of a technology nonprofit focused on innovation in digital inclusion and workforce development, Stephen built and sustained the organization across ten years of growth, leading strategy, governance, fundraising, and operations at every stage, from strategic planning, board leadership, and budgeting to technology alignment, branding, and executive development.

As Lead Organizer of the 2017 March for Science in Monterey, he directed the largest demonstration in the city's recorded history, coordinating with municipal and public safety authorities, securing elected officials and scientists as featured speakers, and generating front-page regional coverage and broadcast television reporting – a public demonstration of science advocacy and organizational leadership at a moment when the integrity of evidence-based institutions was under direct political pressure.

He has also served in several strategic and operational executive leadership roles at advisory and technology companies throughout his more than 20-year career, including as founder of a quantitative financial analysis and trading venture specializing in high-dimensional risk modeling and the execution of US equity index derivatives.

That operational breadth is directly relevant to consequential AI leadership. High-dimensional risk modeling requires reasoning about how tail risks behave in nonlinear systems, how correlated failures propagate, and how to make decisions when the full distribution of outcomes is not knowable in advance. That is the same reasoning structure that consequential AI safety demands.
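A minimal Monte Carlo sketch of that reasoning, using hypothetical numbers rather than anything from his trading work, shows why correlation dominates tail risk: the probability that two risk factors breach a loss threshold together grows far faster than an independence assumption would suggest.

    # Monte Carlo estimate of joint tail risk for two correlated risk factors.
    # Illustrative only; thresholds, correlations, and trial counts are arbitrary.
    import math
    import random

    random.seed(7)

    def joint_tail_probability(rho, threshold=-2.0, trials=200_000):
        """P(both standardized losses fall below `threshold`) under correlation rho."""
        hits = 0
        for _ in range(trials):
            z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
            x1 = z1
            x2 = rho * z1 + math.sqrt(1 - rho * rho) * z2  # induce correlation
            if x1 < threshold and x2 < threshold:
                hits += 1
        return hits / trials

    for rho in (0.0, 0.4, 0.8):
        print(f"rho={rho:.1f}  P(joint tail) ~= {joint_tail_probability(rho):.4%}")

Under independence the joint tail event is roughly the product of two small probabilities; as correlation rises it can be an order of magnitude larger. Correlated failure modes in deployed AI systems follow the same logic.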

Selected institutions and mission areas

Stephen's background includes work involving the Department of Defense, Department of State, U.S. Congress, FEMA, Northrop Grumman, the Defense Manpower Data Center, the Department of the Navy, the Department of the Army, the Department of Veterans Affairs, the Undersecretary of Defense, the U.S. Army Training and Doctrine Command, the Center for Homeland Defense and Security, the Naval Postgraduate School, and the University of Southern California.

His mission areas have included defense and security, counterterrorism, counterinsurgency, peacekeeping operations, health systems, decision-support systems, disaster recovery, economic modeling, nonprofit leadership, workforce development, digital inclusion, and consequential AI alignment.

Why 1023AI

1023AI is the vehicle for Stephen's work in AI safety and alignment. The name is a reference to Avogadro's number, 6.022 × 10²³, the number of particles in one mole of a substance. Below that threshold, you can reason about individual particles. Above it, aggregate behavior takes over entirely and changes qualitatively.

The European Commission's official Guidelines under the EU AI Act arrive at the same number, establishing 10²³ floating-point operations (FLOP) of training compute as the threshold at which AI models qualify as general-purpose AI, triggering mandatory regulatory oversight. That convergence is not coincidental. It marks the boundary where AI generality becomes real, emergent capabilities and emergent risks become the dominant safety challenge, and governance must cross the same threshold the model does. Most organizations reach that boundary without leadership that understands what has changed.
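As a back-of-the-envelope illustration of that numerical coincidence (the constants are public figures; the juxtaposition is only illustrative):

    # The two 10^23-scale thresholds referenced above, side by side.
    AVOGADRO = 6.022e23      # particles per mole (Avogadro's number)
    EU_GPAI_FLOP = 1e23      # training-compute threshold for general-purpose AI
                             # under the European Commission's AI Act guidelines

    print(f"Avogadro's number:         {AVOGADRO:.3e}")
    print(f"EU GPAI compute threshold: {EU_GPAI_FLOP:.0e}")
    print(f"Ratio:                     {AVOGADRO / EU_GPAI_FLOP:.1f}x")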

Stephen's work exists to close that gap. He brings the technical depth to reason about emergence, the executive experience to act on it, and the judgment to make safety hold in the real world under real pressure.

Interested in a conversation about safety at scale?

Stephen is open to conversations with organizations that are deploying consequential AI and exploring fractional or in-house executive leadership in AI safety and alignment.

If your organization is navigating emergent capabilities or emergent risks, the gap between evaluated safety and real-world resilience, or the human and institutional conditions that determine whether consequential AI safety holds at scale, reach out.

Start a Confidential Conversation

Your message goes directly to my private inbox. I treat every conversation as confidential.