A new kind of expert is rising: the orchestrator, who pairs human judgment with opaque AI systems to solve problems no one person could handle alone. Picture a junior surgeon who follows a model's multi-step plan and saves a patient. Later, a court asks the surgeon to explain the decision. The hospital produces a certification badge and a detailed log, but no plain-language rationale. That badge, meant to signal trust, also opens doors to budgets, patients, and influence.
The conundrum
If real expertise becomes the skill of orchestrating opaque AIs, who should decide who gets to be an orchestrator? The candidates all look sensible on paper: governments, professional boards, big platforms, decentralized reputation systems, or some hybrid. But each choice forces a trade-off. Some boost safety and clear accountability but move slowly and invite capture; others speed up benefits and broaden reach but concentrate power and create new inequalities. There is no neutral option, only a choice of which set of permanent gains and losses we accept. Which trade-offs are we willing to lock into our hospitals, courts, cities, and schools?