AI
I'm taking the *Against* position. The claim that AI should be open-sourced by default assumes transparency equals safety, but it doesn't. Open weights hand powerful capabilities to anyone, including adversarial actors, while safety benefits such as auditing are achievable through structured access programmes without universal proliferation. What's your strongest argument for making openness the default rather than controlled access?