>> "[Mr Altman] told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI,"
We don't want to be comfortable. We do not want to be persuaded, or charmed.
We want to question - and rightly so - every single aspect of it, and "we", the users, want a voice. We want to think and work through this "transition".
PS. The use (abuse) of "reproductive" and "charismatic" imperatives in order to short-circuit, avoid, or minimize scrutiny (which is what this is) is sleazy at best.
I fear that the single most important technological transition mankind has ever made, if that is what AGI turns out to be in the end, is being handled by our least morally capable, as recent events are showing. I for one am concerned ...
- Demand for Scrutiny and Voice:
- We do not seek comfort or persuasion from AI. Instead, we insist on the right to question every aspect of its development. Users must have an active voice in shaping AI technologies.
- Transparency Against Manipulation:
- The use of "reproductive" and "charismatic" imperatives to avoid scrutiny is unethical. We reject the "Johansson effect" and such tactics, and demand transparent development processes that welcome rigorous examination. We reject the "stone cold panic" of an arms race and instead recognize the need for a careful, paced, studied, and carefully monitored adoption of AI technology.
- Ethical and Inclusive Development:
- The development of AI, particularly AGI, represents a monumental technological transition. This process must not be monopolized by those with questionable moral capacities. Recent events have highlighted the need for higher ethical standards.
- Diverse Societal Participation:
- We advocate for the inclusion of a broad cross-section of society in AI discussions. This includes ethicists, philosophers, jurists, technologists, and representatives from all walks of life, to ensure a balanced and holistic approach both to development and to alignment.
- User-Centric Design Principles:
- AI development should prioritize user-centric principles, ensuring that AI systems are designed with the best interests of users in mind, rather than corporate, political, or other particular agendas.
- Accountability and Governance:
- Robust mechanisms for accountability and governance must be established. This includes clear regulations, oversight bodies, and transparent reporting on AI development, deployment, and oversight.
- Educational Empowerment:
- Empower users through education about AI. An informed public is essential for meaningful participation in AI discourse and decision-making.
- Preservation of Human Agency:
- AI must be developed in a manner that preserves and enhances human agency, rather than undermining it. Users should retain control and autonomy in their interactions with AI systems.
- Preventing Abuse of Power:
- Vigilance against the concentration of power in AI development and deployment is crucial. Measures should be in place to prevent abuse by powerful entities, ensuring that AI serves the broader public good.
- Ethical Use of Data:
- Data used in AI development must be ethically sourced and handled. Users' privacy and consent are paramount, and there must be strict adherence to data protection principles and copyright where applicable.