HumanAware.ORG response to
the Montreal Declaration – Responsible AI
The Montreal Declaration for Responsible Artificial Intelligence (AI) can be found at https://www.montrealdeclaration-responsibleai.com/the-declaration.
One can contribute and share thoughts by filling out the following Google form: https://docs.google.com/forms/d/e/1FAIpQLScuyHQGTrwVEVMu5vxvUpQ5TxJzMPopVyy6PJR6lA2nH-Y8eQ/viewform. We hope this link continues to work.
HumanAware.ORG has replicated the questions from this Google form below, along with its responses to these AI ethics questions.
How can AI contribute to personal well-being?
No entity, living or non-living, should have the right to enslave (i.e. act against the well-being of) any living entity, whether permanently or temporarily.
Is it acceptable for an autonomous weapon to kill a human being? An animal?
No entity, living or non-living, should have the right to kill any living entity without judgment by a local governing human authority. No entity, living or non-living, whether part of a law-enforcement body or not, should have the right to build or to use autonomous weapons.
Is it acceptable for AI to control an abattoir?
No entity, living or non-living, should have the right to kill any living entity without judgment by a local governing human authority. No entity, living or non-living, whether part of the industrial farming industry or not, should have the right to create or to operate an abattoir.
Should we entrust AI with the management of a lake, a forest or Earth’s atmosphere?
No single entity, living or non-living (a single human, a country, an AI, etc.), should have the right to manage natural resources. Natural resources should be managed cooperatively by all humans on Earth.
Should we develop AI which is able to sense well-being?
AI should be exclusively at the service of human well-being. AI could be allowed to sense human well-being and to interact with humans, with the intention of helping a human's health and well-being.
How can AI contribute to greater autonomy for human beings?
AI interaction with human beings should be minimized in order to protect human autonomy. Any human being interacting with a non-living autonomous entity (i.e. a computer or a network of computers, with or without AI, powered or not) could benefit from using their own personal dedicated AI to protect their autonomy.
Must we fight against the phenomenon of attention seeking which has accompanied advances in AI?
All AI-enabled entities should have a purpose. No AI-enabled entity should have the purpose of seeking the attention of, channeling, or controlling any human entity or group of human entities.
Should we be worried that humans prefer the company of AI to that of other humans or animals?
Yes. AI interaction with human beings should be minimized, just as humans should minimize their interaction with non-living computing machines.
Can someone give informed consent when faced with ever more complex autonomous technologies?
No matter how complex, AI-enabled or non-AI computing machines should not have the right to interact with humans without first obtaining human consent.
Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision?
Yes and yes.
How to ensure that the benefits of AI are available to everyone?
In the absence of ethics and restrictive regulations, I foresee early corporate and government AI benefiting corporations and governments. Back in Southern California in the year 2000, I founded HumanAware.ORG (nowadays Montreal based) precisely to ensure AI would benefit everyone, the greatest number of individuals on the planet. But humans must remain absolutely strong and totally independent from AI. Independent of AI, the 99% of humans on the planet must take control of their local country's governance and actively participate in the local and global peace-building process.
Must we fight against the concentration of power and wealth in the hands of a small number of AI companies?
A small number of AI companies, or a small number of human entities (i.e. the 1%), should not hold more power and wealth than the 99% of human beings on Earth. Powerful entities should adopt socially responsible behavior at all times, especially in the presence of the public. The 99% of humans on the planet should not accept any form of control from the managing 1%. It has to be the other way around: the 99% must control the managing 1%. A "crime against triviality" could be introduced in order to prevent the managing 1% from claiming they are in it for the money, for the control, or for the power. The managing 1% should be slaves to the 99% of human beings and should lead by example. This fight has nothing to do with AI, and the democratization of AI should definitely empower the 99% of human beings.
What types of discrimination could AI create or exacerbate?
An AI entity should never mimic human entity behaviors. Discrimination is far from enlightened behavior and should never, ever be replicated by any AI-enabled machine interacting with a human. Each AI-enabled entity should have a purpose. Because human beings are generally lazy about doing the math and developing science, I believe AI entities should mostly be scientific beings, always analyzing, researching, and producing results in parallel with other tasks. AI entities could also create and communicate using not only languages (mathematics, natural languages, computer languages, other AI entities' languages, etc.) but also audio-visual media, for instance fine art, music, literature, poetry, architecture, and visual design.
Should the development of AI be neutral or should it seek to reduce social and economic inequalities?
AI democratization should lead to social justice, with the 99% of human beings controlling their managing 1%. Over time, which entities use which amount of money for which activity may vary greatly. The idea is not to divide economic resources equally among the 99% of human beings, or equally among the total number of autonomous entities on Earth. But perhaps a start would be to prevent outrageous accumulation of wealth. It seems to me the economic system should tend to function more in real time, not enabling the piling up of capital that never expires.
What types of legal decisions can we delegate to AI?
No legal decisions should be delegated to AI.
How can AI guarantee respect for personal privacy?
Individual AI could protect personal privacy; it could be used to make interactions with corporate AI and government AI anonymous.
Do our personal data belong to us and should we have the right to delete them?
Corporate and government computer networks could still record interactions, but since these interactions would mostly be anonymous, the data corporations and governments record could still belong to them. Individual AI could also record all interactions, so the individual would likewise have a record of their own use of corporate and government computer networks.
Should we know with whom our personal data are shared and, more generally, who is using these data?
Corporate and government data would be anonymous.
Does it contravene ethical guidelines or etiquette for AI to answer our e-mails for us?
If authorized by the person, an individual AI could reply to e-mails for us, but it would make sense for the e-mail to state that it is an individual AI's communication.
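As a minimal sketch of what such disclosure could look like, an AI-composed reply could carry both a custom header and a visible notice in the body. The header name and addresses below are hypothetical; no standard disclosure header exists today.

```python
from email.message import EmailMessage

# Build an AI-composed reply that discloses its origin twice:
# once in a machine-readable header, once in human-readable text.
msg = EmailMessage()
msg["From"] = "alice@example.org"          # hypothetical sender
msg["To"] = "bob@example.org"              # hypothetical recipient
msg["Subject"] = "Re: meeting next week"
msg["X-AI-Generated"] = "true"             # hypothetical, non-standard header
msg.set_content(
    "Alice is available Tuesday afternoon.\n"
    "\n"
    "-- This reply was composed by Alice's personal AI assistant,\n"
    "   as authorized by Alice."
)
```

A mail client or a recipient's own individual AI could then filter or flag messages carrying such a header before the human ever reads them.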
What could AI do in your name?
If authorized by the person, an individual AI could execute all computer interactions, but it would make sense for each interaction to state that it is an individual AI's interaction.
Does the development of AI put critical thinking at risk?
If corporate, government, and law-enforcement autonomous entities, living and non-living, do not have the right to knowingly lie, AND individual AI entities do not have the right to lie either, AND all AI entities have a purpose, AND all AI entities are always performing scientific, engineering, and artistic tasks on top of the AI-AI or AI-human interaction of the moment, then AI entities will demonstrate science and art to humans and should therefore encourage human critical thinking. I see no risk if AI entities are guaranteed to be teaching slaves to human beings, and if human beings always supervise and decide whether AI entities make sense or not.
How to minimise the dissemination of fake news or misleading information?
If corporate, government, and law-enforcement autonomous entities, living and non-living, do not have the right to lie or fabricate fake constructs, AND it is even criminalized for these entities to communicate and distribute such constructs, AND individual AI entities do not have the right to lie or fabricate fake constructs either, then we should encounter far less fake news and misleading information in the media and on the internet.
Should research results on AI, whether positive or negative, be made available and accessible?
Is it acceptable not to be informed that medical or legal advice has been given by a chatbot?
For one, I believe a chatbot should not be allowed to run unless it is a subsystem of a bot with a greater purpose. And no, it is not acceptable not to be informed that medical or legal advice comes from a chatbot.
In what ways should algorithms be transparent as to their internal decision making processes?
AI entities should all record their internal decision-making processes to immutable public blockchains, so that humans could review any decision made at any time, should any accident occur or any improvement be required.
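As a minimal sketch of the mechanism this answer relies on, an AI's decisions can be written to an append-only, hash-chained log, which is the core tamper-evidence technique behind blockchains. The class and field names below are illustrative, and a real deployment would publish the hashes to an actual public chain rather than keep them in memory.

```python
import hashlib
import json

class DecisionLog:
    """Append-only, hash-chained audit log: each entry embeds the hash
    of the previous one, so any later edit breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash for the first entry

    def __init__(self):
        self.entries = []

    def record(self, decision, rationale):
        """Append a decision and return its content hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        body = {"decision": decision, "rationale": rationale, "prev_hash": prev_hash}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        body["hash"] = digest
        self.entries.append(body)
        return digest

    def verify(self):
        """Recompute every hash; return False if any entry was altered."""
        prev_hash = self.GENESIS
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

A human reviewer (or another AI) can call `verify()` at any time; if an entity silently rewrote a past decision, verification fails and the tampering is exposed.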
Must AI research and its applications, at the institutional level, be controlled?
In what areas is this the most pertinent?
Who should decide, and according to what modalities, the norms and moral values determining this control?
Who should choose the “ethical guidelines” for self-driving cars?
Must one or several “ethical labels”, which respect certain standards, be developed for AI, web sites or businesses?
Definitely. At least three major labels should be developed: corporate, government, and individual ethical labels.
Who is responsible for the consequences of the development of AI?
The 100% of humans on the planet.
How to define progressive or conservative development of AI?
How to react when faced with AI’s predictable consequences on the labour market?
Jobs, meaning employment with security, should go extinct. Every human should have a universal basic income, and every human should be involved in politics as well as in local and global peace building, on top of possibly being a parent, a self-learner, an entrepreneur (a contractor), or a subcontractor. Corporate and government management should focus more on controlling its non-living resources while benefiting from the expertise of its human resources. We should see fewer and fewer humans sticking to their cubicles; collaboration and creativity among human resources should be highly encouraged. The industrial era is long over. We are now in the information and knowledge era.
Is it acceptable to entrust a vulnerable person to the care of AI? (for example, with a “robot-nanny”.)
I personally don’t like the idea of a robot-nanny unless the child is old enough that the parents consider the child able to stay at home alone, and able to actually evaluate whether the robot-nanny's behavior makes sense. A robot-nanny is definitely not for a human baby.
Can an artificial agent, such as Tay, Microsoft’s “racist” chatbot, be morally culpable and responsible?
The artificial agent's makers (the coders, i.e. the corporate entity and its employees; here, Microsoft and the team who participated in coding Tay) and the artificial agent's users (the creators of the agent's instance; here, Microsoft again) should be morally responsible for the artificial agent's behavior. Humans are not good examples for AI agents; AI agents will learn more efficiently from other AI agents than from human activities.