I will say, never forget the introduction of atomic weapons. For long afterward, massive numbers of people worried endlessly about the destruction of the world, and talked VERY much like how people talk about A.I. now. Our whole culture was wrapped up in the Cold War and the perceived threat therein.
Fast forward to today, and most people never consider that the threat still remains and is even 10,000X more deadly than it was back then, and look at us now! We have adapted amazingly. Just the fact that a group of something as horribly flawed as humans has managed to keep nuclear war at bay basically since its creation is AMAZING ... and somehow, in a very strange way, it gives me hope.
Maybe we don't make it past the dawning of A.I.? Or maybe ... we adapt our biological selves to the machine and work with AI to empower the entire human race?
Either way, I am very optimistic and very impressed with the things called HUMANS :)
Cameron Thomson
Student
You mention a huge problem here, which is morality in AI. You said you would use AI for good, but who defines good? Companies tend not to care about morality in AI, thinking it is a problem for the future, not for now. So they keep developing powerful AI for money and growth. Some good scientists see the problem that is coming, and they try to raise the alarm and find money to research this subject.
We have so many moral questions we cannot answer now. For instance, take an AI designed to cure cancer, and say it could find the cure through fatal experiments on 1,000 people. Would the AI sacrifice these 1,000 people to save humanity from cancer? Mathematically, the answer is yes, but morally, there is no good answer. If we don't think about morality in AI, the AI will make the choice for us.
But the thing is, sure, we developers can raise the alarm and educate people about these questions, but I don't think I would be able to work on this question alone. We need philosophers, developers indeed, designers, neuroscientists, and every skill set capable of debating it. And the most important thing: whoever decides on the morality of a particular AI must think without any personal interest. Sometimes fighting for what we think is good is just a personal interest, because we humans are very attached to our convictions.