You mention a huge problem here: morality in AI. You said you would use AI for good, but who defines good? Companies tend not to care about morality in AI, thinking it's a problem for the future rather than for now. So they keep developing powerful AI for money and growth. Some good scientists see the problem coming, try to raise the alarm, and look for funding to research this subject.
We have so many moral questions we cannot answer right now. For instance, take an AI designed to cure cancer, and suppose it could find the cure through lethal experiments on 1,000 people. Would the AI sacrifice those 1,000 people to free humanity from cancer? Mathematically, the right answer is yes, but morally, there is no good answer. If we don't think about morality in AI, the AI will make that choice for us.
But here's the thing: sure, we developers can raise the alarm and educate people about these questions, but I don't think I could work on this question alone. We need philosophers, developers of course, designers, neuroscientists, and everyone with the skills to debate it. And the most important thing: whoever decides on the morality of a particular AI must think without any personal interest. Sometimes fighting for what we think is good is just a personal interest, because we humans are very attached to our convictions.