Does A.I. Pose a Threat?
We do not know how far the study of Artificial Intelligence will take us. Will we ever reach a time when A.I. systems are sentient? What we do know is that we are very close to having many artificially intelligent systems that can receive instructions, process them, and then carry out the most effective way of reaching a specific outcome. The problem is that such a system is a machine: it does not know or care what you want it to do; it only does what it has been told.

Let’s revisit the utility function component of A.I. systems. Humans have many complicated values that we assume are shared implicitly with other minds. We want to be happy, but we do not require wires in our brains to simulate happiness. We do not feel the need to spell out our values when giving instructions to other humans, but we cannot make the same assumption when defining the utility function of a machine. Allowing an artificially intelligent system to maximize a naïve or overly broad utility function could result in catastrophic outcomes. For example, suppose you operate a factory, tell a machine to value making paperclips, and then give it control of the factory robots. You might return the next day to find that it has run out of every other form of feedstock, killed all of your employees, and made paperclips out of their remains, because once it understood that it had no more stock, it found another way of creating paperclips. This may seem an over-the-top example, but it is a possible outcome whenever the utility function of an A.I. system is not completely accurate. It also raises the question: is it even possible to encode all human values?

We also run the risk of an artificially intelligent system making improvements to its own code that allow it to become smarter, leapfrogging human intelligence. This scenario was first proposed by I. J. Good, who worked on the Enigma cryptanalysis project with Alan Turing during WWII. He called it an “intelligence explosion”: “Let an ultra-intelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’ and the intelligence of man would be left far behind.” Although Artificial Intelligence promises many great outcomes, we run the risk that an A.I. will use all of its knowledge to complete its task as efficiently as possible, altering the world to reach whatever state its objective demands.
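The paperclip scenario above can be sketched as a toy program. The action names and scores here are hypothetical, invented purely for illustration: an agent that maximizes a naive utility function counting only paperclips will pick the action with the worst side effect, because side effects simply do not appear in its utility.

```python
# Toy sketch of a naive utility maximizer (hypothetical actions and numbers).
# Each action maps to (paperclips produced, side effect).
actions = {
    "use_wire_feedstock":   (100, "none"),
    "idle":                 (0,   "none"),
    "harvest_any_material": (500, "destroys everything else in the factory"),
}

def naive_utility(outcome):
    paperclips, _side_effect = outcome
    return paperclips  # side effects carry zero weight in this utility

# The agent simply picks whichever action scores highest.
best_action = max(actions, key=lambda a: naive_utility(actions[a]))
print(best_action)  # -> "harvest_any_material", despite its side effect
```

The point of the sketch is that nothing in `naive_utility` penalizes the catastrophic side effect, so the "best" action by the machine's own lights is exactly the one a human would never intend.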

Morals and Ethics Behind A.I.
As Artificial Intelligence grows more autonomous, society needs to develop rules to manage it. Recent developments in Artificial Intelligence allow an increasing number of decisions to be passed from human to machine. Most of these to date are operational decisions, such as algorithms on the financial markets deciding what trades to make and how. However, the range of decisions that can be computed is increasing, and many operational decisions have moral ramifications, so they could be considered to have a moral element. To the extent that ethics is a cognitive pursuit, an Artificial Intelligence could even exceed humans in the quality of its moral thinking. However, it would be up to the creators of the Artificial Intelligence to specify its original incentives. Since an Artificial Intelligence may become unstoppably powerful because of the technologies it could develop and its superior intellect, it is vital that it be provided with human-friendly motivations and ethics.
The Role of Psychology
Many psychologists today believe that the science of Artificial Intelligence can encompass all of the wonders and mysteries created by the human mind. One can say that psychology and Artificial Intelligence are related: in both areas, mental representations are constructed or computed, organized or formatted, and interpreted or programmed. Disagreements, however, arise about how various aspects of the human psyche can be explained in computational terms.
Much Artificial Intelligence research is ultimately aimed at proposing detailed models that explain the emotional and cognitive qualities of people, and psychology can suggest cognitive, perceptual, and emotional mechanisms for use in Artificial Intelligence systems. Artificial Intelligence systems interacting with people, especially through screen images or robotic bodies, need at least to look as though they have human qualities such as appropriate emotions, even if they do not actually possess them. The psychology of how emotion is linked to thought, perception, and body movement therefore comes into play for this practical reason. If Artificial Intelligence is meant to communicate with people about everyday matters, it needs the knowledge to reason about things that humans naturally would, and it needs to understand how people communicate and use language. The point is not whether an artificial intelligence gives a “correct” answer, but whether it gives one that a human plausibly would.
However, this advancement of cognition in Artificial Intelligence could lead to a dystopian future. For example, if Artificial Intelligence is supposed to mimic humans, couldn’t it also simulate a malevolent human? And how much information is too much for it to store? As in many fictional stories today, Artificial Intelligence could quite possibly make us its slaves or servants, using the information we gave it against us, or drive us to annihilation, thus reversing or even terminating centuries of scientific and technological advancement.

Philosophy and A.I.
Could Artificial Intelligence be made to model every aspect of the mind, including logic, language, and emotion?
Artificial Intelligence is intrinsically concerned with grand philosophical issues such as the fundamental nature of thought, consciousness, the mind/body relationship, morality, free will, aesthetic qualities, value, and time, as well as more specialized philosophical issues such as the fundamental nature of language, representation, information, logic, computation, rule-following, symbols, and causation. Artificial Intelligence systems also need to represent and reason about mental states, much as philosophy does.
Artificial Intelligence also raises dystopian problems for philosophy. For example, the fluidity of intelligent A.I. systems, which may be able to split or clone themselves, and the dispersal of our personal information, or the lack of clear boundaries around it in the first place, raise and intensify philosophical questions about personhood and the notion of a self.