A.I. On The Silver Screen

Over the years we have seen many movies based on the idea of Artificial Intelligence. Some examples include Ex Machina, Avengers: Age of Ultron, Her, Blade Runner, and A.I. Artificial Intelligence from back in 2001.

 

Her (2013) paints a dystopian future of constant surveillance and centralized control that feels familiar, yet beautiful and inviting because of its evolving technologies. Beneath the surface, however, there is an emptiness and coldness, a sense that something is lost between us and our devices. The film reveals that technology is ultimately an ill-fitting substitute for genuine human interaction: the devices we're addicted to are draining us of our humanity and our scope for human connection, isolating us.

 

Blade Runner (1982) portrays a postmodern dystopian future. Los Angeles, the setting of the film, does much to create that dystopia: it is a visually spectacular but bleak vision of the future, whose towering skyscrapers and vast advertising screens show a society consumed by commerce. Its chaotic streets are crowded with people; overpopulated and polluted, the city's rotting state can be read as a consequence of economic excess. These images give the viewer a vivid sense of the film's dystopia, which is reinforced by the technological advancements we are made aware of, most notably the replicants, the film's form of Artificial Intelligence.

 

A.I. Artificial Intelligence was released back in 2001. Set in a future where humans have advanced technologically enough to build robots, or 'mechas,' to serve them, it follows a robot 'child' who is capable of feeling emotion. With that advancement of technology, we see in this seemingly utopian world the pursuit of the perfect robot: an exact replication of a human being that can ultimately have feelings.

 

Ex Machina (2015) gave us a more realistic look at how Artificial Intelligence could become so advanced that the system itself is able to display sentience. The director of Ex Machina, Alex Garland, offered a very interesting observation: "...do I think there will be A.I.'s one day that are strong A.I.'s and that have sentience? If I had to bet, I'd bet yes, ... I could be wrong ... But one of the pleasures of working on this film is that I've got to meet some people who are involved at a very high level of current A.I. research, and you pretty much get the same message from all of them, which is that it will happen, but it's not about to happen. It's not imminent. We're really not talking about 5 years. We may not be talking about 20 years... you may well be talking about 200 years, and then again you may be talking about never." I found this quote to be very powerful because it shows that Garland has seen glimpses of how much we have progressed in the field of Artificial Intelligence. However, if we reach a point where Artificial Intelligence has become sentient, do we risk having Artificial Intelligence that is not only sentient but also all-knowing?

Ultron, from Avengers: Age of Ultron (2015), is a prime example of exactly this risk. Ultron represents the ultimate example of an Artificial Intelligence application gone wrong: he shows that a sentient Artificial Intelligence system could gain so much knowledge that it seeks to dethrone the humans who created it. Ultron also reflects applications of Artificial Intelligence that we have in our world today. He continues to evolve, producing new iterations of himself over time, and with each iteration he becomes stronger, smarter, and more bent on fulfilling two main desires: survival, and bringing peace and order to the universe. The unfortunate part for us humans is that Ultron would like to bring peace and order by eliminating all other intelligent life in the universe. This calls us to look at the face of humanity, because humanity often does not recognize when it has crossed a boundary in the pursuit of more and more knowledge; we do not realize it until we are already on the other side. In science we often push to discover and apply things before we truly understand all the implications, both positive and negative, that will accompany them. We often do things mainly because we can, without fully considering whether we should, in fact, do them at all. Tony Stark, a.k.a. Iron Man, falls under this statement when he takes an unknown piece of technology and installs it in Ultron. Rest assured, he greatly regrets his mistake.

 

Opinions on Artificial Intelligence

“Any sci-fi buff knows that when computers become self-aware, they ultimately destroy their creators. From 2001: A Space Odyssey to Terminator, the message is clear: The only good self-aware machine is an unplugged one. We may soon find out whether that's true. ... But what about HAL 9000 and the other fictional computers that have run amok? ‘In any kind of technology there are risks,’ [Ron] Brachman acknowledges. That's why DARPA is reaching out to neurologists, psychologists - even philosophers - as well as computer scientists. ‘We're not stumbling down some blind alley,’ he says. ‘We're very cognizant of these issues.’”

 

-Kathleen Melymuka, Computerworld

 

 

 “The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own, and re-design itself at an ever increasing rate. I think machine intelligence will eventually surpass biological intelligence—and, yes, there will be significant existential risks associated with that transition.”

 

-Stephen Hawking, theoretical physicist

 

 

“Why give a robot an order to obey orders—why aren't the original orders enough? Why command a robot not to do harm—wouldn't it be easier never to command it to do harm in the first place? Does the universe contain a mysterious force pulling entities toward malevolence, so that a positronic brain must be programmed to withstand it? Do intelligent beings inevitably develop an attitude problem? (…) Now that computers really have become smarter and more powerful, the anxiety has waned. Today's ubiquitous, networked computers have an unprecedented ability to do mischief should they ever go to the bad. But the only mayhem comes from unpredictable chaos or from human malice in the form of viruses. We no longer worry about electronic serial killers or subversive silicon cabals because we are beginning to appreciate that malevolence—like vision, motor coordination, and common sense—does not come free with computation but has to be programmed in. (…) Aggression, like every other part of human behavior we take for granted, is a challenging engineering problem!”

 

-Steven Pinker, How the Mind Works

 

 

“I am in the camp that is concerned about super intelligence. First, the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that, though, the intelligence is strong enough to be a concern.”

 

-Bill Gates, co-founder of Microsoft

 

 

“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”

 

-Alan Turing, Computing Machinery and Intelligence

A.I.'s Effects on Society

The argument between the pros and cons can be an endless war, but we need to take a step back and consider the idea of having Artificial Intelligence in our society and how it would impact our world. Carl Benedikt Frey and Michael A. Osborne, of the Oxford Martin School and the Faculty of Philosophy in the UK, state that, "According to our estimate, 47 percent of total United States employment is in the high risk category (because of Artificial Intelligence systems) meaning the associated occupations are potentially automatable over some unspecified number of years, perhaps a decade or two." Further support for this claim comes from two MIT academics, Erik Brynjolfsson and Andrew McAfee, who argue that we are entering a "second machine age," in which the accelerating rate of change has the potential to leave millions of medium- and low-skilled workers behind. As industrial robots become more advanced, they will be able to perform a wider scope of non-routine manual tasks, so the remainder of employment in production occupations is likely to diminish over the next decade. There are also claims that information technology has already impacted "middle-skilled" workers, whose jobs include administrative work, repair work, and factory work.

MIT's Technology Review reported in 2013 that researchers at the University of Oxford projected that 45% of American jobs are at high risk of being taken by computers in the next two decades. The benefits of Artificial Intelligence can be seen as helping the economy by supplying efficiency, but the underlying truth is that these Artificial Intelligence systems are putting nearly half of the labor force at risk. That does not only affect the people who are being laid off; it affects us as a country and as a world. MIT also stated that the takeover will happen in two different stages. In the first, Artificial Intelligence systems will start replacing people in transportation and logistics, production labor, and administrative support, and jobs in services, sales, and construction could also be automated. We will then hit a plateau due to funding issues. MIT then predicts a second wave of computerization that will "depend on the development of good Artificial Intelligence." In this stage, jobs in management, science, engineering, and the arts will be at risk. If MIT's predictions are accurate, then it appears that no one is safe from the progression of ever-growing Artificial Intelligence.

 

The time when we depend on Artificial Intelligence to serve us on the front line of protection is already upon us. The United Kingdom is already looking at how A.I. techniques can boost digital forensics. The government-funded Cyber Security Knowledge Transfer Network will examine the potential use of A.I. in web counter-terrorism surveillance and in fighting internet fraud. A.I. could also be used to safeguard privacy by monitoring precisely what information is being sent where. Although it sounds great that Artificial Intelligence is protecting our privacy, in a way it is also invading it by tracking where all of our private information is being sent. It wants us to believe that we are safe from others looking at our information, while it is looking at our information the whole time. Another frightening application of Artificial Intelligence is the idea of military drones being controlled by A.I. technology. Soon, drones will be able to carry out targeted killings without consulting a human operator, responding only to a set of criteria defined by the person who coded the machine. Researchers behind this type of technology emphasize the idea of "Friendly A.I." in order to hide the dystopian trends that are actually happening in the background. Imagine a world where a heavy-duty drone can be sent out without anyone even leaving the house. If this type of technology were to fall into the wrong hands, it would be catastrophic for our world. So it makes me wonder: if we are trying to protect the world by using Artificial Intelligence, then why are we in danger from the same A.I.?

A.I. and the Future

Imagine a world even more advanced than it is today. As technology grows, so does the advancement of A.I. What does the future hold for these intelligent robots in our society? Today's A.I. can do simple, everyday tasks, such as giving us advice or directions, but we may envision A.I. taking on more complex responsibilities. Vicarious, which is supported by $70 million from Silicon Valley celebrities like Mark Zuckerberg and Elon Musk, has made progress by taking an unusually visual approach. In 2013, Vicarious generated buzz when it announced its A.I. could solve CAPTCHAs, the ubiquitous website puzzles that require visitors to transcribe a string of distorted letters, a basic test used to verify that the site's visitor is actually human. Scientists are still in quest of a machine as adaptable and intelligent as the human brain. Now, computer scientist Hava Siegelmann of the University of Massachusetts Amherst, an expert in neural networks, has taken Turing's work to its next logical step. She is translating her 1993 discovery of what she has named "Super-Turing" computation into a malleable computational system that learns and evolves, using input from the environment in a way much more like our brains do. The model is inspired by the brain: "Super-Turing" is a mathematical formulation of the brain's neural networks with their adaptive abilities. In the coming decades, we shouldn't expect that the human race will become extinct and be replaced by robots. We can expect that A.I. will go on creating more and more sophisticated applications.
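
To make the idea of a system that "learns and evolves using input from the environment" a little more concrete, here is a minimal sketch of how such adaptation can look in code. This is not Siegelmann's Super-Turing model or Vicarious's system; it is only an illustrative toy (a simple perceptron-style learner), and the AdaptiveUnit class and the tiny "environment" below are invented for the example.

```python
# Illustrative sketch only: a tiny system that adjusts its internal weights
# from environmental feedback. Not the "Super-Turing" model described above.

import random

class AdaptiveUnit:
    """A single adaptive unit: it predicts from its inputs, then learns from feedback."""

    def __init__(self, n_inputs, learning_rate=0.1):
        self.weights = [0.0] * n_inputs   # internal state that adapts over time
        self.bias = 0.0
        self.learning_rate = learning_rate

    def predict(self, inputs):
        # Weighted sum of inputs, thresholded to a yes/no answer.
        total = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total > 0 else 0

    def learn(self, inputs, target):
        # Nudge the weights toward whatever the environment says was correct.
        error = target - self.predict(inputs)
        self.weights = [w + self.learning_rate * error * x
                        for w, x in zip(self.weights, inputs)]
        self.bias += self.learning_rate * error

# Hypothetical "environment": it approves a positive answer only when both signals are present.
environment = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

unit = AdaptiveUnit(n_inputs=2)
for _ in range(200):                      # repeated interaction with the environment
    inputs, target = random.choice(environment)
    unit.learn(inputs, target)

print([unit.predict(x) for x, _ in environment])   # typically prints [0, 0, 0, 1]
```

The point of the sketch is simply that the system's behavior is not fixed in advance: its internal weights drift toward whatever the environment rewards, which is the basic sense in which such models "learn and evolve" from their surroundings.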

Mohabat Gill

Marisa McCann

UCOR 3430: Humanities and Global Challenges
