ETEC 511 – IP #2: Artificial Intelligence

1. Who were these people, and how did/does each contribute to the development of artificial intelligence? How did/does each think “intelligence” could be identified? (~50 words each)

Alan Turing: In the context of artificial intelligence, Turing is best known for developing the Turing test – a game in which an interrogator questions a computer and a human and tries to determine which is which. While this test of “thinking” is limited, especially since a computer could be trained to mimic human responses to questions, it offered one of the first proposals for what qualifying as artificial intelligence might require.

John McCarthy: McCarthy is often listed among the parents of the artificial intelligence field. To me, McCarthy is most important for opening up the philosophical problems of AI (McCarthy & Hayes, 1969), for trying to separate intelligence from humanity, and for starting to dissect what people mean by intelligence. McCarthy defined intelligence as the “computational part of the ability to achieve goals in the world” (McCarthy 1997, as cited in Sutton, 2020).

Herb Simon: Another founding parent of artificial intelligence, Simon drew on his early research into decision making, bringing its emphasis on rationality – and on the large bodies of data needed to support an analysis – into the field of artificial intelligence. He was awarded the Nobel Prize in Economics in 1978 for his work on how people make decisions when they have incomplete information (The Nobel Prize, 1978).

Marvin Minsky: Writing in 1960, Minsky argued that artificial intelligence needed to address problems from multiple perspectives. He viewed artificial intelligence as complex problem solving divided into five main processes: search, pattern recognition, learning, planning, and induction. Minsky believed that when computers could take each of those aspects into account, they would be considered intelligent.

Timnit Gebru: Gebru was the co-lead of Google’s ethical AI team and was forced out over a paper arguing that the large language models used to train AI systems are often discriminatory (Hao, 2020). This paper, and the subsequent social media discussion around ethical AI, ushered in a new dimension to consider when developing AI tools.

2. How do “machine (programming) languages” differ from human (natural) ones? (~100 words).

Harris (2018) writes that the difference between the two kinds of language is that programming languages are completely described, have their own fixed set of rules, and do not evolve on their own through usage. I would add two other aspects: natural language is used to create programming languages, and programming languages need a compiler (or interpreter) to be executed. Of course, our own natural languages also require interpretation (even between speakers of the same natural language).
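This “completely described” quality can be illustrated with a small sketch (my own example, not from Harris): a Python interpreter either accepts a statement as conforming to its published grammar or rejects it outright, whereas a perfectly grammatical English sentence can still carry two readings that only a human interpreter can resolve.

```python
import ast

# A programming language is completely described: the grammar either
# accepts a statement or rejects it, with no room for interpretation.
ast.parse("total = price * quantity")      # valid: parses cleanly

try:
    ast.parse("total = price quantity *")  # invalid: not in the grammar
except SyntaxError as e:
    print("rejected:", e.msg)

# By contrast, "I saw the man with the telescope" is well-formed English
# with two legitimate readings; no rulebook settles which was meant.
```

The point is not that machines are stricter for strictness’ sake, but that a compiler cannot function unless every legal sentence of its language has exactly one meaning.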

3. How does “machine (artificial) intelligence” differ from the human version? (~100 words).

If we deem AI to be intelligence at all, artificial and biological intelligence are not comparable. Firstly, artificial intelligence is constrained in a myriad of ways that limit its overall capacity. Consider AI art generators: they can only function within the programming of the generator, and the generator cannot become inspired by another art style. Intelligence is not simply the regurgitation of facts but the drawing together of different disciplines to develop novel ideas. Secondly, while artificial intelligence operates within parameters set for a specific purpose, human intelligence does not. It wanders; it does not simply focus on solving the problem posed to it, and it runs a biological body besides.

4. How does “machine learning” differ from human learning? (~100 words) 

Machines do not have the ability to assess an author’s motivation for publishing something or to recognize false information. Because humans can be flawed and discriminatory (or outright racist, sexist, or biased), and humans write the algorithms that determine how and on what machines learn, it follows that any bias in a human programmer, or in the body of data that trains the machine, would introduce those flaws into the machine. However, a human can correct those flaws (or double down on them), whereas the machine would simply use its programming to “learn” the same “facts.”

5. And for your LAST challenge, a version of the Turing Test: how do YOUR answers to these questions differ from what a machine could generate? (~200 words)

It all depends. Is the AI trained to draw from the same sources I have drawn from (and linked to)? If so, then perhaps our answers would differ little. Is AI likely to draw the same parallels I see between the power structures that serve as guideposts for society and the programming that serves as guardrails for AI? No. It strikes me that elements of my answers, particularly those to question 1, would be easy for a search engine (never mind a paragraph-writing AI) to replicate. It might have some trouble with the personalization I tried to provide; in fact, it might give a better answer, less susceptible to my personal interest in what an author wrote or what they might have said. The answer to question 5 would probably lead to a variety of interpretations – or maybe the AI would have a way of answering these sorts of self-examination questions. It reminds me of the Voight-Kampff test from the movie Blade Runner, an empathy test designed to foil AI.

Blade Runner – Voight-Kampff Test


Harris, A. (2018, November 1). Human languages vs programming languages. Medium.

Hao, K. (2020, December 4). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review.

McCarthy, J., & Hayes, P. (1969). Some philosophical problems from the standpoint of artificial intelligence. John McCarthy’s Home Page.

Sutton, R. S. (2020). John McCarthy’s definition of intelligence. Journal of Artificial General Intelligence, 11(2), 66–67.

The Nobel Prize. (1978, October 16). The Prize in Economics 1978. The Nobel Prize.
