Artificial intelligence has played a crucial role in American history and in the history of the world. Some view it as man's vain pursuit of becoming god-like by creating life; others see it as the next logical step in computer technology. The conclusion, however, is not the most important part. The pursuit of mechanical sentient life has also led to a much deeper understanding of how our own biological minds work, producing new treatments for brain diseases and other brain-related disorders. Not only is life sustained longer because of this research, but modern life itself would not exist without the AI programs in use today. AI programs help run the stock market, the military has countless uses for them, and we even rely on them at home. AI has advanced greatly since it began, bringing neurology along with it, and modern America could not function without it.
"Computers are nothing to be afraid of." This idea was clung to like a religion by IBM's promotional people in the late 1970s, when the first AI programs were being produced and the public grew concerned over whether computers might take over the world. IBM raced to put those fears to rest with an ad campaign, and soon every IBM salesperson was parroting the phrase: "Computers are nothing but quick morons." This seemed to quiet the public's fears, but as long as AI research progressed, the uneasiness continued.
Computers can process information at speeds thousands of times faster than the human brain, but they can only do as they are instructed. So what happens when they are instructed to learn from their mistakes, or to react to their surroundings? What constitutes intelligence? The earliest attempts at AI aimed at making computers sentient, based on the theory that something must be alive to have intelligence. Needless to say, this did not work out, and it left many investors and researchers disappointed. Later, scientists found that the problem with early AI development was that developers tried to take too big a step: they believed that if they could just create the qualities of intelligence in a machine, the rest would stem from there. One of the major by-products of artificial intelligence research was the further study of our own brains. The theory is that if we can make a machine that accomplishes and displays the process behind a complex human task, then our own creation can show us how we ourselves work.
Computers operate by electrons moving through paths and turning "switches" on or off by means of a charge, positive being on and negative being off. Researchers believe the brain works in a similar way, but with chemicals and proteins instead of electricity. "The brain is an electrical and chemical mechanism, whose evolution is barely understood, whose organization is enormously complex, and which produces complex behavior in response to an even more complex environment." (Pamela McCorduck, Machines Who Think, p. 70) The human brain is without question the most complex thing on this planet, not only because of its methods of transferring and recalling information, but because of its incredible capacity for abstraction, prediction, learning, adaptation, and guessing. Aristotle suggested that thoughts could be classified into three categories: intelligence, logic, and algebra. This classification is flawed, however, because irrationality can also influence thought.
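The on/off "switches" described above are what we now call bits. As a minimal sketch (the eight-switch bank below is purely illustrative, not any real circuit), a row of switches read left to right encodes a number:

```python
# Each "switch" is one bit: 1 for on (positive charge), 0 for off.
# A bank of eight switches can represent any number from 0 to 255.
switches = [0, 1, 0, 0, 0, 0, 0, 1]  # hypothetical bank of eight switches

value = 0
for bit in switches:
    value = value * 2 + bit  # shift the accumulated bits left, add the new one
print(value)  # 0b01000001 is 65, the ASCII code for the letter 'A'
```

Everything a computer stores or processes, from numbers to text to programs, ultimately reduces to patterns of these switches.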
Artificial intelligence is based on the idea that you can instruct a computer to learn and then act upon the knowledge gained. In this sense, computers are intelligent today; the only things they are missing to be considered scientifically alive are self-awareness and a fear of death. What constitutes thought? Programming languages are built on decisions and options taken according to external variables: the computer executes the correct decision, or ends the process if something goes wrong. This closely parallels thought. In the summer of 1956, at Dartmouth College, some of the most prominent figures in the field of artificial intelligence met with the common goal of making significant progress in research and results. Among the attendees were John McCarthy (assistant professor of mathematics at Dartmouth), Marvin Minsky (Harvard junior fellow in mathematics and neurology), Allen Newell, and Herbert A. Simon.
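The decision-and-option structure described above can be sketched as a simple branch on an external variable. The thermostat scenario and the names here are invented for illustration, not taken from any particular program:

```python
def decide(temperature_f):
    # Branch on an external variable, choosing among fixed options,
    # the way the paragraph describes decision-based programming.
    if temperature_f > 80:
        return "turn on the fan"
    elif temperature_f < 50:
        return "turn on the heat"
    else:
        return "do nothing"

print(decide(95))  # turn on the fan
print(decide(40))  # turn on the heat
```

Each sensed condition selects one of the pre-written options, which is the "if this, then that" pattern the essay compares to human decision-making.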
McCarthy and Minsky had discussed AI before, but neither changed the other's mind on the points where they differed, so no significant progress was made between them. McCarthy led the conference. Unfortunately, the group did not make nearly as much progress as it had hoped; some attendees came for only two days while others stayed the full six weeks, making regular meetings impossible. "At the time I believed if only we could get everyone who was interested in the subject together to devote time to it and avoid distractions, we could make real progress." (John McCarthy) Newell and Simon were the only attendees at Dartmouth to present a working prototype of an intelligent machine, the Logic Theorist, which they soon followed with the General Problem Solver.
There are artificial intelligence programs in use today. Deep Blue, for example, is a powerful chess engine programmed not to be perfect but to play the most logical moves it can find within an allotted time. The earliest completed, publicly recognized AI program was a checkers player created by Arthur Samuel while he was working for IBM. It was only a side project, and it became something of an embarrassment to him and his company when he completed it, because checkers was regarded as a trivial time-waster. He built the engine on the basic principles of checkers, leaving out his own playing experience so that the computer would decide moves for itself, and he also gave it the ability to learn from its mistakes. By 1961 the checkers program played at the master level simply by looking ahead, evaluating positions, and incorporating the knowledge it had gained from previous mistakes.
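The "looking ahead and evaluating" idea can be sketched with the minimax rule that underlies game programs of this kind. Samuel's actual program was far more elaborate (it learned the weights of its evaluation function over time), so the toy game tree below is a hypothetical illustration, not his method:

```python
# A toy game tree: each top-level entry is one possible move, and the
# leaves are evaluations of the resulting positions (higher is better
# for the machine). The numbers are made up, not real checkers scores.
tree = [[3, 12], [2, 4], [14, 5]]

def minimax(node, maximizing):
    if isinstance(node, int):            # a leaf: just evaluate the position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# The machine picks the move whose worst-case reply still scores best,
# assuming the opponent always answers with the reply worst for us.
best_move = max(range(len(tree)), key=lambda i: minimax(tree[i], False))
print(best_move)  # move 2: its worst reply scores 5, better than 3 or 2
```

Faster hardware lets a program expand such a tree more levels deep before evaluating, which is exactly how later chess engines improved without any new logic.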
Chess was being worked on long before Samuel's checkers program, but no engine was finished until Alex Bernstein developed one capable of beating Class C chess players. Shortly after, two programmers from Northwestern University, David Slate and Larry Atkin, developed a chess engine capable of beating Class B players. This engine incorporated no new logic; faster processing speeds simply made deeper look-ahead possible. Unfortunately, these AI programs did very little to give insight into how the human brain works. Brilliant people can play chess poorly, while slower people can be chess masters; chess is not a measure of intelligence, just one aspect of thinking. "If all you have is a machine that bests humans by speed alone, what really do you have? What have you understood about human intelligence, what core of human intellect have you penetrated?" (Arthur Samuel, 1978)
When AI reaches its most advanced state, there will be a living, thinking, speaking machine with awareness of its own mortality, the ability to reproduce, and a fear of its own death. Today, computers mass-produce circuit boards, chassis, and other parts for other computers; reproduction, in a sense, is already happening, but the other steps have a long way to go. Simple forms of AI, however, exist now. Almost every computer game you have ever played has contained a form of artificial intelligence. The computer you have at home runs an operating system with a form of AI in it. Cars and most other forms of automated transportation have them, and AI has been present in every war fought with automated navigation systems. Countless American businesses are based on e-commerce, which also uses AI. Brain cancers can be treated more effectively because of the neurological studies brought on by AI research. America would crumble without it, and many of the comforts of life as we know it would not exist.
Rothfeder, Jeffrey. Minds Over Matter: A New Look at Artificial Intelligence. Computer Book Division/Simon & Schuster, Inc., New York.
McCorduck, Pamela. Machines Who Think. W. H. Freeman and Company, San Francisco.
Dreyfus, Hubert L. What Computers Can't Do: A Critique of Artificial Reason. Harper & Row.
Adler, Irving. Thinking Machines: A Layman's Introduction to Logic, Boolean Algebra, and Computers. The John Day Company, New York.