Detmar Straub, PhD, Temple University
This event is part of the Decision Sciences Seminar Series.
Location:
Gerri C. LeBow Hall, Room 722
3220 Market Street
Philadelphia, PA 19104
The capabilities of AI and thinking/learning machines are clearly overtaking human abilities (the so-called “technological singularity,” or more plainly, “singularity”), with forecasters such as Winograd (2006) predicting that machines will outthink us within the first half of the 21st century. Is it possible that humans will not be able to control the burgeoning intelligence of machines and that we will, frighteningly, be subordinated to them, especially as they become self-aware?
This talk begins by sketching out past and present forecasts of when the technological singularity will arrive; what social, economic, and political issues will emerge; what security issues will loom; and, finally, how futurists (including science fiction writers and filmmakers) have envisioned the role of human beings in the coming era of the thinking machine.
While the future of humanity might be hanging in the balance, one key academic question arises: What should researchers, in particular information systems researchers, study with respect to AI? This overall issue has been framed as IA versus AI, that is, (human) intelligence augmentation (IA) versus artificial (computer) intelligence (AI). Enduring research questions might include: (1) technical issues and requirements in achieving singularity, such as designing a tamper-proof “kill” switch for intelligent machines; (2) behavioral questions, such as the pace of change and the difficulty of duplicating human creativity; (3) socio-economic conundrums, such as what people will do in an era of omnipresent thinking/working machines and worldwide societal disruption; and (4) organizational matters, such as whether there will still be an IS/IT department and, if so, what it will do if machines are themselves designing and coding new systems.