This weekend I attended the Ethics of Artificial Intelligence conference at NYU. There were a ton of high-profile and interesting people there from philosophy (David Chalmers, Peter Railton, Nick Bostrom, Thomas Nagel, Paul Boghossian, Frances Kamm, Wendell Wallach) and science (Yann LeCun, Stuart Russell, Stephen Wolfram, Max Tegmark, Francesca Rossi) as well as Eliezer Yudkowsky.
There were two reasonably long days of talks and panels. David Chalmers (famous for his philosophy of mind and consciousness) did not officially speak but acted as chair for the event. He outlined the philosophy of the conference, which was to discuss both short and long term issues in AI ethics without worrying about either detracting from the other. He was, as usual, highly awesome.
Here is a summary of the event with the most interesting points made by the speakers.
The first block of talks on Friday was an overview of general issues related to artificial intelligence. Nick Bostrom, author of Superintelligence and head of the Future of Humanity Institute, started with something of a barrage of all the general ideas and proposals he’s come up with. He floated the idea that perhaps we shouldn’t program AI systems to be maximally moral, for we don’t know what the true morality looks like, and what if it turns out that such a directive would lead to humans being penalized, or something else pathological or downright weird? He also described three principles for how we should treat AIs: substrate nondiscrimination (moral status does not depend on the kind of hardware/wetware you run on), ontogeny nondiscrimination (moral status does not depend on how you were created), and subjective time (moral value exists relative to subjectively experienced time rather than objective time, so if a mind runs at a faster clock speed, its life would be more important, all other things being equal).
He pointed out that AIs could attain moral status before anything like human-level AI exists – just as animals have moral status despite being much simpler than humans. He mentioned the possibility of a Malthusian catastrophe from limitless digital reproduction, as well as the possibility of election manipulation through agent duplication, and how we’ll need to prevent both.
He voiced support for meta-level decisionmaking – a ‘moral parliament’ where we imagine moral theories sending ‘delegates’ to compromise over contentious issues. Such a system could also accommodate other values and interests besides moral theories.
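As a rough illustration of how a moral parliament might work, here is my own toy sketch (not Bostrom’s formal proposal – the theories, credences, and scores are all invented, and real proposals involve bargaining between delegates rather than a simple plurality vote):

```python
# Toy 'moral parliament': each moral theory gets delegates in proportion
# to our credence in it; delegates vote for their theory's top-ranked
# option, and the option with the most delegate votes wins.
credences = {"utilitarianism": 0.5, "deontology": 0.3, "virtue_ethics": 0.2}

# How each theory scores two candidate policies (hypothetical numbers).
scores = {
    "utilitarianism": {"policy_A": 0.9, "policy_B": 0.4},
    "deontology":     {"policy_A": 0.2, "policy_B": 0.8},
    "virtue_ethics":  {"policy_A": 0.6, "policy_B": 0.5},
}

def parliament_vote(credences, scores, n_delegates=100):
    votes = {}
    for theory, credence in credences.items():
        favorite = max(scores[theory], key=scores[theory].get)
        votes[favorite] = votes.get(favorite, 0) + round(credence * n_delegates)
    return max(votes, key=votes.get)

print(parliament_vote(credences, scores))  # policy_A (70 delegates vs. 30)
```

A fuller version would let delegates trade votes across issues, which is where the ‘compromise’ in the parliament metaphor comes from.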
He answered the question of “what is humanity most likely to fail at?” with a qualified selection of ‘mind crime’ committed within advanced AIs. Humans already have difficulty empathizing with animals when they exist on farms or in the wild, and AIs would not necessarily have the basic biological features which incline us to feel empathy at all. Some robots attract empathetic attention from humans, but invisible computational processes are much harder for people to feel empathy towards.
Virginia Dignum of the Delft University of Technology was next; she spoke about mechanisms for autonomous systems to make decisions. She distinguished four modes of decisionmaking based on whether decisions are made deliberatively or imposed upon a system, and whether the governing decisions are formed internally or externally. Deliberative internal decisionmaking is algorithmic decisionmaking within the machine; imposed external decisionmaking means decisions predetermined by regulatory institutions. Deliberative external decisionmaking entails a ‘human in the loop’, and imposed internal decisionmaking is essentially randomness.
Yann LeCun concluded this section with a pretty fantastic overview of deep learning methods and the obstacles which stand in the way of progress in machine intelligence. He pointed out that reinforcement learning is a rare and narrow slice of the field today and that the greatest obstacles for machines include common sense judgement and abstraction. The biggest current problem for AI is unsupervised learning: having machines learn to classify things on their own without being given clearly labelled data from humans. He showcased some of the (very cool) capabilities of adversarial learning which are being used to tackle this.
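For readers unfamiliar with adversarial learning, here is a minimal one-dimensional sketch of the idea (my own toy example, not anything LeCun presented): a generator shifts random noise by a parameter theta to mimic data drawn from N(3, 1), while a logistic discriminator is simultaneously trained to tell real samples from generated ones.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_true, lr, batch = 3.0, 0.05, 64
theta, w, b = 0.0, 0.1, 0.0   # generator shift; discriminator weight/bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(5000):
    real = rng.normal(mu_true, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + theta          # generator samples
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)
    # Generator step: gradient ascent on log D(fake) (non-saturating loss).
    d_fake = sigmoid(w * fake + b)
    theta += lr * np.mean((1 - d_fake) * w)

print(theta)  # drifts toward mu_true = 3.0 as the generator learns to fool D
```

The adversarial dynamic is the point: neither network ever sees a label, yet the generator ends up modelling the data distribution.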
He expressed support for the orthogonality thesis, namely the idea that intelligence and morality are ‘orthogonal’ – just because an agent is very smart doesn’t mean that it’s necessarily moral. He believes we should build a few basic drives into AIs: do not hurt humans, interact with humans, crave positive feedback from trusted human trainers. He also described a couple of reasons why he is not concerned about uncontrolled advanced artificial intelligence. One was that he is confident that objective functions can be specified in such a way as to make machines indifferent to being switched off, and the other is that a narrow AI focused on eliminating an unfriendly general AI would ‘win’ due to its specialization.
In Q&A, Stuart Russell objected to LeCun’s confidence in machines being indifferent to being shut off, on the grounds that self-preservation implicitly falls out of whatever other goals a machine has. Paul Boghossian objected to the ‘behaviorist’ nature of the speakers’ views, saying that they were excluding consciousness from its proper role in these discussions. One audience member asked whether we should let AIs take charge of everything and supersede humanity – Bostrom pointed out that the space of possible futures is “an enormous Petri dish” which we don’t understand; an AI future could materialize as a planet-sized supercomputer with no moral status, and we will need to learn how to engineer friendly advanced AI systems no matter what the plan is.
The rest of the Friday talks were devoted to near-future issues with specific AI systems. Peter Asaro started with a general overview of his organization, the ‘Campaign to Stop Killer Robots’. He stated that targeting and killing should remain human-controlled actions. While he acknowledged that automated weaponry could result in fewer casualties on the battlefield, he believed that this was too narrow a view of the consequences. He said that it’s not straightforward to translate complicated battlefield morality questions into terms machines can understand, and he is worried about unintended initiation and escalation of conflicts through automated systems, arms races, and threats to humanitarian law. He also believes that people should only be killed ‘with dignity’ and that doing it with a robot robs people of this. Hence, he called for a clear and strong norm against automated weapons.
Kate Devlin of the University of London gave a brief overview of the ethics of artificial sexuality. Looking at the history of sexualized robots in fictional media, she noted that almost all of them are female. Today there is a “Campaign Against Sex Robots” based on the idea that sex robots would lead to the objectification of women. Devlin disagrees: she thinks it is too early to ban the technology and that we should explore it before thinking about banning it, especially since it does not really harm anyone. Instead she wants us to think about how to develop it correctly. There are many potential uses for these types of robots, ranging all the way to the therapeutic; many of the rudimentary ones sold today are bought by people who are incapable of forming ordinary relationships for various reasons. VR is being used in arousal tests to gauge the efficacy of treatments against pedophilia.
She noted that gender issues have arisen in technology already; the history of gendered technology includes pacemakers originally designed only for men and telephones too large for women’s pockets. We should get into AI now to make sure that it is not designed in problematic ways.
She mentioned privacy fears: the manufacturers of the female stimulator WeVibe have already been sued over concerns that they were not properly informing customers about their collection of data from the devices. She wondered if we will ever reach a stage where a robot might have some awareness of its role and refuse consent to its use, and whether transmission/duplication of data and code between machines could serve as a sort of digital sexual reproduction.
Vasant Dhar of NYU spoke next about data and privacy in the era of autonomous vehicles. He said that our legal and financial liability institutions are based on outdated notions of data and that they fail to address liability and crime. However, the tools we now have even in ordinary cars for recording data could be used to improve the insurance and justice systems. He proposed black boxes for cars that would contain all the data relevant to deciding fault in the event of an accident, and said that customers should have the choice to share their driving data with insurance companies to get lower premiums.
Dhar reiterated the importance of improving vehicle safety through autonomous driving; each percentage point reduction in vehicle accidents equates to 400 deaths and 40,000 injuries avoided every year.
Adam Kolber followed up with a discussion of whether “the code is the law”, based on the case study of The DAO, an automated venture capital fund which suffered a $50 million loss through exploitation of its code. The answer, apparently, is that the code should not be the law, even though many people seemed to accept that it was.
Steve Wolfram of WolframAlpha and Mathematica fame discussed the issues of computational languages and goal specification. He said that his life’s work has essentially been about finding ways for humans to specify their goals to machines, and that this can work for ethics as well as for math. He doesn’t think that any single moral theory is likely to work for guiding artificial intelligence, apparently because of Gödel’s theorem and the incompleteness of computational languages.
Francesca Rossi of IBM contended that for AIs and humans to interact productively we will have to embed them in our environments, so that rather than picking up a tool like a laptop or a phone, we are interacting with artificial systems all around us in our rooms and spaces. Humans will be recognized by their surroundings, and our needs and wants will be inferred or asked about. AI embedded in our surroundings can keep memories about humans to better serve their interests. Most of all, we will need to establish trust between humans and AIs.
Peter Railton, philosopher at the University of Michigan, tackled the subjects of orthogonality and value learning. He said that we can’t simply tell AIs to do what we want, because our wants and values involve critical assessment. He said that the orthogonality thesis might be right, but as we increasingly interact with systems and allow them to participate in our lives and decisionmaking, the question of what it takes for them to be intelligent might involve certain features relevant to morality.
He stated that AIs should be thought of as social creatures; on a simple model, self-regulation in a Hobbesian social contract leads to constraints and respect derived from self-preservation. A society of intelligent cooperators can resist aggression and malice, and being moral is more efficient for the community than being cunning. From these principles we have a recipe for constructing proto-moral agents.
He discussed the ‘moral point of view’ required by many strong ethical theories such as Kantian ethics and consequentialism: it requires agents to have a hierarchical, non-perspectival, modal/planning-oriented, and consistent view of the world which assigns intrinsic moral weight to things. He described how all these features are also part of the process of becoming generally intelligent in the first place, implying that general social intelligence provides the information required for moral decisionmaking. On the path towards functional moral agents, we will have to build agents which can represent the goals of others and have them learn how to act in beneficial ways. So if it is possible to construct AIs that we can trust, then we are on a good path towards building artificial moral agents.
In the Q&A, Eliezer Yudkowsky objected that in the long run the ‘instrumental strategy’ doesn’t get you what you want, because maximizing people’s desires as they are explicitly expressed can lead to bad outcomes; you need something like coherent extrapolated volition, which asks what people would really want. Russell objected that once an agent becomes sufficiently powerful, it has no need to cooperate anymore.
Regina Rini of the NYU Center for Bioethics stated that the approaches to ethics described so far relied too much on the Western post-Enlightenment view of ethics, which is a historical aberration, and omitted African, Chinese and other approaches. Railton responded that his scheme was grounded in basic empathy and not mediated by any higher-order moral theory; Wolfram and Rossi said that no one ethical approach will work and AI will have to represent diverse values.
Saturday was devoted to long-term discussion of the future of advanced artificial intelligence. Stuart Russell, professor at UC Berkeley and head of the new Center for Human Compatible Artificial Intelligence, started with a basic overview of the control problem. He described the points made in Steve Omohundro’s paper on convergent instrumental drives. He also had some pretty harsh words for the researchers in the AI community who have denied and dismissed the control problem without seriously engaging with the relevant literature.
He proposed three simple principles as the definition of ‘provably beneficial’ AI: maximizing human values is the system’s only goal; the robot is initially uncertain about what those values are; and the best source of information about them is human behaviour. He referred to inverse reinforcement learning as a technique for machines to learn human preferences, and said that uncertainty gives machines an incentive to learn, ask questions, and explore cautiously.
His answer to the off-switch problem is to make robots unsure of their objectives, so that they assume the human will switch them off if and only if there is a good reason to, and will therefore go along with the action. He said that the wireheading problem can be avoided if you construct the reward signal as information about the reward function rather than as the reward itself; that way, hijacking the reward signal renders it useless.
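Russell’s incentive argument can be illustrated with a tiny Monte Carlo sketch (my own toy version, not his formalism): the robot is uncertain about the human utility U of a proposed action; acting immediately is worth E[U], while deferring to a human who switches the robot off exactly when U < 0 is worth E[max(U, 0)], which is never worse.

```python
import numpy as np

rng = np.random.default_rng(1)
# Robot's belief over the human utility of its proposed action (made-up prior).
U = rng.normal(0.5, 2.0, 100_000)

act_now = U.mean()                   # expected utility of just acting
defer = np.maximum(U, 0.0).mean()    # human vetoes exactly the U < 0 cases

print(act_now < defer)  # True: uncertainty makes deference strictly better
```

With no uncertainty the two strategies tie, which matches Russell’s point that it is precisely the robot’s uncertainty about objectives that gives it a reason to let itself be switched off.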
He said that there is a strong economic incentive for value alignment, but humans are irrational, nasty, inconsistent, and weak-willed.
The next speaker was Eliezer Yudkowsky of the Machine Intelligence Research Institute. Chalmers pointed out his role there as well as his side venture in Harry Potter fanfiction.
Yudkowsky started his talk by pointing out how the Terminator pictures in every media article about the control problem are inappropriate. The real analogy to use is Mickey Mouse as the Sorcerer’s Apprentice in Fantasia.
He said that the first difficulty of AI alignment is that the utility functions we imagine are too simple, and the second difficulty is that maximizing the likelihood of achieving a given goal leads to pathological outcomes. He and MIRI are concerned with the nature of ‘maximizing’ objectives and how to define goals in a way that avoids the problems of perverse instantiation.
He said that fears of AI being developed by some terrorist or rogue group are silly, as “ISIS is not developing convolutional neural nets.” Instead, the most powerful AI is likely to be developed by big groups in government, academia and industry.
He claimed that four central propositions support the idea that AI is a very big problem: the orthogonality thesis, instrumental convergence, capability gain (the speed at which advanced AI can make itself better), and alignment difficulty. He said the first two are matters of logic and computer science that people generally come to accept once they reflect upon them, while the latter two are more controversial.
The next talk was from Max Tegmark and Meia Chita-Tegmark. Max is a world-renowned physicist who helps run the Future of Life Institute, and Meia is a psychologist. They explained how physics and psychology provide useful tools for understanding artificial intelligence; physics tells us about computation and the constraints of the universe, and psychology tells us about the nature of well-being, ways to debug the mind when reasoning about AI, and methods to design psychomorphic AIs. Meia was the only speaker at the conference to discuss unemployment in any detail; she pointed out that retirement has only mixed effects on well-being and that happiness comes from financial satisfaction and feelings of respect. She said that studying homemakers, part-time workers and early retirees can tell us more about how an automated economy would affect people’s well-being.
Max checked off a list of common myths regarding advanced AI. Meia said that we should look at the cognitive biases which have led to these misconceptions (such as availability bias leading people to worry about robots rather than invisible artificial intelligence) and figure out how to prevent similar errors from hindering our thinking in the future.
By the way, Max Tegmark is very cool, he has a sort of old-rocker-dude vibe, and he and Meia are super cute together.
Wendell Wallach of Yale spoke next. He is the man who quite literally wrote the book on AI ethics. He distinguished top-down approaches of formally specifying AI behaviour from bottom-up approaches of value learning. He said that neither will be sufficient on its own and that both have important roles to play. He is worried that AI engineers will make simplistic assumptions about AI ethics, such as the idea that every decision should be utilitarian, or the idea that ‘ethics’ and ‘morality’ are icky notions that can be ignored.
Steve Petersen, a philosopher at Niagara University, gave the next talk, based on the draft of a forthcoming paper of his. He aims to push back against the orthogonality thesis and moderate the risk assessment provided by Bostrom. His argument is that designing an AI to follow any complex goal will necessarily require it to learn the values of its “teleological ancestors” (the original human designers, or the previous iterations of the AI before it self-improved or self-modified) and arrive at a state of coherence between goals. As agents replicate, self-modify and merge in the digital world, there may be no fact of the matter about which agents are the same or different; instead there will be an ‘agential soup’ unified by a common teleological thread originating with the designers. Coherence reasoning leads to impartial reasoning with respect to the goals of other agents.
There were several responses to him in Q&A. Yudkowsky’s objection was that reaching coherence requires a meta-preference framework with particular premises about the universe and ontology; hence, for any goal, there are many preference frameworks which could fulfill it, many of them perverse. Russell said that coherence alone is not sufficient, because you need the systems to give special weight to humans. Max Tegmark said the problem was the vagueness of humanity’s final goals. Chalmers pointed out that the orthogonality thesis still allows for all kinds of correlations between intelligence and morality, as long as they are not necessary by design. Petersen said that he is arguing for ‘attractor basins’ in the possibility space of AI minds. Interestingly, he was motivated to start his research by the Dylan Matthews Vox article on effective altruism, in which Dylan argued that effective altruists shouldn’t be concerned with artificial intelligence. Petersen doesn’t think that AI is unimportant, and thinks that Bostrom and Yudkowsky’s work is valuable, but he wanted a more critical assessment of the level of risk when he learned that alternative altruistic projects were at stake.
Matthew Liao of the NYU Center for Bioethics gave an argument for moral status on the basis of capacities – that an entity is morally valuable to the extent that it has the physical/genetic basis for achieving features of moral relevance. I did not get a chance to ask him if this would imply that a ‘seed AI’ could be the most morally valuable entity in the world. He did argue against the ideas that level of intelligence or degree of moral agency determine moral status, as we don’t commonly think that smarter or more benevolent humans are more morally valuable than others.
Liao contended that moral theories are too specific and too high-level to be directly implemented in AIs. Instead, AI will need a universal moral grammar in which to specify morality. The holy grail is to develop machines that understand why things are right or wrong.
Eric Schwitzgebel and Mara Garza of UC Riverside argued for basic principles of AI rights. They introduced a very weak “no-relevant-difference” argument: the idea that there are possible AIs which have the same morally relevant features that humans do, and therefore there are possible AIs with value equal to humans. They questioned whether cheerfully suicidal or hardworking AI is acceptable, and stated a ‘self-respect principle’: that human-grade AI should be designed with an appropriate appreciation of its own value.
John Basl and Ronald Sandler of Northeastern University argued for AI research committees to approve or deny research in cases where AI subjects might be harmed. They said it would not be very different from cases like animal testing where we have similar review committees, and sketched out details of how the proposal would work.
Daniel Kahneman, one of the most famous behavioral economists in the world, made something of a surprise appearance in the final panel. He said that we should take intuitions about case studies like the trolley problem seriously, as that is how the public will think about these issues, for better or for worse. He said that no matter how it happens, the first time an AI car kills someone will be perceived with horror, and we should prepare for that. Intuitions depend on irrelevant factors and will especially depend on whether AIs are designed to resemble us or not.
Gary Marcus, professor of psychology at NYU, gave a much-needed presentation about the nature of intelligence. The previous talks in this discussion had mostly assumed that intelligence was one-dimensional and simple, and that there was some fixed idea of ‘human-level’ AI which we could eventually reach. Of course this is a ridiculous oversimplification; intelligence is multidimensional and is more about implementing a combination of various cognitive tools, some of which are already stronger in AIs than in humans. AIs can be better or worse than us in various domains, so we really have no idea where AIs will end up in this multidimensional space. AIs could in fact be better than us at moral reasoning. He also emphasized the gap between machine learning today and what human reasoning can do.
Susan Schneider of Marquette University, a philosopher who has written quite a bit about AI and superintelligence, went over various issues. She argued that mind uploads might constitute the death of the individual so long as certain questions about consciousness and personal identity remain unresolved, and also claimed that designing an intelligent and morally valuable robot to serve the interests of its creators would constitute slavery.
Jaan Tallinn, co-founder of Skype, also gave a quick talk. He has been a strong financial backer of MIRI and other efforts in this space, and simply conveyed his belief in the importance of the issue and his happiness at the success of the conference and the number of students interested in pursuing the topic.
There was some final banter about the nature of consciousness, which David Chalmers sat through very patiently. Yudkowsky expressed optimism that one day we will have an explanation of consciousness which clears up our confusion on the matter. Nagel said that we will need to think more about the dynamics of multi-agent systems and moral epistemology. After that the event ended.
The conference videos are available here. In my view, the best talks were given by LeCun, Railton, Russell, Yudkowsky, the Tegmarks, Petersen, and Marcus. The event overall was great, and being held in Manhattan made it even better. There was quite a bit of valuable informal networking and discussion between many of the speakers and attendees. There was no ‘sneering’ or disdain towards Yudkowsky or Bostrom as far as I could tell. It seemed like a generally open-minded yet well-educated crowd.
If you regret missing it, then you might like to head to the Envision Conference this December.