Speaker 2
Can you give an example of the kind of thing you're talking about, I guess?
Speaker 1
The most famous example is the distinction between something like Deep Blue, the chess-playing program that allegedly beat Kasparov, and AlphaGo, or AlphaZero now, or MuZero, which is the next phase.
Speaker 3
Deep Blue was programmed
Speaker 1
by experts, chess experts, experts in the systems and so on, over many years.
Speaker 3
It was, in fact, a machine that didn't itself beat Kasparov.
Speaker 1
What beat Kasparov was the human information and knowledge that was put into the machine; it turned the crank and generated possible moves. It
Speaker 3
wouldn't really be fair to say that
Speaker 1
machine intelligence defeated a human. A system like AlphaGo starts with no knowledge of Go strategy, only the rules of the game, and uses simulation, playing against itself and developing its own method of evaluating simulations. The only learning signal it gets is whether it wins a game or not. And through that process it not only rediscovered some known Go strategies but invented new ones, and was able to defeat a world champion. It works very fast. But it was able to do this not with years and years of training and expertise, but through this process of reinforcement learning. Now,
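[A toy sketch of the self-play idea described here, assuming nothing beyond the transcript: a learner that starts knowing only the rules and receives a single win/loss signal at the end of each game. The game used is Nim (take 1–3 stones; whoever takes the last stone wins) rather than Go, and every name and parameter below is illustrative, not part of any real AlphaGo implementation.]

```python
import random
from collections import defaultdict

random.seed(0)  # reproducible illustration

# Rules of Nim: remove 1-3 stones per turn; taking the last stone wins.
ACTIONS = (1, 2, 3)
Q = defaultdict(float)  # Q[(stones_left, action)]: learned value estimate

def legal(n):
    return [a for a in ACTIONS if a <= n]

def choose(n, eps):
    # Epsilon-greedy: explore sometimes, otherwise pick the best-valued move.
    if random.random() < eps:
        return random.choice(legal(n))
    return max(legal(n), key=lambda a: Q[(n, a)])

def train(episodes=20000, start=21, alpha=0.2, eps=0.3):
    for _ in range(episodes):
        n, history = start, []  # both "players" share the same value table
        while n > 0:
            a = choose(n, eps)
            history.append((n, a))
            n -= a
        # The only learning signal: +1 for the winner's moves, -1 for the
        # loser's, propagated back with alternating sign since turns alternate.
        reward = 1.0
        for (s, a) in reversed(history):
            Q[(s, a)] += alpha * (reward - Q[(s, a)])
            reward = -reward

train()
```

After training, the table alone dictates play: for example, with 3 stones left the learner takes all 3 and wins immediately, a strategy it was never told, only discovered from the win/loss signal.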
Speaker 2
This kind of learning machine clearly has huge potential, but it also has a quality which seems to be getting close to the human mind in some respects. And obviously, we hold individual human beings responsible for all kinds of things. Do you envisage a world in which we'll start to hold machines responsible for things? Well,
Speaker 1
that's a good question, because responsibility is itself a very thorny issue. There are various kinds of responsibility. One question is whether we will allow them a certain amount of autonomy in decision making. That's, in effect, recognizing that we can entrust certain decisions to them, and that's allowing them a certain responsibility. Does that mean we think of them as moral agents? No. Because full moral agency requires a lot more than just the ability to make decisions. We think of it as involving, for example, self-reflection, consciousness, a capacity for moral emotion.
Speaker 1
Nothing like that is present in these machines. But they will be given responsibilities. And the question is, are we entitled, or can we be justified, in giving them these responsibilities? What would they have to be like for that to be reasonable?
Speaker 1
We're speaking about the future, but they're already being given lots of responsibilities.
Speaker 3
And this means that we'd better think very seriously, very quickly about
Speaker 1
the question of, OK, well, in what sense might these machines themselves be capable of responding to morally relevant features of situations?