no code implementations • 6 Oct 2020 • James D. Miller, Roman Yampolskiy, Olle Häggström, Stuart Armstrong
To reduce the danger of powerful super-intelligent AIs, we might make the first such AIs oracles that can only send and receive messages.
no code implementations • 31 Jan 2020 • James D. Miller, Roman Yampolskiy, Olle Häggström
An artificial general intelligence (AGI) might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist or engage in blackmail.
no code implementations • 23 Jun 2019 • James D. Miller, Roman Yampolskiy
This paper reveals a trap for artificial general intelligence (AGI) theorists who use economists' standard method of discounting.