no code implementations • 22 Sep 2021 • Arie Glazier, Andrea Loreggia, Nicholas Mattei, Taher Rahgooy, Francesca Rossi, K. Brent Venable
To this end, we propose a novel inverse reinforcement learning (IRL) method for learning implicit hard and soft constraints from demonstrations, enabling agents to quickly adapt to new settings.
no code implementations • 4 Dec 2020 • Jaelle Scheuerman, Jason Harman, Nicholas Mattei, K. Brent Venable
In multi-winner approval voting (AV), an agent submits a ballot consisting of approvals for as many candidates as they wish, and winners are chosen by tallying up the votes and choosing the top-$k$ candidates receiving the most approvals.
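The tallying rule described above is simple to state; a minimal sketch (the helper name and tie-breaking behavior are illustrative assumptions, not from the paper):

```python
from collections import Counter

def approval_winners(ballots, k):
    """Tally approval ballots and return the top-k candidates by approval count.
    Illustrative sketch of multi-winner approval voting; ties are broken
    arbitrarily here, whereas a real AV rule may specify a tie-breaking scheme."""
    tally = Counter()
    for ballot in ballots:
        # each voter approves each candidate at most once
        tally.update(set(ballot))
    return [cand for cand, _ in tally.most_common(k)]

ballots = [["a", "b"], ["a", "c"], ["b"], ["a", "b", "d"]]
# a and b each receive 3 approvals; c and d receive 1 each
print(sorted(approval_winners(ballots, 2)))  # ['a', 'b']
```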
no code implementations • 29 Nov 2019 • Jaelle Scheuerman, Jason L. Harman, Nicholas Mattei, K. Brent Venable
In real-world voting scenarios, people often lack complete information about other voters' preferences, and identifying a strategy that maximizes their expected utility can be computationally complex.
no code implementations • 28 May 2019 • Jaelle Scheuerman, Jason L. Harman, Nicholas Mattei, K. Brent Venable
In multi-winner approval voting (AV), an agent may vote for as many candidates as they wish.
no code implementations • 21 Sep 2018 • Andrea Loreggia, Nicholas Mattei, Francesca Rossi, K. Brent Venable
CPDist is a novel metric learning approach based on deep Siamese networks, which learn the Kendall tau distance between partial orders induced by compact preference representations.
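For context, the Kendall tau distance that the network learns to approximate counts discordant pairs between two orders; a minimal sketch for total orders (the function name is an illustrative assumption, and CPDist itself learns this distance rather than computing it exactly):

```python
from itertools import combinations

def kendall_tau_distance(rank_a, rank_b):
    """Count discordant pairs between two total orders over the same items.
    Illustrative sketch: CPDist trains a Siamese network to approximate this
    distance for orders induced by compact preference representations."""
    pos_a = {item: i for i, item in enumerate(rank_a)}
    pos_b = {item: i for i, item in enumerate(rank_b)}
    return sum(
        1
        for x, y in combinations(rank_a, 2)
        # a pair is discordant when the two orders disagree on its direction
        if (pos_a[x] - pos_a[y]) * (pos_b[x] - pos_b[y]) < 0
    )

# (a,c) and (b,c) are ordered differently in the two rankings
print(kendall_tau_distance(["a", "b", "c"], ["c", "a", "b"]))  # 2
```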