Search Results for author: Zachary Robertson

Found 5 papers, 2 papers with code

GPT4 is Slightly Helpful for Peer-Review Assistance: A Pilot Study

1 code implementation • 16 Jun 2023 • Zachary Robertson

In this pilot study, we investigate the use of GPT4 to assist in the peer-review process.
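
For context, a rough sketch of the kind of pipeline such a study involves, using the OpenAI chat API; the prompt, model choice, and helper function below are assumptions for illustration, not the paper's actual setup.

    # Rough sketch of prompting an LLM for review-style feedback; the prompt,
    # model name, and function are assumptions, not the paper's pipeline.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_review_feedback(paper_text: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system",
                 "content": "You are an expert peer reviewer. Give constructive, "
                            "specific feedback on the submission below."},
                {"role": "user", "content": paper_text},
            ],
        )
        return response.choices[0].message.content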

Layer-Wise Feedback Alignment is Conserved in Deep Neural Networks

no code implementations • 2 Jun 2023 • Zachary Robertson, Oluwasanmi Koyejo

In the quest to enhance the efficiency and biological plausibility of training deep neural networks, Feedback Alignment (FA), which replaces the backward-pass weights with fixed random matrices during training, has emerged as an alternative to traditional backpropagation.
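
For intuition, a minimal NumPy sketch of the FA update on a toy two-layer network; the architecture, data, and hyperparameters are illustrative assumptions, not details from the paper. The only change from backpropagation is that the error is propagated through a fixed random matrix B rather than the transpose of the forward weights.

    # Minimal sketch of Feedback Alignment (FA) on a toy two-layer network.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy regression data (assumed sizes, for illustration only)
    X = rng.normal(size=(256, 10))
    y = X @ rng.normal(size=(10, 1))

    # Forward weights (trained) and a fixed random feedback matrix B
    W1 = rng.normal(size=(10, 32)) * 0.1
    W2 = rng.normal(size=(32, 1)) * 0.1
    B = rng.normal(size=(1, 32)) * 0.1  # replaces W2.T in the backward pass

    lr = 1e-2
    for step in range(500):
        # Forward pass
        h = np.tanh(X @ W1)
        pred = h @ W2
        err = pred - y  # gradient of squared loss w.r.t. pred (up to a constant)

        # Backward pass: FA sends the error through the fixed random B,
        # not through W2.T as backpropagation would
        delta_h = (err @ B) * (1 - h**2)

        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ delta_h / len(X)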

No Bidding, No Regret: Pairwise-Feedback Mechanisms for Digital Goods and Data Auctions

no code implementations • 2 Jun 2023 • Zachary Robertson, Oluwasanmi Koyejo

The mechanism's novelty lies in eliciting information from the bidder through pairwise comparisons, which are arguably easier for humans to provide than numerical values.
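
As a toy illustration of pairwise elicitation, the sketch below recovers a ranking over goods from a bidder's comparison answers using simple win counts; the scoring rule and item names are assumptions for illustration, not the paper's mechanism.

    # Hypothetical sketch: rank goods from pairwise-comparison answers alone.
    from itertools import combinations

    def rank_from_comparisons(items, prefers):
        """prefers(a, b) -> True if the bidder prefers a over b."""
        wins = {item: 0 for item in items}
        for a, b in combinations(items, 2):
            if prefers(a, b):
                wins[a] += 1
            else:
                wins[b] += 1
        return sorted(items, key=lambda x: wins[x], reverse=True)

    # A bidder with private numerical values who only answers comparisons
    values = {"dataset_A": 3.0, "dataset_B": 1.5, "dataset_C": 2.2}
    ranking = rank_from_comparisons(list(values), lambda a, b: values[a] > values[b])
    print(ranking)  # ['dataset_A', 'dataset_C', 'dataset_B']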

Pairwise Ranking Losses of Click-Through Rates Prediction for Welfare Maximization in Ad Auctions

no code implementations • 1 Jun 2023 • Boxiang Lyu, Zhe Feng, Zachary Robertson, Sanmi Koyejo

We study the design of loss functions for click-through rate (CTR) prediction that optimize (social) welfare in advertising auctions.

Learning-To-Rank
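
For reference, a generic pairwise (RankNet-style) logistic ranking loss of the kind this line of work builds on; the paper's welfare-oriented losses may differ, so treat this as a sketch rather than the proposed method.

    # Generic pairwise logistic ranking loss for CTR models (a sketch).
    import numpy as np

    def pairwise_logistic_loss(scores_pos, scores_neg):
        """Loss encouraging clicked ads to score above unclicked ads.

        scores_pos: model scores for clicked impressions
        scores_neg: model scores for unclicked impressions, paired with the above
        """
        margin = scores_pos - scores_neg
        return np.mean(np.log1p(np.exp(-margin)))

    # Example usage with dummy scores
    pos = np.array([2.1, 0.3, 1.0])
    neg = np.array([1.5, 0.8, -0.2])
    print(pairwise_logistic_loss(pos, neg))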

Double Descent Demystified: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle

1 code implementation • 24 Mar 2023 • Rylan Schaeffer, Mikail Khona, Zachary Robertson, Akhilan Boopathy, Kateryna Pistunova, Jason W. Rocks, Ila Rani Fiete, Oluwasanmi Koyejo

Double descent is a surprising phenomenon in machine learning in which, as the number of model parameters grows relative to the number of data points, test error drops again as models grow into the highly overparameterized (data-undersampled) regime.

Learning Theory • regression
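
A minimal sketch of double descent in ordinary linear regression, in the spirit of the paper's ablations: with a minimum-norm least-squares fit, test error spikes near the interpolation threshold (number of features ≈ number of training points) and falls again beyond it. The data sizes and noise level below are assumptions.

    # Double descent in minimum-norm linear regression (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    n_train, n_test, d_max = 40, 200, 120
    X_all = rng.normal(size=(n_train + n_test, d_max))
    w_true = rng.normal(size=d_max) / np.sqrt(d_max)
    y_all = X_all @ w_true + 0.1 * rng.normal(size=n_train + n_test)
    X_tr, y_tr = X_all[:n_train], y_all[:n_train]
    X_te, y_te = X_all[n_train:], y_all[n_train:]

    for d in [5, 20, 35, 40, 45, 60, 120]:  # number of features used
        # pinv gives the minimum-norm least-squares solution in both regimes
        w_hat = np.linalg.pinv(X_tr[:, :d]) @ y_tr
        test_mse = np.mean((X_te[:, :d] @ w_hat - y_te) ** 2)
        print(f"d={d:3d}  test MSE={test_mse:.3f}")  # peaks near d = n_train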
