no code implementations • 23 Nov 2022 • Juyang Weng
This paper establishes a theorem that a simple method called Pure-Guess Nearest Neighbor (PGNN) reaches any required error on the validation set and test set, including a zero-error requirement, through the same misconduct, as long as the test set is in the possession of the authors and both the amount of storage space and the training time are finite but unbounded.
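The PGNN construction can be illustrated with a toy sketch. This is not the paper's formal proof, only a hedged demonstration of the stated premise: if the labeled test set is in the authors' possession and training time is finite but unbounded, repeated guessing followed by post-selection eventually meets any required error, including zero. The function name `pgnn_train` and the toy data are hypothetical.

```python
# Illustrative sketch of the PGNN argument (not the paper's formal proof):
# guess labels for the memorized test inputs, and keep guessing until the
# error measured on the test set itself meets the requirement.
import random

def pgnn_train(test_inputs, test_labels, required_error=0.0, seed=0):
    """Post-select a guess table that meets the required error on the
    (possessed) test set, using finite-but-unbounded training time."""
    rng = random.Random(seed)
    classes = sorted(set(test_labels))
    while True:  # finite but unbounded number of guessing rounds
        guesses = {x: rng.choice(classes) for x in test_inputs}
        wrong = sum(guesses[x] != t for x, t in zip(test_inputs, test_labels))
        if wrong / len(test_inputs) <= required_error:
            return guesses  # a lookup table: exact-match nearest neighbor

# Toy run: three test samples with binary labels; zero error is "reached".
X, y = ["a", "b", "c"], [0, 1, 0]
table = pgnn_train(X, y)
assert all(table[x] == t for x, t in zip(X, y))
```

The point of the sketch is that no generalization occurs: the "trained" model is a lookup table over the test inputs, selected after peeking at the test labels.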
no code implementations • 23 Aug 2022 • Juyang Weng
A theorem is established that the NNWT method reaches a zero error on any validation set and any test set using the two misconducts, as long as the test set is in the possession of the author and both the amount of storage space and the training time are finite but unbounded, as with many deep learning methods.
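The NNWT (Nearest Neighbor With Threshold) argument admits a similarly small sketch. Again, this is only a hedged illustration of the abstract's premise, not the paper's construction: with unbounded (but finite) storage and the labeled test set in hand, memorizing every sample and answering by thresholded nearest-neighbor lookup yields zero error on that test set. The function names and the one-dimensional distance are hypothetical choices for the demo.

```python
# Illustrative sketch of the NNWT premise: "training" is memorization of
# the possessed validation/test samples; prediction is nearest-neighbor
# lookup gated by a distance threshold.

def nnwt_fit(samples, labels):
    # Unbounded-but-finite storage: keep every labeled sample verbatim.
    return list(zip(samples, labels))

def nnwt_predict(memory, x, threshold=0.0):
    # Return the label of the nearest stored sample if within threshold.
    best_label, best_dist = None, float("inf")
    for s, lab in memory:
        d = abs(s - x)  # toy 1-D distance; any metric would do
        if d < best_dist:
            best_label, best_dist = lab, d
    return best_label if best_dist <= threshold else None

# Memorize the test set itself, then "evaluate" on it: zero error.
X, y = [0.1, 0.5, 0.9], ["low", "mid", "high"]
mem = nnwt_fit(X, y)
assert all(nnwt_predict(mem, x) == t for x, t in zip(X, y))
```

With a threshold of zero, only exact matches are answered, which is precisely why zero error on a memorized test set says nothing about generalization.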
no code implementations • 4 Aug 2022 • Juyang Weng, Zejia Zheng, Xiang Wu
By dynamic, we mean that the automatic selection of features while disregarding distractors is not static, but is instead based on dynamic statistics (e.g., because of the instability of shadows in the context of a landmark).
no code implementations • 19 Jun 2021 • Juyang Weng
To avoid future pitfalls in AI competitions, this paper proposes a new AI metric, called developmental errors for all networks trained, under Three Learning Conditions: (1) an incremental learning architecture (due to a "big data" flaw), (2) a training experience, and (3) a limited amount of computational resources.
no code implementations • 12 Oct 2018 • Juyang Weng
This theoretical work shows how the Developmental Network (DN) can accomplish this.