
Training Conditional Random Fields with Natural Gradient Descent

We propose a novel parameter estimation procedure that works efficiently for conditional random fields (CRFs). The algorithm is an extension of maximum likelihood estimation (MLE), using loss functions defined by Bregman divergences, which measure the proximity between the model expectation and the empirical mean of the feature vectors. This leads to a flexible training framework from which multiple update strategies can be derived using natural gradient descent (NGD). We carefully choose the convex function inducing the Bregman divergence so that the number of distinct update types is reduced, while making the optimization procedure more effective by transforming the gradients of the log-likelihood loss function. The derived algorithms are very simple and can be easily implemented on top of an existing stochastic gradient descent (SGD) optimization procedure, yet they are very effective, as illustrated by the experimental results.
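To make the relationship between the SGD and NGD updates concrete, the following minimal Python sketch contrasts a plain SGD step with a natural-gradient-style step for a log-linear (CRF-like) model, where the log-likelihood gradient is the empirical feature mean minus the model's expected feature vector. The diagonal Fisher approximation, placeholder feature vectors, and hyperparameters here are illustrative assumptions, not the specific Bregman-divergence-based updates derived in the paper.

```python
import numpy as np

def sgd_step(theta, empirical_feats, expected_feats, lr=0.1):
    """Vanilla SGD step (gradient ascent on the log-likelihood).

    For a log-linear model, the gradient of the log-likelihood w.r.t. theta
    is the empirical feature vector minus the model's expected feature vector.
    """
    grad = empirical_feats - expected_feats
    return theta + lr * grad


def natural_gradient_step(theta, empirical_feats, expected_feats,
                          fisher_diag, lr=0.1, eps=1e-8):
    """Natural-gradient-style step: the same gradient, preconditioned by an
    (assumed diagonal) approximation of the Fisher information matrix."""
    grad = empirical_feats - expected_feats
    nat_grad = grad / (fisher_diag + eps)  # F^{-1} g with diagonal F
    return theta + lr * nat_grad


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 5
    theta = np.zeros(d)
    empirical = rng.random(d)       # placeholder empirical feature mean
    expected = rng.random(d)        # placeholder model expectation
    fisher = rng.random(d) + 0.5    # placeholder diagonal Fisher estimate

    print("SGD step:", sgd_step(theta, empirical, expected))
    print("NGD step:", natural_gradient_step(theta, empirical, expected, fisher))
```

In practice, the only change relative to an SGD loop is the extra preconditioning of the gradient, which is why such updates can be layered on top of an existing SGD implementation.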
