• Keeping Neural Networks Simple

  • Nov 2 2024
  • Length: 7 mins
  • Podcast

  • Summary

  • This episode breaks down the paper 'Keeping Neural Networks Simple', which explores methods for improving the generalisation of neural networks, particularly when training data is limited. The authors argue for minimising the information content of the network weights, drawing on the Minimum Description Length (MDL) principle. They propose using noisy weights, which can be communicated more cheaply, and develop a framework for calculating the effect of that noise on the network's performance. The paper also introduces an adaptive mixture-of-Gaussians prior for coding the weights, giving greater flexibility in capturing the weight distribution. Preliminary results demonstrate the promise of this approach, particularly in comparison with standard weight decay. A rough code sketch of this coding cost appears after the links below.

    Audio (Spotify): https://open.spotify.com/episode/6R86n2gXJkO412hAlig8nS?si=Hry3Y2PiQUOs2MLgJTJoZg

    Paper: https://www.cs.toronto.edu/~hinton/absps/colt93.pdf
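
To make the MDL idea concrete, here is a minimal Python sketch (not from the paper) of the coding cost of one noisy weight: the weight's posterior is a Gaussian N(mu, sigma^2), the prior used to code it is a mixture of Gaussians, and the communication cost under the bits-back argument is the KL divergence between the two, estimated here by Monte Carlo. The prior parameters, function names, and sample counts are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def gaussian_log_pdf(x, mean, std):
        # Log density of N(mean, std^2) evaluated at x.
        return -0.5 * np.log(2 * np.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

    def mixture_log_pdf(x, mix_weights, means, stds):
        # Log density of a mixture-of-Gaussians prior at x.
        comps = np.stack([np.log(w) + gaussian_log_pdf(x, m, s)
                          for w, m, s in zip(mix_weights, means, stds)])
        return np.logaddexp.reduce(comps, axis=0)

    def weight_description_length(mu, sigma, prior, n_samples=1000):
        # Monte Carlo estimate of KL(q || p) in nats, where q = N(mu, sigma^2)
        # is the noisy-weight posterior and p is the mixture prior. This KL is
        # the bits-back cost of communicating the noisy weight. Larger sigma
        # lowers this cost but adds more noise to the network's predictions.
        eps = rng.standard_normal(n_samples)
        w = mu + sigma * eps                  # samples of the noisy weight
        log_q = gaussian_log_pdf(w, mu, sigma)
        log_p = mixture_log_pdf(w, *prior)
        return np.mean(log_q - log_p)

    # Hypothetical two-component prior: a narrow component near zero (cheap
    # "small" weights) and a broad component for occasional large weights.
    prior = ([0.7, 0.3], [0.0, 0.0], [0.05, 1.0])

    # A weight whose posterior sits near zero is cheap to communicate...
    print(weight_description_length(0.01, 0.05, prior))
    # ...while a confident, large weight costs many more nats.
    print(weight_description_length(1.5, 0.05, prior))

In the full method this per-weight cost is summed over all weights and added to the expected data-misfit term, and the mixture parameters themselves are adapted during training; standard weight decay corresponds to the special case of a single fixed zero-mean Gaussian prior.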
