Mentor: Shashi Kant
Project Members: Som Tambe, Mohit Kulkarni, Nikita Chauhan, Anmol Pabla, Vaibhav Thakkar
Despite remarkable advances in artificial intelligence and machine learning, machine systems have lagged behind human learning in two respects. First, people can learn a new concept from just one or a handful of examples, whereas standard machine learning algorithms require tens or hundreds of examples to perform similarly. Second, people learn richer representations than machines do, even for simple concepts, and use them for a wider range of functions.
Past efforts to address these problems include Bayesian Program Learning (BPL), which models how humans perform one-shot classification. The BPL paper was remarkable, but various parts of its model appear to have been hand-engineered.
The Omniglot Challenge focuses on developing more human-like algorithms such as one-shot learning. The project was divided into several sub-teams, which focused on: (1) replicating the original Bayesian Program Learning model in Python on the Omniglot dataset; (2) building state-of-the-art ML models based on BPL fundamentals, i.e. breaking the problem into smaller sub-problems, with the aim of building more generalized one-shot learning models; and (3) comparing the BPL model with traditional ML models on the text generation task. This would allow us to implement tasks like classification and generation on the Omniglot dataset through several methods while being able to compare them.
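To make the one-shot classification task concrete, here is a minimal sketch of a single N-way one-shot episode, the evaluation setup used on Omniglot. This is an illustrative nearest-neighbor baseline on synthetic data, not the project's BPL or neural code; the function name and the use of raw pixel distance are assumptions for the sketch.

```python
import numpy as np

def one_shot_classify(support_images, support_labels, query_image):
    """Assign the query the label of the most similar support example.

    In a one-shot episode each class is represented by a single example
    (one "shot"). Similarity here is negative Euclidean distance in raw
    pixel space -- a deliberately simple baseline; BPL and metric-learning
    models replace this with far richer notions of similarity.
    """
    distances = [np.linalg.norm(query_image - s) for s in support_images]
    return support_labels[int(np.argmin(distances))]

# Toy 5-way one-shot episode with synthetic 28x28 "characters".
rng = np.random.default_rng(0)
support = [rng.random((28, 28)) for _ in range(5)]
labels = ["a", "b", "c", "d", "e"]
# The query is a noisy copy of class "c"'s support example.
query = support[2] + 0.05 * rng.standard_normal((28, 28))
print(one_shot_classify(support, labels, query))  # -> c
```

On real Omniglot episodes this pixel-distance baseline performs far below human level, which is precisely the gap that BPL-style and modern one-shot models aim to close.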