Completed Projects: Projects which we have completed so far.
Project Vacancies: If you are an IITK student interested in taking up a project under BCS @IITK, check this section for available openings.
BYOI Projects: "Bring Your Own Idea". Use this option if you would like to propose your own project and carry it out under BCS @IITK.
Neuroeconomics seeks to explain human decision making: the ability to process multiple alternatives and to follow a course of action. In general, a population thrives when its members exhibit some form of social behaviour, and small individual choices can have a big effect on the population as a whole. In this project we will look at how multi-agent systems interact and produce macro effects as a result of micro choices, and how those effects in turn feed back into the agents' decisions.
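The micro-to-macro idea can be seen in a minimal toy simulation (illustrative only; the agent count, update rule, and peer-sampling scheme below are all assumptions, not the project's actual model): each agent holds a binary opinion and occasionally copies the local majority of a few random peers, yet the population drifts toward consensus.

```python
import numpy as np

def simulate_opinions(n_agents=100, steps=200, seed=0):
    """Toy multi-agent dynamic: at each step one random agent adopts
    the majority opinion of three randomly sampled peers."""
    rng = np.random.default_rng(seed)
    opinions = rng.integers(0, 2, size=n_agents)
    for _ in range(steps):
        agent = rng.integers(n_agents)
        peers = rng.choice(n_agents, size=3, replace=False)
        # micro choice: copy the local majority of the sampled peers
        opinions[agent] = 1 if opinions[peers].sum() >= 2 else 0
    return opinions

final = simulate_opinions()
# macro effect: agreement tends to grow even though every update is purely local
```

Running the simulation for longer typically pushes the population toward one dominant opinion, which is exactly the kind of emergent macro effect the project studies.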
Human memory can store large amounts of information; nevertheless, recall is often a challenging task. In this project we first look at classical models of memory retrieval, namely the Hopfield network and mean-field theory. After the classical models, we develop more realistic sequential neural network models for recall tasks (e.g. NTM, MANN). Finally, we will try to optimize and develop our own models of memory.
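As a concrete starting point, here is a minimal Hopfield-network sketch in numpy (the pattern and network size are illustrative): patterns are stored via the Hebbian rule and recalled by repeatedly applying the sign update until the state stops changing.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W is the (scaled) sum of outer products of the
    stored +/-1 patterns, with the diagonal zeroed out."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=10):
    """Synchronous updates: s <- sign(W s) until the state converges."""
    s = probe.copy()
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# store one pattern and recall it from a corrupted probe
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
probe = pattern.copy()
probe[0] = -1  # flip one bit to simulate a noisy cue
recovered = recall(W, probe)  # converges back to the stored pattern
```

The corrupted probe falls into the stored pattern's basin of attraction, which is the network's model of content-addressable recall.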
The motive of the project is to learn how to analyse neuroimaging data. We start from the basics: brain anatomy, various neuroimaging datasets, and the techniques/libraries helpful in analysing them. We then apply everything learnt to the Steinmetz dataset to find the role of a particular brain region in the decision-making process [this depends on the student's interest; for now I have chosen the hippocampus, which is involved in memory and learning]. The aim, essentially, is to understand the role of memory and learning in decision making, and at which stage each is required.
Using the RAVDESS dataset, which contains around 1500 audio files from 24 different actors (12 male and 12 female) who recorded short clips in 8 different emotions, we will train an NLP-based model that can detect each of the 8 basic emotions as well as the gender of the speaker, i.e. male or female voice. After training, we can deploy this model to predict on live voice input.
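To illustrate one ingredient of the gender-detection half of the task, here is a numpy-only toy (not the project's pipeline — a real system would learn from features such as MFCCs extracted from the RAVDESS clips): estimate the dominant frequency via an FFT and threshold it against typical male/female pitch ranges. The 165 Hz threshold and the synthetic sine-wave "voices" are assumptions for demonstration.

```python
import numpy as np

def dominant_freq(wave, sr=8000):
    """Crude pitch estimate: the frequency bin with the largest FFT magnitude."""
    spectrum = np.abs(np.fft.rfft(wave))
    return np.fft.rfftfreq(len(wave), d=1.0 / sr)[np.argmax(spectrum)]

def classify_gender(wave, threshold=165.0, sr=8000):
    """Toy rule: typical fundamental frequency is ~85-180 Hz for male
    voices and ~165-255 Hz for female voices, so threshold at ~165 Hz."""
    return "male" if dominant_freq(wave, sr) < threshold else "female"

sr = 8000
t = np.arange(sr) / sr                 # one second of "audio"
low = np.sin(2 * np.pi * 120 * t)      # 120 Hz tone: male-range pitch
high = np.sin(2 * np.pi * 220 * t)     # 220 Hz tone: female-range pitch
print(classify_gender(low), classify_gender(high))  # male female
```

Emotion detection needs richer features and a learned classifier, but the same extract-features-then-classify structure applies.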
The motive of the project is to address two major concerns with deep learning models. First, deep learning models are black boxes that need to be explained; in this part we will learn how to extract insights from, and about, the model. In the second part, we address another major concern, robustness and vulnerabilities: how deep learning models can be fooled, and how we can overcome these vulnerabilities with defence techniques.
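The canonical example of fooling a model is the Fast Gradient Sign Method (FGSM): perturb the input by a small step in the direction of the loss gradient's sign. Below is a minimal numpy sketch on a hand-made logistic-regression "model" (the weights, input, and epsilon are illustrative values chosen so the attack visibly flips the prediction).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, w, b, y, eps):
    """FGSM: move x by eps in the sign of the loss gradient w.r.t. x,
    i.e. the direction that most increases the classification loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w  # gradient of binary cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.3, -0.2, 0.1])
y = 1.0
clean_score = sigmoid(w @ x + b)          # > 0.5: classified as class 1
x_adv = fgsm_attack(x, w, b, y, eps=0.5)
adv_score = sigmoid(w @ x_adv + b)        # < 0.5: prediction flipped
```

A small, targeted perturbation changes the prediction; defences such as adversarial training aim to close exactly this gap.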
Multiple studies have been carried out on how the brain perceives color, some on macaque monkeys and some on humans. We start by studying the literature on color perception and the experiments carried out. Then, we look for a dataset of MRI images on which a deep learning analysis can be done.
Using the UCF101 and Something-Something datasets, we implement high-quality action classification and video captioning, where each video can consist of a few hundred frames. We will look at previous approaches and implement a convolutional network for online video understanding. The network architecture takes long-term content into account while enabling fast per-video processing.
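One way such architectures cover long-term content cheaply is segment-based frame sampling: split the clip into temporal segments, sample one frame per segment, and pool the per-frame features before classifying. This numpy sketch is a stand-in only — the real network learns its frame features with a CNN, and the feature dimension, segment count, and random classifier weights here are assumptions.

```python
import numpy as np

def sample_frames(video, n_segments=4, seed=0):
    """Sample one frame per temporal segment so the whole duration is
    covered (long-term content) while keeping per-video cost low."""
    rng = np.random.default_rng(seed)
    bounds = np.linspace(0, len(video), n_segments + 1, dtype=int)
    idx = [rng.integers(lo, hi) for lo, hi in zip(bounds[:-1], bounds[1:])]
    return video[idx]

def classify(frames, W):
    """Stand-in for the CNN head: mean-pool frame features, then a linear layer."""
    pooled = frames.mean(axis=0)
    return int(np.argmax(W @ pooled))

video = np.random.default_rng(1).normal(size=(300, 16))  # 300 frames, 16-dim "features"
W = np.random.default_rng(2).normal(size=(101, 16))      # 101 classes, as in UCF101
frames = sample_frames(video)
label = classify(frames, W)
```

Sampling 4 frames instead of processing all 300 is what makes per-video inference fast while the segment structure still spans the full clip.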
In order to keep cognition models accessible, we need them to understand human language. This is where Natural Language Processing (NLP), the field concerned with interactions between computers and human (natural) languages, becomes important. One of the basic attributes of human communication is sentiment, and it is important that machines understand these sentiments.
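The simplest machine notion of sentiment is lexicon-based scoring: count positive and negative words and compare. This is a toy sketch with a hand-made five-word lexicon (real systems use learned models over far larger vocabularies), but it shows the core idea of mapping language to sentiment.

```python
# Illustrative mini-lexicons; any real lexicon would be much larger.
POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "sad", "terrible", "hate", "awful"}

def sentiment(text):
    """Score = (# positive words) - (# negative words)."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great course"))  # positive
print(sentiment("that movie was terrible"))   # negative
```

Counting words fails on negation and sarcasm, which is precisely why the project moves to learned models.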
The Omniglot Challenge focuses on developing more human-like algorithms such as one-shot learning. The project was divided into several sub-teams, which focused on: replicating the original Bayesian Program Learning (BPL) model in Python on the Omniglot dataset; building SOTA ML models based on BPL fundamentals, i.e. breaking the problem down into smaller problems with the aim of building more generalized one-shot learning models; and comparing the BPL model with traditional ML models on the text-generation task. This allows us to implement tasks like classification and generation on the Omniglot dataset through several methods and, at the same time, compare them.
This project seeks to understand what the connectome is and to develop the modern tools required to study it. Through this project, we have looked at the biological organization of neurons and the higher structures they form, and at the various neuroimaging techniques involved. We also looked into the auditory system in humans, the visual system in insects and, in more detail, the olfactory system in Drosophila.
Atari 2600 is a challenging RL testbed that presents agents with high-dimensional visual input and a diverse, interesting set of tasks that were designed to be difficult for human players. The goal is to connect an RL algorithm to a deep neural network that operates directly on RGB images and is trained using stochastic gradient updates.
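The learning rule underneath deep Q-networks is the classic Q-learning update toward the temporal-difference target; DQN simply replaces the table below with a deep network over RGB frames. This is a tabular sketch with made-up state/action counts, not the Atari agent itself.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One stochastic update of Q(s, a) toward the TD target
    r + gamma * max_a' Q(s', a')."""
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

Q = np.zeros((5, 2))  # toy table: 5 states, 2 actions
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q[0, 1])  # 0.1 = alpha * (r + gamma * 0 - 0)
```

In DQN the same target drives a gradient step on the network's parameters instead of an in-place table update, which is what "stochastic gradient updates" refers to above.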
To understand the working, function and to some extent, the structure of the brain specifically so, by using empathy for pain to target Bilateral Anterior Insula and Anterior Cingulate Cortex using external stimuli.To study Brain activity and function using EEG data, learn how to analyse this data and explore the reason behind certain behaviour as we know it.
Knowledge Graphs connect various types of information related to items into a unified space. Different paths connecting entity pairs often carry relations of different semantics, and PGPR (Policy Guided Path Reasoning) models these with the help of high-quality user and item representations generated using the TransE graph embedding scheme.
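The TransE scheme mentioned above models a relation as a translation in embedding space: for a true triple (head, relation, tail) we want h + r ≈ t, so a smaller distance ||h + r - t|| means a more plausible triple. The two-dimensional embeddings below are hand-made illustrative values, not trained representations.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: distance between h + r and t (lower is better)."""
    return np.linalg.norm(h + r - t)

# toy embeddings for a recommendation-style triple (user, purchased, item)
user = np.array([1.0, 0.0])
purchased = np.array([0.0, 1.0])
item = np.array([1.0, 1.0])
other_item = np.array([3.0, -2.0])

good = transe_score(user, purchased, item)        # 0.0: user + purchased == item
bad = transe_score(user, purchased, other_item)   # much larger: implausible triple
```

Training pushes true triples toward zero distance and corrupted ones away, giving the high-quality user and item representations that PGPR's path reasoning builds on.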
Computer vision, a problem well known to every ML enthusiast, is about giving a computer/machine the ability to see and classify objects much like human beings do. This project explored computer vision to a small extent. The aim was to develop a machine learning model able to classify some basic emotions (happy, sad, angry, disgust, fear, surprise and contempt) from human facial expressions. We chose the CK+ dataset for implementation. Overall, the project had three phases: preprocessing, modelling and evaluation.
In this project, we are replicating the work of Peterson et al. (2016), testing it with other state-of-the-art models, and testing the method under different changes to verify whether it really adapts to psychological representations.
Reinforcement Learning (RL) is a branch of machine learning that provides very innovative algorithms for control and prediction problems. The principle of RL methods is that there is no supervisor or explicit teacher to dictate the correct actions: the agent learns by interacting with the environment, receiving a reward when goals are achieved and a penalty when they are not.