Projects:2018s1-121 In-Memory Semantic Processing Using Hyperdimensional Computing
Supervisor and Co-advisor
Branden Phillips and Peng Wang
In hyperdimensional computing, long binary vectors (hypervectors) are used to encode knowledge. Relationships between facts can be captured as correlations between their binary encodings. Specially designed memory hardware can then perform operations, such as searches for relevant information, in parallel, within the memory itself. This is one way to overcome the well-known von Neumann bottleneck of conventional computers, in which memory can only be searched sequentially. Hyperdimensional computing is a relatively new idea. The first aim of this project is to develop a case-study example, simulated in software, that demonstrates its use in a practical application. The second aim is to investigate hardware designs for memory circuits that perform hypervector operations in memory. These circuits may use emerging technologies such as resistive RAMs (RRAMs). Hardware to automate the spreading activation function is of particular interest.
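The basic hypervector operations described above can be sketched in a few lines of Python. This is a minimal illustration, not the project's implementation: it assumes binary hypervectors, XOR for binding and bitwise majority for bundling, which are common conventions in the hyperdimensional computing literature.

```python
import random

D = 10_000  # hypervector dimensionality; at high dimension, random vectors are quasi-orthogonal

def rand_hv():
    """A random binary hypervector."""
    return [random.randint(0, 1) for _ in range(D)]

def bind(a, b):
    """Bind two hypervectors with elementwise XOR (e.g. to associate a role with a filler)."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(vectors):
    """Superpose several hypervectors by bitwise majority vote."""
    half = len(vectors) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*vectors)]

def hamming(a, b):
    """Normalised Hamming distance: ~0.5 for unrelated vectors, near 0 for related ones."""
    return sum(x != y for x, y in zip(a, b)) / D

random.seed(0)
name, value = rand_hv(), rand_hv()
record = bind(name, value)
recovered = bind(record, name)      # XOR is its own inverse, so unbinding is exact
print(hamming(recovered, value))    # 0.0
print(round(hamming(name, value), 2))   # ≈ 0.5: unrelated vectors sit near half distance

a, b, c = rand_hv(), rand_hv(), rand_hv()
memory = bundle([a, b, c])
print(round(hamming(memory, a), 2))     # ≈ 0.25: each bundled item stays recognisably close
```

The key property on display is robustness: every stored item remains much closer to the bundle than an unrelated random vector, which is what makes associative, search-style retrieval possible.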
Word Sense Disambiguation
Word sense disambiguation is a task that many research papers have attempted to solve, with differing approaches and varying degrees of success. It requires an algorithm that interprets a word according to the context in which it appears. Hypervectors, used alongside an activation memory, may offer a novel solution to this natural language problem. By encoding each word together with the words related to it, it becomes straightforward to search a compiled dictionary of definitions for related or similar words. From this, the correct definition of an obscure or ambiguous word can be determined from the nearby words that relate to a particular definition or variant of the word.
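As a toy sketch of the idea, each sense of an ambiguous word can be encoded as a bundle of the hypervectors of its definition words, and a context can be disambiguated by bundling the surrounding words and picking the nearest sense. The lexicon, the word "bank" and its two senses below are hypothetical examples, not data from the project.

```python
import random

D = 10_000
random.seed(1)

def rand_hv():
    return [random.randint(0, 1) for _ in range(D)]

def bundle(vectors):
    half = len(vectors) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*vectors)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Hypothetical toy lexicon: each word gets an atomic random hypervector.
words = ["river", "water", "fish", "money", "loan", "deposit"]
hv = {w: rand_hv() for w in words}

# Each sense of "bank" is the bundle of the words in its (toy) definition.
senses = {
    "bank/riverside": bundle([hv["river"], hv["water"], hv["fish"]]),
    "bank/finance":   bundle([hv["money"], hv["loan"], hv["deposit"]]),
}

# Disambiguate: bundle the context words and pick the closest stored sense.
context = bundle([hv["money"], hv["loan"], hv["fish"]])
best = min(senses, key=lambda s: hamming(context, senses[s]))
print(best)  # bank/finance — the context shares two of its three words with that sense
```

In an associative memory, the final `min` over all stored senses is exactly the search that in-memory hardware could perform in parallel.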
Resistive RAMs (RRAM)
RRAM is an emerging technology whose cells exhibit a non-linear resistance characteristic. An RRAM cell can generally be modelled as having two resistive states, a high-resistance state and a low-resistance state, switched between by applying voltages in distinct intervals. Using this technology, the fundamental issues with hypervectors can be avoided or reduced: the large amount of storage required can be provided by stacking memory cells in three dimensions, while simultaneous in-memory calculations can reduce the computation time inherent in comparing large numbers of bits.
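A behavioural sketch of how the two resistive states support in-memory search: if each cell stores one hypervector bit as a low-resistance (1) or high-resistance (0) state, then driving a query onto the array and summing the current on each row's line yields a match score for every stored pattern in one parallel step. This is an illustrative model only, not a circuit simulation; the conductance and voltage values are assumed.

```python
# Assumed device parameters (illustrative, in siemens and volts).
G_HRS, G_LRS = 1e-6, 1e-4   # conductance of high- and low-resistance states
V_READ = 0.2                # read voltage applied where the query bit is 1

def match_currents(array_bits, query_bits):
    """Summed current per row: roughly proportional to the number of
    positions where both the stored bit and the query bit are 1."""
    currents = []
    for row in array_bits:
        i = sum(V_READ * (G_LRS if cell else G_HRS)
                for cell, q in zip(row, query_bits) if q)
        currents.append(i)
    return currents

stored = [
    [1, 0, 1, 1, 0, 1],   # pattern A
    [0, 1, 0, 0, 1, 0],   # pattern B
    [1, 0, 1, 0, 0, 1],   # pattern C
]
query = [1, 0, 1, 1, 0, 1]  # identical to pattern A
currents = match_currents(stored, query)
best_row = max(range(len(currents)), key=lambda r: currents[r])
print(best_row)  # 0 — pattern A draws the most current
```

The point of the sketch is that the per-row current sums happen simultaneously in a real crossbar, so the search cost does not grow with the number of stored hypervectors.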
Since hyperdimensional computing occupies a large amount of memory space and data-transfer bandwidth, we aimed to use RRAM as a dedicated device for most of the physical implementation of hyperdimensional computing. RRAM has several merits, such as a small physical footprint, fast switching speed, non-volatility and low power cost. It therefore allows a compact device whose other characteristics are strengthened compared with current technology.
In this project, we set out to implement a basic search-enabled RRAM grid by simulating it in EDA software.