Please use this identifier to cite or link to this item: http://dspace.dtu.ac.in:8080/jspui/handle/repository/19906
Full metadata record
DC Field | Value | Language
dc.contributor.author | SINGH, STUTI | -
dc.date.accessioned | 2023-06-16T04:40:02Z | -
dc.date.available | 2023-06-16T04:40:02Z | -
dc.date.issued | 2023-05 | -
dc.identifier.uri | http://dspace.dtu.ac.in:8080/jspui/handle/repository/19906 | -
dc.description.abstract | The fundamental tenets of the Von Neumann architecture have served as the foundation for the majority of modern digital computing systems. In traditional Von Neumann systems, data is stored in separate memory units, and instructions are fetched and executed one at a time. This sequential operation introduces significant delays between instruction fetch, data access, and execution, degrading system performance. Two bottlenecks in particular hinder performance and efficiency: the Von Neumann bottleneck, caused by the sequential processing of instructions and data, and the limited bandwidth between the CPU and memory. In-Memory Computing (IMC) has emerged as a promising solution to these bottlenecks. By performing computations directly within memory rather than in a separate processing unit, IMC exploits the parallelism and proximity of data and computation, reduces data transfer between memory and processor, and thereby lowers latency and improves overall system performance. This paradigm shift opens up new possibilities for real-time analytics, large-scale data processing, and complex computational tasks; it can accelerate machine learning algorithms, enhance artificial intelligence applications, and enable rapid decision-making in various domains. IMC also has the potential to reduce power consumption and energy requirements, contributing to greener and more sustainable computing systems. As research and development in IMC progress, its impact on the future of computing is expected to be profound, paving the way for new capabilities and advances in data-driven technologies.
The significance of SRAM for IMC cannot be overstated. It serves as the primary storage medium for data and computation, provides fast and direct access to data, consumes little power, and offers high-density storage, allowing large amounts of data to be held in a compact memory array. The use of 8T SRAM cells for IMC has gained significant attention because they offer advantages over traditional 6T SRAM cells, such as improved stability, reduced read-disturb failures, and increased noise immunity. They enable efficient and reliable storage of data while supporting computational operations within the memory array, minimizing data movement and reducing latency. In IMC architectures, 8T SRAM cells can therefore improve system performance, reduce power consumption, and increase efficiency, and their scalability and compatibility with existing fabrication technologies make them a viable option for integrating IMC into various computing systems.
As research and development in IMC continue to advance, 8T SRAM cells are expected to play a crucial role in unlocking the full potential of in-memory computing. This work presents an in-memory computing approach that uses SRAM cells to perform Boolean operations in addition to standard storage operations. The Boolean computations, including NOR, OR, NAND, AND, and XOR, are carried out using an 8T SRAM cell (a conceptual sketch of one common read-bitline evaluation mechanism is given after this record). In addition, a novel in-memory computation methodology based on a transmission-gate SRAM cell is proposed. The proposed IMC scheme offers a 22.6% improvement in average power over an IMC scheme based on the traditional 8T SRAM cell, and it also shows improved delay performance for the NAND, NOR, and XOR operations. All simulations are carried out in LTspice using 32 nm CMOS process technology. | en_US
dc.language.iso | en | en_US
dc.relation.ispartofseries | TD-6463; | -
dc.subject | IN-MEMORY COMPUTATION | en_US
dc.subject | RANDOM ACCESS MEMORIES | en_US
dc.subject | VON NEUMANN ARCHITECTURE | en_US
dc.title | IN-MEMORY COMPUTATION USING STATIC RANDOM ACCESS MEMORIES | en_US
dc.type | Thesis | en_US
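Note: the abstract lists the Boolean functions realized in the memory array (NOR, OR, NAND, AND, XOR) without detailing the evaluation mechanism. The Python sketch below illustrates, at the logic level only, one common way 8T-SRAM-based IMC schemes obtain such functions: two read word lines are asserted at once so that the shared, precharged read bitline (RBL) discharges whenever any selected read stack conducts. The function names, arguments, and the choice of which storage node drives the read buffer are illustrative assumptions, not the specific circuit proposed in the thesis.

```python
# Behavioural sketch of multi-row-activation Boolean reads in an 8T SRAM column.
# This is NOT the thesis circuit; it only models the read-bitline mechanism
# commonly used in 8T-SRAM in-memory computing at the logic level.

def rbl_level(stored_bits, buffer_driven_by_q=True):
    """Logic level left on a precharged RBL after simultaneously asserting
    the read word lines of the cells holding `stored_bits`.

    A cell's read stack conducts (discharging the RBL) when the node driving
    its read buffer is 1. With the buffer driven by Q the RBL evaluates
    NOR of the stored values; driven by QB it evaluates AND.
    """
    drive = stored_bits if buffer_driven_by_q else [1 - b for b in stored_bits]
    discharged = any(bit == 1 for bit in drive)
    return 0 if discharged else 1


def boolean_ops(a, b):
    """Derive the two-input Boolean functions from the two RBL read results."""
    nor_ab = rbl_level([a, b], buffer_driven_by_q=True)   # RBL = NOR(a, b)
    and_ab = rbl_level([a, b], buffer_driven_by_q=False)  # RBL = AND(a, b)
    or_ab = 1 - nor_ab                                     # invert NOR
    nand_ab = 1 - and_ab                                   # invert AND
    xor_ab = or_ab & nand_ab                               # XOR = OR AND NAND
    return {"NOR": nor_ab, "OR": or_ab, "AND": and_ab,
            "NAND": nand_ab, "XOR": xor_ab}


if __name__ == "__main__":
    # Print the truth table for all two-bit operand combinations.
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, boolean_ops(a, b))
```

On this model, OR and NAND follow by inverting the NOR and AND bitline results, and XOR is formed by combining OR and NAND, matching the set of operations listed in the abstract.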
Appears in Collections: M.E./M.Tech. Electronics & Communication Engineering

Files in This Item:
File | Description | Size | Format
Stuti_Singh MTech.pdf |  | 863.78 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.