Market Focus : High-Performance Computing

Green Mountain Semiconductor has been conducting fundamental research on compute-in-memory architectures since 2016 and has filed seven patents to date in this area. GMS focuses on ultra-low-power in-memory neural networks for autonomous AI inferencing.

High-Performance Computing

Green Mountain Semiconductor brings deep expertise in memory and high-speed I/O design.

This expertise allows us to maximize in-memory computation, reducing power and increasing performance while leveraging the high parallelism inherent in memory architectures.

Our team of experts has the know-how to develop a full-fledged PHY on advanced technology nodes for the specific purpose of enabling high-speed data exchange between memory and the CPU.

Examples of Completed Projects

  • Commodity DRAM design (LPDDR, DDR up to 2 GB, 6.4 GB/s)
  • SRAM macro design up to 128 Mbit
  • Emerging memories (STT MRAM, Phase Change Memory)
  • Memory PHY design in various technology nodes down to 7nm
  • Development of product prototypes (256M LPDDR2-NV and 32M SPI Flash-replacement) with error correction code (ECC)
  • Specialty memory R&D (ultra low temperature DRAM circuits)


  • In-memory computing based on SRAM, DRAM and Phase Change Memory
  • Self-contained in-memory neuromorphic compute solution
    • Multiply-Accumulator
    • Programmable Activation Function
    • Multi-layer control sequencer
    • Sparse network support
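As an illustration of the dataflow such a self-contained compute unit performs, the following sketch models a multiply-accumulate pass with a programmable activation function, a multi-layer sequencer, and zero-weight skipping for sparse networks. All names and structure here are hypothetical stand-ins for exposition; the actual design operates on memory arrays in silicon, not Python lists.

```python
# Illustrative model of the in-memory compute dataflow described above.
# Names and structure are hypothetical, not GMS's implementation.

def relu(x):
    return max(0.0, x)

def bounded(x):
    # Simple bounded function standing in for a programmable activation.
    return x / (1.0 + abs(x))

def mac_layer(inputs, weights, activation):
    """One layer: multiply-accumulate each weight row against the inputs,
    skipping zero weights (sparse-network support), then apply the
    selected activation function."""
    outputs = []
    for row in weights:
        acc = 0.0
        for x, w in zip(inputs, row):
            if w != 0.0:  # sparse support: zero weights cost nothing
                acc += x * w
        outputs.append(activation(acc))
    return outputs

def run_network(inputs, layers):
    """Multi-layer control sequencer: feed each layer's outputs into the
    next, with a per-layer programmable activation."""
    signal = inputs
    for weights, activation in layers:
        signal = mac_layer(signal, weights, activation)
    return signal
```

For example, `run_network([2.0, 3.0], [([[1.0, 0.0], [0.5, -1.0]], relu), ([[1.0, 1.0]], bounded)])` sequences two layers, with the zero weight in the first layer contributing no work.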

Patented Technology

Green Mountain Semiconductor has several patents granted and pending in the fields of commodity memory architecture, error-correction coding, and highly parallel in-memory data processing.

White Papers

Our firm has authored a number of comprehensive white papers, and we invite you to explore our library. For full access and the ability to download any of our papers, please submit a request for access.

Co-design of a novel CMOS highly parallel, low-power, multi-chip neural network accelerator

Why do security cameras, sensors, and Siri use cloud servers instead of on-board computation? The lack of very low power, high-performance chips greatly limits the ability to field untethered edge devices. We present the NV-1, a new low-power ASIC AI processor that greatly accelerates parallel processing (10X) with a dramatic reduction in energy consumption (>100X), via many parallel combined processor-memory units, i.e., a drastically non-von-Neumann architecture, allowing very large numbers of independent processing streams without the bottlenecks of a typical monolithic memory. The initial prototype fabrication arose from a successful co-development effort between algorithm- and software-driven architectural design and VLSI design realities. An innovative communication protocol minimizes power usage, and data transport costs among nodes were vastly reduced by eliminating the address bus through local target address matching. Throughout the development process, the software/architecture team was able to innovate alongside the circuit design team’s implementation effort. A digital twin of the proposed hardware was developed early on to ensure that the technical implementation met the architectural specifications, and the predicted performance metrics have now been thoroughly verified in real hardware test data. The resulting device is currently being used in a fielded edge sensor application. Additional proofs of principle are in progress, demonstrating this new extremely low-power, high-performance ASIC device in real-world conditions.
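The idea of eliminating the address bus through local target address matching can be pictured with the toy sketch below. This is a hypothetical illustration, not the NV-1's actual protocol: each node holds its own ID and locally decides whether to accept a broadcast packet, so no centrally driven address bus is needed.

```python
# Hypothetical sketch of local target address matching: packets carry a
# target ID, and each node locally decides whether to accept them,
# removing the need for a shared address bus.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.inbox = []

    def observe(self, target_id, payload):
        # Local match: accept only traffic addressed to this node.
        if target_id == self.node_id:
            self.inbox.append(payload)

def broadcast(nodes, target_id, payload):
    """Offer a packet to every node; each performs its own match."""
    for node in nodes:
        node.observe(target_id, payload)
```

In this toy model, `broadcast(nodes, 2, "weights")` reaches every node, but only the node whose ID is 2 stores the payload; all others discard it locally.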

May 15, 2024