semiwiki.com publication, Sept 29, 2016
Once a year, at TSMC’s Open Innovation Platform (OIP) Forum, you can expect to see cutting-edge technical achievements from TSMC and its partners. This year was no exception, with Open-Silicon presenting its accomplishments in implementing an HBM reference design in 16nm. It’s well understood that HBM offers huge benefits in bandwidth and power consumption over alternatives such as DDR. With the advent of the JEDEC HBM Gen-2 specification, both density and data rates have gone up significantly. In its 2-, 4- or 8-die stack configurations, HBM Gen-2 supports densities of up to 8 Gb per die. In addition, data rates are going up to 1.6 Gb/s, or even 2 Gb/s, per pin.
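Those per-pin rates add up quickly across HBM’s wide interface. A back-of-the-envelope sketch, assuming the JEDEC-standard 1024-bit data interface per stack (a figure not stated in this article) and the 2 Gb/s-per-pin rate quoted above:

```python
# Aggregate HBM Gen-2 bandwidth estimate.
# Assumption: 1024 data bits per stack (8 channels x 128 bits),
# per the JEDEC HBM spec; 2 Gb/s per pin as quoted above.
PINS_PER_STACK = 1024
GBPS_PER_PIN = 2.0

bandwidth_gbps = PINS_PER_STACK * GBPS_PER_PIN  # aggregate, in Gb/s
bandwidth_gbs = bandwidth_gbps / 8              # convert bits to bytes

print(f"{bandwidth_gbs:.0f} GB/s per stack")    # 256 GB/s per stack
```

That 256 GB/s figure is exactly the headline number Open-Silicon quotes for its target design below.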
According to Open-Silicon, 16nm FinFET is the key to unlocking the full benefits of HBM: 16nm FinFET processes can potentially reduce power by 50% and boost performance by the same amount relative to 28nm. However, implementing these HBM designs requires a complete ecosystem, including the die, interposer, assembly and packaging. Open-Silicon paired SK Hynix’s HBM die stack with a TSMC 16nm/2.5D/CoWoS™ ASIC implementation. CoWoS™ (Chip-on-Wafer-on-Substrate) is TSMC’s 2.5D interposer technology. In fact, TSMC has been making a big deal out of all of its advanced packaging options.
TSMC has been innovating in its packaging options and is seeing the results in its business. It’s widely understood that TSMC scored a design win with the Apple A10 used in the iPhone 7. So clearly packaging technology is becoming a significant differentiator for foundries, and we can expect even more creative and diversified offerings in the already exploding packaging market.
But now back to Open-Silicon and their HBM implementation at 16nm. HBM is a good choice for products under threefold pressure on form factor, bandwidth and power. These applications include data centers, networking, radar, virtual reality, gaming and cloud computing. In the target design, Open-Silicon was able to replace 24 DDR3-1600 (x16) devices with a single HBM stack; power consumption dropped from 1.0 mW to 0.33 mW per gigabit, while the data rate climbed from 4 GB/s up to 256 GB/s.
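To see what those per-gigabit figures mean in absolute terms, here is a rough sketch, assuming “mW per gigabit” means mW per Gb/s of delivered bandwidth (my interpretation, not stated explicitly in the article) and evaluating both at the 256 GB/s target:

```python
# Rough I/O power comparison implied by the quoted figures.
# Assumption: "mW per gigabit" = mW per Gb/s of bandwidth.
BANDWIDTH_GBPS = 256 * 8       # 256 GB/s expressed in Gb/s
DDR3_MW_PER_GBPS = 1.0         # quoted DDR3 figure
HBM_MW_PER_GBPS = 0.33         # quoted HBM figure

ddr3_power_w = BANDWIDTH_GBPS * DDR3_MW_PER_GBPS / 1000
hbm_power_w = BANDWIDTH_GBPS * HBM_MW_PER_GBPS / 1000

print(f"DDR3: {ddr3_power_w:.2f} W, HBM: {hbm_power_w:.2f} W at 256 GB/s")
```

Under that assumption, HBM delivers the same bandwidth at roughly a third of the memory-interface power, about 0.68 W versus about 2 W.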
According to Open-Silicon’s Bupesh Dasila, Engineering Manager for Silicon Engineering, some of the major challenges in implementing a 2.5D SiP using HBM are: having a scalable PHY architecture, designing the 2.5D interposer, managing the custom die-to-die I/Os, and testing the completed system. There were 1,840 routes on the interposer, up to 5 mm in length, connecting the HBM to the SoC. To effectively shield the signal lines from crosstalk, 0.5 um-wide ground wires were placed 2.1 um to the side of each signal wire, leaving 2.1 um of width for each signal line. The signal wires were 0.85 um thick.
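Those dimensions give a feel for how much interposer real estate the HBM interface consumes. A hypothetical estimate, assuming a repeating signal/shield pattern (2.1 um signal, 2.1 um gap, 0.5 um shield, 2.1 um gap) and, unrealistically, all 1,840 routes on a single metal layer; neither assumption is stated in the article:

```python
# Hypothetical routing-pitch estimate from the quoted dimensions.
# Assumptions: one 0.5 um ground shield per signal, 2.1 um gaps on
# both sides of it, 2.1 um-wide signals, single-layer routing.
SIGNAL_W = 2.1   # um, signal line width
SPACE = 2.1      # um, gap between signal and shield
SHIELD_W = 0.5   # um, ground shield width

pitch_um = SIGNAL_W + SPACE + SHIELD_W + SPACE   # per-signal pitch
routes = 1840
bus_width_mm = routes * pitch_um / 1000

print(f"pitch = {pitch_um:.1f} um, "
      f"single-layer bus width = {bus_width_mm:.1f} mm")
```

Around 12.5 mm of edge width on one layer, which is why such buses are in practice split across multiple interposer routing layers.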
After his presentation, Bupesh told me that they did extensive modeling to verify the electrical characteristics of the signals on the interposer. Below is an example of some of their interposer SPICE simulations. In addition to the PHY design, Bupesh and his team designed the I/O on the 16nm die that communicates with the HBM memory module.
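Detailed SPICE runs like these refine what a first-order lumped-RC estimate only approximates. As a minimal sketch of that first-order check, with per-millimeter resistance and capacitance values that are purely illustrative assumptions, not Open-Silicon’s extracted numbers:

```python
# First-order Elmore-delay check on a long interposer route.
# All electrical values are illustrative assumptions; only the
# 5 mm route length comes from the article.
R_PER_MM = 10.0      # ohm/mm, assumed trace resistance
C_PER_MM = 0.2e-12   # F/mm, assumed trace capacitance
LENGTH_MM = 5.0      # longest quoted route length

# Elmore delay of a distributed RC line: 0.5 * R_total * C_total
r_total = R_PER_MM * LENGTH_MM
c_total = C_PER_MM * LENGTH_MM
elmore_s = 0.5 * r_total * c_total

print(f"Elmore delay ~ {elmore_s * 1e12:.1f} ps "
      f"for a {LENGTH_MM:.0f} mm route")
```

Under these assumed values the delay is on the order of tens of picoseconds, comfortably inside the 500 ps unit interval of a 2 Gb/s link, which is the kind of margin the full SPICE analysis then confirms with crosstalk and reflections included.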
Open-Silicon’s roadmap includes taking this approach to 7nm, and they are also going to validate HBM Gen-2 on a 28nm design. The results of the 16nm chip were impressive, with data rates of 2 Gb/s per pin using their custom I/Os and PHY. They were diligent about adding testability features as well: they added probe pads and included loopback to help isolate issues among system components if needed.
Open-Silicon emphasized that they are ready to deliver solutions offering the potentially game-changing benefits of HBM today. Admittedly, this is new technology that requires more up-front cost; nevertheless, the area savings at volume are significant, and the cooling and power improvements will change the cost-of-ownership equation for the finished products it is used in. More information on Open-Silicon’s HBM expertise is available on their website, here.