Issue link: https://iconnect007.uberflip.com/i/1295812
30 DESIGN007 MAGAZINE | OCTOBER 2020

Traditional co-design requires trade-offs. You need to be aware of what those are, and you need to understand the trade-off at the chip level, as well as how it will impact the implementation at the package level. I've talked about formats, translations, and domains, which are important, too. How do you propagate information? If you have mapping at the chip level, what is that going to correspond to: the package or the BGA? Being able to propagate the information properly is part of the challenge, and managing connectivity is closely related. We also talked about the database format. We're looking at the board, package, and chip; all of that has to be done in a seamless fashion.

We also identified other challenges, such as multi-physics, thermal awareness, and mechanical constraints. Sometimes, you combine electrical with optical. The idea is that these are different domains where the scales and resolutions of the underlying physics are very different. What's big in the optical domain may end up being very small in the microwave or electrical domain, and the design rules are different as well. One thing you will observe is that the tools are old and slow, meaning it takes far longer to develop new algorithms than it takes for new technologies to come into play.

Some of the proposed solutions have been somewhat successful, such as behavioral and macro modeling. I talked a little bit about that earlier when I mentioned model order reduction, which has been quite successful in reducing complexity. Statistical modeling is also helpful: because the performance of these systems is highly sensitive to small variations, the best way to handle those variations is to perform statistical analysis. We talked about AI, and we're also looking at quantum computing; if quantum computing ever becomes a reality, it could facilitate co-design.
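The statistical analysis mentioned above can be sketched with a small Monte Carlo experiment. This is a toy illustration, not any vendor's tool: the delay model, the trace length, and the dielectric-constant numbers are all hypothetical, chosen only to show how sampling a small manufacturing variation turns into a distribution of channel performance.

```python
import random
import statistics

def channel_delay(trace_length_mm, er_nominal, er_sigma):
    """Toy propagation-delay model for one PCB trace (hypothetical numbers).

    Delay scales with the square root of the effective dielectric constant,
    so a small manufacturing variation in er shifts the arrival time.
    """
    er = random.gauss(er_nominal, er_sigma)  # sample one manufacturing outcome
    c_mm_per_ps = 0.2998                     # speed of light, mm per picosecond
    return trace_length_mm * (er ** 0.5) / c_mm_per_ps  # delay in picoseconds

random.seed(42)
# 10,000 virtual boards: 50 mm trace, er = 4.3 +/- 0.1 (illustrative values)
samples = [channel_delay(50.0, 4.3, 0.1) for _ in range(10_000)]
print(f"mean delay: {statistics.mean(samples):.1f} ps")
print(f"std dev   : {statistics.stdev(samples):.2f} ps")
```

The point of the statistical view is exactly this last line: instead of one nominal delay, the designer gets a spread, and can ask how much timing margin survives the tails of the distribution.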
One slide shows how things have evolved over the past 20–30 years. In the old days, a wire on a PCB was considered a simple capacitor. But as clock rates increased, that same wire became a channel that had to be treated differently. Over the years, the tools had to implement transmission-line simulators in order to describe how a signal travels from point A to point B. Today, a channel is even more complicated because, in addition to transmitting the information from point A to point B, you also have to provide equalization. When you clock at a very high speed, the nature of the channel is much more complex than a simple wire.

A good example is the SerDes. Imagine two chips mounted on the board, each with 256 pins. You're not going to be able to connect 256 pins from one chip to the 256 of the second chip. What do you do? You need to use a SerDes: you select those wires and serialize them. You take them 16 at a time, and you use only one wire to transmit the information from those 16 pins. This means that you need to multiplex the information, serialize it, transmit it, and then de-serialize it when it arrives at the other end. Consequently, transmission must be done 16 times faster. Designing a SerDes is a very complex and demanding operation, yet people have them working at these speeds, like 12.8, 4.4, and 25.6 terabits per second for the optical implementation. It's very ambitious.

Then, there's placement and routing. Again, this is a very different world because it has nothing to do with modeling and simulation; it has to do with optimizing how you're going to lay out your system. Where should the bumps be placed? Where would we expect to put the gate, and what's the best way of rerouting the wires? Placement and routing are very differ-
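The multiplex/serialize/de-serialize scheme described for the SerDes can be sketched in a few lines. This is a logical model only, with hypothetical function names; a real SerDes adds encoding, clock recovery, and equalization on top of this. The essential idea survives: 16 parallel bits per cycle become 16 sequential bits on one wire, which is why the serial line must run 16 times faster.

```python
def serialize(words, lanes=16):
    """Interleave `lanes` parallel bits per cycle onto one serial stream.

    `words` is a list of per-cycle tuples, one bit per pin. The returned
    stream is `lanes` times longer than the word count, which is the
    "transmit 16 times faster" requirement in the text.
    """
    stream = []
    for word in words:
        assert len(word) == lanes, "one bit expected per pin each cycle"
        stream.extend(word)
    return stream

def deserialize(stream, lanes=16):
    """Recover the parallel words at the far end of the link."""
    return [tuple(stream[i:i + lanes]) for i in range(0, len(stream), lanes)]

# Round trip: what enters on 16 pins comes back out on 16 pins.
words = [tuple((cycle >> pin) & 1 for pin in range(16)) for cycle in range(4)]
assert deserialize(serialize(words)) == words
```

Four cycles of 16-pin data become a 64-bit serial stream and are recovered intact, which is the whole contract of the SerDes pair.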