By Ed Sperling
System-Level Design sat down to discuss ESL modeling with Anmol Mathur, CTO of Calypto; Steve Frank, CEO of Paneve; Fabien Clermidy, senior architect at Leti; and Sylvian Kaiser, CTO of Docea Power. What follows are excerpts of that conversation.
SLD: How critical is it to keep models synchronized?
Kaiser: Keeping the models synchronized is key to success. With ESL there is another step in the chain; you can add another stage to your implementation flow. It is no longer just RTL to GDS. With ESL you also have a representation of your whole system at a very high level, and you need to keep those models synchronized with your implementation. This is not only a concern for ESL. Any flow, whether the classical RTL-to-GDS flow or SystemC to GDS, is driven by the functional implementation. But people also want to include other estimates, and they need to keep control of other non-functional properties in the flow. For these they need models, and they have to keep those models synchronized. This is true for ESL, and it is true for UPF or CPF, where the UPF and CPF descriptions have to be kept synchronized. This is key to keeping interoperability in the flow.
SLD: Another piece that needs to be included is architectural exploration. What happens when you have loosely timed models, cycle-accurate models, and power measurements down to the microwatt?
Frank: Part of that is very early in a project. Before you really know what it is, you’re going to be building tools and models to estimate. Once you get to the point where you’ve eliminated three out of the four quadrants where you want to work and you’re narrowing in, you’re at a point where you need a different level of exploration. The best thing you can do with that exploration once you’ve figured out what you want is to throw it away. It got you to this point where you know where you want to focus. After that, you have to switch focus so you can look at kernels of code and compilers. You need a simulator that looks like what you want for a reference design. This early stuff isn’t something you want to maintain anymore.
Mathur: That’s not any different than what we do. People do have performance models. At the very beginning of a project you’ll have a model for the graphics, the processor, and other things. That high-level model serves a purpose, but once you’re done you have to discard it and move on to implementation. There is no magic. Your accuracy only has to be sufficient to make decisions at that level. Even if you’re off by 90%, if you’re looking at two different microprocessors you can determine which one will be better, as long as the error on both is the same. People already have performance models and they will continue to need them. From the implementation point of view they will need to raise the level of abstraction. Once you have done all your performance analysis and architectural exploration, you have to have a standardized level of abstraction that you design to, verify, validate, and then push on in terms of implementation.
Frank: It’s a combination of using the tools and applying methodology for using it.
Mathur: But you can’t have three or four of these levels. You do performance evaluation and then power evaluation using a different model, and then you’re going to have another model. Design teams don’t have that much bandwidth. The bulk of their work is in implementation.
Clermidy: What we need is to combine the constraints, both in terms of performance and power measurement, and thermal if we are talking about stacking chips. In the past you might have been able to put this block with that block because you knew it worked in previous designs. That’s no longer possible. There are too many different constraints. You need something very efficient to make sure it works.
Mathur: But you have to do all of that from one level of abstraction. Design teams will only do one model. You have to find the right level. If there is some purpose that requires a different model, maybe there’s one other model people can do. But you can’t have three or four different flavors of things.
Kaiser: I don’t think it’s a good idea to have a single model that is used both for implementation and for architectural exploration. At a very high level of abstraction you want to do what-if analysis and try to assess various derivatives and techniques. But if you do that on the same model that will be used for implementation, you may jeopardize your design, because you may introduce flaws and things that are not under control. Today with ESL there are several models doing different things. One model is for the implementation and the implementation flow; it is used just for the design of a new product. Another model is used for architectural exploration; it can be maintained and used to try out various things, such as different IPs and different core management techniques, and to assess a distribution using this model. This is something that is changing.
SLD: So how many different models do we need?
Kaiser: One for the implementation, one for the exploration that is kept synchronized with the product development and that is used for closure all along the flow. Using this architectural model you can achieve closure of all the non-functional properties along the flow.
SLD: It sounds as if one of the takeaways is that you need to set up a methodology specific for your team and what you’re trying to achieve. Is that right?
Frank: Yes, and for what your specific task is. If you are synthesizing and you want to converge timing, area and power, do you make changes downstream or do you go back to SystemC and make changes to impact that? You have to learn how to have that impact on area, timing and power. There’s a tremendous temptation not to do that when you get close, but there’s a learning curve. The way you do that with SystemC is very different than when you have RTL and a netlist and you’re trying to make tweaks. The whole methodology of ECOs means creating other source code. There are other models that creep in that you don’t even realize are there until you get toward the end.
SLD: In theory ESL should reduce the time on both sides of a design, the architectural exploration and implementation on one side and the verification on another. Is that happening?
Frank: Being able to cut down the number of lines of code has cut down the verification time. We base our verification on comparing the architectural model against the implementation. If we’re building a multi-core processor and we have one unit that does the caching and coherence, we do unit tests to verify the algorithms. Out of a team of 13 people, two are full time on verification. Most of the other work is done by the architects and designers, because they’re familiar with the results of the verification.
Clermidy: We’re developing a network on chip with different protocols between the cores. We don’t need to wait for the implementation of these protocols to verify whether they’re correct. We can do formal verification at a very high level, for example. When we go into the details, using the SystemC golden model is the right way to go. You will not develop multiple different testbenches; you have just one. The problem is simulation time. Sometimes you have to develop very specific tests and know what has to be covered.