Like Oil And Water
by Ann Steffora Mutschler
System-Level Design
http://chipdesignmag.com/sld/blog/2012/09/27/like-oil-and-water/
For years, the promise and allure of a concurrent design methodology included talk of models, high-level synthesis, virtual prototyping and other system-level technologies all peacefully coexisting in a single design methodology.
While it sounds like a good idea, the model-based design approach hasn’t mixed well with the virtual prototype approach. And at least for the foreseeable future, it probably won’t.
“High-level synthesis traditionally has been marketed very well as a tool to gain hardware design productivity, and virtual prototypes are a way to gain software productivity,” said Bill Neifert, chief technical officer at Carbon Design Systems. “Getting a more holistic view of that entire system requires rethinking how designs are done. And although you get a lot of people talking about the need for concurrent engineering, that typically means you get the two teams working together earlier. But there are still two teams. They are not combined into a single team, and to what extent that will happen I don’t know because when you look at the new design content that’s going into a system nowadays, it’s shrinking.”
What is growing is the amount of third-party content and re-used content, and that doesn’t require high-level synthesis.
“You don’t need to bring in the hardware and software teams designing at the exact same time to do that because your software guys are spending a lot of time developing on something they got from somewhere else,” Neifert said. “From that perspective, it’s a change in what we always thought the concurrent design methodology would be, but it’s being necessitated by the fact that design complexities are such that one company can’t do it all. You’ve got to assemble things from a bunch of different places, which means that the virtual prototype of course becomes important but it’s not necessarily derived off of the stuff you did yourself.”
Frank Schirrmeister, group director for product marketing of the System Development Suite at Cadence, said it’s a question of scope. “It’s typically not the same model that is suitable both as input for high-level synthesis and for use in virtual prototyping.”
For virtual prototyping, he explained, a full subsystem or full chip would be used, meaning a couple of processors, peripherals and then the accelerator that may be synthesized at a high level. There is an assembly process, and assembling all of those pieces together yields the virtual prototype. The objective is to have lots of different components interacting with each other from an execution perspective.
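To make that assembly step concrete, here is a minimal sketch in SystemC/TLM-2.0, the modeling standard commonly used for such virtual prototypes. The module names (CpuModel, VideoAccel), the register address and the direct socket binding are illustrative assumptions; a real platform would contain many more components and bind them through a bus or interconnect model.

```cpp
// Minimal sketch of virtual-prototype assembly in SystemC/TLM-2.0.
// All module names and addresses here are invented for illustration.
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_initiator_socket.h>
#include <tlm_utils/simple_target_socket.h>

// Stand-in for a fast instruction-set model of the processor.
struct CpuModel : sc_core::sc_module {
  tlm_utils::simple_initiator_socket<CpuModel> socket;
  SC_CTOR(CpuModel) : socket("socket") { SC_THREAD(run); }
  void run() {
    // One write transaction as a placeholder for real software
    // driving the accelerator's registers.
    tlm::tlm_generic_payload trans;
    unsigned int data = 0xCAFE;
    sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
    trans.set_command(tlm::TLM_WRITE_COMMAND);
    trans.set_address(0x4000);  // illustrative register address
    trans.set_data_ptr(reinterpret_cast<unsigned char*>(&data));
    trans.set_data_length(sizeof(data));
    trans.set_streaming_width(sizeof(data));
    socket->b_transport(trans, delay);
  }
};

// Stand-in for the accelerator block that might come out of HLS.
struct VideoAccel : sc_core::sc_module {
  tlm_utils::simple_target_socket<VideoAccel> socket;
  SC_CTOR(VideoAccel) : socket("socket") {
    socket.register_b_transport(this, &VideoAccel::b_transport);
  }
  void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time&) {
    // A functional model would decode the payload and act on it here.
    trans.set_response_status(tlm::TLM_OK_RESPONSE);
  }
};

// The assembly step itself: instantiate the pieces and bind them.
int sc_main(int, char*[]) {
  CpuModel cpu("cpu");
  VideoAccel accel("accel");
  cpu.socket.bind(accel.socket);  // real platforms bind via a bus model
  sc_core::sc_start();
  return 0;
}
```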
Conversely, high-level synthesis is really focused on one block. “You wouldn’t high-level synthesize the whole chip. It has different requirements. It’s not about execution speeds. It is about the implementability. There you take the code and you then annotate the code with the right information or write it in a way that the implementation is more obvious because you now want to go from the C level to the RTL level and at the very least you need to put in constraints. Do I optimize this for area, i.e., cost? Do I optimize this for power?” Schirrmeister said.
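As a sketch of the kind of annotation being described, consider a plain C FIR filter plus a synthesis directive. The pragma spelling below follows one common tool convention purely as an illustration; every HLS tool has its own directive syntax, and the function itself is a hypothetical example.

```cpp
// Sketch of annotating C code for high-level synthesis. The function
// is plain C++ and runs as-is in a C-based flow or a virtual
// prototype; a regular compiler simply ignores the unknown pragma.
// The "HLS unroll" spelling is one tool's convention, shown here only
// as an illustration.
#define TAPS 8

int fir(const int sample[TAPS], const int coeff[TAPS]) {
  int acc = 0;
  for (int i = 0; i < TAPS; ++i) {
    // Unrolling is exactly the area-versus-performance constraint
    // described above: TAPS parallel multipliers instead of one
    // multiplier reused TAPS times. Without the directive, a tool
    // would typically share one multiplier, favoring area (cost).
#pragma HLS unroll
    acc += sample[i] * coeff[i];
  }
  return acc;
}
```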
Today there are a couple of ways to bring an HLS block into the virtual prototype. One involves a cycle-accurate model, where you take the RTL output of high-level synthesis, harden it and use it for virtual prototyping. “I’ve also seen tools spitting out intermediate representations of the model at a higher level, which then can be used in virtual prototyping,” he noted. “What almost always happens is, if you look into high-level synthesis, you really are focused on the functional aspects.”
What happens where is driven by user goals, said Shawn McCloud, vice president of marketing at Calypto. “On the virtual platform side, they’re looking at basically trying to get early software validation, early software development, or they might be looking at doing some sort of system level analysis, whether it be for power or performance. More often than not, the big driver behind these virtual platforms is the early software development and validation.”
On the high-level synthesis side, users are coming at it from a different perspective, he said. “We’re at a different place. Here, we have a hardware accelerator that’s going to be hanging off of the latest Cortex A9 processor bus and it’s going to be something that’s dedicated to implementing the video codec and doing it in a way that’s extremely power-efficient, extremely area-efficient. Or some latest image processing algorithm that allows you to take a picture with a fairly lousy lens on your cell phone camera but yet be able to, through image processing, fix the distortion of that lens, get rid of red eye, zoom in, and eventually compress it and write it to the disk. These are very different goals.”
Looking at the model itself in terms of what is required in these two spaces, on the virtual prototyping side, requirement No. 1 is speed. “The models must be extremely fast. If it doesn’t simulate or execute extremely fast then you’re really not able to do software validation. On the hardware side, No. 1 is quality of results, quality of the hardware. The challenge of these two worlds is how to get enough detail in the model to allow you to have good quality of results yet also give you fast simulation,” McCloud added.
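One way to see why those two requirements pull in opposite directions is to model the same transfer at two abstraction levels, as in the self-contained sketch below. The function names and the four-bytes-per-beat assumption are invented for illustration; no particular tool’s API is implied.

```cpp
// Why abstraction level drives simulation speed: the same block
// transfer modeled loosely timed (fast, good enough for software
// work) versus beat by beat (the detail an accurate hardware model
// needs).
#include <cstddef>
#include <cstring>

// Loosely timed: functionality in one shot, timing as one lump-sum
// annotation. The simulator pays for a single call per transfer.
void dma_copy_lt(char* dst, const char* src, std::size_t bytes,
                 double& delay_ns, double ns_per_beat) {
  std::memcpy(dst, src, bytes);
  delay_ns += (bytes / 4.0) * ns_per_beat;  // assume 4 bytes per beat
}

// Cycle-approximate: the simulator visits every bus beat, which is
// what buys hardware detail and what costs execution speed.
void dma_copy_ca(char* dst, const char* src, std::size_t bytes,
                 double& delay_ns, double ns_per_beat) {
  for (std::size_t i = 0; i < bytes; i += 4) {
    std::size_t n = (bytes - i < 4) ? bytes - i : 4;
    std::memcpy(dst + i, src + i, n);  // one beat's worth of data
    delay_ns += ns_per_beat;           // advance time beat by beat
  }
}
```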
Convergence
From a use-case perspective, it does make sense that these things come together in one way or the other, asserted Johannes Stahl, director of product marketing for system-level solutions at Synopsys. “If you look into the typical blocks that are being done with high level synthesis, which are typically data-processing-intensive blocks, those blocks tend to produce a lot of traffic on a chip architecture. That’s interesting for the architectural evaluation task, so if you can include them it makes sense. The flip side is people can also get by with something different because they can just write a relatively simple traffic model, as these models typically produce relatively regular traffic patterns. So that’s how people get by today—designing these blocks in whatever means even if they are designed with RTL.”
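As a sketch of what such a stand-in can look like, the generator below is parameterized only by base address, burst size and period, and emits the kind of regular pattern being described. All names and numbers are hypothetical.

```cpp
// Sketch of a simple traffic model standing in for a data-processing
// block during architectural evaluation. All parameters are invented
// for illustration.
#include <cstdint>
#include <cstdio>

struct TrafficGen {
  std::uint64_t base_addr;  // region the block would stream through
  unsigned burst_bytes;     // bytes per transaction
  unsigned period_ns;       // fixed gap between transactions

  // Emit the (time, address, size) trace an architecture model would
  // replay against the interconnect; printed here for demonstration.
  void run(unsigned transactions) const {
    std::uint64_t t = 0, addr = base_addr;
    for (unsigned i = 0; i < transactions; ++i) {
      std::printf("%8llu ns  WRITE 0x%08llx  %u bytes\n",
                  static_cast<unsigned long long>(t),
                  static_cast<unsigned long long>(addr), burst_bytes);
      t += period_ns;
      addr += burst_bytes;
    }
  }
};

int main() {
  // E.g., a video-style block writing 64-byte bursts every 100 ns.
  TrafficGen gen{0x80000000ULL, 64, 100};
  gen.run(4);
  return 0;
}
```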
On the virtual prototyping side for software, it’s more a question of what type of model is taken from the C-based development flow into the virtual prototype, he said. “People typically will start with just a functional description, which is enough for the virtual prototype. But then they need to add implementation-related artifacts, and depending on the synthesis tool there can be more or fewer of them.”
At the end of the day, while the inputs to the execution and implementation routes differ today, industry players are working on standards and technologies so that a single architectural intent can eventually drive both, ideally reducing complexity for architects and design teams.