Emulation Division, Mentor Graphics
With the majority of designs today containing one or more embedded processors, the verification landscape is transforming as more companies grapple with the limitations of traditional verification tools. Comprehensive verification of multi-core SoCs cannot be accomplished without including the software that will run on the hardware. Emulation has the speed and capacity to do this before the investment is made in prototypes or silicon.
This means that, theoretically speaking, emulation's time has come. However, there has been an intractable and very practical barrier to that arrival. The high cost of emulators has made them affordable only for companies with deep pockets. Fortunately, recently introduced virtualization technologies are demolishing this barrier. It won't be long before emulators are a common fixture at small and medium-sized companies, as well as larger ones. In this article we will look at how and why that is.
THE RISE OF THE MACHINES
Full SoC verification, of hardware and software, requires connecting the chip to its end environment, including all of the peripheral devices, such as USB, Ethernet, displays, hard drives, and PCI Express. A device driver cannot be run against an SoC and fully tested unless the driver has a device to talk to. Because those devices sit outside the SoC, they must be connected to it before the drivers can be exercised.
Figure 1: Target environment includes peripherals.
Typically, this has been done with physical hardware and speed bridges or speed adapters. For emulation, it's been done in a mode called "in-circuit emulation" (ICE), where physical devices are cabled to a speed adapter, which is in turn cabled into the emulator. This setup consumes expensive lab space and takes time to configure and debug. With all the cables and external hardware, it is the least reliable part of the emulation environment, and it is difficult to debug when something goes wrong. These physical peripheral setups are not only inflexible but also costly to replicate. Each supports only a single user, locking the machine down to a given project or a single SoC. Emulators cost so much that they need to support many users, yet ICE limits multi-user flexibility.
UNLOCKING THE DOOR TO GLOBAL EMULATION ENVIRONMENTS
The golden key is to virtualize all of this hardware. In this scenario, hardware-accurate models of the peripherals run on a standard workstation, such as a Linux machine, so that the SoC and the software device drivers running in the emulator can interact with these models just as they would with ICE, except that everything is implemented virtually. There is no physical hardware, no cables, and no hardware speed bridges for these devices. Because the virtual lab is entirely in software, a single emulator can support many users. The complete emulation environment can be instantly replicated and reconfigured.
The virtual lab setup enables emulators to be shared around the world and around the clock. The emulation environment is now data center compatible, as opposed to being confined to the role of a lab bench dedicated to a single project. Because of the ease of replication and configuration, engineers from anywhere in the world can take turns using the same emulator.
Each user needs only a single workstation connected to an emulator to verify their entire SoC environment. These workstations are highly reliable, low cost, and very compact. Instead of a mass of cables, speed adapters, and a lab bench full of boards, there is only the emulator and a small rack of unit-high workstations.
With the emulation environment now housed in a data center, it must be reconfigurable through software, because users no longer have physical access to it. The virtual lab makes this possible: the environment is shared by multiple projects and geographies, and it can be reconfigured for another project with a different set of peripherals in the same amount of time it takes to load a new design into the emulator.
Figure 2: A single unit-high workstation provides the equivalent emulation capability per simultaneous user as an ICE lab bench.
JUST AS ACCURATE, JUST AS FAST AS THE REAL THING
Virtual models are hardware accurate because they are based on licensed design IP; therefore they are just as accurate as ICE peripherals. Virtual models use the actual, synthesizable RTL that SoC designers license and put into their chips. Because it is synthesizable RTL, the peripheral model can be compiled into the emulator, allowing the DUT to talk to a full, accurate RTL representation of the peripheral, such as a USB 3.0 controller.
Continuing with USB 3.0 as an example, on the workstation side there is a USB software stack plus software that configures it as a mass storage function client, i.e., a USB memory stick. The result is a functionally accurate USB memory stick, implemented entirely virtually: the RTL synthesized into the emulator is linked over a co-model channel to the software stack and function client running on the workstation. This is why users lose no accuracy by going to the virtual approach.
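To make that split concrete, here is a toy sketch, in Python, of the arrangement described above. All names and the queue-based transport are hypothetical; a real co-model channel is a vendor-specific link between the emulator and the workstation. The workstation-side mass storage model answers requests that would originate from the controller RTL in the emulator:

```python
from queue import Queue

class VirtualMassStorage:
    """Workstation-side function client: a toy virtual USB memory stick."""
    def __init__(self, num_blocks=1024, block_size=512):
        self.blocks = [bytes(block_size) for _ in range(num_blocks)]

    def handle(self, request):
        op, lba, data = request
        if op == "WRITE":
            self.blocks[lba] = data
            return ("OK", lba, None)
        return ("OK", lba, self.blocks[lba])  # READ

# The co-model channel: requests flow from the RTL side in the emulator,
# responses flow back. Both ends are modeled in one process here.
to_host, to_emu = Queue(), Queue()
device = VirtualMassStorage()

# Emulator-side stand-in: the synthesized controller writes a block,
# then reads it back.
to_host.put(("WRITE", 7, b"hello".ljust(512, b"\x00")))
to_host.put(("READ", 7, None))

# Workstation side services each request as it arrives.
for _ in range(2):
    to_emu.put(device.handle(to_host.get()))

ack = to_emu.get()                   # response to the WRITE
status, lba, payload = to_emu.get()  # response to the READ
print(payload[:5])                   # b'hello'
```

The point of the sketch is only the partitioning: the device's behavior lives in ordinary workstation software, while the bus-level protocol stays in RTL inside the emulator.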
Virtual devices are also very fast. They are not faster than ICE, but they are just as fast. Emulation speed is typically limited by how fast a design can run in the emulator, not by the virtual device or the communication to the workstation.
To demonstrate this, we can take a Veloce customer's experience using Mentor's VirtuaLAB Ethernet capability to generate Ethernet traffic to exercise an edge router chip. They activated twelve 10G ports on their chip, with a million packets passing through each port, for 12 million packets total. It took 45 seconds to emulate on Veloce. The customer estimated that it would take about thirty days to run a single port, with one million packets, in simulation. Simulating all twelve ports would have been impractical.
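The scale of that gap can be seen with quick arithmetic on the figures above, under the generous assumption that simulation time scales only linearly with port count (in practice it would likely be worse):

```python
# Back-of-the-envelope speedup from the figures quoted above.
SIM_DAYS_PER_PORT = 30        # ~30 days to simulate 1M packets on one port
PORTS = 12
PACKETS_PER_PORT = 1_000_000
EMU_SECONDS = 45              # 45 s to emulate all 12 ports on Veloce

# Assume linear scaling: 12 ports = 12x the single-port simulation time.
sim_seconds_all_ports = SIM_DAYS_PER_PORT * 24 * 3600 * PORTS
speedup = sim_seconds_all_ports / EMU_SECONDS
packets_total = PORTS * PACKETS_PER_PORT

print(f"{packets_total:,} packets total")        # 12,000,000 packets total
print(f"Estimated speedup: {speedup:,.0f}x")     # Estimated speedup: 691,200x
```

Even with that optimistic linear-scaling assumption, emulation comes out roughly 690,000 times faster for this workload.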
Figure 3: Virtual USB 3.0 mass storage peripheral.
The customer's edge router chip was configured by their router control application running on a Linux workstation, just as their production routers are configured by the same application. Via the co-model host, they linked the Linux box to the router chip design in the emulator. It looked exactly like running their chip in the lab, except that it was all done virtually, long before hardware prototypes or silicon were available. The chip was compiled into the emulator, and the VirtuaLAB Ethernet capability was split between the Ethernet transactors in Veloce (written in RTL and compiled into the emulator) and a traffic generator on the co-model host. This setup allowed them to do all kinds of analysis, make modifications, analyze performance and bottlenecks, and fix their chip months before silicon.
To do the same thing with the classic, physical ICE approach they would need to purchase Ethernet testers, which are fairly expensive. Then they would need a bank of speed bridges to act as the speed adapters between the Ethernet testers and the emulator. In order to support multiple users, as they could easily do with a virtual lab, they would have to duplicate that entire environment of Ethernet testers, speed bridges, and cables. When designs start getting up to 24, 48, 96, and even 128 ports, it becomes impractical to do this via ICE. The virtual lab is a much better approach.
Figure 4: VirtuaLAB setup compared to ICE.
Another benefit of the virtual approach pertains to a capability that simulators and emulators support called save and restore, or checkpoint restart. Emulating an RTOS boot can take from one to ten hours just to reach the device drivers. With checkpoint restart, that boot only needs to be run and debugged once; users do not have to re-emulate the RTOS every time they want to check out the device drivers and applications they wrote. However, if physical peripherals are connected to the emulator via the classic in-circuit approach, their state cannot be checkpointed, restored, or restarted. Physical peripherals simply cannot save and restore register state.
Virtual lab peripherals support save and restore of the complete environment, including the peripherals connected to the SoC as it sits in the emulator. This makes save and restore practical at the emulation level on a full chip, with peripherals.
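A minimal sketch of why this works: because every virtual peripheral is ordinary software state, the whole environment can be snapshotted and restored like any other data. The class and method names below are invented for illustration; a real emulator's checkpoint mechanism is vendor-specific.

```python
import pickle

class VirtualPeripheral:
    """Stand-in for a virtual peripheral: just registers in memory."""
    def __init__(self):
        self.registers = {"CTRL": 0, "STATUS": 0}

class EmulationSession:
    """Toy model of an emulation run: design state plus peripherals."""
    def __init__(self):
        self.cycle = 0
        self.peripherals = {"usb0": VirtualPeripheral()}

    def checkpoint(self):
        # Snapshot the full environment: cycle count and peripheral state.
        return pickle.dumps(self)

    @staticmethod
    def restore(blob):
        return pickle.loads(blob)

# Boot the RTOS once (the slow part), then checkpoint after boot...
session = EmulationSession()
session.cycle = 5_000_000_000  # cycles spent booting the RTOS
session.peripherals["usb0"].registers["CTRL"] = 1
snapshot = session.checkpoint()

# ...so every later driver-debug run restores from that point instead of
# re-emulating the boot.
fresh = EmulationSession.restore(snapshot)
print(fresh.cycle, fresh.peripherals["usb0"].registers["CTRL"])
```

A physical peripheral has no equivalent of `checkpoint()`: its register state lives in silicon and cannot be serialized, which is exactly the limitation the article describes.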
CONCLUSION
A virtual lab emulation environment is very cost effective, providing simultaneous access to a single emulator for many software engineers. Veloce and the VirtuaLAB technologies support total verification of the hardware, the software, and the peripheral interfaces, all in a reliable, flexible, reconfigurable, easily replicated virtual environment that is as fast and as accurate as hardware without its limitations. By delivering efficient multi-user support and data center compatibility, virtual peripherals will usher in emulation for companies large and small. Around the world. Around the clock.