Chip Level Verification with Processors

Filed under: Functional Verification, FPGA, DMA, ARM, ISX, verification strategy

Today, I will discuss some alternatives for chip-level verification of designs that contain microprocessors. Since I started at Axis Systems back in 2001, the number of designs with processors has steadily gone from a few, to some, to most, to nearly all. Not only do most chips have processors; many have more than one.

About a month ago I was visiting one of our ISX users and we were discussing a new project. When we talked about verification strategy for the processor, he admitted that the processor in this particular design didn't really play a major role in the function of the device. In fact, he seemed to think it might not be needed at all, but the design had a processor anyway, either because the marketers didn't want to be left behind by the competition or because somebody might eventually figure out something useful to do with it. This is probably an uncommon situation, but it shows that almost all chip-level verification must have a strategy for dealing with processors.

Most of the discussion in this area toggles between keeping the processor out as much as possible and using the processor as another way to drive the design during verification. Advocates for keeping the processor out usually give reasons like:
  • Simulations run faster: not only is there less RTL to simulate, but there is no wasteful instruction fetching eating up simulation cycles.
  • Processor bus traffic can be coordinated with the rest of the test bench. All good verification engineers know that verification is about control, and running software on a processor during verification means there is something that is uncontrolled, or at least mostly out of the test bench's control.
  • To a hardware verification engineer, what the software does is pretty much irrelevant. No matter what kind of software is executed, in the end all it can do is generate bus traffic that is legal according to the bus protocol. High-quality verification components can generate any and all combinations of legal bus traffic, so there is no need to run any software.
Even if most of the verification is done without full functional processor models, at least a few simple tests are normally run to make sure the CPU can fetch instructions and execute a basic memory test. Only once in my travels have I come across a company that skipped even that: they were building disk drive controllers with an embedded ARM processor and used only a VHDL bus functional model for the ARM. They were confident they didn't need to run even one small assembly language test program on the CPU, but again, this is a rare case.

Advocates for doing verification with the processor also have some good reasons, such as:
  • To qualify as chip-level verification, all of the major blocks in the design must be instantiated; it's not really chip-level verification if you take out important blocks of the chip.
  • Verification components don't really generate realistic traffic. A processor produces a particular mix of traffic that depends on its cache characteristics and configuration: cache line fills for instruction fetching, a high percentage of reads versus writes, and so on. Even with weighted constraints for a verification component, a look at the generated bus traffic shows a big difference between a real CPU and a verification component.
  • All of the C test programs written for verification can be used again, either with an FPGA board or with final silicon as hardware diagnostics. Since these test programs are needed anyway, they should be developed early, during verification, and then reused.
There are probably more reasons for each approach, so feel free to share your own approaches and ideas. One of the main uses for ISX has been as a way to augment the use of C test programs for hardware verification.

Let's look at a simple example of a test that checks DMA. The test initializes the DMA controller, puts some data into memory to use as the source of the transfers, and then loops: setting up a DMA transfer, starting it, and checking that the data was moved correctly from the source to the destination.
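A minimal sketch of what such a directed test might look like is below. The register map and names (DMA_SRC, DMA_CTRL, and so on) are hypothetical, since every DMA controller has its own programming model:

    #include <stdint.h>

    /* Hypothetical memory-mapped DMA controller registers */
    #define DMA_BASE   0x40001000u
    #define DMA_SRC    (*(volatile uint32_t *)(DMA_BASE + 0x00))
    #define DMA_DST    (*(volatile uint32_t *)(DMA_BASE + 0x04))
    #define DMA_LEN    (*(volatile uint32_t *)(DMA_BASE + 0x08))
    #define DMA_CTRL   (*(volatile uint32_t *)(DMA_BASE + 0x0C))
    #define DMA_STAT   (*(volatile uint32_t *)(DMA_BASE + 0x10))
    #define DMA_START  0x1u
    #define DMA_DONE   0x1u

    #define XFER_WORDS 64
    #define NUM_XFERS  8

    static uint32_t src_buf[XFER_WORDS];
    static uint32_t dst_buf[XFER_WORDS];

    int dma_test(void)
    {
        /* Fill the source buffer with a known, fixed pattern. */
        for (int i = 0; i < XFER_WORDS; i++)
            src_buf[i] = 0xA5A50000u + i;

        for (int n = 0; n < NUM_XFERS; n++) {
            /* Set up one transfer: fixed addresses, size, and mode. */
            DMA_SRC  = (uint32_t)(uintptr_t)src_buf;
            DMA_DST  = (uint32_t)(uintptr_t)dst_buf;
            DMA_LEN  = XFER_WORDS;
            DMA_CTRL = DMA_START;

            /* Poll until the controller reports completion. */
            while ((DMA_STAT & DMA_DONE) == 0)
                ;

            /* Check that the data arrived intact. */
            for (int i = 0; i < XFER_WORDS; i++)
                if (dst_buf[i] != src_buf[i])
                    return 1;   /* fail */
        }
        return 0;               /* pass */
    }

    int main(void) { return dma_test(); }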
 

This makes a nice test to use for verification and to reuse on the FPGA board or final silicon to make sure basic DMA operations work correctly. The first thing a verification engineer notices about the test is that it is very deterministic. The test will not really exercise any interesting corner cases. The addresses, the data, the size, the modes, and the timing are fixed by the test writer and hard coded into the test.

Although it may take a few tries to get this test working with the hardware, once it works it will always work. It can be run over and over every day and it is unlikely to find any new issues with the hardware. In order to make the test stress the hardware, more C code could probably be written to hit more corner cases.

Maybe more configurability could be added to the test to use different data, addresses, and modes, but this would probably get messy, since most of the control would have to happen at compile time of the C code. Enhancing the C code for better verification also starts to conflict with the reuse aspect of the test program: most diagnostic programs used in the lab with silicon are meant to help find and debug manufacturing defects, such as shorted or open pins, rather than to perform comprehensive functional verification.
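For instance, about the only handle a plain C test offers is preprocessor knobs, so every variation means another build of the program (the macro names continue the hypothetical sketch above):

    /* Compile-time "configuration": each variation requires rebuilding,
     * e.g.  cc -DXFER_WORDS=128 -DDMA_BURST=1 -c dma_test.c
     */
    #ifndef XFER_WORDS
    #define XFER_WORDS 64      /* transfer size, fixed at build time */
    #endif

    #ifndef DMA_BURST
    #define DMA_BURST 0        /* 0 = single-beat mode, 1 = burst mode */
    #endif

    #ifndef SRC_OFFSET
    #define SRC_OFFSET 0       /* nudge the source buffer alignment */
    #endif

Each combination of values is a separate binary to build, load, and track, which is where the mess comes from.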

ISX has been successful in augmenting existing C test programs to drive better hardware verification. Projects that used a library of directed C programs in the past have been able to leverage ISX to keep the same C code while adding capabilities that improve verification in a couple of ways.

First, ISX allows a test bench to control the software running on the CPU. The C test programs no longer run uncontrolled on the CPU; they can be coordinated with the rest of the test bench.

This additional control allows for better sequencing and timing and makes it more likely that tests will hit interesting corner cases.
Second, ISX enables C tests to run with the needed variations in stimulus without building more C test programs: data generated in the test bench can be passed to the C functions without changing the C code.

This makes managing the C code much easier, since the same program can be loaded into memory and run, and the results differ because different data and timing come from the test bench. Different tests that call the C functions can easily be loaded, and different random seeds can be used with the same tests to produce different results. As with all constrained-random generation, functional coverage can be applied to software variables, function arguments, and function return values to measure what was executed.
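Conceptually (this is only an illustration of the idea, not the ISX interface), the hard-coded test above reduces to a parameterized C function whose arguments the test bench can generate under constraints and whose return value can feed functional coverage:

    /* The same DMA check, refactored so the interesting knobs are
     * arguments. The test bench supplies src, dst, words, and mode at
     * run time, so one compiled program covers many stimulus variations.
     * (Names and signature are illustrative, not an ISX API.)
     */
    int dma_xfer_test(uint32_t src, uint32_t dst,
                      uint32_t words, uint32_t mode)
    {
        DMA_SRC  = src;
        DMA_DST  = dst;
        DMA_LEN  = words;
        DMA_CTRL = mode | DMA_START;

        while ((DMA_STAT & DMA_DONE) == 0)
            ;                      /* wait for the transfer to finish */

        for (uint32_t i = 0; i < words; i++)
            if (((volatile uint32_t *)(uintptr_t)dst)[i] !=
                ((volatile uint32_t *)(uintptr_t)src)[i])
                return 1;          /* data mismatch */
        return 0;                  /* pass */
    }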

In future blog entries I will provide more insight into how ISX provides communication between a test bench and the C test functions and variables running on an embedded processor to improve verification, but if you are really interested and cannot wait, I recommend attending CDNLive! next week in San Jose.

Three ISX users will present ISX verification stories, and one of them started with a library of directed C tests running on an ARM processor and ended up with an environment that can vary the tests using constraints to improve verification. Look for session 1FV5, "Using ISX to Build a Constrained-Random Test Environment from Directed C-Based Tests," on Tuesday afternoon.

Questions? Comments? Post below.
