
TLM 2.0, UVM 1.0 and Functional Verification

Filed under: Functional Verification, OVM, SystemVerilog, SystemC, DVCon, verification, TLM, VMM, Accellera, UVM, Accellera VIP TSC, ports, TLM 2.0

The DVCon 2011 conference was held this week, and the Accellera Universal Verification Methodology (UVM) 1.0 release is breaking records in terms of interest and attendance. UVM 1.0 is a big deal! The core functionality is solid and ready for deployment. Accellera held a full-day tutorial on UVM 1.0 on Monday, and during a panel discussion on Tuesday afternoon, AMD and Intel announced that they are in the process of adopting it.

We (I’m wearing my Accellera hat) briefly introduced the industry-proven basic UVM concepts, but spent most of our time talking about the great enhancements and new capabilities in UVM 1.0. For many in the audience, it was hard to map the new features to the existing methodology, to understand what is deprecated, and to know what the recommended use model is. Indeed, the use models of a few of the new capabilities have not yet been finalized by Accellera.

Instead of answering inquiries individually (not a scalable solution), I decided to write down a few high-level notes on each topic. In this first blog I will discuss the transaction-level modeling (TLM 2.0) additions and their impact. I want to emphasize that these notes represent Cadence’s technical views on these topics.

TLM 1.0 ports were heavily used in OVM and in UVM 1.0EA (Early Adopter). The UVM 1.0 release adds a partial SystemVerilog implementation of the Open SystemC Initiative (OSCI) TLM 2.0 capabilities. At DVCon, John Aynsley, author of the TLM 2.0 spec, gave a great introduction to TLM 1.0 and TLM 2.0 concepts and capabilities (one of the best I’ve seen so far for TLM). He then moved on to the UVM TLM implementation, both in terms of TLM 1.0 and TLM 2.0, covering the benefits and contrasting it with the OSCI SystemC capabilities. His slide is shown below:

[Slide: John Aynsley’s overview of TLM 1.0 and TLM 2.0 concepts and their UVM implementation, contrasted with the OSCI SystemC capabilities]

The TLM 2.0 standard was created for modeling memory-mapped buses in SystemC. Most of the DVCon discussion was devoted to the concepts of TLM 2.0, with its rich (or complex) set of capabilities: sockets and interfaces, blocking and non-blocking transports, the generic payload, hierarchical connections, temporal decoupling, and more. The main questions asked were: How much of this is relevant to functional verification and, specifically, to UVM environments? What do I need to do differently in a UVM verification environment to leverage the TLM 2.0 potential?

 

Let’s start by focusing on agents that reside within an interface UVC. As you can see below, monitors contain analysis ports. The monitor performs interface-level coverage and checking, and distributes events and monitored information to the sequencer, scoreboard, and other components. Nothing in UVM differs from OVM for this kind of distributed one-to-many communication. While this is trivial, it brings us to Guideline #1: In the monitor, keep using the analysis port.

[Figure: an interface UVC agent, with the monitor’s analysis ports fanning out to the sequencer, scoreboard, and other components]

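To make Guideline #1 concrete, here is a minimal sketch of a monitor that publishes through an analysis port. The my_transaction class and the collect_transaction() task are hypothetical placeholders for your protocol-specific code, not part of any library.

  // Hypothetical transaction type, for illustration only
  class my_transaction extends uvm_sequence_item;
    rand bit [31:0] addr;
    rand bit [31:0] data;
    `uvm_object_utils(my_transaction)
    function new(string name = "my_transaction");
      super.new(name);
    endfunction
  endclass

  class my_monitor extends uvm_monitor;
    `uvm_component_utils(my_monitor)

    // One-to-many broadcast: scoreboard, coverage collector, and others subscribe
    uvm_analysis_port #(my_transaction) item_collected_port;

    function new(string name, uvm_component parent);
      super.new(name, parent);
      item_collected_port = new("item_collected_port", this);
    endfunction

    // Placeholder for the protocol-specific sampling of the interface
    virtual task collect_transaction(output my_transaction tr);
      tr = my_transaction::type_id::create("tr");
    endtask

    virtual task run_phase(uvm_phase phase);
      my_transaction tr;
      forever begin
        collect_transaction(tr);        // reconstruct a transaction from the bus
        item_collected_port.write(tr);  // broadcast; zero or many subscribers
      end
    endtask
  endclass

Subscribers such as the scoreboard or coverage collector connect their analysis exports to this port in the enclosing environment; the monitor never needs to know who, or how many, are listening.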
Another communication channel is needed between the sequencer that creates transactions and the driver that sends them to the Device Under Test (DUT). What we have in UVM (introduced in OVM) is a producer/consumer port (uvm_seq_item_pull_port) that provides the needed API and hides the actual channel implementation (TLM or otherwise) from the user. I know there was not always agreement on this among all vendors, but Cadence has consistently recommended that users adopt this abstract layer, as opposed to the direct TLM ports. TLM 2.0 sockets do not solve all the communication requirements between the sequencer and the driver (for example, the try_next_item semantics are hard to resolve in either TLM 1.0 or TLM 2.0).

 

Also, as was mentioned in the Accellera tutorial, multi-language support is not yet solved with UVM 1.0 -- for now, this is a vendor-specific implementation. This is a great time to reiterate our existing recommendation. Guideline #2: For sequencer-driver communication, use the abstract producer/consumer ports in your code and avoid using the TLM connections directly. This will keep your code forward-compatible with existing or future solutions that the implementation uses (we might need extensions to facilitate cross-language communication). Using the high-level functions also allows us, the library developers, to add more functionality to the get_next_item() and item_done() calls.
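Guideline #2 in code form: a sketch of a driver that relies only on the abstract pull-port API that uvm_driver already provides (seq_item_port), reusing the hypothetical my_transaction class from the monitor sketch above. The drive_transfer() task is a placeholder for the pin-level protocol.

  class my_driver extends uvm_driver #(my_transaction);
    `uvm_component_utils(my_driver)

    function new(string name, uvm_component parent);
      super.new(name, parent);
    endfunction

    // Placeholder for the protocol-specific pin-level driving
    virtual task drive_transfer(my_transaction tr);
    endtask

    virtual task run_phase(uvm_phase phase);
      forever begin
        seq_item_port.get_next_item(req); // abstract API; the channel is hidden
        drive_transfer(req);
        seq_item_port.item_done();        // completion handshake to the sequencer
      end
    endtask
  endclass

Because the code never names the underlying channel, the library remains free to change or extend what happens under get_next_item() and item_done() without breaking the testbench.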

 

Another communication layer you may need is for stimulus protocol layering. There are multiple ways to implement layering, but Guideline #2 is valid for this use case as well, where one downstream component needs to pull items from a different component. If you stick with the abstract API of the producer/consumer port, your environment stays safe as we improve the communication facilities for you.
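One possible layering sketch, still following Guideline #2: a translator component pulls high-level items from an upper-layer sequencer through the same abstract producer/consumer port and turns them into lower-layer traffic. The upper_item class and the send_as_lower_layer() task are hypothetical.

  // Hypothetical upper-layer item, for illustration only
  class upper_item extends uvm_sequence_item;
    rand byte unsigned payload[];
    `uvm_object_utils(upper_item)
    function new(string name = "upper_item");
      super.new(name);
    endfunction
  endclass

  class upper_to_lower_translator extends uvm_component;
    `uvm_component_utils(upper_to_lower_translator)

    // Connected in the env to the upper sequencer's seq_item_export
    uvm_seq_item_pull_port #(upper_item) upper_port;

    function new(string name, uvm_component parent);
      super.new(name, parent);
      upper_port = new("upper_port", this);
    endfunction

    // Placeholder for the translation, e.g. starting lower-layer sequences
    virtual task send_as_lower_layer(upper_item item);
    endtask

    virtual task run_phase(uvm_phase phase);
      upper_item item;
      forever begin
        upper_port.get_next_item(item);  // same abstract API the driver uses
        send_as_lower_layer(item);
        upper_port.item_done();
      end
    endtask
  endclass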

 

Let’s review the other benefits of TLM 2.0 and the value they can provide to the verification environment. Again, I include John Aynsley’s slide covering the benefits of TLM 2.0 below, followed by my analysis of each potential benefit.

[Slide: John Aynsley’s summary of the TLM 2.0 benefits]

Let’s review the “value” of these benefits in the context of verification:

  • Isn’t TLM 2.0 pass-by-reference capability faster than TLM 1.0, which is critical for speed? Indeed, pass by reference is critical for speed and memory usage, but the TLM 1.0 implementation in UVM does not copy by value, so no speed advantages are expected from adopting TLM 2.0.
  • What about TLM 2.0 support for timing and phases? TLM 2.0 allows defining the transaction status and phases as part of the transaction. NOTE: this is unrelated to UVM phases. This might be a consideration for UVM, but I would argue that timing and status matter more, in a verification context, for the analysis ports and monitors, as this is the channel used for such introspection. This can be considered in the next version of the UVM library as part of replacing the underlying implementation of the producer/consumer ports. In general, timing annotation in TLM 2.0 is complex, especially as it relates to “temporal decoupling,” and is too difficult to use for too little return on investment.
  • A well-defined completion model? We need to think of a use case for this… As we listed all the communication use cases for verification, we could not map this one to a mainstream functional verification need.
  • What about the generic payload (GP)? The generic payload is a standard abstract data type that includes typical attributes of memory-mapped buses, such as command, address, data, and byte enables. An array of extensions exists to enhance this layer with protocol-specific attributes (for example, an AXI transaction defines attributes such as cacheability and privilege that are not part of the generic payload definition). The generic payload can be used to create protocol-independent sequences that can be layered on top of any bus. It is also useful when communicating with a very abstract model early in the design cycle, before the actual protocols have been decided upon, and it should at some point be unified with the register operations. The generic payload does not replace the existing protocol-specific sequencer. It also does not lend itself nicely to sequences and randomization, as it is hard to constrain the extensions that are stored as array items. To put things in the right perspective, we find the generic payload a good addition to UVM. We used it as part of the Cadence ESL solution and will be happy to share more of our recommendations on the correct usage of the generic payload class. Guideline #3: Check if and how usage of GP can help your specific verification challenges (see the sketch after this list).
  • What about the multi-language potential of TLM 2.0? OSCI TLM 2.0, as specified, is a C++ standard. Portions of it cannot be implemented in SystemVerilog, nor does it enable or simplify multi-language communication (in fact, passing by reference makes it more challenging to support than TLM 1.0). However, what we hear from users is that communicating with high-level models that use TLM 2.0 interfaces is the main requirement for TLM 2.0, and that involves multi-language support. As officially stated multiple times in the Accellera tutorial, multi-language transaction-level communication support is not part of the standard library and was left for the individual vendors to support. This will be tricky for users who would like to keep their testbench vendor-independent. Guideline #4: Remember that the current UVM TLM 2.0 multi-language support is not part of the standard library and may lock you to a specific vendor and implementation.
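As a concrete illustration of Guideline #3, here is a rough sketch of a sequence that builds a generic payload write using the uvm_tlm_generic_payload accessor functions. The sequence name, address, and data are made up, and it assumes a sequencer parameterized for generic-payload items; a protocol-specific layer below it would translate the GP into bus cycles.

  class gp_write_seq extends uvm_sequence #(uvm_tlm_generic_payload);
    `uvm_object_utils(gp_write_seq)

    function new(string name = "gp_write_seq");
      super.new(name);
    endfunction

    virtual task body();
      uvm_tlm_generic_payload gp;
      byte unsigned data[] = '{8'hDE, 8'hAD, 8'hBE, 8'hEF};

      gp = new("gp");
      gp.set_command(UVM_TLM_WRITE_COMMAND);  // memory-mapped write
      gp.set_address(64'h0000_1000);          // example address
      gp.set_data_length(data.size());
      gp.set_data(data);
      gp.set_streaming_width(data.size());

      start_item(gp);
      finish_item(gp);  // a protocol-specific layer maps the GP to bus cycles
    endtask
  endclass

Because the GP’s extensions array is hard to constrain, the example keeps the fields directed; protocol-specific randomization is better left to the layer that converts the GP into bus items.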


To solve this main TLM 2.0 requirement, Cadence is working within the IEEE 1800 committee to propose extending the DPI to handle passing of objects between different object-oriented languages. Requirements such as passing items by reference, querying hierarchy, and others that are not part of TLM 2.0 will be standardized as language features and will hopefully be supported by all vendors. Cadence is working with multiple users who are asking for this solution. If you wish to support this effort, follow Guideline #5: Join a standardization body or encourage your vendor to support standard multi-language communication. :-)

Summary of recommendations regarding TLM 2.0 and verification:

 

Guideline #1: In the monitor, keep using the analysis port.
Guideline #2: Use the abstract producer/consumer ports in your code and avoid using the TLM connections directly.
Guideline #3: Check if and how usage of the generic payload can help your specific verification challenges.
Guideline #4: Remember that the current UVM TLM 2.0 multi-language support is not part of the standard library and may lock you to a specific vendor and implementation.
Guideline #5: Join a standardization body or encourage your vendor to support standard multi-language communication.

I hope that these notes address the concerns I heard about the complexity of TLM 2.0 and the amount of change required for your existing verification environments. I have seen other tutorials that alternate between verification needs and modeling requirements that have little to do with verification.

 

In summary, if you find the TLM 2.0 extensions to UVM to be complex, don't worry: you don't really need to bother with them. You will probably find the TLM 1.0 communication more than sufficient for most of your testbench development needs. You might find the generic payload useful for abstract modeling of transactions, and you can easily adopt the GP without worrying about the rest of the TLM 2.0 complexity. The main requirement, verifying and integrating SystemC TLM 2.0 models with a SystemVerilog testbench, is not yet part of the UVM standard, so we invite you to join the effort to standardize a solution for this problem.

 

Sharon Rosenberg

Comments (1)

By Priyanka on March 20, 2013
Hi Sharon, It is a nice article. I am not able to see the advantages of Generic Payload (GP). Since the fields of the GP cannot be randomized, in Verification Environments It isn't going to be really helpful. So what is the main use of GP? Is there a possibility of extending this GP and probably add constraints to the data and the address fields etc?
