Grey-Boxed Data-Path Approach Using 'when sub-typing'

Filed under: Functional Verification, Coverage-Driven Verification, verification strategy, IES, Incisive Enterprise Simulator (IES), e, Specman, Aspect Oriented Programming, AOP, when sub-typing

[Please join Team Specman in welcoming the first guest blogger from our user base: Ms. Kaberi Banerjee, a senior design and verification engineer based in Silicon Valley, California.]

Fellow Specmaniacs (or should I say “Specmites”, in alignment with the subject of bugs!), I recently had a very pleasant encounter with the all-powerful Specman “when” sub-typing that I would like to share with you. (When sub-typing is one of the many Aspect-Oriented Programming (AOP) features of the e language.)

Generalizing somewhat, data processing by DUTs can be categorized as follows:

(a) Data traversing the DUT is not altered and is available at the output with some packaging or re-timing.

(b) Data traversing the DUT is used to generate additional data or control. Operations like parity, CRC, etc. fall into this category.

(c) Data is completely consumed and transformed by the DUT, and the output data is very different from the input data.

I am of the opinion that category (a) and (b) data paths are better tackled with a “black-boxed data-path” approach using pure constraint-driven randomization. When the DUT does not modify the data, the functional coverage of the DUT will not be influenced by the data content. In cases where simple quantities such as parity and CRC are derived from the data, there will be circuitry in the DUT that needs to be functionally covered by providing a variety of data samples, and the default constrained-random data generation built into the Specman tool may suffice.
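To illustrate why category (b) rarely needs targeted data, here is a small Python model (not e code, and not from the original post; `even_parity` is a hypothetical helper): unconstrained random bytes quickly hit both outcomes of a parity circuit.

```python
import random

def even_parity(byte: int) -> int:
    """Even-parity bit for an 8-bit value: 1 when the popcount is odd."""
    return bin(byte & 0xFF).count("1") % 2

# Purely random stimulus exercises both parity outcomes almost immediately,
# which is why default constrained-random generation can suffice here.
random.seed(0)
samples = [random.randrange(256) for _ in range(1000)]
assert {even_parity(b) for b in samples} == {0, 1}
```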

My project dealt with the third category, (c), above. 

In order to perform this verification task, a “grey-boxed data-path” approach was adopted along with the use of one of the AOP features of e, namely, "when sub-typing".

The “grey-boxed data-path” approach examines, as a first step, the data-processing algorithm and extracts from it all the steady-state and boundary-condition operations. For instance, any data-length-based operation would have to examine large, small and default data lengths, and the various corners that may or may not exist in each of these broad categories. The benefit of taking this “grey-boxed data-path” approach is that it naturally leads into the following next steps of the verification task:


(1) Defining the functional coverage points of the DUT

(2) Defining the constraints to design a data/pattern-generator that would provide the input data-stream to the DUT
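The first step can be sketched concretely: grey-boxing a length-based micro-operation yields a short list of lengths that both the coverage points and the generator constraints will target. A minimal Python sketch (illustrative only; `length_corners` is a hypothetical helper, and the value 256 mirrors the standard block length used later in the e snippets):

```python
BLK_MAX = 256  # standard block length, mirroring the `BLK_MAX define

def length_corners(max_len: int = BLK_MAX) -> list[int]:
    """Steady-state and boundary lengths a grey-boxed test plan would
    target: the full default block plus residual-block corners."""
    return sorted({
        1,             # smallest possible residual block
        max_len // 2,  # a mid-range residual length
        max_len - 1,   # residual just below the default
        max_len,       # the default (full) block length
    })

print(length_corners())  # → [1, 128, 255, 256]
```

Each entry in this list becomes both a coverage point and a constraint target for the data generator.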

To implement the data generator, the “when” sub-type feature of e came in very handy. At the test-case level it was possible to very easily describe the type of data stream that would be used to exercise the DUT.


Below is a simple example of a data generator, leveraging the power of the “when” construct.


This e code snippet, labeled DATA_GEN, describes a data structure that captures some characteristics of a data stream that will exercise the features of the DUT. The DUT features are a circuit-level realization of the algorithm-level micro-operations. The micro-operations were extracted as a result of the “grey-boxing” step described above. The last section of the e code, labeled USE_CASE, describes how a data sequence may be constrained to generate a specific data stream. Both code snippets are shown without code delimiters.


The extracted features in the data structure blk_s are as follows:


1. Data format of the basic data item: 8-bit, 12-bit or 16-bit, represented as BIT8, BIT12 and BIT16.


2. Number of basic data items delivered on the DUT interface during a single clock cycle, namely the packing type, represented as ONE (one 8-bit item, one 16-bit item, etc.), TWO (two 16-bit items, etc.), THREE, and so on.


3. Number of packed data items delivered for processing by the DUT, namely the length of a block of data. It is assumed that the DUT will accept data in data-bursts that measure a block length.


4. Feature (3) above is derived from an abstract Boolean test-control feature of the data structure, namely “is_residual_blk”. Thus is_residual_blk is a “knob” that controls whether the generated length takes the default value or a value smaller than the default.

Note: this set of extracted features is only used to describe the data structure below; in reality the features extracted, and their control, may be larger and more complex.
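To make the packing arithmetic of feature (2) concrete, here is a small Python model (not e code; `pack2x12` is a hypothetical name) of how two 12-bit basic data items fill one 24-bit packed data item, with item j (1-based) occupying bits (j*12)-1 down to (j-1)*12:

```python
MAX_12 = (1 << 12) - 1  # largest 12-bit value, i.e. 0xFFF

def pack2x12(items: list[int]) -> int:
    """Pack two 12-bit basic data items into one 24-bit packed item.
    Item j (1-based) occupies bits [(j*12)-1 : (j-1)*12]."""
    assert len(items) == 2 and all(0 <= it <= MAX_12 for it in items)
    word = 0
    for j, item in enumerate(items, start=1):
        word |= item << ((j - 1) * 12)  # shift item j into its slice
    return word

print(hex(pack2x12([0xABC, 0x123])))  # → 0x123abc
```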


//file: data_gen.v

`define MAX_24  ((1 << 24) - 1)
`define MAX_12  ((1 << 12) - 1)
`define MAX_8   ((1 <<  8) - 1)
`define BLK_MAX 256

//file: data_gen.e

-- Read in a file that includes Verilog `define text macros.
-- After reading in the file, you can use these text macros
-- in Specman expressions.
verilog import data_gen.v;

extend sys {
    blk: blk_s;
}; // extend sys

-- data width can be 8/12/16 etc
type format_typ: [BIT8, BIT12, BIT16];

-- data input bus width to the DUT could consist of 1, 2
-- or 3 of the data width
type pking_typ: [ONE, TWO, THREE];

-- Descriptor for a block of data
struct blk_s {

    fmt: format_typ; -- basic data unit
    keep soft fmt == BIT12;

    -- the blk may be smaller than the standard blk size
    is_residual_blk: bool;
    keep soft !is_residual_blk;

    -- characterize input data width on DUT interface
    pking: pking_typ;
    keep soft pking == TWO;

    length: uint;
    keep soft length == calc_length();

    // define the needed empty methods.  Functionality will be added in
    // when subtypes.
    calc_length(): uint is empty;
    custom_pack_bits() is empty;

    when TRUE'is_residual_blk blk_s {
        calc_length(): uint is {
            // Complex generation algorithm to determine length such
            // that it is smaller than the standard block length. The
            // residual generation could be further constrained based on
            // some boundary conditions of the residual length. These
            // boundary conditions of a residual block may trigger
            // special-case processing in the micro-operations of the DUT.
            // In order to control the generation of these boundary
            // conditions, additional "knobs" may be defined in blk_s.
        }; // calc_length()
    }; // when TRUE'is...

    when FALSE'is_residual_blk blk_s {
        calc_length(): uint is {
            // Simple generation algorithm to determine length such
            // that it is equal to the standard block length
            gen result keeping {it == `BLK_MAX;};
        }; // calc_length()
    }; // when FALSE'is...

    -- Having defined the block length above, the when constructs below
    -- are used to group basic data items into packed data items.
    -- For simplicity, not all the format and packing types have been coded here.

    -- The first when construct describes a data stream that generates a
    -- block of data of length "length" (generated above); each
    -- data item in that block packs two 12-bit "basic data items" into a
    -- "packed data item".

    -- dat_l is a list of length "length", with each list item being
    -- a 24-bit uint.

    when BIT12'fmt TWO'pking blk_s {
        -- packing density is 2 and basic data item is 12-bits
        !dat_l: list of uint (bits: 24);

        custom_pack_bits() is only {
            -- dut interface items are 24-bits wide
            var pk_temp: uint(bits: 24);
            var temp: uint(bits: 12);
            for i from 1 to length do {
                for j from 1 to 2 do {
                    gen temp keeping {
                        it >= 0 and it <= `MAX_12;
                    };
                    -- item j occupies bits [(j*12)-1 : (j-1)*12]
                    pk_temp[(j*12)-1:(j-1)*12] = temp;
                }; // for j from 1 to...
                dat_l.add(pk_temp);
            }; // for i from 1 to...
        }; // custom_pack_bits()
    }; // when BIT12'fmt ...

    when BIT8'fmt TWO'pking blk_s {
        -- packing density is 2 and basic data item is 8-bits
        !dat_l: list of uint (bits: 16);

        custom_pack_bits() is only {
            var pk_temp: uint(bits: 16);
            var temp: uint(bits: 8);
            for i from 1 to length do {
                for j from 1 to 2 do {
                    gen temp keeping {it >= 0 and it <= `MAX_8;};
                    pk_temp[(j*8)-1:(j-1)*8] = temp;
                }; // for j from 1 to...
                dat_l.add(pk_temp);
            }; // for i from 1 to...
        }; // custom_pack_bits()
    }; // when BIT8'fmt ...

    -- More data stream descriptions.

}; -- end of blk_s description

The data structure blk_s is an abstract description of a data stream capable of completely exercising the data-processing functions performed by the DUT. In fact, the term “data descriptor” may be assigned to such a data structure.
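For readers less familiar with e, the dispatch that “when” sub-typing gives calc_length can be loosely mimicked in Python with subclasses (an analogy only, not e semantics; all names here are illustrative):

```python
import random

BLK_MAX = 256  # standard block length, mirroring the `BLK_MAX define

class Blk:
    """Rough Python analogue of blk_s; the subclass choice plays the
    role of the is_residual_blk "knob" selecting a when sub-type."""
    def calc_length(self) -> int:
        raise NotImplementedError  # "is empty" in the e code

class FullBlk(Blk):      # analogue of the FALSE'is_residual_blk sub-type
    def calc_length(self) -> int:
        return BLK_MAX   # always the full default block length

class ResidualBlk(Blk):  # analogue of the TRUE'is_residual_blk sub-type
    def calc_length(self) -> int:
        return random.randrange(1, BLK_MAX)  # strictly below the default

def make_blk(is_residual_blk: bool) -> Blk:
    """Constructing the right subclass stands in for generation under a
    constraint on is_residual_blk."""
    return ResidualBlk() if is_residual_blk else FullBlk()

assert make_blk(False).calc_length() == BLK_MAX
assert 1 <= make_blk(True).calc_length() < BLK_MAX
```

The key difference is that e's when sub-typing extends one struct in place (an AOP extension), whereas subclassing creates separate types; the analogy only captures the per-sub-type method bodies.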

The takeaways from this example are:

  • The data descriptor (blk_s) captures the data features (fmt, pking, etc.) that are of interest to the data-processing algorithm.
  • The “when” construct can be used to concisely describe the generation process of a “use-case”-based data stream.

As a result of the “grey-boxing” step described earlier, the test plan will have captured the test items that need to be exercised. Using this test plan it is possible to build up a sequence library of data-generation use cases.

The e code snippet below is one such sequence, generating a stream of 10 residual blocks of format 8-bit and packing type TWO.



-- Build a sequence
extend RESIDUE_SEQUENCE blkdata_seq_s {
    num_blks: uint;
    keep num_blks == 10;

    body() @driver.clock is only {
        for i from 1 to num_blks do {
            do blk_s keeping {
                it.is_residual_blk == TRUE;
                it.fmt == BIT8;
                it.pking == TWO;
            }; // do blk_s keeping...
        }; // for i from 1 to...
    }; // body()
}; // extend RESIDUE_SEQUENCE

Once the sequence has been defined, it can be driven into the DUT either randomly or by an explicit call to the sequence driver to send that type of sequence. The test below contains all the code needed to send the above stream of 8-bit, packing-type-TWO blocks.


extend MAIN blkdata_seq_s {
    test_seq: RESIDUE_SEQUENCE blkdata_seq_s;

    body() @driver.clock is only {
        do test_seq;
    }; // body()
}; // extend MAIN
The correlation of a specific sequence to a test-item in the test-plan is easily visible as a result of adopting this approach.

This method of generating “targeted data” provided us with many advantages; in terms of project execution, the approach had the following outcomes:

  • A few regression iterations (a small set of test cases) resulted in high levels of functional and code coverage.
  • Using the controlled data generation, it was possible to exhaustively test any bug fix made around any of the functions, in a very short amount of time.

Note: A “white-box” approach to data-intensive algorithms can provide high levels of code coverage, but it does not provide the connectivity between the algorithm, the test plan, the generated data set and the related test cases. Therefore I would champion the “grey-boxed data-path” approach, along with the “when” AOP construct of the e language, for verification of DUTs used for data-intensive processing.

Ms. Kaberi Banerjee is a senior design and verification engineer based in Silicon Valley, California, and is fluent in e, Verilog, VHDL, SystemVerilog assertions, C, and Tcl.


