ncsim out-of-memory when enabling code coverage

Started by archive on 28 Nov 2006 11:34 AM. 5 replies, 2622 views.
  • Tue, Nov 28 2006 11:34 AM

    ncsim out-of-memory when enabling code coverage

    Hi everybody,
    this is my first post on the forum, and I hope I'm writing in the right section.

    My group is using a Specman-based verification environment to test a small system composed of two IPs. Additionally, we use nc-coverage to collect block, expression and FSM coverage data.

    At the moment our most intensive runs are based on three different tests, each executed 80 times with a random seed. This scenario will become more complex in the near future.

    Multiple test runs are controlled from the ncsim Tcl shell, using some custom procedures. Note that only a single instance of the simulator is running. Basically, at each iteration we perform the following tasks:

    foreach test $test_list {
        # specman: load the test for this batch of runs
        for {set run 0} {$run < $num_runs} {incr run} {
            # ncsim: reset the simulator
            reset
            # specman: reload the environment, then start it with a
            # random seed (test -seed=random)
            # ncsim: set up coverage
            coverage -setup -dut :
            coverage -setup -testname [format "%s_run%s" $test_name $tag]
            coverage -setup -workdir [file join $outdir nc_cov_data]
            coverage -code
            coverage -fsm
            # ncsim: run
            run
            # ncsim: dump coverage data
            coverage -code -dump
            coverage -fsm -dump
            # update counters for the next iteration
        }
    }

    Simulations run fine if code coverage is disabled (i.e. not instrumented during elaboration), but go out of memory when code coverage is enabled. Memory consumption can also be reduced by lowering the sample rate of code coverage (i.e. dumping every 10 test runs instead of every test run).
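    For reference, the reduced-rate variant simply gates the dump inside the loop. A rough sketch (run_count is a hypothetical name for one of our counters):

        # Dump code coverage only on every 10th run instead of every run.
        # run_count is a hypothetical counter kept by our custom procedures.
        if {$run_count % 10 == 0} {
            coverage -code -dump
            coverage -fsm -dump
        }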

    To my understanding, all simulation runs are independent and the simulator should flush its data at each reset/reload. However, memory occupation seems to increase after each test run until the max_size limit is reached.

    I think a possible solution could be to move the loop from the simulator shell to an external system shell, so that each test run starts and shuts down its own ncsim process. However, this would require re-arranging our script system, which is something I'd like to avoid, as we're planning to move to Enterprise Manager in January.
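    An untested sketch of what I have in mind (the snapshot name and the per-run script are made up; the per-run commands from the loop above would move into that script):

        #!/usr/bin/env tclsh
        # One ncsim process per run, so all coverage memory is released
        # when the process exits. "worklib.top" and "run_one.tcl" are
        # hypothetical names; passing the test name and seed to the
        # script is elided here.
        set test_list {testA testB testC}
        foreach test $test_list {
            for {set run 0} {$run < 80} {incr run} {
                exec ncsim worklib.top -input run_one.tcl \
                    >& [format "%s_run%d.log" $test $run]
            }
        }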

    Any suggestions for a quick fix?

    I'm using IUS 5.5 and Specman 5.0.3. The eVCs are compiled into a shared library and linked to ncsim_specman via the SPECMAN_DLIB variable. The IPs are developed in VHDL.

    Thank you very much,
    nico


    Originally posted in cdnusers.org by nko
  • Tue, Nov 28 2006 12:40 PM

    RE: ncsim out-of-memory when enabling code coverage

    nko,

    I am moving this thread to the Functional Verification "e" forum, where the Specman experts hang out.

    Administrator


    Originally posted in cdnusers.org by Administrator
  • Tue, Nov 28 2006 1:17 PM

    RE: ncsim out-of-memory when enabling code coverage

    I have seen something similar when instrumenting FSMs in older versions of IUS. Can you omit coverage -fsm and see what happens to your memory blow-up?


    Originally posted in cdnusers.org by douge
  • Tue, Nov 28 2006 8:08 PM

    RE: ncsim out-of-memory when enabling code coverage

    Hi Nico,

    The reason NCSIM runs out of memory is that you have enabled FSM coverage in the run. You can do two things to compensate for this:

    1. Disable FSM coverage and run the complete regression. Analyse the block, expression and functional coverage numbers at the end of the run. If the numbers are satisfactory, you can sign off your verification with just that.

    2. If you still need state coverage, you can try modelling the states in e and collecting transition coverage on them; see the sketch below. That should give you a good estimate of the state space covered.
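    A minimal sketch of what I mean (all names here are made up; the monitor would sample your RTL state signal and emit the event whenever it changes):

        type fsm_state_t: [IDLE, BUSY, DONE];

        unit fsm_monitor_u {
            cur_state: fsm_state_t;
            event state_changed;        -- emit whenever the sampled state changes
            cover state_changed is {
                item cur_state;
                transition cur_state;   -- covers (previous, current) state pairs
            };
        };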


    Originally posted in cdnusers.org by krish.p@samsung.com
  • Wed, Nov 29 2006 7:37 AM

    RE: ncsim out-of-memory when enabling code coverage

    Hi guys, thank you very much for the suggestions.

    Unfortunately, it seems that my memory blow-up is not related to FSM coverage, as disabling it does not remove the problem.

    I've tried various combinations of code coverage, and the problem appears to be tied to expression coverage:
    - fsm only: OK
    - block only: OK
    - expression only: fail
    - expression + block: fail
    - block + fsm: OK

    I will try to collect data only on one IP at a time.
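    (That is, pointing the -dut scope at a single instance instead of the whole tree; something like the line below, where the instance path is made up:)

        coverage -setup -dut :ip1_inst   ;# hypothetical path of one IP instance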
    Any other idea is welcome.
    Thank you again for your help.

    nico


    Originally posted in cdnusers.org by nko
  • Fri, Dec 1 2006 2:36 AM

    RE: ncsim out-of-memory when enabling code coverage

    Hi guys.
    I found the cause of all my troubles! One of the eVCs instantiated in the environment was extending setup() in the following way:

    setup() is also {
        set_config(memory, gc_threshold, 300M);
        set_config(memory, gc_increment, 50M);
        set_config(memory, max_size, 1000M);
        set_config(memory, absolute_max_size, 1100M);
    };

    It seems that the garbage-collection threshold and increment were too high: with gc_threshold at 300M, garbage collection apparently doesn't kick in until that much memory has already been allocated. Do you think this is a reasonable explanation of my problem?
    I didn't notice these settings until I had to modify the eVC configuration for other reasons; I was pretty sure Specman was running with its default memory settings.
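    For anyone hitting the same thing: lowering (or simply deleting) these overrides should let garbage collection kick in much earlier. Something like this, where the values are only illustrative (not the actual Specman defaults):

        setup() is also {
            -- illustrative lower values; see the Specman docs for the real defaults
            set_config(memory, gc_threshold, 30M);
            set_config(memory, gc_increment, 10M);
        };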

    Lesson learned :-)


    Originally posted in cdnusers.org by nko