Ocean distributed processing memory usage

Last post Mon, Sep 12 2011 9:57 PM by TjaartOpperman. 2 replies.
Started by TjaartOpperman 05 Sep 2011 01:44 AM. Topic has 2 replies and 1660 views
  • Mon, Sep 5 2011 1:44 AM

    • TjaartOpperman
    • Top 500 Contributor
    • Joined on Mon, Aug 24 2009
    • Brits, South Africa
    • Posts 16
    • Points 265
    Ocean distributed processing memory usage
    I am using IC6.1.4.500.1 and Spectre 7.1.0.031. I am using the paramRun function in an OCEAN script to do distributed processing on 4 machines in blocking mode. paramRun submits 16 jobs at a time to these 4 machines. When they complete, the script analyses the waveforms in a for loop and dumps the results into a text file: it reads the data with the selectResults function, extracts the waveforms with the famValue function, and writes only around 100 bytes or so per job. The loop then changes a few variables with desVar and paramAnalysis statements and submits the next batch of 16 jobs, again in blocking mode.

    The spectre.out file shows that each job uses 14.7 MB of virtual memory. After a total of 273 jobs have been submitted, the host machine runs out of memory in 32-bit mode (273 × 14.7 MB ≈ 4013 MB). I was expecting the script to use a maximum of 16 × 14.7 MB of virtual memory at a time, since the variables used to store the waveforms are overwritten.

    Is there perhaps a way to free the simulation results from virtual memory before submitting a new batch of distributed jobs? I am aware that 64-bit mode would provide access to more memory, but in the long run such a solution will not work for me.
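    For reference, a minimal sketch of the loop structure described above. The netlist path, variable names, sweep values, and the averaging post-processing are placeholders for illustration, not the poster's actual script:

    ```
    ; Sketch of the batch loop described in the post (assumed names and values).
    simulator( 'spectre )
    design( "./netlist" )
    resultsDir( "./results" )

    out = outfile( "./summary.txt" "w" )
    for( i 1 17                                ; 17 batches of 16 jobs ≈ 273 jobs
      desVar( "vbias" 0.1*i )                  ; change a few variables per batch
      paramAnalysis( "temp" ?start -45 ?stop 105 ?step 10 )  ; 16 parametric points
      paramRun()                               ; blocking distributed run
      selectResults( 'tran )
      wave = getData( "/out" )                 ; a family of 16 waveforms
      foreach( tempVal '(-45 -35 -25 -15 -5 5 15 25 35 45 55 65 75 85 95 105)
        w = famValue( wave tempVal )           ; extract one member of the family
        fprintf( out "%g\n" average( w ) )     ; ~100 bytes written per job
      )
    )
    close( out )
    ```

    Each pass through the loop reassigns wave and w, which is why one would expect the previous batch's waveform memory to become reclaimable.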
    • Post Points: 20
  • Mon, Sep 5 2011 10:09 AM

    Re: Ocean distributed processing memory usage

    It's unlikely that the amount of memory reported by Spectre in spectre.out is relevant here. The limiting factor is more likely the memory taken by the waveforms that are read into memory after each simulation and used to create your "100 byte" summary of the run.

    Most likely the garbage collection of the waveform point data is not being triggered, so the memory used by the waveforms is not reused until too late. This happens because what is monitored is the waveform and vector objects, not the point data itself: the memory for the points is reclaimed, but reclamation is only triggered once all the available waveform/vector slots have been used up. This has been significantly improved in IC615, so if you can move to IC615, that would be a good solution.

    Unfortunately there's no easy (general purpose) solution in IC614.

    Regards,

    Andrew.

    • Post Points: 20
  • Mon, Sep 12 2011 9:57 PM

    • TjaartOpperman
    • Top 500 Contributor
    • Joined on Mon, Aug 24 2009
    • Brits, South Africa
    • Posts 16
    • Points 265
    Re: Ocean distributed processing memory usage
    Andrew,

    As a workaround in IC614, my script now builds another SKILL script with all the variables set up for a distributed parametric run. It then executes that script from the Virtuoso session by spawning another OCEAN session in the shell using the sh() command. The memory gets freed this way, but designing the script this way is a very tedious process, and the result looks quite clumsy and is difficult to read. But it works.

    Regards,
    Tjaart
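    For anyone hitting the same limit, a hedged sketch of that workaround. The file names, variables, and the ocean command-line invocation are assumptions; check them against your installation:

    ```
    ; Sketch of the workaround: write a standalone OCEAN script per batch and
    ; run it in a fresh process, so its waveform memory is released on exit.
    for( i 1 17
      script = outfile( "./batch.ocn" "w" )
      fprintf( script "simulator( 'spectre )\n" )
      fprintf( script "design( \"./netlist\" )\n" )
      fprintf( script "desVar( \"vbias\" %g )\n" 0.1*i )
      fprintf( script "paramAnalysis( \"temp\" ?start -45 ?stop 105 ?step 10 )\n" )
      fprintf( script "paramRun()\n" )
      ; ...plus the selectResults/famValue post-processing, appending its
      ; summary lines to a shared results file...
      close( script )
      sh( "ocean -nograph < ./batch.ocn" )   ; runs the child OCEAN session
    )
    ```

    The key point is that the waveform data lives only in the child process, so the parent session's memory footprint stays flat across batches.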
    • Post Points: 5