
Where's the Bridge to Cross the Great Divide?

Filed under: Hardware/software co-verification, ISX, Linux, Windows, dwarfdump, Embedded Systems Conference 2009, VMware

At this year's Embedded Systems Conference in San Jose there was a panel titled "Who's Taking Over Whom - Is EDA Moving into Embedded or Embedded into EDA?"

One of the analogies Mike McNamara from Cadence used was that of hardware and software engineers standing on opposite sides of a river, wondering how to build a bridge to the other side while sharks swim in the water between them. It reminded me of a song by the group Point of Grace titled "The Great Divide" (track 8). Part of the chorus says:

There's a bridge to cross the great divide
There's a cross to bridge the great divide

One practical place I see the missing bridge on a daily basis is the difference in computing platforms used in each domain. The majority of embedded software engineers use Windows to edit, compile, and debug software. Most hardware and verification engineers use Linux machines connected by a network. Even if they have a Windows desktop, they use things like VMware, VNC, X servers, and various other software to access Linux machines from the Windows PC on their desk.

If I step back and try to figure out why the world is this way, I can make some educated guesses; maybe readers can help identify additional reasons.

When the embedded software engineer arrives at work for his first day, he sits down and finds a Windows PC on the desk. His assignment is to write embedded software for some box, board, or device, so his first instinct is to get a CD-ROM, put it in the drive, and install the cross-compiler for the target CPU. He also installs his favorite editor and gets right to work. Later he may add a JTAG box to connect to the target system for debugging, or he may do this in a lab on a different Windows PC provided there. I can certainly understand this behavior; the obvious thing to do if you are given a Windows PC is to just use it rather than ask more questions. Almost every cross-compiler used for embedded software today works on both Windows and Linux, but the majority of usage is probably on Windows. It would be great to have somebody in the embedded software tool business confirm this.

On the other hand, the first thing a hardware engineer does when he shows up for work is start asking which Linux machines on the network he should log in to and how to set up the environment to run EDA tools. Most companies have scripts to configure tools, pick among multiple versions of each tool, and run whatever combination of tools engineers need. If there is a Windows PC on his desk, it's assumed to be for writing specs with a word processor, doing e-mail, and web browsing. Most EDA tools used for ASIC and SoC design run only on UNIX-based platforms (I mostly say Linux, but I acknowledge there are others in use, like Solaris and AIX). There are some EDA tools that run on Windows, especially in the FPGA and PC board areas, but nothing that would constitute a complete chip design flow. In the past there have been Windows versions of popular EDA tools for ASIC design, but they never seemed to gain enough momentum to be sustainable.

The history of chip design is based on UNIX: first Apollo workstations, then Sun, then Linux on x86. I still recall the hype around Windows NT and how all of EDA would move to it, since it was a viable computing platform providing SMP support for applications that required higher performance and reliability (compared to Windows 95), but Windows never really gained much traction in the synthesis, simulation, and verification areas.
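
Coming back to those environment-setup scripts for a moment: one common mechanism is Environment Modules, whose modulefiles happen to be written in Tcl. A minimal sketch of a modulefile for one simulator version might look like this; the tool name, version, and paths are hypothetical placeholders, not a real installation:

#%Module1.0
# Hypothetical modulefile selecting one version of one simulator.
# Real sites wrap dozens of tools and versions this way.
set root /tools/cadence/ius82
prepend-path PATH            $root/tools/bin
prepend-path LD_LIBRARY_PATH $root/tools/lib
setenv       CDS_INST_DIR    $root

An engineer would then run something like "module load ius/8.2" to configure a shell for that tool version, and swap versions just as easily.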

The result is a situation I see frequently: software engineers edit and compile on Windows, while hardware design and verification engineers work on Linux. As the SoC design process has evolved, the need to build the bridge between hardware and software raises immediate questions about how to structure the computing environment to connect the two groups. This first became apparent as verification engineers started to run software in a logic simulation environment. This required software compiled on Windows to be moved to Linux, and the executable files to be processed by utilities into files that could be loaded into HDL memory models using things like $readmemh in Verilog. Most companies did this using network drives, so software engineers could mount a drive from their PC and put the $readmemh files on a server that the Linux machines could also see. This worked fine for a while, since the only thing passed back and forth was the memory contents for the software program to run.
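
For readers who have not seen this flow, here is a minimal sketch of the Verilog side; the memory size, the hex file name, and the shared /nfs path are hypothetical:

// Minimal sketch of loading a software image into an HDL memory model.
// The memory dimensions, the hex file name, and the /nfs/shared path
// are hypothetical; the hex file would be generated from the Windows
// build and placed on a network drive the Linux machines can also see.
reg [31:0] code_mem [0:16383];   // 16K words (64 KB) of instruction memory

initial begin
    // One hex word per line; optional @address markers set load offsets
    $readmemh("/nfs/shared/sw_builds/program.hex", code_mem);
end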

As more functionality was needed to actually debug the software using something other than a waveform viewer, things became more difficult. A software executable normally has paths embedded in its debug information that say where to find the source code. To actually debug a software program, the source must also be copied over to an area visible from Linux, but even then the debugger will struggle with the Windows paths. Most debuggers have ways to map the paths and find the new location of the source, but there is always some headache involved in setting it all up.
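
In GDB, to pick one common embedded debugger, the mapping might go in a command file like this; both paths are hypothetical, and the "from" string must match whatever the compiler recorded in the debug information:

# Map the Windows build tree (as recorded in the debug info) to the
# Linux mount point where the source was copied. Paths are hypothetical.
set substitute-path D:/proj/trunk/fw /nfs/shared/proj/trunk/fw
# Alternatively, add the copied source tree to the source search path
directory /nfs/shared/proj/trunk/fw/src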

As new verification tools like ISX have evolved, they put additional stress on the environment. Like a software debugger, ISX uses information from software executables to automatically create a verification environment for constrained random testing of the software and of the combination of hardware and software. To get a feel for what is available, consider dwarfdump, a utility that can extract lots of interesting information from the DWARF debugging information in an executable (similar to what a software debugger does).

 % dwarfdump -l test.elf

will give output about the source code line numbers and how they map to instruction addresses like this:

.debug_line: line number info for a single cu
Source lines (from CU-DIE at .debug_info offset 286):
<source>        [row,column]    <pc>    //<new statement or basic block
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 36,-1]        0x1cc   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 40,-1]        0x1d4   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 42,-1]        0x1e4   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 45,-1]        0x1f0   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 46,-1]        0x204   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 47,-1]        0x24c   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 50,-1]        0x258   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 56,-1]        0x26c   // new statement
/home/jasona/scratch/arm-gdb/ex_arm968/c/sorts.c:       [ 57,-1]        0x284   // new statement

In the case of an executable compiled on Windows, the output may look something like this:

.debug_line: line number info for a single cu
Source lines (from CU-DIE at .debug_info offset 11):
<source>        [row,column]    <pc>    //<new statement or basic block
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c:      [ 54,-1]        0xbfc003c0
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c:      [ 55,-1]        0xbfc0042c      // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c:      [ 56,-1]        0xbfc00430      // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c:      [ 60,-1]        0xbfc00438      // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c:      [ 61,-1]        0xbfc0043c      // new statement
D:\Documents and Settings\jasona\Desktop\proj\trunk\fw/src\setup.c:      [ 62,-1]        0xbfc00448      // new statement 

The source files have DOS paths, with things like D:\ as well as backslashes and spaces in the file names. As with a debugger, the path mapping can be done, but it adds extra complexity. Most Cadence tools use Tcl as the command language. Tcl is great, but extra work must be done to allow for the possibility that the software was compiled on either Windows or Linux. For example, the Tcl file command provides some help, but it is not really platform independent in dealing with path separators. Anyway, this is just one example of the inefficiency caused by the lack of a bridge between the hardware and software engineers.
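
To make that concrete, here is a minimal Tcl sketch of the kind of glue this forces; the drive letter, the directory prefix, and the /nfs/shared mount point are hypothetical:

# Minimal sketch: map a source path recorded by a Windows compiler to
# the place the tree was copied on Linux. The D: drive letter, the
# Desktop prefix, and the /nfs/shared mount point are all made up.
proc map_source_path {path} {
    # Normalize backslashes first; on Linux, Tcl's file commands do
    # not treat a backslash as a path separator
    set p [string map {\\ /} $path]
    # Strip a leading drive letter such as D:
    regsub {^[A-Za-z]:} $p {} p
    # Re-root the Windows build area at the Linux mount point
    regsub {^/Documents and Settings/jasona/Desktop} $p {/nfs/shared} p
    return $p
}

Applied to the setup.c path in the dwarfdump output above, this returns /nfs/shared/proj/trunk/fw/src/setup.c.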

Now, as we enter the era of the Virtual Platform, things are going to get even more difficult. Consider a situation where a project team has developed a SystemC TLM2 model of the hardware system to be used for embedded software development. The software engineer is going to ask to run it on Windows. This may be possible using the OSCI reference simulator, but in most cases there are other dependencies that complicate things (beyond the fact that the OSCI simulator doesn't offer much in the way of debugging, tracing, and visualizing what is happening in the platform). Maybe there is an IP model for a processor that is not available for Windows. Maybe the platform reuses some verification code or an RTL design block from the HDL simulation environment that cannot run on Windows. Maybe the software engineer, who has never logged into Linux before, asks to run the software debugger on Windows and connect over the network to a simulator running on a remote Linux machine shared with the hardware team. Maybe the solution is to give the software engineer a VMware appliance with all of the simulation tools and design data installed and ready to run, so there is nothing to set up and it all runs right on the Windows desktop.

Many of these combinations are possible depending on the tools, models, and project details. There are many IT solutions available to bridge these gaps, but my point is that all of them take time and add complexity. The EDA Consortium (EDAC) does have platform road maps for EDA, and certainly Cadence has a supported platform matrix and road maps, but this most basic difference between software engineers and hardware engineers seems to come up over and over again. As soon as anybody mentions a product targeting embedded software engineers, sirens start whooping about how everything must be on Windows (and about how embedded software engineers don't pay anything for tools). It's clear this era of separate hardware and software teams is coming to an end. It's also clear that software engineers will not be able to manually iterate through the code, compile, run, debug loop in isolation on the desktop forever. I outlined much of this in a previous post about how Virtual Platforms are used for embedded software development.

My preference would be for embedded software to adopt a model more like hardware design and verification, but unfortunately I cannot make this decision. Does your company have this gap between Windows and Linux? How do you address it? What kind of bridges do you build? What other changes or solutions do you see on the horizon?

Jason Andrews

 

 
