The commercial aircraft industry innovates at a much slower pace than the chip design industry -- yet we can find parallels that offer us lessons. The most notable recent example is the Boeing 787 Dreamliner, the first commercial aircraft built primarily from composite materials. This construction improves fuel efficiency, which reduces operating costs for airlines and allows them to fly longer routes.
This material also enables higher cabin pressure and humidity for more passenger comfort (though those factors seem meaningless on a 13-hour flight if the airline decides to squeeze in seats at a 30-inch pitch. Just saying.) The 787 also relies far more heavily on electrical systems than any other aircraft -- a reliance that unfortunately allowed a faulty battery to ground the entire fleet.
An aircraft of this size is obviously an incredibly complex system. In order to get new aircraft to market with less schedule risk, the industry has traditionally taken a very incremental approach. This typically involves re-using previously proven designs and components, sourcing components and parts from external vendors that supply multiple aircraft manufacturers, and making incremental adjustments to fit more passengers or add newly-available technology. This sounds a lot like what our industry calls derivative design. In this case, because the overall system does not radically change, parts and subsystems can be produced more efficiently by external companies that can scale their efforts across the aircraft systems vendors. This sounds a lot like what we call IP or ASSPs, depending on the context.
This incremental, outsource-driven approach reduces risk and speeds time-to-market. But what happens when your customers need more than just a minor incremental update? What happens when you want to create a truly differentiated breakthrough product? Or even a product that defines a new category? In Boeing's case, they tried the same approach, which was really the only approach they knew. This is well analyzed over at the Harvard Business Review blog.
In the end, they were able to get their breakthrough product to market, but with significant delays and issues. When they first began assembling all the components and subsystems, the entire system was far overweight -- have you ever worked on a chip that, once assembled, was way over the area budget?
The HBR post contrasts this approach with that of Apple. Apple famously designs products from the system -- or perhaps the user experience -- point of view, and designs all the parts and much of the electronics themselves so that they work together toward the specifications of that end-product. They still outsource manufacturing and assembly, but they do so with exact "blueprints" that result from the system design effort.
In terms of chip design, they decide what they can source externally and what they need to design themselves in order to deliver what they deem important to the success of the product. For instance, there has been no need for them to design their own flash memory. But with the recent iPhone, they decided to design their own processor in order to make it "insanely great" in terms of performance and efficiency.
This would be like Boeing deciding it needed to design and build its own engines for the 787 in order to further improve the aircraft's weight, range, and fuel efficiency. In that context it doesn't sound like such a crazy idea. Yet aircraft engines are never designed by aircraft manufacturers these days. In fact, the aircraft engine manufacturers -- primarily Rolls-Royce and GE -- have enough market power that Boeing had to design the 787 so that an engine from either vendor could plug in, depending on what the end-customer wanted. Imagine trying to build a breakthrough smart phone with the constraint that the customer could choose which processor to plug in!
Back to chip design -- according to the previously-linked article on Apple's processor efforts, Apple spent upwards of $500M on acquiring the needed capability and then designing and laying out their own processor. It is an extreme example. Fortunately for them, they will likely sell very high volumes of the iPhone 5, as well as the latest iPad, where the closely related A6X is used. We work with many customers that do not have the ability to acquire and retain an army of engineers to design and manually implement an SoC. Yet these companies are able to deliver differentiated hardware from one generation to the next. They cannot achieve this by incrementally tweaking existing designs, or by assembling an SoC from externally-sourced IP. So how do they do it? They use SystemC TLM-driven design and verification.
Take the case of Casio, who needed to re-architect their image processing algorithm while moving to the next process generation. They converted from an RTL-driven methodology to a SystemC-driven one, which allowed them to explore the architectural solution space in days -- an exploration that would have taken several weeks, if it were feasible at all, with RTL. And by moving most functional verification to a higher level of abstraction, they cut the overall verification/debug cycle by 50%. Similarly, at Renesas Micro Systems, two designers completed a brand-new 17M-gate design in eight months using SystemC. We have many examples of very small teams producing new designs with SystemC and high-level synthesis that would have required very large teams and lots of time in an RTL-driven methodology.
But more importantly, in-sourcing hardware design enables engineers to more freely customize it to meet the needs of the end-product. Why else would Apple design a processor when there are established solutions available? There was a great article at EETimes on the need for an image processing IP core. The argument is that so many applications need image processing -- from smart phones to automobiles to a "smart pen" -- that a standard IP core could easily plug in and provide the functionality.
While this is true, each of these products has very different requirements in terms of size, cost, and power consumption, and likely some differences in functionality as well. After all, a pen may only need to recognize movement in two dimensions, while a car needs three. A smart phone needs to recognize details like faces for photography and social networking, or neighborhoods for augmented reality. Notice that all of these functions also require unique interaction with software. SystemC is also well-suited for bridging the gap between hardware and software development. This is where innovation occurs -- across traditionally compartmentalized tasks.
It remains to be seen whether the 787's battery fire issues are also a consequence of outsourcing. It would not be entirely surprising, since the battery may have been designed for a slightly different load or operating environment than the one in which it was ultimately deployed. But it must be very frustrating for Boeing to debug, since they have to interface with another company while the clock is ticking. Then again, even Apple doesn't design their own batteries. That is, until they decide that by designing their own in the context of the system, they can produce a 4G smart phone that can get Jay-Z through an entire day without being power-less.
SystemC-driven design and verification allows companies to look at the make-vs.-buy decision in a new light, by greatly reducing the risk and turnaround time associated with the "make" decision. This enables the decision to be made on business terms, where it should be.