Predict trouble spots to shorten timing closure

Posted: 09 Nov 2015

Keywords: FinFET, physical design, IP, SoC, EDA

Whether Moore's Law is still valid or not, it is beyond question that semiconductor design is becoming ever more complex. In the race to adopt 16/14nm FinFET designs, we sometimes fail to see the ramifications of physical design constraints on logic design. In particular, most of the SoC wires that connect the various IPs and sub-system blocks are contained in the interconnect IP. In modern designs, choices made in the interconnect IP design affect not only the SoC architecture but also the physical design.

The need for physically aware IP
In an industry where time-to-market is of critical importance, our chip designs are becoming so complex that they are extremely difficult to implement in the physical design stage. If only we could predict trouble spots beforehand, perhaps we could avoid the pitfalls in the back end of the process. To do this, we need to make interconnect IP design physically aware in order to address the complexity and cost of sub-28nm SoC projects. This can be done by applying EDA techniques that leverage the network-on-chip interconnect IP RTL already used in many of the world's highest-volume advanced SoC designs.

One of the challenges is to easily allow architects to visualise the physical design implications of their architecture choices. SoC architectures that do not take into account physical considerations can lead to serious problems. One relatively recent example of failing to consider physical design implications was a complex gaming chip with an architecture that was so difficult to route that it forced the rework of the SoC topology. This rework delayed the project so much that the chip missed a major market window and incurred a $200M loss.

Another challenge is that manual pipeline insertion is becoming too lengthy and complex, leading to timing closure cycles of 45 to 90 days at 28nm and below. Even worse, the interconnect changes 8 to 10 times per SoC project, so the manual pipeline insertion scheme has to be over-engineered so that the timing closure is not invalidated each time the interconnect IP changes. Over-engineering leads to excess area, which adds cost, and to additional latency cycles, which lower performance. This problem is only getting worse on 16/14nm FinFET SoC projects and will become even more difficult at the 10nm and 7nm SoC generations.

Finally, physical layout teams are handed interconnect IP that is well verified from a logic point of view but rarely well verified from a timing perspective. This leads to additional place-and-route cycles, which add to SoC delivery schedules and R&D cost.

Separate the interconnect from other SoC IP
SoCs are getting so complex that it is beneficial to separate the interconnect IP from the rest of the SoC at the physical level in order to manage this growing complexity.

As an example, a network-on-chip (NoC) instance IP and a floor plan can be used to leverage physical information to automatically add pipelines in order to quickly achieve timing closure. Currently, pipelines are added manually and then the design is run through place and route to see if timing will converge. A long timing convergence loop can take a great deal of time and effort. This problem gets much worse at 16/14nm and even worse at 10nm.
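As a rough sketch of what such an automated flow might do for one NoC link, the Python fragment below estimates the route length from the floor plan placement of the link's endpoints, works out how many pipeline registers are needed so that no segment exceeds a per-cycle reach, and spreads them evenly along the route as placement hints. The coordinates, the 2.2mm reach and the function names are assumptions for illustration only, not the API of any particular NoC generator.

import math

# Floorplan-aware pipeline insertion for a single NoC link (illustrative sketch).

def route_length_mm(src, dst):
    # Manhattan distance between the two endpoint placements, in mm.
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def pipeline_stages(length_mm, reach_mm):
    # Registers needed so that no segment is longer than the per-cycle reach.
    return max(math.ceil(length_mm / reach_mm) - 1, 0)

def stage_hints(src, dst, n_stages):
    # Spread the stages evenly between the endpoints as crude placement hints.
    return [(src[0] + (dst[0] - src[0]) * k / (n_stages + 1),
             src[1] + (dst[1] - src[1]) * k / (n_stages + 1))
            for k in range(1, n_stages + 1)]

# Example: a hypothetical link between a last-level cache and a CPU cluster.
llc, cpu = (1.0, 1.0), (5.5, 4.0)        # endpoint placements in mm (assumed)
reach = 2.2                              # mm per clock cycle at the target frequency (assumed)
length = route_length_mm(llc, cpu)       # 7.5mm
stages = pipeline_stages(length, reach)  # ceil(7.5 / 2.2) - 1 = 3
print(length, stages, stage_hints(llc, cpu, stages))

Because the stage count and rough positions come straight from the floor plan, the layout team gets a physically meaningful starting point rather than a guess that has to survive several place-and-route iterations.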

It would also be useful for the layout team to see what the architect intended from a physical perspective, as a starting point for their layout optimisations.

Timing closure and long paths
There is one rule for the SoC interconnect: "Data has to get there on time." It has been a long while since an entire die could be crossed in one clock cycle, which means transport delays force designers to insert pipelines in order to close timing. The larger the chip die, the higher the target frequency, and the smaller the process geometry, the more pipeline stages are needed and the harder they are to insert.

As an example, consider a relatively modest-sized design in the TSMC 28nm HPM process. At our target frequency of 600MHz, this process requires a pipeline stage every 2.2mm. There is a long wire between the last-level cache and the CPU, so for this design three pipeline stages have to be added.
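Making the arithmetic explicit (the actual wire length is not stated in the article, so a roughly 7mm route is assumed here purely for illustration):

import math

reach_mm = 2.2   # pipeline reach per clock cycle at 600MHz in TSMC 28nm HPM (from the text)
route_mm = 7.0   # assumed length of the last-level-cache-to-CPU wire (not given in the article)

# Four segments of 2.2mm or less are needed, hence three pipeline registers between them.
stages = math.ceil(route_mm / reach_mm) - 1
print(stages)    # 3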

Figure: Growing chip sizes mean that signals cannot reach their destination in one clock cycle. Designers have resorted to manually inserting pipeline stages to achieve the performance that they need, but this process has contributed to lengthy design cycles. Using back-end information in the front-end digital design process allows automatic pipeline insertion to accelerate and improve quality of timing convergence.

