I was at DesignCon in Santa Clara today and listened to a keynote by Jonah Alben of NVIDIA on their approach to improving design methodology. He started by pointing out that most companies underinvest in EDA (and he includes NVIDIA in this). Partially it is complacency: that last chip taped out, so we know we can do it again. Partially it is getting used to the quirks of the methodology: we don't want to change. And partially it is a tradeoff, since people working on methodology are not working directly on the product.
His rules for methodology improvement are:
- Promote a "defend your productivity" mentality (engineers should complain more).
- Define a long-term direction (and avoid the "this must be fixed right now" mentality).
- Pick the most important task for near-term investment.
- Every project should do something to improve the methodology (even though in the short term that might not help that project).
- Explicitly allocate resources to methodology (or it won't happen).
- Involve the product engineers (don't let the methodology get too remote from actual development).
- Keep the lights on (don't try to cut over from the existing methodology to the new one in one go; you need to keep the old methodology up and running too).
He then talked a bit about what NVIDIA is doing for GPU-accelerated EDA, in particular for logic simulation. The problem is that in four years you have a design that is four times bigger (two nodes of Moore's law) but only four years of CPU performance improvement, which leads to 3.4X longer simulations.
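The 3.4X figure can be sanity-checked with back-of-the-envelope arithmetic. Assumptions are mine, not from the talk: simulation runtime scales linearly with design size, and the only offset is single-threaded CPU improvement.

```python
# Back-of-the-envelope check of the "3.4X longer simulations" claim.
# Assumption (mine, not from the talk): simulation runtime scales
# linearly with design size.
design_growth = 4.0   # two process nodes => ~4x more transistors
slowdown = 3.4        # the slowdown quoted in the keynote

# The CPU speedup over four years implied by those two numbers:
implied_cpu_gain = design_growth / slowdown
print(f"Implied 4-year CPU speedup: {implied_cpu_gain:.2f}x")  # ~1.18x
```

A modest ~1.18X gain over four years is consistent with the stagnation of single-thread CPU performance, which is exactly why simulation keeps falling behind design size.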
They are working with Synopsys on VCS, running 2 jobs per NVIDIA K10. They get a 5X speedup on the GPU-accelerated portion, which, once the testbench (still running on the CPU) is included, results in an overall speedup of 2.8X. This isn't just experimental; it is in actual use at NVIDIA.
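The gap between 5X on the accelerated portion and 2.8X overall is classic Amdahl's law. A quick sketch (my assumption: the testbench portion runs entirely unaccelerated) shows what fraction of runtime the GPU-accelerated simulation must occupy to produce those numbers:

```python
# Amdahl's law: overall = 1 / ((1 - f) + f / s)
# where f = fraction of runtime that gets accelerated, s = its speedup.
def overall_speedup(f: float, s: float) -> float:
    return 1.0 / ((1.0 - f) + f / s)

# Solve for f given s = 5 and overall = 2.8
# (assumption: the testbench part is not accelerated at all):
s, overall = 5.0, 2.8
f = (1.0 - 1.0 / overall) / (1.0 - 1.0 / s)
print(f"Accelerated fraction of runtime: {f:.0%}")   # ~80%
print(f"Check: {overall_speedup(f, s):.2f}x overall")  # 2.80x
```

In other words, the numbers are consistent with roughly 80% of the wall-clock time being design simulation and the remaining 20% testbench, which caps the achievable overall speedup well below 5X.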
They are also working with Rocketick on the gate-level simulation used as part of ATPG; again with 2 jobs per K10, they get a 17.1X speedup. On one next-generation GPU design they cut test generation from 20.7 days to 16 hours. Again, this is in production use at NVIDIA.
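As a sanity check, 20.7 days down to 16 hours is roughly a 31X end-to-end reduction, noticeably more than the quoted 17.1X per-job speedup; presumably the remainder comes from elsewhere in the ATPG flow or from fanning jobs out across GPUs, though the keynote as reported here doesn't say.

```python
# End-to-end reduction in test-generation time.
before_hours = 20.7 * 24   # 20.7 days expressed in hours
after_hours = 16.0
print(f"{before_hours / after_hours:.1f}x reduction")  # roughly 31x
```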