
Now online at www.chipdesignmag.com: "Trends" reports, iDesign, IP Designer, FPGA Developer, Blogs, Resource Catalog, Back Issues, ...

www.chipdesignmag.com April 2008

Lead Story:

BURNING ISSUES IN TESTBENCH AUTOMATION

Also in this issue:

MEMORY CONTROLLERS – BUILD OR BUY

65-NM DESIGN – GETTING IT RIGHT

FLEXIBLE RADIO: AN SDR ALTERNATIVE?

Includes The SPIRIT Consortium Guide

Affiliate Sponsors: The SPIRIT Consortium


EVER GET THE FEELING RTL DESIGN JUST ISN'T CUTTING IT ANYMORE?

Electronic System Level Design. What if you had the option of producing and verifying your complex designs up to 100 times faster while maintaining exceptional quality levels? You'd take it, right? Of course you would, because Mentor Graphics® has perfected ESL methodology. ESL elevates design to a higher level of abstraction so you can accomplish more every day, and move your complex new products to market on time, every time. Use the right tool for the job. To learn more, go to www.mentor.com/AVM or call us at 800.547.3000.


More gates, more speed, more versatility, and of course, less cost — it's what you expect from The Dini Group. This new board features 16 Xilinx Virtex-5 LX 330s (-1 or -2 speed grades). With over 32 million ASIC gates (not counting memories or multipliers), the DN9000K10 is the biggest, fastest ASIC prototyping platform in production. User-friendly features include:

• 9 clock networks, balanced and distributed to all FPGAs
• 6 DDR2 SODIMM modules with options for FLASH, SSRAM, QDR SSRAM, Mictor(s), DDR3, RLDRAM, and other memories
• USB and multiple RS-232 ports for user interface
• 1,500 I/O pins for the most demanding expansion requirements

Software for board operation includes reference designs to get you up and running quickly. The board is available "off-the-shelf" with lead times of 2-3 weeks. For more gates and more speed, call The Dini Group and get your product to market faster.

www.dinigroup.com • 1010 Pearl Street, Suite 6 • La Jolla, CA 92037 • (858) 454-3419 • e-mail: sales@dinigroup.com


IN THIS ISSUE

Cover Story

Burning Issues in Testbench Automation
With intelligent testbench automation, it's possible to bring verification costs back in line.
by Mark Olen and Matt Ballance, Mentor Graphics

Features

Keeping FPGAs on Time
by Andrew Haines, Synplicity

Focus Report: What Makes Chips Different?
IBM, Samsung team up to differentiate chips with embedded software modules.
by Ed Sperling

Special: Synplicity's Confirma Seminar Series

Make an Informed Build or Buy Decision for Memory-Controller Solutions
When it's time to make up your mind about memory, don't forget to measure all of the risks and rewards.
by Raj Mahajan and Raghavan Menon, Virage Logic

Get Designs Right at 65 nm and Beyond
Up-front planning and more attention to detail can help make the transition to new process nodes less daunting.
by Prasad Subramaniam, eSilicon

Cellular Standards May Demand Alternatives to SDR
Is there a better way to meet growing consumer demands than relying on software-defined radios?
by Duncan Pilgrim, Sequoia Communications

READ ONLINE: www.chipdesignmag.com

Special Section — The SPIRIT Consortium Guide

30 A Message from SPIRIT
33 Doulos
34 Evatronix
35 IP Extreme
36 Magillem
37 Mentor Graphics
38 Scarlet Code
39 Synplicity

Departments

4 Chip Design Online
6 Editor's Note — Synopsys and the Power of Programmability
by John Blyler
8 In the News — People in the News
by Jim Kobylecky
9 Max's Chips & Dips — Silicon Canvas Blows My Socks Off
by Clive "Max" Maxfield
14 BlogSphere — Taken for Granted: The Persistence of ESL Synthesis
by Grant Martin
16 Top View — Performance and Cost Now Drive Emulation
by Luc Burgun, EVE
28 Dot.org — The Second Commandment for Effective Standards
by Karen Bartleson, Synopsys
40 No Respins — Compilation Can Play a Big Role in System Performance
by Clyde Stubbs, HI-TECH Software

www.chipdesignmag.com

Publisher & Sales Director
Karen Popp (415) 255-0390 x19
kpopp@extensionmedia.com

Editorial Staff

Editor-in-Chief
John Blyler (503) 614-1082
jblyler@extensionmedia.com

Contributing Editor
Ed Sperling

Managing Editor
Jim Kobylecky

Coordinating Regional Editor
Pallab Chatterjee

Associate Editor — China
Jane Lin-Li

Chief Analyst
Erach Desai

Business Editor
Geoffrey James

Executive Editor — iDesign
Clive "Max" Maxfield

Contributing Editors
Cheryl Ajluni, Dave Bursky, Nicole Freeman, Craig Szydlowski

Editorial Board
Tom Anderson, Product Marketing Director, Cadence • Cheryl Ajluni, Technical Consultant, Custom Media Solutions • Karen Bartleson, Standards Program Manager, Synopsys • Chuck Byers, Director of Communications, TSMC • Rich Faris, Marketing Director, Real Intent • Kathryn Kranen, CEO, Jasper Design Automation • Barry Marsh, Vice President of Marketing, Actel • Tom Moxon, Consultant, Moxon Design • Walter Ng, Senior Director, Design Services, Chartered Semiconductor • Scott Sandler, CEO, Novas Software • Steve Schulz, President, Si2 • Adam Traidman, Chip Estimate

Creative/Production

Production Director
Stephanie Rohrer (415) 255-0390 x13

Graphic Designers
Keith Kelly & Brandon Solem

Traffic Coordinator
Liz Matos (415) 255-0390 x20

Sales Staff

Advertising and Reprints
Karen Popp (415) 255-0390 x19
kpopp@extensionmedia.com

Audience Development / Circulation
Jenna Johnson • jjohnson@extensionmedia.com

President
Vince Ridley (415) 255-0390 x18
vridley@extensionmedia.com

Vice President, Marketing & Product Development
Karen Murray • kmurray@extensionmedia.com

Vice President, Business Development
Melissa Sterling • msterling@extensionmedia.com

Vice President, Sales, Embedded Systems Media Group
Clair Bright • cbright@extensionmedia.com

To Subscribe or Update Your Profile
www.chipdesignmag.com/subscribe

Special Thanks to Our Sponsors

Chip Design is sent free to design engineers and engineering managers in the U.S. and Canada developing advanced semiconductor designs. The price for international subscriptions is $125; US is $95 and Canada is $105.

Chip Design is published bimonthly by Extension Media LLC, 1786 18th Street, San Francisco, CA 94107. Copyright © 2008 by Extension Media LLC. All rights reserved. Printed in the U.S.

April 2008 Chip Design • www.chipdesignmag.com




www.chipdesignmag.com

CHIP DESIGN ONLINE

Blogs: www.chipdesignmag.com/blogs

JB'S CIRCUIT
With VC investment down and acquisitions up, we're taking care of "now." But what will tomorrow be like?

TAKEN FOR GRANTED
Grant Martin takes a closer look at our use of embedded processors and ESL where, he points out, "hope springs eternal."

FAHRVERGNÜGEN
Juergen Jaeger celebrates the skills and pleasures of high-performance verification. Use the right tools, he proclaims, and enjoy the journey.

RICK'S WIRELESS NETWORKING
The efficiency of a tool, Rick Denker explains, depends on how well it affects our end goals. Rick's blog is a model of efficiency.

PALLAB'S PLACE
Pallab Chatterjee takes a close look at the SNUG event in San Jose, reporting on talks and new developments.

TUNING IN TO JIM
Jim Lipman continues his Everyman approach to fine-tuning the mind-bending world of technology and invention.

KOBY'S KAOS
Editor Jim K finds that most future mistakes begin in the past. If you forget the human element, hard science can produce random events.

iDesign: www.chipdesignmag.com/idesign

METHODOLOGY AND FLOW CHALLENGES IN SYSTEM-LEVEL CO-DESIGN OF MULTI-DIE PACKAGED SYSTEMS
Today's designs require new, robust design flows with concurrent capabilities that bridge the communication gap.

COORDINATING FROM SILICON TO PACKAGE
A new IC/package/board co-design and co-verification methodology needs to be implemented.

IF YOU CAN'T MEASURE PROGRESS AGAINST YOUR PLAN, YOU HAVE NO PLAN!
There are verification technologies available today that use markup languages to capture functional verification plans, enabling measurement of completion throughout a project.
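The measurement idea above can be made concrete with a small sketch. The plan schema below is invented purely for illustration (commercial tools define their own formats); it simply shows how a machine-readable plan lets a tool compute completion automatically:

```python
# Hypothetical verification plan captured in markup; schema invented for
# illustration only. Each goal records hits seen so far versus its target.
import xml.etree.ElementTree as ET

PLAN = """
<plan>
  <goal name="reset_sequence" hits="12" target="12"/>
  <goal name="fifo_overflow" hits="3" target="10"/>
  <goal name="bus_burst" hits="0" target="5"/>
</plan>
"""

def completion(plan_xml):
    """Percentage of coverage goals whose hit count has reached its target."""
    goals = ET.fromstring(plan_xml).findall("goal")
    done = sum(int(g.get("hits")) >= int(g.get("target")) for g in goals)
    return 100.0 * done / len(goals)

print(f"plan completion: {completion(PLAN):.0f}%")  # one of three goals met
```

Because the plan is data rather than prose, the same file can be re-measured after every regression run to track progress throughout the project.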

And always look for Max's latest Chips and Dips column!

Max's Chips and Dips: Silicon Canvas Blows My Socks Off
By Clive (Max) Maxfield
Max is still trying to wrap his brain around all of the incredible things the company's working on in the field of automating analog layout.

VISIT www.chipdesignmag.com

Recent Poll Results: How will the chip design industry fare in 2008?
Grow – 52%
Level Off – 29%
Shrink – 19%
To take part in our current poll, visit www.chipdesignmag.com

e-Newsletters
www.chipdesignmag.com/enewsletters

Stay ahead of the pack with the latest industry viewpoints and news from our e-newsletters. Here's a sample of recent issues:

CHIP DESIGNER
• The Case for Heterogeneous Multi-Core SoCs
• Atom Launches for Low-Power Processors
• Synopsys to Acquire Synplicity

IP DESIGNER & INTEGRATOR
• Algorithmic Synthesis Boosts Platform-Based SoC Design and Validation
• What You Need to Know about PCI Express
• Field Programmable SoCs Require IP-centric Solutions

NEW! – COMING TO A SCREEN NEAR YOU: CHIP/PACKAGE/BOARD – AN IDESIGN NEWSLETTER – Edited by Clive (Max) Maxfield

CHIP DESIGN TRENDS (CDT) BIANNUAL REPORT – 2007

Basis of Report
• Over 44,000 unique, worldwide and regional pre-silicon design investigations
• Analysis and forecasting of key chip design metrics, including power, die size, clock speed, analog vs. digital IP, metal layers, technology nodes, memory usage and much more.

Available Now!
Report ID: TB10235
Price: $1,950 (One Issue); $2,950 (Two Issues)
To learn more, contact Melissa Sterling at 415.970.1910 or msterling@extensionmedia.com


SENSORS EXPO & CONFERENCE

Education:

Featuring Nine Research-Based Technical Tracks:
• Novel Approaches to Measurement & Detection
• Wireless Sensing
• Systems & Embedded Intelligence
• Power Management, Battery Technology & Low-Power Sensing
• Sensor Standards
• Emerging Technologies & Applications

Including Three ALL-NEW Tracks Examining:
• Transportation Infrastructure & Structural Health Monitoring
• Machine Health & Predictive Maintenance
• "Green" Sensing Technologies & Applications

And Three Optional Pre-Conference Symposia:
• Digital Data Acquisition & Analysis Symposium
• Nanotechnology & MEMS/MST/Micromachines Symposium
• Energy Harvesting for Powering Sensor Applications Symposium

June 9-11, 2008
Donald E. Stephens Convention Center
Rosemont, Illinois
www.sensorsexpo.com

Advances in Measurement, Monitoring, Detection and Control
New Approaches • New Technologies • New Applications • New Ideas

OPENING KEYNOTE
James McLurkin
MIT Roboticist, Inventor, Researcher, Teacher
"Engineering Creativity": Exercises for the Right Brain

Expo Hall – Over 200 key suppliers showcasing their products and solutions at Sensors Expo.

Register Today & Take Advantage of the Early Bird Rates, or Sign Up Now for Your FREE Expo Hall Pass! Register at www.sensorsexpo.com or call 877-232-0132. Please use your source code: 327M

Produced By: Official Publication: Gold Sponsors: Silver Sponsor: Media Sponsor:


EDITOR'S NOTE
By John Blyler

Synopsys and the Power of Programmability

Many have commented on Synopsys's intention to acquire Synplicity – a high-end FPGA synthesis and hardware-verification prototyping company. Much has also been made of the significance of the latter capability, i.e., the importance of prototyping in any verification methodology. But I believe the FPGA synthesis portion – that is, the programmability of reconfigurable hardware – may ultimately prove to be just as important. Let's consider what both capabilities have to offer.

The capability to prototype complex ASIC/ASSP designs in FPGA hardware will complement Synopsys's existing product lines, most noticeably its virtual-prototyping business. You may recall that roughly two years ago Synopsys expanded into the electronic-system-level (ESL) market through the acquisition of Virtio – a company that created virtual platforms for embedded-software development. Combining high-level virtual prototyping with hardware prototyping means that Synopsys now covers both the software and hardware sides of system-level chip design.

Still, the acquisition is not without risks. FPGA prototyping is still a relatively low-end technique for verification – though the hardware-prototyping market is definitely a growth area. Synopsys is well known for high-end tools. Will it be able to attract the developers who use FPGA prototyping? The acquisition of Synplicity will help attract many of these developers to Synopsys, but will the integration of the low-end and high-end technologies be successful? Synopsys has some system-level experience, but very little board-level capability. These are challenges that must be overcome.

Now, let's look at the timing of this intended acquisition. Why now? George Zafiropoulos, vice president of solutions marketing at Synopsys, explains that the removal of past product overlaps contributed to the current acquisition. "A few years back, we (Synopsys) had a product called FPGA Compiler that competed with Synplicity. We exited that market because we felt that our primary target market was the high-end ASIC space and custom-IC space."

Conversely, George notes that in the recent past Synplicity focused on the structured-ASIC (S-ASIC) market. This focus – at least in some measure – offered competition to Synopsys. Synplicity has since exited the S-ASIC market, just as Synopsys left the FPGA world. "We went from being competitors to non-competitors," said George. "The door was now open for both companies to collaborate, which we did about a year ago with a joint marketing and development program in verification."

All of which makes good business sense. Still, I can't help but wonder whether a presence in the "programmability" market won't prove as important as the capability to perform hardware prototyping. For example, Intel's recent announcement that it is developing reconfigurable hardware capabilities – in the area of digital multi-radio SoCs – highlights the need for programmability in today's SoCs. Either way, the Synopsys acquisition of Synplicity seems like a move in the right direction. ◆


EVE AD


IN THE NEWS
People in the News

J. MARK GOODE BECOMES PRESIDENT AND CEO OF VIASIC
ViASIC Inc. has promoted J. Mark Goode to the position of president and chief executive officer. Goode holds a bachelor's degree in music from the University of Miami, a master's degree in computer science from the New Jersey Institute of Technology, and a master's degree in business administration from the Wharton School at the University of Pennsylvania.

DR. ILIA OVSIANNIKOV JOINS MAGNACHIP AS VICE PRESIDENT OF SOC ENGINEERING
MagnaChip Semiconductor Ltd. has named Dr. Ilia Ovsiannikov vice president of SoC engineering in the Imaging Solutions Division. Ovsiannikov received a BS cum laude in computer engineering from MIREA Moscow State Technical University, as well as an MS and PhD in computer science from the University of Southern California.

MAURY AUSTIN SELECTED AS MIPS CFO
MIPS Technologies Inc. has appointed Maury Austin as chief financial officer. Austin brings more than 25 years of financial experience to the company. Most recently, he served as SVP and CFO of Portal Software Inc. Austin holds a bachelor of science in finance from the University of California, Berkeley and an MBA from Santa Clara University.

GERHARD ANGST RECEIVES EDA ACHIEVEMENT AWARD
Concept Engineering has announced that Gerhard Angst, its founder and CEO — together with the company's entire engineering team — has received the EDA Achievement Award 2007 from edacentrum (www.edacentrum.de), an independent association. This award is given for results in research or development and for improvements within the Ekompass Project.

JIM HOGAN AND ED CHARRIER TO ADVISE PYXIS TECHNOLOGY
Pyxis Technology Inc. announced that EDA veteran Jim Hogan and KLA-Tencor vice president and general manager Ed Charrier have joined as advisors to the company's board of directors. Jim Hogan is general manager of Vista Ventures Partners. Ed Charrier is vice president and general manager of the Process Control Information Division at KLA-Tencor. ◆

KEEPING FPGAS ON TIME
By Andy Haines

One of the big reasons companies love FPGAs is that they provide a great way to get an electronic product to market quickly and without all the hassles of ASIC design. Early market entry and being first with an application have proven over and over again to be critical for product success. With FPGAs now containing millions of equivalent ASIC gates, high-density RAM, DSP functions, embedded processors, and high-speed I/O, finding a way to take full advantage of all these resources without re-inventing the wheel, while maintaining the early-to-market advantage offered by programmable logic, is becoming an increasingly tough challenge.

As ASIC designers have done for years, FPGA designers are now turning to IP as a solution. In fact, almost half of the FPGAs designed today contain some sort of IP. IP may come from an FPGA vendor or a third party, or be developed in-house from previous designs; nevertheless, it is quickly becoming standard practice for higher-end FPGA design. With this migration of IP into FPGAs come new methodologies and design-tool requirements. A key problem here is that as the complexity of the IP itself has increased, it has become critical for EDA tools, such as those for simulation and synthesis, to understand timing within the IP. Synthesis, for example, needs good timing estimates of paths within the IP in order to ensure that optimization works on the true critical paths within the design and that logic around the IP is also optimized. This is not so much an issue for in-house IP, but as more designers turn to third-party IP, which is typically encrypted, it can be a significant problem.
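To see why, consider a toy longest-path timing sketch (illustrative only — the cells, delays, and graph are made up, and this is not any vendor's algorithm). Synthesis optimizes the slowest path through the design; if an encrypted IP block is a black box, the tool must rely on an estimated delay for it, and a bad estimate points optimization at the wrong logic:

```python
# Toy static-timing model: cells with delays, wired as a DAG.
# All names and numbers below are invented for illustration.
DELAYS = {"in_reg": 0.2, "adder": 1.1, "ip_core": 3.0, "mux": 0.4, "out_reg": 0.2}

NETLIST = {  # cell -> downstream cells
    "in_reg": ["adder", "ip_core"],
    "adder": ["mux"],
    "ip_core": ["mux"],
    "mux": ["out_reg"],
    "out_reg": [],
}

def path_delay(cell, delays):
    """Longest delay (ns) from `cell` through to any output."""
    tails = [path_delay(d, delays) for d in NETLIST[cell]]
    return delays[cell] + max(tails, default=0.0)

def critical_branch(delays):
    """Which fan-out of the input register carries the critical path?"""
    return max(NETLIST["in_reg"], key=lambda c: path_delay(c, delays))

true_delays = DELAYS
optimistic = dict(DELAYS, ip_core=1.0)  # a bad guess for the black-box IP

print(critical_branch(true_delays))  # ip_core: the real critical path
print(critical_branch(optimistic))   # adder: tool optimizes the wrong path
```

With the true 3.0-ns delay the critical path runs through the IP, but with an optimistic 1.0-ns estimate the tool spends its effort on the adder path instead — which is exactly the hazard the article describes for encrypted third-party IP.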

A second difficulty for FPGA designers comes in simply locating the right IP, and then determining technically whether it will work in their system. Working through the legal agreements permitting access to protected IP can alone be time-consuming, even when evaluating only one or two alternatives.

What's needed is an easy-to-use source that lets systems designers targeting FPGAs browse a wide range of IP offerings, download them, and then evaluate them in their system. This would allow designers to more easily determine the best alternative for their particular design. At the same time, IP providers need to be assured that their IP is protected during such evaluations. EDA and IP providers must work together to offer FPGA designers a solution to their advanced IP needs. ◆


MAX'S CHIPS & DIPS
By Clive (Max) Maxfield

Silicon Canvas Blows My Socks Off

Recently, I was chatting with the folks at Silicon Canvas, and I'm still trying to wrap my brain around all of the incredible things they've been working on in the field of automating analog layout.

Good grief, things are certainly moving fast in the analog EDA space (where no one can hear you scream). Recently, I was chatting with the folks at Silicon Canvas, and I'm still trying to wrap my brain around all of the incredible things they've been working on.

Because my background is that of a digital logic designer, I have a tendency to focus my attention on digital design tools and to neglect what's happening on the analog side of the fence. I need to stop doing this, because I'm coming to realize that there's a lot of really exciting stuff going on in the analog domain.

Like many people, for example, I tend to think that analog designs are still predominantly created painstakingly by hand and that very little automation is available to analog designers (with the exception of stuff like PCells, of course). Similarly, like a lot of folks, I've considered the Laker product from Silicon Canvas only in the context of an analog layout editor – until now I didn't realize just how much automation this little beauty provides.

Truth to tell, it can be a little tricky to fully grasp how the Silicon Canvas product family has evolved over time. Figure 1 provides a high-level historical perspective of the way in which the Silicon Canvas products, starting with the initial Laker layout editor, have grown into a full-featured, technology-leading custom IC design flow.

Figure 1: A historical perspective of Laker product introductions.

Note that Laker OA is not a specific product per se; instead, it represents an OpenAccess capability that is of special interest in the context of the Interoperable PCell Library (IPL) initiative.

Now, let's see if I can do this justice without confusing the issue too much. The original Laker-L2 was a rules-driven analog layout tool that provided some sophisticated real-time DRC/LVS capabilities. Of particular interest was its concept of Mcells ("Magic Cells"), in which a simple forms-driven interface allows you to quickly and easily create analog cells without requiring any programs or scripts.

The next major advance was Laker-L3, which supported schematic entry (for new designs), automatic schematic generation (from legacy design netlists), and automated schematic-driven layout capabilities that are fully integrated with Laker-L2's powerful rules-driven technology. Laker-L3 includes a custom floor planner that assigns pin locations automatically, provides congestion-map information, and mixes soft and hard instances to minimize the gap between top-down planning and bottom-up layout realization. The integrated Stick Diagram Compiler provides a high level of abstraction to facilitate efficient transistor floorplanning, including gate merging, swapping, and splitting. Meanwhile... please visit www.chipdesignmag.com to read more online! ◆

Clive (Max) Maxfield is the author of Bebop to the Boolean Boogie (An Unconventional Guide to Electronics) and The Design Warrior's Guide to FPGAs (Devices, Tools, and Flows). Max is also the co-author of How Computers Do Math, featuring the pedagogical and phantasmagorical virtual DIY Calculator (www.DIYCalculator.com).


Where the electronics design industry meets…

EDA tools, design methods and solutions for ICs, ASICs, FPGAs and SoCs.

Electronic design has never been easy. And with today's growing technical challenges, customer expectations, and competitive pressures, it's not getting any easier.

More than 10,000 members of the electronics design community will convene in Anaheim in search of the latest technologies and ideas, and find personal exchanges that spark creativity and drive bold innovations.

Their destination will be the 45th Design Automation Conference (DAC) at the Anaheim Convention Center, June 8-13, 2008, in Anaheim, California, USA.

So make your plans now. Come to the place where design meets your critical challenges. Come to DAC.

Only DAC offers:
• A robust technical program covering the latest research developments and trends, ranging from management practices to products, methodologies and technologies for the design of SoCs, FPGAs, ASICs, digital ICs and more.
• Worldwide attendance from developers, designers, researchers, academics, managers and engineers from leading electronics companies and universities.
• A vibrant exhibition with over 250 companies displaying products, technologies and services for the electronic design industry.
• A full roster of panels and presentations, once again planned for the DAC Pavilion on the Exhibit Floor, open to all attendees.

Call for Exhibitors:
DAC is actively expanding its exhibitor base to encompass the entire design ecosystem, from embedded software and system-level design tools, IP, EDA, and design services through to silicon manufacturing. The expanded scope of the show floor, along with DAC's unique booth/suite combination and world-class conference and educational program, makes participation a must for companies with products used in the design and development of circuits and systems.

Contact Susie Horn at 303-530-4333 or Susie@dac.com for details.

June 8-13, 2008
Anaheim Convention Center
Anaheim, California
T +1 303.530.4333
F +1 303.530.4334
US Toll-free 1 800.321.4573
©2007 Design Automation Conference
www.dac.com


FOCUS REPORT
By Ed Sperling

What Makes Chips Different?
IBM, Samsung team up to differentiate chips with embedded software modules, but can it work this time?

Re-using embedded software has been the subject of exuberant marketing for nearly two decades — and so far it has amounted to little more than that. But in recent months, quietly and far from the blaring hype, real-world testing is under way to use software as a cost-effective means of embedding functionality in a wide range of devices. Perhaps nowhere is this development of re-usable embedded software more evident than among the major players of the Power Architecture (Power.org) community – especially IBM.

The most recent experiment in re-usable software comes from a joint effort between Samsung and IBM's Haifa Research Lab in Israel. If all goes as planned, IBM will begin commercializing what is essentially a middleware framework that allows embedded software to plug in and differentiate semiconductors. That means in many cases it will no longer be necessary to build new chips for every new device.

Exactly where this plays in the real world is unknown, but the first beneficiaries are likely to be IBM's partners in its growing ecosystem, most notably the members of Power.org, which include companies such as Freescale, Sony, Cadence Design Systems, Synopsys and Applied Micro Circuits Corp. (AMCC).

The re-usable code project is based on what IBM calls the Consumer Electronics Development Environment, known inside IBM as COMPETENCE. At the heart of this environment are architecture description languages, a class of descriptive languages — including Darwin, C2 and Acme — that provide a high level of abstraction for writing software. None actually exists for the consumer-electronics world, a fact that IBM wants to change and to capitalize on.

Normally, embedded software is written for specially developed processors with very specific functionality. With COMPETENCE, a single chip design theoretically will be sufficient to replace many designs, because the embedded software modules will add different functionality.

The advantages are immediately obvious to anyone who has<br />

designed chips for such areas as cell phones or MP3 players.<br />

Each chip can cost many millions of dollars to develop, with<br />

those costs escalating at each new process node. At 65nm and<br />

45nm, non-recurring engineering costs render it impossible<br />

By ed sperling<br />

to earn a profit without an enormous sales volume. Adding<br />

more functions only increases that cost, and it can slow the<br />

time it takes to bring products to market, in large part because<br />

of the difficulty of verifying the chip.<br />

"The world of modeling is well known in hardware because it's so difficult to work with," said Alan Hartman, research scientist manager at IBM's Haifa Research Lab. "In software, this is new. But when you look at the cost of a luxury car, a large part of that cost is the software."

HISTORY REPEATS…SORT OF

In the realm of software, component-based systems engineering is a relatively new field. Some of the groundbreaking research was performed by Philips Electronics a decade ago using various component models, and in the mid-1990s many software companies, including IBM, had plans for software objects (portions of applications rather than applications themselves) that could be bundled together like Legos and plugged into a framework.

The problem was that at the application level, these software objects never played together like a single-vendor plug-and-play system. IBM, however, was large enough, and had a large enough investment in middleware, that it could shift development to where there was a strong business need. That need grew as the cost of developing ASICs and ASSPs escalated into the tens of millions of dollars.

"If you look at multifunction printers, some have faxes, scanners, and different speeds, and each is implemented by a separate component," said Hartman. "We can design a new printer with new components so you only have to write a very small amount of code."
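Hartman's printer example amounts to configuring a product from a catalog of reusable components, with only a thin layer of new selection code per model. A minimal sketch of that idea (the catalog, class, and function names are all hypothetical; the article does not describe COMPETENCE's actual interfaces):

```python
# Illustrative sketch of component-based product configuration, in the
# spirit of Hartman's multifunction-printer example.  All names are
# hypothetical and do not reflect IBM's actual APIs.

class Component:
    def __init__(self, name, provides):
        self.name = name
        self.provides = provides      # the feature this component implements

CATALOG = {
    "fax":     Component("fax-v2", "fax"),
    "scanner": Component("scanner-600dpi", "scan"),
    "printer": Component("print-engine-30ppm", "print"),
}

def configure_product(wanted_features):
    """Assemble a product from reusable components.

    Only this selection step (the 'very small amount of code') is new;
    the components themselves are reused unchanged.
    """
    chosen = [c for c in CATALOG.values() if c.provides in wanted_features]
    missing = set(wanted_features) - {c.provides for c in chosen}
    if missing:
        raise ValueError(f"no component provides: {missing}")
    return sorted(c.name for c in chosen)

# A mid-range model: print and scan, no fax.
print(configure_product({"print", "scan"}))
```

The point of the sketch is that a new product variant is a new catalog selection, not a new chip or a rewritten code base.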

The approach plays off of the holistic design that IBM has been pitching for the past several years. "What this allows you to do is manage a product line as a whole," said Julia Rubin, an IBM research scientist for industry solutions in the Haifa lab. "You determine what features you want, and only the pieces you want go to market."

That works in principle, at least. But the real trick in this scenario is deciding what is relegated to software. Tony Massimini, chief of technology at Semico Research, said certain functions are fine in software, particularly with a powerful PowerPC processor running those functions. But he said the model runs into trouble when it comes to video, because the response-time requirements are so demanding that the function needs to be hard-wired into the processor.

April 2008 • Chip Design • www.chipdesignmag.com

"The PowerPC has a lot of performance, and if it's not a portable application you may get away with it in software," Massimini said. "If you look at a car, the response time versus streaming video is easy to accomplish in software. With a portable application, you can overload the processor."

WHAT'S IN THE PACKAGE?

What IBM and Samsung are developing goes beyond just writing embedded software, though. Hartman said the companies also are providing an editing environment, the ability to validate the software against rules, and the ability to configure a particular product.

"Then, at the touch of a wand, you get general building scripts," he said. "Companies in consumer electronics have already moved to a component-based architecture, so this is a natural fit. What we're also doing is taking legacy code and componentizing it."

When exactly that will become productized remains to be seen, however. At this point, the legacy-code work is relegated to some future date; IBM will offer that type of work as a service through its consulting group. IBM researchers contend that fixing bugs in software is a relatively simple and inexpensive process compared with trying to fix them in hardware. In part, that is because programming languages for software are relatively flexible. C++, for example, is highly modular, compared with scouring through massive amounts of verification data to pinpoint a bug and then fix it in hardware.

LANGUAGE ISSUES

One of the reasons that ADLs remain relatively unknown in design is that there is no single standard. As such, there is no agreement on what needs to be included in the overall description.

IBM is looking to establish such a standard for the consumer electronics industry, along with a modeling framework that third-party components can plug into to provide consistent behavior. IBM said it also plans to extend that framework to other industries in the future.

FOCUS REPORT

At least part of that will be based on a standard modeling framework called the Unified Modeling Language (UML). IBM's COMPETENCE relies on the Rational toolset (see diagram), which allows designers to develop components based on established models. But how quickly all of this technology comes to fruition, and just how widely it is deployed, remains to be seen. Building better software, particularly for the embedded world, is not a new problem.

REAL-WORLD APPLICATIONS

What this means for chipmakers such as Freescale, interconnect makers such as AMCC, and large OEMs such as Sony depends on when and how this technology hits the market. Nevertheless, such moves could make these companies more competitive, because re-usable code means faster time to market and lower NRE costs.

In Freescale's case, for example, such code could be used across a raft of vertical-market applications, including automotive, consumer electronics, industrial, and wireless, while for AMCC the code would be used in embedded interconnect applications. In Sony's case, re-usable code could help cut NRE costs in the consumer electronics market, where price is a key differentiator for many of its competitors. ◆

Ed Sperling has spent the past two decades immersed in technology and is the recipient of numerous awards for journalistic excellence.



Receive a Complimentary EDA & IP Industry Market Report

Is your company a missing piece in the EDA & IP industry?

The EDA Consortium Market Statistics Service (MSS) is the timeliest and most detailed market data available on EDA & IP. Top companies have been participating in the MSS since 1996, but no report is complete without ALL the pieces.

If your company is not contributing to the MSS, it is a missing piece in the puzzle. Contribute your company's data to the MSS and receive a complimentary Annual MSS Executive Summary.

What's in it for your company?
• Worldwide statistics on EDA & IP revenue
• Ability to size your market
• Six years of market data for trends analysis
• The opportunity to make a positive difference in the industry

For details contact:
mss@edac.org
(408) 287-3322
www.edac.org


Confirma Seminar Series

Theme: Prototyping as a Productive Verification Methodology

When: May–June

Dates and Locations:
• Wednesday, May 28 - Sunnyvale, CA
• Wednesday, June 18 - Toronto, Ontario
• Thursday, June 19 - Schaumburg, IL
• Tuesday, June 24 - Austin, TX
• Wednesday, June 25 - San Diego, CA
• Thursday, June 26 - Longmont, CO

"The sophistication of verification tools and techniques has increased with design complexity."
– John Blyler, Chip Design magazine

** Agenda **

• Verification market and technology trends – 30 min (John Blyler or Clive "Max" Maxfield)
There are many ways to verify ASIC and SoC designs, and several tools have to work together to successfully complete a design. But what are the recent trends and developments? Which methodology is gaining momentum and which is losing? And how do successful companies verify their designs?

• Prototyping – a mandatory step for successful ASIC/ASSP and SoC design – 60 min (Synplicity)
Prototyping, the use of FPGAs to verify an ASIC or SoC design, has come a long way and is no longer an "ad hoc, assembly required" methodology. Prototyping has evolved into a productive and, without any doubt, highest-performance verification solution. So what should you consider when deploying prototyping? And what are the steps involved in taking an ASIC design and getting it to work on a prototyping board?

• And it works… – 30 min (customer)
A real-world example of a successful prototyping project: design setup, challenges, results.

• The HAPS prototyping systems – 30 min (Synplicity)
A closer look at the HAPS architecture and capabilities. System set-up and configuration. Getting the most out of a prototyping system.

• Lunch

• Confirma demonstration – 60 min (Synplicity)
Live demonstration of the complete flow:
- preparing the ASIC design for prototyping
- design partitioning and implementation
- prototype system configuration and bring-up
- debugging the design and fixing errors

THE PERSISTENCE OF ESL SYNTHESIS

Many people are familiar with Salvador Dali's painting, The Persistence of Memory (photo courtesy of www.fotos.org). When I look at developments in ESL, I sometimes think of the painting and its title. Some of the old ideas in ESL must be good ones because they keep coming back. ESL synthesis, for example, has now been through at least two or three generations. In fact, to many people, ESL = ESL synthesis (or behavioural synthesis, or high-level synthesis). We have seen some recent activity in this area - for example, Celoxica got out of ESL synthesis to focus on FPGA-based accelerators for high-performance computing applications, and moved its technology and team to Catalytic, forming Agility Design Solutions. At the recent Electronic Design Process Workshop, I saw Rishiyur Nikhil, CTO of Bluespec, give a talk on Parallel Atomic Transactions in the Bluespec synthesis approach.

I've heard some people comment that they don't really consider Bluespec (using SystemVerilog and some particular constructs to express its atomic transactions) to be "high-level" enough to be ESL synthesis. Most ESL synthesis tools use C/C++/SystemC or other dialects of C as inputs. However, the atomic transaction semantics used by Bluespec are definitely more abstract than the normal way people write HDL for RTL synthesis, so I think we can count it... Please continue reading at Taken for Granted by Grant Martin at http://www.chipdesignmag.com/martins/?p=6 ◆



Top View

By Luc Burgun

Performance and Cost Now Drive Emulation

Within electronic design automation (EDA), the emulation market sector has a set of dynamics all its own. Just consider that this market has been growing steadily since 2001, going from $120 million in 2001 to more than $160 million in 2007. To say that the emulation market sector is thriving is almost an understatement.

Emulation has evolved from infancy at the end of the 1980s to childhood in the 1990s and finally to maturity after 2000. Yes, it has been around for close to 20 years. Who remembers Supersim, the first custom field-programmable gate array (FPGA)-based emulator, launched in 1988? It pre-dated the rise of Zycad, Quickturn, and Ikos, icons of the 1990s.

During the '90s, significant engineering effort was invested in easing emulation installation and enhancing debugging features. Until then, emulators were "black boxes" with no native visibility into the mapped designs, and in the early days the deployment of an emulation system came with a "field application engineer in the box" to compensate for these drawbacks. These limitations drove the introduction of custom-chip emulators to replace standard FPGA-based solutions. Of course, the casualties of this trend are legend, felled by sub-par performance and prohibitive cost. FPGA-based emulators responded by reaching megahertz performance in 1995. By the end of the 1990s, the typical selling price of these tools had decreased from more than 50 cents per gate to 5 to 10 cents per gate.

Over the last 10 years, capacity has grown from 500,000 application-specific integrated circuit (ASIC) gates to five million gates for an average design. For these reasons, the popularity of FPGA prototyping soared at the beginning of the new century. Within the past three years, the market for emulation has evolved once again. Development teams are now requesting higher performance, lower cost, and a more flexible test environment. They also want debugging capabilities at a higher level of abstraction. Meanwhile, hardware/software co-verification is driving the demand for higher performance and software diagnostic validation. Billions of cycles are needed to run drivers and a real-time operating system (RTOS) or to execute applications like high-definition display/compression.

According to the "old" metric, the price must be 1 cent per gate to target software deployment or gain access to the software community. The actual goal is to reduce the cost per cycle. After an active consolidation period, there are several strong competitors in this field, and such competition will translate into significant price erosion.

While in-circuit emulation (ICE) is less popular, the need for a more flexible test environment is growing. Application hardware bridges fail to offer controllability and reproducibility, and they add to the complexity of the test environment. In addition, synthesizable testbenches (STBs) are difficult to develop. Transaction-based verification (TBV) is the only alternative for quickly setting up a test environment for a full design.

At higher levels of abstraction, debuggers are needed. Although it may seem useful to generate waveforms of all signals at every clock cycle, it isn't effective. Instead, designers need system-level hardware/software debugging tools, because they don't want to look at the assembly code. Transaction-based testbenches and a multi-layer debugging methodology are a good way to address this need.

Looking ahead, the emulation market's growth will come from new applications, such as software development and design-architecture investigation, or ESL emulation. These new applications will continue to call for more performance and lower cost. Capacity won't be the key driver for this market, except for some specific applications of more than 100 million gates. Performance and cost are going to become predominant. Design teams will buy or renew emulation because it will become fast and affordable.

Under these conditions, the use of custom chips is questionable. They are slower than standard FPGAs, and their development cost is high at 65-nm technology. It doesn't make sense to invest $50 million to develop a custom-chip-based emulator to address an annual market of $150 million in a competitive environment with lower margins.

Ultimately, standard FPGA-based emulators will address cost/performance needs and keep the emulation sector growing. ◆

Luc Burgun is chief executive officer and president of EVE in Paris, France. Burgun is a co-founder of EVE and has 18 years of experience in CAD development and digital system design. He holds a PhD in computer science from the University of Paris-Jussieu, France. He can be reached at luc_burgun@eve-team.com.



Verification: Methods & Tools

Burning Issues in Testbench Automation

By Mark Olen and Matt Ballance

With intelligent testbench automation, it's possible to bring verification costs back in line while improving productivity.

As design complexity has increased, design verification has become an increasingly difficult problem. Today, project teams spend more time verifying their designs than they do creating them. To bring verification costs back into line with design costs, teams need testbenches that find more bugs with less engineering investment. The answer lies in intelligent testbench automation, which generates optimal sets of tests to improve both productivity and effectiveness.

Through a unique, rule-driven test-generation technology, intelligent testbench automation delivers many benefits. For example, it cuts design respins by using rules to generate significantly more unique tests than are possible with conventional methods. In addition, intelligent testbench automation reduces testbench bugs by minimizing the amount of testbench code that engineers need to write. It also avoids the most time-consuming testbench edits through a testbench-redirection facility, which allows engineers to change their simulation goals without altering their test-implementation code.

Intelligent testbench automation also eliminates duplicate testing, employing closed-loop adaptive coverage algorithms that only produce unique tests. By providing dynamic design-resource allocation and facilitating layered testbench modules, this type of automation also retargets module-level testbenches at the system level without changing the testbench environment. Finally, intelligent testbench automation enables testbench-module reuse through functionality sub-setting and adaptive morphing technology.

OPTIMAL TESTS FOR ELECTRONIC DESIGNS

For the electronic-design-automation (EDA) industry, the need for better testbenches has resulted in considerable investment in testbench-related research and development. Such R&D has largely focused on improving the coding languages that engineers use to write their testbench programs. Language improvements have made a noticeable difference. Because design complexity has continued to increase, however, the verification problem has been simultaneously getting more difficult. Yet another new language would offer incremental improvements at best. An entirely new type of testbench, based on rule sets, is a more fundamental advance.

Essentially, a rule set describes how a high-level testing activity can be performed as a series of lower-level testing activities. A modest number of rules, taken together, can describe a very large set of tests. In fact, a well-chosen set of rules can compactly define how to run every test scenario that engineers might want to simulate during a digital design project, no matter how large or complex. A few pages of rules can easily define test sets containing thousands, millions, or even billions of tests. The result is a substantial savings in verification engineering cost.
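The combinatorial leverage of a rule set can be pictured with a toy expander: each high-level activity names alternative sequences of lower-level activities, and the cross-product of alternatives yields many concrete tests from few rules. This is only an illustration of the concept, not the rule syntax of any particular tool:

```python
# A toy rule set: each high-level activity expands into alternative
# sequences of lower-level activities.  A handful of rules like these
# can enumerate a combinatorially large set of concrete tests.
# (Illustrative only -- not the syntax of any particular product.)

RULES = {
    "test_port": [["reset", "configure", "transfer"]],
    "configure": [["set_speed_low"], ["set_speed_high"]],
    "transfer":  [["write", "read"], ["burst_write", "read"]],
}

def expand(activity):
    """Recursively expand an activity into all concrete test sequences."""
    if activity not in RULES:                  # primitive action
        return [[activity]]
    results = []
    for alternative in RULES[activity]:
        seqs = [[]]
        for step in alternative:               # cross-product of step expansions
            seqs = [s + t for s in seqs for t in expand(step)]
        results.extend(seqs)
    return results

tests = expand("test_port")
print(len(tests))   # three rules already yield 4 distinct test sequences
```

With two alternatives per rule and a few dozen rules, the same mechanism reaches the millions of tests the article describes.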

Rule sets also make it possible to employ an entirely new class of algorithms. Intelligent testbench automation provides algorithms that generate optimal sets of tests specifically for engineers verifying electronic designs.

Figure 1: Unlike traditional testbenches, algorithmic testbenches can be used across different design projects. They also can be retargeted at different levels of a design's hierarchy and redirected to generate different tests based on the verification engineer's goals for the next simulation run.

REDIRECTED TESTBENCHES CUT CODE

With rule-based algorithmic test generation, every simulation run has a purpose that's captured in a verification goal. A verification goal references the relevant parts of a rule set, specifying a particular verification objective. For example, a verification goal might be to test all of the firmware instructions that are intended to execute conditionally, verifying that the instructions react correctly to each possible condition.

When a simulation starts, intelligent testbench generation initializes along with the simulator and begins generating tests per the verification goal. As simulation proceeds, the algorithms monitor progress toward the verification goal and, to avoid wasting time on duplicate tests, intelligently adapt their test-generation strategies. As a result, engineers don't need to write constraints to avoid duplicate tests. The self-adapting algorithms automatically make continuous progress toward completion of the verification goal.

The verification goal enables an engineer to focus the testbench on producing the types of tests that are currently needed. This capability, known as testbench redirection, is important because a project team's engineers need different tests at different times. Sometimes they're investigating a problem report and need a specific test sequence. Other times, they're trying to give quick feedback on a design fix and need to concentrate testing on a particular functional area. Or they could be running a regression test and need to thoroughly test a wide variety of functionality. Each time the project team wants different tests, it only needs to modify the verification goal to specify the type of tests that are needed. The rest of the testbench doesn't need to be changed; the algorithms take care of all of the test-customization details. This saves editing time while eliminating a significant source of testbench bugs.
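The essence of redirection is that the goal is the only thing engineers touch: the test-implementation side never changes. A minimal sketch of that separation (test names and goals are hypothetical):

```python
# Sketch of testbench redirection: the test-implementation side (the
# generated test pool) never changes; engineers only swap the
# verification goal that selects which tests to generate next.
# Names are illustrative, not any tool's API.

ALL_TESTS = [
    ("reset", "set_speed_low",  "write", "read"),
    ("reset", "set_speed_low",  "burst_write", "read"),
    ("reset", "set_speed_high", "write", "read"),
    ("reset", "set_speed_high", "burst_write", "read"),
]

def generate(goal):
    """Yield only the tests relevant to the current verification goal."""
    return [t for t in ALL_TESTS if goal(t)]

# Goal 1: regression run -- exercise everything.
regression = lambda t: True
# Goal 2: investigating a burst-mode bug report -- focus the testbench.
burst_focus = lambda t: "burst_write" in t

print(len(generate(regression)), len(generate(burst_focus)))
```

Swapping `regression` for `burst_focus` redirects the run without a single edit to `ALL_TESTS` or `generate`, which is the source of the editing-time savings the article describes.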

SCALABLE TESTBENCHES

In addition to testbench redirection, testbench retargeting is a capability that's often requested by testbench tool users. Testbench retargeting is important because complex designs need to be tested at the module, subsystem, and system levels. Ideally, a testbench module that's used to drive a particular interface (like USB 2.0) during module testing would be retargeted without changes to drive that interface during subsystem- and system-level testing. In the real world, such retargeting rarely happens. Usually, subsystem- and system-level testing require completely new (or, at best, heavily edited) testbench code.

Retargeting a module-level testbench is problematic for two reasons. One barrier is the accessibility of the target module. As it's incorporated into a subsystem, some or all of the target module's pins recede from the periphery and become internal connections, rendering the original module-level testbench unusable. Intelligent testbench automation helps project teams solve this problem with layered testbenches.

The second barrier to retargeting a module-level testbench is the target module's dependency on system-level resources during system testing. For example, some PCI transactions require a direct-memory-access (DMA) channel, which is a system-level resource. A PCI testbench module that's testing a PCI design module in isolation can ignore DMA-channel availability issues; after all, it can run transactions that require a DMA channel anytime. In contrast, a PCI testbench module that's testing a PCI design module in a system doesn't have that luxury. During system testing, the PCI testbench module must avoid DMA transactions when all of the system's DMA channels are busy handling transactions that were initiated by other testbench modules.

Attempts to address system-resource problems by parameterizing testbench modules haven't provided a general solution. This approach locks specific resources to particular testbench modules for the duration of a simulation. Real systems have more modules that can initiate DMA transactions than there are DMA channels available. As a result, locking schemes cannot be used without sacrificing large swathes of functional coverage.

To solve this problem, intelligent testbench automation provides a dynamic-resource-allocation facility. Thus, a PCI testbench module can request a DMA channel for a single test. If one is available, the DMA channel is "checked out" to the PCI testbench module; the DMA transaction runs, and the channel is then "checked in" so another testbench module can use it. If no DMA channel is available, the PCI testbench can run non-DMA transactions and make good use of the available simulation time until a channel is free. The combination of testbench layering and dynamic resource allocation enables project teams to retarget testbench modules without changing them. In other words, the same testbench modules work for testing a module design, a subsystem design, and a complete system design.
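The check-out/check-in protocol described above can be sketched as a small resource pool. This is a conceptual illustration under assumed names (`ResourcePool`, `run_pci_test`), not the facility's actual interface:

```python
# Sketch of dynamic resource allocation: a testbench module checks a
# DMA channel out for a single test and checks it back in afterwards,
# falling back to non-DMA traffic when none is free.  Illustrative only.

class ResourcePool:
    def __init__(self, names):
        self.free = list(names)

    def check_out(self):
        return self.free.pop() if self.free else None

    def check_in(self, resource):
        self.free.append(resource)

def run_pci_test(pool, log):
    channel = pool.check_out()
    if channel is None:
        log.append("non-DMA transaction")      # still useful simulation time
    else:
        log.append(f"DMA transaction on {channel}")
        pool.check_in(channel)                 # release for other modules

pool = ResourcePool(["dma0"])
log = []
run_pci_test(pool, log)   # gets dma0, runs a DMA transaction, releases it
busy = pool.check_out()   # another testbench module grabs the only channel
run_pci_test(pool, log)   # must fall back to non-DMA traffic
print(log)
```

Because no module holds a channel beyond a single test, more initiators than channels can share the pool, which is exactly where per-simulation locking breaks down.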

MAKING TESTBENCH REUSE A REALITY

After a project team has invested in a testbench for a particular design, management often wants to reuse testbench modules on other design projects. For example, say a team develops a USB 2.0 testbench module for a cable-modem chip design. Ideally, it would be reusable on a subsequent Global Positioning System (GPS) chip design. Unfortunately, attempts to reuse testbench code often result in substantially more editing and "tailoring" than expected. One common problem is that the first design has a different resource configuration (memory, DMA channels, interrupts, etc.) than the second design. Intelligent testbench automation solves this problem by editing the system resource list to match the second design. No changes to the testbench modules are needed, as the dynamic-resource-allocation facility automatically makes the necessary testing adjustments.

Another common testbench-reuse problem is that different designs implement different subsets of the underlying specification. The original design, for example, may implement the full range of AMBA AHB functionality, including wrap transfers to optimize performance with the system's cache. Yet the second design may implement an AMBA AHB subset without wrap transfers because there's no cache in the system. Normally, such a change would require multiple edits to the AMBA testbench to remove or disable all wrap-transfer-related code. This task is problematic because the engineers on the second design team aren't the ones who wrote the original testbench code. As a result, they cannot be sure which changes can be made safely.

With intelligent testbench automation, there's a safe, standard way to test functionality subsets. Users can simply disable portions of the rule set to "knock out" unimplemented functionality, such as wrap transfers in the AMBA AHB example. No knowledge of testbench coding details is required.
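Functionality sub-setting can be pictured as filtering a rule set rather than editing testbench code. A minimal sketch of the idea (the rule names and the `disabled` mechanism are hypothetical):

```python
# Sketch of functionality sub-setting: disabling part of a rule set
# "knocks out" tests for unimplemented features (wrap transfers in the
# AMBA AHB example) without touching any testbench code.  Illustrative only.

RULES = {
    "ahb_transfer": [["single"], ["incr_burst"], ["wrap_burst"]],
}

def generate(rules, disabled=frozenset()):
    """Expand top-level alternatives, skipping disabled functionality."""
    return [alt for alt in rules["ahb_transfer"]
            if not set(alt) & disabled]

full   = generate(RULES)                            # cache-capable design
subset = generate(RULES, disabled={"wrap_burst"})   # no cache, no wrap tests
print(len(full), len(subset))
```

The second team declares what its design omits; it never has to know which lines of testbench code implement wrap transfers.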

Figure 2: By separating the functional specification from the design implementation, a testbench module becomes highly reusable across multiple implementations of the same specification.

A third common testbench-reuse problem is that two design teams may intentionally implement functionality from a common specification quite differently. This doesn't mean that one of the design teams is wrong. On the contrary, it usually means that the two teams are addressing different end-use markets. Almost all modern specifications give design teams room to optimize their implementations for specific applications.

For instance, one design team may be optimizing its implementation for low cost (i.e., gate count), while another team may be optimizing its implementation for high speed. Both implementations are designed to conform to the same specification, yet their interactions with the testbench at the detailed electrical level can be quite different. One AMBA AHB slave implementation may have a very small input buffer and frequently force "split" transactions. Another may have a very large input buffer and never force such transactions.

Engineers want to be able to verify both implementations with a single testbench. To maximize the potential for reuse, such a testbench would ideally be able to adapt itself to work with any legal implementation of the specification. At the same time, it should be able to reject any illegal implementation. It's very difficult for humans to write such a highly flexible testbench by hand while getting all of the details right for each of the many different possibilities.

Intelligent testbench automation solves this problem in two steps.<br />

First, it captures all of the different behavioral possibilities allowed<br />

by a specification using a non-deterministic rule-set description.<br />

Second, a patented “morphing” technology automatically adapts<br />

test generation to simultaneously conform to both the rule set<br />

and the dynamic responses of the design during simulation.<br />
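The two steps above can be sketched in miniature. The following Python sketch is purely illustrative — the rule syntax, state names, and "morphing" policy are invented for this example and are not Mentor's actual rule language or implementation. A single non-deterministic rule set drives two different AHB-like slave implementations, adapting when one of them forces split transactions:<br />

```python
import random

# Illustrative sketch only: the rule set, state names, and "morphing"
# policy are invented for this example, not Mentor's implementation.
RULES = {                                  # state -> legal next transfers
    "IDLE":   ["SINGLE", "BURST"],
    "SINGLE": ["IDLE", "SINGLE", "BURST"],
    "BURST":  ["IDLE", "SINGLE"],
}

def make_slave(small_buffer):
    """Stand-in DUT: a small-buffer slave sometimes SPLITs burst transfers."""
    def respond(transfer, rng):
        if transfer == "BURST" and small_buffer and rng.random() < 0.5:
            return "SPLIT"
        return "OKAY"
    return respond

def generate(slave, n, seed=1):
    """Walk the rule set non-deterministically, adapting to DUT responses."""
    rng = random.Random(seed)
    state, trace = "IDLE", []
    while len(trace) < n:
        transfer = rng.choice(RULES[state])    # non-deterministic rule choice
        response = slave(transfer, rng)
        trace.append((transfer, response))
        if response == "SPLIT":
            continue            # "morph": re-attempt from the same state
        state = transfer
    return trace

# One rule set, two implementations of the same specification:
fast = generate(make_slave(small_buffer=False), 20)  # never splits
slow = generate(make_slave(small_buffer=True), 20)   # frequently splits
```

Both traces stay legal with respect to the same rule set even though the small-buffer slave responds quite differently at the electrical level — which is the essence of the reuse argument above.<br />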

Dynamic-resource allocation, functionality-subset specification,<br />

and morphing make it practical to reuse the resulting testbench<br />

modules on a wide variety of real-world designs. In addition,<br />

project teams aren’t required to incur the expense and risk of<br />

modifying unfamiliar testbench code.<br />

The advantages of intelligent testbench automation are quite<br />

dramatic. The time needed for testbench programming is reduced<br />

while the overall coverage area is increased. The resulting testbench is reusable across<br />

multiple designs without requiring significant changes to the<br />

rule sets. Furthermore, algorithmic testbenches can be reused<br />

across module, sub-module, and full-system design levels. With<br />

testbench redirection capabilities, design teams can easily redirect<br />

test generation at various times during verification.<br />

Ultimately, intelligent testbench automation finds more bugs<br />

more quickly while requiring less engineering time. It cuts down<br />

on the respins caused by the functional design errors that escape<br />

traditional functional verification. This technology is exactly<br />

what companies need if they’re to bring their verification costs in<br />

line with their design costs—especially as they face ever-growing<br />

and increasingly complex design challenges. ◆<br />

Mark Olen is a product manager at Mentor Graphics<br />

for Advanced Functional Verification Technologies.<br />

He has worked for Mentor for 10 years and has more<br />

than 25 years of experience in the semiconductor<br />

design verification and test industries. He can be<br />

reached at mark_olen@mentor.com.<br />

Matthew Ballance is an engineer at Mentor<br />

Graphics for the inFact product. He holds a<br />

BSCpE from Oregon State University and has<br />

worked in the areas of HW/SW co-verification<br />

and transaction-level modeling. Ballance can be<br />

reached at matt_ballance@mentor.com.<br />

0 • March 2008 <strong>Chip</strong> <strong>Design</strong> • www.chipdesignmag.com


When planning a complex product-development project<br />

using an application-specific integrated circuit (ASIC)<br />

or system-on-a-chip (SoC), it’s critical to analyze the various<br />

risks, project costs, and expertise involved before allocating<br />

resources (money, equipment, and people). Often, such analysis<br />

fails to include many of the key hidden costs and risks associated<br />

with the implementation of a complex double-data-rate (DDR)<br />

memory-controller solution. If those risks and costs are more<br />

clearly anticipated, the choice between building a DDR memory<br />

controller internally and buying third-party intellectual property<br />

(IP) will be much easier. In a complex DDR memory-controller<br />

SoC/ASIC design, for example, buying a third-party IP core<br />

can lower cost, improve profitability, and reduce risk.<br />

<strong>Design</strong>ing an efficient DDR memory controller is now a<br />

very complex and time-consuming task—whether in low-power<br />

consumer electronics, where mobile DDR memories<br />

require power-efficient high-bandwidth access, or in the high-performance,<br />

low-latency data-communication applications that<br />

require cutting-edge DDR3 memories. It may seem daunting to<br />

decide whether to design a portion of an ASIC or SoC product<br />

or to purchase third-party IP. When looking at buying from<br />

a third party, one must consider the direct cost of purchasing<br />

an IP core along with the costs and resources required for<br />

integration and support. Determining the cost to develop the<br />

IP core internally is much more complex. The direct costs for<br />

planning, designing, verification, debugging, integration, and<br />

tools must all be understood and estimated.<br />

DDR memory controllers are key components to nearly<br />

every SoC/ASIC design with access to external memory.<br />

It doesn’t matter if the design objective is to reduce cost,<br />

improve performance, provide advanced features, lower power,<br />

reduce board space, or any combination of these goals. The<br />

implementation of the DDR memory controller will be critical<br />

to accomplishing the desired outcome.<br />

Developing an optimal DDR memory controller is a<br />

complicated design and verification task. Many standards need<br />

to be considered. For example, DDR, DDR2, DDR3, GDDR3,<br />

GDDR4, and LP-DDR are key standards. Each one contains<br />

a variety of capabilities. Among the many memory-controller<br />

features that need to be considered are the following: access<br />

priority, error checking and correcting (ECC), read-modify-write<br />

support, byte-write implementations, out-of-order access<br />

support, FIFO options, latency, and bandwidth tradeoffs.<br />

Make an Informed Build or Buy Decision for Memory-Controller Solutions<br />

By Raj Mahajan and Raghavan Menon<br />

When it’s time to make up your mind about memory, don’t forget to measure all of the risks and rewards.<br />

DEVELOPMENT SUPPORT<br />

In addition to designing the DDR memory controller, it’s critical<br />

to make modifications to the design (optimization and tuning)<br />

during the design-integration phase. A DDR memory-controller<br />

expert would need to be available for support during system<br />

integration, verification, and built-in testing and error checking.<br />

He or she also should support the testing and verification of the<br />

physical-layer (PHY) portion of the design.<br />

It can be difficult to model the complete behavior of the system<br />

unless a platform for system integration has already been created<br />

and can easily support this task. Ideally, such modeling would<br />

allow modifications and what-if scenarios to be created during<br />

system integration. In doing so, it would require design changes<br />

and a variety of experiments. Tradeoffs for latency, performance,<br />

and special features that impact the system bandwidth and<br />

cost must be explored to identify the mix that fits the design<br />

requirements. The verification of the DDR memory controller<br />

itself can be very complex. Deep knowledge of the corner cases<br />

for each standard and feature must be used if the verification<br />

process is to obtain the coverage required to create a robust<br />

solution.<br />

All of the above points cover the controller portion of the design.<br />

Yet the PHY interface also needs to be considered. If an analog<br />

approach to the design is taken, a completely separate skill set is<br />

required—along with specialized tools and a test methodology.<br />

One additional task that’s usually overlooked is the need for<br />

built-in testing and error-tracking features. Often, a DDR<br />

memory-controller design is already completed and testing is<br />

well underway before it’s discovered that it’s not possible to test<br />

the system memory within the time allowed or to the quality<br />

level required by the test plan. The result could be a schedule slip<br />

very late in the development cycle.<br />

Although all of the risks discussed so far can impact schedules<br />

and development costs, the overall impact on profitability can be<br />

dramatic. An example is provided to show how profitability could<br />

be impacted by a schedule slip in a project if an internal DDR<br />


DIGITAL<br />

Memory Controllers



memory-controller effort—the “build” option—runs into some<br />

design, verification, support, and integration schedule issues.<br />

Assume that these issues result in a three-month schedule slip<br />

for Company Y. Also, assume that Company X is introducing<br />

a similar product. Because it has selected the “buy” option,<br />

however, Company X introduces its product on time—three<br />

months ahead of Company Y. The profit and gross margin for<br />

both Company X and Company Y are illustrated in Figure 1.<br />

Figure 1: This profit comparison shows Company Y with a three-month<br />

schedule slip.<br />

Because Company X introduces its product three months earlier,<br />

its profit starts sooner and grows to a higher level than the profit<br />

of Company Y. Note that the gross margin is higher during the<br />

early portion of the sales cycle, thereby providing a profit boost<br />

to Company X over Company Y. Late in the sales cycle, however,<br />

Company Y shows slightly higher profit than Company X. This<br />

is deceiving, as Company X is introducing a new, follow-on<br />

product with higher margin. That higher margin is accounting<br />

for a significant amount of additional company profit that isn’t<br />

shown on the single product-comparison chart. If Company X<br />

leverages its three-month advantage to create a new product<br />

earlier than Company Y, it will again obtain higher profit. Yet if<br />

Company Y is late again, this delay can compound the schedule<br />

slip. Company Y could end up being six months behind.<br />

Development costs are higher when a schedule slips, which<br />

leads to increased project costs while further reducing overall<br />

profitability. If a schedule slips three months, engineering costs<br />

will be that much higher. If the project is a year long, three<br />

months will be about 25% of the overall cost. The manufacturing<br />

costs must be paid again if the SoC or ASIC requires a re-spin.<br />

This could be a multi-million-dollar expense. Typically, it will<br />

account for a large percentage of the project cost. Other costs,<br />

such as materials for scrapped wafers, testing expenses, EDA<br />

license costs, computer time, or equipment rental, all add up. A<br />

three-month schedule slip with a re-spin could easily cost $5 to<br />

$10 million. Amortizing this amount over the project sales cycle<br />

and subtracting it from Company Y's profitability yields the<br />

result (see Figure 2).<br />

Figure 2: Here, the profit comparison depicts Company Y with both a<br />

schedule slip and increased costs.<br />
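The magnitude of the penalty can be put in rough numbers. The figures below are illustrative assumptions, not taken from the article's figures: a 12-month project at an assumed $1M per month of engineering slips three months and absorbs a $7.5M re-spin (the midpoint of the $5-to-$10-million range above), amortized over an assumed 24-month sales cycle:<br />

```python
# Illustrative numbers only (not from Figure 1 or Figure 2).
eng_cost_per_month = 1.0                   # $M/month, assumed
project_months = 12
slip_months = 3
respin_cost = 7.5                          # $M, assumed midpoint of $5-$10M

extra_eng = eng_cost_per_month * slip_months
slip_fraction = extra_eng / (eng_cost_per_month * project_months)
monthly_penalty = (extra_eng + respin_cost) / 24   # amortized, $M/month

print(f"extra engineering: ${extra_eng:.1f}M ({slip_fraction:.0%} of budget)")
print(f"amortized profit penalty: ${monthly_penalty:.3f}M per month")
```

The 25% figure matches the article's observation that a three-month slip on a year-long project adds roughly a quarter of the engineering cost; the amortized monthly penalty is what gets subtracted from Company Y's profit curve in Figure 2.<br />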

BuILD vS. BuY AND PROJECT RISK<br />

The potential sources of slips—design risks that can turn into<br />

schedule delays and requirements to re-spin an SoC/ASIC<br />

design—are numerous. For a DDR memory-controller design,<br />

however, there are three main areas of risk. Functionality<br />

requirements can grow during development. Timing closure can<br />

be delayed. In addition, verification can take more time.<br />

When “building” a DDR memory controller, the first area of<br />

schedule risk comes from the fact that functionality requirements<br />

can grow over the course of the project. Memory bandwidth is<br />

a key element of a typical system’s performance, functionality,<br />

power, and cost metrics. As a result, it’s not unusual for the DDR<br />

memory controller to be targeted as the area for improvement.<br />

When using the “buy” option on IP with a robust set of features<br />

and expert support from a vendor with an experience-based<br />

schedule, the above functionality risks can be dramatically<br />

reduced.<br />

When the feature set is finalized and verified, timing closure—<br />

the second key source of risk—can be difficult to estimate while<br />

pursuing the “build” option. The DDR memory controller alone<br />

has a variety of critical design areas, which are primarily due to<br />

the importance of the PHY interface timing between the DDR<br />

memory controller and the external memory. When using the<br />

“buy” option from an IP vendor with a robust and proven design<br />

that meets critical timing constraints with sufficient margin, the<br />

risk of a prolonged timing closure effort is eliminated.<br />

Verification is the third significant area of risk in a “build” option.<br />

A DDR memory controller must operate under a variety of<br />

different conditions. The “buy” option significantly reduces<br />

verification risks. If the DDR memory-controller IP is part<br />

of a complete solution, it will have been exhaustively verified<br />

using a wide variety of corner cases, conditions, and target<br />

applications. These areas comprise a few of the key risks<br />

that a development team needs to consider when comparing<br />

a third-party IP “buy” option versus an in-house “build”<br />

approach to implementing a DDR memory controller.<br />



SUPPORT COSTS<br />

When pursuing the “build” option, the work on the DDR<br />

memory controller is far from finished even after the design<br />

is completed and a new product is started. The DDR<br />

memory controller will need to be supported during the<br />

following: code development; feature, customer-specific,<br />

and test-coverage enhancements; and all other engineering<br />

support required for a complex system.<br />

If a product is successful, a follow-on product also will be<br />

required. Say that follow-on product is in a new process.<br />

The DDR memory controller will then need to be ported<br />

to the new process and another set of complexities will<br />

appear. New library elements will need to be tested and<br />

optimizations for power, performance, and die size may<br />

need to be repeated. New memory technology (for example,<br />

the migration from DDR2 to DDR3) must be taken into<br />

account along with potential new vendors, memory features,<br />

and characteristics. The “buy” option can reduce or eliminate<br />

all of these concerns.<br />

Keep in mind that IP vendors can afford to invest in a<br />

very robust infrastructure for the design, development,<br />

testing, and support of standardized and highly specialized<br />

products. Because IP vendors amortize a product over many<br />

projects, they can make a significantly higher investment<br />

than a company with only one or two projects.<br />

When engineering resources are being used to design a<br />

DDR memory controller, they’re not being utilized to<br />

differentiate a system’s core technology. The “opportunity<br />

cost” of this use of resources is sometimes overlooked.<br />

But it can allow a competitor to do a better job of adding<br />

compelling features, pushing performance, or lowering<br />

cost. In doing so, it makes it difficult to compete in the<br />

marketplace. If the DDR memory controller was made<br />

using third-party IP—the “buy” option instead of an in-house<br />

“build” option—internal engineering resources could<br />

be applied to counter likely moves by the competition and<br />

enhance a product line’s core technology.<br />

Significant costs are associated with designing a DDR<br />

memory controller in house instead of sourcing it from<br />

a third party. For products with less sales and profit<br />

potential, even small impacts to overall profitability can<br />

make the difference between a company staying in business<br />

or closing its doors. As shown in Figure 2, Company Y has a<br />

dramatically lower profit curve because it is burdened with<br />

a three-month schedule slip, additional engineering costs<br />

for spinning the SoC/ASIC, and additional engineering<br />

costs for designing, verifying, supporting, and testing the<br />

DDR memory controller. The company also suffered the<br />

opportunity cost of missing features and capabilities in the<br />

current project.<br />

In summary, building a DDR memory controller internally<br />

can put development schedules and cost estimates at risk<br />

due to three development complexities. First, functionality<br />

requirements can grow during development. In addition,<br />

timing closure can be delayed and verification can require<br />

additional time. When building a DDR memory controller,<br />

the opportunity cost of not applying engineering talent to<br />

core capabilities also needs to be factored into the financial<br />

analysis of a “build” versus “buy” decision. A third-party<br />

DDR memory controller improves profitability by avoiding<br />

the key development and support risks. It also eliminates<br />

the opportunity costs of engineering deployment. ◆<br />

Raj Mahajan has more than 10 years of experience<br />

architecting, designing, and verifying memory-access<br />

solutions for advanced ASICs for a variety of target<br />

markets. At startup Ingot Systems, he guided the<br />

architecture, design, and verification of MemCore’s<br />

flagship memory controller, which led to the sale of<br />

the company to Virage Logic in 2007. Mahajan<br />

continues to lead the development of this IP, branded as “IntelliDDR,”<br />

at Virage Logic.<br />

Raghavan Menon has more than 14 years of<br />

experience leading development teams and a<br />

proven track record in delivering complex multichip<br />

solutions with first-silicon success. He is currently<br />

a senior director at Virage Logic. Previously, he<br />

held the position of vice president of engineering<br />

and CTO at Covalent Semiconductor Inc. Menon<br />

holds an MSEE with honors from the University of Kansas.<br />

Pallab's Place blog: RSA Conference 2008<br />

The keynotes at the RSA Conference held in San Francisco followed<br />

the theme that data security is no longer an option;<br />

it is required. This was reiterated several times by RSA<br />

president Art Coviello, who also noted that the dialog-box pop-up<br />

of “Are you sure?” before you submit information is the tech<br />

world’s equivalent of the old movie line “Do you feel lucky?”<br />

A later keynote by Department of Homeland Security Secretary<br />

Michael Chertoff identified today’s cyberspace threat as being<br />

on a par with 9/11. Replays of the conference keynotes are<br />

available after registration at the following site...Read more<br />

at: www.chipdesignmag.com/blogs ◆<br />



Methods-Tools 65-nm Design<br />

<strong>Design</strong>ing in 65-nm and smaller geometries can be<br />

intimidating. But engineers can ensure first-time silicon<br />

success by paying attention to certain aspects of their design<br />

methodologies. Many of those methodologies have already<br />

been used for 130- and 90-nm designs. They simply need<br />

more attention at 65 nm and beyond (see Figure 1). Power<br />

management, for example, is a key aspect of design—especially<br />

if the application is battery powered. There are two kinds of<br />

power: active and leakage. A variety of techniques exist to<br />

manage both types.<br />

For active power, it’s essential to incorporate power-management<br />

techniques at the architectural level. Yet additional techniques are<br />

available during the physical implementation of the design. Because<br />

a significant portion of the power dissipated is due to net capacitance,<br />

high-activity signals are routed so as to minimize interconnect<br />

capacitance. In addition, logic is restructured to minimize the fanout<br />

of high-activity signals.<br />

During timing optimization, cell sizing and buffer insertion<br />

are performed with power taken into consideration. Because<br />

a significant portion of active power is in the clock network,<br />

several methods are used to reduce clock-tree power. Clock<br />

gating optimizes clock power while integrated clock-gate cells<br />

eliminate the need for close placement of the flip flop and gating<br />

logic. In addition, clock gating is integrated with clock-tree<br />

synthesis to optimize power with timing, skew, and insertion<br />

delay constraints. Other tools create custom flip flops to reduce<br />

power even further. Another useful method is using multiple<br />

voltage islands, as power is proportional to the square of the<br />

voltage. Yet that approach requires multiple power supplies and<br />

more than two generally isn’t practical.<br />
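Because dynamic power scales with the square of the supply voltage, even a modest island-voltage reduction pays off. A quick sketch using the standard P = αCV²f switching-power estimate (the activity factor, capacitance, and frequency below are made-up values for illustration):<br />

```python
def dynamic_power(alpha, c_farads, v_volts, f_hz):
    """Classic switching-power estimate: P = alpha * C * V^2 * f."""
    return alpha * c_farads * v_volts ** 2 * f_hz

# Hypothetical island: 20% activity, 10 pF switched capacitance, 500 MHz.
p_nominal = dynamic_power(0.2, 10e-12, 1.0, 500e6)   # 1.0 V island
p_scaled  = dynamic_power(0.2, 10e-12, 0.8, 500e6)   # 0.8 V island
saving = 1 - p_scaled / p_nominal
print(f"dynamic-power saving from the 0.8 V island: {saving:.0%}")  # 36%
```

Dropping an island from 1.0 V to 0.8 V cuts its dynamic power by 36%, which is why multiple voltage islands are attractive despite the extra supplies they require.<br />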

By Prasad Subramaniam<br />

Get <strong>Design</strong>s Right at 65 nm and Beyond<br />

Up-front planning and more attention to detail can help make the<br />

transition to new process nodes less daunting.<br />


Figure 1: A 65-nm flow must take into<br />

consideration low power, timing, and<br />

DFM issues.<br />

Leakage power is a concern as geometries shrink and the gate<br />

oxide thickness of the transistor and the operating voltage get<br />

smaller. At 65 nm and below, it is a significant component of<br />

the total power—especially at higher operating temperatures.<br />

The most effective way to reduce leakage power is to use power<br />

islands and shut-off sections of the design when they’re not<br />

active. This approach reduces total power consumption including<br />

leakage power. Two considerations need to be taken into account<br />

with power islands. First, critical data should be saved in data-retention<br />

flip flops before shutting down. When the main power<br />

island is shut down, its output signals also should go into a<br />

high-impedance state and leave the inputs of the active island<br />

in a floating state. Depending on the voltage level at these floating<br />

inputs, short-circuit currents could occur in the active island’s<br />

input stages. They would consume significant power. As a result,<br />

inputs are always gated with sleep-mode signals to ensure that<br />

the signals are always in a known state. The wake-up time is<br />

typically in the range of a few milliseconds.<br />

Another way to reduce leakage power is to use footer switches with<br />


the logic. In this design, the logic cells are connected to ground by<br />

a low-leakage footer switch. That switch is implemented using a<br />

high-threshold transistor. In the normal mode of operation, this<br />

transistor is on and the logic cells have a common virtual ground. In<br />

sleep mode, the footer switch is off—causing the logic to go inactive.<br />

Because the footer uses high-threshold transistors, it minimizes<br />

leakage in the logic circuitry.<br />

Although this approach reduces leakage, it negatively affects area<br />

and performance. To be effective, the footer switch is distributed<br />

across the layout. Each switch controls a group of logic cells.<br />

Additional resources are required for routing the sleep signal and<br />

virtual ground across all switches. The switch presence causes<br />

some degradation to the logic speed as well. As it is in the power-island<br />

scenario, retention flip flops are required. But the switches<br />

are typically fast with wake-up times in the range of µs.<br />

A third method of reducing leakage is by applying a back-gate bias<br />

to the substrate. The MOSFET has four terminals. Yet it’s typically<br />

treated as a three-terminal device by tying the source and substrate<br />

terminals together. By applying a separate voltage to the substrate<br />

terminal, the threshold voltage can be increased—thereby reducing<br />

leakage. This approach has area and performance penalties, however.<br />

An internal charge pump is required to generate the substrate voltage.<br />



Additional routing resources also are needed to distribute this voltage<br />

across the logic cells. Unfortunately, the logic slows down as well.<br />
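The leverage behind back-gate biasing is the exponential dependence of subthreshold leakage on threshold voltage. A rough sketch — the subthreshold-slope factor n, the thermal voltage, and the 50-mV shift are illustrative numbers, not data for any particular process:<br />

```python
import math

def rel_leakage(vth_volts, n=1.5, vt_volts=0.026):
    """Relative subthreshold leakage: I ~ exp(-Vth / (n * vT))."""
    return math.exp(-vth_volts / (n * vt_volts))

# Hypothetical: reverse back-gate bias raises Vth from 0.30 V to 0.35 V.
ratio = rel_leakage(0.35) / rel_leakage(0.30)
print(f"leakage falls to {ratio:.0%} of its unbiased value")
```

Because the dependence is exponential, a shift of only a few tens of millivolts in threshold voltage changes leakage by multiples — which is also why the accompanying speed penalty has to be weighed carefully.<br />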

The final method for reducing leakage is the use of multi-threshold logic<br />

cells. This approach invites very little penalty in area or performance.<br />

During the physical design process, multiple threshold-voltage libraries<br />

are used for logic synthesis and optimization. Standard or low-threshold<br />

devices are selected for critical portions of the design, while<br />

high-threshold cells are used in non-critical sections. Typically only 20%<br />

of the design uses the standard or low-threshold devices. The remaining<br />

cells are therefore high threshold, enabling lower leakage.<br />
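The arithmetic behind that 20/80 split is simple but worth making concrete. Assuming, for illustration only, that a high-threshold cell leaks about one tenth as much as a standard-threshold cell (the real ratio is process-dependent):<br />

```python
# Assumed for illustration: a high-Vt cell leaks ~10x less than a
# standard/low-Vt cell. The actual ratio varies by process.
leak_std, leak_hvt = 1.0, 0.1
frac_std = 0.20                    # the ~20% of cells on critical paths

mixed = frac_std * leak_std + (1 - frac_std) * leak_hvt
print(f"multi-Vt design leaks {mixed:.0%} of an all-standard-Vt design")
```

Under these assumptions, the mixed design leaks a bit over a quarter of what an all-standard-threshold design would — nearly a 4x reduction with essentially no area or speed cost on the critical paths.<br />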

TIMING SIGNOFF<br />

Typically, timing signoff is achieved by performing analyses at the<br />

fast and slow corners of a design. In 65-nm and lower geometries, the<br />

worst-case corner isn’t clearly defined. It also is design dependent—<br />

especially with a large operating-voltage range due to multiple voltage<br />

islands. The corner for the worst-case timing path becomes circuit<br />

dependent. In addition, the circuit timing behavior is no longer<br />

monotonic. As a result, there may be multiple conditions when the<br />

worst-case timing occurs. Such conditions arise due to temperature<br />

inversion, in which a transistor becomes slower at a lower temperature<br />

and affects the delay of a path depending on its location. Clearly,<br />

timing analysis needs to be performed at more corners.<br />

Signal integrity, crosstalk, and on-chip variations also become<br />

significant at smaller geometries. Variations arise from process,<br />

geometry, and power-supply changes due to IR-drop and temperature<br />

fluctuations. The relative magnitude of these variations can be as<br />

much as 20%. Although static timing analysis uses additional margins<br />

for on-chip variations, it could be overly pessimistic. In other words,<br />

achieving signoff may require additional area overhead or may simply<br />

not be possible. Statistical-timing-analysis techniques are emerging<br />

that promise to address both of these issues. These techniques treat<br />

each timing path as a statistical variable and determine the probability<br />

of it meeting the timing requirements.<br />
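The idea can be illustrated with a tiny Monte Carlo sketch. This is not how production statistical-timing tools work internally (they typically propagate delay distributions analytically), and all numbers below are invented, but it shows what "probability of a path meeting timing" means:<br />

```python
import random

def prob_meets_timing(nominal_ns, sigma_ns, clock_ns, trials=100_000, seed=0):
    """Treat one path's delay as a Gaussian random variable and estimate
    the probability that a random instance meets the clock constraint."""
    rng = random.Random(seed)
    met = sum(rng.gauss(nominal_ns, sigma_ns) <= clock_ns
              for _ in range(trials))
    return met / trials

# Hypothetical path: 0.9 ns nominal delay, +/-20% 3-sigma variation, 1 ns clock.
p = prob_meets_timing(0.9, 0.9 * 0.20 / 3, 1.0)
print(f"P(path meets 1.0 ns) ~= {p:.3f}")
```

Rather than padding every path with a worst-case on-chip-variation margin, the statistical view reports how likely each path is to close timing, letting the designer trade yield against area.<br />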

DESIGN FOR MANUFACTURABILITY<br />

Normally, design-for-manufacturability (DFM) rules are incorporated<br />

in a technology’s design rules. While it’s necessary for a design to<br />

pass the technology’s design rules, that requirement isn’t sufficient<br />

to achieve good yield. Although a design may pass a rule, how close<br />

it adheres to that rule and the statistical distribution of its deviation<br />

from the rule have an impact on yield. Many foundries don’t<br />

mandate DFM analysis for 90-nm and larger geometries. Yet it is<br />

critical for 65-nm and smaller geometries (see Figure 2). Performing<br />

DFM analysis for these designs can identify simple changes that will<br />

improve overall yield. DFM rules are useful in the early stages of a<br />

technology. As the technology matures, the manufacturing process<br />

is under tighter control and becomes more robust.<br />

Two types of DFM rules exist. The first set affects critical parameters<br />

of the circuit and its operation. It is used by extraction tools to more<br />

accurately model the physical layout. This set also is applied to cell<br />

libraries, analog circuits, and custom layout designs. The second set of<br />

rules isn’t critical. But it should improve yield.<br />

Figure 2: DFM-aware design flows are a necessity at 65 nm to address<br />

emerging physical effects earlier in the design process. (Flow steps shown:<br />

placed-and-routed design with native redundant vias and native wire spreading;<br />

CAA-driven wire spreading and widening; redundant-via insertion; recommended<br />

end-of-line rules; LPC hot-spot removal; CMP-driven metal fill; tape-out.)<br />

Many foundries perform DFM analysis for a design and make<br />

suggestions to improve yield. Some of those suggestions can be<br />

implemented by modifying the layout without increasing the design<br />

area. Such capabilities also have been introduced in the layout tools<br />

based on a target foundry. These can be applied throughout the<br />

design flow—from cell level to completed chip. Wire spreading and<br />

wire widening, for example, can be performed to reduce critical areas<br />

during multiple stages of the flow. Examples of such stages include<br />

global and detailed routing. Lithography process check (LPC) is an<br />

analysis that’s superior to rule-based systems, which don’t address<br />

design complexity and variability well. By using a model-based<br />

approach that utilizes lithography simulation, these tools provide<br />

better accuracy. Lithography-aware routing and extraction provide a<br />

more robust design and sign-off.<br />

In summary, designing in 65 nm and beyond isn’t as hard as one<br />

would imagine. There are more steps with more pitfalls. As long<br />

as the designer is aware of them and proactively addresses them<br />

in the methodology, however, he or she has a good chance of<br />

first-time silicon success. ◆<br />

Dr. Prasad Subramaniam is responsible for<br />

developing the technology platforms for IC design<br />

at eSilicon. From 1982 to 1998, he was with Bell<br />

Laboratories, where he held a variety of technical and<br />

management positions culminating as the head of<br />

analog and RF CAD. Subramaniam joined Cadence<br />

<strong>Design</strong> Systems as VP of R&D in 1998. He is a senior member of the<br />

IEEE and has published over 40 papers in technical conferences and<br />

journals. Subramaniam holds a PhD in electrical engineering from the<br />

State University of New York at Stony Brook.<br />



Flexible Radio MIXED SIGNAL<br />

Cellular Standards May Demand Alternatives to SDR<br />

Is there a better way to meet growing consumer demands<br />

than relying on software-defined radios?<br />

Cellular standards continue to evolve—a trend that was first seen<br />

in the second-generation (2G) space with GSM developing<br />

into GPRS and then EDGE. Now, a similar progression is occurring<br />

for the third generation (3G) as Wideband Code Division Multiple<br />

Access (WCDMA) advances toward High Speed Packet Access<br />

(HSPA). Multi-mode WCDMA became a mainstream standard<br />

in 2007, while HSPA provides carriers with a relatively pain-free<br />

upgrade path to a more efficient network capable of supporting<br />

high- and low-data-rate traffic. In addition, WCDMA provides<br />

support for high-speed data by bundling multiple data channels<br />

and using spread-spectrum hybrid-phase-shift-keying (HPSK)<br />

modulation. The spread-spectrum modulation and orthogonal<br />

coding allow multiple users to share the same radio channel.<br />

Moreover, the wideband signal helps mitigate frequency-selective<br />

fading. As a result, WCDMA ushers in the packet data services<br />

needed for Web surfing and file transfer. It also provides multimedia<br />

broadcast multicast services (MBMSs), such as mobile TV.<br />

Figure 1: This graphic depicts the evolution of cellular standards.<br />

The development of all of the cellular standards is shown in Figure 1.<br />

This graphic illustrates the two main branches of standards: the CDMA-<br />

and GSM-based branches. It also shows how they compare in terms of<br />

data rates. HSPA was developed on the existing WCDMA framework.<br />

Compared to WCDMA, it provides the user with higher data rates,<br />

reduced latency, and improved network efficiency. HSPA is divided<br />

into two sections: High-Speed Downlink Packet Access (HSDPA)<br />

and High-Speed Uplink Packet Access (HSUPA). HSDPA, which is<br />

part of 3GPP Release 5, improves the downlink performance. As part<br />

of 3GPP Release 6, HSUPA enhances the uplink performance.<br />

HSDPA is now widely deployed across the world. It provides<br />

theoretical data rates for downloads (web browsing) of 14.4 Mbps,<br />

which is achieved through the use of a 16-point quadrature-amplitude-<br />

modulation (16-QAM) scheme. QAM allows four bits of data to be<br />

transmitted per symbol compared to two bits for quadrature phase<br />

shift keying (QPSK)—the modulation scheme used for WCDMA.<br />

HSDPA is complemented by HSUPA, which is being quickly<br />

deployed and provides theoretical upload speeds of 5.7 Mbps. In<br />

addition to improved data rates, HSPA makes use of a number of<br />

other features that enhance network efficiency. The resulting benefits<br />

include faster responses to changing radio conditions, improved<br />

resource scheduling, and higher efficiency in error processing.<br />
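The gain from the modulation change alone is easy to quantify: a constellation with M points carries log2(M) bits per symbol. A minimal sketch (the 14.4-Mbps headline figure also depends on coding rate and channel bundling, which are not modeled here):

```python
import math

def bits_per_symbol(points: int) -> int:
    # An M-point constellation carries log2(M) bits per symbol.
    return int(math.log2(points))

qpsk = bits_per_symbol(4)    # WCDMA's QPSK: 2 bits per symbol
qam16 = bits_per_symbol(16)  # HSDPA's 16-QAM: 4 bits per symbol
print(qam16 / qpsk)          # 2.0x the raw bits per symbol
```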

For its part, HSPA provides consumers with their first real taste<br />

of DSL-like data rates for mobile devices. With these data rates,<br />

consumers will—for the first time—see comparable performance<br />

between the speeds of their home and mobile data connections.<br />

Beyond HSPA, the evolution continues with HSPA+ followed<br />

by Long Term Evolution (LTE). Both of these technologies offer<br />

further improvements with lower latencies, higher data rates, and<br />

improved network coverage. In parallel to this is the Chinese TD-<br />

SCDMA standard, which exhibits very similar characteristics to<br />

those of WCDMA. This plethora of air standards—together with<br />

the increasing global penetration of cellular networks—requires<br />

versatile solutions that can cover multiple modes and multiple bands.<br />

At the same time, such solutions must meet the strict performance,<br />

cost, and size constraints imposed by the handset providers and—<br />

ultimately—the consumer.<br />

In summary, GSM/EDGE/HSPA/LTE provide an integrated path<br />

for the evolution of wireless communications that efficiently supports<br />

increasing data rates. Moreover, the connection between these networks<br />

and the requirement for backward compatibility make multi-mode<br />

devices essential. With this multi-mode requirement and the desire to<br />

use a single platform that can cover all geographic regions, traditional<br />

architectures have been stretched to the breaking point. To support the<br />

various performance requirements, current solutions either use suboptimal<br />

implementations, which trade off performance, or multiple<br />

architectures within the same device that increase cost. These tradeoffs<br />

have led to renewed interest in the concept of a software-defined radio<br />

(SDR) and its viability for cellular applications.<br />

According to the SDR Forum (www.sdrforum.org), SDR is the<br />

panacea of RF design: “SDR technologies provide software control<br />

of a variety of modulation, interference management and capacity<br />

enhancement techniques over a broad frequency spectrum (wide<br />

and narrow band).” This technology allows a single radio to support<br />



anything from cellular to military applications. “SDR” is also an<br />

overused term, as there are now many “SDR solutions” entering<br />

the market. Yet 99% of these products are in fact software-defined<br />

basebands, which don’t provide the RF radio portion. As a result,<br />

they’re only part of the overall solution.<br />

Traditionally, SDR has been plagued by many issues, such as being<br />

cost prohibitive and having lackluster performance. Such aspects<br />

explain why it has never been implemented in a handset. There<br />

are many ways to create an SDR product and there is no single<br />

correct SDR architecture. At one end of the spectrum, companies<br />

are working on developing pure digital SDR solutions that require<br />

high-performance digital-signal-processing (DSP) techniques.<br />

These solutions suffer from high current consumption, linearity<br />

constraints, and noise/spurious-emission issues. At the other end<br />

are the solutions that rely predominately on analog circuitry, which<br />

is reconfigured by using a simple DSP or state machine. To meet<br />

the diverse needs of an SDR, these solutions have a tremendous<br />

amount of circuit overhead. This overhead leads to uncompetitive<br />

solutions with respect to size, cost, and current consumption.<br />

In all cases, developing a radio that has the capability of reconfiguring<br />

itself to meet the broad range of modes and frequencies required for<br />

an SDR has a negative impact on performance, size, cost, or any<br />

combination of the three. The viability of a true SDR solution is<br />

therefore difficult to determine. After all, consumers have become<br />

accustomed to end products that meet their expectations.<br />

A FLEXIBLE APPROACH<br />

Cellular standards are well-defined and the rollout of next-generation<br />

technologies is relatively well mapped out. By focusing<br />

only on the cellular standards and frequencies, a solution’s required<br />

flexibility can be constrained. Traditionally independent paths have<br />

been used to transmit GSM/EDGE and WCDMA, which leads to<br />

very little design reuse between the two paths. To address this issue,<br />

RF semiconductor company Sequoia Communications created<br />

FullSpectra™, a multimode architecture designed to support<br />

cellular standards. The vision was to provide a single RF path that<br />

could support both narrowband and wideband modulation schemes<br />

while being linear enough to meet next-generation requirements.<br />

The only transmit architecture that met these requirements was<br />

polar modulation (see Figure 2).<br />
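The core signal-processing step behind polar modulation can be sketched in a few lines: the Cartesian I/Q baseband signal is decomposed into an amplitude (envelope) path and a phase path, which the transmitter processes separately and recombines at the power amplifier. This is a conceptual sketch only; a real polar transmitter must also handle the bandwidth expansion of the two paths, delay matching, and more.

```python
import math

def iq_to_polar(i: float, q: float):
    # Envelope path: sqrt(I^2 + Q^2); phase path: atan2(Q, I).
    amplitude = math.hypot(i, q)
    phase = math.atan2(q, i)
    return amplitude, phase

# A unit-power sample at 45 degrees:
a, phi = iq_to_polar(1.0, 1.0)
print(a, phi)  # ~1.414, ~0.785 rad
```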

Polar modulators have already been verified to provide very low-noise<br />

and high-efficiency solutions for narrowband modulation<br />

standards, such as GSM and EDGE. The difficulty, however,<br />

was extending it to support the dynamic range and wideband-modulation<br />

requirements of next-generation standards while<br />

maintaining the performance achieved in narrowband solutions.<br />

The first commercial instantiation of the FullSpectra architecture<br />

is the company’s SEQ7400 multimode RF transceiver. In this case,<br />

the common transmit polar modulator has been proven to meet the<br />

performance requirements of GSM, GPRS, EDGE, WCDMA,<br />

HSDPA, HSUPA, and TD-SCDMA, which means that a single<br />

cost-effective product can potentially be configured to support any<br />

or all of these cellular standards.<br />

Figure 2: The development of a single multi-mode RF path is shown here.<br />

Due to the inherently low-noise nature of polar modulation and<br />

the development of innovative digital-processing techniques, these<br />

implementations promise to eliminate transmit surface-acoustic-wave<br />

(SAW) filters. When combined with the single<br />

path architecture, such a solution also paves the way for the use of<br />

multimode power amplifiers. Such solutions have great potential for<br />

the constantly evolving cellular industry, which must provide higher<br />

data rates and increased geographic coverage while maintaining<br />

backward compatibility with old technology. These developments<br />

translate into higher data rates, lower latencies, and<br />

improved coverage for the consumer.<br />

To support the growing number of modes, the ideal solution<br />

should boast sufficient flexibility while satisfying the performance<br />

and cost targets that are now expected by the consumer. Originally,<br />

this dilemma appeared to provide an opportunity for the growth<br />

of software-defined radios. But if SDR solutions cannot overcome<br />

the known tradeoffs in performance and cost, other approaches will<br />

emerge, such as the polar modulation architecture discussed above,<br />

which has been shown to satisfy these requirements. ◆<br />

Duncan Pilgrim brings over 14 years of RF engineering<br />

and marketing experience to Sequoia Communications,<br />

where he serves as director of product marketing. He<br />

completed his Executive Fast Track MBA in 2005 from<br />

Wake Forest University, and received a bachelor’s degree<br />

in electronic engineering from the University of Birmingham in the U.K.<br />




DOT.ORG<br />

The Second Commandment for Effective Standards<br />

By Karen Bartleson<br />

Standards committees are all about creating technically sound<br />

standards, right? Wrong! There are more aspects to standards<br />

committees than just producing the standard itself. Over the years, I’ve<br />

learned a lot about these other, non-technical aspects. I’m currently<br />

rolling out my “Ten Commandments for Effective Standards” in my<br />

blog, The Standards Game, at www.synopsysoc.org/thestandardsgame.<br />

The “Second Commandment for Effective Standards” deals<br />

with a topic that generates more debate in the standards arena<br />

than just about anything else.<br />

Before I say more, however, I must give a disclaimer: I’m not a lawyer<br />

and I’m not giving anyone legal advice. That being said, here is<br />

Commandment #2: Do Not Mix Patents and Standards.<br />

Participating in standards committees can have implications<br />

for a company’s patent portfolio. It’s possible that company<br />

employees who work on standards can inadvertently create<br />

intellectual-property (IP) ramifications for their employers. In<br />

simple terms, it’s cheating to help develop a standard, not reveal<br />

associated patents, and then assert patent rights against others<br />

who use the standard. Famous lawsuits continue to show that<br />

companies cannot introduce patents into the standards arena<br />

and automatically expect to retain rights to assert those patents.<br />

Whether the lawsuits are won or lost, there’s a significant cost to<br />

the companies involved as well as the industry at large.<br />

The phrase that’s often used for patents that have standards<br />

implications is “essential patent.” To make use of the standard,<br />

the patent would necessarily be infringed upon. If a company<br />

owns an essential patent and an employee of that company<br />

participates in a related standards committee, there’s a risk that<br />

the company could lose the IP rights provided by the patent.<br />

Yet patents that are related to a company’s product, which<br />

complies with a standard, can be a different situation. Product<br />

implementations that use the standard belong to the developer.<br />

If those implementations are copied, the developer can be<br />

entitled to assert IP rights.<br />

Some individuals, companies, and organizations have tried<br />

to address patents mixed with standards. They sought to<br />

preserve their essential patent’s IP rights while contributing<br />

to a standard. Complicated proposals to both license and<br />

require cross-licensing have been made. So far, however, these<br />

proposals have only caused confusion while derailing progress.<br />

There have even been attempts to pressure or trick companies<br />

into relinquishing their patent rights.<br />

On the positive side, there have been companies that elected to<br />

withdraw from standards committees to preserve their patent<br />

rights. In other instances, companies have made conscious decisions<br />

to forgo their IP rights in support of an important standard.<br />

The ideal situation would be for all standards (including formats,<br />

languages, databases, and APIs) to be free of IP and patent<br />

issues. There should be no essential patents from the beginning.<br />

Alternatively, essential-patent owners should be willing to offer<br />

their patents up if they want to participate in standards creation. If<br />

they wish to protect their IP rights, the companies holding essential<br />

patents should be forthcoming about them. Or they shouldn’t<br />

participate in developing standards associated with the patents.<br />

What should a person do if he/she wants to participate on<br />

a standards committee and represent a company that has a<br />

patent portfolio? It’s a very good idea to ask for advice from<br />

company legal counsel before starting work in a standards<br />

body. Standards organizations usually have policies to address<br />

patents. Some won’t accept donations of patented technology.<br />

In addition, most—if not all—standards organizations don’t<br />

require participants to conduct patent searches. Legal counsel<br />

can interpret these policies and help determine how to proceed.<br />

A company could choose to not participate at all. Or it might<br />

decide to contribute its essential patents to the standards effort<br />

for the good of its customers and the industry. ◆<br />

Karen Bartleson contributes to several electronicdesign<br />

standards organizations, drawing on 27 years<br />

of experience in the semiconductor industry. She also<br />

is senior director of interoperability at Synopsys.<br />




The SPIRIT<br />

Consortium Guide



IP-Reuse: It’s Time To Get Serious!<br />

By John Wilson, Mentor Graphics<br />

Sometimes change creeps up on us<br />

in a subtle and imperceptible way<br />

and, at some point in time, the effects<br />

seem to catch everybody by surprise<br />

and require wholesale reassessment<br />

of the ways of working. This type of<br />

significant change has crept up on the<br />

silicon design community in the last 5<br />

years or so. It was such a non-event that there were no big<br />

announcements and no earth-shattering technical papers,<br />

but the change is starting to have massive repercussions in<br />

the electronics industry.<br />

So, what happened? Well - silicon capacity increased a bit!<br />

“Duh - so what?”, I hear you say. “Moore’s Law has been<br />

with us for decades, and if increased silicon capacity caught<br />

anybody by surprise then they might possibly qualify as the<br />

most insular and out-of-touch engineer on the planet”.<br />

The recent increases in silicon capacity seem to have propelled<br />

us past one of those subtle threshold points. For a compelling<br />

number of designs, designers are no longer constrained by<br />

silicon capacity. If you can conceive a design, there is a<br />

pretty good chance that you can fit it onto the available silicon.<br />

And this simple step-change is starting to have massive<br />

repercussions on the way that people create designs.<br />

First of all, designers used to spend a lot of time optimising<br />

designs to minimise gate-count and silicon area. There are still<br />

good reasons to optimise designs, but silicon area concerns are<br />

no longer anywhere near the top of that list.<br />

Removing constraints imposed by silicon capacity has<br />

enabled designers to use much bigger building blocks<br />

for their designs. It used to be transistors, then gates,<br />

then MSI. Today, the building blocks are often major<br />

functional subsystems in their own right: USB, Ethernet,<br />

PCI Express, processors, SATA. Nobody cares if a USB<br />

module uses up 30K or 40K gates - just as long as it works<br />

properly in the design.<br />

And that’s the key challenge facing designers today: how<br />

can they use pre-existing IP in new designs and get the<br />

designer productivity boost to enable them to make most<br />

use of the available silicon capacity?<br />

In the past, integrating building block modules together was<br />

done primarily at the electronic level. But no longer.<br />

Integrating an ethernet controller, USB interface and<br />

processor together requires electronic compatibility (of<br />

course!), and also requires compatibility at the software<br />

level (are there compatible drivers for the USB and Ethernet<br />

blocks for the RTOS that I want to run on the processor?).<br />

Why waste valuable design time to develop new software<br />

stacks for a module, when I can just choose an alternative<br />

that is good enough, and has the right software already<br />

available? In fact, software compatibility is such a big issue<br />

that the primary factor in choosing the IP may not be the<br />

HDL in which it is written, or the simulator on which it<br />

was verified, but compatibility with an existing OS for<br />

which existing software applications may already exist.<br />

Verifying those IP modules is relatively easy when looked<br />

at as standalone modules, but is fantastically more difficult<br />

when the IP modules are integrated into a system. If two<br />

different verification strategies were used by the different IP<br />

suppliers, it is asking a lot of the system integrator to learn<br />

enough about the IP to recreate the verification environment<br />

in a way that is compatible with the current design. The effort<br />

required is often comparable with developing that module<br />

from scratch. In fact, it can be worse than that. Because<br />

IP providers want the biggest possible potential market for<br />

their IP, it is often highly configurable. And that causes an<br />

exponential increase in verification complexity.<br />



The problem caused by stepping over that silicon capacity<br />

threshold – in truly building systems-on-chip – is that the<br />

information used to make good choices about integrating<br />

IP into designs does not exist in one place (in a single<br />

information ‘domain’). <strong>Design</strong>ers are challenged to<br />

evaluate information across multiple information domains<br />

simultaneously to enable the efficient selection of IP for<br />

use in a new design. IP reuse has been a technology that<br />

is full of promise but difficult to deploy.<br />

That’s the challenge to which IP-XACT, the XML Schema<br />

format from The SPIRIT Consortium, provides solutions.<br />

And, in providing these solutions, IP-XACT seems to be<br />

opening up new insights into design creation and processing<br />

which have just not been practicable before.<br />

Here’s the basic premise of IP-XACT : there is a lot of<br />

information associated with any IP module, spread across<br />

a wide range of information domains. It is impossible<br />

to predict which information will be important to any<br />

individual designer. This IP will be used in many different<br />

designs, in different design flows and using different<br />

design and verification processes.<br />

So IP-XACT says, “let’s just write down everything we<br />

know about the IP module, enabling the designer to use<br />

that information quickly and effectively. We really don’t<br />

know or care how that IP came into existence - here is<br />

everything we know about this IP.”<br />

This is not rocket science. <strong>Design</strong>ers have recognised the<br />

value of this sort of capability for many years - it’s called<br />

a databook. I still have my orange TI TTL databooks<br />

from twenty years ago on my shelf, but they are formatted<br />

completely differently from the PDF files that I used to<br />

learn about ARM processors today, so there is no easy<br />

and consistent way to locate the equivalent information<br />

in these different databooks.<br />

IP-XACT uses XML. This enables us to label all the<br />

different data in a consistent and machine-readable way,<br />

so it’s a lot easier to understand IP from many different<br />

sources. Using machine-readable XML data means that<br />

many of the design processes can be highly automated<br />

(that is the purpose of modular program modules called<br />


generators). This enables <strong>Design</strong> Environments (the tools<br />

that understand IP-XACT databooks and generators) to<br />

offer rapid design creation and verification possibilities using<br />

a range of IP in many different formats, in a way that can be<br />

easily customised for individual design requirements.<br />
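The machine-readable labelling the article describes can be sketched with a tiny example: a generator queries an XML component description for the facts it needs. The fragment below is hypothetical and heavily simplified; real IP-XACT 1.4 files use The SPIRIT Consortium's XML schema and namespaces, and the element names here are illustrative only.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified fragment in the spirit of an IP-XACT
# component description (illustrative element names, no real schema).
COMPONENT = """
<component>
  <vendor>example.com</vendor>
  <name>usb_ctrl</name>
  <version>1.0</version>
  <busInterfaces>
    <busInterface name="slave0" busType="AMBA_AHB"/>
  </busInterfaces>
</component>
"""

root = ET.fromstring(COMPONENT)
# Because the data is consistently labelled, a generator can query it
# directly instead of a human reading a PDF databook:
name = root.findtext("name")
bus = root.find("./busInterfaces/busInterface").get("busType")
print(name, bus)  # usb_ctrl AMBA_AHB
```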

So for most designers, it’s simply a case of dragging and<br />

dropping the IP modules into a design, configuring them,<br />

and then invoking the generators to turn that XML data into<br />

real life designs. If this sounds too fantastic to be true, then<br />

please tell that to my mum (a retired nurse, with just enough<br />

technical knowledge to turn on computers and mobile phones)<br />

who used IP-XACT to create and verify an ARM 9-based<br />

USB subsystem, and ran enough software on the system to be<br />

sure that the system was functioning correctly. She did this<br />

in about 15 minutes (10 of which were me explaining all the<br />

wiggly waveforms displayed in the HDL simulator).<br />

“If she had chosen a PowerPC processor, she would<br />

have ended up using a different USB (one recognized<br />

as compatible with PowerPC, not AMBA AHB), and<br />

different compilers and different simulators...”<br />

I’m deadly serious here!<br />

My mum’s success all came down to great preparation by<br />

somebody else. The various IP experts had encapsulated<br />

their knowledge in an IP-XACT way, which meant that<br />

when she chose to include an ARM processor in the design,<br />

the information about the supported bus interface (in this<br />

case, AMBA AHB) and software compilers was immediately<br />

available. That, in turn, invoked the choice of compatible<br />

USB and also how to compile the USB drivers to work<br />

with the ARM processor. It also enabled a generator with<br />

AMBA AHB specialist knowledge to create the specialist<br />

interconnect HDL, check that suitable memory had been<br />

connected into the system, offer up a list of verification<br />

IP that could be useful in this design, and generate some<br />

HTML design documentation and testbench. Finally, the<br />

design was automatically compiled for her chosen HDL<br />

simulator and verification system (no - I don’t know how<br />

she selected one simulator from another either - I think she<br />


picked a name she liked from the list of simulators that were<br />

capable of running the mixed VHDL/Verilog design she had<br />

created) and she started simulating.<br />

If she had chosen a PowerPC processor, she would have ended<br />

up using a different USB (one recognized as compatible with<br />

PowerPC, not AMBA AHB), and different compilers and<br />

different simulators, but the result would have been the same -<br />

a functioning processor-based USB subsystem. As she learned<br />

more about USB subsystems, I bet she would also have gone<br />

back and chosen some different configuration options that<br />

would have produced even better results.<br />

And using IP-XACT, it means that the USB subsystem<br />

design that she created can now be shipped as a configured<br />

IP-XACT component for somebody else to use in a larger<br />

design (given her design experience, you see why I emphasize<br />

that automated verification by the recipient of this module is<br />

probably a good thing!).<br />

The ‘big picture’ here is that IP-XACT can enable designers<br />

to create and verify big designs very quickly, but only if the<br />

expert has taken the time to prepare and check that data in the<br />

first place. This information preparation is not a trivial task.<br />

And it is only worthwhile investing in that data preparation<br />

if there is a reasonable expectation that the IP will be used<br />

in many different designs. For quality IP providers, this can<br />

be a great way to put their IP directly into a working design.<br />

I say ‘quality’ because IP-XACT automation means good IP<br />

can be demonstrated to be working quickly, and bad IP will<br />

be shown to fail equally fast.<br />

Adopters of IP-XACT are reporting some interesting data.<br />

The IP preparation process is quite rigorous (IP providers<br />

don’t know how the IP is going to be used in a design), and<br />

one consequence is that the delivered IP-XACT enabled IP is<br />

of measurably higher quality.<br />

Here’s some information for proponents of all-in-one ESL<br />

top-down design methodologies (by the way, IP-XACT is<br />

completely design and process agnostic, so any mix of ESL,<br />

RTL and gates is just fine). The recently released IP-XACT<br />

1.4 XML schema has extended the data model to encompass<br />

transactional interfaces. In trying to integrate ESL views of<br />

IP with RTL views (IP-XACT information domains mean<br />

that each IP might have multiple, equally valid, sets of data in<br />

different information domains), it became clear RTL and ESL<br />

data was being used inconsistently and there were only limited<br />

circumstances where the data could be considered different<br />


views of the same IP. This was initially perceived as a<br />

problem - until it was realised that recognising, and working<br />

with, the differences opens the way to sophisticated design<br />

transformations that encompass both ESL and RTL and<br />

lots more besides. Do you want to map your RTL design onto<br />

an ESL model format that enables power and performance<br />

estimation? IP-XACT may be able to automate the design<br />

transformation for you. And writing down the transform<br />

means that the same automated processes can be applied<br />

to many different designs automatically. This means that<br />

there is no real concept of top down (or bottom up, or<br />

middle out, for that matter), just transformations.<br />

Is the IP-XACT data model complete? The answer is no,<br />

and it probably never will be. There will always be new<br />

information domains that can be added to the IP-XACT<br />

model (the process is underway at the moment to enhance<br />

the register description capabilities, and debug capabilities<br />

and verification options). IP-XACT has been built in a way<br />

that external groups can also add their own data, for eventual<br />

inclusion in the IP-XACT standard.<br />

But the well-established core framework enables<br />

designers (and my mum) to exploit the expert knowledge<br />

already embedded into IP-XACT, to focus on the key<br />

design decisions and automate the non-value added<br />

design tasks, and means that sophisticated designs that<br />

fully utilize the available silicon capacity can be created<br />

more quickly, reliably and productively than has ever<br />

been possible before.<br />

The next opportunity to see the range of IP-XACT enabled tools,<br />

IP and design processes in action is at The SPIRIT Consortium<br />

Open General Meeting at the 45th <strong>Design</strong> Automation Conference<br />

on Monday evening, June 9th starting at 6:00pm in the Anaheim<br />

Hilton. For more details and to register to attend, please visit<br />

www.spiritconsortium.org/events ◆<br />



Duolog Technologies<br />


The Collaborative <strong>Design</strong> Automation Company<br />

Our Vision Duolog, The Collaborative <strong>Design</strong> Automation Company, is a pioneering developer<br />

of EDA tools that enable the flawless and rapid integration of today's increasingly<br />

complex SoC, ASIC and FPGA designs. Duolog’s Socrates <strong>Chip</strong> Integration<br />

Platform enables IC designs that are Perfect By Construction.<br />

Value Proposition The world's leading IP and IC/SoC development companies rely on Duolog tools<br />

to automate their chip integration processes - eliminating bugs, shrinking design<br />

cycles and drastically reducing the risk of costly delays and respins.<br />

Products Socrates Integration Suite – a comprehensive IC integration framework:<br />

www.duolog.com<br />

Spinner – Fully automated I/O fabric generator that eliminates I/O bugs<br />

from top-level IC integration<br />

Bitwise – Powerful register management tool that facilitates hardware/<br />

software interface collaboration for IPs and systems<br />

OCP Toolkit – a versatile suite of tools for OCP analysis and debug:<br />

OCP-Tracker – Performance analysis tool for OCP-based systems with<br />

powerful filtering and visualization capabilities<br />

OCP-Conductor – OCP transaction visualization tool to greatly ease system<br />

debug and provide intuitive transaction analysis<br />

R&D Duolog tools are designed by “IC designers for IC designers” to prevent and<br />

eliminate the design flaws that plague the integration of today’s complex chip<br />

designs. Duolog has over 70 engineers at Development Centers in Dublin and<br />

Galway, Ireland and Budapest, Hungary.<br />

Sales & Support Dublin, Ireland • Nice, France • Bangalore, India • San Jose, California<br />

Contact www.duolog.com info@duolog.com<br />

“TI has been using Duolog's Integration tools since 2002 on our OMAP SoC<br />

developments. Spinner has fully automated the creation and management<br />

of the IO layers and has greatly enhanced the quality and timeliness of our<br />

deliverables."<br />

Ziad Mansour<br />

OMAP Program Manager, Texas Instruments<br />

Europe: +353 12178400 USA: +1.408.356.7702<br />

About Duolog<br />

Founded in 1999, Duolog Technologies, The Collaborative <strong>Design</strong> Automation Company, is a pioneering developer of groundbreaking EDA tools<br />

that enable the flawless and rapid integration of today's increasingly complex SoC, ASIC and FPGA designs. Duolog’s Socrates <strong>Chip</strong> Integration<br />

Platform enables IC designs that are Perfect By Construction<br />

All information is subject to change without notice. © 2008 Duolog Technologies Ltd. All Rights Reserved<br />




Evatronix<br />

Complete and integrated IP solutions<br />

Evatronix, as a SPIRIT member, plays an active role in introducing the latest IP-XACT standards to the IP industry. As a leading IP provider, the company has<br />

already implemented the IP-XACT packaging in the design and verification processes.<br />

A set of deliverables for one of our USB products, USB-OTG Controller, has been enhanced by IP-XACT 1.2 and 1.4 packaging to facilitate integration of the component<br />

in the SoC developer’s environment. RTL and TLM views, interface abstractors as well as verification components have all been described with IP-XACT meta-data.<br />
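Such a package centers on an IP-XACT component description in XML. The Python sketch below is illustrative only: the vendor, library, name, and version values are invented, the namespace handling is simplified, and a real deliverable carries far more detail (model views, file sets, port maps) and is validated against the IP-XACT schema. The point is only that the meta-data is plain, machine-readable XML.

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative IP-XACT-style component description.
# The vendor/library/name/version values are hypothetical, and real
# IP-XACT 1.4 files are schema-validated and far more detailed.
NS = "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1.4"
ET.register_namespace("spirit", NS)

def tag(name):
    # Qualify an element name with the SPIRIT namespace.
    return "{%s}%s" % (NS, name)

component = ET.Element(tag("component"))
for field, value in [("vendor", "example.com"), ("library", "usb"),
                     ("name", "usb_otg_ctrl"), ("version", "1.0")]:
    ET.SubElement(component, tag(field)).text = value

# One (empty) bus interface, standing in for the AHB slave port map a
# real description would carry.
bus_ifs = ET.SubElement(component, tag("busInterfaces"))
bus_if = ET.SubElement(bus_ifs, tag("busInterface"))
ET.SubElement(bus_if, tag("name")).text = "ahb_slave"

xml_text = ET.tostring(component, encoding="unicode")
print(xml_text)
```

An integration tool on the receiving end reads the same elements back out to discover what the component is and how to hook it up.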

IP-XACT packaging can also be provided with the other IP cores in our offer, listed below.<br />

Microprocessors & Microcontrollers<br />

We offer a wide range of 8-, 16- and 32-bit architectures to satisfy area or<br />

performance requirements of many SoCs.<br />

Our best-selling product, the R8051XC, is a fully customizable, silicon-proven microcontroller. It is also the heart of our pre-integrated solutions: the Embedded Internet and USB subplatforms.<br />

The R8051XC includes many additions to the standard 8051 architecture, which broaden its range of applications and increase performance. Proprietary features make CPU benchmarks run from 6.9 to 9.6 times faster than the Intel 80C51 at the same clock frequency.<br />

C68000, C80186 and C6502 are<br />

configurable equivalents to the<br />

microprocessors once made by<br />

Motorola, Intel and MOS<br />

Technology, respectively.<br />

Ethernet Solutions<br />

All of our Ethernet Media Access Controllers (MACs) are designed for long and trouble-free operation within the most complex SoCs.<br />

MAC-1G and its area-optimized version,<br />

MAC-1G-L (Lite), support 1000-Mbps networks, while MAC and MAC-L perform best in still-popular 10/100-Mbps connections.<br />

The Embedded Internet subplatform consists of the R8051XC<br />

microcontroller accompanied by the MAC-L controller, proprietary Hardware<br />

Acceleration module and a third-party TCP/IP software stack.<br />

A Physical Coding Sublayer (PCS) that was recently developed for the MAC-1G makes it an excellent choice for ICs that depend on optical fiber.<br />

USB Controllers<br />

The USB product line offers a full range of connection options, varying in speed and functionality. USB-IF certification guarantees the controllers’ comprehensive compliance with USB requirements.<br />

USB Full and High Speed controllers (namely CUSB and CUSB2) are<br />

supported by a range of dedicated applications. Both IP cores are available<br />

with the R8051XC microcontroller as development subplatforms.<br />

USB On-The-Go (OTG) is a dual-role component, ready to<br />

act as a host or a device, depending on its current<br />

function in the system.<br />

Evatronix, Electronic <strong>Design</strong> Department<br />

16 Dubois Street, Gliwice, PL43-300<br />

tel. +48 33 499 59 15<br />

fax. +48 33 499 59 18<br />

www.evatronix.pl<br />

ipcenter@evatronix.pl<br />

All the USB controllers have been used with a<br />

number of third-party PHYs. Evatronix<br />

performs USB-IF compliance pretesting<br />

to discover and solve any<br />

issues regarding interoperability of its<br />

controllers and third-party USB PHYs.<br />

Memory Controllers<br />

Our solutions allow the development<br />

of an SoC that makes use of every<br />

available memory type.<br />

The Static Memory Controller (SMC) fully supports static RAM, while the ATAIF Controller enables seamless integration of Parallel ATA drives.<br />

Both the SDIO and NANDFLASH controllers are designed for increasingly common flash memories. The SDIO controller, thanks to its compliance with the latest SDIO specifications, supports a large spectrum of SDIO applications: not only various memory cards, but also wireless networking enhancements such as Bluetooth or Wi-Fi, as well as other SDIO-based solutions such as GPS receivers, cameras, and TV tuners.<br />

Evatronix SA, headquartered in Bielsko-Biala, Poland, and founded in 1991, develops electronic virtual components (IP cores) along with complementary software and supporting development environments. The company also provides electronic design services. Its main design office location, Gliwice (Poland), guarantees easy access to the pool of talented graduates from the Silesian University of Technology. Evatronix IP cores are available worldwide through the sales channels of its strategic distribution partner CAST, Inc. (New Jersey, USA). In the EU countries (excluding the UK) and in Switzerland, Evatronix operates a direct sales channel. <strong>Design</strong> services are offered directly by Evatronix worldwide.<br />



Microprocessor Cores<br />

Power Architecture e200<br />

ColdFire Series<br />

C166<br />

TriCore<br />

CR16<br />

Nexus 5001 Debug for ARM®<br />

Multi-Core Debug System<br />

AMBA Subsystems<br />

Famous IP for your <strong>Chip</strong>s<br />

Silicon Valley HQ<br />

307 Orchard City Drive<br />

Suite 202<br />

Campbell, CA 95008<br />

Toll-Free: 800 289 6412<br />

Automotive Cores<br />

FlexRay<br />

CAN<br />

Multi-Link Interface<br />

(MLI)<br />

MicroSecond Channel<br />

(MSC)<br />

Famous IP Available Online<br />

We Proudly Support<br />

Consumer Cores<br />

Bluetooth<br />

MPEG2<br />

High Speed USB Hub<br />

Clock Generators (PLL<br />

replacement)<br />

www.ip-extreme.com<br />

Munich <strong>Design</strong> Center<br />

Betastrasse 9a<br />

85774 Unterföhring<br />

Germany<br />

Phone: +49 89 9954 8814<br />

Methodology, Services, Tools<br />

IP commercialization of<br />

captive IP<br />

IP design methodology<br />

deployment<br />

IP certification tools<br />

IP packaging tools<br />

IP repository and distribution<br />

system<br />

Tokyo <strong>Design</strong> Center<br />

Ichibancho MS Bldg. 5F<br />

17-6 Ichibancho, Chiyoda-ku<br />

Tokyo, Japan 102-0082<br />

Phone: +03-4334-3177



Magillem <strong>Design</strong> Services<br />

Magillem <strong>Design</strong> Services: Go with the Flow<br />

Magillem <strong>Design</strong> Services offers tools and services that drastically reduce the<br />

global cost of complex system design. The Magillem platform is designed to<br />

preserve independence from EDA vendors and is built on a globally adopted standard: SPIRIT IP-XACT.<br />

The Magillem platform, the leading SPIRIT tool suite, provides an integrated<br />

development environment to systems designers.<br />

Offering<br />

Our experience in both IT (Information Technology) and EDA (Electronic <strong>Design</strong><br />

Automation) allows us to help you efficiently and follow up on your projects.<br />

Customers have their own design environment and organization: it is key to<br />

provide services to help them fully deploy the best tools.<br />

MDS provides tools via Eclipse-based Magillem:<br />

- IP Handling<br />

- IP Integration<br />

- <strong>Design</strong> Flow supervision for ASIC and FPGA<br />

MDS provides expert services:<br />

- <strong>Design</strong> process audit<br />

- Custom tool development<br />

- <strong>Design</strong> Flow integration for ASIC and FPGA<br />

Solutions<br />

Unlike major EDA vendors, Magillem offers a solution that integrates<br />

existing tools while being unencumbered by proprietary software legacy.<br />

Magillem’s interpretation of the IP-XACT schema has been optimized thanks to input from major early adopters. Magillem has a long-term commitment to SPIRIT IP-XACT, and its technology roadmap is aligned with all future releases of the specification.<br />

Services<br />

An IP-XACT Compliance Lab<br />

We help companies assess their SPIRIT IP-XACT compliance:<br />

• Semiconductor manufacturers: compliance of the database and<br />

the design flow<br />

• EDA vendors: compliance of the tools interfaces with IP-XACT<br />

• IP Providers: compliance of the metadata description and<br />

configuration nutshell<br />

We audit the existing industrial flow and propose a work plan to<br />

adapt it to IP-XACT. We validate and verify the full compatibility<br />

of tools interfaces into a flow testbench.<br />

We test the IP deliverables against a benchmark for compliance using<br />

our SPIRIT PACKAGER and check IP integration properties onto a test<br />

system.<br />

Our offering includes software modules that guarantee early adopters full forward compatibility of their IP files with any future upgrade of the IP-XACT standard.<br />

A Custom Generators Factory<br />

Using our own versatile Magillem toolbox, we are well equipped to<br />

offer a wide range of services.<br />

Magillem <strong>Design</strong> Services<br />

161 West 54th Street<br />

Suite 202A<br />

New York, NY 10019 USA<br />

contact@magillem.com<br />

+1 646 226 3960 (tel)<br />

+1 212 292 3999 (fax)<br />


From:<br />

• Analyzing customers database specifics and customizing SPIRIT<br />

Packager accordingly<br />

• Implementing new features to meet customer’s requirements<br />

• <strong>Design</strong>ing complex IP specific configuration nutshells<br />

• <strong>Design</strong>ing and implementing system configuration dashboard and<br />

custom checkers<br />

To:<br />

• Streamlining the flow process<br />

• Developing and integrating specific point tools<br />

• Developing and integrating tools for the global architecture of a<br />

start-to-end ESL flow


Mentor Graphics Corporation<br />

Platform Express, Mentor Graphics’ Platform-Based <strong>Design</strong><br />

product, enables rapid design creation and verification through highly<br />

automated IP Reuse. The product uses the latest IP-XACT 1.4<br />

databook format from The SPIRIT Consortium, enabling designers<br />

to mix RTL and ESL components in the same designs.<br />

Mentor Graphics is the key developer of the core technology behind The<br />

SPIRIT Consortium’s IP-XACT specification and continues to advance<br />

the specification by developing design tools that bring the benefits of<br />

the specification to the end user.<br />

Using the tool and its large number of sophisticated generators, SoC designers can rapidly create and verify their system designs by automating complex, error-prone design creation, IP integration, power-domain creation, software generation, and verification steps, and by providing a configurable build environment to enable easy design hand-off. Based on standards<br />

throughout, Platform Express is supplied in Eclipse plug-in format, enabling<br />

the tool to operate as part of a larger, customized design environment.<br />


Platform Express provides a new set of generators that support 0-In®<br />

Checkerware verification IP, as well as PSL and OVL assertions. The<br />

product also features direct links to the Mentor Graphics Questa<br />

verification environment and provides a framework for enabling other<br />

verification formats.<br />

Mentor Graphics Corporation<br />

8005 S.W. Boeckman Rd.<br />

Wilsonville, OR 97070-7777<br />

USA<br />

Tel:503-685-7000<br />

Fax: 503-685-1204<br />

www.mentor.com<br />


Scarlet Code<br />

IP-XACT Components<br />

IP-XACT is a collection of real data and meta-data describing components, systems, and their implementations, allowing IP providers to create a standard, consolidated description of their IP components and systems. The standardised IP-XACT format improves downstream usage for system integrators and users of the IP.<br />

The data required for an IP-XACT file is itself sourced from many places<br />

and comes in several different formats, making IP-XACT creation an<br />

error-prone and difficult multi-step process.<br />

Component Foundry – eases your IP-XACT development flow<br />

Component Foundry is a low-cost IP-XACT component creation, viewing<br />

and editing tool, which will create IP-XACT descriptions straight from<br />

your legacy IP. Parsing information straight from Verilog file sets and<br />

PDF documents, it creates IP-XACT files in seconds.<br />

The intuitive GUI and semantic checker offer easy and accurate<br />

editing of IP-XACT components throughout their creation process,<br />

without any prior knowledge of XML! The tool provides wizards<br />

to help where user input is required, and highlights any missing<br />

referenced files, allowing the user to ensure IP-XACT descriptions<br />

are complete and accurate.<br />

Component Foundry offers a unique intuitive visualisation and<br />

edit capability for:<br />

• Component top-level overview<br />

• Component interfaces, including bus-component port mappings<br />

• Component channels<br />

• Component model, including signals, busses and their directions<br />

• Memory maps, including Address blocks, Registers & bitfields<br />

• Associated files and file structures<br />

• Address spaces<br />

• Bus Definitions<br />

• Semantic Rule-checking of all IP-XACT data<br />

• A library capability to manage IP-XACT Components and Bus Defs.<br />

Component Parser – create IP-XACT descriptions from legacy<br />

IP automatically<br />

Parse IP-XACT information from Verilog file sets and PDF documents,<br />

creating IP-XACT files in seconds.<br />

The Verilog parser:<br />

• Supports Verilog-1995 and Verilog-2001.<br />

• Handles module hierarchy and can locate the top module automatically<br />

• Reads top-level signal list<br />

• Identifies all modules within a design<br />

• Builds a complete file list with all `include files<br />

• Handles `defines and parameterisation<br />

• Creates the IP-XACT component models<br />

• Extracts Bus Interfaces from library bus definitions<br />

• Extracts Bus signal mappings<br />
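To picture the kind of extraction involved, here is a toy Python sketch (not Scarlet's parser) that pulls the module name and port list from a simple ANSI-style Verilog-2001 header with a regular expression. A production parser must of course handle the full grammar, hierarchy, `include files and `define macros; the module below is invented for illustration.

```python
import re

# Toy port extractor for a simple ANSI-style Verilog-2001 module header.
# The module itself is invented for illustration.
verilog_src = """
module uart_tx (
    input  wire       clk,
    input  wire       rst_n,
    input  wire [7:0] data_in,
    output reg        tx
);
endmodule
"""

def extract_ports(src):
    # Grab the module name and everything inside its port parentheses.
    m = re.search(r"module\s+(\w+)\s*\((.*?)\)\s*;", src, re.S)
    name, port_text = m.group(1), m.group(2)
    ports = []
    for decl in port_text.split(","):
        words = decl.split()
        if not words:
            continue
        direction = words[0]                  # input / output / inout
        ports.append((direction, words[-1]))  # last token is the port name
    return name, ports

name, ports = extract_ports(verilog_src)
print(name, ports)
```

From a port list like this, an IP-XACT component model (signals, directions, widths) can be emitted mechanically, which is exactly the step the parser automates.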

The PDF parser:<br />

• Extracts register names, descriptions, offsets, and access types.<br />

Note that the PDF parser is document-format dependent.<br />

Component Foundry – Compare IP-XACT against source data<br />

Component Foundry provides the facility to intelligently difference two<br />

IP-XACT component files, displaying the results in an easy-to-understand<br />

graphical format. This allows users to see at a glance the type, number<br />

and context of the differences between two IP-XACT component files.<br />

For users maintaining or creating IP-XACT component libraries, Scarlet’s<br />

XACT-Delta file comparison capability allows comparison of later-revision<br />

source files against controlled IP-XACT files.<br />
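The essence of such a comparison can be sketched in a few lines of Python (illustrative only; plain dicts stand in for parsed IP-XACT register elements, and the register names and offsets are invented):

```python
# Toy register-map differencing, in the spirit of the XACT-Delta
# comparison described above. Plain dicts stand in for parsed IP-XACT
# register elements; all names and offsets are invented.
def diff_registers(old, new):
    added   = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(r for r in set(old) & set(new) if old[r] != new[r])
    return {"added": added, "removed": removed, "changed": changed}

# Two revisions of a hypothetical component's register map.
rev_a = {"CTRL": 0x00, "STATUS": 0x04, "DATA": 0x08}
rev_b = {"CTRL": 0x00, "STATUS": 0x08, "IRQ_EN": 0x0C}

delta = diff_registers(rev_a, rev_b)
print(delta)
```

Reporting the type, number, and context of each difference, as the tool does graphically, is then a matter of presentation on top of a structural diff like this.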

Component Foundry – an Eclipse plug-in<br />

Component Foundry is supplied as an Eclipse plug-in, allowing users to focus on creating IP-XACT descriptions of their components while leveraging the full power of the Eclipse development environment.<br />

The Component Foundry GUI has been carefully designed to reflect the<br />

structure of the IP-XACT specification and allows the user to create accurate<br />

IP-XACT component descriptions in an intuitive, step-by-step manner.<br />

Jump start your IP-XACT flow today<br />

Component Foundry, including Component Parser, is available for immediate download and for a two-week free trial from the Scarlet website: www.scarletcode.co.uk. The toolset is licensed on a twelve-month rental basis, node-locked to a single named user. Other license options are available by negotiation; contact sales@scarletcode.co.uk.<br />



Synplicity, Inc.<br />

Introducing The ReadyIP Program From Synplicity<br />

Access. Evaluate. Integrate. Introducing the ReadyIP program<br />

– the industry’s first “try before you buy” design methodology for<br />

FPGA implementation. This program enables you to easily evaluate<br />

and integrate IP from 3rd parties such as ARM, CAST, Gaisler<br />

Research, and Tensilica, or any IP in The SPIRIT Consortium’s IP-XACT format into your design. This new capability is a standard<br />

feature of Synplicity’s Synplify Pro® and Synplify® Premier<br />

solutions starting with version 9.2.<br />

Key Benefits<br />

• Access and select IP directly from your Synplify Pro and/or Synplify<br />

Premier’s IP browser<br />

• Evaluate 3rd-party IP instantly from participating vendors<br />

by downloading it via links from the product browser, eliminating<br />

the need for complex licensing negotiations<br />

• Integrate internal or 3rd-party IP-XACT IP using the Synplify Pro<br />

and/or Synplify Premier solutions’ System <strong>Design</strong>er capability, or<br />

include the IP’s encrypted (or unencrypted) RTL in any Synplify Pro<br />

or Synplify Premier project<br />

How It Works<br />

The ReadyIP program gives you the freedom to evaluate and choose<br />

from a wide range of 3rd-party IP; then, once you have acquired a<br />


license for the IP, target the design to your choice of FPGA vendor device.<br />

This is accomplished using the new IP browser and System <strong>Design</strong>er tool<br />

(now standard in both Synplify Pro and Synplify Premier products).<br />

Benefits<br />

• Use 3rd-party FPGA IP from the most popular embedded IP and well-known<br />

peripheral providers, including ARM, CAST, Gaisler Research, and Tensilica<br />

• Try 3rd-party IP before you buy, without the need for complex licensing<br />

negotiations<br />

• Gain design performance insights from Synplify Pro/Synplify Premier<br />

timing reports<br />

• Use the IP in your FPGA of choice once you have acquired IP usage rights<br />

• Rapidly configure and assemble your design, including your own and/or<br />

3rd-party IP at the system level using Synplify Pro and Synplify Premier’s<br />

System <strong>Design</strong>er and then implement it in your FPGA of choice<br />

To learn more about ReadyIP, visit http://www.synplicity.com/readyip.<br />

Low Power Coalition<br />

Low-Power Information Model<br />

[Diagram: Low-Power Information Model – design hierarchy (modules, instances, nets, pins and ports) annotated with power and ground connections, bias supplies, power domains, operating corners, library sets, and isolation, retention, and level-shifter rules]<br />

• Purpose: Interoperable low-power design flows<br />

• Approach: Flow-based, user-centric, validation of standard with<br />

practical usage, supports wide range of power minimization<br />

techniques, from mobile devices to large servers<br />

• Status: CPF 1.0 released, CPF 1.1 in process, CPF Pocket Guide available, CPF tutorial online, CPF part of the reference flows of major fabs<br />

Synplicity, Inc.<br />

600 West California<br />

Avenue<br />

Sunnyvale, CA 94086<br />

www.synplicity.com<br />

info@synplicity.com<br />

www.si2.org<br />


LPC Members<br />

AMD<br />

Apache <strong>Design</strong><br />

ARM<br />

Atrenta<br />

Azuro<br />

Cadence<br />

Calypto<br />

<strong>Chip</strong>Vision<br />

Entasys <strong>Design</strong><br />

Freescale<br />

Golden Gate Technology<br />

IBM<br />

Intel<br />

LSI<br />

NXP<br />

Sequence <strong>Design</strong><br />

STMicroelectronics<br />

Virage Logic<br />



No Respins<br />

Compilation Can Play a Big Role<br />

in System Performance<br />

People tend to emphasize the effects of the processor<br />

architecture on performance and power consumption.<br />

Yet the coding and compilation methodology can also have significant effects on performance and power<br />

consumption. Most programs spend the bulk of their time<br />

in loops of some kind, which can execute millions of times<br />

a second. Generally, compilers use worst-case assumptions<br />

to determine what’s required to get the job done. This<br />

frequently results in redundancy in the compiled code—<br />

especially because of assumptions made about code and<br />

data in other modules.<br />

The problem is that such assumptions cannot be divined by<br />

conventional compiler technology, which only optimizes for each code<br />

module. Redundant code in a loop can result in the millions of wasted<br />

cycles that are spent executing the same function multiple times every<br />

time the loop is executed. If just six instructions are repeated inside the<br />

loop of a PIC16, which executes one instruction every four clock cycles, 24 extra clock cycles will be required per iteration.<br />

For a loop that executes 100,000 times per second, 2.4 million clock<br />

cycles will be wasted. In a 20-MHz microcontroller, for<br />

example, this translates into 12% of the total cycles.<br />
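The arithmetic behind these figures can be checked directly; a quick sketch, assuming the PIC16's four clock cycles per instruction (variable names are just for illustration):

```python
# Cost of redundant instructions inside a hot loop on a PIC16-class MCU.
# A PIC16 spends four clock cycles per instruction cycle.
CLOCKS_PER_INSTRUCTION = 4
redundant_instructions = 6
iterations_per_second = 100_000
cpu_hz = 20_000_000  # 20-MHz microcontroller

wasted_per_iteration = redundant_instructions * CLOCKS_PER_INSTRUCTION
wasted_per_second = wasted_per_iteration * iterations_per_second
fraction = wasted_per_second / cpu_hz

print(wasted_per_iteration, wasted_per_second, f"{fraction:.0%}")
```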

Interrupts also can have a large effect on MCU performance. Again,<br />

this is because of the worst-case assumptions used by conventional<br />

compilers. Ordinary compilers save every register that might be used<br />

by an interrupt because they have no way of knowing which registers<br />

will or will not be used by a given interrupt. Most PIC16 compilers,<br />

for instance, save 8 bytes of data for every interrupt. They therefore<br />

require 168 cycles per interrupt. A 480-kbps serial-communication<br />

port, which generates 24,000 interrupts per second, will use 20%<br />

of the available cycles on a 20-MHz PIC16. So far, we’ve stripped a<br />

20-MHz MCU of 32% of its total available processing capacity. And<br />

that’s just for handling loops and saving context!<br />
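The interrupt figures work out the same way; a quick sketch of the arithmetic:

```python
# Context-save overhead for interrupts on a 20-MHz PIC16-class device:
# 168 cycles per interrupt, 24,000 interrupts per second.
cycles_per_interrupt = 168
interrupts_per_second = 24_000
cpu_hz = 20_000_000

interrupt_load = cycles_per_interrupt * interrupts_per_second
print(f"{interrupt_load / cpu_hz:.0%}")  # roughly 20% of the CPU
```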

New omniscient-code-generation (OCG) technology is available<br />

that has the intelligence to identify and eliminate redundant code.<br />

It also promises to optimize all pointers, stacks, and registers in the<br />

program for better performance and more efficient code. Essentially,<br />

omniscient code generation works by collecting comprehensive data<br />

on register, stack, pointer, object, and variable declarations from all<br />

program modules before compiling the code. An OCG compiler<br />

combines all of the program modules into one large program, which<br />

it loads into a call-graph structure.<br />
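The whole-program call graph can be pictured as a simple mapping from each function to its callees, merged across modules. This toy Python sketch (the module contents are invented, not taken from any real compiler) shows the idea:

```python
# Toy whole-program call graph, merged from per-module call lists the
# way an OCG-style compiler does before generating code.
# Module contents are invented for illustration.
modules = {
    "main.c": {"main": ["uart_init", "process"], "process": ["checksum"]},
    "uart.c": {"uart_init": [], "uart_send": []},
    "util.c": {"checksum": []},
}

# Merge every module's function -> callees mapping into one graph.
call_graph = {}
for functions in modules.values():
    call_graph.update(functions)

def callers_of(callee):
    """Every function that calls `callee` anywhere in the program."""
    return sorted(f for f, callees in call_graph.items() if callee in callees)

print(callers_of("checksum"))  # ['process']
```

With this global view, the compiler knows exactly who calls what, which is what lets it drop the worst-case assumptions a per-module compiler is forced to make.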

Based on the call graph, the omniscient code generator creates a<br />

pointer reference graph. This graph shows each instance of a variable<br />

having its address taken. It also depicts each instance of an assignment<br />

of a pointer value to a pointer (either directly via function return or<br />

function parameter passing or indirectly via another pointer). It then<br />

identifies all objects that can possibly be referenced by each pointer.<br />

This information is used to determine exactly how much memory<br />

space each pointer will be required to access.<br />
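A minimal picture of that pointer analysis, with invented variables and assignments: each pointer's points-to set starts from the sites where an address is taken and is then propagated through pointer-to-pointer assignments until a fixed point is reached.

```python
# Minimal points-to sketch in the spirit of the pointer reference graph
# described above. The variables and assignments are invented.
address_taken = {"p": {"x"}, "q": {"y"}}   # p = &x; q = &y;
pointer_copies = [("r", "p"), ("r", "q")]  # r = p; ... r = q;

points_to = {ptr: set(objs) for ptr, objs in address_taken.items()}
points_to.setdefault("r", set())

changed = True
while changed:  # propagate through pointer copies to a fixed point
    changed = False
    for dst, src in pointer_copies:
        before = len(points_to[dst])
        points_to[dst] |= points_to.get(src, set())
        changed |= len(points_to[dst]) != before

print(sorted(points_to["r"]))  # every object `r` may reference
```

The size of each resulting set is what tells the compiler how much memory space a given pointer must be able to address.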

An OCG compiler knows exactly which functions call and are<br />

called by other functions as well as what variables and registers are<br />

required and which pointers are pointing to which memory banks.<br />

As a result, it knows exactly which registers will be used for every<br />

interrupt in the program. The compiler also can identify redundant<br />

or re-entrant code. It generates the object code to minimize the<br />

total code size, SRAM requirements, and CPU cycles required.<br />

By optimizing the context save, the OCG compiler can cut the number of cycles used for context save and restore to as few as 68 per interrupt. In doing so, it can reduce the interrupt-handling load to about 1.6 million cycles per second, conserving 12% of the CPU cycles.<br />

By identifying and correcting redundant code in a loop, an<br />

OCG compiler can provide comparable performance benefits.<br />

Removing just six unnecessary instructions from a loop would save 24 clock cycles per iteration in a PIC device. This translates into 2.4 million cycles if the loop executes 100,000 times a second, or an additional 12% of the processor cycles.<br />

Saving cycles in the code is equivalent to adding megahertz to the<br />

MCU hardware. If software and compilation technology can conserve<br />

nearly 5 million clock cycles per second on a 20-MHz device, the<br />

device’s ultimate performance or its ability to do other processing has<br />

been effectively increased by 25%. In an application that’s reaching the<br />

limits of the MCU’s ability to execute, getting the extra cycles might<br />

prevent the forced migration to another faster and more expensive<br />

microcontroller. Such a migration would require that the code be rewritten<br />

as well. Omniscient compilation can improve performance<br />

while saving both power and cost. ◆<br />

Author: Clyde Stubbs is founder and CEO of HI-TECH Software in Queensland, Australia. His university research in compiler technology led to the founding of HI-TECH Software in 1984.<br />

By Clyde Stubbs<br />

40 • March 2008 <strong>Chip</strong> <strong>Design</strong> • www.chipdesignmag.com



Hooray! Your file is uploaded and ready to be published.

Saved successfully!

Ooh no, something went wrong!