

September 2006 Volume: 4 Issue: 9 Visit us at www.dontnetdevelopersjournal.com




Inside DNDJ

GUEST EDITORIAL
A Second Glance at SharePoint 2007 Features
By Kevin Hoffman ........................................ 7

PODCAST
Heard on Hanselminutes
By Carl Franklin & Scott Hanselman ........................................ 10

Introducing C# Generics
Leverage code reuse without sacrificing type safety
By Robert R. Hauser ........................................ 20

METHODOLOGIES
The Five Cs of Agile Management
By Robert Holler ........................................ 18

PROTOCOLS
Exploring FTP in .NET 2.0
By Alexander Gladshtein ........................................ 28

TOOLS
Effective Database Change Management
By Christoph Wienands ........................................ 38

Clean and Protect a Large .NET Code Base with Coding Standards and Unit Testing
How to avoid runtime exceptions, instability, crashes, degraded performance and insecure backdoors
By Hari Hampapuram and Matt Love ........................................ 34

MONKEY BUSINESS
A Short History of Basic on Mono
By Dennis Hayes ........................................ 42

FIRST LOOK
A First Look at Visual Studio 2005 Code Snippets
By Tommy Newcomb ........................................ 44



AJAX-enable any application

Reporting Solution | ASP.NET UI Suite | AJAX Framework | WinForms UI Suite

Step 1: Drag the AJAX Manager control on the WebForm. Total time: ~5 sec.
Step 2: Use the single dialog to set which controls initiate AJAX and which controls are updated with AJAX. Total time: < 1.55 min.

• No Previous AJAX Experience Needed - r.a.d.ajax does not require any prior experience with AJAX. You don't need to learn an "AJAX-specific" way of building new applications.
• No Modifications to Your Application Necessary - No need to place Callback Panels around areas that need to be updated. No need to set triggers or manually invoke AJAX requests.
• Intuitive and Centralized Management of AJAX Relations - All relations between controls that initiate AJAX and those that are updated with AJAX are managed from a single dialog.
• Codeless Development - Drag-and-drop the AJAX Manager control on the page, tick the respective checkboxes in the dialog, and hit F5.

See r.a.d.ajax demonstrated at these events: Santa Clara, CA, Oct. 2-4 (www.ftponline.com/virtual/ajax/)
Watch a video at: www.telerik.com/ajaxvideos





EDITORIAL BOARD
dotnetboard@sys-con.com

Editor-in-Chief: Derek Ferguson derekf@speakeasy.net
Group Publisher: Jeremy Geelan jeremy@sys-con.com
Mobility Editor: Jon Box jbox@psgi.net
Security Editor: Patrick Hynds phynds@criticalsites.com
Open Source Editor: Dennis Hayes dennisdotnet@yahoo.com
Product Review Editor: Doug Holland doug.holland@precisionobjects.com
VB Editor: Keith Franklin keithf@magenic.com
Smart Client Editor: Tim Huckaby timh@interknowlogy.com
BizTalk Editor: Brian Loesgen brian.loesgen@neudesic.com

ADVISORY BOARD
dotnetadvisors@sys-con.com
Derek Ferguson derekf@magenic.com
Jeremy Geelan jeremy@sys-con.com
Thom Robbins trobbins@microsoft.com
John Gomez John.Gomez@eclipsys.com
Scott Hanselman scott@hanselman.com
Dean Guida deang@infragistics.com
John Sharp johns@contentmaster.com
Jacob Cynamon jacobcy@microsoft.com
Chris Mayo cmayo@microsoft.com
Gary Cornell gary@thecornells.com
Joe Stagner joestag@microsoft.com
Peter DeBetta peter@debetta.com

Executive Editor: Nancy Valentine nancy@sys-con.com
Associate Editor: Lauren Genovesi laureng@sys-con.com

SUBSCRIPTIONS
For subscriptions and requests for bulk orders, please send your letters to the Subscription Department.
Subscription Hotline: subscribe@sys-con.com
Cover Price: $6.99/issue
Domestic: $69.99/yr. (12 issues); Canada/Mexico: $99.99/yr.; Overseas: $129.99/yr. (U.S. banks or money orders). Back issues: $12/ea., plus shipping and handling.

EDITORIAL OFFICES
SYS-CON Media, 135 Chestnut Ridge Rd., Montvale, NJ 07645
Telephone: 201 802-3000  Fax: 201 782-9601

.NET Developer's Journal (ISSN #1541-2849) is published monthly (12 times a year) for $69.99 by SYS-CON Publications, Inc., 577 Chestnut Ridge Road, Woodcliff Lake, NJ 07677.

Postmaster: Send address changes to: .NET Developer's Journal, SYS-CON Publications, Inc., 577 Chestnut Ridge Road, Woodcliff Lake, NJ 07677.

Copyright © 2006 by SYS-CON Publications, Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy or any information storage and retrieval system, without written permission. For promotional reprints, contact Reprint Coordinator Dorothy Gil, dorothy@sys-con.com.

Worldwide Newsstand Distribution: Curtis Circulation Company, New Milford, NJ
Newsstand Distribution Consultant: Gregory Associates / W.R.D.S., 732 607-9941, BJGAssociates@cs.com

For list rental information:
Kevin Collopy: 845 731-2684, kevin.collopy@edithroman.com
Frank Cipolla: 845 731-3832, frank.cipolla@epostdirect.com

All brand and product names used on these pages are trade names, service marks, or trademarks of their respective companies. SYS-CON Publications, Inc., is not affiliated with the companies or products covered in .NET Developer's Journal. .NET and .NET-based marks are trademarks or registered trademarks of Microsoft Corporation in the United States and other countries.

SYS-CON Publications, Inc., reserves the right to revise, republish and authorize its readers to use the articles submitted for publication.

A Second Glance at SharePoint 2007 Features

By Kevin Hoffman

I have been guilty of underestimating the power of Features. It wasn't really until I started digging deep into the bowels of the new Features and Solutions system in SharePoint 2007 that I finally started to realize how pervasive this stuff is. At first glance, it's extremely easy to make the sweeping generalization that Features are just a simple way to add "stuff" in a couple of different locations throughout SharePoint. SharePoint actually does a really good job of hiding its own use of Features, which makes this misunderstanding all the easier to make.

When you create a new Feature, you essentially define a Scope at which that Feature will be visible, such as Web or Farm. Within your Feature, you define Elements. These elements are the key to expanding SharePoint's capabilities. The following is a list of the types of elements that you can have in your Feature (each of these element types corresponds to an XML element in the Feature's element manifest, and you can have zero to many of each element type):

• List Template - defines the custom schema for a list
• List Instance - indicates that when your Feature is activated, the scope in which it was activated will contain an instance of a list
• Module - defines a file set. If you create a Feature that adds custom web part pages to an existing site, the module indicates the list of those files to add
• Content Type - defines a custom content type within the Feature
• Field - defines a shared column that can be reused by any list schema within that scope (including lists defined by your Feature)
• Workflow Type - defines a workflow that can be used at the site scope
• Event - defines an event binding so that custom code you have created (possibly included in the module file set) will be invoked upon given events in a given list. Event handlers can be bound to a specific list or to a content type
• Custom Action - defines a custom action, such as a new menu item in the administration menu or a new link in the "Site Actions" dropdown menu. In addition to defining new custom actions, your Feature can hide existing ones!
• Delegate Control - a ridiculously powerful feature that allows you to essentially say, "I'd like to replace this stock SharePoint control with my own; add it to the queue of candidate controls for replacement." This is exactly how Portal Server replaces the WSS search that comes with team collaboration sites
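A Feature, then, amounts to a pair of small XML files. As a rough sketch (the GUID, file names, and Module contents below are invented for illustration), a Web-scoped Feature that deploys one custom page might look like this:

```xml
<!-- Feature.xml: Scope can be Farm, WebApplication, Site, or Web -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="00000000-0000-0000-0000-000000000000"
         Title="Sample Feature"
         Scope="Web">
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
  </ElementManifests>
</Feature>

<!-- elements.xml: a Module element deploying one custom page -->
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="Pages" Url="SitePages">
    <File Url="MyPage.aspx" Type="Ghostable" />
  </Module>
</Elements>
```

At activation time SharePoint walks each element in the referenced manifest and applies it to the scope at which the Feature was activated.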

About the Author...
Kevin Hoffman has been programming since he was 10 and has written everything from DOS shareware to n-tier, enterprise Web applications in VB, C++, Delphi, and C. He is coauthor of Professional .NET Framework (Wrox Press) and co-author, with Robert Foster, of Microsoft SharePoint 2007 Development Unleashed.
http://www.amazon.com/exec/obidos/ASIN/1861005563/billsatlcomnote/104-7254258-0048761

Guest Editorial - CONTINUED ON PAGE 27



Book Review

Amazon.com Top 10 .NET Books

10. MCAD/MCSD Self-Paced Training Kit: Developing Web Applications with Microsoft Visual Basic .NET and Microsoft Visual C# .NET, Second Edition - Microsoft Corporation (Hardcover)
9. Debugging Applications for Microsoft .NET and Microsoft Windows - Robbins, John (Hardcover)
8. Microsoft .NET XML Web Services Step by Step - Freeman, Adam (Paperback)
7. Test-Driven Development in Microsoft .NET (Microsoft Professional) - Newkirk, James W.
6. MCAD/MCSD Training Guide (70-320): Developing XML Web Services and Server Components with Visual C# .NET and the .NET Framework - Amit Kalani
5. Enterprise Solution Patterns Using Microsoft .NET: Version 2.0: Patterns & Practices - Microsoft Corporation
4. Programming Microsoft Windows CE .NET, Third Edition - Boling, Douglas
3. MCAD/MCSD Training Guide (70-315): Developing and Implementing Web Applications with Visual C# and Visual Studio .NET - Kalani, Amit
2. Programming Microsoft Visual Basic .NET Version 2003 (Book & CD-ROM) - Balena, Francesco
1. MCAD/MCSD Self-Paced Training Kit: Microsoft .NET Core Requirements, Exams 70-305, 70-315, 70-306, 70-316, 70-310, 70-320, and 70-300 - Microsoft Corporation

List provided by Amazon.com.

President and CEO: Fuat Kircaali fuat@sys-con.com
Group Publisher: Jeremy Geelan jeremy@sys-con.com

ADVERTISING
Senior Vice President, Sales and Marketing: Carmen Gonzalez carmen@sys-con.com
Vice President, Sales and Marketing: Miles Silverman miles@sys-con.com, Robyn Forma robyn@sys-con.com
Advertising Sales Manager: Megan Mussa megan@sys-con.com
Associate Sales Managers: Kerry Mealia kerry@sys-con.com, Lauren Orsi lauren@sys-con.com

PRODUCTION
Lead Designer: Abraham Addo abraham@sys-con.com
Art Director: Alex Botero alex@sys-con.com
Associate Art Director: Louis F. Cuffari louis@sys-con.com
Assistant Art Director: Mandy Eckman mandy@sys-con.com

WEB SERVICES
Information Systems Consultant: Robert Diamond robert@sys-con.com
Web Designers: Stephen Kilmurray stephen@sys-con.com, Paula Zagari paula@sys-con.com

ACCOUNTING
Financial Analyst: Joan LaRose joan@sys-con.com
Accounts Payable: Betty White betty@sys-con.com
Accounts Receivable: Gail Naples gailn@sys-con.com

SUBSCRIPTIONS
201 802-3012 / 888 303-5282 / subscribe@sys-con.com

CUSTOMER RELATIONS
Circulation Service Coordinator: Edna Earle Russell edna@sys-con.com



BPEL is the SQL of SOA

Get started building next-generation SOA applications with the leading vendor of BPEL technologies. Download BPEL tooling & server software today.

BPEL consulting, certification and training. BPEL design tools, servers and source code for Eclipse, Apache Tomcat, JBoss, WebSphere, WebLogic, BizTalk and Microsoft .NET.

activeBPEL
Copyright 2006 Active Endpoints, Inc. All Rights Reserved. All product names are trademarks or service marks of their respective companies.


Podcast

Heard on Hanselminutes
Interview with Web developer and technologist Scott Hanselman
Hosted by Carl Franklin

About the Authors…
Carl Franklin has been a figurehead in the VB community since the very early days, when he wrote for Visual Basic Programmers Journal. He authored the Q&A column of that magazine as well as many feature articles for VBPJ and other magazines. He has authored two books for John Wiley & Sons on sockets programming in VB, and in 1994 he helped create the very first web site for VB developers, Carl & Gary's VB Home Page. He now teaches hands-on VB .NET classes for his company, Franklins.Net. He has taught developers from Citigroup, Aetna, Fidelity Investments, Fleet Bank, Foxwoods Casino, UTC, Hubbell, Microsoft, Mohegan Sun Casino, and Northeast Utilities, to name a few. Carl is co-host of a weekly talk show for .NET programmers on his website, .NET Rocks!, and is MSDN Regional Director for Connecticut.
carl@franklins.net

Scott Hanselman is chief architect at the Corillian Corporation, an e-finance enabler. Recently, Scott was in the top 5% of audience-rated speakers at Tech-Ed 2003. His thoughts on the Zen of .NET, programming, and Web services can be found at http://www.computerzen.com.
scott.hanselman@authors.sys-con.com

Hanselminutes is a weekly 30-minute podcast with Web developer and technologist Scott Hanselman, hosted by Carl Franklin. The following is a transcript of show number 29, entitled "Dynamic vs. Compiled Languages". You can listen online at www.hanselminutes.com.

Carl Franklin: Today we are doing a show on Disruptive Technologies. What do you mean by that exactly?

Scott Hanselman: Well, I think the disruptive technologies are the ones that come along every few years and remind us that maybe the way we are doing things is a little more difficult than it needs to be - that we're spending a little more time on things that aren't as important. I think that Ruby is a good example, particularly Ruby on Rails, of something that's come along and reminded us that development on the Web really doesn't need to be this complicated. Now, we spent a lot of time as .NET developers suffering in the ASP - early, early, early ASP.NET - years, and things are coming along that are disrupting the way we think. People are moving away from complex Web applications to simpler patterns, and I think that Ruby and Ruby on Rails, and the success of those technologies, are worth exploring - but not just because it's something interesting that's not Microsoft, because most people who listen to this show are Microsofties, meaning in some way Microsoft and .NET pay our mortgages. But as .NET developers, what can we learn from Ruby and Ruby on Rails that we can apply to our lives?

Carl Franklin: Okay.

Scott Hanselman: This doesn't mean we should necessarily jump ship, but you are writing code - for example, the code that you use to manage Hanselminutes and the shows that you work on. It's all written in ASP.NET, right? You have an administrative console; it's mostly using what we call CRUD - Create, Read, Update, Delete. I mean, how much code did you use for the Hanselminutes admin?

Carl Franklin: Not a lot of code.

Scott Hanselman: A couple of thousand lines, a couple hundred lines? What do you think?

Carl Franklin: Oh! A couple hundred lines. Really, not a lot of code at all.

Scott Hanselman: So, you use a lot of DataGrids, data binding, and things like that.

Carl Franklin: Exactly.

Scott Hanselman: Did you write a lot of stored procedures?

Carl Franklin: Just a few, yeah.

Scott Hanselman: Are you doing code generation?

Carl Franklin: No.

Scott Hanselman: So, there is an example of a middle-of-the-road .NET application that does some Creates, Reads, Updates and Deletes. You probably used a lot of wizards; you took advantage of as much of .NET as possible, and being that you are a trainer on this kind of stuff, you are not going to waste your time writing code that already exists.

Carl Franklin: Exactly. And for us, the only people who have to use it are in-house - maybe three or four people, and usually only one at a time.

Scott Hanselman: Sure. So it doesn't necessarily need to be pretty, but it is very functional. I mean, I've used it and I have done administration on the show, so it's not that big of a deal, but it's a good example of the 80% case. It's the adding of information to a system, retrieving it, looking at it, sorting it, adding details. If you were going to do something like an upgrade to it, you would probably do it manually. You'd change the schema and then you would change the code and it would work just fine. Now, in the .NET world, when you go File/New/Web Application you get a blank page. Hello World is really the start when you go File/New/ASP.NET Application. In the Ruby on Rails world, you get a little bit more. Rails assumes that you are probably going to be talking to a database, and has a thing called Scaffolding. You basically say, "I want to scaffold out an application", and when you run scaffolding after installing Ruby on Rails, and maybe something like MySQL, you sit down and you basically go and say, rails (space) - and then if I wanted an application called Foo, I'd say, rails (space) Foo, and I would get a whole application created for me, an empty Web application that would run under really any Web server. But typically, you can run it under either their built-in Web server,



SUPERCHARGE YOUR APPS WITH THE POWER OF LOCATION INTELLIGENCE

Enable Location Intelligent web services. Single environment for deployment to desktop or web. Full integration with Visual Studio .NET.

Create applications for:
• Web-based store/asset location finders
• Visualizing where your customers are
• Analyzing where revenue comes from - and where it doesn't
• Managing assets such as cell towers, vehicles and ATMs

Try it and see for yourself. Visit www.mapinfo.com/sdk to learn more about the MapInfo Location Platform for .NET, access whitepapers and download free SDKs. Contact us at sales@mapinfo.com


or under something like Apache. When you start it up, you basically get a Hello World page that says, "Congratulations, you are on Rails."
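The command sequence being described, in Rails 1.x-era syntax (the application name Foo comes from the conversation; the rest is an illustrative sketch that assumes Rails and a database such as MySQL are installed):

```shell
# Create a new application skeleton named Foo
rails Foo
cd Foo

# Generate model, controller, and CRUD views ("scaffolding")
# for an existing people table
ruby script/generate scaffold Person

# Start the built-in development Web server
ruby script/server
```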


Carl Franklin: That reminds me of the Visual Basic Application Wizard.

Scott Hanselman: Right, it gives you a lot more, I think, than we are really used to getting. I think that…

Carl Franklin: But is it good? Is it good code? Is it typical wizard output, or is it something that is actually sustainable?

Scott Hanselman: Well, here's the thing that's so interesting about Rails. They have a philosophy which they call Convention over Configuration - we talked about this a little bit in a previous show as a philosophy that we like to apply to .NET code - but they have a lot of conventions. They say that stuff goes in the config folder, and there's a folder called "test" and a folder called "app"; you can't change those - right, that's what the folder's name is. This is a generalization, but in a .NET application you typically have some app config file where you could override that: the name of that directory is this, or I want this file to be over there. They say, no, we are going to really focus on convention.

Carl Franklin: Standardize that stuff.
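Those conventions show up as a fixed directory layout in every generated Rails 1.x-era application - roughly this (an abridged sketch; generated apps contain a few more directories):

```
Foo/
  app/          # models, views, controllers - your code goes here
  config/       # database.yml and environment settings
  db/           # schema and migrations
  public/       # static files served as-is
  script/       # generators, server, console
  test/         # unit and functional tests
```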

Scott Hanselman: Exactly - one less thing to worry about. Now, what's interesting about the code that gets generated is that there is almost nothing there - it's incredibly simple. What really makes Rails work is that it understands how the software development process should work. For example, when you are dealing with a database, you typically have a development database, maybe a test one for running unit tests, and a production one. With .NET there is really no built-in mechanism that knows this is the case, right? You typically will have different connection strings - a connection string that you would modify in a Web.config. In Rails they have a thing called the database.yml file; it's basically like an .ini file, and within it you indicate different environments. You can say, "Well, I am going to use the MySQL adapter during development to talk to this database on the local host with this name and password, but when I go to test I will run over here, and when I go to production I'll be using this different adapter." So, it actually knows that there are different modes; it has that built in. There is an example where the way that software development is done is built into the framework itself, so the whole framework knows that this is how things happen.
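The database.yml being described looks roughly like this (the adapter, hosts, and credentials here are invented placeholders):

```yaml
# config/database.yml - one block per environment
development:
  adapter: mysql
  database: foo_development
  host: localhost
  username: root
  password: secret

test:
  adapter: mysql
  database: foo_test
  host: localhost
  username: root
  password: secret

production:
  adapter: mysql
  database: foo_production
  host: db.example.com
  username: foo_app
  password: secret
```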

Carl Franklin: Well, there is plenty of that stuff in ASP.NET, I would argue.

Scott Hanselman: Where in ASP.NET - other than saying debug=false - do you indicate that now I am developing, now I am testing, now I am in production?

Carl Franklin: No, no, no, that's not what I meant. I meant the kind of nice extra things, such as the ability to just drag a table from a data source and have an editable grid - that's a nice high-level feature. The connection strings in the config file are a standardized way; the personalization - I mean, there are a lot of high-level features in ASP.NET, especially 2.0, that you could consider falling into that same camp of extra stuff.

Scott Hanselman: Okay, let me rephrase then - it's a good point. There are lots of things in ASP.NET that make development easy, but I don't think there are necessarily a whole lot of things that make the development process easy: migrations, for example - when you migrate from one version of an app to another version of an app, right?

Let's say I build an application: I make a table of people, and I have got a first name column and a last name column, and I have got an object that I am going to be storing in that table. Within ASP.NET, I might create a class Person that has a first name and a last name, and then I might take that object and maybe use something like NHibernate. But not a lot of average-Joe developers use tools like that - Object Relational Mappers - and typically they will write the Data Access Layer themselves, right? They will take that object apart and make sure that the object gets put into the table appropriately. So, then the application changes. I have to modify the schema, right? So, I'll go into SQL, modify the schema, and then generate a SQL file that I have to remember to run on the production database to alter that table and add a new column. Then I have to go into my code and tell it that there is a new middle name field that's going to be added to our Person table, and then update my Data Access code to put that middle name in there. So, it's not hard; it's just detail-oriented. There is a lot of stuff there.

Within Rails there is this notion of Migrations, where you make a change and it keeps track of what is required to get you from this state, like version 1.0, to that state, version 1.1. All of this is handled by a thing called ActiveRecord. And ActiveRecord lets you do things like: class Project, and then you could say something like, it derives from ActiveRecord, and a project belongs to a portfolio, and a project has one project manager, a project has many milestones. It sounds like I am just reading English - that's kind of what Ruby looks like when you are using ActiveRecord. You might have in the database the description of what a project looks like, and you just write your class like that. I mean, the class Project might be five lines, just exactly as I described it - you would literally write down "class Project has one project manager". And those relationships then get managed by this ActiveRecord


September 2006 Volume: 4 Issue: 9 Visit us at www.dontnetdevelopersjournal.com




Podcast<br />

system. And it handles the persistence of that to and from the database, because Ruby is such a dynamic language. So, if I wanted to migrate from one version of my application to another, I can automatically just generate the migration by describing what's changed - by making a change and saying, "All right, I will go from this version to that version." And it will handle it for you. It also keeps track of irreversible migrations, where I can move forward but I can't back up. One of the other things that Rails allows you to do is say, "I want to roll back, I just made a goof. I added something to my database, I have changed my application, but it was a bad idea. I need to push the easy button and roll back everything automatically."
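A minimal, self-contained sketch of those two ideas - class "macros" that read almost like English, and migrations that know how to go up (apply) and down (roll back). This is NOT the real Rails/ActiveRecord API, just plain Ruby imitating its shape:

```ruby
# Toy stand-in for ActiveRecord's declarative style (hypothetical API).
module TinyRecord
  def self.included(base)
    base.extend(ClassMacros)
  end

  module ClassMacros
    def associations
      @associations ||= Hash.new { |h, k| h[k] = [] }
    end

    def belongs_to(name); associations[:belongs_to] << name; end
    def has_one(name);    associations[:has_one]    << name; end
    def has_many(name);   associations[:has_many]   << name; end
  end
end

# "It sounds like I am just reading English":
class Project
  include TinyRecord
  belongs_to :portfolio
  has_one    :project_manager
  has_many   :milestones
end

# A toy migrator: each migration can apply itself (up) or undo itself (down).
Migration = Struct.new(:version, :up, :down)

class Migrator
  attr_reader :current_version

  def initialize(migrations)
    @migrations = migrations.sort_by(&:version)
    @current_version = 0
  end

  def migrate(schema, target)
    @migrations.each do |m|
      next unless m.version > @current_version && m.version <= target
      m.up.call(schema)
      @current_version = m.version
    end
  end

  # "Push the easy button and roll back everything automatically."
  def rollback(schema, target)
    @migrations.reverse_each do |m|
      next unless m.version <= @current_version && m.version > target
      raise "migration #{m.version} is irreversible" if m.down.nil?
      m.down.call(schema)
      @current_version = @migrations.map(&:version).select { |v| v < m.version }.max || 0
    end
  end
end

migrations = [
  Migration.new(1, ->(s) { s[:person] = [:first_name, :last_name] },
                   ->(s) { s.delete(:person) }),
  Migration.new(2, ->(s) { s[:person] << :middle_name },
                   ->(s) { s[:person].delete(:middle_name) })
]

schema = {}
db = Migrator.new(migrations)
db.migrate(schema, 2)   # version 0 -> 2: the person table gains middle_name
db.rollback(schema, 1)  # it was a bad idea: middle_name goes away again
```

The dynamism is doing the work here: `has_many` is just a method call at class-definition time, which is why the real ActiveRecord can make five declarative lines manage a whole set of relationships.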

Carl Franklin: So, Scott, the next question that comes to my mind is: are these features that Microsoft, if they were listening, could say, "Hey, that's a good feature, we will add that in the next ASP.NET"? Or is it more indicative of the dynamic nature of Ruby that these features become possible?

Scott Hanselman: I think you have absolutely nailed it right there; you have asked both of the right questions. Is Microsoft going to move towards this? Definitely, I think this is a computer science thing that's happening, not necessarily a Microsoft thing. Everyone's headed in this direction. We have had this schizophrenia, right? Thirty years ago, you have your Smalltalk folks that have got this very dynamic world, and then C++ and C start getting more static typing. C++ starts making decisions about how inheritance works. C# says, "All right, we don't allow multiple inheritance." Things get more and more strongly typed. So, in the Windows world we are kind of having the screws tightened on us. Over the last 15-20 years, the screws have been tightened and the developers are dealing with it, right? To mix my metaphors: you know how to boil a frog, right?

Carl Franklin: Yeah you put him in cold water and<br />

turn up the heat slowly, right?<br />

Scott Hanselman: So, why do you think that people are so excited about JavaScript and AJAX and about PowerShell? Because suddenly there's all the power of .NET, and all the freedom of a dynamic language is kind of coming out. So, I think that, yes, part of it is that Ruby is such a dynamic language, but I think if you take a look at C# 3.0, at shrinkster.com/hei, you are going to see that Anders and the few folks that are designing C# are adding this dynamicism?

Carl Franklin: Dynamism…<br />

Scott Hanselman: That too (laughs). They are adding that to C# 3.0 and they are making that happen. And it's technologies like LINQ, at shrinkster.com/heh, that are making that possible, right? Where you can have all of the benefits of a compiler, but also some of that dynamic behavior that you see in an interpreted language. And this is a really interesting question: what is limiting this technology in the Microsoft world right now? And I think it's the fact that underneath VB, and underneath C#, there is that IL, right?

Carl Franklin: Right.<br />

Scott Hanselman: There is that compiler. What do we lean on? What do we count on? Well, C++ programmers, they really count on the compiler. I don't have a lot of bugs, or I know that nothing horrible is going to happen, because the really, really strict compiler told me it was cool, right? They know that something horrible is wrong before they run, because the compiler catches it. But there were some dynamic language aspects that VB 6.0 had (variants) where you really didn't know the behavior until you actually ran it. So, they moved that responsibility out of the compiler a bit - but where did the responsibility go? They took 20% of the responsibility out of the compiler. There weren't a lot of unit tests; I didn't want to write a lot of unit tests in VB 6.0, did you?

Carl Franklin: No, there wasn’t even such a thing<br />

really.<br />

Scott Hanselman: Exactly. So, then about 20% of the "responsibility" - this theoretical responsibility of who is going to check stuff - kind of fell on the floor. That got picked up when we moved into the .NET world, where we had really strong compilers, but also some dynamic aspects of things, with Reflection and with Late Binding, that allowed us to do things that could get us in trouble; but we picked that ball up with Test-Driven Development. So now we lean on the compiler half-way and on the tests half-way.
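The trade Scott is describing can be shown in a tiny Ruby example (Ruby being the dynamic language under discussion). Nothing checks this method when it is defined; the type bug only surfaces at runtime, so a test, not a compiler, is what catches it:

```ruby
# A compiler would reject mixing strings into this sum up front; in a
# dynamic language the method loads without complaint.
def total_price(prices)
  prices.reduce(0) { |sum, p| sum + p }
end

# The happy path works fine:
GOOD_TOTAL = total_price([3, 4, 5])

# A "variant-style" mix that only running the code (i.e., a test) reveals:
def test_catches_type_bug?
  total_price([3, 4, "5"])
  false               # bad input slipped through: the test would fail
rescue TypeError
  true                # the runtime error is caught by the test, not a compiler
end
```

That is the "responsibility" moving out of the compiler and into the test suite.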

Carl Franklin: So, you are saying that with Test-Driven Development - or even if you are just running nUnit, for crying out loud - variants aren't evil anymore? Is this what you are saying?

Scott Hanselman: That's a good way to put it - sort of an inflammatory way to put it - but you are right.

Carl Franklin: I am trying to eke out the truth here…

Scott Hanselman: No, you are right, because what I am saying is that dynamic languages are less evil if you have a parachute, right?

Carl Franklin: Okay.<br />

Scott Hanselman: So, our parachute in C++ was a really hardcore compiler that wouldn't let us get away with anything.

Carl Franklin: Yeah.

Scott Hanselman: And we had less of a parachute in VB 6.0, and this is why you get "Object reference not set."

Carl Franklin: Yeah.<br />

Scott Hanselman: All the time, with all VB applications, right? This is because… what does an exception really mean? If you get an unhandled exception in your application, it means something happened that you weren't ready for; something exceptional happened, something you didn't expect occurred. Now, in a dynamic language, I can write a Ruby application and I can hit save, and no one is going to do anything; no one cares that I have syntax errors all through it. So, it's on me to then fill it up with better testing. So, what I am saying is that really awesome tests within an interpreted, dynamic environment can actually take the place of the compiler, because all the compiler does is check your syntax, not your intent.

Carl Franklin: Right.<br />

Scott Hanselman: But a test is a really great way of expressing your intent.

Carl Franklin: And it goes beyond the compiling stage; it actually looks for results.

Scott Hanselman: Exactly.<br />

Carl Franklin: Yeah.<br />

Scott Hanselman: So, then someone could really get some real good work done if they had a language that had a really great compiler, but also a flexible enough syntax that would let them do the kinds of things, like ActiveRecord, that one would want to do, in a performant way; then they could build tests around it, and they would have the best of both worlds. I think that five years from now we are going to see a very Ruby-esque C# and a very Ruby-esque VB, as people start to realize that Smalltalk has had these kinds of features for 30 years, and we have been kind of off in the desert in our strongly typed, hardcore compiler languages. We are going to find a very dynamic middle ground that's going to involve very few lines of code. So, what are some examples of ActiveRecord and/or Rails in the Microsoft world?

Carl Franklin: Well, before you answer that question, let me ask this, which may be on the listeners' minds also: what if you are not embracing Test-Driven Development?

Scott Hanselman: Wow, be afraid…<br />

Carl Franklin: And not everybody does.<br />

Scott Hanselman: Well, everybody should!

Carl Franklin: But they won't!

Scott Hanselman: And that will mean more work for you and me. Let me give you an example. I had an intern this summer - I wanted to actually get him on the show, but it didn't work out - and we gave our presentation to everybody; he worked on an ASP.NET application. They did some online banking stuff for some things that I am working on which I can't talk about. This is a sixteen-year-old sophomore in high school, okay? The guy's never coded C# before; he has had a little bit of Java in high school, he did a little VB 6.0, and he played with Ruby once or twice. In 30 working days, he put together a nice, clean online banking site where someone could log in and pay their bills. Now how did he accomplish this? We used Test-Driven Development, and we used Watir - Web application testing in Ruby - for all of our tests. He wrote 168 tests in Watir that would just beat on this thing, most of it negative testing.

This is a guy who has no computer science background - he is a 16-year-old kid who just thought about, "What would someone evil do to try to break my application?" So, he had hundreds of tests that would put garbage in textboxes, try to transfer too much money, or just fight with the application. He pounded on it like only a 16-year-old could. And we ended up with more code in our tests than we did in our application, because the goal was for the application to do exactly what it needed to do, and no more, and to allow only correct input.
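That negative-testing style can be sketched in plain Ruby. (Watir drives a real browser; this hypothetical Account class simply stands in for the application under test.) Each check feeds the app garbage and asserts that it refuses:

```ruby
# A hypothetical transfer method that allows only correct input.
class Account
  attr_reader :balance

  def initialize(balance)
    @balance = balance
  end

  def deposit(amount)
    @balance += amount
  end

  def transfer_to(other, amount)
    unless amount.is_a?(Numeric) && amount > 0
      raise ArgumentError, "amount must be a positive number"
    end
    raise ArgumentError, "insufficient funds" if amount > @balance

    @balance -= amount
    other.deposit(amount)
  end
end

checking = Account.new(100)
savings  = Account.new(0)

# "Put garbage in textboxes, try to transfer too much money..."
garbage = [-5, 0, "100; DROP TABLE accounts", nil, 1_000_000]
refusals = garbage.map do |amount|
  begin
    checking.transfer_to(savings, amount)
    false               # it let bad input through: the test fails
  rescue ArgumentError
    true                # correctly refused
  end
end

checking.transfer_to(savings, 40)  # the one correct input still works
```

The test suite asserting `refusals.all?` is exactly the "what would someone evil do?" mindset turned into executable checks.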

When he coded in this style, he ended up with a lot of work in the tests, but then those tests became his parachute. Then we had to go and do some refactoring. After we gave the presentation, he had a few hours before the end of his summer, and we had a bunch of requirements. They said, "We need these things changed," and he said, "Oh man, I don't want to change this application, it's going to break everything," and I said, "No, it's not; you've got 168 tests that will make sure you know immediately if anything is broken." And we were right. That sense of confidence of "Wow, I can refactor this" - we cut another 100 lines out of the application, and the 168 tests passing were our confidence that this thing works, not just that it compiles; because compiling just says the syntax is right, but the tests say the intent, the semantics, were correct.

Carl Franklin: Well, Scott, that's a very good argument for why people should use and embrace Test-Driven Development. However, there are those people that won't, simply because it's not required.

Scott Hanselman: Yeah I understand what you are<br />

saying. And they’ll have buggy code.<br />

Carl Franklin: Well the question is, will it be buggier<br />

with a dynamic C# and a dynamic VB?<br />

Scott Hanselman: Yeah, here's an example. Let's say I wrote some test scripts to work on an application, and let's say that you're going to write it in a language where everything is just object - totally dynamic.


Carl Franklin: Right.<br />

Scott Hanselman: And then I'll write it in C#, and our tests will run against both applications.

Carl Franklin: Yes.<br />

Scott Hanselman: Right? So, the exact same tests will do the exact same stuff to a browser, and the app just happens to be written in two totally different languages. As long as all 100-and-some-odd tests pass, does it really matter what language it was written in?
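That idea in miniature: one suite of checks run against two implementations - one with explicit methods, one built dynamically with method_missing. The suite neither knows nor cares which one it is exercising. (The greeter classes are purely illustrative stand-ins for the two apps.)

```ruby
# "Static-style" implementation: the method is written out explicitly.
class ExplicitGreeter
  def greet(name)
    "Hello, #{name}"
  end
end

# "Dynamic-style" implementation: behavior synthesized at call time.
class DynamicGreeter
  def method_missing(name, *args)
    name == :greet ? "Hello, #{args.first}" : super
  end

  def respond_to_missing?(name, include_private = false)
    name == :greet || super
  end
end

# The "exact same tests", applied to whichever app we hand in:
def failures_for(app)
  failures = []
  failures << :greets_by_name unless app.greet("Carl") == "Hello, Carl"
  failures << :responds       unless app.respond_to?(:greet)
  failures
end

explicit_failures = failures_for(ExplicitGreeter.new)
dynamic_failures  = failures_for(DynamicGreeter.new)
```

If both failure lists come back empty, the language choice was invisible to the suite - which is Scott's point.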

Carl Franklin: Okay, now take away the tests.<br />

Scott Hanselman: There you go; if the tests weren’t<br />

there we can’t know.<br />

Carl Franklin: Right. So, the question is, if you have a dynamic language that's set up to allow this kind of variant behavior…

Scott Hanselman: Exactly, yeah that’s a good point.<br />

Carl Franklin: …does it make it a liability not to use Test-Driven Development?

Scott Hanselman: I think it does; that's a good point. So, doing development in a dynamic language without a lot of tests could potentially open you up to a lot of weird data typing and conversion types of errors. That's why I think that negative testing is so important.

Scott Hanselman: Right? Every few years we go and we see the big DataGrid presentation.

Carl Franklin: Right.

Scott Hanselman: Except this year… we dragged the DataGrid over and it did AJAX, okay? With things like Blinq - B-L-I-N-Q, at shrinkster.com/heg - which is the closest thing to Rails on ASP.NET that Microsoft has come out with.

Carl Franklin: Okay.<br />

Scott Hanselman: Let's say you basically dynamically generate an entire ASP.NET Web site for displaying and creating data - doing all the CRUD-type data access - based on a database schema, without writing any code at all. So, there is not even any dragging around; I mean, you don't even have to go into Visual Studio. You just say "Do it," and it happens.

Carl Franklin: Wow!<br />

Scott Hanselman: So, I think when people see that we could go and rewrite the Hanselminutes admin with a tool like this Blinq prototype - once people see that, if they don't move, then they don't move forward; maybe that's not what they wanted, right? Microsoft is trying to make products to make life easier. If Microsoft doesn't do it, somebody else will, and people will say, "Ooh… I want that."

Carl Franklin: So, then that brings us to the other conclusion, which is: if there are more people not doing Test-Driven Development than doing Test-Driven Development, are they not going to move to the new dynamic languages and… stay with the old version of .NET?

Scott Hanselman: There were too many nots there; I got confused…

Carl Franklin: In other words, let's say 65% of the development going on does not use Test-Driven Development - and it's even more than that.

Scott Hanselman: Oh, I am sure it’s lots.<br />

Carl Franklin: Right. So, C# 3.0 comes out, VB.NET 17.0 or whatever it is comes out, and they are both dynamic languages; is that going to keep people from upgrading to the new version - possibly of a new framework, of a new language?

Scott Hanselman: That's a good question. I think that the beat marches on, right? Things continue to move forward. The issue is that when you go to a Microsoft presentation every three or four years, they sit down with the Northwind database. They drag a DataGrid over and they say, "Look what I did without writing a single line of code." They've been doing this for 15 years.

Carl Franklin: Sure.<br />

Carl Franklin: …I don't know if that's a strong enough argument for me, but we'll see.

Scott Hanselman:That’s a very good question.<br />

Carl Franklin: Basically, what it means is that people are being forced into Test-Driven Development - and will they go? It will be interesting to see.

Editor's Note: By "Test-Driven Development", Carl means "Test First", which is the most extreme form of TDD.

Scott Hanselman: Well, let me ask you this, then. Here's a better argument: were people forced into using Source Control when Microsoft started giving away Visual SourceSafe? Before that, hardly anyone in the Windows world was using Source Control - they were using Source Control in the Unix world - but Microsoft people would just zip up their files and say, "Here you go, that's the version for today," and put it over on a share.

Carl Franklin: Yeah.

Scott Hanselman: People are going to use Test-Driven Development because:
a. It's free, and
b. It works.

Carl Franklin: Once you experience it. So, you think<br />

there will be a competitive advantage to being Test-<br />

Driven?<br />

Scott Hanselman: Oh yeah. Let me give you an example. There is a shareware developer who had an application that I installed - because if it exists, I've run it and installed it on my machine at least once - and I had some trouble with it, so I emailed him, and what he said was amazing. He says, "Go to the Help About." I am like, "Okay, Help About," and he says, "You see a button there that says 'Run tests'?" He actually ships his nUnit tests, and in the Help About menu he hid an nUnit runner. So, I ran his entire test suite on my machine, which let him see if the environment that I was running on would pass his tests - and one of the tests didn't pass, because he had made an assumption about a path.

Carl Franklin: Neat.<br />

Scott Hanselman: What a brilliant way to do production-time debugging.

Carl Franklin: There's no doubt about it that when you reach the end of the life cycle of an application built with Test-Driven Development, you've got a superior product. I guess the people on the other side of the camp - and I am not taking a position either way - say, "Well, if you live in an ivory-tower world, where there's an infinite amount of time and money to develop, great; but for the rest of us real-world people out here, we're trying to build apps as fast as we can, and as stable as we possibly can."

Scott Hanselman: Right. Well, let me give you one more argument, and then we'll do a couple of links and move on.

Carl Franklin: Okay.<br />

Scott Hanselman: So, you can express the business requirement in a simple test, right? The requirement is: they must be able to read a record, change it, and save it, and these conditions will be met. You can say that in English - or any language, for that matter. You can say it in prose and write it in a Word document. But as I've said before, a Word document doesn't have any teeth, right? Word documents don't break builds. Exceptions stop applications, but Word documents don't actually do anything. But I can write a test in a very simple, straightforward way that says, "This is my intent," right? We spend so much time writing code to express our intent. But we're expressing that intent to the computer, right? And the computer is not the customer here; the customer is whoever has hired me to write this application.

Carl Franklin: Sure.<br />

Scott Hanselman: So, my tests are the best description, in computer language, of what that person's intent is. So, look at Test-Driven Development as a way of translating that person's intent into something the computer understands, and then also proving that the requirement is met. You can look at all of those green bars that light up in nUnit, telling you that your application works well; that's basically saying, "My requirement was met for this application." And if you look at it that way, you'll spend less time writing specs, because specs aren't useful, right? Specs can't test my application; only tests can. So, that might be a better argument.

Carl Franklin: Okay.<br />

Scott Hanselman: One other thing I wanted to mention, since we are talking about Rails: what it is like to have Rails, and/or ActiveRecord, on Windows.

Carl Franklin: Right.<br />

Scott Hanselman: There's something worth checking out called the Castle Project. You can take a look at shrinkster.com/hen. Castle is a set of tools - you can work with them together or you can use them independently - that lets you… integrate your applications [better], and one of the really interesting ones is MonoRail. MonoRail - they used to call it Castle on Rails - is basically Rails for ASP.NET. It's a different style; it's more of a Model-View-Controller, what they call MVC, or action-packed, way of doing things. There is an interesting explanation of how it works, why you might want to do it, and why it would be useful; if you like Rails, you might check that out. But the most interesting thing, I think, is a layer that sits on top of NHibernate. NHibernate is the Object Relational Mapper that lets you take objects and map them, with XML relationship files, into the database. This is a version of ActiveRecord, and the ActiveRecord pattern, for .NET. It's built on top of NHibernate, but it frees you of all that XML tediousness.

Carl Franklin: All right, so we've got time for a couple more links.

Scott Hanselman: Sure, yeah. If you want to check out Blinq - which is B-L-I-N-Q - there's a really good tutorial at Ars Technica at shrinkster.com/hej, and then FTPOnline has a great tutorial from the month before last on how to use Blinq at shrinkster.com/hek; and again, you can download Blinq at shrinkster.com/heg. And then do check out the Castle Project - it's a really interesting way to think about things. It's different from ASP.NET; it's not how you are used to doing Web applications, and that might be a good thing - at shrinkster.com/hen.

• • •<br />

You can listen to this entire show at http://shrinkster.com/hxg



Methodologies<br />

The Five Cs of Agile Management

Courage, context, course, cadence, and cost

By Robert Holler

About the Author

Robert Holler is president & CEO of VersionOne LLC, a developer of .NET-based Agile planning and management software.

robert.holler@authors.sys-con.com

Anyone who has ever been responsible for leading or managing a software development project knows that software isn't easy. Successfully coordinating and dealing with project sponsors, customers, unexpected risks, and changing scope challenges even the most experienced project leader.

Over the last decade, several methodologies (such as Extreme Programming, Scrum, Crystal, Feature-Driven Development, DSDM, and Lean Development) have emerged to help companies effectively deal with projects incorporating tight deadlines, volatile requirements, and/or emerging technologies. Applicable to a wide variety of today's software projects, these agile approaches have gained tremendous industry momentum due to their overall simplicity, their laser focus on business value, their accelerated delivery cycles, and their ability to adapt to changing business demands.

Leading projects in an environment that embraces change and rapid delivery can be a daunting responsibility in any organization. Companies and individuals with the desire to change the fundamental rules of the software game and accept the empirical nature of software development are faced with numerous challenges. To capitalize on the evolving nature of agile development, today's leadership community must embrace and direct five key aspects of agile development: Courage, Context, Course, Cadence, and Cost.

Courage<br />

Agile development isn't for the faint of heart. That doesn't mean project management won't ultimately be simplified, just that any new way of doing business requires practice and hard work. Don't be afraid to fail, especially early on. With agile development, at least your failures are limited in scope to a couple of weeks, at which point you re-evaluate the situation and adapt accordingly. Use these early iterations to learn, adjust, and stabilize. Teams, and people, generally get good at what they practice. In agile development, planning, estimating, and delivering occur every few weeks, as opposed to once a year. Your team will quickly develop a rhythm. Focus on removing whatever obstacles get in the way, and let this rhythm emerge and solidify as soon as possible.

Courage is also one of the four primary values of Kent Beck's Extreme Programming (XP) methodology, but it takes on a much broader and more strategic scope outside the boundaries of pure product development. Software development requires many interfaces - customers, other project teams, customer support, professional services, external stakeholders, human resources - and the confidence to step up to the plate and enact positive change in the face of tradition can be a risky, but ultimately rewarding and valuable, experience.

Context<br />

With much of the fundamental project infrastructure - scope, priorities, estimates, schedules, and risks - in a state of flux, it has never been more important to steer and manage decisions in an overall business context. While functional value can often drive the details of a project, business values have to drive project goals. Force hard decisions about the business and project context as early as possible. Get the simplest possible answers to key questions such as:

• What's the project's vision?

• What are its primary goals and business drivers?

• What are the values that should drive key project and product decisions?

• What are the expectations of project sponsors and stakeholders?

The answers to these questions serve as the basis for future decision-making, as well as providing a thread that can prevent a project from spiraling out of control. The degree of visibility afforded these priorities can serve as a foundation for managing conflict and stakeholder negotiations. Get the answers onto a single page or project Web page, and keep them visible throughout the life of the project. They will serve as a bar to which you will be held accountable.

Course<br />

Context and course are complementary ideals. Where context defines overall circumstances, course defines direction and progress. Just because some agile approaches suggest not looking in detail beyond one to two iterations, do not be lured into believing that longer-range planning isn't necessary. Longer-term software plans serve as a roadmap for important

interim business and project decisions. Iteration, milestone, and release plans remain critical components in planning and measuring progress. Keep in mind that the further out you plan, the more confidence levels diminish, but a three-to-six-month roadmap can serve as a continuous reality check in the face of a changing environment.

Make a concerted effort every week to revisit your objectives, reconcile overall direction, review time frames and deadlines, and communicate variances. Reality has proven that time rarely gets made up on software projects. This is especially true in the world of agile development. Change is accepted, and even fostered, for business reasons - even if technical in nature - so be prepared to communicate, in addition to delivering early and often.


Cadence

I'd like to take credit for using the term cadence in this context, but its initial utterance came from Gary Evans of Evanetics, Inc. (http://www.evanetics.com) in a conversation we had about the virtues of agile development. At VersionOne, we have consistently used the concept of cadence, or rhythm, to convey one of the most beneficial effects of agile development. Most successful agile teams get into a powerful groove that can significantly benefit ongoing commitment and reliability. And this deeply ingrained grasp of delivery in its broadest sense can serve as a significant mitigator of risk.

Instead of delivering once a year or more, agile teams deliver working, tested, installable software every iteration. In this type of environment, unexpected loose ends greatly diminish, defects are easier to manage, integrated builds are second nature, and production deployment becomes more streamlined and foolproof. This is not to say that some problems won't arise, but those that do will be more manageable.

To early practitioners, this team rhythm is a goal to strive for as early as possible. To experienced practitioners, this rhythm greatly simplifies agile development efforts. Additionally, with this kind of stable rhythm natural to a team, greater predictability, both in the short term and the long term, can be achieved and counted on for project planning and execution purposes.

Cost

Even agile development can't escape the financial component of software development. The challenge and advantage of agile development is the opportunity to justify the cost with business value and customer benefit along the way. Instead of managing projects as cost centers, agile development views software as an investment, with the highest possible returns being generated earliest in the development cycle.

While you may not have the same direct influence over the business benefit that you do over the cost, the opportunity for business to extract value, continuously align business and technology objectives, and adjust to changing business dynamics is extremely valuable. Although the cost model for most agile development projects is fairly straightforward (i.e., cost per iteration), fundamental changes to any financial assumptions still need to be managed and communicated as they evolve. In agile, while changes may occur more frequently, they should also be more obvious, making decisions regarding impact and future plans easier.

Conclusion

There are so many things that today's project leaders can pay attention to during software development projects. A major goal of agile development is to simplify the process of software development, minimizing and streamlining the aspects and artifacts outside of working software to those that actually matter. Agree early on with sponsors, stakeholders, and team members on the necessary visibility requirements of a project in light of working software being available for review, feedback, and potential deployment on a frequent basis. While planning and tracking a project may require less effort, the challenge and importance of communication in a changing and somewhat chaotic world is amplified exponentially.




Feature

Introducing C# Generics

Leverage code reuse without sacrificing type safety

By Robert R. Hauser

About the Author

Robert R. Hauser has a master's degree in computer science specializing in artificial intelligence with more than 15 years of development experience in C, C++, C#, and Java, building software for networking, filesystems, middleware, and natural language processing. He works at Recursion Software, which is focused on providing a next-generation application platform for .NET and Java. The company's agent-based Voyager Edge product gives software engineers maximum flexibility to develop dynamic, intelligent, mobile, and decentralized applications easily in the software language(s) of their choice on the devices, virtual machines, and operating systems they need to target, while leveraging all of their existing code. Recursion Software also provides extensive toolkit libraries for C++, Java, and C# .NET to aid further in the development of next-generation applications.

How often have you wanted to reuse some code you previously wrote but it didn't quite fit in your current project? Code reuse is an oft-touted benefit of modern object-oriented programming. With the advent of generic support in the C# language, appearing in the .NET Framework 2.0, developers have new leverage for writing code that can be reused without compromising type safety.

About Type Safety

A key benefit of languages that support type checking is type safety. Type safety at build time and/or runtime prevents code from manipulating data of an incorrect type. As part of its type-safe approach, C# detects type mismatch errors at build time in the compiler and at runtime in the Common Language Runtime (CLR). The following is a type mismatch error that C# will catch at build time:

Object a = new Object();
String b = a;

The second line will generate a compiler error because a plain Object can't be used as a String while maintaining type safety. Unfortunately, build-time type checking is ineffective if explicit casts are used. A cast lets the programmer circumvent the type checking at build time. For programming languages without runtime type checking this can result in invalid operations on data types having unpredictable consequences, such as manipulating a fragment of a String data type as if it were an int. C# provides type safety checks at runtime as well as build time. Here is a type mismatch error that won't be caught until the program is run:

Object a = new Object();
String b = (String)a;

The program will build without errors. However, when running the program, it will generate an InvalidCastException on the second line because the Object referred to by 'a' can't be converted to the type String.

This type mismatch error is blatant but serves to remind us to avoid explicit casting since it negates the type safety provided at build time. Relegating these errors to program runtime incurs additional



productivity cost (see "The Zen of Strong Typing" sidebar). In fact, now that we have the flavor of the type checking performed at build time, languages providing type safety features shouldn't lead us into situations that require us to turn off type safety by introducing such casts.

Reusing Code Without Generics

Evan, a programmer hired to write software for a local bakery, needs to manipulate strings. The implementation of a minimal collection is shown in Listing 1. The collection is a singly linked list called List1. Since Evan is only interested in storing String type data at this point, he builds the collection to fit the need to store String data. The bold text in Listing 1 shows the locations where the collection is specific to the type String.

Now Evan can use the collection as follows:

List1 aList = new List1();
aList.Add("a");
aList.Add("b");
for (String item = aList.Remove();
     item != List1.NO_ITEM;
     item = aList.Remove())
{
    // Do something with item
}

List1 is type safe, meaning it accepts and returns String types, and this type safety is enforced at build time.

Eventually Evan has to track the products waiting for the oven, items in the oven, and goods under the display counter. This is an opportunity for reusing the previously programmed collection, List1. The problem is he now needs to collect BakeItems, not Strings. Evan could copy the source code to form a new collection by replacing String with BakeItem, but this approach has dire consequences. If an error is found later then both copies of the source code will have to be fixed. The baker has also indicated that Evan will have to track waiting customers, inventory shipments, and other things. Writing separate collection code for each data type is clearly not desirable.

Evan decided to rewrite the collection with the most generally available type, Object. This results in a List2 class where the only difference is the replacement of all uses of the data type String with Object, noted in bold text. Below is a fragment of List2.

public class List2
{
    public static Object NO_ITEM = default(Object);

    internal class ListNode
    {
        public ListNode next = null;
        public Object obj = NO_ITEM;
    }

    // ...
    public virtual void Add(Object obj)
    // ...
    public virtual Object Remove()
    // ...

Deprecating List1 in favor of List2 requires refactoring the earlier string collecting code.

List2 aList = new List2();
aList.Add("a");
aList.Add("b");



for (Object item = aList.Remove();
     item != List2.NO_ITEM;
     item = aList.Remove())
{
    String theValue = (String)item;
    // Do something with theValue
}

The advantage of this rewrite is that we can use List2 for any type of thing we want to collect. For primitive types, C# will perform a conversion to an appropriate object in a process called boxing. This lets us transparently handle primitive types as if they were objects. Therefore, we could use List2 with int as follows:

aList.Add(1);
aList.Add(2);
for (Object item = aList.Remove();
     item != List2.NO_ITEM;
     item = aList.Remove())
{
    int theValue = (int)item;
    // Do something with theValue
}

For Evan, List2 seems well prepared for String, int, BakeItem, Customer, or anything else he might need. However, note that a cast is now necessary to convert from the most general type, Object, to the actual type, String or int in the code fragments. Recall that explicit casting is just the situation we want to avoid so we don't negate the type checking done at build time. For instance, during the development effort, Evan accidentally added an instance of a data type called BakeItem to a List2 collection meant only to contain Customer elements. The error will appear as an InvalidCastException at runtime, with no indication of problems at build time.

This weakens the original intent of having multiple different homogeneous collections in a way that allows any individual collection to accept unintended data types. Even with the explicit casts that negate build-time type checking, this approach to reusing code is superior to the alternative of writing code for a multitude of type-specific collections. This code reuse versus type safety tension was a problematic state of affairs with C# 1.1. With the addition of generics to C# 2.0 there is now a better solution at hand.

Reusing Code with Generics

By using generics Evan can get the benefit of type safety at build time without having to duplicate code. Let's see how with a code fragment showing the major changes needed. The full code for the generic List3<T> is found in Listing 2.

public class List3<T>
{
    internal static T NO_ITEM = default(T);

    internal class ListNode
    {
        public ListNode next = null;
        public T obj = NO_ITEM;
    }

    // ...
    public virtual void Add(T obj)
    // ...
    public virtual T Remove()
    // ...

With the syntax <T> immediately after the class name, we indicate a type parameter, T, which stands in place of some actual type that's specified at each location where a new List3 is used. In fact, we won't call this class List3 but rather List3<T>. Within the class itself the type parameter T is used as if it were a real type (like our old String or Object types). The C# build time and runtime will conspire to use this generic class, List3<T>, as a template that can be used to generate as many "regular" classes as needed for each actual type that a program specifies to take the place of the type parameter T.

Here's how Evan can refactor the string manipulation code to use the generic class List3<T>.

List3<String> aList = new List3<String>();
aList.Add("a");
aList.Add("b");
for (String item = aList.Remove();
     item != List3<String>.NO_ITEM;
     item = aList.Remove())
{
    // Do something with item
}

Note that where we want to use List3<T> we must specify what the type parameter really is, in this case String. The C# runtime will create a class for List3<String>, unless it's already done so. You don't need explicit casting because the Remove() method for List3<String> returns a String. At build time we have the advantage of full type checking that prevents entering the wrong type of object into the collection.
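To see that build-time check in action, here is a small hypothetical snippet (our own illustration, using the List3<T> of Listing 2, not code from the article):

```csharp
List3<String> names = new List3<String>();
names.Add("cinnamon roll");   // accepted: the argument matches the String type argument
// names.Add(42);             // rejected by the compiler: an int is not a String
```

The bad call never compiles, so the runtime InvalidCastException scenario from the List2 version simply cannot occur.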

Where the same code could operate identically on many types, using generics elegantly solves the tension of how to write reusable code while maintaining type safety.

Back to the Bakery

Back at the bakery, Evan uses the generic List3<T> with Strings, Customer objects, and BakeItem objects:

List3<BakeItem> waitingForOven = new List3<BakeItem>();
List3<BakeItem> inOven = new List3<BakeItem>();
List3<BakeItem> onDisplay = new List3<BakeItem>();

With three collections the s<strong>of</strong>tware is modeling<br />

the process as bakery items are queued for the<br />

oven, cooked, and then put on display. To facilitate<br />

some cooking experiments, the baker asks Evan to<br />

track how many <strong>of</strong> a particular bakery item were in<br />

the oven at the same time. Each bakery item should<br />

maintain the maximum and minimum number <strong>of</strong><br />

items in the oven during its cook time.<br />

First, Evan creates an IWatermark interface and<br />

implements that interface in the BakeItem class.<br />

The IWatermark interface is as follows and the<br />

BakeItem class that implements this interface is<br />

shown in Listing 3.<br />

// Do stuff with IWatermark methods<br />

}<br />

Unfortunately this approach is undesirable.<br />

It adds useless code in the general case where<br />

List3 is used with a type that doesn’t implement<br />

IWatermark. Another pitfall <strong>of</strong> this approach is that<br />

any object that implements IWatermark will be<br />

instrumented in any List3 that it’s put in, not<br />

just the collection that’s being used to represent the<br />

oven. Finally, this strategy hides the intrusive nature<br />

<strong>of</strong> List3 and thus can hide type mismatch<br />

errors.<br />

Leaving the List3 as general as possible, Evan<br />

implements List4 with a <strong>con</strong>straint that it requires<br />

anything put in it to implement IWatermark.<br />

See Listing 4 for the full watermarking collection.<br />

Here’s the first line:<br />

public class List4 : List3 where T : Iwatermark<br />

public interface IWatermark<br />

{<br />

int HighMark { get; set; }<br />

int LowMark { get; set; }<br />

}<br />

The bakery items that are in the oven are in a<br />

List3 collection in a variable called inOven.<br />

If this collection knew that it held items that<br />

implement the IWatermark interface then it could<br />

invoke the required instrumentation during the<br />

additions and removals <strong>of</strong> items in the collection.<br />

Having looked at the documentation for C#<br />

generics, Evan <strong>con</strong>siders placing a <strong>con</strong>straint on<br />

a generic parameter type. Evan <strong>con</strong>siders turning<br />

List3 into an intrusive <strong>con</strong>tainer that makes use<br />

<strong>of</strong> the properties available via IWatermark. It would<br />

look like this:<br />

public class List3 where T : Iwatermark<br />

However, this would require implementing<br />

IWatermark on all <strong>of</strong> the classes List3 <strong>con</strong>tains,<br />

including Customer types and String types. This<br />

would also prevent List3 from using primitive<br />

types. A more viable approach could be to alter<br />

List3 to test internally if the type that it holds<br />

implements IWatermark and if so then record the<br />

necessary statistics. We could make use <strong>of</strong> C#’s<br />

is operator in the implementation <strong>of</strong> List3 to<br />

test type compatibility dynamically for any T with<br />

IWatermark as follows:<br />

public virtual void Add(T obj)<br />

{<br />

if (obj is IWatermark) {<br />

A generic class can be used as a base class so<br />

Evan chose to specialize List4 from List3<br />

but added a <strong>con</strong>straint that List4 requires all<br />

items put in it to implement IWatermark. The<br />

generic class List4 is explicit about its intrusive<br />

nature on the objects it holds and C#’s type safety<br />

provisions come into play at build time and runtime<br />

to prevent it from operating on inappropriate<br />

types. Here’s the modified code snippet:<br />

List3 waitingForOven = new<br />

List3();<br />

List4 inOven = new<br />

List4();<br />

List3 onDisplay = new<br />

List3();<br />

Delving Deeper

Through illustration we introduced generic classes and how to put a constraint on a generic type parameter.

C# allows for generic classes, structs, interfaces, delegates, static methods, and instance methods. Any of these can be parameterized based on type, and we're not limited to a single type parameter. For example, we could have a generic class Several<T, U, V> where each type parameter could be the same or unrelated.

Several<int, int, String> a = new Several<int, int, String>();

You can use generic methods to define algorithms that are clearly separated from the data type on which they operate. A generic method specifies type parameters following the method name:

public U DoSomething<U, V>(U u, V v)

Both parameters and return type can be made generic, and the type parameters don't have to be the same as the type parameters on the class. The C# compiler can do type inferencing on method parameters so that, for the above method, instead of coding DoSomething<String, int>("name", 2) you can simply type DoSomething("name", 2). The compiler infers the generic type parameters from the data types of the method parameters.

Properties and indexers don't allow for introducing new type parameters, but they can use any type parameters of the class that contains them. Generic methods support constraints on their type parameters just like classes.
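As a concrete illustration of a constrained generic method (a hypothetical example with names of our own invention, not from the article's listings):

```csharp
// Returns the larger of two values. The constraint guarantees
// that CompareTo is available on T, so no cast is needed.
public static T Max<T>(T x, T y) where T : IComparable<T>
{
    return x.CompareTo(y) >= 0 ? x : y;
}
```

Thanks to type inferencing, Max(3, 5) and Max("small", "tall") both compile without explicit type arguments; the compiler infers T from the operands.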

You can apply constraints to each type parameter separately and, for a type parameter T, the constraint begins with the syntax "where T :". The constraints are listed as comma-separated items: first an optional class or struct keyword (restricting T to reference or value types) or a base class, then any interfaces, and finally an optional default constructor constraint, written new(), which requires the type parameter to supply a public default constructor. The class Test<T> shows multiple constraints.

class Test<T> where T : MyBase, IComparable, IMyInterface, new()

Use constraints with caution since they reduce generality and can prevent code reusability. Operators, such as the == operator, can't be used as a constraint.

In List3<T> we commented out the code:

//if (obj == NO_ITEM)
//    throw (new Exception("Cannot hold the default() value."));

This code fragment isn't permitted because the type parameter T can't be guaranteed to have the == operator available. Our original design of the list collection uses default(T) as the value to return if the list is empty. For reference types, default(T) resolves to null; for value types it produces zero-filled contents. Since we can't constrain type T to require the == operator, the compiler won't allow its use and we remove it. However, this code disallowed entering the default(T) value into our List3<T> collection, and we've now introduced ambiguity. When Remove() returns the default(T) value, how will the code using the collection distinguish whether the list is empty or the default value has actually been removed?

When storing only reference types such as String and Object, it was reasonable for our collection to reject storing null values. However, if the fully generic List3<T> is used as List3<int>, it makes little sense to disallow adding the value '0', which is default(int), to the collection. Although we don't undertake it here, the right approach to this problem is to redesign the collection to be able to hold any value, including default(T), and to use another mechanism to indicate that the list is empty.
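One workable mechanism (a hypothetical sketch of our own, not part of the article's listings) mirrors the TryGetValue pattern that .NET 2.0 uses on Dictionary<K, V>: report success through a bool and hand the item back via an out parameter. Inside the List3<T> of Listing 2 it might look like:

```csharp
// Returns false when the list is empty, so default(T) can be stored freely.
public virtual bool TryRemove(out T item)
{
    if (length > 0)
    {
        ListNode node = head;
        head = head.next;
        --length;
        item = node.obj;
        return true;
    }
    item = default(T);
    return false;
}
```

Callers then loop with while (aList.TryRemove(out item)) { ... } and never need to compare against NO_ITEM.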

The boxing process mentioned previously involves taking a primitive and creating a simple object for it from heap memory. When we use generics with primitives, no such process is necessary, and we get a modest performance gain for doing things in the more elegant way. The performance gain is negligible in typical programs doing network and file operations, but for programs that spend the bulk of their time looping through collections of primitive types the gain is dramatic.
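To make the contrast concrete, here is a hypothetical side-by-side use of the article's List2 and List3<T> classes (the comments mark where boxing happens):

```csharp
List2 boxed = new List2();
boxed.Add(1);                        // the int 1 is boxed into a heap-allocated Object
int fromBoxed = (int)boxed.Remove(); // and unboxed again by the explicit cast

List3<int> direct = new List3<int>();
direct.Add(1);                       // stored as a plain int; no box, no heap allocation
int fromDirect = direct.Remove();    // no cast and no unboxing
```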

The type information about generics (the number of type parameters and their constraints) is fully supported by the C# 2.0 bytecode and runtime. This provides type safety for reflection even when using third-party assemblies containing generics.

Generic Collections

In our illustration we used only a minimal generic collection. Due to their utility, collections are often the first classes to become generic. C# 2.0 provides the following collection classes in the System.Collections.Generic namespace:

• Dictionary<K, V>: A key and value collection
• LinkedList<T>: A linked list
• List<T>: A list (implemented on an array)
• Queue<T>: A queue
• SortedDictionary<K, V>: A key and value sorted collection
• SortedList<K, V>: A sorted linked list
• Stack<T>: A stack

There is also System.ComponentModel.BindingList<T>, which can fire state change events. C# defines several generic interfaces including IEnumerable<T>, which enables all collections to support enumeration and C#'s foreach keyword. This is how it works on a LinkedList<int>:

LinkedList<int> a = new LinkedList<int>();
a.AddLast(1);
a.AddLast(2);
foreach (int item in a)
{
    // Do something with item
}

Both System.Collections.Generic.List<T> and System.Array have many useful generic methods, many designed to use four generic delegates in the System namespace:

public delegate void Action<T>(T t);
public delegate int Comparison<T>(T x, T y);
public delegate U Converter<T, U>(T from);
public delegate bool Predicate<T>(T t);

Combining generic delegates with utility methods enables powerful processing of collections.
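For example, List<T>.FindAll takes a Predicate<T> and List<T>.ForEach takes an Action<T>; with C# 2.0 anonymous delegate syntax the call sites stay compact (an illustrative sketch, not from the article):

```csharp
List<int> numbers = new List<int>();
numbers.Add(1); numbers.Add(2); numbers.Add(3); numbers.Add(4);

// Predicate<int> decides which elements FindAll keeps
List<int> evens = numbers.FindAll(delegate(int n) { return n % 2 == 0; });

// Action<int> runs once per element
evens.ForEach(delegate(int n) { Console.WriteLine(n); });
```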

Unfortunately, collections other than List<T> and Array lack equivalent generic utility methods, but there are third-party generic libraries, such as Recursion Software's C# Toolkit, that support C# generics and provide extensive additional collections, algorithms, functions, and predicates.

Conclusion

We focused on the tension between writing reusable code and maintaining type safety, and then showed how C# 2.0 provides generics to address the problem elegantly. With generics, you can employ the type safety features of C# without preventing effective code reuse where the data type varies.

Resources

• Juval Lowy. "An Introduction to C# Generics." http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnvs05/html/csharp_generics.asp
• Generics. http://msdn.microsoft.com/vcsharp/2005/overview/language/generics/
• Bill Venners with Bruce Eckel. "Generics in C#, Java, and C++ – A Conversation with Anders Hejlsberg, Part VII." http://www.artima.com/intv/generics.html
• "An Extended Comparative Study of Language Support for Generic Programming." http://www.osl.iu.edu/publications/prints/2005/garcia05:_extended_comparing05.pdf
• Andrew Kennedy and Don Syme. "Design and Implementation of Generics for the .NET Common Language Runtime." http://research.microsoft.com/projects/clrgen/generics.pdf
• Standard ECMA-334. "C# Language Specification." 4th edition. June 2006. http://www.ecma-international.org/publications/standards/Ecma-334.htm

LISTING 1 – A MINIMAL COLLECTION CLASS FOR STRING TYPES

public class List1
{
    public static String NO_ITEM = default(String);

    internal class ListNode
    {
        public ListNode next = null;
        public String obj = NO_ITEM;
    }

    internal ListNode head = null;
    internal ListNode tail = null;
    internal int length = 0;

    public List1() {}

    public virtual void Add(String obj)
    {
        if (obj == NO_ITEM)
            throw (new Exception("Cannot hold the default() value."));
        ListNode node = new ListNode();
        node.obj = obj;
        if (++length == 1)
            head = node;
        else
            tail.next = node;
        tail = node;
    }

    public virtual String Remove()
    {
        if (length > 0)
        {
            ListNode node = head;
            head = head.next;
            --length;
            return node.obj;
        }
        return NO_ITEM;
    }
}

LISTING 2 – A MINIMAL GENERIC COLLECTION CLASS FOR ANY TYPE

public class List3<T>
{
    internal static T NO_ITEM = default(T);

    internal class ListNode
    {
        public ListNode next = null;
        public T obj = NO_ITEM;
    }

    internal ListNode head = null;
    internal ListNode tail = null;
    internal int length = 0;

    public List3() {}

    public virtual void Add(T obj)
    {
        //if (obj == NO_ITEM)
        //    throw (new Exception("Cannot hold the default() value."));
        ListNode node = new ListNode();
        node.obj = obj;
        if (++length == 1)
            head = node;
        else
            tail.next = node;
        tail = node;
    }

    public virtual T Remove()
    {
        if (length > 0)
        {
            ListNode node = head;
            head = head.next;
            --length;
            return node.obj;
        }
        return NO_ITEM;
    }
}

LISTING 3 – A CLASS FOR REPRESENTING BAKERY ITEMS

public class BakeItem : IWatermark
{
    private int id;

    public BakeItem(int i)
    {
        id = i;
    }

    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    // Implement IWatermark
    private int highMark = default(int);
    private int lowMark = default(int);

    public int HighMark
    {
        get { return highMark; }
        set { highMark = value; }
    }

    public int LowMark
    {
        get { return lowMark; }
        set { lowMark = value; }
    }
}

LISTING 4 – A SPECIAL COLLECTION TO RECORD WATERMARKS IN EACH ITEM COLLECTED<br />

public class List4&lt;T&gt; : List3&lt;T&gt; where T : IWatermark<br />
{<br />
public override void Add(T obj)<br />
{<br />
base.Add(obj);<br />
obj.LowMark = length;<br />
for (ListNode node = head; node != null; node = node.next)<br />
{<br />
// Since T is constrained we can use the HighMark property here<br />
if (node.obj.HighMark < length)<br />
{<br />
node.obj.HighMark = length;<br />
}<br />
}<br />
}<br />
public override T Remove()<br />
{<br />
T result = base.Remove();<br />
for (ListNode node = head; node != null; node = node.next)<br />
{<br />
// Since T is constrained we can use the LowMark property here<br />
if (node.obj.LowMark > length)<br />
{<br />
node.obj.LowMark = length;<br />
}<br />
}<br />
return result;<br />
}<br />
}<br />

– CONTINUED FROM PAGE 7<br />

A Second Glance at SharePoint 2007 Features<br />

• Feature / Site Template Association - Allows you to indicate that, when your feature is active, all new provisions of an existing site template will also contain an activated feature of your choosing (could be your own, could be a different feature). Example: you could make it so that, once your My Company Corporate Policy feature is installed, all future provisions of the Team Collaboration site template also include the Corporate Default Document Libraries feature. Hopefully I don’t need to explain further how unbelievably useful this type of Feature element can be.<br />
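For concreteness, this kind of association is declared in a Feature's element manifest. The sketch below is a hedged reconstruction from the WSS 3.0 schema as best remembered, with a placeholder GUID and the stock Team Site template name; verify the exact element and attribute names against the SDK before relying on it.

```xml
<!-- Element manifest sketch (placeholder values): staple the feature with the
     given Id onto every new site provisioned from the Team Site template. -->
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <FeatureSiteTemplateAssociation
      Id="00000000-0000-0000-0000-000000000000"
      TemplateName="STS#0" />
</Elements>
```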

There are two key points that I want to make here. The first is that Features allow you unprecedented ability to create new functionality for SharePoint and extend existing functionality. The second key point is that Features aren’t just something that SharePoint implemented to allow you to extend things; SharePoint itself is built using Features. After I wrote some code to enumerate the list of hidden features contained in my test Farm (the source code is in my latest book, Microsoft SharePoint 2007 Development Unleashed), I found that SharePoint installs with 136 Features. Most of these are hidden Features, but, because they use the Feature infrastructure, all of that pluggable flexibility is available to you as a SharePoint developer.<br />

Here’s a quick list of some of the non-hidden Features that ship with SharePoint 2007:<br />

• Team Collaboration<br />

• Review Workflows<br />

• Premium Web<br />

• Slide Library (splits a PPT presentation into individually viewable slides on the site without breaking the PPT file open)<br />

• Premium Web Application<br />

• Premium Root Site<br />

• Transaction Management Library<br />

• Global Web Parts - stock set of web parts that can be used at any scope<br />

• Enhanced Search<br />

• Base Web Application<br />

• Spell Checking<br />

• Signatures Workflow - standard workflow for getting "sign off" on documents<br />

• Reporting<br />

• Premium Site<br />

• Publishing Web - not sure, but I'm pretty sure this is what enables the Wiki functionality<br />

• Base Web<br />

• Base Site<br />

• Basic Search<br />

• Translation Workflow - workflow for sending a document through rounds of translation into multiple languages<br />

• Expiration Workflow<br />

• Excel Server - I could write an entire chapter on... oh wait, I did...<br />

• Search Web Parts<br />

• Publishing Site<br />

• Issue Tracking Workflow - That's right... the "Issue Tracking" list that comes with SharePoint is workflow enabled.<br />

So at this point, you should probably be saying to yourself, “Self! I MUST get a hold of SharePoint 2007 when it comes out!!”<br />

Here’s the bottom line: If you are going to be developing and writing code for SharePoint 2007, then you must learn about Features and Solutions. Features aren’t simple, isolated points of extensibility (this assumption is easy to make; I made it too). Features are, in fact, the exoskeleton on which ALL of SharePoint is based.<br />



Protocols<br />

Exploring FTP in .NET 2.0<br />

Microsoft’s heart wasn’t in it<br />

By Alexander Gladshtein<br />

About the Author<br />

Alex Gladshtein is a product manager at The CBORD Group, Inc. in Ithaca, NY. CBORD is the world’s largest supplier of food and nutrition software solutions, campus-wide ID card programs, cashless dining, and housing management systems. Alex holds undergraduate and graduate degrees from the University of Michigan, and when not obsessing about .NET he enjoys spending time with his wonderful wife and cheering on the Michigan Wolverines.<br />

agladshtein@hotmail.com<br />

Of all the network protocols missing from .NET 1.0 and 1.1, the most notable is the File Transfer Protocol (FTP). Expecting software developers to implement the protocol manually was unreasonable, especially considering the complexity of the protocol in comparison to such protocols as HTTP or TELNET. This led to a thriving third-party component industry that has been perfecting FTP implementations since VB3 (and has been an ideal example of the argument for buying versus building when FTP functionality was required, especially with a price point of around $250). With the coming of .NET 2.0, Microsoft has introduced a native FTP communications implementation, but with some very curious design decisions that may complicate programming sufficiently so as to make abandoning commercial FTP components inadvisable for many users.<br />

The FTP Protocol<br />

FTP is the standard protocol for transferring files between computers and is based on RFC 959. The protocol can be used to upload files, download files, get directory listings, navigate directories, and conduct a number of different file operations. As long as both computers conform to the FTP specification, the type of computer or operating system is irrelevant. The basic operating principle is that a client accesses files stored on a server, which will in turn respond to the client, returning requested information or executing requested tasks.<br />

One of the more interesting aspects of FTP is that it uses two connections for communications rather than the single connection used by most other protocols such as HTTP or SMTP. The first connection is the Control connection, which is responsible for log-ins and command processing. This connection normally occurs over port 21 and uses the Telnet protocol for communications. The second connection is the Data connection, which is responsible for sending files to the server, receiving files from the server, and sending directory listings from the server.<br />

The operational process <strong>of</strong> communications over<br />

the Data <strong>con</strong>nection <strong>con</strong>sists <strong>of</strong> a client choosing an<br />

unused port to listen for the incoming data <strong>con</strong>nection.<br />

Then, the client lets the server know which port<br />

is listening by sending a PORT command over the<br />

Control <strong>con</strong>nection. This is followed by a follow-up<br />

command over the Control <strong>con</strong>nection to initiate an<br />

action such as STOR or RETR. The server then <strong>con</strong>nects<br />

to the client and begins the transfer <strong>of</strong> data.<br />
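The PORT command mentioned above encodes the client's address and listening port as six decimal byte values, with the port split into a high and a low byte (per RFC 959). The helper below is our own illustration of that encoding, not code from the article:

```csharp
using System;

// Sketch: building the RFC 959 PORT command argument.
// The IPv4 address contributes four bytes and the port two more,
// as (port / 256) and (port % 256).
public static class PortCommand
{
    public static string Build(string ipv4, int port)
    {
        return string.Format("PORT {0},{1},{2}",
            ipv4.Replace('.', ','), port / 256, port % 256);
    }

    public static void Main()
    {
        // A client listening on 192.168.0.10:5001 announces itself as:
        Console.WriteLine(Build("192.168.0.10", 5001)); // PORT 192,168,0,10,19,137
    }
}
```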

.NET 2.0 FTP Implementation<br />

The first striking aspect of the Microsoft implementation is the decision to extend the WebRequest/WebResponse model designed for HTTP to FTP, with FtpWebRequest and FtpWebResponse. This is a somewhat unusual choice, as a pure request/response model is ideal for a stateless protocol like HTTP, whereas FTP operations normally maintain their state while a series of commands is executed to fulfill an operation. What Microsoft has done is encompass a series of protocol-level requests into a single programmatic-level request. For example, programmatically a user will create a single request for a directory list, but behind the scenes multiple requests will be sent containing the USER, PASS, PWD, CWD, TYPE I, PASV, and NLST commands, most likely in that order.<br />

The next unusual aspect of the implementation is a result of using WebRequest/WebResponse, which means that the programmatic model is URI-based (rather than the more intuitive approach of specifying a server, location, and filename). In this model, every request requires a valid URI, such as ftp://balrog/source when creating a request for a file list. This is fine when working with HTTP, as every request is stateless, but when attempting to carry out a series of actions in FTP, specifying a new request with a URI for each action quickly becomes unwieldy.<br />
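One small mitigation for that URI bookkeeping is to compose each request's URI with System.UriBuilder rather than string concatenation. The helper name below is ours, and the host matches the article's example server:

```csharp
using System;

// Sketch: centralizing per-request FTP URI construction with UriBuilder,
// so path segments and the scheme are handled consistently.
public static class FtpUris
{
    public static Uri ForPath(string host, string path)
    {
        UriBuilder builder = new UriBuilder("ftp", host);
        builder.Path = path;
        return builder.Uri;
    }

    public static void Main()
    {
        Console.WriteLine(ForPath("balrog", "source"));                // ftp://balrog/source
        Console.WriteLine(ForPath("balrog", "source/sourcecode.zip")); // ftp://balrog/source/sourcecode.zip
    }
}
```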

Further complicating the architecture is the requirement to understand the underlying connection model to effectively implement a series of commands. By default, KeepAlive is true, which is as it should be, as KeepAlive specifies whether the control connection to the FTP server is closed after the request completes. The non-intuitive aspect is that to close the connection, KeepAlive must be set to false during the last request, which will issue a QUIT command to the server. A more intuitive approach would be to have a more direct mechanism to close the connection, such as a Close method.<br />

To specify which operation to execute, the WebRequestMethods.Ftp structure is used. While this structure supports most of the common operations, such as WebRequestMethods.Ftp.ListDirectoryDetails and WebRequestMethods.Ftp.DownloadFile, a number of commands are missing. The most notably absent commands are those for directory navigation, which is understandable, as a request/response model makes such commands more complicated to implement.<br />

This does not mean that there is no way to send custom commands. While FtpWebRequest.Method is normally used with the WebRequestMethods.Ftp structure, any valid command that a server will recognize can be specified as a string. This string can be a command or a series of commands separated by a "\n" character. Unfortunately, the Method property does not allow the use of a custom command that requires accompanying data or one that requires the server to respond to the command with data. Microsoft also warns that no command should be sent that will alter the state of the connection, such as MODE, PASV, or PORT.<br />

On a more positive note, Microsoft did include support for SSL-based security in the FTP implementation with the FtpWebRequest.EnableSsl and FtpWebRequest.ClientCertificates properties. The technique used is Explicit security, which sends an AUTH TLS command. While Explicit security is the most common approach for encrypting FTP communications, the technique that is absent is Implicit security, which does not send any command, but rather expects to establish a secure connection immediately after establishing the TCP connection. Supporting both techniques would guarantee the widest possible security compatibility, but supporting Explicit should be sufficient.<br />

Finally, the one missing element that is probably my greatest personal pet peeve is the lack of directory-list parsing. Identifying files from a directory command is a very common task, and while the parsing code can be written manually, supporting both the MS-DOS-style listing and the various UNIX listings can be a time-consuming activity, especially in the area of maintenance.<br />
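To make the pain concrete, here is a deliberately minimal sketch of such a parser for one common UNIX "ls -l" listing shape; real servers vary enough that a production parser needs far more cases, which is exactly the point being made above:

```csharp
using System;
using System.Collections.Generic;

// Sketch: extracting plain-file names from a UNIX-style LIST response.
// Handles only the common nine-field "ls -l" shape; MS-DOS listings and
// server variants would each need their own branch.
public static class ListingParser
{
    public static List<string> FileNames(string listing)
    {
        List<string> names = new List<string>();
        string[] lines = listing.Split(
            new string[] { "\r\n", "\n" }, StringSplitOptions.RemoveEmptyEntries);
        foreach (string line in lines)
        {
            // -rw-r--r-- 1 tuser users 1024 Sep 01 12:00 name (may contain spaces)
            string[] fields = line.Split(
                new char[] { ' ' }, 9, StringSplitOptions.RemoveEmptyEntries);
            if (fields.Length == 9 && fields[0].Length == 10 && fields[0][0] == '-')
                names.Add(fields[8]); // plain files only; 'd' entries are directories
        }
        return names;
    }
}
```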

For simple FTP operations, such as uploading or downloading a file, the FtpWebRequest/FtpWebResponse classes can be bypassed by using System.Net.WebClient. This class supports the http:, https:, ftp:, and file: scheme identifiers, and is useful in very simple scenarios.<br />

The Sample Application<br />

Many FTP implementations are used to poll an FTP server directory on a predetermined schedule, search for a specific file, and download the file if present. The sample presented here is a Windows Forms application that polls a server on a predetermined schedule, requests a directory listing, looks at the result to see if the target file is present, downloads the file if present, and, for good measure, uploads another file to a separate directory. The application is designed to showcase the FTP functionality and is structured as such. An actual application that executed such a task would often run as a service, use a Server Timer rather than a Windows Timer, operate on a worker thread or asynchronously, have the different FTP requests organized by functions, and have better FileIO error handling.<br />

In the sample application visible in Listing 1, Steps 1-5 demonstrate the code necessary to retrieve a directory list from an FTP server. The basic process is to initialize an FtpWebRequest class with the URI, as demonstrated below:<br />

FtpWebRequest listRequest = (FtpWebRequest)WebRequest.Create(ftpServer + remoteDir);<br />

This is the first step in every type of FTP request. The next step is to instantiate a NetworkCredential class. If this step is skipped, the request will be treated as anonymous. Remember to repeat this for every request, even on the same connection, and make sure not to change the information unless you plan to make a new connection.<br />

NetworkCredential credentials = new NetworkCredential(username, password);<br />

listRequest.Credentials = credentials;<br />

The third step is to specify the command to execute:<br />

listRequest.Method = WebRequestMethods.Ftp.ListDirectory;<br />

The fourth step is to initialize the object to hold the response:<br />

FtpWebResponse listResponse = (FtpWebResponse)listRequest.GetResponse();<br />

The last step is to read the data from the response:<br />

reader = new StreamReader(listResponse.GetResponseStream());<br />

This order of operation is repeated for most activities that read data from the server, with the exception of uploading, where data is written to the RequestStream. Steps 6-12 demonstrate the file downloading process, and Steps 13-22 explain file uploading, which, as mentioned above, does differ slightly from the data retrieval model.<br />

While these three operations look like disparate processes in code, they are in fact occurring over a single connection in an unbroken stream of client-server communications. Listing 2 is a record of the FTP communications for the code discussed above and fully demonstrates how state is preserved until the connection is closed at the end of the upload process. (Listing 2 can be downloaded from http://dotnet.sys-con.com.)<br />

Conclusion<br />

While Microsoft has done a great job with the other network protocols in .NET 2.0, the FTP implementation leaves much to be desired. This is somewhat of a surprise, as FTP is arguably the most important end-user protocol after HTTP and SMTP. Shoehorning the protocol into a request/response model does not do it justice, especially in functionality and usability. The only positive is that FTP is now available in .NET, and most operations can be executed. However, I would hold on to those third-party components if you want to implement FTP quickly and with flexibility.<br />

LISTING 1: FTP COMMUNICATIONS SOURCE CODE<br />

[C#]<br />
using System;<br />
using System.Collections.Generic;<br />
using System.ComponentModel;<br />
using System.Data;<br />
using System.Drawing;<br />
using System.Text;<br />
using System.IO;<br />
using System.Net;<br />
using System.Windows.Forms;<br />
namespace FTPFileChecker<br />
{<br />
public partial class Form1 : Form<br />
{<br />
private string downloadFile = "sourcecode.zip"; //The name of the file to download<br />
private string localDir = "c:\\";<br />
private string remoteDir = "//source"; //The directory on the FTP server where downloadFile is located<br />
private string uploadFile = "result.txt"; //The name of the file to create on the FTP server when uploading<br />
private string targetDir = "//result"; //The name of the upload directory<br />
private string ftpServer = "ftp://balrog"; //The name of the FTP server<br />
private string username = "tuser"; //The username for login<br />
private string password = "CBORD"; //The password for login<br />
public Form1()<br />
{<br />
InitializeComponent();<br />
}<br />
private void buttonStart_Click(object sender, EventArgs e)<br />
{<br />
timer1.Enabled = true;<br />
buttonStart.Enabled = false;<br />
}<br />
private void timer1_Tick(object sender, EventArgs e)<br />
{<br />
bool isPresent = false;<br />
labelLastAttempt.Text = DateTime.Now.ToString();<br />
try<br />
{<br />
isPresent = CheckForFile();<br />
if (isPresent == true)<br />
{<br />
labelLastResponse.Text = "Process Successful";<br />
labelLastTransfer.Text = labelLastAttempt.Text;<br />
}<br />
else<br />
{<br />
labelLastResponse.Text = "File Not Found";<br />
}<br />
}<br />
catch (Exception ex)<br />
{<br />
timer1.Enabled = false;<br />
MessageBox.Show(ex.Message);<br />
buttonStart.Enabled = true;<br />
}<br />
}<br />
private void buttonStop_Click(object sender, EventArgs e)<br />
{<br />
timer1.Enabled = false;<br />
buttonStart.Enabled = true;<br />
}<br />
private bool CheckForFile()<br />
{<br />
StreamReader reader = null;<br />
Stream downloadResponseStream = null;<br />
Stream uploadRequestStream = null;<br />
FileStream downloadFileStream = null;<br />
FileStream uploadFileStream = null;<br />
try<br />
{<br />
//Look for the file on the server by getting a List from the server<br />
//Step 1. Create the FtpWebRequest with the required URI - in this case "ftp://balrog/source"<br />
FtpWebRequest listRequest = (FtpWebRequest)WebRequest.Create(ftpServer + remoteDir);<br />
//Step 2. Create and set the credentials for login with the username and password<br />
NetworkCredential credentials = new NetworkCredential(username, password);<br />
listRequest.Credentials = credentials;<br />
//Step 3. Specify the command to execute - in this case "ls"<br />
listRequest.Method = WebRequestMethods.Ftp.ListDirectory;<br />
//Step 4. Create the FtpWebResponse to reference the incoming data<br />
FtpWebResponse listResponse = (FtpWebResponse)listRequest.GetResponse();<br />
//Step 5. Initialize the StreamReader with the GetResponseStream that contains the List<br />
reader = new StreamReader(listResponse.GetResponseStream());<br />
string result = reader.ReadToEnd();<br />
textResult.AppendText(textResult.Text + labelLastAttempt.Text + "\r\n" + result + "\r\n");<br />
//Check if the file is in the listing<br />
if (result.ToUpper().Contains(downloadFile.ToUpper()))<br />
{<br />
//The file is there so continue to the download phase<br />
//Step 6. Create a new FtpWebRequest, this time with a URI of "ftp://balrog/source/sourcecode.zip"<br />
FtpWebRequest downloadRequest = (FtpWebRequest)WebRequest.Create(ftpServer + remoteDir + "//" + downloadFile);<br />
//Step 7. Set the credentials for the request. If NetworkCredentials are missing or changed, the request will attempt<br />
//to authenticate the user once again as either anonymous if the credentials are missing or to a new value if the<br />
//credentials are changed. This will most likely result in an error as the server is not in authentication mode.<br />
//This is proper operation as FtpWebRequest.KeepAlive is set to "true" by default and thus the previous connection<br />
//is being re-used.<br />
NetworkCredential downloadCredentials = new NetworkCredential(username, password);<br />
downloadRequest.Credentials = downloadCredentials;<br />
//Step 8. Specify that the operation will be a file download<br />
downloadRequest.Method = WebRequestMethods.Ftp.DownloadFile;<br />
//Step 9. Create the FtpWebResponse to reference the incoming data<br />
FtpWebResponse downloadResponse = (FtpWebResponse)downloadRequest.GetResponse();<br />
//Step 10. Initialize the Stream with the GetResponseStream that contains the data<br />
downloadResponseStream = downloadResponse.GetResponseStream();<br />
string fileName = downloadFile;<br />
//Step 11. Initialize the FileStream for the local file<br />
downloadFileStream = File.Create(localDir + "\\" + downloadFile);<br />
byte[] downloadBuffer = new byte[1024];<br />
int downloadBytesRead;<br />
//Step 12. Read the data from the Response and write it to the FileStream<br />
while (true)<br />
{<br />
downloadBytesRead = downloadResponseStream.Read(downloadBuffer, 0, downloadBuffer.Length);<br />
if (downloadBytesRead == 0)<br />
break;<br />
downloadFileStream.Write(downloadBuffer, 0, downloadBytesRead);<br />
}<br />
textResult.AppendText("Download complete." + "\r\n");<br />
//The download has finished so proceed to the upload phase<br />
//Step 13. Create a new FtpWebRequest, this time with a URI of "ftp://balrog/result/result.txt"<br />
FtpWebRequest uploadRequest = (FtpWebRequest)WebRequest.Create(ftpServer + targetDir + "//" + uploadFile);<br />
//Step 14. Set the credentials for the request.<br />
NetworkCredential uploadCredentials = new NetworkCredential(username, password);<br />
uploadRequest.Credentials = uploadCredentials;<br />
//Step 15. Specify that the operation will be a file upload<br />
uploadRequest.Method = WebRequestMethods.Ftp.UploadFile;<br />
//Step 16. Set KeepAlive to false so that a "QUIT" command is issued to the server, telling it to close the<br />
//connection after the file transfer.<br />
uploadRequest.KeepAlive = false;<br />
//Step 17. As a habit, Proxy should be set to null during uploads as this operation does not support an Http proxy<br />
uploadRequest.Proxy = null;<br />
//Step 18. Initialize the Stream with the GetRequestStream that will have data written to it<br />
uploadRequestStream = uploadRequest.GetRequestStream();<br />
//Step 19. Initialize the FileStream with the file to upload<br />
uploadFileStream = File.Open(localDir + "\\" + uploadFile, FileMode.Open);<br />
byte[] uploadBuffer = new byte[1024];<br />
int uploadBytesRead;<br />
//Step 20. Read the data from the local file and write it to the Request<br />
while (true)<br />
{<br />
uploadBytesRead = uploadFileStream.Read(uploadBuffer, 0, uploadBuffer.Length);<br />
if (uploadBytesRead == 0)<br />
break;<br />
uploadRequestStream.Write(uploadBuffer, 0, uploadBytesRead);<br />
}<br />
//Step 21. Close the Request as this must be done before getting the Response.<br />
uploadRequestStream.Close();<br />
//Step 22. Create an FtpWebResponse to hold the server Response. The StatusDescription for the FtpWebResponse<br />
//should tell you that the transfer was complete.<br />
FtpWebResponse uploadResponse = (FtpWebResponse)uploadRequest.GetResponse();<br />
textResult.AppendText("Upload complete." + "\r\n");<br />
return true;<br />
}<br />
else<br />
{<br />
return false;<br />
}<br />
}<br />
catch (UriFormatException ex)<br />
{<br />
throw ex;<br />
}<br />
catch (WebException ex)<br />
{<br />
throw ex;<br />
}<br />
catch (IOException ex)<br />
{<br />
throw ex;<br />
}<br />
catch (Exception ex)<br />
{<br />
throw ex;<br />
}<br />
finally<br />
{<br />
//Step 23. Cleanup<br />
if (reader != null)<br />
reader.Close();<br />
if (downloadResponseStream != null)<br />
downloadResponseStream.Close();<br />
if (uploadRequestStream != null)<br />
uploadRequestStream.Close();<br />
if (downloadFileStream != null)<br />
downloadFileStream.Close();<br />
if (uploadFileStream != null)<br />
uploadFileStream.Close();<br />
}<br />
}<br />
}<br />
}<br />



Feature<br />

Clean and Protect a Large .NET Code Base with Coding Standards and Unit Testing<br />

How to avoid runtime exceptions, instability, crashes, degraded performance and insecure backdoors<br />

By Hari Hampapuram and Matt Love<br />

About the Authors…<br />

Hari Hampapuram is currently Parasoft’s director of development. He has extensive experience in building software development tools at Microsoft Research, Intrinsa, Philips Semiconductors, and AT&T Bell Laboratories. Hari has a PhD in computer science from Rutgers University.<br />

Matt Love is a software development manager with Parasoft Corporation. He has been involved in the development of Jtest, Parasoft’s automated code analysis and unit testing tool, since 2001. Matt has been developing since 1997 and earned his BS in computer engineering from the University of California, San Diego.<br />

Developer testing done early in the software’s lifecycle is known to have a high positive impact on application quality, since this is the phase where finding and fixing bugs is cheapest, easiest, and fastest. Ideally, coding standard checking and unit testing would be done on every piece of code before it was added to a team’s code base. However, this is not always practical. Many organizations don’t give developers the time and resources needed for this testing. Moreover, most organizations don’t develop applications “from scratch” by writing new code for all required functionality. Rather, they typically make incremental enhancements to a large amount of functioning legacy code, or add their own code to extend third-party or Open Source packages. The resulting code bases could include legacy code written by the organization, code obtained via a merger/acquisition, code obtained from an outsourcer, or code that was developed by the Open Source community and downloaded from the Internet.<br />

Consequently, most teams accumulate large and complex code bases with at least some code that hasn’t been subject to coding standard analysis and unit testing. This involves several critical risks:<br />

• When the application is used in a way that development and QA didn’t anticipate (and didn’t test), the code might throw unexpected runtime exceptions that cause the application to become unstable, produce unexpected results, or even crash.<br />

• The code might open the only door that an attacker needs to manipulate the system and/or access privileged information.<br />

• Small coding mistakes could lead to significant performance or functionality problems.<br />

• The code’s functionality might be broken as the application evolves over the course of its lifecycle.<br />

If your team already has a large and complex code base (hundreds of thousands of lines, or millions of lines), it's not too late to benefit from coding standard analysis and unit testing. As long as these practices are automated and applied properly, they can still be used to identify critical problems before release/deployment, as well as to satisfy any contractual obligations for performing unit testing or complying with a designated set of standards.

This article explains a simple strategy that has proven to deliver fast and significant improvements to large and complex .NET code bases:

1. Use coding standard analysis to identify bugs and bug-prone code.
2. Use unit-level regression testing to ensure that the functionality is intact, and use unit-level reliability testing (exercising each function/method as thoroughly as possible and checking for unexpected exceptions) to ensure that all code base changes are reliable and secure.

Both steps can be automated to promote consistent implementation and allow your team to reap the potential benefits without disrupting your development efforts or adding overhead to your already hectic schedule. Moreover, automating these practices lets you concentrate on deeper design/logic issues during code review.

1. Use coding standard analysis to identify bugs and bug-prone code

WHY IS IT IMPORTANT?
Complying with coding standard rules is a proven way to achieve the following key benefits:

September 2006 Volume: 4 Issue: 9 Visit us at www.dontnetdevelopersjournal.com


1. Detect bugs or potential bugs that impact reliability, security, and performance.
2. Enforce organizational design guidelines and specifications (application-specific, use-specific, or platform-specific) and error-prevention guidelines abstracted from specific known bugs.
3. Improve code maintainability by improving class design and code organization.
4. Enhance code readability by applying common formatting, naming, and other stylistic conventions.

Rules that provide benefit number 1 will be referred to in the text as Group 1 rules; rules that provide benefit number 2 will be referred to as Group 2 rules, and so on.

For an example of why it's important to check coding standards even after the code is written, assume that coding standard analysis reveals that code from a frequently used module violates the "Avoid static collections" rule. This rule is important because it identifies code that could cause memory leaks. Static collection objects (ArrayList, etc.) can hold a large number of objects, making them candidates for memory leaks. How can .NET have memory leaks? If you put a short-lived object into a static collection, that object will be referenced by the collection for the life of the program if you forget to remove it from the collection when you're done with it. If you've already removed all other references to the object, it can be difficult to see that it's still referenced.

Any memory leaks that resulted from this coding issue might be uncovered through profiling or load testing. However, this would require considerable time and effort to design and implement the tests and then track the problem back to a specific line of code. Using an automated code analysis tool, code that may cause memory leaks now or in the future can be detected in seconds, without requiring team members to write any tests or manually track down the root cause of reported memory leaks.
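To make the pattern concrete, here is a minimal C# sketch (hypothetical names, not code from the article) of how a static collection pins objects in memory:

```csharp
using System.Collections.Generic;

// Hypothetical cache illustrating the "Avoid static collections" rule.
public static class SessionCache
{
    // This static list lives for the life of the program, so every object
    // added to it stays reachable -- and uncollectable -- until it is
    // explicitly removed.
    private static readonly List<object> items = new List<object>();

    public static void Add(object item)
    {
        items.Add(item);
    }

    public static void Remove(object item)
    {
        // Callers who forget this step leak the object: even after all
        // other references are gone, the static collection still holds it.
        items.Remove(item);
    }
}
```

A call to Add without a matching Remove keeps the object alive for the life of the program, and a reviewer can rarely spot that from the call site, which is why a static analysis rule catches this far more cheaply than profiling does.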

WHAT'S REQUIRED TO DO IT?
a. Decide which coding standard rules to check.
First, review industry-standard .NET coding rules and decide which ones are most applicable to your project and will prevent the most common or serious defects. The rules defined by Microsoft's .NET Framework Design Guidelines and the rules implemented by automated .NET static analysis tools offer a convenient place to start. If needed, you can supplement these rules with the ones listed in books and articles by .NET experts. Note that while many tools focus on rules that examine the IL code, it is also helpful to check rules that examine the source code; this enables you to check for many code issues that cannot be identified by IL-level analysis (for example, formatting issues, empty blocks, etc.).

Also, consider rules that are unique to your organization, team, and project. Do your most experienced developers have an informal list of lessons learned from past experiences? Have you encountered a specific bug that can be abstracted into a rule so that the bug never occurs in your code stream again? Are there explicit rules for formatting or naming conventions that your team is required to comply with?

Because legacy code bases are typically very large, checking a legacy code base requires a special strategy. It's important to recognize that legacy code's compliance with design and development rules won't be consistent, because different parts of the code base probably originated from different sources. Applying rules from Groups 3 and 4 to the entire code base is likely to result in an impractically large number of rule violations that might be more overwhelming than helpful at this stage of the project. We strongly recommend that legacy code checking focus initially on rules from Groups 1 and 2, which will identify significant problems that should be corrected before release/deployment.

b. Automatically check the code base and respond to findings.
Manually checking whether a large and complex code base follows coding standard rules would be incredibly slow, resource-intensive, and error-prone. Even if you had the vast resources required to review the code base manually, some rule violations would inevitably be overlooked, and just one overlooked rule violation could cause serious problems.

A more practical, thorough, and accurate way to check whether a large code base complies with coding standard rules is to use an automated coding standard analysis tool to check the entire code base at a scheduled time each night.

The two complementary strategies below are well suited to the nature and size of legacy code:

• Smoke alarm mode: Run a smaller rule set (including only Group 1 and Group 2 rules) on the entire code base to check whether the code has critical problems. If violations are found, treat them as bugs and fix them immediately.
• Gradual "fix it" mode: Select a code module, run a full rule set on it, then fix/refactor the code as needed. This mode is used to improve general compliance. Be sure to use this mode to check all new and modified code. If possible, check that code is compliant immediately after it's written and before it's committed to source control.

It's also possible that different modules in the legacy code base call for different rules, especially from Group 2. For instance, some code analysis tools let users apply a filter to enable or disable a specific rule or a group of rules for a given set of files, which allows such custom tailoring of the rules to the nature and origin of the code. This can be thought of as "file-based" or "directory-based" application of specific rules.

c. Hold weekly reviews for bug root cause analysis and prevention.
Hold weekly meetings to analyze the root cause of the various bugs (defects that were reported by testers, customers, etc., not violations of development rules) that were fixed during that week. The best time to do root cause analysis on a bug is when it's still fresh in your mind. After root cause analysis, try to identify a set of rules that will prevent the same bugs from recurring, then add these rules to the set of development rules you check.

2. Use unit-level regression testing to ensure that the functionality is intact and use unit-level reliability testing to ensure that new code is reliable and secure

The next step toward reliable and secure code is to do unit-level regression testing on all existing code, and then do unit-level reliability testing (also known as white-box testing or construction testing) on any code that's added or modified. Regression tests capture existing functionality and don't report any errors until a code modification changes that functionality. Reliability tests use unexpected stimulus and report any errors immediately. In .NET development, this involves exercising each method as thoroughly as possible for both categories of tests, and checking for unexpected exceptions in reliability tests.

WHY IS IT IMPORTANT?
A large base of legacy code is a huge investment of time and resources. Its functionality has to be protected from undesired changes if some of that code is modified. After reaching a certain level of acceptance, it's critical not to go backwards by introducing functionality bugs during the maintenance of legacy code.

However, if your testing only checks expected functionality, you can't predict what could happen when untested paths are taken by well-meaning users exercising the application in unanticipated ways, or by attackers trying to gain control of your application or access to privileged data. It's hardly practical to try to identify and verify every possible user path and input, or to analyze every possible exception from legacy code. It's important to identify the possible paths and inputs that could cause unexpected exceptions in new and security-sensitive code because:

• Unexpected exceptions can cause application crashes and other serious runtime problems. If unexpected exceptions surface in the field, they could cause instability, unexpected results, or crashes. Many development teams have had trouble with applications crashing for unknown reasons. Once these teams started identifying and correcting the unexpected exceptions that they previously overlooked, their applications stopped crashing.
• Unexpected exceptions can open the door to security attacks. Many developers don't realize that unexpected exceptions can also create significant security vulnerabilities. For instance, an exception in login code could allow an attacker to completely bypass the login procedure.
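As a hypothetical sketch (not code from the article) of how a swallowed exception in login code can fail open:

```csharp
using System;

public class LoginService
{
    public bool IsAuthorized(string user, string password)
    {
        bool authorized = true;  // dangerous default
        try
        {
            authorized = CheckCredentials(user, password);  // may throw
        }
        catch
        {
            // Exception swallowed: 'authorized' keeps its default of true,
            // so an attacker who can trigger the exception (e.g., with
            // malformed input) bypasses the check entirely.
        }
        return authorized;
    }

    // Stand-in for a real credential check that throws on malformed input.
    private bool CheckCredentials(string user, string password)
    {
        if (user == null)
            throw new ArgumentNullException("user");
        return user == "admin" && password == "secret";
    }
}
```

The fix is to initialize `authorized` to false and treat any exception from the credential check as a failed login, so unexpected paths fail closed rather than open.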

WHAT'S REQUIRED TO DO IT?
a. Design, implement, and execute test cases for the entire code base.
Create a "functional snapshot": a suite of unit tests that exercise the methods and capture their current behavior, which is assumed to be correct. By creating this test suite, you establish a baseline against which you can compare code and identify the changes that are expected as the code base grows and evolves.

Testing legacy code will still produce exceptions that may be either defects or correct behavior under certain conditions. In this respect, the exceptions are similar to other outcome values from tested code that needs assertions for regression testing. One way to capture exception behavior in a regression test is to use the ExpectedExceptionAttribute available in open source NUnit tests (version 2.0 or later). NUnit tests with this attribute will pass only when the correct exception type and message is thrown. Applying this approach to a large code base will produce a "functional snapshot" that includes scenarios that throw exceptions.
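A minimal sketch of this technique, assuming the NUnit 2.x framework is referenced and using a hypothetical BankAccount class under test:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test.
public class BankAccount
{
    private decimal balance;

    public BankAccount(decimal opening) { balance = opening; }

    public void Withdraw(decimal amount)
    {
        if (amount > balance)
            throw new InvalidOperationException("Insufficient funds");
        balance -= amount;
    }
}

[TestFixture]
public class BankAccountRegressionTests
{
    // Captures current behavior: overdrawing throws. The test passes only
    // if exactly this exception type is thrown, so a behavior change in
    // later revisions shows up as a regression failure.
    [Test]
    [ExpectedException(typeof(InvalidOperationException))]
    public void Withdraw_MoreThanBalance_Throws()
    {
        new BankAccount(100m).Withdraw(200m);
    }
}
```

Recorded this way, the exception is part of the baseline rather than a surprise in a later test run.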

Manually developing the required number, scope, and variety of unit test cases to exercise each branch of code is impractical when you test each class as it's completed, and is virtually impossible when you are working with a large existing code base. Achieving the scope of coverage required for an effective test suite mandates that a significant number of paths be executed. For example, a typical 10,000-line program has approximately 100 million possible paths; manually generating input that would exercise all of those paths is infeasible.

When trying to create a baseline of tests for a large code base, a tool that automatically generates test code is essential. Team resources can then be focused on reviewing and addressing the reported test case failures and exceptions.

b. Review and respond to test findings.
The next step is to configure the automated testing tool to execute the complete regression test suite (all of the baseline unit tests) unobtrusively each night.

Each test case failure (a test case that doesn't produce the baseline outcome expected for a set of baseline inputs) indicates a change in the code's behavior. This change may be intentional or unintentional. When code functionality changes intentionally, as a result of a feature request, specification change, etc., test cases related to that behavior are expected to fail because the new expected outcomes will be different from the ones recorded in the baseline. However, very often, other test cases will also fail unexpectedly. If so, this reveals a complex functional problem caused by the code modifications. If no unexpected failures are identified, you know that the modifications didn't break the existing functionality. The appropriate response to a test case failure depends on whether the change was expected. If the new outcome is now the correct outcome, the expected test case outcome is updated, and it becomes part of the baseline. If not, the code is corrected.
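For instance (a hypothetical sketch, not tool output), a baseline NUnit test whose expected outcome is reviewed and updated after an intentional specification change:

```csharp
using NUnit.Framework;

// Hypothetical class under test: tax rate changed from 5% to 6% in v2.0.
public class PriceCalculator
{
    public decimal Total(decimal net)
    {
        return net * 1.06m;
    }
}

[TestFixture]
public class PriceCalculatorRegressionTests
{
    [Test]
    public void Total_IncludesTax()
    {
        // The baseline originally recorded 105.00m (5% tax). After the
        // intentional change the test failed; since the new outcome is
        // correct, the expected value below was updated so that the new
        // behavior becomes part of the baseline.
        Assert.AreEqual(106.00m, new PriceCalculator().Total(100.00m));
    }
}
```

An unintentional change would instead be fixed in the code, leaving the baseline expectation untouched.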

After you handle reported failures, review and address the unexpected exceptions reported for new code (you can configure the tool to "accept" all the exceptions reported for legacy code so that only newly introduced ones will be reported). Each method should be able to handle any valid input without throwing an unexpected runtime exception. If code shouldn't throw an exception for a given input, the code should be corrected. If the exception is expected, or if the test inputs are not expected/permissible, document those requirements in the code and tell the tool that they're expected. This prevents most unit testing tools from reporting these problems again in future test runs. Moreover, when other developers extending or reusing the code see documentation that explains that the exception is expected behavior, they'll be less likely to make mistakes that introduce bugs.
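One way to document an expected exception in the code itself (a sketch with hypothetical names) is an XML doc comment plus an explicit guard clause:

```csharp
using System;

public class ConfigReader
{
    /// <summary>
    /// Parses a configuration value. (Hypothetical example.)
    /// </summary>
    /// <param name="text">The raw value; must not be null.</param>
    /// <exception cref="ArgumentNullException">
    /// Thrown when <paramref name="text"/> is null. This is expected,
    /// documented behavior, not a defect.
    /// </exception>
    public int ParseSetting(string text)
    {
        if (text == null)
            throw new ArgumentNullException("text");
        return int.Parse(text.Trim());
    }
}
```

Developers who later extend or reuse the method can see at a glance which exception paths are intentional.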

Ideally, you'd have the resources required to identify, review, and resolve unexpected exceptions across the entire code base. However, a more feasible strategy is to focus initially on finding and fixing exceptions in new code, then later extend those efforts across the most critical and frequently used modules.

MAKING INCREMENTAL IMPROVEMENTS
The highest priority is establishing a baseline for protecting legacy code and establishing a system to safeguard against new defects entering the code base. After the baseline and safeguards are in place, you can start thinking about incremental improvements to existing code as resources permit. Some options are:

• Review and address exceptions reported for legacy code.
• Extend your unit test suite to make tests more realistic, and verify the critical functionality described in the specification. If any classes received less than 75% coverage, we recommend that you customize the automated test case generation settings (for instance, by modifying automatically generated stubs, adding realistic objects or stubs, or modifying test generation settings) so that the automated test case generation can cover a larger portion of that class during the next test run.
• Phase in more coding standards to identify and prevent additional coding problems. For instance, start implementing rules that improve code maintainability by improving class design and code organization, and rules that enhance code readability by applying common formatting, naming, and other stylistic conventions.
• Identify critical modules of code that should undergo more thorough rule compliance and reliability testing. Utility code that's used from many parts of the application is the most sensitive to performance problems and unexpected inputs because that code is invoked so often in so many ways. Front-end code for user interfaces, resource loading, or other communication is the most vulnerable to security attacks, and it's an entry point into the system for unexpected input.



Tools

Effective Database Change Management
Versioning, developing concurrently, deploying and upgrading
By Christoph Wienands

About the Author
Christoph Wienands is a software engineer at Siemens Corporate Research, NJ. He received his Diplom-Informatiker (FH) at the Furtwangen University of Applied Sciences, Germany. He is an MCSD, and his expertise is .NET technology. His current research activities include software factories, model-driven development, and domain-specific languages. Due to his research activities he is a frequent speaker at conferences such as UML World and SD West. He is co-author of "Practical Software Factories in .NET" (Apress, July 2006).

christoph.wienands@siemens.com

Have you ever been on a project where software development worked beautifully, but developing and maintaining the database always caused unexpected problems and bugs? Do your changes constantly get overwritten by other developers, or is only one person at a time allowed to make changes? Do you find, after two or three major releases, that it's impossible to create upgrade scripts for existing production databases? After experiencing these frustrations and more, I decided to address them.

In this article, I will take a close look at the problems that many software projects face with database development, analyze the causes, and recommend best practices to work around them. The good news is that with the tools available today, you will be able to give a considerable boost to your database development process (even though there is still room for improvement). The bad news is that the techniques described here are highly addictive!

Source Control for Databases
Source control for databases (or version control, or revision control) is the management of multiple revisions of a database. Each revision represents a number of changes that were made by developers, which include changes to the database schema, the contained data, programmability (stored procedures, user-defined functions, triggers), and permissions. Just as with source code, it is of utmost importance to keep track of these changes as the database evolves.

Some common but suboptimal approaches for database version management, often applied because of a lack of better alternatives, include:

• Checking a large binary database backup into source control
• Checking in one huge build script
• Not checking in the database at all

The most prominent problem with all of these approaches is that traceability of changes gets completely lost. Currently I'm not aware of any diff tool (a tool that compares two files for differences) that can compare two compressed, binary backups to determine which tables or stored procedures were changed between two revisions. This lack of traceability will soon lead to tedious and error-prone manual work when it comes to creating scripts for upgrading production databases.

When we compare the above approaches to the way we version regular source code (e.g., for a .NET assembly), we notice a big difference. The source code for a .NET assembly is broken down into many small files, each containing only one class at a time (by using partial classes, one class could even be spread across multiple files). Therefore, when developers create new features or fix bugs, only a small number of files will actually change. Subsequently, source control will pick up only these few changed files and check them in. A revision log combined with a diff tool will allow anybody to pinpoint changes in code within seconds.

We can accomplish the same for database development by increasing the granularity in source control. Rather than checking in one big chunk of data (e.g., a one-file build script), we need to break down the repository representation of the database into many small pieces. Clearly we will need tool support for this. Ideally, such a tool supports round-trip engineering, which means we can build a database right from a local checkout folder of the repository (working copy), make changes to the database using tools such as SQL Management Studio, and script the database out to the local checkout folder again in order to check in only a few modified pieces.
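As a sketch of what one such fine-grained file might contain (a hypothetical table, not from the article), each database object is scripted to its own file, e.g. a file named for the table:

```sql
-- Hypothetical per-object script file for table dbo.Customer.
-- One object per file: a schema change to this table shows up in
-- source control as a small, diff-able edit to this file alone.
CREATE TABLE dbo.Customer
(
    CustomerId INT           NOT NULL IDENTITY(1, 1),
    Name       NVARCHAR(100) NOT NULL,
    CreatedOn  DATETIME      NOT NULL DEFAULT (GETDATE()),
    CONSTRAINT PK_Customer PRIMARY KEY (CustomerId)
);
```

With this layout, the revision log answers "what changed in this table, when, and by whom" the same way it does for a class file.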

Figure 1 shows the file-based representation of a database as it was created by the DbGhost tool from Innovartis. Each entity in a database (such as a table, stored procedure, or login) is scripted to an individual file. Furthermore, in this screenshot you can see the overlay icons from TortoiseSVN (the UI for the source control tool Subversion). At a glance, you can detect which parts of a database were changed (the red exclamation mark) or which parts are new (the blue plus).




Such a fine-grained, file-based approach provides a number of great benefits:

• Results in a very detailed trail of changes in source control
• Establishes a direct correlation between source code changes and database changes when you check in modifications from both parts at the same time
• Enables incremental and fast updates of working copies/local checkout folders (no need to update a binary BLOB of hundreds of megabytes with every database change)
• Allows for all the standard source control operations, such as branching, merging, or tagging

Furthermore, proper database source control is the foundation of any of the advanced techniques that I discuss in the remainder of this article.

As a side note, I'd like to mention that many tools provide their own storage type and format, which is not necessarily text-based like the one shown above; they may use a proprietary format such as snapshot files or a SQL database as the source control repository. While this approach might be better, e.g., with regard to performance, I prefer a purely file-based approach because it tightly integrates with the same source control system used for the related software project.

Concurrent Development
I have seen many software projects rely on databases where database development was a true bottleneck, unpredictably delaying software development. Looking closer into these projects, it turns out that in these cases only one developer would be allowed to work on a database because concurrent development was not feasible. This directly relates to the suboptimal versioning practices that I outlined at the beginning of this article, because none of these projects was able to cope with concurrent changes to a database by multiple developers.

What would concurrent database development look like when using a fine-grained database representation? First of all, there are two major approaches, each with its own pros and cons:

• Shared database: All developers work on the same development database and make their changes there.
• Sandbox approach: Each developer works on a local, isolated copy of the database, created directly from the repository.

In my opinion, the latter approach clearly wins because of the complete isolation of development environments. Once a developer completes a work package, he will script out the database, run tests, and check in his changes.

Figure 1: File-based representation of a database

The interesting part is to see what happens in the case of merge conflicts. First of all, the chances that two developers will change the same database object are already much lower when using a fine-grained representation. Second, a good source control tool will be able to merge most of the changes automatically (e.g., when both developers modified a different column). But just as when working with regular source code, an automatic merge can (and will) fail periodically, producing a merge conflict that needs to be resolved manually. Figure 2 shows how the process to resolve this merge conflict works.
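For illustration (a hypothetical conflict, not from the article), a failed automatic merge in a per-object script might look like this in a Subversion working copy:

```sql
-- Hypothetical per-object script with Subversion conflict markers:
-- both developers edited the same column definition, so the automatic
-- merge failed and the conflict must be resolved by hand.
CREATE TABLE dbo.[Order]
(
    OrderId INT NOT NULL,
<<<<<<< .mine
    Total   DECIMAL(18, 4) NOT NULL,  -- developer A widened the precision
=======
    Total   MONEY NOT NULL,           -- developer B changed the type
>>>>>>> .r142
    CONSTRAINT PK_Order PRIMARY KEY (OrderId)
);
```

Because the conflict is confined to a small, readable text file, resolving it works exactly like resolving a conflict in ordinary source code.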

For the most part, such an approach will take the guesswork out of merging conflicting changes in database development and enable concurrent development by multiple developers, effectively eliminating the typical bottleneck encountered in linear development.

Figure 2: Merge conflict upon check-in

UPGRADING PRODUCTION SYSTEMS

Figure 3: Creating and applying an upgrade script

The last problem I will write about in this article is related to upgrading databases in production systems. After deploying a number of applications of, let's say, version 1.0, your customers likely start busily collecting gigabytes of valuable business data. Chances are, when version 2.0 comes out, the underlying database schema will have gone through quite a number of changes in the following areas:

1. Programmability: Stored procedures, user-defined<br />

functions and triggers<br />

2. Database schema: Tables, columns, <strong>con</strong>straints,<br />

primary and foreign keys<br />

3. Static data: Lookup tables, categorizations, dictionaries,<br />

etc.<br />

Changes in the first category are fairly easy to deal with by simply dropping and re-creating the programmability objects. The real problems arise with changes of types 2 and 3.

Some changes to the database schema are simply additions, such as new columns or tables. However, other changes are the result of complex refactorings, data normalizations, or denormalizations. The key to any database upgrade strategy is to transform your customers' live production data in such a way that no data gets lost, and that the transformed data adheres to all constraints and is coherent with what your application expects.

In general, there are two strategies for performing such an upgrade:

• Create an empty v2.0 database and move all v1.0 data over while performing the necessary transformations.
• Perform an on-the-fly upgrade by executing a sequence of small schema changes and data transformation steps to bring the v1.0 production database to v2.0.
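In the first strategy, the move step typically boils down to a series of INSERT ... SELECT statements that read from the old database and transform the data on the way in. A simplified T-SQL sketch (the database, table, and column names here are illustrative, not from the article):

```sql
-- Strategy 1 (sketch): copy v1.0 data into the empty v2.0 database,
-- applying any transformations as the rows are moved.
INSERT INTO AppV2.dbo.Customer (CustomerId, FullName)
SELECT CustomerId,
       FirstName + ' ' + LastName  -- v2.0 merged two v1.0 columns
FROM AppV1.dbo.Customer;
```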

While the first approach can be somewhat easier, the drawbacks are obviously the need for twice the disk capacity and possibly lengthy upgrades, as gigabytes of data need to be moved from one database to the other. So let's see how we can establish a predictable and reliable process for developing on-the-fly database upgrade scripts.

Again, the key to success is tool support, this time in the form of a database synchronization tool. Such a tool takes a source and a target database, compares all database objects between the two (schema, data, programmability, and security), and then upgrades the target database by creating, modifying, and dropping objects. The result of this synchronization process is two identical databases. During the synchronization, the DB sync tool records a SQL script of all executed commands, which allows the upgrade process to be replayed later.

Figure 3 outlines the principal process of creating an upgrade script:

1. Build database v1.0 from source control (the target database).
2. Build database v2.0 from source control (the source database).
3. Synchronize the two databases and record the synchronization process as an upgrade script.
4. Generalize the automatically created upgrade script by rearranging it and enhancing it with data transformations to prevent data loss.
5. Most important: verify your upgrade script by upgrading a fresh v1.0 database to v2.0. Use the database comparison function of your DB sync tool to test whether the upgraded DB is truly identical to the v2.0 database created straight from the repository.

Steps 1 through 3 can easily be automated through scripting and the like. The intermediate result of Step 3 is a script specific to upgrading your development database from v1.0 to v2.0. However, this script is not yet suited to upgrading production databases. A simple example explains why. Let's say you performed a database refactoring and renamed a table from T1 to T2. A synchronization tool will only see a missing table T2 and an obsolete table T1 in the target database, since it doesn't know the semantics of a rename operation. Therefore, during synchronization it will drop the obsolete table T1, create the missing table T2, and then populate the new table T2 with whatever data your development database v2.0 contained. If you applied the recorded upgrade script to a production database, the result would be data loss, as it drops table T1 during the upgrade.

The process of generalizing this specific upgrade script through enhancing and correcting it will be more or less complicated, depending on how much guidance the synchronization tool provides. An excellent example of such guidance is shown in Figure 4, a screenshot taken from DbMaestro by Extreme. As you can see, the script is annotated with a number of warnings about potential data loss. Continuing our table rename example, such an annotated script would easily allow you to match the CREATE statement for table T2 and the DROP statement for table T1, and manually replace them with an sp_rename statement. For more complicated database changes like (de)normalizations and refactorings, you will actually need to create data transformations, e.g., through UPDATEs and INSERTs, and place them before any lossy schema changes.
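Continuing the T1/T2 rename example, the generalization step amounts to replacing the recorded, lossy statement pair with a rename. A simplified T-SQL sketch (the column definitions are illustrative):

```sql
-- Recorded by the sync tool (lossy: drops T1 and its production data):
DROP TABLE T1;
CREATE TABLE T2 (Id INT PRIMARY KEY, Name NVARCHAR(50));

-- Hand-generalized replacement (preserves the production data):
EXEC sp_rename 'T1', 'T2';
```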

Unfortunately, I’m not aware <strong>of</strong> any tool that currently<br />

recognizes operations, such as renames, and<br />

automatically creates the corresponding schema<br />

upgrade and data transformation operations. However,<br />

there are a number <strong>of</strong> best practices you can<br />

use to facilitate the process <strong>of</strong> creating a generalized<br />

upgrade script:<br />

• During database development, check in <strong>of</strong>ten,<br />

preferably after each work item or refactoring<br />

step (just as when working with source code).<br />

• Document the operations performed on each<br />

database revision, such as 'Normalized Table A<br />

into Table A and B’ or ‘Renamed Table C to D’.<br />

This will give better clues to the person generating<br />

the upgrade script.<br />

• Rather than trying to create one huge upgrade<br />

script that spans dozens <strong>of</strong> revisions, create and<br />

<strong>con</strong>catenate multiple small upgrade scripts, each<br />

spanning only a few or even just one revision.<br />

Summary

In this article I showed several techniques for addressing common problems in database development. A proper integration into source control is the foundation for any advanced technique, such as concurrent development, testing, or creating upgrade scripts.

A prerequisite for each of these techniques is tool support (see the References section below). Speaking for myself, I have yet to find the perfect tool for all these tasks, as each tool has its own strengths and weaknesses. Therefore, it might even make sense to work with two individual tools, depending on the tasks you need to perform.

Lastly, I should mention that I could only give a high-level introduction to the techniques described in this article. Each of them requires a certain amount of practice to use efficiently, such as handling the check-out/check-in process confidently, consistently re-testing the build script before checking in, and getting an eye for manipulating automatically generated upgrade scripts. Nevertheless, you should now be on your way to establishing a reliable database change management process with predictable outcomes, tailored to your own needs.

References
• Ambler, S.W. (2003). Agile Database Techniques. Wiley.
• Appleton, B., Berczuk, S., and Konieczka, S. (2004). "Applying Agile SCM to Databases." Crossroads News, January. www.cmcrossroads.com/articles/agilejan04.pdf
• ApexSQL: http://www.apexsql.com/
• DbGhost: http://www.dbghost.com
• DbMaestro: http://www.dbmaestro.com
• Revision control: http://en.wikipedia.org/wiki/Revision_control
• SQL tools: http://www.redgate.com

Figure 4: Upgrade script with warnings about lossy schema changes



Monkey Business

A Short History of Basic on Mono
Mono gets a new VB compiler and runtime
By Dennis Hayes

About the Author
Dennis Hayes is an independent software consultant in Atlanta, GA, and has been involved with the Mono project for over four years.
dennisdotnet@yahoo.com

The highlight of this release is the new MonoBASIC compiler and runtime. The availability of BASIC on Mono has waxed and waned over the years. During the early days of Mono, BASIC received little or no attention; the biggest reason was that all the effort was going into the C# compiler. In addition, the early Mono adopters were not very interested in VB; in fact, at the time there was much debate in the VB community in general about upgrading to VB.NET, because of the complexity of VB.NET and the lack of backward compatibility with VB6. Also, unlike C#, which was released as an ECMA and ISO standard, VB.NET was, and still is, a proprietary product with no publicly available definition (at least not one with the details needed by a compiler writer).

As time went on and more programmers joined Mono, several people started to work on a VB.NET-compatible compiler based on a fork of the C# compiler. Work also began on the Microsoft.VisualBasic and Microsoft.VisualBasic.CompilerServices namespaces.

BASIC on Mono received a big break in 2004 when Mainsoft donated a complete copy of the VB.NET runtime libraries written in Java (see .NDJ Vol. 2, Iss. 6). A group of Mono developers, including myself, converted the libraries into C#.

As more people began to use Mono as a Web server, BASIC and the BASIC compiler became more important because of the large number of Web pages being written in VB. Because these pages were compiled on the fly, they could not be compiled with the Microsoft compiler and then executed under Linux the way applications could. At this point, Novell put a group of programmers on the MonoBASIC compiler project, and for a while good progress was made. With the release of .NET and ASP.NET 2.0, ahead-of-time compilation of VB code became possible; BASIC for Mono was sent to the back burner, and the coders working on MonoBASIC were set to work on System.Windows.Forms, which is needed for the Mono 1.2 release.

Now, Mainsoft has made another major contribution to Mono by donating a set of VB.NET runtime libraries written in BASIC (by Rafael Mizrahi and Boris Kitzner), along with two test suites: one, written in C#, to test low-level functions, and one, written in BASIC, to test high-level functions.

Combined with the new VB9-compatible MonoBASIC compiler written by Rolf Bjarne as part of the Google Summer of Code, which can now compile itself, Mono is close to having a full VB.NET-compatible stack. Currently the new BASIC runtime is distributed with the standard Mono package, but since the new VB 2005-compatible compiler is just at the point of compiling itself, it is distributed in a separate package until it matures a bit more.

Other Additions to Version 1.1.17

The big news in System.Windows.Forms (SWF) is support for printing; this was one of the last major parts of SWF still not implemented. The key to implementing printing was upgrading to the new version (1.2) of Cairo.

COM is now beginning to be supported with the addition of Runtime Callable Wrappers (RCWs), which allow managed code to call unmanaged components.

The Postgres database classes have been updated to support Postgres Release Candidate 3.

IronPython 1.0 RC2 is now supported, System.IO.Ports includes much more functionality, and the registry code has been updated to support the .NET 2.0 API. There were also a number of significant improvements made throughout Mono. The complete lists can be found at http://www.go-mono.com/archive/1.1.17/ and http://www.go-mono.com/archive/1.1.17.1/.

Odds and Ends

There will be a meeting of Mono developers on October 23rd and 24th in Cambridge, MA; see http://www.go-mono.com/meeting/ for details. It is open to all, so drop in if you can make it.

There is a new way to test drive Mono, along with SUSE Enterprise Edition version 10.0: VMware has a free virtual OS "player" that you install as an application. The player can then run a "disc" containing any operating system created by the "full" version of VMware. Novell has released a "disc" that can be played by the VMware player and that contains the full version of SUSE, including Mono and many applications, both native and Mono-based.

The second Google Summer of Code has been completed. Portable.NET had a student who mostly completed his task of porting Libjit to the Alpha processor, and there have been a number of other improvements in Libjit since the last release.

MonoDevelop, the integrated development environment for .NET on Linux, has released version 0.12. This IDE is really maturing, so I will cover the release in more detail next month; in the meantime, you can read the release notes and see some screenshots at http://www.monodevelop.com/Release_notes_for_MonoDevelop_0.12.

Now that Callisto, the simultaneous release of 10 projects related to the Eclipse IDE, is out, plans are being made for next year's release, tentatively dubbed Europa.



First Look

A First Look at Visual Studio 2005 Code Snippets
Why didn't we get this sooner?
By Tommy Newcomb

About the Author
Tommy Newcomb works for Magenic as an IT consultant in the Chicago area. His main focus is developing Web applications and e-commerce solutions using Microsoft technologies. He lives with his wife, Emily, and baby daughter, Jaqueline, in the Chicago suburbs.
zoomonkey@gmail.com

Every now and then a new development tool comes along that is so simple and elegant it leaves us wondering: why didn't we get this sooner? Introducing IntelliSense Code Snippets, a new feature in Visual Studio (VS) 2005 that makes inserting routine code much faster and easier.

Using snippets is as simple as right-clicking in the IDE and clicking Insert Code Snippet in the pop-up menu. This opens the Visual Studio Code Snippet Picker, similar to IntelliSense, which lets you insert pre-packaged code statements directly into your class. There's nothing extra to install or download, so you can start using the snippets packaged with VS 2005 immediately.

Visual Basic (VB) and C# both take advantage of snippets; however, VB seems to have been the target language for the snippet designers, while C# was more of an afterthought. C# has most of the same functionality, but not quite all of it. In this article I'll use both languages and point out the differences along the way. If you're attached to a certain language, you needn't worry; both have their advantages and disadvantages and behave almost identically.

The benefits that you'll gain from using snippets are twofold. First, they reduce repetitive, time-consuming typing. For example, if you intend to write something common like a For loop, a Try...Catch block, or an If statement, you needn't waste time typing the code for these simple statements. Your snippet will be inserted, and you only have to modify the parts of the statement specific to your code.

The second and perhaps greater value is that you no longer have to remember the exact syntax for a particular statement. If, for example, you needed to create a task-oriented snippet that's more complex than an If statement, such as code that will create a parameterized stored procedure, an inserted snippet will give you a perfect example. This eliminates that Google-search crapshoot that we do when we can't quite remember the exact syntax.

Again, you can get started today by using the packaged snippets in the IDE, but there's much more to snippets through customization, as we'll see. There are dozens of snippets provided in VS 2005, and many more are available for download. This article will show you how to create your own snippets, modify existing ones, and build your own snippet library to improve your productivity, giving you an edge as a highly productive developer.

Inserting Snippets

We're ready to insert our first snippet. In the VS 2005 IDE, right-click where you'd like to place your snippet and select the Insert Snippet... command from the Snippet Inserter menu. In VB only, the same step can be accomplished by typing "?" and then hitting the Tab key. When you do this you'll notice several sub-categories, such as Collections and Arrays, Common Coding Patterns, etc. When you select a sub-category and then a particular snippet, a tool tip displays both a description of the snippet and its shortcut value. If you know your snippet's shortcut value, you can skip all of this clicking and simply type in the shortcut, then hit the Tab key to insert your snippet of code.

For many snippets, the shortcut value is the same as the first word in the statement. So, if you want to throw an exception in your code, you simply type "throw", hit the Tab key, and it appears. If you aren't quite sure what your snippet shortcut is, you can find it by typing a portion of the keyword. So when you type "Try", then "?", then hit the Tab key, all of the exception-handling statements show up in the Snippet Inserter for you to choose from. If you don't like your inserted snippet, a Ctrl-Z will remove it as quickly as it appeared.

Many times you have existing code that needs to be modified. Snippets are intelligent in that they don't always overwrite existing selected code with your insert. If, for example, you have existing code and you want it surrounded with an If statement, you can highlight the code and insert your snippet. Your selected code will remain, and your If and End If tags will surround it. A big time-saver for me is adding Region tags to my code. Using snippets, I select the block of code that I want inside my Region, choose Insert Snippet, and, voilà, it's done. It saves scrolling, cursor placement, mouse handling, and typing.

If your snippet code relies on an Imports statement or a project reference, VB will add the necessary Imports and project references as well. It's here that VB has an advantage over C#.

We can see that code snippets have more intelligence than simply cutting and pasting code. In this next VB example, notice how we can insert a Generic List snippet by choosing Insert Snippet | Collections and Arrays | Create a list with items of a single Type. The following code is inserted:

    ' Backing storage -- a generic list
    Dim names As New List(Of String)()

    ' Add an item to the Collection
    names.Add("John")

The areas specific to your code that require a change are highlighted in green. Hitting the Tab key quickly moves your cursor over these areas; hit Shift-Tab to reverse. The highlighted linked items, such as the names variable in our code, all change to the same value when one occurrence is changed. It's smart enough to know that we're naming all the variables in our statement the same. When you're finished making changes to a field, hit the Escape key to commit and move on.

Now, suppose that you want to loop through the previously inserted generic list using a For Each...Next statement. Click Insert Snippet | Common Coding Patterns | Conditionals and Loops | For Each...Next Statement. When your cursor is on the first variable name, in this case Item, hitting Ctrl+Space pops up a list of all your declared variables (of the same type as the cursor replacement). This lets me choose from my previously declared variables in this selectable list. Notice that I can select myNames and names from the list. Pretty cool, huh?

Note that in VB, once your snippet has been inserted and modified, it will remain highlighted until you close the file and all its associated files (like a Form's design view).
Form’s design view).<br />

Snippets: Inside the Magic

Let's take a look inside the Generic List snippet that comes with VS 2005. To begin with, a snippet is really just an XML file, and the snippetformat.xsd schema defines the rules for valid snippets.

In Listing 1, we use the CreateAStronglyTypedCollection.snippet file located at C:\Program Files\Microsoft Visual Studio 8\VB\Snippets\1033\collections and arrays\. These files are easy to modify and create, which we'll get to, but first look at our collection-snippet XML. The CodeSnippet element defines our entire snippet. The Format attribute on the CodeSnippet element is simply the version it's been given. Following that we see the Header, Title, and Author tags, which are intuitive enough, but notice the Shortcut tag. This tag defines the shortcut we used earlier when we typed the value and hit the Tab key to insert our snippet.

The Description tag is what you see displayed in the IntelliSense browser description and does just what it says, but you should also note that there are three more tags that can be used in the Header tag: Keyword, HelpURL, and SnippetTypes. The Keyword element(s) provide a standardized way of searching through snippets, the HelpURL is simply a help-file reference link, and the SnippetTypes are a bit more complex, and more useful.

SnippetTypes tell the snippet how to behave on insert. There are two types you can use with custom code snippets: SurroundsWith and Expansion. Expansion simply means to insert the code at the cursor. The SurroundsWith type will actually insert the snippet around the code that's selected. In the diagram, I have a couple of lines that run based on a condition. When I choose Surround With... from the menu and then the If option, the If code literally surrounds my selected text.
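Putting the Header elements together, a minimal snippet header might look like the following sketch (the title, author, shortcut, and description values are invented for illustration):

```xml
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>Example Statement</Title>
      <Author>Your Name</Author>
      <Shortcut>exstmt</Shortcut>
      <Description>Inserts an example statement.</Description>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
        <SnippetType>SurroundsWith</SnippetType>
      </SnippetTypes>
    </Header>
    <!-- The Snippet element (Declarations, Code) goes here -->
  </CodeSnippet>
</CodeSnippets>
```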

Next, let's take a closer look at the Snippet element. As I mentioned before, only VB supports the Reference tag, which provides information about the references your snippet requires. This is specified under the Snippet element:

    <References>
      <Reference>
        <Assembly>System.Windows.Forms.dll</Assembly>
      </Reference>
    </References>

The Assembly element is, of course, the name of the reference, and the Url tag is a link to a more detailed description. Imports is another potential child element of the Snippet element and is similar to the References element. VS 2005 adds the necessary Imports to your code when the snippet is inserted:

    <Imports>
      <Import>
        <Namespace>Microsoft.VisualBasic</Namespace>
      </Import>
    </Imports>

To describe the Declarations element, let's look at the XML taken from VB's Select...Case snippet in Listing 2. Notice that the Declarations element contains the Literal and Object elements. These are what make automatic code replacement possible. The Literal element identifies a replacement placeholder for code that's contained entirely within the snippet; examples of literals might be strings, integers, or variable names. The Object element identifies a placeholder, required by the snippet, that is likely to be defined outside the snippet itself. ASP.NET controls, Windows Forms controls, or an instance of an object or type make good candidates for this kind of element declaration.
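Since Listing 2 isn't reproduced here, the following sketch shows the general shape of a Declarations block with a single Literal (the ID, tool tip, and default values are invented for illustration):

```xml
<Declarations>
  <Literal>
    <ID>expression</ID>
    <ToolTip>Expression to evaluate in the Select Case</ToolTip>
    <Default>variable</Default>
  </Literal>
</Declarations>
```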

There's also an optional element called Function that's supported only in Visual C# and Visual J#, the details of which are beyond the scope of this article. However, I'll mention that there are only four built-in functions (you can't create your own); they essentially specify the function that executes when the literal or object receives focus in your code.

And finally, we come to the code. The Code element defines the actual code and replacement items that will be inserted. Notice the Select…Case statement in Listing 2: the Code element contains the text that's inserted within the CDATA brackets, and the $ character acts as a delimiter for the replacement literals and objects. The attributes of the Code element are:

• Kind – Specifies the type of code the snippet contains and where the snippet can be inserted (see http://msdn2.microsoft.com/en-us/library/ms171421.aspx for more information).

• Delimiter – The value that delimits replacement variables. The default is $.

• Language – Must be one of: VB, CSharp, VJSharp, or XML.
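Putting those attributes together, a minimal Code element might read as follows ($message$ stands for a hypothetical literal declared in the same snippet's Declarations section):

```xml
<!-- Kind tells the IDE this text is valid only inside a method body;
     Delimiter="$" is the default and is shown here only for clarity -->
<Code Language="CSharp" Kind="method body" Delimiter="$">
  <![CDATA[MessageBox.Show($message$);]]>
</Code>
```

Because the text lives inside CDATA brackets, characters such as < and & can appear in the code without being escaped.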

Building a Snippet From Scratch

Let's look at the C# snippet in Listing 3, which I created using Notepad. Since C#, unlike VB, doesn't have an IsNumeric function, I created one using this snippet. It will be the beginning of a snippet library that I'll keep for future use. The file is named IsNumeric.snippet.

The two Literal values in Listing 3 demonstrate the use of replacements after the code has been inserted. The replacements in the comments section are highlighted after the insert is executed; the developer then has to change the class name and developer values.

First Look
Visit us at www.dontnetdevelopersjournal.com September 2006 Volume: 4 Issue: 9
47



LISTING 1

Making Your Snippet Available for Use

Now that we've created an XML snippet file, we need to make it available to the snippet picker using the Code Snippet Manager tool in VS 2005. In the VS IDE, select Tools | Code Snippet Manager. Here you can browse the snippet folders and find information about the snippets you've installed. When you select a snippet from your list, you'll see its description, shortcut, snippet type, and author information. You'll only see snippets here because the manager filters to show only snippet files.

The Add button should really be named Add Folder, because that's what it actually does: it lets you select and add a folder containing snippets, not individual snippets themselves. It should be noted that when you have a folder selected in the manager and you add a new folder, the new folder won't be placed underneath your selection as you might expect; instead, it will be added directly to the root.

The Import button is what you use to add individual snippets. If you choose Import and select your snippet, a new dialog opens and lets you place the snippet in your directory structure. I used the Import button to add our new IsNumeric.snippet; we can see it under the My Code Snippets directory in the Code Snippet Manager.

The Remove button removes a folder from the manager; it doesn't, however, remove it from your system. Also note that the snippets you see in the manager are the ones you'll be able to insert using the IntelliSense snippet picker.

And finally, the Search Online button takes you to an MSDN search text box where you can search for snippets by keyword. However, I've had more success using Google to find snippets instead.

Other Tools

If you're interested in creating your own snippets and you want a tool with more power than Notepad, check out the Visual Basic Snippet Editor located at http://msdn.microsoft.com/vbasic/downloads/tools/snippeteditor/. This is a shared-source project that's currently in RC mode. The tool is essentially a UI application that lets you create snippets and add and modify parameters like title, author, and description, but in a more user-friendly, GUI-driven way. It also lets you preview your snippet as it will appear in the IDE and validates it for any compilation errors. A similar tool doesn't exist yet for C#; now, there's a project that's screaming to be developed.

Security Considerations

Security is an important issue since snippets contain both source code and hyperlinks. You needn't worry about the snippets that come packaged with VS – unless they're modified, of course. Downloaded snippets, however, have potential problems. For one, a .snippet suffix doesn't guarantee that the file is plain-text XML; these files should be scanned with virus protection just like any other downloaded file.

The snippet code itself could be malicious. Fortunately, it's source code and you can read it, so if you're unsure of its origin, look it over carefully. Another problem that may arise is malicious code lurking in collapsed Region tags: you insert the snippet, run it, and your system is damaged by code you didn't see.

As we discussed above, snippets may contain references that are automatically added to your project when the snippet is inserted. Such a reference may point to a file that was downloaded alongside your snippet, and this could cause damage as well.

Finally, the HelpUrl link in the snippet could execute a malicious script on a separate site, or send the user to an offensive site. It's wise to read a snippet's XML before installing it.

Building a Snippet Library

I hope that I've given you direction in building your own personal library, or perhaps in beginning a library for your company or organization. Having a personalized snippet library helps decrease development time and maintain coding standards. My aim is to get you started; you can decide how a snippet library can help you and your organization.






LISTING 2

<Declarations>
  <Object>
    <ID>Variable</ID>
    <Type>Object</Type>
    <ToolTip>Replace with an expression.</ToolTip>
    <Default>VariableName</Default>
  </Object>
  <Literal>
    <ID>Case1</ID>
    <ToolTip>Replace with a valid value of the expression.</ToolTip>
    <Default>1</Default>
  </Literal>
  <Literal>
    <ID>Case2</ID>
    <ToolTip>Replace with another valid value of the expression.</ToolTip>
    <Default>2</Default>
  </Literal>
</Declarations>
<Code Language="VB"><![CDATA[Select Case $Variable$
    Case $Case1$

    Case $Case2$

    Case Else

End Select]]></Code>

LISTING 3 ISNUMERIC.SNIPPET

<?xml version="1.0" encoding="utf-8"?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>IsNumeric</Title>
      <Author>Tommy Newcomb</Author>
      <Description>Inserts an IsNumeric function.</Description>
      <Shortcut>IsNumeric</Shortcut>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>classname</ID>
          <ToolTip>Insert an IsNumeric function into your code.</ToolTip>
          <Default>ClassName</Default>
        </Literal>
        <Literal>
          <ID>developer</ID>
          <ToolTip>Replace with the developer's name.</ToolTip>
          <Default>None</Default>
        </Literal>
      </Declarations>
      <Code Language="CSharp"><![CDATA[
/// File: $classname$.cs
/// Developer: $developer$
///
/// Description: IsNumeric takes an object and returns true if it's "numeric",
/// false if it's not.
///
/// <param name="anyObject"></param>
/// <returns>boolean</returns>
internal static bool IsNumeric(object anyObject)
{
    // IsNumeric tests to see if a passed-in object is numeric or not.
    double dOutValue;
    return Double.TryParse(Convert.ToString(anyObject),
        System.Globalization.NumberStyles.Any,
        System.Globalization.NumberFormatInfo.InvariantInfo,
        out dOutValue);
}]]></Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
