Creative Commons and PowerPoint Slides

I am a fan of Creative Commons, as I have mentioned before.  I use other people’s works, like their photos and their music, in things that I do.  I like to consider myself karmically even with Creative Commons, since I also contribute to it with my photos, my blog posts and my podcast.  I like to give as much as I get.

Using Creative Commons work is easy, because the creator has already told you how you can use their work by attaching a Creative Commons license to it.  You don’t even have to ask permission, as long as you follow the terms of the license.  I personally will sometimes ask, and I almost always let the creator know that I used the work, with the caveat that if they are not cool with it, I will take it down.  I have never had someone ask me to take it down, and most people are thrilled to hear that someone is using their work (it makes my day when I see someone use something I created).

The most open form of Creative Commons is the Attribution license, which says you can do pretty much anything with the work as long as you give the original author credit.  I prefer to use this license when I consume other people’s work, because it is the least ambiguous.  I get leery about using the non-commercial ones on this site, because even though I make no money off of it, it is somewhat related to my job (I blog only on my own time, but I do blog about work).  The only thing that is ambiguous about the attribution clause is how you actually give attribution.

Some works are easy

If you are using a photo on a web page, it is very easy to link back to the author’s original work, and you can use the “tooltip” and text to say who the work belongs to.  I take it a step further by actually saying the photo is “used under Creative Commons” with a link to the specific license.  I consider this a form of advertising for the Creative Commons licenses.

Some works are not so easy

It gets a little tougher to do attribution with other forms of work.  Take the theme music to the Thirsty Developer podcast.  Both the intro and outro music are original works by Pete Prodoehl.  It would be very wonky if, as the music was fading out, we did a voice-over saying “The preceding music was created by Raster and is used under Creative Commons”.  I have tried a couple of times to do a voice-over at the end of the show saying something to that effect, and I have never been happy with it.  So instead of putting it in the show, we created a web page called “show credits” with the attribution to Pete and to Erik Klimczak, who created our logo.  The link to “Show Credits” is visible on every page of the web site.

What about PowerPoint Slides?

Dave Bost asked me how I give attribution to the creators of the photos that I use in PowerPoint slides.  I love the style of using a photo rather than a bunch of bullet points.  It can be very powerful to have the photo help tell the story, maybe with a title added to the slide.  I have a couple of decks that are nothing but a collection of photos.  Because the audience is seeing the slide and not necessarily interacting with it, it is hard to use a link.  You also don’t want a lot of text to get in the way of the photo itself.  So here is the methodology I use:

  • In the “notes” of the slide put the full URL to the original work or the author’s profile or home page (as appropriate).  This way, anyone you send the deck to can get the full information.  You do always share your deck, right?
  • Put a visible, but tastefully sized, Creative Commons logo on the slide with a link to the license.  The Creative Commons site has all kinds of logos and links.
  • Next to the logo put “By <creator>”, where creator is the name or handle of the user.

Here is an example of one that I use when talking about REST and SOAP.  The photo is “Whoa ! Betamax Tapes !”, originally taken by shinnygogo:

[Example slide: “Serialization”]

I hope that most people consider this appropriate attribution.  Hope this helps.

Book Review: DHTML Utopia

Last month I did a review of the book HTML Utopia: Designing without Tables using CSS and how it taught you to use the advanced features of Cascading Style Sheets (CSS) instead of “old school” table-based layouts.  CSS is one of the mandatory skills that anyone doing web development needs to have.  Another is a good understanding of JavaScript and the HTML DOM (Document Object Model).

Quick Review

DHTML Utopia: Modern Web Design Using JavaScript & DOM by Stuart Langridge is a great book for learning how to write client-side browser code that takes advantage of the richness of the browser DOM.  The book is well written and includes thorough, clear and precise examples.  In today’s environment of richer and richer client applications, this can be a great tool for learning the ins and outs of this style of client-side development in a robust, supportable fashion.

Why not just use a framework?

Just this last week Scott Guthrie announced on his blog that Microsoft would be including jQuery inside of Visual Studio (starting within the next few weeks as a download).  jQuery is just one of many great JavaScript frameworks that have abstracted out much of the complexity of dealing with JavaScript and the HTML DOM.  jQuery (like the other frameworks) creates an abstraction layer, which means you do not have to deal with the differences between browsers or between versions of a browser.
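
To make that concrete, here is a rough sketch of my own (the element id and the alert are hypothetical, not taken from the book or the jQuery documentation) showing the kind of browser difference such a framework hides when you wire up a simple click handler:

```javascript
// "Raw" DOM approach, allowing for older Internet Explorer versions that
// do not support addEventListener:
function addClickHandler(id, handler) {
  var el = document.getElementById(id);
  if (!el) {
    return;
  }
  if (el.addEventListener) {
    // Standards-compliant browsers
    el.addEventListener("click", handler, false);
  } else if (el.attachEvent) {
    // Internet Explorer 6-8, which use the proprietary attachEvent
    el.attachEvent("onclick", handler);
  }
}

addClickHandler("saveButton", function () {
  alert("Saved!");
});

// The same thing with jQuery, which hides the branching above:
//   $("#saveButton").click(function () { alert("Saved!"); });
```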

With so many great frameworks out there, why would you need to learn the “raw” or “low level” coding that is discussed in DHTML Utopia?  Technically you would not need to, but if you are like me you have a natural curiosity about what is going on under the covers, even if you use a framework like jQuery.  That understanding is what this book gives you.  Also, no framework will ever cover every use case, so it is good to know the details in case you need to drop down and “roll your own” solution.

A little dated

The book was published in 2005, which means that much of the material is probably 4 years old (due to publishing lead times).  As a result, the specific browser versions are at least one major version out of date (for example, all of the Internet Explorer discussion is about version 6, not the current version).  If you disregard the version-specific discussions, the book does a good job of standing the test of time for two reasons: the specific issues that he discusses are still prevalent on the Internet today, and more importantly, he talks about some great techniques for not coding to specific browser versions anyway.

Architecture by Baseball: The 2 Out Rally

This is the eighth in a series of articles about how we can learn about software architecture by studying and comparing it to the sport of Baseball.  This series was inspired by the book Management by Baseball.

One of the most exciting things at a baseball game can be the 2 out rally.  A 2 out rally is simply when a team has nobody on base and 2 outs, and they put together a series of hits (or walks) that usually leads to one or more runs being scored.  All rallies are exciting, but a 2 out rally can be particularly exciting, because the team is down to its last out in the inning.  One bad swing of the bat can bring the rally and the inning to a close.

Probably the most exciting of the two out rallies comes when not only is the inning on the line, but the game is as well.  This comes when the home team is behind (or tied) in the bottom of the 9th inning.  It can get to the point where one swing of the bat can win or lose the game.  There is a real excitement as the hometown crowd cheers their heads off for the rally to continue.

I remember Gene Mauch doing things to me at Philadelphia. I’d be sitting there and he’d say, ‘Grab a bat and stop this rally.’
Bob Uecker

Exciting and Nerve Wracking

If you are a baseball manager you like the 2 out rally, but you would much prefer for the rally to start with no outs.  Probably most players feel this way too.  2 out rallies are very exciting, but they come at a high price: one mistake and the rally is over.

2 out rally in Software Projects

I have never been on a software project where the work was distributed evenly across the time allotted for the project.  If you have 6 weeks to complete a project, you would assume the best way to accomplish it would be to do roughly 1/6th of the work in the first week, 1/6th in the second week and so on.  I could point to some literature on this, but I think I will keep it anecdotal, because everyone reading this article is either nodding along or chuckling at how true this statement is.

Development teams just don’t “pour on the gas” until the deadline is fast approaching (or, as I like to call it, until there are 2 outs).  I have been on projects where we did about 25% of the work in the last 10% of the project timeline.  Through lots of heroics and many long hours in the last part of the project, we usually have gotten something out by the deadline.  When we haven’t, it is usually because one thing tripped us up (like an infrastructure issue or a bug that is hard to track down).  In baseball terms, this is the one bad swing of the bat that killed the rally.

Architecture has an even harder time with 2 outs

Thus far I have been talking about the habits of the project team as a whole, and not necessarily about the role of the software architect.  There is a special burden on the software architect to get moving on a project before there are “2 outs”.  A solid architecture needs to be in place early, and all outstanding issues need to be resolved before the pressure of the project deadlines is upon you.

There is another thing to note as part of this discussion.  Baseball is a sport that is not judged by time.  As long as the team keeps hitting, the game can literally go on forever.  How long the game took is merely an interesting statistic that gets put in the record books.  Software projects are usually not so lucky – there is almost always a deadline involved.

Since baseball time is measured only in outs, all you have to do is succeed utterly; keep hitting, keep the rally alive, and you have defeated time. You remain forever young.
Roger Angell

Why people like open source

A couple of weeks ago at the Codeapalooza event in Chicago I spent some time in and out of the Open Spaces area.  I think that it is really cool that open spaces has even made its way into traditional track-based events.  Next time you are at an event that has an open spaces area, give it a try; it is a nice break from the traditional sessions, and you meet some very cool people there.  Not that there are not cool people in the sessions, but those are very “eyes forward” and you may be sitting next to a really cool person and not know it…

One of the topics that kept coming up was work on open source projects.  I think it was Aaron Erickson who mentioned that if money were no object, he would still work in technology, but would spend his time contributing to open source projects.  I got to wondering if anyone has cataloged the reasons why people like open source.  I searched and searched, but I did not really see the type of list that I was looking for (although Bob Sutor’s list is close), so I thought I would take a crack at making one myself.  This list is by no means complete (I am sure that I have missed some reasons).

Also, I realize that this subject can be very touchy for some people, and I know there have to be some people reading this post who are rolling their eyes at the fact that I am a Microsoft employee writing about open source.  There is no hidden agenda that I am trying to impart in this message; I am just collecting the thoughts and observations that I have made about open source over the years.  I have used open source in the past and continue to do so right now – the blog engine that is running this site is dasBlog, which is an open source project.

If you think I have things wrong, I would like to hear from you.  Either post a comment on this article or contact me.


People like the hard dollar cost

I know there is great debate about whether “free software” should refer to certain freedoms rather than the actual cost of the software.  I will set that aside for now and assert that a lot of people like the fact that many open source projects do not cost them hard dollars up front.  It is especially appealing to people who are bootstrapping a start-up company and have to watch every dollar.  In addition to the startups, I know that many corporate development teams also look at open source developer tools for the same reason.  If you have 300 developers working in an organization, arming each of them with a $500 piece of software would cost $150,000 – a figure that could push it into a capital purchase depending on the accounting rules that you follow.

As an editorial note: if you like an open source project, consider donating to it.  Donating can be through a cash donation, or it could be a contribution of something even more valuable: your time.  You don’t have to write code on the project to contribute; you can test the application, answer forum questions, write or proofread documentation (or do any of 101 other things that need to be done on a software project).

People like the comfort of having the source code available

What is the first question that people ask when you give them a class library to use in their code?  “Can I have the source code?” is the question that I hear most often (followed shortly by “Is the source code in VB or C#?”).  People like the utility of using code that has already been written, but there is a natural curiosity to see how it is implemented, and a natural distrust if they cannot see how it is implemented.  They want to read the code and learn for themselves what you did to implement the solution.  There is also a natural fear that if something goes wrong and they don’t have the source code, they will not be able to fix the problem.

When Scott Guthrie announced that Microsoft was releasing the source code to the .NET Framework libraries, this is the open source desire that Microsoft was trying to address.  The announcement did not change the cost of the framework (which is no cost – as long as you license the operating system), nor did it give users the right to modify the framework or to redistribute it in any way.  It did give developers access to the source code to debug problems that they might encounter.  It was a big step forward for developer productivity.

People want to extend the software

Generally, with open source licenses you have the right to extend the software (make changes, add functionality, etc.).  Some of the licenses require you to release your modified version of the source code if you distribute it, and some do not.  I would say it is not the great majority of people, but there are a number of people who are interested in “standing on the shoulders of giants” by building on the work already done by open source projects.  Examples of this are everywhere:

  • Own a TiVo?  It runs a customized version of Linux (note: they have run into legal problems in the past with the distribution of their modifications – there is even a word for it: Tivoization).
  • Own a hardware router?  It is probably running a version of Linux under the covers.
  • Run Firefox?  It uses the Mozilla web browser code that was originally the Netscape code base released into open source.

These examples are not just people who have used open source (there are numerous examples of that).  These are people that used an open source project as a platform to build something else upon.

People like the open source licenses compared to traditional licenses

There are lots of open source licenses; the OSI keeps a list of the licenses that meet certain criteria:  http://www.opensource.org/licenses/alphabetical.  A couple of the names on the list might surprise you: Microsoft, IBM, Intel and Apple all have licenses that are open source according to the OSI criteria.  There is great variety to the licenses: some are short and some are quite long, some are easy to read and some have lots of legalese.  Because of that variety, it is important that you read and understand a license before you adopt it (or before you incorporate code released under it into your own project).

I must admit that passion for the license is the one thing that I don’t understand.  Several months ago I got into a heated discussion with someone about how the Apache license was not “open source” enough, because it did not contain some of the language that the GPL did.  I was in the interesting position of defending the Apache license.  We agreed to quit talking about it.

People like the sense of community

What brought me to write this article was the comments that I heard in the Open Spaces area, and it took me a while to “get” that when people were talking about contributing to open source projects, it transcended the items that I have listed above.  Traditional commercial software companies (like Microsoft) try very hard to listen to their customers and deliver improvements that people are asking for in the product.  This happens way more than you think, and it took me working for “the empire” for two years to fully realize how much we do listen.  They also take direct feedback about what is broken in the product, especially during the beta cycles (have you ever checked out Connect?).  But open source allows people to contribute directly to the effort.  Don’t like that a bug has been around for 2 versions?  Then fix it.  Want a new feature?  Implement it.

This direct connection with the effort creates a sense of community (granted, not all communities are harmonious) that you don’t get from just using commercial software.  Building and being a part of something is special.  I think that is why you see so many developer and designer communities, but you don’t see a lot of user groups around applications (I love going to Web414 and the .NET User Group, but you won’t see me at an Outlook user group).

Finding Inspiration

This is the first in a series of articles on a programming project that I am working on.  I thought it would be interesting to share some of the aspects of the project as I go along.  Some of the articles will be technical in nature and some of them will not be.

A career of projects handed to me

For the great majority of my career, people have told me which projects I am going to work on.  It started in college, when you were given the input files and told exactly what the program should do, and it was your job to craft the program to get the desired results.  I feel no ill will toward the professors for doing this; as someone who was a professor’s aide and had to grade the programming projects, I can tell you it is a lot easier when everyone is working on the exact same thing.  You can quickly see if they got the program to compile and run and got the correct output, and spend the majority of your time looking at the fit and finish of the program (“Has Ricky ever heard of comments?”).  The nice thing is that college prepared me for the typical Information Technology career, where you are handed your assignments.


I spent about half of my career in consulting and about half of it in industry (with a year off to teach).  For the most part there is little difference between the two sides of the profession: there are business problems to solve, and they bring in resources to use technology to solve them.  A lot of the projects that you get assigned to are existing applications that need to be modified; we call those legacy applications.  Occasionally you are assigned a brand new project; these are highly desirable.  Even better is when you get to be on a project where they have not made any decisions about the technology or the business process to be solved; this is pretty much the pinnacle of projects.  I have been lucky enough to work on two of these in my career.


The only thing better than the project where only the business problem to solve has been decided is when you get to pick the problem to solve.  To get this you have to be really lucky, create a startup or work on a side project.  I have picked the side project route.

Lacking Inspiration

I am not a very creative person; I alluded to this in my recent article on crayons and creativity.  It is tough work for me to come up with ideas that are true stream of consciousness.  My innate lack of creativity and years of projects being handed to me have really stunted my inspiration.  I really envy people who come up with “an idea a minute” (my buddy Josh Holmes is really good at that – he amazes me with how many ideas he comes up with).


One of the most inspired people I know is my wife Jodie.  It is not software related, but she will take something that she sees and turn it into the most incredible window displays at her work.  She will buy an antique camera at a garage sale for $1 and turn it into a wonderful story.  Me, I go right for the screwdriver and try to get the camera working.  When I asked her how she finds the inspiration, she told me:


There is art everywhere; you just have to open your eyes
-Jodie Clarkin

Patterns for Inspiration (about programming projects)

I have been thinking a lot about different projects that I can tackle and I have noticed that the general ideas fall into one of a few patterns.  By no means is this a complete list, but I think most inspiration can fall into one of these patterns:


I can do that better – Probably one of the easiest ways to become inspired is to see something implemented and to recreate the idea, but to do it in a “better” way.  Better is a very subjective term, because one person’s better is another person’s awful.  Sometimes the difference between the applications is merely cosmetic.


I can use that idea, but in a different way – This is where you see an application of technology and think about other ways you can use the technology (or idea).  A great example of this is seeing data (such as hotel locations) displayed on a map in a web application and thinking of other map-based data that you could overlay on a map in a similar fashion.


How would I do that in <another technology> – This is when you see something implemented, but wonder how you would do it in a different technology.  Don’t confuse this with the “I can do it better” pattern; this is implementing the same idea in a radically different way using a different platform.


Seeing a problem and fixing it – This is the first one on the list that does not involve a reference implementation, so it is a lot harder to pull off than the previous ones.  This is taking a pure business problem, dreaming up the solution and implementing it.


Radically changing how things are done – The previous pattern was about changing and improving the way things are done (sometimes by leaps and bounds).  This last pattern is about destroying and totally recreating something.


Stay tuned for the idea I am going to work on (I have narrowed it down to a couple of choices).

Book Review: Building Scalable Web Sites

From May to July of this year I did a talk on Building Scalable and Usable Web Applications in Indianapolis, Downers Grove, Milwaukee, Chicago and Appleton for our ArcReady series, which we run in about 18 cities in the central United States.  One of the items I mentioned as a good reference for learning the ins and outs of web site scalability was the book Building Scalable Web Sites by Cal Henderson, the chief architect of Flickr.

Quick Review

This is a great book for someone who wants to understand the issues with creating a truly Internet-scale application.  The title of this book is a little misleading, because it is about much more than just scaling out your web site.  With chapters on internationalization/localization and other important topics, it really should be called something like “Handbook for creating an Internet application”.  If you are a Flickr fan, it is also a very interesting peek into how some of the features of the site are implemented.  This is one of the few technology books that I have read more than once; it is that valuable a resource.  It is also a great book to keep on the shelf and revisit for specific topics as you work on creating your next great Internet web site.

Street Credibility

I suppose that anyone could write a book on building scalable web applications, but there are a select few sites on the Internet that have achieved true Internet scale.  Internet scale is a term that gets thrown around a lot, but to cut through the clutter I would just say that if you are in the top 100 sites by traffic, you are pretty much there.  The fact that Flickr is a photo sharing site adds a lot to his discussions on scaling web applications: literally hundreds of people are uploading multi-megabyte photos to the site every minute, 24 hours a day, which is a real testament to the scalability of the site.

Standing the test of time

One of the hallmarks of a good technology book is that it stands the test of time (for at least a few years).  You can go to any used book store and find lots of copies of Visual Basic .NET 1.1 books that are less than 5 years old collecting dust, because they were too wired into the specific features of the technology from that slice in time.  They may have been great books at the time, but their shelf life (pun intended) was only as long as the technology was new and hot; once Visual Basic .NET 2.0 came out, the 1.1 books were yesterday’s news.  The Henderson book does a good job of focusing on the architecture and fundamental development issues around large scale web sites, as opposed to focusing on specific features of any platform, language, tool or technology.  A good example of this is the fact that Flickr is not written on Microsoft technologies (most of it is PHP), but I got a lot out of the book even though I primarily work with the Microsoft web stack.

A word of caution

If you are currently experiencing a scalability problem with your web application, this book will not necessarily solve the problem for you.  You will not turn to page 10 and see the list of common scalability issues, see that you are experiencing number 8 on the list and then turn to page 101 for the answer to that problem.  This book does make you think about the root causes of the scalability issues in your application, and more importantly it is a great guide to follow as you start to add new features to your application.

Architecture by Baseball: Stats


This is the seventh in a series of articles about how we can learn about software architecture by studying and comparing it to the sport of Baseball.  This series was inspired by the book Management by Baseball.



“I don’t think baseball could survive without all the statistical appurtenances involved in calculating pitching, hitting and fielding percentages. Some people could do without the games as long as they got the box scores.” – John M. Culkin


Did you know that for the current season, right-handed batters who face CC Sabathia (the pitcher in the photo for this article) are batting .218 against him (which is a good opponents’ batting average for a pitcher), but that against left-handed batters his opponents’ batting average is a fantastic .136?


Did you know that the 1888 Washington Senators had the lowest team batting average when they batted a mere .207?


Did you know that Cal Ripken, Jr., who is best known as baseball’s “iron man” for playing in 2,632 consecutive games, is also the all-time leader for grounding into double plays, with 350 of them over his career?


All professional sports have a battery of statistics that are tracked as part of the game and the subsequent reporting of those games, but most people would agree that baseball statistics are far more numerous and are actually part of the fabric of the game.  One of the most fascinating aspects is how we use statistics to compare players from different eras.  Because the game has evolved slowly over the years and has not been radically redefined along the way, we are able to make meaningful comparisons.  As a result, we can meaningfully compare the consecutive-game hitting streak that Joe DiMaggio had in the 1940s with the one that Willie Keeler had in the 1890s and the one that Pete Rose had in the 1970s.


More than just being a history lesson and interesting footnotes to articles about games, statistics are actually used in the decision making process before and during a game.  A manager (as an example) may look at the matchup between the batters in his lineup and how they have fared against the probable pitchers for that day’s game.  He may choose to give a batter the day off as a result.  The general manager will also use statistics to judge whether to draft a player or sign a particular player as a free agent.


As much as statistics are important in baseball, we should remember the role of statistics is to enhance the experience, not to become the experience.  Don’t spend all your time computing the batting average of the player coming up to bat with runners in scoring position (RISP) with less than 2 outs, which is a statistic that you will hear of frequently.  This quote says it best:



“Baseball isn’t statistics – baseball is (Joe) DiMaggio rounding second.” – Jimmy Breslin

Software Architecture Statistics

Software engineering has some really interesting statistics that are used as part of the development process.  Some of the most common or the most interesting statistics are:



  • Lines of code – when we are working on an application, particularly one that already exists (a legacy application), the number of lines of code is an interesting piece of data.  For example, if you are going to make major changes to an application it will be a lot easier to change one that has 100,000 lines of code than one with 2,000,000 lines of code (just due to the sheer size of the application).
  • Cyclomatic Complexity – measures how complicated the application is by analyzing the number of independent paths through the program.  This adds some “depth” to the lines of code statistic, because a program that is 50,000 lines of code can be much more complex than one that is 100,000 lines of code (the short sketch after this list shows how the number is counted).
  • Function Points – a way to calculate the size of an application by how many things it does.  Function points are very user centric, so “back end” applications are skewed a bit in the number of function points.
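
To make the cyclomatic complexity idea a little more concrete, here is a small hypothetical function of my own (the names and rates are made up) with its decision points counted:

```javascript
// Cyclomatic complexity is roughly 1 + the number of decision points.
function shippingCost(order) {
  var cost = 5;                  // base rate
  if (order.weight > 10) {       // decision 1
    cost += 7;
  }
  if (order.expedited) {         // decision 2
    cost = cost * 2;
  }
  if (order.international) {     // decision 3
    cost += 15;
  }
  return cost;
}

// 3 decision points => cyclomatic complexity of 4, even though the function
// is only a dozen lines long; two functions with the same line count can
// differ widely by this measure.
```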

While software architecture has some interesting statistics, we as an industry do not use them as much as we should (this statement is part opinion and part anecdotal observation).  We also fail horribly as an industry to track statistics over time so that we can make meaningful comparisons to past projects.  The best examples of using past experience would be comparing estimates to actuals on a project and comparing an upcoming release to a past release.  Generally those kinds of statistics are not captured in a meaningful fashion, although modern Application Lifecycle Management and Agile software development tools and techniques are changing that for the positive.


As we start to embrace more and better statistics in software architecture, we need to keep in mind that software architecture is still a very human process.  Statistics can help us judge our actions and make meaningful decisions, but the statistics cannot become the outcome of the process. 



“If you dwell on statistics, you get shortsighted. If you aim for consistency, the numbers will be there at the end.” – Tom Seaver

Software Architecture Certifications

I attended the third meeting of the Chicago Architects Group (event web site – the official web site is coming soon) in downtown Chicago on Thursday the 21st.  The CAG is a community based group that is focused on the discipline of software architecture (all facets of it: solutions architecture, enterprise architecture and infrastructure architecture).  Even though the group is in its infancy, I think the organizers are on to something very cool by creating a community for architects to share knowledge and war stories.

Tonight’s discussion was kicked off with a presentation by Carl Franklin, who is one of the CAG co-founders and a solutions architect with a local Chicago firm, Triton-Tek.  The CAG meetings are supposed to be group discussions that are catalyzed by a short presentation.  Carl’s topic was “What is an architect?”, which was all about the role of the architect and how it has evolved over the last 15 (or so) years as the industry has come to recognize the importance of software architecture and the unique role of the software architect.  His presentation did lead to some very interesting discussion, and we talked a lot about certifications.


It is only natural to talk about certification for software architects, because traditional architects (the kind that design buildings and bridges) go through a rigorous education and certification process before they can carry and use the title of architect.  In software architecture, the most common certification is achieved through the “Costanza” process, named for the lovable character from Seinfeld who always wanted to be an architect, so he just announced himself as one.  Some of the discussion at the event echoed this: one of the common patterns people see is senior developers who carry the title of architect because their company thinks that is the next logical step for a senior developer.

There are some real key differences in how software architects are certified and how traditional architects are.  In the United States, traditional architects are certified by the state, and generally there is a board appointed by the state to perform the certification or at least to have oversight of the certification process.  So it is a group of practicing architects that makes the decisions, not just a government employee.  In addition, there is a national board that provides some oversight and provides consistency from state to state (very useful as people move from project to project).

In software architecture, the certifications are most commonly developed and awarded by a software company (such as Sun, Microsoft, BEA, etc.).  The company-oriented certifications are not bad in the absence of a true discipline-based certification process, but there is a natural alignment to the specific company’s technology stack.  There is at least one vendor-neutral certification, from the Open Group.  It does separate itself from any one company, but it does not seem to have achieved the same popularity as the vendor certifications, and it lacks the rigor of the traditional architect certification process (which might be an unfair comparison, because software architecture is not nearly as old as traditional architecture).

Regardless of how you come down on the certification discussion, I think we would all agree that the education and experience are much more important than the actual process of certification and the nice certificate that you get to hang on the wall when you are finished.

Recently I had a short conversation with my wife and a few of her friends.  They were talking about getting their certifications in their respective fields (my wife is studying for her certification as an optician).  I was fascinated by the number of hours of training, apprenticeship and continuing education that someone has to complete to become certified in a field such as cosmetology.  Even though there are certifications for software architecture, none of them are required in order to be a practicing software architect.  I think it is a real contradiction, because a bad haircut grows out in a month or two, but you can live with a bad architecture for years.  🙂

Book Review: HTML Utopia

Every once in a while you read a technical book that has a profound impact on what you do on a day to day basis.  In 2004 I had a web designer friend of mine look at my personal web site to give me advice on something I was trying to do.  He right-clicked, did a “view source”, and the first thing he said was “Oh, you are still using tables”; he promptly handed me a copy of the first edition of HTML Utopia: Designing without Tables using CSS and told me to read it and come back when I “caught up to the 2000s”.  I have not done a table-based layout since reading the book.  A few years ago I noticed that there was a second edition of the book and I felt I needed a refresher course, so I bought the updated copy.  The book was not just an update; the co-author added quite a bit of new content.  The updates made a good book even better.

Quick Review

If you are looking for a book to help you make the leap from table-based web pages to well-formed HTML and Cascading Style Sheets (CSS), this is the book for you.  It is very easy to read – a technology book that you can actually sit down and read.  The book presumes very little experience with CSS (although it does assume that you know web development).  If you are already familiar with CSS and are just looking for a reference book, there are more complete references available.

About Table-less design and CSS

Tables were commonly used for layout in the 1990s.  There were a lot of advantages to using tables in the early days of the web, but times have changed.  CSS has been around for a while, but it was mostly used for styling (applying fonts and colors).  CSS2 (the second revision of the specification) added features for using CSS to do true page layout.  Once browsers were updated to properly support the specification (NO IE6 jokes, please), it became possible to limit the use of tables to tabular data, which was probably the intent of the original specification.

About the book

As I mentioned in the quick review, one of the best things about this book is that it reads very easily, in a style that makes it easy to learn the ins and outs of CSS positioning.  One of the neat things is that you can start to apply what you are learning within a couple of chapters; you don’t have to finish the whole book.  The topics get more advanced as you go through the book, but each chapter builds nicely on the previous chapters (that is one downside to the book; it is hard to jump right to a topic that is in the middle of the book).

In addition to the learning part of the book, it also contains a good-sized appendix that is a reference of the most common CSS properties and how to use them.  It is not an exhaustive list, nor are the entries defined in great detail.  It is a serviceable reference if you know the property and are just looking for a quick refresher.

Online and Offline

One of the neat things about the book is that all of the samples are built around a case study, the fictional site Footbag Freaks, which is dedicated to the sport of hacky sack.  The use of a consistent sample throughout the book is good, but it is augmented with an actual working site on the Internet, which allows you to interact with it in your browser(s) and get the latest sample code.  The site seems to have been updated a couple of times to keep it abreast of updates to the major browsers.  It is great that a book can have an evergreen component to it like the working case study.

Note: in case you “view source” on this web site, there are a couple of tables used to lay out the comment pages, but those are from generated code, not anything I did.

SOAP: The Betamax of Web Services?

I think most people know about the format war between VHS and Betamax that took place in the 1980s and early 1990s, even if you were not around at the time.  It has been mentioned a lot over the last couple of years because of the brief format war between Blu-ray and HD DVD.

One of the reasons you hear so much about the format war is that Beta became the punch line of a lot of jokes over the years.  My favorite is the Married with Children episode where Peggy has to drive across the state line to “Bob’s Beta and Bell-Bottom’s” to rent tapes.  Kelly says in the episode, “We are the last family on earth with Beta”.  The other reason that VHS versus Beta is well known is that it has been studied over the years as a classic marketing case study.  The common telling of the story is that Beta was released first and, even though it had vastly superior quality to VHS, it lost out to VHS.

Another format war brewing?

If you follow web service technologies, you will know that there are a few standards running around, but there are 2 big players in the game: SOAP and REST (I don’t mean to leave out or offend the people that use XML-RPC or POX).  SOAP predates REST by a couple of years (although you can argue that people were using REST before the term was coined).  You could also argue that SOAP has the superior technology, because it supports a lot more features than REST does.  In this way you could say that SOAP is like Betamax.  Even with the superior technology and a 2 year lead, SOAP is losing ground: REST services seem to be much more popular on the Internet than SOAP-based services.

Market share begets market share

One of the reasons that Betamax eventually lost out was the lead in adoption that VHS gained when the formats were first released.  VHS took the initial lead because the format supported a 2 hour recording time, while Betamax only supported 1 hour.  Tape rental stores and retailers who sell tapes are naturally going to carry more inventory for the format that has the larger market share.  People looking to purchase new units will see that there are more tapes available in that format, and that will influence which unit they buy.  You can see how this cycle can grow and grow.

This is one of the things we are seeing with REST and SOAP.  If more people are consuming REST web services, then more people will provide support for REST services.  As use of REST increases, you will see situations where fewer and fewer web sites offer exclusively SOAP services, and some sites will offer only REST services.  That said, you still see popular APIs like Flickr and Amazon Web Services supporting both formats.

Is there room for both web services?

There are some differences between the two format wars.  In the case of Beta versus VHS, the consumer was very locked into the platform (format) and the cost to switch platforms was very high.  The cost was twofold: the players were expensive, and once you had invested in consumables (the tapes), they were worthless on the other player.  This made picking the “wrong choice” a high risk.  You saw that same risk in the recent Blu-ray and HD DVD war cause many people to sit out the decision until the format war was resolved.

The cost of consuming (as a client) one service format or the other is rather low.  Most of the heavy lifting is done by the respective framework that you are using.  Unless you are using a feature of SOAP that is not available in REST, the cost of switching should be relatively low, as long as the format of the service is properly abstracted away from the rest of your application (which you should be doing anyway).
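
As a rough sketch of what that abstraction looks like from a JavaScript client (the URLs, element names and operation below are hypothetical, not from any real API), notice that the rest of the application only ever calls one function, whichever format sits behind it:

```javascript
// REST style: the resource is identified by the URL and a simple GET suffices.
function getPhotoRest(id, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "http://example.com/photos/" + id, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseXML);
    }
  };
  xhr.send(null);
}

// SOAP style: the same operation is a POST of an XML envelope to one endpoint.
function getPhotoSoap(id, callback) {
  var envelope =
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body><GetPhoto><Id>' + id + '</Id></GetPhoto></soap:Body>' +
    '</soap:Envelope>';
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "http://example.com/PhotoService", true);
  xhr.setRequestHeader("Content-Type", "text/xml; charset=utf-8");
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      callback(xhr.responseXML);
    }
  };
  xhr.send(envelope);
}

// Swapping getPhotoRest for getPhotoSoap (or vice versa) touches a single
// function, which is why the client-side cost of either format stays low.
```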

The cost of providing (as a server) two separate service formats has traditionally been higher than the cost of consuming one or the other.  The reason is obvious: the client picks one or the other, while you have to provide both for each and every service that you create.  I personally think that this cost will come down over time; we have already seen the burden reduced on the .NET platform with the advent of REST-style (webHttpBinding) endpoints in WCF.

I think that there is room for both web service formats.  I think you will continue to see SOAP being used within the enterprise and between enterprises when coordination features like transactions are necessary.  For producing and consuming public Internet web services, REST will probably be the most commonly used format.  Either way, I hope that none of us winds up like Peggy Bundy, looking for a way to consume our services.