Monday, December 31, 2007

Python Versus Erlang for an MMOG

As I mentioned in my previous post, I'm in the process of evaluating the core infrastructure for a massively multiplayer on-line game (MMOG).

The current technology contenders for the server are Stackless Python (and its eventual successor, PyPy) and Erlang.

In both cases, concurrent processing is provided through the Actor Model, which we have already determined is our preferred development model (remember, there are no correct models, just more and less useful ones, and this is our useful model). Stackless and Erlang both support the model natively: the tasklet in Stackless and the process in Erlang.
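To make that concrete, here is a minimal sketch of what the tasklet-as-actor style looks like in Stackless Python. The npc/world names are purely illustrative (this is not our server code), but the shape is the point: each actor owns its own loop and reacts only to messages arriving on its channel. The Erlang version would be a process sitting in a receive loop; same idea, different syntax.

    # Minimal actor sketch in Stackless Python (assumes the stackless module
    # is installed; names are illustrative, not actual game code).
    import stackless

    def npc(name, inbox):
        # Each NPC is an independent tasklet; its only contact with the rest
        # of the world is the messages arriving on its channel.
        while True:
            msg = inbox.receive()       # blocks this tasklet only
            print("%s reacts to: %s" % (name, msg))

    def world(inbox):
        # Another tasklet standing in for the rest of the game.
        inbox.send("player walks past")
        inbox.send("player draws a sword")

    inbox = stackless.channel()
    stackless.tasklet(npc)("guard", inbox)
    stackless.tasklet(world)(inbox)
    stackless.run()                     # run tasklets until done or blocked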

In terms of real-world implementations, Stackless is being successfully used by Eve Online for both the server and the client. Eve Online shares one important philosophy that we want to emulate as well: no sharding of the user population. Python is an incredibly efficient language to program in, sharing strong points of both object-oriented and functional languages. Our programmers (me and two others) are also much more familiar with procedural, object-oriented programming. All things being equal, we would certainly choose Stackless Python.

But all things aren't equal. Python suffers from a significant scalability barrier: the Global Interpreter Lock (the "GIL") prevents a single Python process from making efficient use of multiple CPUs. Even without the GIL, multiple processors introduce a significant complexity challenge as you deal with locks and concurrent threads.
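You can see the GIL for yourself with a crude experiment: two CPU-bound threads take roughly as long as doing the work twice in a row, because only one thread executes Python bytecode at a time. This is just a throwaway benchmark sketch (Python 2.x of the day), not anything scientific.

    # Crude GIL demonstration: CPU-bound threads do not run in parallel.
    import threading, time

    def burn():
        total = 0
        for i in xrange(10 * 1000 * 1000):
            total += i

    start = time.time()
    burn()
    burn()
    print("sequential: %.2fs" % (time.time() - start))

    start = time.time()
    threads = [threading.Thread(target=burn) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("two threads: %.2fs -- no better, even on a multi-core box"
          % (time.time() - start))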

In fact, that can be seen in Eve Online, as they struggle to deal with very large population centers overwhelming the ability of Stackless to scale. Stackless Python scales horizontally across multiple processes that don't share state (such as multiple zones in a game), but within a single state construct, you are limited to a single CPU.

This is where Erlang comes in. Built from the ground up to support concurrency, both on a single machine and across multiple machines, it doesn't suffer from the single-state-construct bottleneck. Erlang is a functional language, and one implication of that is that there is no state to be shared between processes. Processes can be placed "anywhere", and all state communication between processes is done via messages.

In terms of the real world, which is an important consideration for me, Erlang came out of the Swedish telecom giant Ericsson as a solution for building telecom switches that required huge scalability and nine "9's" of reliability. Coming from a Java background, and having seen what it takes to hit five "9's", that is a huge motivator for me. Another large example is Amazon's SimpleDB, which is written in Erlang.

In terms of gaming, Open Poker was recently rewritten in Erlang, and Vendetta Online rewrote its AI functionality in Erlang.

So, the downsides...
  • There isn't an established body of work to follow in the game
    industry.

  • We don't have any experience writing systems using functional
    languages.

  • There are few to no existing libraries to simplify our lives.

On the other hand, the upsides are significant...
  • Highly scalable across multiple processors and multiple
    systems, for free.

  • Ability to hot-swap code, to patch the system without shutting it
    down.

  • A database (Mnesia) that supports the highly concurrent, distributed
    environment that Erlang provides.

In short, the benefits are all around the concurrency and
scalability. Personally, I want to see an environment where the
number of actors (people, NPCs, etc) is not arbitrarily limited.
Right now, I'm heavily leaning towards Erlang.

Wednesday, December 26, 2007

Building a Massively Multiplayer Online Game

I've been interested in the game industry for years, both in terms of playing as well as building. The challenges associated with building a game are significantly different than those associated with the normal day job, and the opportunity to provide something that others enjoy certainly appeals to my creative side.

In the early 1990s, I was introduced to the MUD and spent far too much time playing there and far too little time on my computer science studies. Since that time, bringing together a large group of people into a single virtual community has been the only logical direction for games to go (In my not-so-humble opinion :), and that has been realized over the last few years with many exceedingly good massively multiplayer online games (MMOGs) being published.

Of course, the developer in me has always thought that I could do it better. So I am now putting my time (since I'm short on the money) where my mouth is. A group of four of us has started the process of incubating an MMOG, starting with base ideas and features we'd like to see.

Being the type of person who likes to hear myself wax prolific, I will be blogging about the experience here. As this venture is intended to be commercial in nature, I won't be giving details about anything that would fall into the "competitive advantage" category, but as there are many different aspects to building a highly scalable game that have nothing to do with competitive advantage, there should be lots to discuss.

As the guy who is architecting this, there are a lot of decisions that need to be made early on: what technologies do we use, how do we segregate processes, how do we enable virtually unlimited scaling, etc. This is a case where the naive solution will absolutely not work.

So, over the next year or more, expect to see a lot of entries as I discuss trade-offs and decisions. It's going to be a lot of fun for me; hopefully it will be useful for you.

Monday, December 17, 2007

Festering Pustules

As I ponder more and more the issues facing software development, I can't help but agree with this quote. I think what is more damaging is that only a small percentage of people even recognize that the sickness abounds...

We know about as much about software quality problems as they knew about the Black Plague in the 1600s. We've seen the victims' agonies and helped burn the corpses. We don't know what causes it; we don't really know if there is only one disease. We just suffer -- and keep pouring our sewage into our water supply.

-- Tom Van Vleck

Thursday, December 6, 2007

Being a Craftsperson or a Laborer

I'm coming to realize that the world of software development is very similar to the world of construction. Given how obvious that comparison is, it probably isn't a very original one, either.

There are a lot of vocational programmers out there, keeping the wheels of business information running. Business needs those legions of framers, plumbers, electricians, and drywallers to keep systems talking to each other. And lest I come off even more arrogant, some of them are amazingly good at what they do. But, just like in construction, these people are not, generally, paid to innovate and their working environment is never optimal.

Joel Spolsky said it well:
...if you’re not very, very careful when you graduate, you might find yourself working on in-house software, by accident, and let me tell you, it can drain the life out of you.

The next level comes in software companies around the world. Life is usually better here, though the developers may or may not be. The work is usually more interesting because, to a greater or lesser degree, innovation is what allows the business to stay in business. Unfortunately, most companies, over time, lose both the need and the ability to innovate: the need, because they rely on it being too expensive for other businesses to switch away from their software; the ability, because process and formality overwhelm spontaneity and creativity.

These are your commercial builders. Their scale is usually larger, and the complexity of the result is much greater. There are a few very good people leading things, backed by a mass of...you got it, framers, plumbers, electricians, and drywallers.

Then there is the top tier. These are the best companies to work for. Innovation isn't something that is talked about in meetings as a "goal", then promptly followed up with enough crap to stifle any desire to do anything; innovation is the norm. Management exists to ensure that technologists get what they need to do their job. Customers look at the results and say "Wow." Businesses choose to change to them, because not changing will cost them more.

They are places where you can write about your job like this:
And with that, I'm going to get back to my "day job" project(s), which are sadly confidential. But they're so cool that if I did tell you about them, you'd be so overwhelmed that you'd have to go sit in one of our $5000-ish massage chairs, like the dude sitting next to me right now.

Not surprisingly, the percentage of really good people who work in the top tier is much higher. The top-tier survives at the top because of that talent (not just in development, mind you). These are the master craftsmen, brought in to do the finish work on one-of-a-kind structures around the world.

I love crafting software. I love software development. I am, however, getting tired of needing to unclog the sewage because the company won't invest in the business it is in. I've heard, and tried to buy into, the argument "if you cut investment, you'll make more profit", but those companies with $5000-ish massage chairs are proving otherwise.

Monday, December 3, 2007

Laptops for Developers

All developers should have laptops and all conference rooms should have projectors. Honestly, I'm surprised there are software companies out there that think otherwise, but I found out today that there are.

Reasons:
  1. Given that work areas are so open, it is really obnoxious for a few people to gather around one person's monitors for anything. It is distracting to everybody else around them.
  2. Code reviews are much easier with a projector and a white-board. Switching between code modules while doing chalk-talk is inefficient using paper. Code review notes belong in the code, not on a piece of paper that will get lost.
  3. Your good developers will spend time outside of work doing "geek" stuff. You encourage that to be work related if you provide the environment.
  4. At some point, almost every developer will find that they are traveling, be it for a conference, an emergency customer issue, or to train a development group in Indiana. You've made them 100% inefficient by not providing a laptop.
  5. Needing to be 100% at the office to "work" is anachronistic, especially for development. Providing a mobile development environment means a developer can work productively for 3 hours at home before his dentist appointment instead of taking 1/2 day off of work. That means better productivity and happier employees, since they don't have to waste leave for life-maintenance.
The days of "the laptop is too slow" are gone. So are the days of "the laptop is too expensive." The only thing preventing laptops from being used in place of desktops is attitude, and that attitude costs real dollars.

Friday, November 30, 2007

Parallelizing Haskell

OK, I admit I don't know much about Haskell. It is sitting at the front of my "list of things to learn about", to the point that I downloaded and installed GHC and started going through a tutorial. This is the reason why: Use those extra cores and beat C today! (Parallel Haskell redux):

So with an off-the-shelf Linux box, you can write simple (but parallel) Haskell that will outperform gcc's best efforts by a good margin -- today! Multicore programming just got a lot easier.


Now, to be clear: this is a very simplistic sample, but it does show the power of building code in a functional way.

Now, my question becomes, can Stackless Python beat it in cases where we aren't dealing with the global interpreter lock?
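For what it's worth, the usual way to sidestep the GIL is to skip threads entirely and fan the work out to separate processes. That is not what Stackless gives you (tasklets all share one interpreter and one GIL), so treat the following as a sketch of the process-based route rather than a Stackless answer; the numbers and chunking are arbitrary.

    # Rough process-per-core counterpart to a parallel-sum benchmark, using
    # a multiprocessing Pool so the GIL never becomes the bottleneck.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        total = 0
        for i in xrange(lo, hi):
            total += i
        return total

    if __name__ == "__main__":
        n = 40 * 1000 * 1000
        step = n // 4
        chunks = [(lo, min(lo + step, n)) for lo in range(0, n, step)]
        pool = Pool()                   # defaults to one worker per CPU
        print("total = %d" % sum(pool.map(partial_sum, chunks)))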

Monday, November 19, 2007

Programming, Circa 2013

Something is going to change. I don't think I'm the only one who can feel it. We are seeing an impedance mismatch between our applications, the languages we use to write them, and the underlying hardware. The mismatch is slowing development, and customers are becoming less and less tolerant of that.

First, people want to decompose applications and use pieces of them. Look at mash-ups, web services, RSS, and other similar technologies. The application that starts at line one and ends at line N has long been gone, but now we are seeing applications whose components are not always running together, neither in the same application space nor on the same servers.

Second, the programming languages have diverged in some ways. We see two camps: "backend" and "frontend." The backend languages have not had any significant change in the last decade; Java and .NET are pretty much where they were ten years ago. A bigger change, perhaps, is in our use of JavaScript, but, unfortunately, those changes demand more componentization rather than providing it.

Finally, hardware is scaling horizontally more than vertically, meaning we are going to see more cores to improve performance rather than more cycles. In order for applications to scale, they are going to have to scale horizontally as well.

The trouble is, as it stands today, componentizing applications and scaling them horizontally is hard. It is also more and more necessary. I think we are near the tipping point in an evolutionary change. The last evolutionary change was the application virtual machine (both the Java Virtual Machine and the CLR), which separated applications from the underlying hardware: input, output, memory management, etc. What it didn't do was remove from applications the onus of knowing where or when something was going to happen. That is what's coming.

The key to the next evolution is spatial and temporal isolation. This is possible today, just as building a highly available multi-threaded application was possible 20 years ago using low-level languages. The difference is that it will now be virtualized in some fashion.

The paradigm change will push developers into writing more independent blocks of code that are known to be independent, with a VM that handles scheduling those blocks and communicating between those blocks. Inter-task communication will be allowed only at the beginning and end of blocks.

The concept is illustrated well by Stackless Python. Each tasklet exists independently, with communication occurring at the beginning and end of the tasklet.
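As a sketch of that shape (Stackless again, toy names again): each stage below receives once, computes with no shared state, and sends once, which is exactly the property that would let a smarter VM schedule it anywhere, in any order that respects the data flow.

    # "Communicate only at the edges": each stage receives at its start,
    # computes independently, and sends at its end.
    import stackless

    def stage(work, inbox, outbox):
        value = inbox.receive()         # communication at the beginning...
        result = work(value)            # ...pure, independent computation...
        outbox.send(result)             # ...communication at the end.

    a, b, c = stackless.channel(), stackless.channel(), stackless.channel()
    stackless.tasklet(stage)(lambda x: x * x, a, b)
    stackless.tasklet(stage)(lambda x: x + 1, b, c)

    def driver():
        a.send(7)
        print("result: %d" % c.receive())   # 7 -> 49 -> 50

    stackless.tasklet(driver)()
    stackless.run()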

Once the application is properly broken into the tasklet concept, the underlying compiler and VM will handle spatial and temporal translocation. Spatial translocation will allow a tasklet to run "anywhere", be it a separate processor or a separate system. Temporal translocation will allow tasklets to be run in near-arbitrary order and with arbitrary concurrency. Near-arbitrary order means the VM, as time goes on, will be able to predict which tasklets need to be run in which order. Arbitrary concurrency means that, assuming order is properly maintained so inputs match outputs, any tasklet can run concurrently with any other tasklet.

I believe that, somewhere, somebody has already begun the solution. My goal now is to find it. In 1998, I started using Java because I could sense the evolution inherent in it; I'm expecting that, by 2010, I'll be using "the next thing" and it will be in broad use by 2013. I'll let you know when I find it.

Wednesday, November 14, 2007

Ropes, Pegs, and Ladders

I've figured it out. It took me a long time, but it finally happened. I'm not a peg: I'm a rope.

Huh? Hear me out. You see, most companies have ladders with holes for the rungs to fit into. Holes of different shapes and sizes: square for engineering, round for business, octagonal for management. If you work at a particularly good company, there are multiple ladders for each type of position, with differently-shaped holes that slightly different pegs can fit into. That keeps really good "individual contributors" (gak, I hate that phrase) from becoming really bad managers.

So, back to me (which is, after all, what this is all about...). I'm a rope. Don't get me wrong, I have my pegs: I'm a good software architect. I'm a competent manager. I'm pretty good in pre-sales environments. I'm good at presenting to customers, executives, and others. I'm also good at product management.

Ouch, my arm hurts from patting myself on my back. Seriously, I'm not the best at any of those. I've had far better managers than I manage. I've worked next to better engineers than I. They fit those holes better than I do.

Every company with great ladders that I've worked with seems to have a few people who don't fit their holes perfectly. These people tend to become the go-to guys in a lot of different situations. They are good at working with customers, with other organizations within the company, and with engineers themselves. They are very technical, yet able to make the lay-person understand the techno-babble. They also have a strong understanding of the business behind software, leading to better technical decision making. Because of these skills, they get called on to perform a lot of different tasks.

Unfortunately, while every company I've worked for has these people, no company seems to want to recognize the need for them. Organizationally, these ropes are "out of it" because they don't fit perfectly in the engineering, product management, sales, consulting, support, or any other organization, yet they are incredibly valuable to all of them. They are the ropes that tie the ladders together.

So, even though it may not look as good to my future boss, whoever he or she may be, I proudly declare: I am a rope. I can accomplish aspects of all of the above at the same time, handle communicating to customers and to other organizations within your company, and, by so doing, make your product more successful.

I look forward to the day when companies realize that not all people in the organization belong on ladders and that crossing the organizational divide is not to be feared but to be embraced.

Until then, will you hand me that chisel over there? I've got to shape this peg...

Monday, November 12, 2007

Bucking The OS Establishment

A couple of weeks ago, I'd had enough. I was approaching Windows Tithe Day and decided to buck the conventional Top Ten Reasons not to Install Ubuntu. I feel so rebellious!

I've been a Linux fan since around 1993, when I bought my first PC (a Packard Bell 486/50) for school. I was in the Computer Science program at the University of Utah. I could not tolerate Windows 3.1 in that Un*x world, so I quickly became involved in the Linux community.

After downloading my 26 floppies over work's 56k connection, I went to town installing Slackware for the first time. Then I installed it again and again as I learned what I was doing. Over time, my confidence grew and I became an officer in the fledgling Salt Lake Linux Users Group. There, I found the opportunity to learn a lot about Linux (and to get lots of free stuff from early Linux vendors; a BIG shoutout to O'Reilly for their major support).

So, nearly 15 years later, my Dell Latitude D820 is now running Ubuntu Gutsy Gibbon and, for the most part, it is loving it. Emboldened by my rebelliousness and my success, I've also installed Xubuntu on my wife's old Sony Vaio P-II 266. I'm also running a MythTV Fedora server.

I have my future plans laid out that include a dual duo-core server running XDMCP for multiple low-end laptop X terminals for each of my kids, separating my MythTV server and clients, getting proper filtering put in place for my kids, and, finally, converting my wife.

Now, if only Blizzard would come out with an official Linux client for World of Warcraft...


From a post by Joel Spolsky in Joel on Software:

It’s hard to get individual estimates exactly right. How do you account for interruptions, unpredictable bugs, status meetings, and the semiannual Windows Tithe Day when you have to reinstall everything from scratch on your main development box?

Saturday, November 10, 2007

Snake Wrangling as a Method of Teaching Young Programmers

I'm happy. My daughter wants to start being a computer geek, like me. She has shown more interest in computers than any of my other children and has now expressed interest in programming.

To that end, I've installed Xubuntu on an old Sony Pentium-II 233 laptop we had lying around. Her primary use is going to be authoring her novel using Open Office, but secondarily, we'll use it for teaching her Python.

I've chosen to teach her using Python because it has a good signal-to-noise ratio between coding-overhead and coding-response. It is very easy to do easy things, unlike Java, C#, and other languages of that nature that have a lot of programmer-overhead. It is also more structured, making it easier to read code, as opposed to languages like Perl.
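As a tiny illustration of that low overhead, here is the entirety of what her first program needs to be; no class, no main method, no compile step (the prompt text is just an example):

    # A complete first program: ask a question, respond. Nothing else needed.
    name = raw_input("What is your name? ")
    print("Hello, %s! You just wrote a program." % name)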

In support of this teaching, I've found a couple of interesting things. First is a great book targeted at teaching kids Python called Snake Wrangling for Kids by Jason R Briggs. Kudos to Jason for this: I hope you get it published soon.

The other interesting thing I've found is Guido Van Robot. According to the website, "Guido van Robot, or GvR for short, is a programming language and free software application designed to introduce beginners to the fundamentals of programming." To be honest, it is a bit buggy, but interesting nonetheless.

An important point is that I'm teaching an early teen. I don't think this would work for younger kids. Other interesting technologies include:
  • Scratch, which would certainly appeal to younger programmers.

  • Storytelling Alice for early teens (targeted especially at girls).

  • Computer Science Unplugged is a neat program for teaching computer science without programming...and without computers.

  • Squeak is a Smalltalk-based system for teaching programming, math, and computer science concepts to younger children. This one has a lot of promise, and I will be introducing my younger children to it.

What have you found for teaching programming and computer science? This is close to my heart, because I am a geek and want my kids to be geeks, too.

I'm sure there are more sources out there, and as I run across them or as I get feedback from my daughter, I'll post. I'd love to hear your feedback as well.

Friday, November 9, 2007

Technical Debt in Agile Development

In a previous post, I talked about technical debt and how you can use that concept to understand the real costs of design and implementation trade-offs. Being a believer in Agile development, I wanted to explore what technical debt means in an Agile world.

I think one of the interesting questions around this is whether it is possible to implement a feature while maintaining the principle of "Simplicity is Essential" and still manage to take on debt.

For instance, suppose I have to implement a logging system, the feature calls for logging to a database, and I start by logging to the file system as a prototype. There are three options I can now take with this feature, none of which I would call debt. My choices in making a change in scope are to 1) change the requirement such that file logging is OK, mark the feature complete, and add RDB logging as a separate feature in the backlog; 2) not call the feature complete and continue implementing the RDB features; or 3) pull the feature from the code altogether and put it back into the backlog. None of these introduces debt into the code.

Any new feature that gets partially implemented will fall in that category: either the requirements get refined to handle it, the requirements are completed, or the code is removed. Every other case that comes to mind is in the category of "I would like to do X because it feels right, but it is not the simplest thing." Do you agree?

Actually, I don't. Read on to find out why...


There are several kinds of debt that can build up in an Agile project. We'll start with the most basic and work our way up to the harder to quantify debt.


  • Poor Unit Tests

    In agile development, this may be the worst possible debt you can accumulate. This is check-cashing-store debt: 35% interest plus weekly fees. Failure to put proper emphasis on unit tests will always come back with huge future costs. In test-first methodologies, there is never a time when it is appropriate to take on this debt. In test-soon methodologies, there may be one time when the cost of the debt won't be felt exponentially: at the end of a particular release. The caveat here is that your organization has to have the maturity to pay the debt immediately following the release, or the interest will rack up.

    In my opinion, the only time this would ever be acceptable is if the fate of the business/product rested on it. The cost of poor unit tests is far too high to risk in any but the most extreme circumstances.

  • Failure to Refactor

    Now we get into some of the optional aspects of software development. It is possible to forgo refactoring on a project in favor of just pushing the code out. The problem is that systems get brittle as pieces start connecting in ever more tenuous ways. "I'll just tack this on" is a familiar sentiment, especially when refactoring forces not just application code changes, but *gasp* unit test changes. That is a lot of work for one little feature, but, ironically, each time the decision is made not to refactor, the pain of refactoring grows.

    This is debt that can be easily tracked. Strong developers will know when a piece of code should be refactored. Any time it isn't refactored, it should be tracked.

  • Over-Engineering a Solution

    The dot-com boom was a marvel of over-engineering. I worked for a dot-com that built, from scratch, every piece of infrastructure it needed to build a portal: a web templating engine, an object-relational mapper (built on a directory server, no less), a mail server, a COM-like framework in Java, and many others. In addition, all of these were built with some theoretical "perfect" mindset, such that they were slow, difficult to use, and didn't really meet the needs of the clients (other developers) who were trying to get an application built.

    On top of that pain, we had to maintain the code going forward. As things changed, we were forced to update code that seemed to only exist to make our own lives more difficult. We had a huge over-engineering debt that significantly impacted our ability to produce features for customers, which impacted our own business success.

    That debt is harder to quantify, because no developer likes to believe that he is overdoing something. Worse yet, you pay for it at least twice: once in spending more time building it and again in maintaining it. I believe this kind of debt should never be incurred, and the only way to prevent it is through continual code reviews...imagine that, another tenet of Agile programming coming to the fore. I have never seen over-engineering come back and pay for itself in the future: don't do it.

  • Over-Simplifying a Solution
    This is debt because an expectation has been set with a customer and the solution does not meet the customer's real needs. This generally happens when there isn't a solid understanding of the requirements, and it causes debt through customer service issues and other interruptions to the development plan. It is the hardest debt to quantify, because in some cases you just don't know what you don't know, but it is also the most obvious debt from the customer's perspective.

    My experience has made this The Money Pit of debt. It is debt you have no idea you are getting into, it always has to be paid immediately upon discovery, and it generally causes much pain and suffering. The only way to prevent this debt is close customer relations and regular feedback (does this sound familiar?).



What other forms of debt do you see as you do Agile development? How do you minimize that in your work? What is your favorite color? All comments welcome.

Thursday, November 8, 2007

Knowing the Costs of Development Choices

In most software companies, before any feature is developed, it has gone through a process whereby product management has determined that there is financial merit in developing a feature. This is usually done by looking at the incremental value the feature provides to the product divided by the estimated cost of development. Incremental value is typically determined by the increased sales the feature will generate or by the increased cost the market will pay for the product.

If we take it as a given that the assumptions behind the numbers above are consistent, then, using the above method, we can put two features side-by-side and compare the value of developing them by the return generated. In fact, this is often used as a basis for feature sets, but other, real-business items like sales commitments, unforeseen maintenance issues, etc., also intrude, taking the ideal world and populating it with ugly necessities.

I've been pondering for some time on ways to quantify the costs associated with design and implementation trade-offs. Just as two features can be put side-by-side, two implementation strategies can be put side-by-side and objective decisions ought to be able to be made around those. Unfortunately, most of those decisions I've seen have come down to, at a most basic level, personal preference.

Furthermore, communicating to a non-technical audience the costs and financial risks associated with those trade-offs is nearly impossible. Because of those challenges, a breakdown occurs between development, wanting the "no risk" option, and business, who wants to properly evaluate the best ROI. The breakdown often turns into a rift as development wants "the right thing" and business wants "the cheap thing" with only weak objective communication. My technical side wants to do "the right thing", but I'm pragmatic enough to understand that sometimes trade-offs have to be made, so I try to blunder through that communication minefield, getting beat up on both sides.


Coming out of this pondering is a concept that has, apparently, been around for some time, but which is new to me. This is the notion of technical debt. Steve McConnell has a great write-up about it at his blog.

The short of it is that technical trade-offs can be measured and quantified in terms of "debt" akin to any other investment debt. Make a bunch of crappy code, you are increasing your "credit card" debt. Make a thoughtful decision to implement something using a major shortcut, you are increasing your "investment" debt.

And, just like real debt, technical debt comes with interest. The more debt you take on in your code, the more you pay to create new features and to maintain the code. Eventually, you can spend all of your money on interest payments, and provide no features.

The notion of technical debt and the effects of those debts are concepts that any business person can understand. Using a vocabulary around technical debt, better informed decisions can be made and, more importantly, a method of tracking that debt can be implemented so you can know the built-up cost of that financing.

Now that it is quantified and tracked, development and business can start making intelligent decisions about how much debt a project can handle. Fast moving, early-phase projects will likely need to rely on more debt; mature projects will probably want to limit that debt; near end-of-life projects can probably start retiring debt (a concept I wish existed in the real world!).

The important takeaways become:

  • Don't increase your debt through sloppiness. Debt is debt, and you have to pay for it.

  • Take on debt acknowledging the future costs and only take it on when you are willing to pay that future cost.

  • Track your debt so you know how much it is going to affect your project, both now and in the future.

  • Make sure you make payments on that debt, both when you are reaching your borrowing limit and at regular intervals. Failure to pay on the debt regularly will compound the interest, causing you to reach the borrowing limit sooner.



As always, I'd love to hear your feedback. Have you managed technical debt before? How did you manage it? Was the concept an effective tool for you?

Wednesday, September 12, 2007

Innovation and Employee Free Time

Amazon, Google, 3M: Three companies that have been very successful, with a great deal of their success coming as a result of their innovation.

Amazon has altered the concept of what a web store is. In the early web, it was possible to put up your niche store and carry on. Amazon has shown that, just like in real life, the Wal-mart rules reign supreme. More importantly, they've implemented it better than anybody else.

Google has altered what the web is and how people interact with it. It is not surprising that "google" has become a verb, but their impact has been on much more than just finding data. Google Maps woke people up to how the web could be an interactive experience.

3M has a history of innovation. In fact, their entire corporate culture is built around it, with a high percentage of annual income coming from new product introduction. Not one year. Every year for over 70 years. Non-stop innovation.

There are commonalities between these companies. They have decided that innovation is their lifeblood. Most companies will say that they want to be innovative, but Google, Amazon, and 3M have put their money where their mouths are. Each of them gives developers a significant amount of free time, usually around 20%, to "play."

What is play time? It is a time for their employees to step away from their day jobs to experiment. They get a chance to try new things that they wouldn't get to otherwise. They get to make changes to existing code that isn't in the "project plan" to see if their idea makes their products better or worse. They get to integrate new technologies, learn new techniques, interact with people other than their direct co-workers.

In short, they get to innovate. Rarely does innovation come as a part of the normal development process. It comes during an unexpected trip down a wrong or curious path. Without giving employees this opportunity to play, you won't innovate. The best you'll do is copy, and slowly at that.

Thursday, August 16, 2007

My Dream Job (What's Yours?)

Around 10 years ago, I was working with Microsoft's DirectX technology. As Microsoft was an important partner (as they are for everybody, it seems), I attended the Microsoft DirectX Conference following CGDC. That is where I was introduced to my dream job: DirectX Evangelist. Later on, I briefly met the evangelist in Redmond during a campus visit, and that cemented my desire.

What brought this to mind was a job listing I saw recently. Like most technologists, I spend time running around the Internet looking at what other options are available. A notable large organization has a position open as their OSS Evangelist. This got me reminiscing about that DirectX Evangelist and dream jobs.


Being a champion for a technology would be awesome. Needing to be highly technical to champion the technologies to other technologists while remaining human to the non-technologists is hard; hard is fun. It is like when I would "act" in the high school plays: a little stage fright, a few jitters, and then doing some improv when I forgot my lines.

I guess I'm the weird technologist. I actually like interacting with people. Go figure. I also recognize not everybody is like me, so, to the point.

The real issue is getting out of the ranks of Everyman and into a position that you love. As one's "love" changes over time, it is important to be constantly working towards it. Take opportunities to grow, even if you don't think you'll need them now. Present at a conference; take a product management class (great for software developers!); find somebody in another organization, become "job buddies", and teach each other about your jobs.

I once looked at the resume of a guy who had literally spent 10 years working on the dialogs in a word processor. He had not pushed himself at all, and, not surprisingly, we did not even consider him for the position. Even today, a good friend I worked with there and I still joke about it.

Don't be the Dialog Guy! If you can't find a way out of being the Dialog Guy at the company (and, truly, you should be able to), change jobs. Keep learning new things. Stay passionate. Again, don't be afraid to change jobs to get closer to your passion.

And, yes, I did apply for the OSS Evangelist job.

Development Management in Large and Small Organizations

In an Agile world, what is the responsibility of the Development Manager? The answer, not surprisingly, depends on the organization. In some respects, their responsibilities are the same: ensuring individual growth, managing conflict, etc. As there are numerous books on that, I won't address it here.

On the other hand, the responsibilities do change between a small organization and a large organization. It is important to understand the difference because the personality that will thrive in a small company dev manager role will suffocate in a large company. Conversely, a personality that would thrive in a large company may flounder in a small company.

Large Organizations

I don't want to use the word politics. The negative connotations associated with that don't help get the job done. But what is the job?

Communication. And communication. And more communication. Nobody makes command decisions in a large organization, so accomplishing anything requires communicating with others. Communicating status, assisting others in communicating their needs to you, interacting with others to meet the people you need to know, and then communicating some more.

Once you've done all your communication, do it some more.

Once you've done it some more, you are ready to begin clearing the roadblocks in front of your staff. And this is your job! You are the silent tow truck, letting the noodles think they are moving themselves, unaware that you are pulling them where you want them to go. As your developers run across problems, you need to use your communication skills to find ways around them. Find the people you've interacted with and help them understand how they can help you and how you can help them.

Small Organizations

In a small organization, the primary need is to know the customer. That will provide you the foundation to understand everything else in the job.

You need to understand your product. You need to have an understanding of what you are building and why you are building it. To have a very productive team, you need to be in a position to answer developer's specific questions about requirements. You don't answer what should be built but you need to be able to answer questions about how to build it.

More specifically, the dev manager is the multitasker. Obviously, communication is still important, but you are now a keystone in getting things done. Product owners will not have the time to get you everything you need. Customers will have questions that services (if it exists) will not be able to answer. There will be ten times more work to be done than you could ever hope to do. And everybody is going to look to you for the answers.

Both jobs are challenging. Different people will excel at one more than another. As you look at moving from one organization to another, there will be a lot of things going through your mind about pros and cons. Make sure this is one of them.

Thursday, August 9, 2007

Reducing Maintenance Costs


I saw this one over at the Geekend Blog and had to share. I guarantee, if everybody (including, but not limited to developers) worked with this mindset, maintenance would be MUCH cheaper!

Saturday, August 4, 2007

Building Great Teams

I admit I am not an expert on this topic. I have not built dozens of teams, picking and choosing individuals for their individual merits, capabilities, and personality types. I have not read dozens of psychology texts on interpersonal interactions.

On the other hand, I have been on teams that have worked amazingly well and that have failed miserably. Based on that experience, I have identified the key success factors in building great teams.

I recently changed jobs and left the best team I had ever worked with. The team was the single strongest motivator I had to stay at the job, and to this day I still get together regularly with most of the team. I was managing the team, but I don't take credit for them being the exceptional team they turned into. In fact, I was traveling a lot, and I think that actually helped in strengthening the team.

Here are the success factors in building a great team:

Believe in the Product - Work for the Customer

I do not believe that it is possible for a team to become great if they don't truly believe in what they are doing. This is one of the areas where I think I was able to influence my team, because I believed in the product and I did everything I could to pass that on to the team. I spent time with customers, my team spent time with customers, and we worked closely with product management, sales, and services to understand the customers. We knew that our product could make our customers' lives better.

To that end, we were all working towards a single goal of making our customers' lives better. Many times, there are ulterior motives driving people: wanting to look good for the boss, strengthening one's resume, etc. My goal was to make the customers, their needs, and how our product addressed those needs real for the team, and, in the process, all of the ulterior motives disappeared.

Self-Direction

My team didn't have a choice, as I was out of the office a great deal of the time. In many cases, teams can begin to wander aimlessly in those circumstances, but a great team will not have that problem. The key is the customer focus mentioned above. While there may be occasional disputes on what is best for the customer (especially when the customer isn't always in the work room as Agile methods would like), those disputes are healthy and usually result in a better product (provided decision making is accounted for as mentioned below).

Agile methods promote the self-direction of teams for many reasons, but in my opinion, the greatest benefit of this is teams become closer. People have to learn to interact and disagree with others amicably. People don't always worry about themselves and their needs. They begin propping others up, instead of mentally tearing them down. Work starts to happen as a team instead of a group of individuals.

Leadership

Leadership is important, but not in the way many people think. As the manager, I had leadership responsibilities. As technology specialists, some of the engineers had leadership responsibilities. The reality is that everybody had some leadership responsibilities.

Rather than a top-down leadership style, a great team will come together with everybody leading in various ways and at various times. More importantly, a great team will not have members who place themselves above other members. Every member realizes that every other member has strengths and weaknesses, and by bringing those together, the team and the product become better for it.

The true leadership in great teams is helping people overcome their shortcomings and increasing their capabilities.

Decision Making

Self-directing teams that are completely focused on the customer...there should be no need to have a final decision maker, right? Wrong. Occasionally, there will be disputes about the best way to accomplish something. If left unchecked, these disputes can cause wedges that will eventually break a team apart.

On features, the final decision is the job of the customer. That may be the actual customer, the product manager, or some other person who is THE designated customer proxy. Technology is a tougher issue, because people get religious about technology and customers are often unable to make those decisions.

It is critical that somebody be designated as the decision maker for those decisions. Typically this person is designated the "architect", but the title isn't important. What is important is that she is trusted by the other members of the team to make sound technical decisions and to do so with the customer's best interests in mind.

Team Size

Keep teams small. If you have a large group, break it into smaller ones. Agile methods encourage this for a reason. The smaller the groups, the easier it is for people to grow together and for teams to become stronger. If you have to break groups apart, pick representatives from each of the groups and let them come together in a group of their own. Great groups are rarely larger than eight or ten people.

There is obviously much more that could be said about how to build great teams. However, I strongly believe these five items are the most important predictors of success. Cultivate these, and your teams will become stronger; starve these, and your teams will become brittle.

To the team I worked with from around 2002 through 2006, thank you for giving me an amazing opportunity to see what a team can be.

Friday, August 3, 2007

Agile: More Cost Efficient

Now that we've talked about how we got into the bad habits of waterfall processes, let's switch gears and talk about why Agile methods should be embraced within the organization.

Software is expensive. It is a significant investment, and, as with any investment, there is an expectation of a return. In the IT world, the expectation for the return is based on reduced business costs, improved business function, or some other measurable benefit. In the product world, the expectation is that the return will, in some way, generate additional revenue. In any case, if the return is less than the return on a savings account, then the company is better off banking the money.

Am I arguing to not create software? Definitely not. However, I am arguing that thought needs to happen on how to maximize the return. I'll discuss three aspects of that: the costs of quality, the problems of over-engineering, and the pain of failure.

Quality

The cost of fixing an error in a software project increases non-linearly over time. These are real costs in terms of resolution time and schedule slippage. The worst case is an error found by a customer, which costs the most, affects new product development, and damages brand reputation.

Waterfall methods have quality scheduled in, but it is not core to the methodology. Worse, testing is back-loaded on the project, significantly increasing the cost of fixing a defect.

Agile methods, on the other hand, have a very strong quality focus and testing is front-loaded on the project. By front-loading testing, defects are easier to find and easier to fix, with significantly less cost.

Over-Engineering

Studies have shown that nearly half of features developed are never used. Another fifth of the features are seldom used. Each of these features increases the cost and risk of the product without providing any additional value.

Waterfall methods try to pull together all the requirements up front. Because of the challenge of prioritizing all of the requirements, I've often seen prioritization happening at the feature or use-case level. This means that if a use case has three scenarios, A, B, and C, and only A is high priority, all three scenarios will be implemented, even though B and C are not important.

Agile methods encourage building only what you know you need as you know you need it. Rather than implement A, B, and C, Agile methods encourage working with the customer at the time of implementation to determine that you only need to build A. B and C are not implemented, so the costs are not incurred from a development and testing perspective.

Failure

Failure encompasses two problems: outright project death, and projects that become significantly challenged through cost and schedule overruns, poor quality, or software that does not meet the customer's needs. As software complexity increases, the likelihood of failure also increases, reaching as high as three-fourths of projects failing.

As complexity has a significant correlation with failure, anything that can be done to reduce complexity also reduces the likelihood of failure. This is one of the great benefits of Agile methods.

I am reminded of the epigram (which is way overused), "How do you eat an elephant?" Answer, "One bite at a time." Agile methods help all aspects of the project to be carved up, from requirements through testing.

This covers the software cost issues at a very high level. In my next installment, I will talk about the suitability of a software project to a given need, problems waterfall methods introduce with that, and how Agile methods shine in solving that problem.

Monday, July 30, 2007

The Changing Product Landscape

It is no surprise that we are in the middle of a significant change in the way software and content are priced, developed, and distributed.
  • Google apps, Zoho, Hotmail, and others have set an expectation about the complexity of "free" software.
  • Salesforce.com, Omniture, and others are setting expectations about how complex software served as a service can be.
  • Yahoo!, Project Opus, Blogger, and many more just being conceived are setting expectations about how applications and, more importantly, their data ought to be interacted with.
For product managers of traditional software products, this is scary stuff. People want to be able to interact with their applications wherever they go and expect to be able to interact with the data in many ways, perhaps not even "in" the original application. And expectations are being set that this ought to be free or a standard piece of the software.

For new products, this cannot be overlooked. Product managers need to embrace this and understand how it affects pricing, features, etc. Software architects need to know the technologies. Developers need to be designing for it.

New products that don't have an appropriate answer to these concepts will fail. Note that an appropriate answer might be to not have those features. But if you don't think about it before your customer does and explain why you don't, your customers will just think it is an oversight.

SaaS, Web2.0, and Mashups are changing things in a huge way. Are you innovating or are you looking in your rearview mirror?

Sunday, July 29, 2007

Agile: Why It Scares Management (Part 3)

This is the third installment in my series on Agile development. This will also be the final article on why Agile methods scare management. In previous installments, I discussed the impact of process-driven and command-and-control corporate cultures on development. This final installment will discuss funding, messaging, and sales. In short, it is about the dollars (or euros, pounds, yen, or whatever your currency is).

In order to get a project funded, there has to be a reasonable assurance that it is going to make money. Answers to questions like how many engineers should be assigned, how much effort should be spent on marketing, etc., are determined by taking the feature set and doing research to understand how much people are willing to pay for that set of features. If a development organization turns around and says, "Well, we don't really know what features will be in," to the financiers of a project, that is like saying, "Well, we don't really know how much this project will be worth." Any investor wants to feel comfortable about the investment she is making and the return it will bring.

Once the project is funded and underway, it is time for marketing to go to work because salespeople will not be able to sell a product that customers don't know anything about. Innovation is a current buzz word in the marketing world, leading to what Doug Hall would call a Meaningful Difference. The fear of marketers is that, in an Agile world, they don't know the Meaningful Difference because nothing is "set in stone." The only thing worse than having no Meaningful Difference is to talk about one and then have it not show up.

Continuing downstream from marketing are the salespeople. Studies have shown that salespeople are most effective when they are dependable and honest [1]. Having a moving target to sell against makes that dependability and honesty difficult to cultivate and keep. Instead, salespeople wait to sell the new product, even though it would otherwise be advantageous to do so.

In these first three articles, I've discussed the very real fears and concerns about Agile development. People who are familiar with Agile development will know that many of these fears are unfounded. In the next several articles, I will discuss how these fears are unfounded and how, in reality, the things that are most feared can become Agile's greatest assets.

1. "Jump Start Your Marketing Brain", Doug Hall, pg 198

Friday, July 27, 2007

On Innovation

consultaglobal has posted on innovation and its relationship to solving existing problems. The four-box diagram and related description is a keeper, really distinguishing the innovators from the followers (those looking forward versus those looking in the rear-view mirror, in the post's metaphor).

Across the board, software development is about innovation. Without innovation, you are not differentiating yourself in the market. One of the product manager's responsibilities is to understand the market well enough to know what is innovative, but it isn't just the PM's responsibility. The PM needs to share that vision with the rest of the team, from the software developers and testers through the sales and services organizations, to ensure that the innovative vision is maintained.

Everybody in the development cycle needs to feel like they understand what is innovative about their product. If you don't understand it, go find out. Talk to your boss, to the product manager, to the CTO. Talk to whomever you need to, but find out. It makes your job much more fun and far more fulfilling.

Friday, July 20, 2007

Blogging and Comments

Joel Spolsky posted that the comments sections on blogs are pretty much useless, given the amount of inane anonymous posting that can occur.

I disagree. I don't see blogging as a way for me to pontificate without any thought to others' views. It is others' views that make life interesting. I blog and I read blogs as a learning process. My blog has my thoughts and opinions. They can be wrong. If they are, I want an opportunity to reshape them.

I like Joel's blog. It is often very interesting. At times, though, he takes himself and what he has to say a bit too seriously.

But there is something to be said regarding getting the <75 IQ crowd to go find something else to do in their mother's basement.

Thursday, July 19, 2007

Agile: Why It Scares Management (Part 2)

This is the second installment in my series on Agile development. In the first installment, I discussed some of the history around process expectations. This time I'll discuss some of the corporate cultures that Agile development upsets.

Corporate structures are command-and-control based. Every manager gets asked "what is happening, when will it be done, how much will it cost," so every manager expects to be able to ask those questions and get definitive answers. If I am running a factory, I need to be able to answer questions about how many units can be produced, what load we are currently running at, etc. At the executive level, that expectation naturally flows into the software parts of the company.

In the 1970s, there was an effort to put more process behind software development than the "code it and fix it" non-process that was happening. Some organizations were having success with iterative methods, but, in one of those twists of fate, iterative development was going to take a back seat for a while. In the 1980s, the US DoD released a software procurement standard [1] that was based on the waterfall model. This became the basis of procurement standards worldwide.

One of the key papers [2] written in the 1970s in support of waterfall development was by Winston Royce. Ironically, the paper very clearly states that, in all but the most simple, straightforward systems, there is a need to plan on doing the development more than once before releasing.

So we have a management structure that expects top-down answers and a historical predisposition towards trying to manage the software lifecycle in a waterfall fashion. Much in the same way that "alcohol may intensify the effects of this drug", putting the two together causes greater mayhem. Many of the managers at decision-making levels in the marketplace today are the same engineers who were taught the values of waterfall back then.

In other words, we have a layer of management that was instilled with the values of the waterfall method. They were told that they could answer the questions their managers were asking them. And they were assured that their products would be successful. Now, being on the other side, they want those same answers and that same success. They want to go with what they know, even when it has been proven unsuccessful and downright risky.

I've discussed a little about how cultural history, corporate structures, and individuals' histories have made the adoption of Agile methods difficult. In the last installment of "Why It Scares Management", I will discuss the financial aspects that scare management. Following that, we'll begin talking about the benefits and how those can and should reassure everybody involved with the development process.

1. DOD-STD-2167, United States Department of Defense
2. "Managing the Development of Large Software Systems", Winston Royce, Proceedings of the IEEE Westcon, 1970

The Millennial Generation, the Internet, and My Kids

I have always been fascinated by computers. By the time I was twelve, there were two computers in my house: an amazing RadioShack TRS-80 Color Computer (with a 16K memory module) and a blazing Atari 600XL. The Atari was actually mine; the TRS-80 I shared with my brother. Computers have been a part of my blood and my life for nearly as long as I can remember.

Even today, I've got a Linux MythTV media server, a Hauppauge MediaMVP acting as a MythTV client, a fully wired and wireless networked house, and so on and so forth. Even my wife, who hadn't done much beyond word processing on a computer before we met, is now an avid computer user.

It is, therefore, somewhat disconcerting to me that my kids aren't as fascinated by them as I am. I certainly don't expect them to follow in my footsteps (even though I would love to teach them how to code!), but beyond a few websites they use regularly and a little desire to e-mail, there isn't much interest. My oldest is 13 and doesn't know what a blog is.

So, this is where I become unsure. Computers are a significant part of any professional's life at this point, and for the Millennial Generation they will be even more so. I want to ensure that my children are not just prepared with knowledge and an understanding of the power of the Internet, but that they are honed by that knowledge. On the flip side, I also know the dangers that lurk there. I don't want to unwittingly push them into a jungle they aren't ready to deal with.

The goal is to train them to understand how to take advantage of the incredible power of the Internet without getting lost in the incredible wastelands that populate it as well.

So these are my questions: How much do I push my kids out into the vast jungle that is the Internet? How much do I worry about it? How much do I let them explore it organically?

Agile: Why It Scares Management (Part 1)

One hundred years ago, Henry Ford began the work that culminated in the assembly line. I don't think anybody would dispute the efficiencies gained by the assembly line. Around the same time, Frederick Taylor was studying workers and optimizing their actions using Scientific Management.

A hundred years of improving process to be able to better analyze costs, risks and delivery followed. Six Sigma is the latest incarnation of that. Any repeatable task can be analyzed and made better. As you make it better, you make it faster and more predictable. A Model T coming off the assembly line every five minutes.

For a hundred years, we have learned that if you analyze the process and define it further, you make the problem more predictable. And for the last 30 years, this is how software has been managed: analyze the process, define it more, and expect faster, more predictable development.

Process is just the first factor. In part two, I will continue to discuss the reasons why Agile software development scares management.

Wednesday, July 11, 2007

Open Source versus Commercial Software Quality

I was reading through questions asked on LinkedIn and came across this question:

Can we consider that for a given application one will find fewer bugs in open source than in a branded proprietary application? (Link)

A fascinating question with a lot of opinion-based answers. I don't think there are many software products out there that don't incorporate open source in some way, so it seems like an important question to understand.

OSS proponents often use the argument that the more eyes on a piece of code, the fewer bugs in that piece of code (usually stated as Linus' Law, "Given enough eyeballs, all bugs are shallow."). In fact, this is treated as a self-evident truth in that community.

Personally, I don't like "self-evident" truths. I also don't like subjective answers like "OSS programmers are coding what they love, so they make better code" or "OSS projects have many, many programmers and, according to Brooks' law, that's where bugs cluster." Give me real numbers, not theoretical possibilities.

The reality is that there is little hard data on the subject. A couple of items I've found:
  • The best data I could find comparing open source and closed source is in this paper published by Stamelos et al. from Aristotle University of Thessaloniki. It isn't a true side-by-side comparison, but rather a comparison of open source code, as measured by an analysis tool, against that tool's industry "standard" recommendations. The opinion: open source software is better than an OSS detractor might expect, but it still isn't up to the tool's industrial "standard". The conclusion: more data is still needed.
  • Michlmayr, Hunt, and Probert of the University of Cambridge discuss open source processes and quality management and their effect on OSS quality in their paper. The conclusion is that OSS presents some significant challenges around ensuring the quality of software produced. Unless an OSS project has a specific focus on quality and process, it likely will suffer in that regard. Once again, this is a small sample set and more data is needed.
My experience with software leads me to believe the results of the Michlmayr paper. It is difficult to get people to volunteer to focus on the corner cases and the boundary conditions; that isn't "fun." In addition, polishing a product takes a good amount of governance to make sure that fixes aren't introducing more risk than they remove. That also seems difficult to achieve in a volunteer army. Not impossible, just difficult.
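
Just to make "real numbers" concrete: the kind of comparison I'd like to see starts with something as mundane as defect density, that is, reported defects per thousand lines of code, measured the same way for both the open and closed code bases. Here is a minimal sketch of that arithmetic in Python; the project names and counts are made up purely for illustration, not taken from any of the papers above:

    # Hypothetical illustration: comparing projects by defect density
    # (reported defects per KLOC). All names and numbers are invented.

    def defect_density(defects, lines_of_code):
        """Return reported defects per thousand lines of code (KLOC)."""
        return defects / (lines_of_code / 1000.0)

    projects = {
        "hypothetical_oss_project": {"defects": 420, "loc": 310000},
        "hypothetical_closed_product": {"defects": 275, "loc": 180000},
    }

    for name, stats in sorted(projects.items()):
        density = defect_density(stats["defects"], stats["loc"])
        print("%s: %.2f defects per KLOC" % (name, density))

A single number like that doesn't settle the question by itself, of course, but it is the sort of measurement the studies above are reaching for, and the sort I'd like to see gathered more widely.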

The question of quality is but one factor in determining whether to use open source or closed source software. Other factors that need to be considered include support costs (in both money and time), opportunity costs (or, potentially, wins), licensing costs, and training costs. It is only when all of these items (and there are probably others, too) are weighed that you can begin to see the value of open source versus closed source to your project.

I would love to read more definitive research on this topic. If you know something I don't, let me know!

Tuesday, July 10, 2007

What is this all about, anyway?

Another blog. Yet another person, using the ubiquity and economy of Internet publishing to pontificate. Is it something interesting or more drivel? Today, I can't answer that; over the following weeks and years, I hope to do so. And I sure hope it isn't more drivel.

I am a software developer. I mean that in a larger sense than "coder", which is something I have done but which is only one component of what I mean by a software developer. I'm talking about all of the roles in the software development process, from gathering customer requirements to the UI design process, from architecting and implementing systems to managing the people doing these things. I have done each of these to some degree in my career. I am fascinated by all aspects of the software development process, from the high-level business objectives to the low-level determination of quality.

The goal of this blog is simple: get myself and others in the software business thinking and talking about better ways to develop software. How do we make software that is easier to use, higher in quality, and more robust financially? I am expecting this to be a learning process. I will share my thoughts and opinions, my research, and my experiences; you can share yours. Together, maybe we can learn something and make software a little bit better.

Beyond that goal, I also think that part of being a successful software developer is to have a life outside of the code. I'll undoubtedly have something to say about that, too.