Friday, November 30, 2007

Parallelizing Haskell

OK, I admit I don't know much about Haskell. It is sitting on the front of my "list of things to learn about" to the point that I downloaded and installed GHC and started going through a tutorial. This is the reason why: Use those extra cores and beat C today! (Parallel Haskell redux):

So with an off-the-shelf Linux box, you can write simple (but parallel) Haskell that will outperform gcc's best efforts by a good margin -- today! Multicore programming just got a lot easier.


Now, to be clear: this is a very simplistic sample, but it does show the power of building code in a functional way.

Now, my question becomes, can Stackless Python beat it in cases where we aren't dealing with the global interpreter lock?
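As a point of comparison (a minimal sketch of my own, not Stackless itself, and with the chunking scheme purely illustrative), plain CPython can already sidestep the GIL for this kind of embarrassingly parallel sum by farming the work out to worker processes with the standard-library multiprocessing module:

```python
import multiprocessing as mp

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Split [0, n) into one contiguous chunk per worker process;
    # each process computes its piece free of the GIL.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with mp.Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    assert parallel_sum(1_000_000) == sum(range(1_000_000))
```

Whether that beats the Haskell version is an open question; process start-up and result marshalling cost real time that `par` annotations don't pay.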

Monday, November 19, 2007

Programming, Circa 2013

Something is going to change. I don't think I'm the only one who can feel it. We are seeing an impedance mismatch between our applications, the languages we use to write them, and the underlying hardware. The mismatch is slowing development, and customers are becoming less and less tolerant of that.

First, people want to decompose applications and use pieces of them. Look at mash-ups, web services, RSS, and other similar technologies. The application that starts at line one and ends at line N is long gone, but now we are seeing applications whose components are not always running together, neither in application space nor on the servers themselves.

Second, the programming languages have diverged in some ways. We see two options: "backend" and "frontend." The backend languages have not had any significant change in the last decade; Java and .NET are pretty much where they were ten years ago. A bigger change, perhaps, is in our use of JavaScript, but, unfortunately, these changes demand more componentization rather than providing it.

Finally, hardware is scaling horizontally more than vertically, meaning we are going to see more cores to improve performance rather than more cycles. In order for applications to scale, they are going to have to scale horizontally as well.

The trouble is, as it stands today, componentizing applications and scaling them horizontally is hard. It is also more and more necessary. I think we are near the tipping point in an evolutionary change. The last evolutionary change was the application virtual machine (both the Java Virtual Machine and the CLR), which separated applications from the underlying hardware: input, output, memory management, etc. What it didn't do was remove from applications the onus of knowing where or when something was going to happen. That is what's coming.

The key to the next evolution is spatial and temporal isolation. This is possible today, just as building a highly-available multi-threaded application was possible 20 years ago using low-level languages. The difference is that it will now be virtualized in some fashion.

The paradigm change will push developers into writing more independent blocks of code that are known to be independent, with a VM that handles scheduling those blocks and communicating between those blocks. Inter-task communication will be allowed only at the beginning and end of blocks.

The concept is illustrated well by Stackless Python. Each tasklet exists independently, with communication occurring at the beginning and end of the tasklet.

Once the application is properly broken into tasklets, the underlying compiler and VM will handle spatial and temporal translocation. Spatial translocation will allow a tasklet to run "anywhere", be it a separate processor or a separate system. Temporal translocation will allow tasklets to be run in near-arbitrary order and with arbitrary concurrency. Near-arbitrary order means the VM, as time goes on, will be able to predict which tasklets need to be run in which order. Arbitrary concurrency means that, assuming order is properly maintained so inputs match outputs, any tasklet can run concurrently with any other tasklet.
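The discipline of communicating only at the edges can be sketched in ordinary Python; here threads and queues stand in for tasklets and channels (this is an illustration of the idea, not how Stackless implements it):

```python
import queue
import threading

def tasklet(inbox, outbox, fn):
    """Receive one value at the start, compute independently,
    send one value at the end -- no shared state in between."""
    def run():
        value = inbox.get()   # communication at the beginning
        result = fn(value)    # isolated work: movable in space and time
        outbox.put(result)    # communication at the end
    t = threading.Thread(target=run)
    t.start()
    return t

# Two blocks chained by channels; a scheduler is free to place and
# order them however it likes, as long as the channel order holds.
a, b, c = queue.Queue(), queue.Queue(), queue.Queue()
t1 = tasklet(a, b, lambda x: x * x)
t2 = tasklet(b, c, lambda x: x + 1)
a.put(6)
t1.join(); t2.join()
answer = c.get()
print(answer)  # 37
```

Because each block touches nothing but its inbox and outbox, relocating it to another core or another machine only requires relocating the channels.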

I believe that, somewhere, somebody has already begun the solution. My goal now is to find it. In 1998, I started using Java because I could sense the evolution inherent in it; I'm expecting that, by 2010, I'll be using "the next thing" and it will be in broad use by 2013. I'll let you know when I find it.

Wednesday, November 14, 2007

Ropes, Pegs, and Ladders

I've figured it out. It took me a long time, but it finally happened. I'm not a peg: I'm a rope.

Huh? Hear me out. You see, most companies have ladders with holes for the rungs to fit into. Holes of different shapes and sizes: square for engineering, round for business, octagonal for management. If you work at a particularly good company, there are multiple ladders for each type of position, with differently-shaped holes that slightly different pegs can fit into. That keeps really good "individual contributors" (gak, I hate that phrase) from becoming really bad managers.

So, back to me (which is, after all, what this is all about...). I'm a rope. Don't get me wrong, I have my pegs: I'm a good software architect. I'm a competent manager. I'm pretty good in pre-sales environments. I'm good presenting to customers, executives, and others. I'm also good at product management.

Ouch, my arm hurts from patting myself on my back. Seriously, I'm not the best at any of those. I've had far better managers than I manage. I've worked next to better engineers than I. They fit those holes better than I do.

Every company with great ladders that I've worked for seems to have a few people who don't fit their holes perfectly. These people tend to become the go-to guys in a lot of different situations. They are good at working with customers, with other organizations within the company, and with engineers themselves. They are very technical, yet able to make the lay-person understand the techno-babble. They also have a strong understanding of the business behind software, leading to better technical decision making. Because of these skills, they get called on to perform a lot of different tasks.

Unfortunately, while every company I've worked for has these people, no company seems to want to recognize the need for them. Organizationally, these ropes are "out of it" because they don't fit perfectly in the engineering, product management, sales, consulting, support, or any other organization, yet they are incredibly valuable to all of them. They are the ropes that tie the ladders together.

So, even though it may not look as good to my future boss, whoever he or she may be, I proudly declare: I am a rope. I can accomplish aspects of all of the above at the same time, handle communicating to customers and to other organizations within your company, and, by so doing, make your product more successful.

I look forward to the day when companies realize that not all people in the organization belong on ladders and that crossing the organizational divide is not to be feared but to be embraced.

Until then, will you hand me that chisel over there? I've got to shape this peg...

Monday, November 12, 2007

Bucking The OS Establishment

A couple of weeks ago, I'd had enough. I was approaching Windows Tithe Day and decided to buck the conventional Top Ten Reasons not to Install Ubuntu. I feel so rebellious!

I've been a Linux fan since around 1993, when I bought my first PC (a Packard Bell 486/50) for school. I was in the Computer Science program at the University of Utah. I could not tolerate Windows 3.1 in that Un*x world, so I quickly became involved in the Linux community.

After downloading my 26 floppies over work's 56k connection, I went to town installing Slackware for the first time. Then I installed it again and again as I learned what I was doing. Over time, my confidence grew and I became an officer in the fledgling Salt Lake Linux Users Group. There, I found the opportunity to learn a lot about Linux (and get lots of free stuff from early Linux vendors; a BIG shoutout to O'Reilly for their major support).

So, nearly 15 years later, my Dell Latitude D820 is now running Ubuntu Gutsy Gibbon and, for the most part, it is loving it. Emboldened by my rebelliousness and my success, I've also installed Xubuntu on my wife's old Sony Vaio P-II 266. I'm also running a MythTV Fedora server.

My future plans include a server with two dual-core processors running XDMCP for multiple low-end laptop X terminals for each of my kids, separating my MythTV server and clients, getting proper filtering put in place for my kids, and, finally, converting my wife.

Now, if only Blizzard would come out with an official Linux client for World of Warcraft...


From a post by Joel Spolsky in Joel on Software:

It’s hard to get individual estimates exactly right. How do you account for interruptions, unpredictable bugs, status meetings, and the semiannual Windows Tithe Day when you have to reinstall everything from scratch on your main development box?

Saturday, November 10, 2007

Snake Wrangling as a Method of Teaching Young Programmers

I'm happy. My daughter wants to start being a computer geek, like me. She has shown more interest in computers than any of my other children and has now expressed interest in programming.

To that end, I've installed Xubuntu on an old Sony Pentium-II 233 laptop we had lying around. Her primary use is going to be authoring her novel using Open Office, but secondarily, we'll use it for teaching her Python.

I've chosen to teach her using Python because it has a good signal-to-noise ratio: very little coding overhead for every bit of coding payoff. It is very easy to do easy things, unlike Java, C#, and other languages of that nature that carry a lot of programmer overhead. Python is also more structured than languages like Perl, making the code easier to read.
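To illustrate that low overhead (a trivial example of my own, not from any curriculum), the following is a complete, runnable program; there is no class declaration, no main method, and no type ceremony to explain before the first run:

```python
# The whole program -- compare the ceremony Java needs for the same thing.
total = 0
for n in range(1, 11):
    total = total + n
print("1 + 2 + ... + 10 =", total)
```

Every line maps to an idea a beginner already has (a running total, counting, printing), which is exactly the signal-to-noise ratio I'm after.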

In support of this teaching, I've found a couple of interesting things. First is a great book targeted at teaching kids Python called Snake Wrangling for Kids by Jason R. Briggs. Kudos to Jason for this; I hope you get it published soon.

The other interesting thing I've found is Guido Van Robot. According to the website, "Guido van Robot, or GvR for short, is a programming language and free software application designed to introduce beginners to the fundamentals of programming." To be honest, it is a bit buggy, but interesting nonetheless.

An important point is that I'm teaching an early-teen. I don't think this would work for younger kids. Other interesting technologies include
  • Scratch, which would certainly appeal to younger programmers.

  • Storytelling Alice for early teens (targeted especially at girls).

  • Computer Science Unplugged is a neat program for teaching computer science without programming...and without computers.

  • Squeak is a Smalltalk-based system for teaching programming, math, and computer science concepts to younger children. This one has a lot of promise, and I will be introducing my younger children to it.

What have you found for teaching programming and computer science? This is close to my heart, because I am a geek and want my kids to be geeks, too.

I'm sure there are more sources out there, and as I run across them or as I get feedback from my daughter, I'll post. I'd love to hear your feedback as well.

Friday, November 9, 2007

Technical Debt in Agile Development

In a previous post, I talked about technical debt and how you can use that concept to understand the real costs of design and implementation trade-offs. Being a believer in Agile development, I wanted to explore what technical debt means in an Agile world.

I think one of the interesting questions here is whether it is possible to implement a feature while maintaining the principle that Simplicity is Essential and still manage to take on debt.

For instance, suppose I have to implement a logging system, the feature calls for logging to a database, and I start by logging to the file system as a prototype. There are three options I can now take with this feature, none of which I would call debt. My choices in making a change in scope are to 1) change the requirement so that file logging is OK, mark the feature complete, and add RDB logging as a separate feature in the backlog; 2) not call the feature complete and continue implementing the RDB features; or 3) pull the feature from the code altogether and put it back into the backlog. None of these introduce a debt in the code.
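A sketch of why none of these choices leaves debt behind: if the call sites depend on a tiny logging interface, the file-system prototype and the eventual RDB backend are interchangeable. All names here are hypothetical, invented for illustration.

```python
class FileLogger:
    """The prototype backend (a list stands in for a real file)."""
    def __init__(self):
        self.lines = []
    def log(self, message):
        self.lines.append(message)

class DatabaseLogger:
    """The required backend (a list stands in for a real RDB table)."""
    def __init__(self):
        self.rows = []
    def log(self, message):
        self.rows.append(("log", message))

def run_feature(logger):
    logger.log("feature ran")   # call sites never know which backend

file_backend = FileLogger()
run_feature(file_backend)
print(file_backend.lines)       # ['feature ran']
```

Swapping in `DatabaseLogger` later touches one construction site, not the feature code, which is what keeps option 1 a backlog item rather than a liability.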

Any new feature that gets partially implemented will fall in that category: either the requirements get refined to handle it, the requirements are completed, or the code is removed. Every other case that comes to mind is in the category of "I would like to do X because it feels right, but it is not the simplest thing." Do you agree?

Actually, I don't. Read on to find out why...


There are several kinds of debt that can build up in an Agile project. We'll start with the most basic and work our way up to the harder to quantify debt.


  • Poor Unit Tests

In agile development, this may be the worst possible debt you can accumulate. This is check-cashing debt: 35% interest plus weekly fees. Failure to put proper emphasis on unit tests will always come back with huge future costs. In test-first methodologies, there is never a time when it is appropriate to take on this debt. In test-soon methodologies, there may be one time when the cost of the debt won't be felt exponentially: at the end of a particular release. The caveat here is that your organization has to have the maturity to pay the debt immediately following the release, or the interest will rack up.

    In my opinion, the only time this would ever be acceptable is if the fate of the business/product rested on it. The cost of poor unit tests is far too high to risk in any but the most extreme circumstances.

  • Failure to Refactor

Now we get into some of the optional aspects of software development. It is possible to forgo refactoring on projects in favor of just pushing the code out. The challenge with this is that systems get brittle as pieces start connecting in ever more tenuous ways. "I'll just tack this on" is a familiar sentiment, especially when refactoring forces not just application code changes, but *gasp* unit test changes. A lot of work for one little feature, but, ironically, each time the decision is made not to refactor, the pain of refactoring grows.

    This is debt that can be easily tracked. Strong developers will know when a piece of code should be refactored. Any time it isn't refactored, it should be tracked.

  • Over-Engineering a Solution

The dot-com boom was a marvel of over-engineering. I worked for a dot-com that built, from scratch, every piece of infrastructure it needed to build a portal: a web templating engine, an object-relational mapper (built on a directory server, no less), a mail server, a COM-like framework in Java, and many others. In addition, all of these were built in some theoretical "perfect" mindset, such that they were slow, difficult to use, and didn't really meet the needs of the clients (other developers) who were trying to get an application built.

    On top of that pain, we had to maintain the code going forward. As things changed, we were forced to update code that seemed to only exist to make our own lives more difficult. We had a huge over-engineering debt that significantly impacted our ability to produce features for customers, which impacted our own business success.

That debt is harder to quantify, because no developer likes to believe that he is over-doing something. Worse yet, you pay for it at least twice: once to spend more time building it and again to maintain it. I believe this kind of debt should never be incurred, and the only way to prevent it is through continual code reviews...imagine that, another tenet of Agile programming coming to the fore. I have never experienced over-engineering coming back and paying for itself in the future: don't do it.

  • Over-Simplifying a Solution

    This is debt because an expectation has been made to a customer and the solution does not meet those real needs. This generally happens when there isn't a solid understanding of requirements, and it causes debt through customer service issues and other interruptions to the development plan. This is the hardest one to quantify, because in some cases you just don't know what you don't know, but it is also the most obvious debt from a customer perspective.

My experience has made this The Money Pit debt. It is debt you have no idea you are getting into, it always has to be paid immediately upon discovery, and it generally causes much pain and suffering. The only way to prevent this debt from occurring is close customer relations and regular feedback (does this sound familiar?).



What other forms of debt do you see as you do Agile development? How do you minimize that in your work? What is your favorite color? All comments welcome.

Thursday, November 8, 2007

Knowing the Costs of Development Choices

In most software companies, before any feature is developed, it has gone through a process whereby product management has determined that there is financial merit in developing a feature. This is usually done by looking at the incremental value the feature provides to the product divided by the estimated cost of development. Incremental value is typically determined by the increased sales the feature will generate or by the increased cost the market will pay for the product.

If we take it as a given that all of the assumptions in determining the numbers above are consistent, then, by using the above method, we can put two features side-by-side and compare the value of developing them by the return generated. In fact, this is often used as a base for feature sets, but other, real-business items like sales commitments, unforeseen maintenance issues, etc., also intrude, taking the ideal world and populating it with ugly necessities.
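In the ideal world, the side-by-side comparison is simple arithmetic. A hedged worked example (all numbers invented): rank two candidate features by incremental value divided by estimated development cost.

```python
# Hypothetical features with invented value and cost estimates.
features = {
    "feature_a": {"incremental_value": 120_000, "dev_cost": 40_000},
    "feature_b": {"incremental_value": 90_000,  "dev_cost": 20_000},
}

def roi(f):
    # Return per dollar of development spent.
    return f["incremental_value"] / f["dev_cost"]

ranked = sorted(features, key=lambda name: roi(features[name]), reverse=True)
print(ranked)  # feature_b's 4.5x return beats feature_a's 3.0x
```

The interesting part is what the model leaves out: the sales commitments and maintenance surprises mentioned above have no slot in this arithmetic, which is exactly why they distort it.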

I've been pondering for some time how to quantify the costs associated with design and implementation trade-offs. Just as two features can be put side-by-side, two implementation strategies can be put side-by-side, and objective decisions ought to be possible around those. Unfortunately, most of the decisions I've seen have come down, at the most basic level, to personal preference.

Furthermore, communicating to a non-technical audience the costs and financial risks associated with those trade-offs is nearly impossible. Because of those challenges, a breakdown occurs between development, which wants the "no risk" option, and business, which wants to properly evaluate the best ROI. The breakdown often turns into a rift as development wants "the right thing" and business wants "the cheap thing", with only weak objective communication between them. My technical side wants to do "the right thing", but I'm pragmatic enough to understand that sometimes trade-offs have to be made, so I try to blunder through that communication minefield, getting beat up on both sides.


Coming out of this pondering is a concept that has, apparently, been around for some time, but which is new to me. This is the notion of technical debt. Steve McConnell has a great write-up about it at his blog.

The short of it is that technical trade-offs can be measured and quantified in terms of "debt" akin to any other investment debt. Make a bunch of crappy code, you are increasing your "credit card" debt. Make a thoughtful decision to implement something using a major shortcut, you are increasing your "investment" debt.

And, just like real debt, technical debt comes with interest. The more debt you take on in your code, the more you pay to create new features and to maintain the code. Eventually, you can spend all of your money on interest payments, and provide no features.
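As a toy model of that compounding (the 10%-per-sprint drag is an invented number, not a measurement):

```python
def capacity_after(sprints, debt_drag=0.10, capacity=1.0):
    """Fraction of a team's effort left for new features after paying
    interest on untouched technical debt for `sprints` sprints."""
    for _ in range(sprints):
        capacity *= (1 - debt_drag)   # each sprint, debt taxes what remains
    return capacity

print(round(capacity_after(6), 3))  # 0.531: nearly half the effort gone
```

Six sprints of a modest, ignored drag and almost half the team's output is going to interest payments, which is the "all of your money on interest, no features" end state in miniature.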

The notion of technical debt and the effects of those debts are concepts that any business person can understand. Using a vocabulary around technical debt, better informed decisions can be made and, more importantly, a method of tracking that debt can be implemented so you can know the built-up cost of that financing.

Now that it is quantified and tracked, development and business can start making intelligent decisions about how much debt a project can handle. Fast moving, early-phase projects will likely need to rely on more debt; mature projects will probably want to limit that debt; near end-of-life projects can probably start retiring debt (a concept I wish existed in the real world!).

The important takeaways become:

  • Don't increase your debt through sloppiness. Debt is debt, and you have to pay for it.

  • Take on debt acknowledging the future costs and only take it on when you are willing to pay that future cost.

  • Track your debt so you know how much it is going to affect your project, both now and in the future.

  • Make sure you make payments on that debt, both when you are reaching your borrowing limit and at regular intervals. Failure to pay on the debt regularly will compound the interest, causing you to reach the borrowing limit sooner.



As always, I'd love to hear your feedback. Have you managed technical debt before? How did you manage it? Was the concept an effective tool for you?