Monday, September 15, 2008

Personas in Software Development

Personas are a very valuable tool in the software development process. Personas allow you to humanize your customer, giving you a more "personal" relationship with them. By humanizing the customer, you are more easily able to empathize. It isn't until we are empathizing with our users that we can start the process of building great software.

The most common way to create personas is to distill the market information from your existing customer/user pool. If you provide project management software, it is likely that you have a group of fairly similar project managers and a group of fairly similar task assignees. By looking at these two groups, you may find a few very similar groups: highly-technical PMs, non-technical PMs, occasional-use assignees, and frequent-use assignees, for instance. You now have the basis for creating four personas that will help you understand their needs and then guide your product to meet those needs.

Not every product has an existing set of customers/users, however. This is most often the case when building a brand new product. In my opinion, this is perhaps when personas become most useful. (Bad metaphor alert!) Building a new product is like creating a piece of art. (Ouch, that was painful!) The comparison, however, is valid. Just as in creating art you have infinite options (just start by deciding what you are creating: painting, drawing, sculpture, etc., then deal with the subject...), so too do you have boundless options in a product.

With a wide-open canvas, personas give you a way to make decisions and understand compromises. By knowing who I'm building for, I understand whether I need to spend more time simplifying my interface (for the novice) or providing more customization (for the expert). I also have a criterion for judging whether a feature is going to be successful: understanding the motivations that would cause people to use...or abandon...the product.

So a new product means that, in some ways, I'm making up my personas. My personas are my target market, which I know very little about. However, if we don't understand our target customers, there is no point in building the personas. Somewhat of a Catch-22.

We often ask the customer "how do you do X?" because we think "X" is pretty cool and would make a good feature. Stepping back, we need to understand why somebody would be doing "X" in the first place. What are our target customers' reasons for using our product: what is motivating them?

Motivation is the most useful piece of the persona. If we don't understand our customer's motivation, the persona will not help us build better software. Instead, one of two things will happen: we will either be led astray by building features to fulfill needs that don't exist or we will recognize the tool as useless and put it in the back of the toolbox.

Building out personas in this case is an iterative process. Start with the information you know: the group you are targeting your product at. Make some best guesses as to what is motivating them, then validate those guesses with actual potential customers. Circle back around. As with any iterative process, don't get stuck in an infinite loop! Be pragmatic and realize when you are approaching equilibrium.
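For the programmers in the audience, that guess-validate-revise loop might be sketched like this (the function names, the motivation strings, and the fake interview are all invented purely for illustration):

```python
# Loose sketch of the iterative persona-building loop described above.
# interview() stands in for talking to actual potential customers.

def refine_personas(guesses, interview, max_rounds=5):
    """Revise guessed motivations until feedback stops changing them."""
    personas = {name: set(motivations) for name, motivations in guesses.items()}
    for _ in range(max_rounds):            # pragmatic cap: no infinite loops
        feedback = interview(personas)     # validate with real potential customers
        changed = False
        for name, observed in feedback.items():
            if personas[name] != set(observed):
                personas[name] = set(observed)
                changed = True
        if not changed:                    # approaching equilibrium: stop
            break
    return personas

# Invented example: our initial guess for one persona is partly wrong.
def fake_interview(personas):
    return {"technical PM": {"see status at a glance", "automate reporting"}}

result = refine_personas(
    {"technical PM": {"see status at a glance", "pretty charts"}},
    fake_interview,
)
print(result)
```

The `max_rounds` cap is the code version of "be pragmatic": real validation has diminishing returns, so stop when the feedback stops moving things.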

There is a lot of information that can be put into personas. Taken to one extreme, you can follow the toolkit on George Olsen's blog. It is a fairly comprehensive toolkit that will give you a very detailed persona. Regardless of whether you decide to use all the information, it is worth reading to understand things you could think about.

The other extreme is some basic biographic and demographic information. Without enough information, you won't have the detail you need to empathize with the persona or, worse yet, to understand the motivations.

Here is a basic template of the information I use. Take it for what it is worth. The idea is to get enough information to make logical decisions without being overloaded with too much information.



Basic Information
  • Name:

  • Description:

  • Photo:

Demographics
  • Age:

  • Sex:

  • Occupation:

  • Location:

  • Marital Status:

  • Children:

  • Income:

  • Education:

Technographics
  • Computer:

  • Cell Phone:

  • PDA:

  • Other:

  • Primary Device:

  • Web:

  • Phone:

  • Applications:

Information Usage
  • Information Used:

  • Web Sites:

  • Applications:

  • Paper Usage:

  • Access/Day:

  • Locations/Day:

  • % Mobile:

  • Mobile Type:

  • Primary Connection Speed:

  • Mobile Connection Speed:

Psychographics
  • Social Network Role: [Connector/Spanner/Broker/Specialist]

  • Acceptance of Innovation:

  • Technology Attitudes:

  • Technology Religions:

  • Technical Proficiencies:

  • Hobbies:

Goals and Needs
  • Usage Goals:

  • Emotional Goals:

  • Motivations:

  • Needs:

  • Frustrations:

Proficiencies
  • Computer Proficiency: [Novice, Advanced Beginner, Intermediate, Expert]

  • Web Proficiency:

  • [Application] Proficiency: [fill in appropriate applications (e.g. salesforce.com)]

  • Social Network Proficiency:

Persona Details
  • Persona Type: [Focal/Secondary/Unimportant/Affected/Exclusionary/Stakeholders]

  • Business Relationship:

  • Persona Relationships:

Profile Narrative
Here are a few paragraphs of relevant background information about the persona.
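If you keep personas alongside your project artifacts, the template above could be captured as a structured record. Here's a minimal sketch in Python; the field subset, class name, and every example value are my own invention, not part of the template:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a subset of the persona template as a structured record.
# Field names mirror the template sections; all values below are invented.

@dataclass
class Persona:
    # Basic Information
    name: str
    description: str
    # Demographics
    age: int
    occupation: str
    # Goals and Needs -- motivation is the most useful piece of the persona
    motivations: list = field(default_factory=list)
    frustrations: list = field(default_factory=list)
    # Proficiencies: Novice / Advanced Beginner / Intermediate / Expert
    computer_proficiency: str = "Intermediate"
    # Persona Details: Focal / Secondary / Unimportant / Affected / Exclusionary
    persona_type: str = "Focal"
    # Profile Narrative: the story that makes the persona human
    narrative: str = ""

# One of the four project-management personas discussed earlier, roughly.
technical_pm = Persona(
    name="Tina",
    description="Highly-technical project manager at a mid-size software shop",
    age=38,
    occupation="Project Manager",
    motivations=["keep the team unblocked", "see status at a glance"],
    frustrations=["re-entering data the team already tracks elsewhere"],
    computer_proficiency="Expert",
    narrative="Tina came up through QA and still scripts her own reports...",
)

print(technical_pm.name, "->", technical_pm.persona_type)
```

The narrative field matters just as much as the structured ones; the record simply makes the details easy to look up.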




Most of the applications I've been involved with in the design process are web applications, so this tends to be skewed that direction. The Technographics and Information Usage sections may vary depending on the type of application you are building. Fill in with data appropriate for your needs, but remember to validate it!

Finally, there are a lot of bullet items there. People like stories; it is easier to relate to stories than databases. If you can put the relevant information into the narrative, do it! It makes it more human. Just make sure you do capture all of the information you need for your personas. If handheld device type is critical, put it in both places: make it easy to relate to and easy to look up.

Building personas is not easy, but, really, the other option is to put some answers you think might be correct on the dart board...and throw a blindfold on while you are at it. You've got to know who you are building your product for!

Tuesday, September 2, 2008

Wireframes Using Balsamiq Mockups

I think one of the most difficult periods during product development is the initial user interface design, specifically the stretch between napkin chicken-scratches and high-fidelity mockups. It is during this period that you are typically deciding on application workflow and beginning to expose that workflow to people outside the product team.

The problem is that napkin chicken-scratches don't give enough of a sense of the real use of the application. Conversely, high-fidelity mockups don't provide people with enough imagination latitude to think about other possibilities. My solution in the past has been to use Photoshop to do line-drawing mockups like this:

Photoshop Wireframe

They are good in that they enable people to "fill in the blanks" with their imagination, which we can capture with further iterations. The problem, though, is that they are still tedious and time-consuming to edit. For a long time, I've wanted a tool that let me do these line-drawing mockups much more easily.

Not too long ago, I stumbled across Balsamiq Mockups and quickly fell in love. It offers the "use your imagination" lines of a whiteboard with the reusability and easy editing of a document. Take a look:

Balsamiq Wireframe

The current feature set is reasonable. It gives a good editing environment with a nice set of controls. It has really hit about 90% of what I need.

Perhaps more relevant is that, in the month I've been using it, that number has gone from 80% to 90%. The author is very good about developing and deploying new features. Balsamiq is a one-man micro-ISV, but the level of support he gives puts any other software shop I've worked with to shame. Peldi gets a big "attaboy" for that!

So, what's missing? Really, only two things:
  1. First is importable images. Images are all placeholder boxes, which aren't quite sufficient for visual applications. This feature is currently in development.

  2. Second is application workflow. While I can create mockups of the ten screens in my app, I can't show how or why a user transitions from one to the next. I currently have to do that out-of-band from the application, which slows my work. Balsamiq has indicated that this will be addressed in a future version as well.

That's it. Sure, there may be some other controls that are missing, but every piece of software has that. Given the quick turn-around on many features, I don't think there is much to worry about there.

If you do much application UI brainstorming, definitely check out this tool!

Friday, May 30, 2008

Our Biggest Obstacle

I like reading Reg Braithwaite's blog. I think he is quite insightful and a talented writer. Recently, he posted on why we are the biggest obstacle to our own growth. Another blogger (err...Sam) posted a follow up positing that it isn't age but arrogance that is our greatest obstacle.

Now, I will be first to step forward when somebody calls for the arrogant people to come to the front of the room (if my friends aren't pushing me there, first ;). I have also passed that magic age of 35 where, for some reason, people seem to lose the ability to learn how to program the clock on their electronic appliances (Twelve o'clock flashers). So, by all accounts, I should be hopeless.

But I don't think I am (introspection is always a challenging task). As I look back upon my career, it has been marked by a steady amount of change. What I've realized is that when I'm comfortable in a job, I become extremely uncomfortable. That consistently pushes me on to new and different challenges, which require learning to achieve.

I would argue that it is neither age nor arrogance that is the obstacle. Those are co-symptoms of the real cause: complacency. Complacency comes when you've decided that you've paid your dues and can rest on your achievements, and that can happen due to age, arrogance, position, or any of a number of other things. The co-symptoms determine how you react to that complacency, but do not drive it to begin with.

Identifying whether you've become complacent or not is straightforward: if you are doing the same things in the same ways now as you were six months ago, and you find that comforting, then, in all likelihood, you've become complacent. Are you annoyed about changes in strategy or excited? Do new technologies make you groan or smile? Does a career change fill you with dread or excitement? Most people become very comfortable in their daily patterns, but that comfort, if not managed, will inevitably lead to complacency.

If you notice that you are becoming complacent, take action. Get out of your comfort zone. Learn a new technology (like Bungee Labs' :), take some classes, volunteer for new responsibilities at work, or find a new job. The point is that by stepping outside of your comfort zone, you break the cycle of complacency. Breaking that cycle removes our self-imposed obstacles and allows us to continue growing.

Wednesday, April 23, 2008

The Rise of Platform as a Service

I have some pretty exciting news. A couple of weeks ago, I wrote about moving beyond Assembly language web development, mentioning some of the new technologies playing in the Platform as a Service (PaaS) space. In particular, I wrote about Bungee Labs, who is doing some very exciting things to simplify development of highly-interactive web applications. Well, as it turns out, Bungee Labs and I both agreed that me working for them would be a good idea, and I am now employed by them.

First, the fine print: I am now an employee of Bungee Labs. However, this blog is mine, not Bungee Labs'. Therefore, anything I say on it is my own opinion and should never be taken as an official statement from Bungee Labs. Official statements from Bungee Labs can be found on our official Bungee Connect Developer Network blog.

The same day I received my employment offer at Bungee Labs, Google announced their App Engine beta. I have to admit, that set me back for a few minutes. Google is a huge player and not one that a small company can go head-to-head with and expect to win. I'm sure I'm not the only one who thought that, either.

However, the more I thought about it, the more I realized that this is actually a very good thing for Bungee Labs. How could the entry of the 800-pound gorilla into the PaaS space be a good thing for a small company like Bungee Labs? It is all about defining a market. As Eric Sink has pointed out many times, trying to go into a software market with no competition is incredibly challenging because you have to not only sell people that your solution is worth buying, but you also have to sell them that your solution solves a problem they really have. By having competition, you ensure that people already understand they have a problem, so you only have to worry about the first challenge (which is big enough as it is).

Google has now validated that Platform as a Service is a viable solution to the significant problem of building and managing the network infrastructure around an Internet business. Of course, Amazon's Infrastructure Services had already shown that. The thing that Google brings is the concept of building an application on top of somebody else's platform. That is what Bungee Labs has already been doing.

Obviously, if it were only about deploying an application on a grid managed by somebody else, Bungee Labs would be in trouble and I would still be at my last job. Google is taking care of letting the market know that PaaS is truly a solution to a real problem they have; now Bungee Labs can focus on showing that we are the platform provider that makes the most sense to use to solve that problem. That reduces the marketing overhead significantly and allows us to focus on technology to a greater extent.

In the near future, I'll give a new developer's perspective on the differences between Bungee Labs' and Google's development environments for their PaaS offerings. It is important to understand what you get and what you give up when choosing a PaaS provider.

After spending the last 18 months being several-to-many years behind the cutting-edge of web development, it is very nice to be back on the edge.

Monday, April 7, 2008

Technology, Education, and the Next Generation

I've always believed that kids are capable of way more than we give them credit for. Orson Scott Card explored this idea in Ender's Game. As I've watched my own kids grow, I've seen the depth of capability that hasn't been remotely tapped by current schooling.

When my son was four, he could name any dinosaur you pointed at, from the ever-present Tyrannosaurus Rex and Apatosaurus to the much more obscure Compsognathus and Troodon. His favorite was Deinonychus. His ability to soak up information on this topic was astounding to me. Lilly is astounding in her ability (and it is for real; my wife is good friends with her mom). And, if you are a software developer and want to be humbled, check out Dmitri Gaskin's Google Tech Talk. Dmitri is 12 years old and is a contributor to Drupal. I have no doubt in his ability to code JavaScript circles around me.

Robert Cringely explored the effects of technology and change on education in a series of recent posts. The gist is that what education means, and the whole process of achieving an education, is about to go through a major upheaval. Technology, change, and access to information are all combining to alter not just how we learn but what the end-result of learning is.

This is not news to anybody in a technology industry who is continually scrambling to keep up with the changes in their field. If I had left my education when I finished college in 1996, I don't know if I would even be employable today. I certainly wouldn't be doing anything interesting.

The thing that kids like Dmitri show is that the revolution in education is already taking place. The Internet has provided these kids with the access to information that allows them to reach into their own potential. Dmitri did not become competent enough to give a Google Tech Talk through his school learning. It was because he has a passion and the information is now available for him to follow that passion.

Kids like Dmitri are on the leading edge of this wave. A wave that will quickly show that knowing how to learn something will become the most valuable skill; the trained skill will become secondary. This, in my mind, is the most compelling aspect of Unschooling. When a child is interested in something, their potential is truly amazing. Unlocking that potential is something that our traditional educational institutions have not done very well. In my generation, it was very challenging for a kid to excel in spite of the system. Today, it is easy and becoming easier.

That doesn't mean that training will not be important. A doctor is still going to need a lot of training to become skilled at her profession. However, as the pace of change increases, the skill of being able to continue learning will differentiate the best doctors from the so-so doctors. Technology will help, but it will still require people to drive it. Of course, this is where Dmitri's, my son's, and Lilly's generation will have a serious advantage: knowing how to find information is something they will have grown up with.

For those of us who are guiding this generation, we need to be careful. Our own innate fears and prejudices about learning could serve as a stumbling block for those we are trying to help. Technology and information access have changed the educational landscape, and we need to understand and work within the new boundaries. For some of us, that will be easier than others. No matter what, though, it is going to change. And, just like learning a foreign language, if we aren't doing everything we can to become fluent, idiomatic speakers, we will be left lacking understanding.

Thursday, April 3, 2008

Beyond Assembly Language Web Development

Many years ago, as I was finishing my last year of college (OK, maybe it wasn't that long ago ;), I worked for a game company...err...entertainment software company. I was hired in conjunction with two other junior programmers and handed a shiny, new, smoking-fast Pentium 120MHz computer, the best computer in the company.

At this time, PC games were written for DOS. There was little to no standardization between hardware drivers, and getting little things like network game play working was a serious challenge. If you didn't have a SoundBlaster audio card, you likely wouldn't be hearing the game, either.

That same year, Microsoft released Windows 95 (OK, maybe it was that long ago...). Microsoft made a strategic decision that they wanted to be the best operating system to write games for, so they released the DirectDraw, DirectSound, and DirectPlay APIs to give developers a standard to write to, regardless of the underlying hardware.

With my spanking-new machine and the two other junior programmers, I was given the task of trying to port a DOS game (WWF Wrestlemania, if memory serves) to Windows and DirectX. The game had started life as an arcade game written completely in assembly (68000 processor, I believe). That assembly was converted to i386 assembly by a tool. Our job was to find the graphics hooks and replace them. In the end, we determined the combination of the game (too much assembly to digest) and DirectX (immaturity, primarily) was too much and decided against the port.

Two years later, I was working for a company that produced 3D models for just about every purpose imaginable, from car commercials to major movies. We partnered with Microsoft, who was interested in models for the fairly new Direct3D API. At that time, I got to play around with the latest DirectX as well as attend a few game development conferences. The difference those two years had made was astounding. PC games were now exclusively built on DirectX and most work was done in higher level languages.

Aside from a nice stroll down memory lane, what is the point of this story? The point is that between around 1995 and 1998, the game industry made a significant switch out of its "assembly" days and into its "higher level" days. By assembly, I don't mean just coding in op-codes, but also needing to worry about the bare metal, the hardware underneath, from graphics and sound cards to network protocols. As it moved up to its higher level days, game developers, for the most part, were able to worry more about making a great game and not so much about the particular behavior of a piece of silicon.

I've been doing web development of some form or another since 1994. I've watched HTML grow and mature, I've seen tables turn into divs through CSS, and watched JavaScript make that which was static dance. At any given point, to build a significant web application, I have to be proficient in multiple languages and technologies, including HTML, CSS, JavaScript, XML, JSON, HTTP, PHP, JSP, Java, Python, and others. I have to understand the Web at what I consider the "assembly" level.

Tools like Intel's Mash Maker, JackBe's JackBuilder, Dapper, and others are making it easier to pull together various content sources. That's a start, but it still leaves the majority of the heavy lifting to be done at the assembly code level.

Thankfully, that is changing. One of the most interesting technologies I've seen in the past few years is in beta now. In my opinion, it takes us up above the assembly level for the first time on the Web. Instead of worrying about how to connect things, various pieces of the web are all parts of your object model. Connectivity with them is part of the platform.

Bungee Labs is providing a Platform-as-a-Service (PaaS) offering that is quite amazing in its ability to be both a general purpose framework as well as be a core integration technology. In the process, they've successfully abstracted the developer from the assembly code of the Web, just like Microsoft did for game developers a decade ago.

PaaS is still a fairly new concept. Perhaps the most notable example is Salesforce.com's Force.com, which provides an extension platform for Salesforce.com. Bungee Labs' offering is significantly more interesting, not just for its thoroughness, but also for its generality. Bungee Labs' WideLens calendar application is a great example of the flexibility and strength of the platform.

Anyway, moving on from my sales pitch, the point is that we are at a very exciting time on the web. Cloud services, PaaS, and very smart people are taking us to a higher level of development on the web. That means we, as developers, will gain the benefits of being more productive, building more business value, and being able to build cooler apps.

Technology is cool!

Tuesday, March 18, 2008

Poisonous People at Work

If you haven't seen Ben Collins-Sussman and Brian Fitzpatrick's Google Talk on How Open Source Projects Survive Poisonous People, it is well worth a look. They do a great job describing the debilitating effects negative people can have on an open source project and how to survive those effects.

Unfortunately, poisonous people don't just live inside the Cat-5 and Fiber networks around the Internet. They also live around us every day. And, worst of all, perhaps, they live with us at work.

Work may be the worst place of all to have a poisonous person. To be sure, the qualities that make somebody poisonous for an Open Source project may not be the same as those that make a person poisonous in the workplace, but the effects of the poison are the same: people become disgruntled and leave, projects stumble and fail, milk sours before its due-date.

Maybe a future blog will cover how to survive working with a poisonous person. Today, however, I'm going to focus on identifying the poisonous person. What I'm really hoping for here is that poisonous people will self-identify and work to resolve their venomous ways. However, I'm not kidding myself; the traits that make poisonous people poisonous in the first place are likely to prevent that from happening.

So, I'm not covering how to survive a poisonous person and I don't think I'm going to change any poisonous people. What's the point? First and foremost, it is to help others, trapped by the venom around them, realize that they are not alone. Similar to the group strength of a Twelve Step Program, there is strength to be had just knowing you are not alone.

What are the traits of a poisonous person in the workplace? Poisonous people are not always pricks. I've worked with poisonous people who I enjoyed going out to lunch with, enjoyed getting stomped by at the Foosball table, etc. It is also true that not all pricks are poisonous. Some people are just unpleasant, but their unpleasantness does not cause the workplace organism to sicken.

In fact, I'd go a step further and say that any one characteristic mentioned here does not make a person poisonous, or, perhaps, the venom is just too weak to matter in a strong workplace organism. It isn't until a person pulls together several of these that they have the ability to sicken the workplace and those in it.

I've identified four adjectives that describe a poisonous person and ranked them from least- to most-venomous. I'm sure that there are more; feel free to add to my list in the comments.

  1. Maverick: Poisonous people tend to be fairly well isolated. Part of this is the effect of the poison, pushing people away. However, part of it is self-imposed. Poisonous people have their way of doing things, and the company can come around to them when it's ready to. In the meantime, the poisonous person will ensure everybody else knows they are wrong.

  2. Confrontational: Communicating with the person is never smooth. In fact, people cringe when they have to talk to him or her because they know it will be painful. People get to where they will either not talk to the poisonous person about anything they should be discussing, or they will stop caring and just let him have his way. The poisonous person considers both of these to be successful outcomes.

  3. Single-Minded: Invariably, the poisonous person will have a single, guiding purpose that is brought into every conversation, design meeting, defect report, etc. Everybody in the office will know what this person values, because every single decision needs to be heavily weighed around his value nexus. This can be anything: security, a belief in a flavor of object oriented design, the customer is always right, Emacs is the holy grail, etc.

  4. Arrogant: The final, and most venomous trait, is arrogance. It takes the preceding traits and amplifies them with an attitude of "I am right, you're wrong; I'm smart, you're dumb; etc."


Of course, just about every engineer fits into these buckets in some ways. I know that I do.

The real issue is the effects that poisonous people have on the companies and projects they work with. Most people, even with varying levels of these traits, are good to work with. On the flip-side, I've seen poisonous people nearly destroy a product release, which would have tanked the company. I've seen poisonous people cause a project to be significantly delayed in getting started. And I've seen poisonous people cause projects to string out indefinitely with their single-minded determination.

Putting my cards on the table, I have to admit that I'm hoping if you are poisonous, you'll use this opportunity to mend your ways and move to a more peaceful co-existence with your fellow employees. I didn't lie about thinking you won't change, but here's to hope. This is my easy, four-step program for detoxing yourself.

First, and foremost, be quiet. Stop trying to prove to everybody how smart you are and just be quiet. Listen. Hear other people. You are going for the "walk a mile" thing here; trying to break down your single-mindedness so you can see how things that are important to other people can and should be valued.

Second, start asking questions instead of talking at people. There is a proverb that states "We have two ears and one mouth, that we may listen the more and talk the less." Believe it and live it. You still should be focusing on hearing other people. Don't ask questions about things that are important to you. Ask questions about things that are important to them.

Third, give input, but let other people "win." You see, outside of the poisonous world, this is called compromise. You don't have to be right about everything to win. Give your input, let others give their input, then let the others have their way. This will drive you nuts, but it is a critical step towards removing the toxins.

Fourth, do it all over again, every time.

As the toxins leave, you'll see that discussions aren't about me versus you or about you winning and getting your way. Instead, they are about how we best accomplish things. Your single-mindedness, originally one of the poisonous traits, will become an asset as you are seen as an expert who understands not just your focus, but also the implications that focus has on other areas and how to deal with the real-world problems those implications bring on. The influence you were trying to force upon others will be requested and appreciated.

In closing, a reminder. Almost every engineer has some of those poisonous traits above. The advice I've given here applies to all of us. Do yourself and your co-workers a favor and try listening more and talking less.

I'll be back with some ideas of how to deal with poisonous people in the future. In the meantime, if you are working in a toxic environment, know that you are not the only one and that we commiserate with you!

Monday, February 25, 2008

Architect Versus Developer

Architect
We are adding these architectural requirements to the project. The purpose is to bring consistency between applications and enable us to realize better re-use across our code base.

Developer
Those "requirements" fail to add any value to the application from my customer's perspective. They introduce complexity and risk as an added "benefit."

Architect
The complexity is a balancing act. While it may introduce some complexity to your application, the goal is to reduce the total complexity from an organizational perspective. In other words, while we may add some here, we balance it out by reducing complexity in other places and by having consistency, which makes complexity more tolerable.

Developer
In the meantime, I'll never get my application done because I'll be building an infrastructure that doesn't give me an application. Customers don't buy architecture, they buy features.

Architect
But architecture enables features to be built. Done carefully, it provides for cost reduction and faster response in development. It also allows for better deployments, which means happier customers.

Developer
Go back to drawing your pictures and leave me alone to build my application.

Architect
It is that myopic attitude that codes us into a corner every time. Somebody has to look at the big picture.


I'm in an interesting dilemma. I'm the architect. I'm supposed to be thinking big picture. But I'm a developer at heart. I think pragmatically. This is leading me to ponder a lot.

The Assignment

Take a series of products (web and thick) that have been developed over time and distance, brought together now mostly by acquisitions, and meld them into a seamless whole. In the process, open up the internals so customers can customize their workflows using our products. Oh, and do this without disrupting development.

Do this without disrupting development? Architecture should never be a disruption to development; it should be the foundation. As such, if it is being seen as a disruption, something is out of alignment. Until your environment is realigned, you will struggle. From my experience, these are the areas most likely to be out of alignment.

The Architecture: The architecture itself may be causing the misalignment. Most often, when the architecture is at fault, it is because it is too complex for the problem it is trying to solve. This tends to cause ripple effects through the organization as people fight back. Since this is the architect's primary domain, this should always be checked first. Be honest about it. As architects, we have to decompose complex problems, but it is possible to overdo it.

Far less often, but still possible, is an architecture that doesn't accomplish what it is setting out to do. If the architecture isn't complete, it will be difficult for people to catch the vision.

Developers: Developers and architects should not be constantly at odds, but sometimes it can feel that way. In my experience, the most challenging developers are the ones who want to know "Why?" It isn't that they shouldn't be allowed to ask, but rather that answering that question can be time consuming.

Answering "Why?" is a two-fold process. First, each aspect of the architecture should correspond to a real business driver. Second, there should be management agreement with those drivers. Ideally, all developers will be satisfied with the business drivers, but at some point, you may be forced to say "because it's the way we are doing it" and without the management agreement, you won't get anywhere.

Management: Management can throw things into confusion if they do not have a firm understanding of the answers to "Why?" There are problematic developers who will try to exploit management's lack of understanding. Management doesn't need to know the details, but they need to be sold well enough that they will back you up.

But management support isn't just about the problematic developer. It is also about the costs incurred while building the architecture. There is a cost associated with building an architecture, but it is a cost that should have positive business impact. Make sure you have management alignment.

I don't have things aligned yet. The fact that my assignment comes with the caveat "don't disrupt development" means that management isn't clear on the value of changing the architecture versus continuing down the siloed application path. Instead, there needs to be a clear understanding that we are trading off a certain set of features for the opportunities that a consistent architecture provides.

And, honestly, if I'm having a conversation with myself like that one above, then I need to work on "why's" myself. If you aren't aligned with yourself, well, then you have a real problem.

Wednesday, February 20, 2008

Language Shootout Followup

In my previous post on Our Language Shootout on choosing between Jython, JRuby, and Groovy, I made the following comment:
"This is especially important around the APIs, since it uses the Java APIs directly..."

Which received this question:
What caused you to come to that conclusion? I can certainly see it, but only if you continue to program Java but in Groovy instead. If that's the case, why take the performance hit just to write the same (type of) code? (You could just as easily "not learn" the other languages and program Java in their syntax.)

Rather than continuing down the comment trail, I wanted to respond to this specifically because it was a very important factor in our decision.

I firmly believe that for us to see the benefit of using a language like Groovy, we have to change the way we write code. It isn't just a matter of writing Java without types; and, if that is what it becomes, our experiment will fail. Regardless of whether we are using Python, Ruby, or Groovy, we better be writing idiomatic Python, Ruby, or Groovy code and not writing idiomatic Java code in said language.

Personally, this actually pushed me more towards Ruby, as I think it would force people to abandon their Java ways a little more forcefully. However, there are real business factors that need to be balanced. In my previous post, I talked about the current disruption versus the long-term gains. Organizationally, we have to be careful how much current disruption we take.

So that brings me back to my quote and the question raised. As I mentioned, every one of these language implementations can use the Java APIs. However, beyond the Java APIs, Jython and JRuby also include the Python and Ruby core libraries as well.

What that means is that, in Jython and JRuby, you now have multiple ways of skinning the same cat: one way is through the language core libraries and one is through the Java APIs. In general, that probably isn't a big deal, except the languages are optimized around their respective libraries. For me to write idiomatic Python, I really need to learn the Python core libraries. The same holds true for Ruby as well as Groovy.

But with Groovy, those core libraries are the Java APIs, which means that my developers don't have to learn a new API set in the process. That reduces the disruption in my current development process.
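
To make the "two ways of skinning the same cat" point concrete, here is an invented contrast written in plain Python (not Jython): the same string assembled with the language's core library versus the hand-rolled loop a Java developer might reach for first. Writing the second style in a dynamic language means paying the transition cost without collecting the benefit.

```python
words = ["build", "once", "run", "anywhere"]

# Idiomatic Python: the core library's str.join does the work in one call.
pythonic = " ".join(words)

# "Java in Python": hand-rolled accumulation, the way a
# java.lang.StringBuilder user might write it, ignoring the core library.
java_style = ""
for i, word in enumerate(words):
    if i > 0:
        java_style += " "
    java_style += word

assert pythonic == java_style == "build once run anywhere"
```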

Now, that said, I also think that the Java APIs will reduce the top-line efficiency that could be gained. So this becomes a financial decision that includes things like amortization and the time-value of money. Given our current needs, I can't amortize this investment over much more than six months. Add to that the fact that money today is more valuable than money in six months.
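
The amortization argument can be sketched with made-up numbers (none of these figures come from our actual analysis): a one-time transition cost now, repaid as monthly productivity gains that are discounted because money today is worth more than money later.

```python
discount_rate = 0.01   # assumed monthly discount rate (invented)
upfront_cost = 4.0     # person-months lost to the transition (invented)
monthly_gain = 1.0     # person-months gained each month afterwards (invented)

# Net present value over a six-month amortization horizon.
npv = -upfront_cost
for month in range(1, 7):
    npv += monthly_gain / (1 + discount_rate) ** month

# With these numbers the discounted gains outweigh the cost; shrink the
# horizon or the monthly gain and the decision flips.
```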

So that is why we felt that Groovy won that point for our company's needs at this particular time. If I were starting from scratch, I can't guarantee that it would be the same decision, as some of these Java-transition sticking points wouldn't have the same pull and top-line efficiency would instead be paramount.

Not that Groovy is a bad place to be. :)

Tuesday, February 19, 2008

Our Dynamic Language Shootout

As a fresh lead architect at a company coming late to the world of web based software, I've been given some significant challenges, which I'll be writing about over the coming months. We have some very interesting challenges as we take a 20-year-old architecture and move it into the 21st century. Our solutions currently leverage a combination of C, Java, Perl, and others.

One of the things that is clear to us is the need to quickly revamp our user interfaces. There has been a LOT of discussion about the benefits of dynamic languages and the ability to be more nimble when using them. As a result, we are making an investment in a dynamic language to help us in that regard. As we surveyed the landscape, we felt that our best options to look at were Ruby, Python, and Groovy. Scala is an interesting option, but we did not feel that we were ready for that significant of a switch.

For a variety of deployment reasons, we've decided that whatever we choose will be deployed on the JVM. As a result, this comparison is for the JVM versions of the languages, i.e. JRuby, Jython, and, of course, Groovy, which has no other deployment option. I want to also clarify that I have the most experience with Python and I really like the language. There is no doubt that the language influenced me in my evaluation, but I really tried to remain objective in spite of that.

As I did the evaluation, I tried to come up with a broad spectrum of evaluation criteria. Others at my company gave feedback on the important characteristics. In the end, these are the features that we felt were most important: the interaction between Java and the selected language, IDE support, the learning curve, existing web frameworks, and the existing community support for the JVM implementation of the language.

Java Interaction

Several factors make up this feature. The most obvious is how easy it is to call into Java. Since we have a large amount of code in Java, we need to be able to easily access it. Of course, all of the languages manage this without any problems.

The more interesting aspect is what happens the other way. All of the languages support compiling down to byte code, but how difficult is it to access code written in the language from Java? Also, since each of the languages is, in some way, a super-set of Java functionality, there needs to be a down-cast to the Java sub-set. What does that look like?

Groovy: Groovy was, without a doubt, the most straight-forward. Because Groovy supports applying types, overriding class methods is clean. Instantiating a Groovy class is the same as instantiating a Java class.

Jython: Jython is pretty similar to Groovy in its bi-directional support. It isn't quite as clean as the Groovy implementation as you are forced to use Docstrings to provide the additional type information that the class needs.

JRuby: Going from Java to JRuby is not trivial, even though JRuby compiles down to a class. The compiler seems to be primarily for faster JRuby-to-JRuby interaction.

Winner: Groovy

IDE Support

In the Java world, the IDE reigns supreme. As I sit here typing this blog in Emacs, I'm perfectly comfortable leaving the IDE behind. In reality, most of our Java engineers would not be. The flip-side is that, with dynamic languages, the needs of the IDE are less than they are with Java. Our organization has standardized on IntelliJ IDEA, so that colors this evaluation.

I did not spend a lot of time looking at language-specific IDEs. Since our developers are Java developers and will continue developing Java code, we'd prefer to have them be in one environment.

Groovy: IntelliJ has a really good Groovy plug-in. IntelliJ seems pretty committed to Groovy as well. Honestly, the support was good enough that I didn't look at the Eclipse support. That commercial-level support is comforting.

Jython: PyDev with its commercial extensions was pretty good, if a little buggy. As I said, though, IDEA is our chosen platform, so a switch would be disruptive.

JRuby: There is an Eclipse plug-in for JRuby, but it was pretty weak. The IntelliJ plug-in seemed to be better.

Winner: Groovy

Learning Curve

We recognize that bringing a new language into our environment will be disruptive. We know that, for some amount of time, productivity will be reduced, followed by a later increase. The variables are how long it takes to come back to current levels of productivity and how much of an increase in productivity we gain when the curve flattens out at the end.

In the end, this is all supposition and subjective blather. Take it for what it is worth, and remember we are talking about Java engineers here.

Groovy: As a super-set of Java, it has a very straight-forward learning curve from Java. This is especially important around the APIs, since it uses the Java APIs directly. I honestly don't know whether the top-line productivity is as high as Python and Ruby, but I don't have any evidence that it is not. My gut feel is that the Python and Ruby libraries are optimized more towards their languages and will give a higher top-line.

Jython: Python's pseudo-code syntax is a short hop from Java. While the Java APIs can be used, they aren't going to be as efficient as the native Python libraries. The biggest hurdle is the learning curve of those libraries.

JRuby: Given its closer functional ties, the learning curve for Ruby is the highest of the three. It also has the same issues around the Java and native libraries. I honestly think that, once the curve is passed, JRuby could offer the most productivity. I've been nothing but impressed by what I've read about Ruby in that regard.

Winner: Groovy

Existing Web Frameworks

To a greater or lesser degree, the entire Java web world is open to each of these languages. However, the thing that made Ruby so powerful was Rails. Similarly, compare Python alone versus Python with a mature framework like Django. Groovy followed Ruby's lead by adding Grails, based heavily on Rails. These frameworks leverage the strengths of these languages, and, in my opinion, that is a significant piece of what makes these languages great.

Groovy: Grails is based on Rails, with the "heavy lifting" underneath being done by Spring and Hibernate. I like the maturity of the underlying technologies. I think Grails is on its way, if it doesn't get usurped by the Java platform's desire to make everything unbearably complicated. Given Groovy's and Grails's heavy Java emphasis, that is a major concern of mine.

Jython: *sigh* is all I can say here. While CPython has some great options, Jython went nowhere for two years. The main cause of this is two-fold: Jython's current version is 2.2.1, whereas CPython is at 2.5, and many frameworks require compiled C code for performance. Jython is just now coming back from that hiatus, but there aren't many options available for it. It looks like Django will be available soon, which will give it a much-needed boost, but in the meantime, it is a pretty desolate sphere that pretty much requires you to use a native Java technology.

JRuby: With its direct port of Rails, JRuby seems to come out on top here. Rails is a great package with some great options. JRuby does suffer some of the same compiled C problems as Jython, but since Rails is really the only web framework for Ruby, all focus could go towards that. Python does not have a "one and only" framework in the same way.

Winner: JRuby

JVM Community Support

We have an existing install and knowledge base built around the JVM that we are keeping. The disruption of moving to another deployment platform would be outrageous in a real business environment. As such, we focused our look at community support on each language's JVM community. In the end, community support will make or break all of these languages.

Fortunately, regardless of choice, they all have some great communities. There are exciting things happening in each of them. Honestly, I wish I had more time so I could participate more deeply in the communities.

Groovy: As the JVM is the only target for Groovy, the entire Groovy community is the JVM community. This obviously has some significant advantages for people looking to deploy on the JVM. It also seems to be picking up a lot of mind-share as the de facto "Java Scripting Language", which is helping that community.

Jython: As I mentioned above, Jython went through a dry period for a few years. That seems to have ended and a lot of exciting things are happening. First, the Jython community is doing a significant upgrade to bring Jython to the 2.5 Python specification. Second, PyPy is doing some very exciting things with Python overall, including the ability to target the JVM, LLVM, C, and JavaScript back-ends in an optimal fashion.

JRuby: Sun made a nod to the JRuby community when it hired the core JRuby developers. There is a lot of effort being made to make JRuby a better deployment option than CRuby, and I honestly think it has some great possibilities.

Winner: Groovy (by a nose, and only because of the number of people using it)

Conclusion

I don't think it should surprise you at this point that we chose Groovy. Even being openly biased towards Python first and Ruby second (hey, it's cooler :), I could not, in good conscience, choose either of them for melding into our existing environment.

If I were starting from scratch on a project, my choice would be very different. If I wanted to target the JVM, I would choose JRuby (at least until Jython 2.5 and Django are available); if I wasn't targeting the JVM, it would be Python for me, but I'd be equally comfortable choosing Ruby.

Regardless, it is going to be exciting to breathe some new life into some stilted development practices. I am confident that we will be very successful with this. In a later post, I'll discuss some of the ways we are going to be using Groovy and how we will decide between Groovy and Java for the development of function points. I will also come back and discuss whether we got the benefit we hoped for, but it will be some time before that can be determined.

Saturday, February 16, 2008

An Arc-Tangent

There has been a lot of talk going on about Arc, Paul Graham's LISP derivative. I've been watching this discussion with some interest, not because I'm a LISPer, but because it is bringing up some interesting questions about programming languages and software development.

First, I want to be clear that this is not a critique of Arc. As I said, I'm not a LISPer. I would not be qualified to say much about the language itself. I have a great amount of respect for what Paul is trying to do: he is trying to make an appreciable difference in how software is built. Whether I agree with him or not does not matter in that context. He is putting his sweat into his beliefs. In that context, even if I totally disagreed with everything he said and did, he has still done it.

What is driving this post is the question of what makes a language "best." For instance, the primary tenet driving Paul's development is code size:
...making programs short is what high level languages are for. It may not be 100% accurate to say the power of a programming language is in inverse proportion to the length of programs written in it, but it's damned close.

I wholeheartedly agree that brevity is an important aspect of high level programming languages. I've switched from Java to Python in the majority of my work for exactly that reason. However, I do not think that the primary aspect of importance is brevity. In fact, I would go so far as to say that there is no primary driver of a good language.

So what would my ideal language look like? Like I said, brevity is important, but just as important is self-documentation. With those two items, you have a very good start to a language. However, those are not the only important aspects, in my mind. Added to brevity and self-documentation are clear flow control, a strong set of built-in libraries, and an emphasis on simplicity. In the end, what I'm looking for is an efficient language.

There is no doubt there is a relationship between code size, program grok-ability, and development speed. Code that is too long requires too many page faults to understand. Development speed is similarly related, where longer code has more constructs that need to be developed, tested, and debugged. Quite literally, the more you can do with a single line of code, the less opportunity you have to introduce a problem.

But can that go too far? For example, can you say that code that is too short slows down development? Can you put too much on a single line of code? At some point, I think the answer is 'yes'. At some point, code reaches a sufficiently dense size that it requires multiple mental translation phases to expand to a reasonable vocabulary, and therefore reasonable understanding. The litmus test there is "Do I think I need to comment this code to understand WHAT it is doing?" (as opposed to WHY it is doing it, which may be valid in any case). This is very hard to measure because the line of "too short" is going to vary significantly by developer and experience with a particular language, but I do firmly believe it is there. Furthermore, for code that somebody else will have to read (including yourself in the future), if you pass that point, you are going to pay a penalty in the future.
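
As a concrete (and invented) Python illustration of that litmus test: the dense one-liner below is correct, but it takes a mental expansion pass to read, while the expanded form answers WHAT it is doing without any comment at all.

```python
data = [3, -1, 4, -1, 5, -9, 2, 6]

# Dense: everything collapsed into one expression.
dense = sorted({x for x in data if x > 0})[:3]

# Expanded: each named step documents itself.
positive_values = {x for x in data if x > 0}
smallest_three = sorted(positive_values)[:3]

assert dense == smallest_three == [2, 3, 4]
```

Where the line between these two sits is, as noted above, a function of the reader's fluency in the language.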

So, regarding brevity, the ideal code length is the point where grok-ability and development speed curves have the most area under them. Java, in my opinion, is high on grok-ability but very low on development speed. Perl is low on grok-ability but high on development speed. Python has come in just right, for me. My short experience with Ruby and Groovy seem to put them in that category as well. Your mileage may vary.

Grok-ability brings up an important point, which is self-documentation: to what level does a language encourage it? If a language requires constant commenting or some other form of context switch to understand it, then it is probably not a highly efficient language. This, of course, is also impacted by one's knowledge of said language: the more you know the language, the easier it is to understand, and the more self-documenting it is. This is most important in the context of interacting with others' code.

An interesting challenge with self-documentation is understanding all of the ways code flows through the system. Branches, loops, goto's, breaks, labeled breaks, exceptions, come from's, alter, signals, continuations, function calls and undoubtedly others that I've never heard of, all increase the difficulty in understanding what the code does. Obviously branching and loops are necessary, but at some point, the complexity of it all might just overwhelm. As an example, a great many people consider exceptions to be evil. I'm not one of those, but a language ought to understand the implications of its flow control on the people who are both writing and reading the code.

If brevity were everything that mattered, APL would have a much larger mind-share than it does. Included above are some of the things I think are important, but they are not all of them. As you go through your process of choosing a language, make sure that you understand your needs. And, if Arc is the right one for you, happy trails to you.

Thursday, January 24, 2008

On Computer Science Degrees

I graduated in 1996 from a fairly prestigious computer science program. It took me 5 and a half years to get the degree, which only put me about nine months behind the average CS student (and that was having a wife and two kids by the time I graduated). Through most of my degree, I was also punishing myself extra by working full time (to support said wife and kids). There were days that I wanted to call professor:sleep(infinity). on my professors, not the least of which because I knew I'd never be using this stuff in real life. Now, nearly 12 years later, I can honestly say that I was SO wrong.

It is now interesting to look around at the debate that seems to be raging about whether Computer Science degrees are useful, whether a more vocational approach should be favored, and whether anything learned in academia is valuable to those of us who build software instead of write papers for a living. To get a feel for the argument, you can see What Good is a CS Degree, No Disrespect, and The Perils of Java Schools. The choice of blogs here is intentional. I admit to my bias: Computer Science degrees are essential, and, furthermore, as Joel says, they have to be hard. Unless your desire is to be a technician.

Now, the caveat in there is that if all you have is a CS degree, with nothing else on your resume, you had better have one heck of an interesting project that you worked on in school. I've been involved in hiring through most of my 14 years of real-world experience and I've found two things to be generally true of people with a good CS degree:

People with a Good CS Degree Can Solve Problems

Computer science is about abstraction, in many ways. In fact, if you look at any given running system, you will see many layers of abstraction: From the bottom up, the OS abstracts the hardware and the run-time abstracts the OS. Coming from the other end, you generally have requirements that abstract a business need and a DSL (or an API...same thing) that abstracts pieces of those requirements. Sitting in the middle is the application: the domain of the programmer. And one thing that computer science programs are good at doing is teaching people to think abstractly.

Unfortunately, this is not the end of the discussion. If it were, there would be no debate. That leads us into my second finding:

People with a Good CS Degree Cannot Always Code

I remember interviewing a recent graduate from a reasonable computer science program. He had recently been awarded his Master's degree. He was a good thinker, but it became increasingly clear in the interview that all he could do was think. He didn't have a shred of code in him.

Coming out of a computer science program expecting to have all of the bases covered is not realistic. The CS degree gives a very solid foundation, but it is still only a foundation. Unfortunately, because of experiences like mine above, I won't take a CS degree as evidence alone that you will be able to learn to develop software. I want to see that you have really coded, either in a part-time job or internship, in a real project at school, or a personal or open source project. I'm not looking for perfect code (I still haven't found my own perfect code); I'm just looking for proof that the rubber has met the road somewhere.

My final observation about this:

Sometime, Somewhere, You Will Need It

I can't count how many times I said, "I'll never use this." Like I'm ever going to write a compiler. Sure, I'm going to write an operating system. The only people who will use probabilities are the people teaching them. Whine, whine, whine. I freely admit it. I also regret it.

You see, I did write that compiler - it was a DSL for doing SQL-like queries against a content management system.

Oh, and I did write that operating system - it was a scheduler for a game.

And, yeah, about that probability class - I'm now learning all about searching and relevance ranking.
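
That compiler was closer in spirit to this toy sketch than to a textbook parser-generator exercise (the grammar and field names here are invented for illustration): translate a tiny SQL-like query DSL into something executable against the content store.

```python
import re

def compile_query(dsl):
    """Compile a toy "FIND <field> WHERE <field> = '<value>'" query
    into a function that runs it against a list of dicts."""
    match = re.match(r"FIND (\w+) WHERE (\w+) = '([^']*)'$", dsl)
    if match is None:
        raise ValueError("unparsable query: %r" % (dsl,))
    selected, field, value = match.groups()

    def run(records):
        # The "compiled" form: a closure over the parsed query parts.
        return [r[selected] for r in records if r.get(field) == value]

    return run

documents = [
    {"title": "Intro", "author": "kim"},
    {"title": "Advanced", "author": "lee"},
]
find_titles = compile_query("FIND title WHERE author = 'lee'")
assert find_titles(documents) == ["Advanced"]
```

The real thing had a proper grammar and an optimizer, but the shape - lexing, parsing, and emitting something runnable - came straight out of that compilers class.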

Sure, I never had to go "all the way", but the classes didn't teach that anyway. The classes laid a foundation. They showed me the toolkit I have available. And, to date, while the interfaces to the tools may have changed somewhat (rubber instead of plastic handles), the toolkit hasn't really evolved that much in the last 10 years. I'm not seeing anything to indicate that it will change much in the next ten.

Fortunately, so far, I'm doing OK (I think) even given the things I never thought I'd use. With hindsight now, I wonder how much better off I'd be if I had embraced it and said, "I wonder how I'll be able to use this," instead of complaining that I never would. Remember, programming is an abstraction. Among all the other abstractions above, it is also an abstraction between what needs to be done and the best way to get it done, and that is what the Computer Science degree is all about.

Wednesday, January 23, 2008

My Twitter Experiment

The world has been raving about Twitter for some time now. I like to give fads a chance to settle in to see whether I can find value in them or not.

Twitter has reached its settle-point in my continuum. I'm now going to try it out for a while. Feel free to follow me: I am "SoftwareMaven".

My experiment here is to determine if I can find a deeper layer of the web, below the standard search tier, of interesting information, especially in fast-changing arenas. If, however, it turns out to be another method of providing up-to-the-minute gossip, then I will probably tune out.

If you are working on interesting things throughout your day, let me know. I'm interested in finding interesting people to follow myself.

Saturday, January 5, 2008

A new year. Always a good time to take a look around and see what needs updating. It was painfully obvious that my blog needed a face lift. So, I officially announce my newly refaced blog.

Aside from the template change, I've changed the name and URL of my blog as well. Originally, I had an idea for where I wanted to take my blog that made sense under the 'cmssphere' name; but, as I've looked at where my blog has really gone, I realized the name just didn't fit.

The Software Maven

Wikipedia defines a Maven as
...a trusted expert in a particular field, who seeks to pass his or her knowledge on to others.

Of course, only time will tell if I can gain your trust, but in the mean time, I'm certainly up for trying to pass on my knowledge to others on the one field I actually know something about: software.

As the name implies, software will continue to be the primary focus of this blog: how we get from an idea to dollars in the pocket.

Innerbrane.com

First, no, I didn't spell it wrong.

Innerbrane might be a pre-fledgling, not-quite-conceived, perhaps soon-going-to-be company. As my newly tooled "About the Software Maven" states, I've become engaged in developing a massively multiplayer online game. Innerbrane might, just might, become the vehicle for that. If not, it might become the vehicle for something else.

In String Theory, branes provide anchor points for certain kinds of strings to connect to. Lots of fascinating things come out of the math from them, but the one I like best is that the entire universe is a holographic view of strings connected to a brane that surrounds the universe (please remember, I am the software maven, not the physics maven; my pseudo-understanding comes from popular science reading :). I'm hoping that I can find some connate relationship in the gaming world.

Even more beautiful is that I can change the name and URL of the blog without affecting you at all. I hope you enjoy the new look!