Nowhere in that scenario do I see any of the following information:
- Does it work?
- Does it make things easier than the old way?
- Are the users happy with it?
Lately I’ve seen a lot of projects like this. The project is originally designed to make things better for people, but somewhere along the way the “better” part is forgotten in the rush to launch on time. The mission morphs into completing a task rather than making an improvement, and the original intent is lost as deadlines loom.
The result? The new system is up and running, but it takes three times as long to manipulate the data as the old, manual way did. Or no one knows how to use it. Or people are frustrated, but no one is listening. The project launched on time, so attention has turned elsewhere.
You’ve probably figured out that I’m currently on the user end of some implementations that have left me frustrated and fuming. Sheepishly, though, I have to admit that I’m not without fault myself. For example, I recently took on a big project to consolidate services with a single vendor. In the end we launched as advertised, but no one was happy. Thankfully, after a short period of assuming oh-they-just-don’t-like-change, some voices I trust rose out of the fray and convinced me to re-evaluate my solution. Admitting failure wasn’t easy, but I’m glad I did. It allowed me to find a solution that delivered even better results than I had expected from the original plan.
No project should be considered complete until it has been measured against the improvement it was designed to provide. No matter what the project, we should always ask the people who interact with it, not the people who designed or implemented it, for their feedback. If the end result isn’t better than the original state, we need to keep working on it until it is.