Just do it
Last week I noted Clay Shirky’s comment that for internet innovation “the cost of trying is lower than the cost of analyzing.”
This echoed the “Probe and Learn” process advocated by Lynn and Morone in the 1990s. Many customer-engagement techniques discovered by studying high-tech firms from the 1970s through the 1990s, including “Probe and Learn,” von Hippel’s “user toolkits,” and Thomke’s “experimentation,” seem even more relevant and usable in the internet era.
This week’s internet example comes from a course at Stanford that assigned students to develop applications for Facebook.
B.J. Fogg, the instructor, noted “Many crummy trials beat deep thinking.”
http://www.slideshare.net/bjfogg/10-million-in-10-weeks-what-stanford-learned-building-facebook-apps
His key insights from observing the students include:
Speed & flexibility in launch & iterations
- Many crummy trials beat deep thinking
- Flexibility beats quality
- Deadly: Getting too attached to one app idea.
Trying is cheaper than analyzing! Crummy trials beat deep thinking!
Nike seems to have it right: Just Do It™
Linear (or parallel), analysis-heavy NPD processes don’t fit this model: they think about it; they don’t just do it.
Crummy trials and beta culture hit the mark. They are a big part of financial services innovation: get it out there and see what the customers like.
But there still should be some way to manage the process and make it more effective. That’s what we need.
Very interesting! This is a topic I have no experience with, so two main questions come to mind about how applicable the Stanford experiment might be to innovation in other services:
1. With Facebook Apps, there are essentially no switching costs – users can install and try out Apps easily, quickly, and often, and only stick with what works for them. Just as App developers are running “many crummy trials” to see what works and what doesn’t, so are users. Is it possible that the willingness and ability of users – as well as developers – to engage in this kind of services speed-dating is essential to making the “many crummy trials” strategy work? If so, might non-zero switching costs (learning how to use complicated new software or services, inputting user data, establishing connectivity with other services the user already uses, etc), reduce the usefulness of the “many crummy trials” approach? Could this limit the strategy’s application to fields like financial services?
2. I also wonder whether “many crummy trials” might have hidden costs for industry leaders with established brands. For the Stanford students, a failed App cost nothing more than the students’ labor and some minimal server hosting and related fees for as long as the App was up. For a company with an established brand, however, might the “many crummy trials” harm the reputation of a brand? Could this limit the technique’s utility to industry leaders?
Any thoughts on either of these issues from people with experience in NSD?
When possible, we would identify the users who really wanted the innovation, or stood to gain from it, and told them they were beta-testing. In reality they were alpha-testing something we thought might be good enough.
Any other approaches?
As of right now there are no major brands in the top 50 applications on Facebook. It’s likely they are afraid of hurting their brand by doing something innovative and new, but eventually, if they want to play the game, they’ll have to figure out the rules. It takes a completely different way of thinking about innovation, and most major brands haven’t learned to do that yet.
Regarding crummy trials hurting brand, it’s an interesting question. There is certainly that potential. Facebook is a big brand now; it has made mistakes and recovered from its screw-ups, at least with its main audience. I think people are learning not to expect perfect execution all the time.
BJ Fogg
Stanford Persuasive Technology Lab