Tuesday, December 15, 2009

India Needs Public Policy And Service Innovation And Not Web 2.0 Companies

India, the second most populous country with the fourth largest spending power, saw a surprising 7.9% year-over-year GDP growth, well above the expected 6.3%. The Indian stock market recovered much more quickly from the US financial meltdown. In fact, one of my friends who oversees India sales for a European earthmoving equipment company complained that the European manufacturers have not increased production to meet the increased demand in India driven by the faster economic recovery, especially in the infrastructure sector. When I switch on a news channel in India I hear all about incentivizing manufacturing companies to increase indigenous production that will fuel the growth of the industrial sector. I also see commercials ranging from baked potato chips to a service for transferring money using virtual currency on a mobile phone, all targeted at the fast-growing middle class. Marketers have no problem understanding India's rich and middle class and designing products for them.

What I don't see is entrepreneurs catering to the people at the bottom of the pyramid. Vivek Wadhwa's guest posts on TechCrunch stirred quite a controversy, especially the one on the "reverse brain drain" of Indians returning to India from the US. The New York Times recently carried a story on the pace of innovation in India. There are more angel investors in India than ever before. A few people from Infosys have started their own VC fund. This is all good, but I don't think the entrepreneurs are pursuing the right opportunities. I have written before about the opportunities to cater to the people at the bottom of the pyramid, and I will repeat that it would be a huge mistake to equate India's needs with those of the developed countries. India has a little over 300 million people below the poverty line (450 million by the international definition of living on less than $1.25 a day). Put simply, at least one-third of India's population has no guarantee of food on the table or access to affordable healthcare. India faces significant challenges in getting basic services right, educating its people, and providing them with healthcare. These people will do just fine without Web 2.0 companies.

Here is an example and an opportunity for the kind of innovation that I am referring to:

Electronic voting machine

India has been trying for many years to improve its voting process; in many parts of the country votes are regularly rigged through what is called "booth capturing". The former Chief Election Commissioner N. Gopalaswamy (also the father of a close friend) helped revolutionize the voting process with the introduction of electronic voting machines that include a small feature, a "12-second delay", that made all the difference. This delay prevents votes from being "stuffed" even if the machine is physically compromised. The machine also has an algorithm that recognizes the pattern of votes being cast every 12 seconds and can simply discard them if needed. This is a great example of technology being used to fight corrupt behavior during elections.
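A minimal sketch of how such rate limiting and pattern detection might work; the 12-second delay comes from the description above, but the data structures, thresholds, and flagging rule are my own assumptions for illustration, not the actual EVM firmware:

```python
from dataclasses import dataclass

MIN_INTERVAL_SECONDS = 12  # the delay described above


@dataclass
class Vote:
    timestamp: float  # seconds since polling opened
    candidate: int


def accept_votes(votes):
    """Enforce the 12-second delay and flag suspiciously regular voting.

    Hypothetical logic: reject any vote cast less than 12 seconds after the
    previous accepted vote, and flag long runs of votes arriving at almost
    exactly the minimum interval, which looks like mechanical stuffing.
    """
    accepted, flagged = [], []
    last_accepted = None
    regular_run = 0
    for vote in sorted(votes, key=lambda v: v.timestamp):
        if last_accepted is not None:
            gap = vote.timestamp - last_accepted
            if gap < MIN_INTERVAL_SECONDS:
                continue  # the machine refuses votes cast too quickly
            # votes arriving at ~exactly the minimum interval look machine-like
            regular_run = regular_run + 1 if gap < MIN_INTERVAL_SECONDS + 1 else 0
        if regular_run >= 5:  # assumed threshold for a suspicious run
            flagged.append(vote)
        else:
            accepted.append(vote)
        last_accepted = vote.timestamp
    return accepted, flagged


if __name__ == "__main__":
    # Nine votes cast exactly 12 seconds apart: the tail of the run gets flagged.
    votes = [Vote(timestamp=12.0 * i, candidate=3) for i in range(1, 10)]
    accepted, flagged = accept_votes(votes)
    print(len(accepted), "accepted,", len(flagged), "flagged as suspicious")
```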

Universal Healthcare Card

This is a huge opportunity. The Universal Healthcare Card is an attempt to insure 300 million people below the poverty line at a cost of about $1B, a small fraction of India's overall healthcare spending of $45B, which in turn is only 4% of GDP. This policy has administrative and operational challenges: fighting corruption and ensuring that people below the poverty line actually benefit from the plan. I see this as a socioeconomic problem that technology can help solve, providing accessible healthcare to the people who really need it without any pilferage.


What India needs most is entrepreneurs who can get involved in public policy and create service innovation to remove the fundamental roadblocks on India's way to becoming a developed nation. We need more people like Nandan Nilekani, who left Infosys to spearhead the national Unique Identification Project. He is an ultra-smart entrepreneur who understands the challenges associated with such a project, has a deep passion for public policy, and is fully committed to making things happen.

Are you an entrepreneur up for such a challenge?

Saturday, October 31, 2009

Google Does Not Have Innovator's Dilemma

I asked myself: "Why has Google been incredibly successful in defending and growing its core as well as introducing non-core disruptive innovations?" To answer my own question I ran Google's innovation strategy through Clayton Christensen's concepts and framework as described in his book "Seeing What's Next". Here is the analysis:

Google's latest disruptive innovation is the introduction of free GPS navigation on Android phones. This has grave implications for Garmin. To put this innovation in context, it is a "sword and shield" style entrant strategy to beat an incumbent by serving the "overshot customers". The overshot customers are the ones who would stop paying for further improvements in performance that historically had merited an attractive price premium. Google used its asymmetric skills and motivation - the Android OS, mapping data, and no direct revenue expectations - as a shield to enter the GPS market and serve these overshot customers. Google later turned its shield into a "sword" strategy by disintermediating the map providers and incentivizing the carriers with a revenue-share agreement.

On the other hand, Google's core search technology and GMail are a couple of examples of "incremental to radical" sustaining innovations where Google went after the "undershot customers". The undershot customers are the ones who consume a product but are frustrated with its limitations and are willing to switch if a better solution exists. Search engines and web-based email solutions existed before Google introduced its own. GMail delighted users who were frustrated with their limited email quotas, and the search engine used better indexing and relevance algorithms to improve the search experience. I find it remarkable that Google does not appear to be distracted by competitors such as Microsoft, which is targeting Google's core with Bing. Google has continued a slow and steady investment in its sustaining innovation to maintain the revenue stream from its core business. These investments include the next-generation search platform Caffeine, social search, profiles, GMail Labs, etc.

Where most companies inevitably fail, Google succeeded by spending (a lot of) money on lower-end disruptive innovations instead of "cramming" its sustaining innovations. Google even adopted this strategy internally to deal with the dilemma between its sustaining and disruptive innovations. One would think that the natural starting point for Google Wave would be the GMail team, but that's not what happened. In fact, my friends who work for Google tell me that the GMail team was shocked and surprised when they found out that some other team had built Google Wave. Adding wave-like functionality to email would have been cramming a sustaining innovation, but innovating outside of email has the potential to serve a variety of undershot and overshot customers in unexpected ways. This was indeed a clever strategy.

So, what's next?

If I were AT&T I would pay very close attention to Google's every single move. Let's just cover the obvious numbers. The number of smartphone units sold this year surpassed the number of laptops sold, and smartphone revenue is expected to surpass laptop revenue in 2012. Comcast grew its phone subscribers eight-fold, with the current number exceeding 7 million. Google Voice has over 1.4 million users, of which 570,000 use it seven days a week. Even though Google does not like its phone bill, Google seems committed to making Google Voice work. This could allow Google to serve a new class of overshot customers who have little or no need for a landline, want to stay always connected, and are hungry for realtime content and conversations. Time after time Google has shown that it can disintermediate players along its value chain. It happened to NavTeq and Tele Atlas, and it is happening to other players with Google PowerMeter and Chrome.

Many people argue that Chrome OS is more disruptive. I beg to differ. I believe that Chrome OS does not have a near-term disruption trajectory. Being wary of hindsight bias, I would go back to disruptive innovation theory and argue that Chrome OS is designed for undershot customers who are frustrated with the existing solutions in the market at the same level. For the vast majority of customers it does not matter. If Google does have a grand business plan around Chrome OS, it will certainly take a lot of time, resources, and money before they see any traction. I see the telco disruption happening much sooner since it serves the overshot customers. I won't be surprised if Google puts the final nail in the telcos' coffin and redefines telephony.

Wednesday, October 28, 2009

Branding On The Cloud Is Part Business Part Mindset

As the saying goes, "on the Internet, nobody knows you're a dog". Actually, people do. Recently AT&T asked its employees to fake grassroots opposition to net neutrality: employees were asked to use their personal email addresses to petition against it. The internal memo ended up on blogs and Twitter in minutes. Forcing your brand down your employees' throats is not a particularly smart idea.

Is your brand ready for the cloud? This is not a question that many companies ask until their brand gets caught in a cloud storm. The storm is customers, partners, and suppliers discussing your products and brand in public using social media, reporting problems using SaaS tools, and engaging in conversations in ways that you never anticipated. Recently Seth Godin announced an initiative to help companies launch their brands in public. It stirred quite a controversy and created confusion, and he had to pull back. Organizations are simply not ready. They are unclear on how to monitor, synthesize, and leverage the conversations that are happening on the cloud. The cloud enables people to come together to share and amplify their conversations.

Whether you are a SaaS ISV, a non-SaaS ISV, or not even a software company, what can you do as an organization to build your brand on the cloud? It is part business, part mindset:

Don't dread failures; instead, use them to amplify brand impact:

Recently an enterprise SaaS ISV, Workday, experienced an unplanned 15-hour outage. Not so surprisingly, customers responded well to the outage. SaaS essentially made the outage the vendor's problem. Unclear? Take an example from the analog world. Occasionally I have experienced power outages in my neighborhood (yes, even in supposedly modern Silicon Valley). The wider the outage, the faster it got resolved: the utility folks feverishly worked to fix the problem that impacted hundreds of subscribers. Coming back to Workday's outage, while Workday had all hands on deck to resolve it, the management team personally picked up the phone and started calling customers to reassure them that the outage would be resolved soon. They used social media extensively during and after the outage to be transparent about the overall situation. Now it gets even more interesting. They reached out to a key blogger, Michael Krigsman, who analyzes IT failures, briefed him on what happened, and extended an invitation to chat with the CEO. Michael Krigsman has a great post, 'A matter of Trust', covering this outage and his subsequent conversations.

Workday used its outage not only to underscore why people think they are better off with a SaaS vendor but also to strengthen its brand proposition among customers, analysts, and bloggers.

Build your brand by leveraging the SaaS delivery model to act in realtime:

If you are a SaaS vendor, ask yourself whether you are leveraging the SaaS delivery model to strengthen your brand in realtime. Jason Fried from 37 Signals was quite upset with Get Satisfaction when 37 Signals got labeled as "not yet committed to an open conversation". A couple of people from Get Satisfaction immediately responded, apologized, and within minutes changed the parts of the tool that caused the problem. Similarly, Twitter postponed its scheduled downtime to accommodate the protests against the outcome of the election in Iran. A former deputy national security advisor to George W. Bush, Mark Pfeifle, went so far as to suggest that Twitter and its founders should be considered for the Nobel Peace Prize for postponing the downtime.

Being able to demonstrate support for what you believe in has a significant positive impact on your brand. Don't underestimate the power of social media on the cloud; Twitter has changed the culture of Comcast.

Empower your employees to be your mavens:

As Malcolm Gladwell puts it, customers don't keep their soap wrappers so they can call the toll-free number and let the manufacturer know when they are unsatisfied. But if someone does call, you know you have discovered a maven whom you should serve at any cost. That person will start the word-of-mouth epidemic. Chances are that some of your employees are already having conversations on the cloud. Make them mavens of your brand. Get Satisfaction is an example of a great tool that a company can use to encourage its employees to get closer to the customers through alternate customer support channels. Glassdoor is another example: it not only works as a great salary benchmarking tool but also provides insights into an organization's culture. Primarily designed for external candidates, it can also be used by internal executives to objectively assess employee sentiment and help improve the external brand perception projected by employees. Focus on your employees and how they can better connect with customers and partners using the tools and open communication channels on the cloud.

I am not ignoring the negative aspects of the cloud being an open medium that isn't perfect. It never will be. As Bruce Schneier describes the commercial speech arms race - "Commercial speech is on the internet to stay; we can only hope that they don't pollute the social systems we use so badly that they're no longer useful."

I am optimistic. The cloud is a great platform for social participation that, if used wisely, could strengthen your brand.

Monday, September 28, 2009

Augmented Reality Will Change Enterprise Software For Real

Augmented Reality (AR) has seen a sudden buzz in the last few weeks. The announcements just keep coming: Layar announced a 3D API and Wikitude announced an AR API. VentureBeat recently ranked the emerging start-ups in augmented reality. AR is still a nascent domain with many quirks and twists, but it is for real and it is going to cause disruption in many dimensions. This is how I see it affecting enterprise software:

No interface will be the interface

Augmented reality uses the most natural interface, reality itself, and layers information on top of it, essentially eliminating the need for an artificial interface. Users will prefer an in-context experience at the locations where they perform their primary tasks over the unnatural, static experience of their current devices. I also see impact and potential for innovation in MVC frameworks. AR opens up many more opportunities for developers and designers, previously constrained by traditional technological barriers, to invent new UI frameworks that have higher affordance and map more closely to users' mental models than an unproductive artificial user interface does. Getting closer to the user's mental model is going to make the user experience a pleasure and the users more productive. Check out this Layar video:



Data will be the new design

With the growing popularity of AR, alternate data consumption, once considered a nice-to-have feature, will become a core requirement of enterprise software. Users are likely to access data with a variety of new clients in unanticipated ways. The widespread adoption of RSS feeds made the interaction and visual design of a blog less relevant than burning the feeds to deliver content in realtime. Similarly, realtime accessibility to a range of rich enterprise data is going to outweigh everything else. Users will create new environments and experiences. This emergent behavior is a golden opportunity for companies that have captured rich enterprise data but have struggled to make it accessible and useful to end users.
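As a rough illustration, here is a minimal sketch of what exposing enterprise data to an AR client could look like: a small HTTP endpoint that returns location-tagged records as JSON. The endpoint shape, field names, and sample data are my own assumptions, loosely inspired by location-based AR feeds such as Layar's, not an actual vendor API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical enterprise data: service orders tagged with coordinates.
SERVICE_ORDERS = [
    {"id": 4711, "title": "Replace pump seal", "lat": 37.4419, "lon": -122.1430},
    {"id": 4712, "title": "Inspect cooling tower", "lat": 37.4450, "lon": -122.1600},
]


class POIHandler(BaseHTTPRequestHandler):
    """Serves nearby points of interest as JSON for an AR client."""

    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        lat = float(query.get("lat", ["0"])[0])
        lon = float(query.get("lon", ["0"])[0])
        # Crude proximity filter; a real service would use proper geo queries.
        nearby = [o for o in SERVICE_ORDERS
                  if abs(o["lat"] - lat) < 0.05 and abs(o["lon"] - lon) < 0.05]
        body = json.dumps({"hotspots": nearby}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # An AR browser would call e.g. /?lat=37.44&lon=-122.15 as the user moves.
    HTTPServer(("", 8000), POIHandler).serve_forever()
```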

The SaaS, the cloud, and mobility will be base expectations

AR applications require data to be accessed from a range of physical locations on mobile devices with minimal latency. This distributed data need, combined with the nature of AR deployments in which no single company owns the end-to-end solution, will necessitate delivering the data and the apps from the cloud. Users will not only demand that applications be accessible from mobile devices; the mobile device might be the primary, and in some cases the only, interface to business information. Emerging technology trends such as cloud-based rendering, when combined with such AR deployments, have the potential to enable some killer applications.

These are exciting times, and I hope that entrepreneurs tap into the world of augmented reality and make it real by creating innovative experiences that demonstrate technology excellence, create new business models, and make it a real pleasure to interact with enterprise software.

Thursday, September 17, 2009

True Entrepreneurial Spirit Is Believing In A BHAG

GigaOM has a post, "How Start-ups can win big with VCs", that muddles its point about having a clear value proposition with the advice not to do something because no one may want it or because someone else has already done it. I added the following comment to that post:

I agree with the viewpoint about honing the pitch. However, I have a different take on some of the start-ups. It's one thing not to know what the value proposition is, but it is another thing to believe in a BHAG. Many start-ups had huge success even though people initially thought they could live without them; Twitter is one of those examples. Also, there is nothing wrong in duplicating what someone else is doing. The presence of similar companies signals that there is a market. It is now up to the new entrant to beat the competition by solving the problem well. When Google announced Gmail it was one of the last (as of now) web-based email services to be introduced. Google would not have released Gmail, or even the search engine, if it had thought that other people were already solving the problem.

I welcome the entrepreneurial spirit of the Silicon Valley. This innovation engine is amazing. I was watching the panelists beat up http://anyclip.com at TechCrunch50, suggesting that content deals are hard to come by. I liked the answer: "No one thought that we would have a black president one day". The company acknowledges that it is an astronomical task, but the reward is very high if they can pull it off. We all know the story about Steve Jobs, iTunes, and the music industry. Let's not forget that we can repeat history only if we believe in these start-ups and give them an opportunity to succeed.

Monday, August 31, 2009

Amazon Customers Can Now Get A Placebo Cloud

That would be the new Virtual Private Cloud (VPC) by Amazon.

I am a big proponent of the public cloud, but I am a bigger proponent of giving customers what they really want. Amazon had resisted offering a private cloud, but it finally gave in and offered one, or at least that is what it wants customers to believe. Bloggers are already questioning whether VPC is a true private cloud. Regardless of the arguments about whether the VPC is really a "virtual" private cloud or a "virtually" private cloud, I believe this placebo cloud is likely to help customers overcome the cloud computing adoption barriers:

Security: The placebo cloud would alleviate the perceived risk of adopting cloud computing. That perceived risk is based on customers' past experiences: customers believe that anything they can connect to using a VPN must be safe, even if they are tunneling into a set of shared resources. Customers will get an environment that they believe is safe and secure for deploying and consuming applications.

Ownership: The VPC does not let customers own the computing infrastructure but still provides a sense of ownership. If Amazon's marketing engine does a good job, customers will be less wary about the lack of ownership.

Virtualization: Customers are not necessarily clear about the real differences between virtualization and the cloud, and they don't necessarily care as long as their business goals are realized. The VPC lets customers work with an existing technology stack that they already understand, such as VPNs and network virtualization. The VPC would also empower partners to help customers build a bridge from their on-premise systems to the cloud, creating a hybrid virtualization environment that spans various resources.
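For a sense of how little ceremony this involves from the customer's side, here is a minimal sketch using the boto library to carve out an isolated address space on AWS; it assumes AWS credentials are configured in the environment, and the region and CIDR ranges are illustrative only.

```python
import boto.vpc

# Connect to a region; credentials are assumed to come from the environment.
conn = boto.vpc.connect_to_region("us-east-1")

# Carve out an isolated address space inside Amazon's shared infrastructure.
vpc = conn.create_vpc("10.0.0.0/16")

# Add a subnet that on-premise systems could reach over a VPN tunnel,
# which is what gives the "virtually private" feel discussed above.
subnet = conn.create_subnet(vpc.id, "10.0.1.0/24")

print("Created %s with subnet %s" % (vpc.id, subnet.id))
```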

Even though I personally favor the public cloud, I do want to see customers buy into cloud computing first and decide later whether they should move to the public cloud to leverage the cloud in its true sense.

Thursday, August 27, 2009

SOAP may finally REST

Lately I have observed significant movement in two transformational trends - the adoption of REST over SOAP and the proliferation of non-relational persistence options. These two trends complement each other, and they are likely to cause disruption sooner rather than later.

Enterprise software that required complex transaction, monitoring, and orchestration capabilities relied on SOAP-based architecture and standards to realize its SOA efforts. The consumer web, on the other hand, raced towards RESTful interfaces since they were simple to set up and consume. There are arguments on both sides. However, lately market forces have taken the side of REST, even though REST has significant drawbacks in areas such as security and transactions. This once again proves that a simple, good-enough approach that conforms to loose contracts outweighs a complex solution that complies with stricter standards, even if that means compromising certain critical features. The web is essentially an unreliable, stateless medium, and any attempt to regulate it is unlikely to work in our favor.

Many argue that SOAP's self-describing standards are its strength over RESTful services, which lack such features. However, designing a RESTful service is fairly trivial since it allows you to learn and experiment iteratively, unlike the relatively complex upfront learning process associated with SOAP-based architecture. There has been a flurry of activity in messaging middleware by Google that makes these RESTful interfaces even more compelling, including Google Wave Federation and PubSubHubbub. Developers are more likely to prefer these messaging protocols over SOAP, and that would mean more RESTful APIs in the Pushbutton Web. Easy consumability reduces the initial adoption barrier, and that's the key to success in many cases.

Since I last blogged about the continuum of databases on the cloud, from schemaless to full-schema, new persistence options such as RethinkDB and HadoopDB have emerged, and many debates have sprung up questioning the legacy of the RDBMS. For a cloud-like environment, statelessness, ad hoc persistence design, and instantaneous horizontal scaling go well with RESTful architecture. The growing popularity of SimpleDB and CouchDB, along with many discussions on how to achieve CRUD with REST, signals that persistence is becoming more RESTful and schemaless.
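To make the CRUD-over-REST point concrete, here is a minimal sketch against CouchDB's HTTP API using the Python requests library. The database name and document contents are made up, and it assumes a local CouchDB instance on its default port; nothing here goes beyond CouchDB's documented REST semantics.

```python
import requests

BASE = "http://localhost:5984/orders"  # hypothetical database name

# Create the database and a document (PUT is idempotent for the database).
requests.put(BASE)
doc = {"customer": "Acme Corp", "total": 129.95, "status": "open"}
requests.put(BASE + "/order-1001", json=doc)

# Read it back; no schema was declared anywhere.
fetched = requests.get(BASE + "/order-1001").json()

# Update: CouchDB uses MVCC, so the current revision (_rev) must be supplied,
# which is already present in the fetched document.
fetched["status"] = "shipped"
requests.put(BASE + "/order-1001", json=fetched)

# Delete also requires the latest revision.
rev = requests.get(BASE + "/order-1001").json()["_rev"]
requests.delete(BASE + "/order-1001", params={"rev": rev})
```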

I was convinced quite some time back that REST was certainly the future for the consumer web, but the latest trends have made me believe that REST adoption in enterprise software will accelerate much sooner than I had originally expected. This is like Java and the Internet: organizations embraced Java and the Internet at the same time. The same will be true for the cloud and REST. When companies consider moving to the cloud they will reconsider their SOA and persistence strategies and will likely adopt REST and alternative persistence models.

The cloud might be the last nail in the SOAP coffin.

Tuesday, August 18, 2009

SaaS 2.0 Will Be All About Reducing The Cost Of Sales

A clever choice of the right architecture on the right infrastructure has helped SaaS vendors better manage their operational infrastructure costs, but they are still struggling to curtail the cost of sales. As the majority of SaaS vendors achieve feature and infrastructure cost parity, reducing the cost of sales is going to be the next big differentiator for staying competitive in the marketplace.

The direct sales model is highly ineffective and cost-prohibitive for SaaS vendors, as it does not scale with a volume business model that has a relatively small average deal size. The role of the direct sales organization will essentially be redefined to focus on the relationship with customers, ensuring service excellence and high contract renewal rates, in addition to working long sales cycles for large accounts.

How can a SaaS vendor reduce the overall cost of sales to maintain healthy margins and growth?

This is a difficult nut to crack. There are no quick fixes, and there is no easy way to optimize the tail end of the process without holistically redesigning the entire SaaS life cycle.

Self-service demos to "self-selling" trials:

Fundamentally, the direct sales model for on-premise software has been all about an initial investment in the right demos to model customer scenarios and align the sales pitch with the solution needs. SaaS vendors moved away from this model as much as they could and replaced it with self-service demos or trials. However, these demos are not "self-selling" and still require intervention from direct sales people at various levels.

SaaS vendors need to move from self-service demos to self-selling ones that are not only fully functional out of the box but also articulate the solution's capabilities implicitly or explicitly. The demo is not just about showing what problems you are solving; it is also about how well it maps to the customers' pain points. It is like buying a hole and not a drill. The demo and the product should scream the value proposition out loud without making customers sit through a webinar or a series of PowerPoint slides.

Customer acquisition to customer retention:

SaaS companies have traditionally focused their sales and marketing budgets on customer acquisition rather than retention. While customer acquisition is a necessity, the increasing SaaS competition could result in current customers ditching their vendors. Customer support is the new sales model. Design your customer support organization and operations to retain customers, and don't let contract renewals slip through the cracks.

Your customers are the biggest asset you have. Market new solutions to them as an up-sell. One of the powerful features of a SaaS platform is the ability to integrate and push new products effortlessly to existing customers and have them try the products out before they start paying you. Modernize your internal tools to track usage analytics so you can better understand your customers, your sales activities, and the effectiveness of your marketing campaigns. You have a problem if you cannot tell which customer is using what, who the right partners are, who needs training and support, and so on. If you haven't looked lately at the tools your salespeople use, this is the right time. I would not expect a SaaS vendor to reduce the cost of sales without empowering the sales force with true customer, competitor, and partner intelligence.
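A minimal sketch of the kind of usage-event tracking this implies; the event fields, file-based storage, and sample names are assumptions for illustration, not a reference to any particular analytics product.

```python
import json
import time
from collections import Counter

EVENT_LOG = "usage_events.jsonl"  # hypothetical append-only event log


def record_event(customer_id, feature, user_id):
    """Append one usage event; a real system would feed an analytics pipeline."""
    event = {"ts": time.time(), "customer": customer_id,
             "feature": feature, "user": user_id}
    with open(EVENT_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")


def feature_usage_by_customer():
    """Answer 'which customer is using what' from the raw event log."""
    usage = Counter()
    with open(EVENT_LOG) as f:
        for line in f:
            e = json.loads(line)
            usage[(e["customer"], e["feature"])] += 1
    return usage


record_event("acme", "report-builder", "alice")
record_event("acme", "report-builder", "bob")
print(feature_usage_by_customer())
```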

Low-touch persuasion to high-touch interactions:

Low-touch, one-to-one selling does not scale. Replicate the Avon model: design a great ecosystem of channel partners to whom you can pass on the cost of sales. Align the incentives and encourage the partners to sell, but ensure customer support and overall brand integrity. This strategy requires an extensive partner program with a sizable investment in training and in tracking what and how the partners are selling, but this investment will go a long way.

Reserve direct sales force engagement for large, high-touch, CIO-type deals where you are required to go the whole nine yards before you get a contract. The key is to have a highly variable sales force and an extremely efficient compensation model to deal with a variety of prospects and customers. One size does not fit all.


Low-barrier adoption to zero-barrier productivity:

The SaaS model pioneered low-barrier adoption, empowering lines of business to sign up and start using the software without approval or help from IT. Eliminate any and all remaining barriers to drive adoption further. Do not require a credit card up front, and even skip registration if you can. Let customers use the software with minimal or no information up front, and demonstrate value before asking for more; for example, Picnik lets you manipulate an image any way you want but asks you to register only when you want to save it. There should be no paperwork whatsoever, not even a physical contract. Allow customers to bring in content from other sources such as Flickr, Facebook, etc. Give customers access to a live sandbox as a step before the dedicated trial; starting from a blank canvas can be a hindrance when evaluating a product.

Tuesday, July 28, 2009

Designing An Innovation Incubator To Prevail Over Innovator's Dilemma

Large-scale software companies often deal with the tension between incremental and revolutionary innovation. They know that if they only keep listening to their customers' requests, those very same customers will put them out of business. Clayton Christensen has captured this phenomenon in The Innovator's Dilemma. Over time these companies have managed to execute incremental innovation really well, delivering the same software release after release and occasionally introducing new products. However, most of them struggle to incubate revolutionary innovation inside the company since it is fundamentally a different beast. Executives are often torn between funding revolutionary initiatives to ride the next big wave and funding the incremental innovation that current customers and the market expect. It is absolutely imperative for executive management to differentiate between these two equally important but very different types of innovation opportunities. Many companies have set up in-house incubators to bring revolutionary innovation to the market, but in most cases the incubator is set up as yet another department inside the company that shares the same legacy and bureaucracy. Following are some suggestions on setting up and running an incubator so the innovation does not disappear down a rat hole:

A 6x6 cubicle in Iowa won't cut it: There is nothing wrong with Iowa, but I won't build an incubator there. Pick a location that emanates entrepreneurial spirit, attracts talent, and is surrounded by good colleges. Scout for a location with good work-life characteristics where people feel the energy and have social outlets - pubs, hiking trails, good restaurants, etc. San Francisco and Palo Alto in the Silicon Valley are a couple of examples of such locations.

I cannot overemphasize the impact of an inspirational physical space that fosters innovation and gives people an insane urge to be creative and build something disruptive. Ditch Steelcase and shop at IKEA. Have a loft-like set-up with open seating, project rooms instead of conference rooms, and all the furniture on wheels. Can you write on all the walls? Have alternate comfortable seating all over the place - bean bags, red couches, chairs, and coffee tables with tall bar stools. Innovation does not happen in a cubicle. Have the entire team paint the loft with bright colors as a team-building exercise. Pay a mandatory visit to IDEO and the d.school in Palo Alto if you haven't already been there.

No process is the new process: The incubator should not inherit your organization's legacy processes. You cannot expect your employees to behave differently to solve a problem if they are restricted by the same process overhead. Throw your application policing process out of the window and let people experiment with whatever works well for them. One of the main reasons incubators fail is that they rely on the organization's product roadmap and capabilities. Don't pick up any dependencies; instead, simply consider your organization's capabilities as one more source that you can evaluate for your needs. Use open source as much as you can, build your own partner relationships, and OEM whatever you can.

Pizza-size multidisciplinary teams: Can your entire product team be fed with two large pizzas? Smaller, tighter teams reduce communication overhead and churn and produce amazing results. Don't follow your corporate headcount calculations; go for smaller teams. Hire I-shaped and T-shaped people to form a multidisciplinary team. Have a good mix of internal people who understand the business you are in and external people who are entrepreneurs or have worked in incubators. Get help from external recruiters to find the right people, since internal recruiters may not have the expertise to find and hire the kind of people you are looking for.

Be agile and design think everything: Design thinking and agile methodology empower teams to apply an ambidextrous, iterative approach to take on revolutionary ideas in a highly ambiguous environment. Encourage wild ideas, defer judgment, and be iterative. Be visual in storytelling, stay close to your customers and end users, and practice persuasive, catalyst, and performance design. Focus on useful over usable. Have a good-enough mindset and ship often to get continuous feedback and keep improving. Iterate as fast as you can and keep your sprint cycles short.

Seed, Round A, and Round B: This is where many organizations get hung up on an upfront $200M business case to qualify a business opportunity as incubation-worthy. If all start-ups were required to have a detailed upfront business model, we would not have had Twitter, Facebook, Google, Craigslist, etc. The same incremental business case mindset simply won't work for revolutionary innovation. Disruptive innovation has characteristics that many people haven't seen in their lifetimes. The organization needs to adopt the VC model and embrace a high-risk, high-reward business environment. There will be plenty of failures before you hit a jackpot, but that's the fundamental premise of VC funding. Have a separate budget and an investment decision process that gives the incubator the autonomy to make its own decisions without going through a long chain of command. Have multiple rounds of funding to ensure that you are tracking the potential of the innovation from seed to maturity.

Explore all exit strategies: Don't expect to go to market with everything that comes out of an incubator. The mainstream product teams in your organization may or may not embrace and support the innovation, citing reasons such as "not invented here" or "too radical". Focus on your customers and success stories. If you are successful, people will come to you instead of you selling the outcome to the organization. Be courageous: kill the products that are not working out and experiment with other exit strategies such as spin-offs, outright sale, etc. Try to keep the product portfolio moving; high volume and turnover are good things for an incubator. Financial success is not the only success that counts; happy customers, a re-invigorated organization, and global visibility as an innovation player are equally important KPIs.

Reward high-risk behavior: People work on uncertain, highly ambiguous projects for two reasons - a higher reward for higher risk, and the passion to build something new. Design a compensation structure that is fundamentally different from your corporate title-driven compensation and includes generous equity options. Titles don't mean much when it comes to an incubator; what really matters are the skills, attitude, and knowledge that people bring to the table. The career path in an incubator is very different from a conventional corporate ladder. Make sure that all the people who are part of the incubator truly understand what they are signing up for and are passionate about the work rather than simply waiting to become a "Chief Innovation Officer".

Friday, July 17, 2009

Debunking The Cloud Security Issues

Forrester recently published a report on the security of cloud computing that grossly exaggerates the security threats. To point out a few specific instances:

"Users who have compliance requirements need to understand whether, and how, utilizing the cloud services might impact your compliance goals. Data privacy and business continuity are two big items for compliance. A number of privacy laws and government regulations have specific stipulation on data handling and BC planning. For instance, EU and Japan privacy laws demand that private data—email is a form of private data recognized by the EU—must be stored and handled in a data center located in EU (or Japan) territories"

This is data center design 101. One of the biggest misconceptions organizations have about cloud computing is that they don't have control over where their information is stored. During my discussion with Ron Markezich, corporate vice president of Microsoft Online, at the launch of Microsoft's Exchange on the cloud, he told me that Microsoft already supports regional regulatory requirements by storing data in regional data centers. The cloud is fundamentally a logically centralized and physically decentralized medium that not only offers utility and elasticity but also allows customers to specify policies around physical locations.
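As a small illustration of what such a location policy can look like in practice, here is a minimal sketch using the boto library against Amazon S3; the bucket name is made up, and the example assumes AWS credentials are configured in the environment. It is meant only to show region pinning, not any particular vendor's compliance feature set.

```python
import boto
from boto.s3.connection import Location

# Credentials are assumed to come from the environment or a boto config file.
conn = boto.connect_s3()

# Ask S3 to keep the bucket, and the data written to it, in the EU region.
# This is the kind of physical-location policy discussed above.
bucket = conn.create_bucket("example-eu-compliance-data", location=Location.EU)

print(bucket.get_location())  # should report the EU location constraint
```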

"Government regulations that explicitly demand BC planning include the Health Insurance Portability and Accountability Act (HIPAA) ...."

Amazon EC2 fully supports HIPAA [pdf], with a few customers already using it. It is rather strange that people think of the cloud as a closed and proprietary system compared to an on-premise system. A CIO I met a few weeks back told me that "on-premise systems are like an on-premise vault that you don't have a key to". The cloud vendors are under immense pressure to use open source and open standards for their infrastructure and to publicize their data retrieval and privacy policies. In fact, many people suggest that the United States should force public companies to put their financial information on the cloud so that the SEC can access it without any fear of the companies sabotaging their own internal systems. The cloud vendors have an opportunity to implement a common compliance practice across their customers; customers shouldn't have to worry about their individual compliance needs.


"The security and legal landscape for cloud computing is rife with mishaps and uncertainties."

And the rest of the landscape is not? What about T.J. Maxx losing 45.7 million shoppers' credit and debit card numbers, Ameritrade losing backup tapes that had information on 200,000 of its customers, and UPS losing Nelnet's backup tape that had personal information on approximately 188,000 customers?

"With the rising popularity of cloud computing and the emergence of cloud aggregators and integrators, the role of an internal IT security officer will inevitably change—we see that an IT security personnel will gradually move away from its operations-centric role and step instead into a more compliance and requirements-focused function."

Staying in the current operational role still requires IT to be compliant. Just because information is stored on-premise does not automatically make the system compliant. I would expect the role of operational IT to change from a tactical cost center to a strategic service provider. If IT does not embrace this trend, it might just become a service consolidation organization. The role of a security officer will evolve beyond the on-premise systems to better understand the impact of the cloud and, in many cases, to help influence open cloud standards to manage and mitigate the security risks.

"In other cases, the division is not quite so clear. In software mashups, or software components-as-a-service, it can be difficult to delineate who owns what and what rights the customer has over the provider. It is therefore imperative that liability and IP issues are settled before the service commences."

I partially agree. Customers should absolutely pay attention to what they are signing up for and who will own what. However, the critical aspect of the IP is not the ownership but the IP indemnification. After the SCO case, customers should know what their rights are if someone sues a cloud provider for IP infringement.

"Other contractual issues include end-of-service support—when the provider-customer relationship ends, customer data and applications should be packaged and delivered to the customer, and any remaining copies of customer data should be erased from the provider's infrastructure."

This is what happens when we apply the same old on-premise contracts to the new SaaS world. There are no copies of the software to be returned; customers simply stop receiving the "service" when the relationship ends. Vendors such as Iron Mountain advocate the role of SaaS escrow for business continuity reasons. It is up to the customers to decide what level of escrow support they need and what their data strategy is once the relationship with a SaaS vendor ends. It is certainly important to understand the implications of SaaS early on, but there is absolutely no reason to shy away from the cloud.

Thursday, July 9, 2009

Chief Sustainability Officer - the next gig for a CIO

CIO no longer means Career Is Over. CIOs should not underestimate their skills and organizational clout; they can lead the company in its sustainability efforts by becoming the Chief Sustainability Officer (CSO).

Leverage your relationship with the business: As a CIO you work closely with the business and have a holistic understanding of the challenges the business faces and the growth opportunities it aspires to go after. You can leverage that relationship to own and execute the sustainability strategy and to effectively measure and monitor progress using your expertise in, and investment in, IT systems. You can walk your business folks through your scenario-based architecture to help them quantify the business impact of the sustainability initiatives and estimate the required transformation effort.

Start with Green IT and lead the industry: Start with the area you are most familiar with. Reduce the carbon footprint of your IT systems by improving the power usage effectiveness (PUE) of your data centers and better managing the energy consumption of desktops. If you do decide to divest from your data centers and move tools and applications to the cloud, it will not only reduce energy costs but also result in consuming cleaner energy. Share your best practices with your industry peers and lead your industry in sustainability efforts.
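For reference, PUE is simply the ratio of total facility power to the power delivered to the IT equipment, so lower is better and 1.0 is the ideal. A quick sketch with made-up numbers for illustration:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical data center: 1,000 kW of IT load drawing 1,800 kW at the meter.
before = pue(1800.0, 1000.0)  # 1.8, typical of an unoptimized facility
after = pue(1400.0, 1000.0)   # 1.4 after cooling and power-distribution improvements

print("PUE improved from %.1f to %.1f" % (before, after))
# A PUE of 1.0 would mean every watt goes to the IT equipment itself.
```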

Make sustainability a business differentiator: For many organizations sustainability is not just a line item in the corporate responsibility report; it is actually a future growth strategy and a sustainable competitive advantage over the competition, e.g. a sustainable supply chain, higher operating margins, end-to-end environmental compliance, etc. As a CIO you have the right weapons and skills in your arsenal to transform the organization through its sustainability initiatives. By focusing on sustainability you could help your company innovate and grow by leaps and bounds. This could be a blue ocean strategy for many organizations that are struggling in the red ocean to beat the competition. You also have an opportunity to empower your customers in their mission to be sustainable by providing them the data they need, e.g. a bill of materials with carbon footprint and recycling index, realtime energy measurements, etc.

Redefine the program management office: Sustainability projects are very similar to IT projects in many ways - getting a large set of stakeholders to commit without having much influence over them, working with internal employees, customers, and partners, etc. Traditionally you have been running the program management office for technology and information management projects; apply the same model and leverage the skills of your program managers to run sustainability projects internally as well as externally. Sustainability is fundamentally about changing people's behavior. Promote alternate commute program tools such as RideSpring, carbon social networks such as Carbonrally, and employee-led green networks such as the eBay Green Team. Run targeted campaigns to reduce energy and paper consumption, increase awareness, and solicit green ideas. The right kind of tools, with an executive push and social support, could create a great sustainability movement inside an organization.

Chief Sustainability Officer is an emerging title. Your ability to work across the organization, leverage your relationship with the business to sell it on sustainability goals, and manage the tools that have penetrated all parts of your organization makes you well suited for this role. A CSO does not necessarily have to be a domain expert in sustainability; in fact, I would expect a CSO to be a people person who can make things happen with the help of sustainability experts and visionaries.

Now you know what your next gig looks like.

Monday, June 29, 2009

Structure 09 - Cloud Computing Is Here To Stay And Grow

I was invited as a guest blogger to Structure 09 - a day-long event by GigaOM focusing on cloud computing. It was a great event with an incredible speaker line-up of thought leaders in the domain of cloud computing. The panel and keynote topics included persistence on the cloud, hosting web apps on the cloud, infrastructure design, etc. I won't attempt to summarize everything that I saw and heard; instead, here are some impressions:

Solving interoperability with Open Source: A founding developer of WordPress, Matt Mullenweg, strongly advocated open source for the cloud for two reasons: the first is to achieve interoperability and the second is to ensure business continuity if a vendor ceases to exist. As I have argued before, there is a strong business case for open source on the cloud. It was great to see reaffirmation that other thought leaders feel the same way.

Operational excellence: Javier Soltero, CTO of Management Products at SpringSource, emphasized operational excellence as a key differentiator for a company to achieve a competitive advantage. Vijay Gill, a senior manager of Engineering and Architecture at Google, feels the same way. He believes that having the lowest-cost platforms capable of providing good-enough service is going to be a competitive advantage for companies. For good software you need great engineers, and most companies aren't set up to do that. The technological challenges can be solved, but it is smart people writing smart code that will provide the competitive advantage to cloud infrastructure companies.

Vertical clouds: We are likely to see more and more cloud offerings that are optimized for vertical functionality, e.g. running your Ruby apps on the cloud, analytics on the cloud, storage on the cloud, etc. IT should focus on becoming a service provider rather than merely a cost center. Chuck Hollis, CTO of Global Marketing at EMC Corporation, believes that if IT does not embrace the cloud technology stack, it will most likely become an organization that manages the consolidation of all the cloud services. James Lindenbaum, co-founder and CEO of Heroku, emphasized that developers should focus on their core, what they are really good at, and not worry about how the code will scale on the cloud. Bad code is bad code regardless of where it runs.

Hybrid cloud: The debate between private and public clouds continued. Proponents of the public cloud, such as Greg Papadopoulos, CTO of Sun Microsystems, argued that most public clouds are run more securely than most private enterprise clouds. I completely agree. One of the ideas pitched was to have the SEC force public companies to put their data on the cloud. If, for compliance reasons, the data needs to be retrieved, the government has a better shot at retrieving it from a public cloud than from a private, proprietary system that could potentially be sabotaged. Proponents of the private cloud, such as Michael Crandell, CEO and founder of RightScale, cited security as a barrier and suggested approaches such as siloed clouds that are dedicated to a given customer and do not share data with other customers.

I believe that hybrid deployments are here to stay. Successful cloud and SaaS vendors will be the ones who can create a seamless experience for customers and end users from the top to the bottom of the stack, such that customers retain their current on-premise investments, keep the data they don't want on the cloud, and significantly leverage the cloud for all their other needs.

It was a lot of information packed into a one-day event. On the lighter side, Om's conversation with Marc Benioff included Marc poking fun at Oracle and Microsoft. Marc is witty and has a great sense of humor. Check out his conversation:

Thursday, June 18, 2009

Cloud Computing At The Bottom Of The Pyramid

I see cloud computing playing a big role in enabling an IT revolution in developing nations, helping companies market products and services to the 4 billion consumers at the bottom of the pyramid (BOP). C.K. Prahalad has covered many aspects of BOP strategy extensively in his book The Fortune at the Bottom of the Pyramid, a must-read for strategists and marketers working on a BOP strategy.

This is how I think cloud computing is extremely relevant to companies that are trying to reach consumers at the BOP:

Logical extension to the mobile revolution: The mobile phone revolution at the BOP has changed the way people communicate in their daily lives and conduct business. Many people never had a landline, and in some cases no electricity; some of them charged their mobile phones using a charger that generates electricity from a bicycle. As the cellular data networks become more mature and reliable, the same consumers will have access to the Internet on their mobile phones without having a computer or broadband at home.

Marketers tend to be dismissive of the spending power of people at the BOP to buy and use a device that could consume applications from the cloud. The BOP requires innovative distribution channels. The telcos that have invested in the current BOP distribution channels will have a significant advantage over their competitors. The telcos, which empowered people to leapfrog the landline and move to mobile phones, could further invest in infrastructure and become cloud providers to fuel the IT revolution. They already have relationships with consumers at the BOP that they can effectively utilize to peddle more products and services.

Elastic capacity at utility pricing: Computing demand growth in developing countries is not going to be linear, and it is certainly not going to be uniform across countries. Cloud computing is the right kind of architecture, allowing companies to add computing infrastructure as demand surges among BOP consumers in different geographies. Leaving political issues aside, data centers, if set up well, could potentially work across countries to serve concentrated BOP populations. Cloud computing would also allow application providers to eliminate upfront infrastructure investment and truly leverage the utility model. The BOP consumers are extremely value conscious; it is a win-win situation if this value can be delivered to match true ongoing usage at zero upfront cost.

Cheap computing devices: The OLPC and other small devices such as netbooks are weak in computing power and low on memory, but they are a good-enough solution for running a few tools locally and applications inside a browser. These devices discourage people from using thick-client applications that require heavy computation on the client side. Netbooks, tablets, and other smaller devices are likely to proliferate since they are affordable, reliable, and provide the value that BOP consumers expect. Serving tools and applications over the cloud might just become an expectation, especially when these devices come with prepaid data plans.

Highly skilled top of the pyramid serving the BOP: Countries such as India and China have highly skilled IT people at the top and middle of the pyramid. These people are skilled enough to write the new kind of software that will fuel cloud computing growth in these emerging economies. The United States has been going through a reverse immigration trend among highly skilled IT workers who have chosen to return to their home countries to pursue exciting opportunities. These skilled people are likely to bring their experience of the Western world to build a new generation of tools and applications and innovative ways to serve people at the BOP.

Sustainable social economies: It might seem that countries with a large BOP population are simply not ready for modern, reliable IT infrastructure due to bureaucratic government policies and a lack of modern infrastructure. However, if you take a closer look you will find that these countries receive large FDI inflows [pdf] that empower companies to invest in the modern infrastructure that creates a sustainable social economy.

Most of the petrochemical refineries and cement manufacturing plants that I have visited in India do not rely on the grid (utility) for electricity; they have set up their own captive power plants (CPPs) to run their businesses. Running a mission-critical data center would likewise require in-house power generation. As I have argued before, local power generation for a data center will result in cleaner energy and reduced distribution losses. There are also discussions about generating DC power locally to feed the data centers and minimize AC-to-DC conversion losses. A relatively inexpensive and readily available workforce that has been building and maintaining power plants will make it easier to build and maintain these data centers as well. Local governments would encourage investment that creates employment opportunities. Not only does this allow these countries to serve the BOP and build a sustainable social economy, it also contributes to the global sustainability movement.

Wednesday, June 10, 2009

Structure 09: Put Cloud Computing To Work

GigaOM has organized an exciting event on cloud computing, Structure 09, on 06/25/2009. I will be at the event as a guest blogger and will be part of the energy and excitement. GigaOM has managed to put together an excellent schedule packed with great speakers including Marc Benioff, Michael Stonebraker, Jonathan Heiliger, Greg Papadopoulos, Werner Vogels, and many others. I like the breadth of topics - cloud databases, data center design and optimization, commodity hardware, private clouds, etc. I will see you there if you are planning on attending the event, and if not, come back here for blog posts covering the event. Leave a comment if you would like to see any specific topics or sessions covered.

Here is a lineup of the speakers:

Keynotes
  • Marc Benioff | Chairman and CEO, Salesforce.com
  • Paul Sagan | President and CEO, Akamai
Confirmed Speakers Include:
  • Werner Vogels | CTO, Amazon.com
  • Greg Papadopoulos | CTO, Sun Microsystems
  • Jonathan Heiliger | VP Technical Operations, Facebook
  • Dr. David Yen | EVP Emerging Technologies, Juniper Networks
  • Russ Daniels | VP and CTO, Cloud Services Strategy, Hewlett-Packard
  • Vijay Gill | VP, Engineering, Google
  • Richard Buckingham | VP Technical Operations, MySpace.com
  • Jack Waters | CTO, Level 3 Communications
  • Yousef Khalidi | Distinguished Engineer, Microsoft
  • Dr. Michael Stonebraker | RDBMS pioneer and CTO, Vertica
  • Raj Patel | VP of Global Networks, Yahoo!
  • Michelle Munson | President and Co-founder, Aspera
  • Lloyd Taylor | VP Tech Operations, LinkedIn
  • Michael Crandell | CEO, RightScale
  • Jeff Hammerbacher | Chief Scientist, Cloudera
  • Allan Leinwand | Venture Partner, Panorama Capital
  • Jason Hoffman | Co-founder and CTO, Joyent

Sunday, May 31, 2009

Calculating ROI Of Enterprise 2.0 Is Calculating The Cost Of A Lost Opportunity

I get asked this a lot - how do I calculate the ROI of Enterprise 2.0? Bruce Schneier says, “Security is not an investment that provides a return, like a new factory or a financial instrument. It's an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn't make sense in this context.” Similarly, thinking of Enterprise 2.0 as an “investment” looking for a return does not make any sense. At best it is the cost of a lost opportunity.

If you are a CIO looking for detailed ROI metrics or a simple checklist for Enterprise 2.0, you are probably out of luck. However, you can adopt a two-pronged approach. First, convince the business that the organization needs Enterprise 2.0 by showing them whatever resonates: sharing files helps reduce email quota pressure, a wiki makes people more productive by X percent, a copy of The Future of Management by Gary Hamel, and so on. Second, once you get the green light for an Enterprise 2.0 deployment, please don't be prescriptive in framing the problem or the solution. Instead, simply provide the tools at the grassroots and let people run with them.

For any collaboration, productivity, or social networking tool there is content and there is context, and both depend significantly on the individuals who use the tool. For example, some people prefer to be human-centric rather than artifact-centric: some start interacting and collaborating with other people before exchanging artifacts, while others prefer collaboration that is primarily artifact-driven. Most tools force users to make that choice up front. Even worse, IT makes the decision for them when it purchases a specific tool based on assumptions about how people might want to work. This is why I like Google Wave - it makes no assumptions about how people may want to use it. In fact, it lets people weave across people and artifacts seamlessly.

When Google Wave was announced, Google spent most of the time demonstrating what it does and very little time showing what problems it is designed to solve. They received quite a bit of criticism for that. Many designers questioned whether Google really knows that people want to work this way. Some bloggers called it an act of breathtaking arrogance - blowing off potential competition and touting tech buzzwords. I believe they are all missing the point. Google Wave has broken the grid that designers are so protective of and has empowered people to stretch their imagination and make mental connections about how this tool might meet needs that no other tool has met so far.

Would you still ask what’s the ROI?

Monday, May 11, 2009

Cloud Computing - Old Wine In A New Bottle?

A recent cloud computing report from McKinsey stirred quite a controversy. TechCrunch called the report partly cloudy. Google responded to the report in great detail on why the cloud is relevant. I appreciate the effort that McKinsey put into this report; however, I believe they took a very narrow approach in their scope and analysis. Chris Horn, an interaction designer from MAYA Design, sent me a paper, The Wrong Cloud, which argues that cloud computing is essentially old wine in a new bottle and that the big companies are fueling the hype:
"Today’s “cloud computing” claims to be the next big thing, but in fact it’s the end of the line. Those corporate dirigibles painted to look like clouds are tied to a mooring mast at the very top of the old centralized-computing mountain that we conquered long ago."

I appreciate that there are people out there who question the validity and relevance of cloud computing. This puts an extra onus on the cloud computing companies and others to make their message crisper and to communicate the real value they provide. I was recently invited to the Under The Radar conference, where many early-stage cloud computing start-ups presented. The place was packed with venture capitalists closely watching the companies and taking notes. It did feel like 1999 all over again! I hope we don't fuel the hype and instead deliver a clear message on how cloud computing is different and what value it brings. Here are my arguments on why the cloud is not just a fad:


Utility-style cheap, abundant, and purpose-agnostic computing was never accessible before: There are plenty of case studies about the near-zero adoption barrier of Amazon EC2, which lets people access purpose-agnostic computing at a scale that had never been technologically and economically feasible before. I particularly like the Washington Post case study, where they converted 17,481 pages of non-searchable PDF into searchable text by launching 200 EC2 instances, for less than $150, in under nine hours. We did have massively parallel processing capabilities before, such as grid computing and clusters, but they were purpose-specific, expensive, and not easy to set up and access.
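As a sanity check on those numbers, here is a quick, hypothetical back-of-envelope in Python using only the figures quoted above; it simply restates the case study as cost per page and throughput.

# Back-of-envelope on the Washington Post EC2 case study, using only the
# figures quoted above (200 instances, 17,481 pages, < $150, < 9 hours).

pages = 17_481
instances = 200
max_cost_usd = 150.0
max_hours = 9.0

print("Cost per page:     < $%.4f" % (max_cost_usd / pages))
print("Fleet throughput:  > %.0f pages/hour" % (pages / max_hours))
print("Per-instance rate: > %.1f pages/hour/instance"
      % (pages / max_hours / instances))

Less than a cent per page, with zero hardware bought and nothing left running afterwards - that is the economic point of utility computing.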

Peer-to-peer and cloud computing are not alternatives at the same level: The MAYA paper argues that cloud computing is similar to P2P. I believe the two are complementary technologies. P2P solves the last-mile problem of client-side computing, whereas cloud computing is a collection of server-side technologies and frameworks with centralized computing characteristics. BitTorrent is a great example of effectively using P2P for distribution, since distribution is fundamentally a decentralized problem that can leverage the bandwidth and computing power of personal computers. However, I do see potential in combining the two approaches into end-to-end solutions for certain kinds of problems, e.g. using a CDN in the cloud together with P2P streaming to broadcast live events.

Virtualization and cloud computing are not the same: McKinsey's report recommends that organizations virtualize their data centers rather than adopt true cloud computing. I am a big fan of virtualization, but it does not replace cloud computing and does not yield the same benefits. Eucalyptus, an emerging cloud computing start-up, has a detailed analysis of how cloud computing differs from virtualization.

Monday, May 4, 2009

Disruptive Early Stage Cloud Computing Start-ups

I was invited as a guest blogger to the Under The Radar conference organized by Dealmaker Media. This year's focus was tracking early-stage start-ups in cloud computing. The format was simple - each start-up got six minutes to pitch their company, and a panel listened to the pitch and provided feedback. It was a blast! The place was filled with venture capitalists, entrepreneurs, and curious bloggers. I highly recommend checking out the conference blog and Twitter updates, and watching some of the pitches. I wish I could blog about all the companies that participated in the conference. I have picked a few - Twilio, Boomi, Zuora, and Cloudkick - based on their potential to cause some serious disruption in the cloud computing space. While interacting with people at the conference, cloud computing felt like a nascent space bursting with energy and enthusiasm. The venture capitalists were drooling over the leads. It felt like 1999 all over again.


Twilio commoditizes telephony skills and uses the cloud to let companies easily build and scale voice applications without upfront capacity planning or expensive telco contracts. Twilio has the potential to revolutionize how developers build voice applications and, by leveraging cloud-as-a-utility, lets companies add a voice channel to enhance the customer experience.
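To give a flavor of what that looks like for a developer, here is a minimal, hypothetical sketch in Python: a tiny web handler that returns TwiML, the XML instructions Twilio fetches and executes when a call comes in. The host, port, and greeting are made up for illustration.

# Minimal sketch of a voice handler: when a call hits your Twilio number,
# Twilio requests a URL you configure and executes the TwiML it gets back.
# The host, port, and greeting below are illustrative, not Twilio endpoints.
from http.server import BaseHTTPRequestHandler, HTTPServer

TWIML = """<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Say>Hello from the cloud. Your call never touched a PBX.</Say>
</Response>"""

class VoiceHandler(BaseHTTPRequestHandler):
    def do_POST(self):  # Twilio posts the call details to the configured URL
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(TWIML.encode("utf-8"))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), VoiceHandler).serve_forever()

A few lines of web code instead of a PBX and a telco contract - that is the commoditization argument in a nutshell.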

Watch Twilio's pitch:



Twilio's presentation:



Boomi's tag line "Connect Once Integrate Everywhere" is a riff on Java's tag line "Write Once Run Anywhere". Boomi is positioning its product AtomSphere as integration middleware for the cloud that works across SaaS and on-premise systems. Boomi chose a hub-and-spoke architecture over ad-hoc point-to-point integration. This not only lets Boomi and its partners keep adding integration connectors without disrupting the core product or customers' deployments, but also lets SaaS vendors tap into AtomSphere to connect to other SaaS and on-premise vendors. The revenue model is integration-as-a-service, priced by how many systems an organization wants to connect. This lets Boomi extract maximum value from integration work that can be reused and resold.
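The hub-and-spoke choice matters because of simple combinatorics: integrating n systems point-to-point needs on the order of n(n-1)/2 connections, while a hub needs only n connectors. A quick illustration in Python (the system counts are arbitrary):

# Connector counts: ad-hoc point-to-point (full mesh) vs. hub-and-spoke.
# The system counts below are arbitrary, chosen only to show the growth rates.

def point_to_point(n):
    return n * (n - 1) // 2   # every pair of systems needs its own integration

def hub_and_spoke(n):
    return n                  # each system needs one connector to the hub

for n in (5, 10, 25, 50):
    print("%3d systems: %4d point-to-point integrations vs %3d hub connectors"
          % (n, point_to_point(n), hub_and_spoke(n)))

Each connector built for the hub can be reused by every other customer and partner, which is exactly where the reuse-and-resell economics come from.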

Watch Boomi's pitch:



Boomi's presentation:



Zuora wants to be the Amdocs of SaaS, and they are getting there much faster than I originally thought. In addition to commoditizing billing for SaaS, they also demonstrated that the cloud is a great platform not only for edge applications but also for core applications, such as billing, that organizations never thought of putting in the cloud. Organizations are increasingly looking for a payment system and not just a billing system, and Zuora does a great job of combining its billing domain expertise with a PayPal integration. Zuora seems like an acquisition target for eBay. I can't help noticing that the typeface for "Pay" in Zuora's marketing collateral is identical to the one PayPal uses. Coincidence? I don't think so.

Watch Zuora's pitch:



Zuora's presentation:



I have used many management consoles, but I haven't seen the holistic design approach and simplicity that Cloudkick demonstrated. Three founders built the entire company in four months with a $20k investment from Y Combinator and launched it to help 40 other Y Combinator companies manage their EC2 instances. Instead of waiting for the cloud vendors, Cloudkick tackled the interoperability problem by letting customers take an AMI out of Amazon and put it on another cloud provider such as Slicehost. This is certainly encouraging for organizations that see the lack of interoperability as an adoption barrier. Cloud management start-ups do run the risk of getting steamrolled by Amazon, but Cloudkick's fast and agile approach could bring innovation to cloud management and interoperability that we may not see from the big cloud providers in the near future.

Watch Cloudkick's pitch:



Cloudkick's presentation: