Wednesday, December 24, 2008

Nokia Is The New Blackberry Of The Emerging Countries

Nokia announced a mobile email service, Mail on Ovi, currently targeting emerging markets.
Nokia has had great success selling reliable and inexpensive handsets in emerging markets. In countries such as India, consumers never used voice mail on their landlines and went through the mobile revolution using SMS as their primary asynchronous communication medium. Many of these users are not active email users, at least not on their mobile devices. If Nokia manages to provide a seamless user experience with Ovi that bridges email and SMS over not-so-advanced data networks, it could cause disruption by satisfying the asynchronous communication needs of hundreds of thousands of users.

Smartphones would certainly benefit from this offering and give Blackberry a good run for its money. Nokia completed the Symbian acquisition, which makes it a company whose OS powers 50% of all the smartphones in the world. Symbian is still a powerful operating system powering more than 200 million phones, it is open source, and it is supported by Nokia. The emerging countries haven't yet gone through the data revolution, and Nokia is in a great position to innovate.

Friday, December 19, 2008

De-coupled Cloud Runtime And Demand-based Pricing Suggest Second Wave Of Cloud Computing

A couple of days back Zoho announced that applications created using Zoho Creator can now be deployed on the Google cloud. On the same day Google announced a tentative pricing scheme for buying resources on its cloud beyond the free daily quota. We seem to have entered the second wave of cloud computing.

Many on-demand application vendors who rely on non-cloud infrastructure have struggled to be profitable because the infrastructure cost is far too high. These vendors still have value-based pricing for their SaaS portfolios and cannot pass the high infrastructure cost on to their customers. The first wave of cloud computing provided a nice utility model to customers who wanted to SaaS up their applications without investing in infrastructure and charge their customers a fixed subscription. As I observe the second wave of cloud computing, a couple of patterns have emerged.

Moving to the cloud, one piece at a time: Vendors have started moving the runtime to a third-party cloud while keeping the design time on their own cloud. Zoho Creator is a good example: you can use it to create applications on Zoho's infrastructure and then optionally use Google's cloud to run and scale them. Some vendors, such as Coghead, are already ahead in this game by keeping both design time and runtime on Amazon's cloud. Many design tools that have traditionally been on-premise might stay that way and could help end users run part of their code on the cloud or deploy the entire application there. Mathematica announced an integration with Amazon's cloud where you can design a problem on-premise and send it to the cloud to compute. Nick Carr calls it the cloud as a feature.

Innovate with demand-based pricing: As the cloud vendors become more creative about how their infrastructure is utilized and introduce demand-based pricing, customers can innovate around their consumption. Demand-based pricing for the cloud could allow customers to schedule the non-real-time tasks of their applications for when computing is cheap. This approach will also make data centers greener, since energy demand is now directly tied to computing demand that is being managed by creative pricing. This is not new for green advocates, who have long been pushing for a policy change to promote a variable-pricing model for utilities that would base the price of electricity on demand rather than a flat rate. Consumers can benefit when their appliances and smart meters negotiate with the smart grid to get the best pricing. The utilities can benefit by better predicting demand and making generation more efficient and green. I see synergies between the cloud and green IT.
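To make the scheduling idea concrete, here is a minimal sketch of how a customer might defer batch work into the cheapest forecast hours. The hourly prices, job names, and greedy policy are hypothetical illustrations, not any vendor's actual pricing or scheduler.

```python
# A minimal sketch of demand-based scheduling: deferrable batch jobs are
# pushed into the cheapest forecast hours. The hourly prices and job list
# are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    hours_needed: int
    deadline_hour: int  # job must finish by this hour of the day (0-23)

def schedule_cheapest(jobs, hourly_price):
    """Greedily assign each job's compute hours to the cheapest eligible hours."""
    plan = {}
    for job in jobs:
        eligible = sorted(range(job.deadline_hour), key=lambda h: hourly_price[h])
        plan[job.name] = sorted(eligible[:job.hours_needed])
    return plan

if __name__ == "__main__":
    prices = [0.12, 0.10, 0.08, 0.08, 0.09, 0.11, 0.15, 0.20, 0.22, 0.22,
              0.21, 0.20, 0.19, 0.18, 0.18, 0.19, 0.21, 0.24, 0.25, 0.23,
              0.20, 0.17, 0.14, 0.13]  # hypothetical $/instance-hour forecast
    jobs = [Job("nightly-reindex", hours_needed=3, deadline_hour=7),
            Job("report-rollup", hours_needed=2, deadline_hour=12)]
    print(schedule_cheapest(jobs, prices))
```

The same logic is what would let a data center shift its energy draw toward off-peak hours, which is where the green IT synergy comes from.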

Thursday, December 4, 2008

Incomplete Framework Of Some Different Approaches To Making Stuff

Steve Portigal sent me an article he wrote in Interactions magazine asking for my feedback. Unfortunately the magazine is behind a walled garden and requires a subscription, but if you reach out to Steve he should be able to share the article with you. In the absence of the original article I will take the liberty of summarizing it. Steve describes how companies generally go about making stuff in his “incomplete” framework:

  • Be a Genius and Get It Right: One-person show to get it right such as a vacuum cleaner by James Dyson.
  • Be a Genius and Get It Wrong: One-person show to get it wrong such as Dean Kamen’s Segway.
  • Don’t Ask Customers If This Is What They Want: NBA changing the basketball design from leather to synthetic microfiber without asking the players.
  • Do Whatever Any Customer Asks: Implementing the changes as requested by the customers exactly as is without understanding the real needs.
  • Understand Needs and Design to Them: Discovery of the fact that women shovel more than men and designing a snow shovel for women.
I fully agree with Steve on his framework, and since it is proposed as an incomplete framework, let me add a few things of my own:

Know who your real customer is:

For enterprise software, the customer who writes the check does not necessarily use the software, and most of the time the real end users have no say in the purchase or adoption process. Designing for such demographics is quite challenging since the customers’ needs are very different from the users’ needs. For instance, the CIO may want privacy, security, and control, whereas the end users may want flexibility and autonomy, and designing software that is autonomous yet controlled and secure yet flexible is quite a challenge. As a designer, pleasing the CIO’s lower-TCO goals while delighting end users gets tricky.

Designing for children as end users and parents as customers also has similar challenges.

Look beyond the problem space and preserve ambiguity:

Hypothesis-driven user research alone will not uncover the real insights. Many times good design emerges from looking beyond your problem space.

If Apple had asked people what they wanted in their phones, they might have said a smartphone with a better stylus, and they would not have expected their phone to tell them where to eat dinner tonight. We wouldn’t have had a multimodal interface on the iPhone that could run Urbanspoon.

Embracing and preserving ambiguity as long as you can during the design process helps unearth behaviors that can lead to great design. Ambiguity does make people uncomfortable, but recognizing the fact that “making stuff” is fundamentally a generative process allows people to diverge and preserve ambiguity before they converge.

Monday, December 1, 2008

Does Cloud Computing Help Create Network Effect To Support Crowdsourcing And Collaborative Filtering?

Nick has a long post about Tim O'Reilly not getting the cloud. He questions Tim's assumptions on Web 2.0, network effects, power laws, and cloud computing. Both of them have good points.

O'Reilly comments on the cloud in the context of network effects:

"Cloud computing, at least in the sense that Hugh seems to be using the term, as a synonym for the infrastructure level of the cloud as best exemplified by Amazon S3 and EC2, doesn't have this kind of dynamic."

Nick argues:

"The network effect is indeed an important force shaping business online, and O'Reilly is right to remind us of that fact. But he's wrong to suggest that the network effect is the only or the most powerful means of achieving superior market share or profitability online or that it will be the defining formative factor for cloud computing."

Both of them also argue about applying power laws to cloud computing. I am with Nick on the power laws but strongly disagree with his view of cloud computing and network effects. The cloud at the infrastructure level will still follow power laws due to the inherently capital-intensive requirements of a data center, and the tools on the cloud will help create network effects. Let's make sure we all understand what the power laws are:

"In systems where many people are free to choose between many options, a small subset of the whole will get a disproportionate amount of traffic (or attention, or income), even if no members of the system actively work towards such an outcome. This has nothing to do with moral weakness, selling out, or any other psychological explanation. The very act of choosing, spread widely enough and freely enough, creates a power law distribution."

Any network effect starts with a small set of something - users, content, etc. - and eventually grows bigger and bigger. That makes the cloud a great platform for systems that demand this kind of growth. The adoption barrier is close to zero for companies whose business model actually depends upon creating these effects. They can provision their users, applications, and content on the cloud, be up and running in minutes, and grow as the user base and content grow. This actually shifts power to the smaller players and helps them compete with the big cloud players while still allowing them to create network effects.

The big cloud players, currently on the supply side of this utility model, have a few options on the table. They could keep to the infrastructure business alone, and here I would wear my skeptic hat and agree with a lot of people about the poor viability of a capital-intensive business model with very high operational cost. This option alone does not make sense, and the big companies have to have a strategic intent behind such a large investment.

The strategic intent could be to SaaS up their tools and applications on the cloud. The investment in and control over the infrastructure would provide a head start. They can also bring in a partner ecosystem and crowdsource a large user community to create a network effect of social innovation based on collective intelligence, which in turn makes the tools better. One of the challenges with recommendation systems that use collaborative filtering is mining massive information, including users' data and behavior, and computing correlations by linking it with massive information from other sources. The cloud makes a good platform for such requirements due to its inherent ability to store vast amounts of information and perform massively parallel processing across heterogeneous sources. There are obvious privacy and security issues with this kind of approach, but they are not impossible to resolve.
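As a toy illustration of the kind of work collaborative filtering pushes onto a parallel platform, the sketch below counts item co-occurrences across user histories with a pool of worker processes. The data, the partitioning scheme, and the "similarity" measure are deliberate simplifications, not any vendor's actual recommendation pipeline.

```python
# A toy sketch of parallel co-occurrence counting, the kind of batch work a
# collaborative-filtering system would fan out across a cloud. Everything
# here (data, chunking, measure) is illustrative only.
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def cooccurrences(histories):
    """Count how often each pair of items appears in the same user's history."""
    counts = Counter()
    for items in histories:
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
    return counts

def parallel_cooccurrences(all_histories, workers=4):
    # Split users across workers, then merge the partial counts.
    chunks = [all_histories[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(cooccurrences, chunks)
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    histories = [["camera", "tripod"], ["camera", "memory card", "tripod"],
                 ["laptop", "memory card"]]
    print(parallel_cooccurrences(histories, workers=2).most_common(3))
```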

Google, Amazon, and Microsoft are the supply-side cloud infrastructure players that are already moving into the demand-side tools business, though I would not call them equal players exploring all the opportunities.

And last but not least, there is a sustainability angle for the cloud providers. They can help consolidate thousands of data centers into a few hundred based on geographical coverage and the availability of water, energy, dark fiber, etc. This is similar to consolidating hundreds of dirty coal plants into a few non-coal green power plants that produce clean energy with an efficient transmission and distribution system.

Monday, November 17, 2008

Microsoft Cloud Computing Blogger Roundtable

Today Microsoft announced its cloud offerings for Exchange and SharePoint. I was invited to participate in the Microsoft blogger roundtable that took place after the announcement, an initiative by Microsoft to establish relationships with bloggers and thought leaders in the cloud computing area.

The launch event, attended by select customers, partners, bloggers, and the press, included a demo that articulated the seamless ubiquity of the solution, from the cloud to on-premise and vice versa, reiterating the client-server-service strategy. Microsoft also reiterated its commitment to continue investing massively in data centers and emphasized its commitment to sustainability.

When asked about the SLA, the answer was that the SLA is based on availability, security and privacy, and recovery time in the event of a disaster, depending on the geography. The SLA is three nines (99.9% availability).

Stephen Elop, president of the Microsoft Business Division, dismissed the possibility of slow online adoption due to the continued investment in on-premise products, commenting that customers will still need an on-premise client - some kind of “local processing” - and citing Google Chrome (without naming it).

The event was followed by the blogger roundtable I participated in, covering cloud computing, Enterprise 2.0, and social innovation. It was an interesting conversation on topics such as Microsoft embracing cloud computing culture as an organization, better ways to engage with bloggers, cloud computing adoption concerns, messaging issues around Microsoft products, SharePoint as a platform on Azure, etc.

The discussion was quite open and well moderated by David Spark. Microsoft expressed a desire to better connect with thought leaders and bloggers in the cloud computing area. Following is a list of some of the bloggers who were at the table:

Jeff Nolan – Enterprise 2.0
Ben Metcalfe – Co-founder of the Data Portability group
Salim Ismael – Headed Yahoo Brickhouse in his previous career
Phil Wainwright – from ZDNet
Geva Perry
Tom Foremski
Adrian Chan
Deb Schultz
Ohad Eder

Thursday, November 13, 2008

Continuous Passive Branding During Economic Downturn To Change Customers' Opinions

The current economic downturn has forced many CIOs to significantly reduce external IT spending. Many projects are being postponed or canceled. This situation poses serious challenges for the sales and marketing people of companies selling enterprise software. Many argue that there is not much these people can do. I disagree.

Marketing campaigns tend to rely a lot on selling a product through active, aggressive marketing, which may not be effective under these circumstances since many purchase decisions are being placed on hold. However, these circumstances and the poor economic climate are ideal for building a brand and peddling concepts with a continuous passive branding exercise. The branding exercise, if designed well, can change buyers’ experience around a concept or a product and evoke emotions that are helpful when a product is actively being sold. Guy Kawasaki points us to an experiment that studied the art of persuasion to change people's attitudes. People should always be selling, since the best way to change someone's mind is to sell them when they are not invested in an active purchase decision, emotionally or otherwise.

GE’s green initiative, branded as ecomagination, is an example of such a passive branding exercise. Last year the Climate Brand Index rated GE No. 1 on green brands. GE published a page-long ad in a leading national magazine introducing its new green aviation engine. Jeff could have picked up the phone and called Boeing and Airbus to say "hey, we have a new engine." Instead, GE peddled its green brand to eventually support its other products, such as green light bulbs. Climate change is a topic that many people are not emotionally attached to and hold a neutral position on, but such continuous passive marketing campaigns can gradually change people's opinions.

Apple’s use of cognitive dissonance is also a well-known branding strategy to passively convince consumers that a Mac, in general, is better than a Windows PC. Many people simply didn’t have a position on laptops, but now, given a choice, many do believe they prefer a Mac.

The art of persuasion goes well beyond marketing campaigns. Keeping customers engaged on the topics and driving thought leadership is even more important during this economic downturn. The sales conversation is not limited to selling a product; it also includes selling a concept or a need. Marketing matters even more when customers are not actively buying anything. Leaders should not fixate on measuring campaign-to-lead metrics. Staying with customers in this downturn and helping them extract the maximum value out of their current investment will go a long way, since customers don't see their opinions being changed by a seemingly neutral vendor. When the economic climate improves and a customer initiates a purchase, that sales cycle is not going to be as long and dry.

Leaders should carefully evaluate their investment strategy during this economic downturn. The economy will bounce back; the question is whether they will be ready to leapfrog the competition and be a market leader when that happens. Cisco recently announced its 2009 Q1 results. John Chambers made Cisco's strategy in the downturn very clear: invest aggressively in two geographies, the U.S. and selected emerging countries, since the emerging countries will be a steady source of growth as they develop, and be prepared to sell in the western countries since they are likely to be the first ones to come out of this downturn.

“In our opinion, the U.S. will be the first major country to recover. The strategy on emerging countries is simple. Over time we expect the majority of the world’s GDP growth will come from the emerging countries. In expanding these relationships during tough times, our goal is to be uniquely positioned as the market turn-around occurs. This is identical to what we did during Asia's 1997 financial crisis.”

Friday, October 31, 2008

First Click Free - Opportunity For The Publishers To Promote Previously Undiscoverable Content

Nick has posted his analysis of Google's First Click Free. This free service allows content providers to participate and promote their content by making the first click free when users discover the content via Google, and then enforce registration or subscription for the rest of the content.

I think this is a great idea! I am personally against the walled-garden approach and do not believe in registrations and subscriptions, simply because content providers haven't managed to convince me so far to register or subscribe to their content. This is a great opportunity for publishers to showcase their content by making the first link free, demonstrate the value proposition, and drive traffic towards the paid content.

The discussion on the service has so far centered around:
  • Google making other search engines' users second-class citizens and not sticking to an unmediated role.
  • Users' ability to trick content providers into granting access to all the pages by acting as if the request is coming from the Google bot.
I do not buy into the criticism around Google's unmediated role. No one is stopping the other search engines from building a similar service and working with the content providers. Though I would expect Google to somehow differentiate the first-click-free content from always-free content in the search results so that users don't feel they are being tricked.

I also do not buy into the argument that users can trick content providers by faking a request as if it is coming from the Google bot. Google can fairly easily solve this technological challenge and ensure that only the Google bot, and no one else, gets access to all the free content.
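For illustration, here is a hedged sketch of the kind of check a publisher could run to verify a crawler: reverse-DNS the client IP, confirm the hostname belongs to a Google domain, then forward-resolve it back to the IP. Treat this as an illustration of the technique, not Google's official support code, and the example IP is only a placeholder.

```python
# A hedged sketch of verifying a claimed Googlebot request via reverse and
# forward DNS. Illustrative only; a production check would cache results
# and handle more edge cases.
import socket

def looks_like_googlebot(client_ip: str) -> bool:
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)        # reverse DNS
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]      # forward DNS
        return client_ip in forward_ips
    except (socket.herror, socket.gaierror):
        return False

# Usage: a spoofed user agent coming from an ordinary ISP address would fail
# the reverse-DNS check, so it never sees the "free" crawler view.
print(looks_like_googlebot("192.0.2.1"))  # placeholder IP -> False
```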

As much as I appreciate and value this service, I suspect that many publishers won't get it. I hope publishers don't ask Google to pay for the traffic instead of being happy that Google is sending them traffic. I also see a challenge and an opportunity for publishers to redesign their websites to convert the first free click into a registration, a subscription, or a future visit.

Thursday, October 16, 2008

Greening The Data Centers

Recently Google published the Power Usage Effectiveness (PUE) numbers of its data centers. PUE is defined as the ratio of the total power consumed by a data center to the power consumed by the IT equipment in the facility. Google's data centers' PUE ranges from 1.1 to 1.3, which is quite impressive, though it is unclear why the data centers all have slightly different PUE. Are they designed differently, or are they not all tuned for energy efficiency? In any case I am glad to see that Google is committed to the Green Grid initiative and is making its measurement data and methods publicly available. This should encourage other organizations to improve the energy performance of their data centers.
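The arithmetic behind the metric is simple; the numbers below are made up purely to show it.

```python
# PUE as defined above: total facility power divided by IT equipment power.
# The figures are hypothetical.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# A facility drawing 1300 kW overall to run 1000 kW of IT gear has a PUE of
# 1.3, i.e. 300 kW goes to cooling, power distribution losses, lighting, etc.
print(pue(1300, 1000))  # 1.3
```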

The energy efficiency of a data center can be classified into three main categories:

1. Efficiency of the facility: PUE is designed to measure this kind of efficiency, which depends on how the facility that hosts a data center is designed: its physical location, layout, sizing, cooling systems, etc. Some organizations have gotten quite creative about improving this kind of efficiency by setting up an underground data center to achieve consistent temperatures, setting up data centers near a power generation facility, or even setting up their own captive power plant to reduce distribution loss from the grid and meet peak load demand.

2. Efficiency of the servers: This efficiency is based on the efficiency of the hardware components of the servers, such as CPUs, cooling fans, drive motors, etc. HP's green business technology initiative has made significant progress in this area to provide energy-efficient solutions. Sun has backed the organization OpenEco, which helps participants assess, track, and compare energy performance. Sun has also published its carbon footprint.

3. Efficiency of the software architecture: To achieve this kind of efficiency, the software architecture is optimized to consume less energy while providing the same functionality. Optimization techniques have so far focused on performance, storage, and manageability, ignoring the software architecture tuning that brings energy efficiency.

Round-robin is a popular load balancing algorithm for spreading load across servers, but it has been shown to be energy-inefficient. Another example involves compression: if data is compressed on disk, it requires CPU cycles to uncompress it, versus requiring more I/O calls if it is stored uncompressed. Everything else being equal, which approach requires less power? These are not trivial questions.
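The contrast below is an illustrative sketch (not a benchmark): round-robin keeps every server busy, and therefore powered, even under light load, while a simple consolidating policy packs work onto as few servers as possible so the rest can idle or sleep. The capacities and the load figure are hypothetical.

```python
# Round-robin vs. a consolidating (energy-aware) placement policy, sketched
# with invented numbers to show why the distribution pattern matters.
from itertools import cycle

def round_robin(requests, servers):
    assignment = {s: 0 for s in servers}
    for _, server in zip(range(requests), cycle(servers)):
        assignment[server] += 1
    return assignment

def consolidate(requests, servers, capacity_per_server):
    assignment = {s: 0 for s in servers}
    remaining = requests
    for server in servers:              # fill one server before waking the next
        take = min(capacity_per_server, remaining)
        assignment[server] = take
        remaining -= take
        if remaining == 0:
            break
    return assignment

servers = ["s1", "s2", "s3", "s4"]
print(round_robin(100, servers))                            # all four servers stay powered
print(consolidate(100, servers, capacity_per_server=60))    # two servers suffice
```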

I do not favor an approach where the majority of programmers are required to change their behavior and learn a new way of writing code. One way to optimize the energy performance of a software architecture is to adopt an 80/20 rule: 80% of the applications use 20% of the code, and in most cases that 20% is infrastructure or middleware code. It is relatively easy to educate and train this small subset of programmers to optimize the code and the architecture for energy efficiency. Virtualization could also help a lot in this area, since the execution layers can be abstracted into something that can be rapidly changed and tuned, without affecting the underlying code, to provide consistent functionality and behavior.

Energy efficiency cannot be achieved by tuning things in isolation; it requires a holistic approach. PUE ratios identify the energy lost before it reaches a server, an energy-efficient server requires less power to execute the same software compared to other servers, and an energy-efficient software architecture actually lowers the energy consumed for the same functionality the software provides. We need to invest in all three categories.

Power consumption is just one aspect of being green. There are many other factors, such as how a data center handles e-waste, the building materials used, the greenhouse gases from the captive power plant (if any) and the cooling plants, etc. However, tackling energy efficiency is a great first step in greening the data centers.

Friday, September 12, 2008

Google Chrome Design Principles

Many of you will have read the Google Chrome comic strip and test driven the browser. I have been following a few blog posts that discuss the technical and business impact, but let's take a moment and look at some of the fundamental architectural design principles behind this browser and its impact on the ecosystem of web developers.
  • Embrace uncertainty and chaos: Google does not expect people to play nice. There are billions of pages with unique code and rendering all of them perfectly is not what Google is after. Instead Chrome puts people in charge of shutting down pages (applications) that do not behave. Empowering people to pick what they want and allow them to filter out the bad experience is a great design approach.
  • Support the journey from pages to applications to the cloud: Google embraced the fact that the web is transitioning from pages to applications. Google took an application-centric approach to design the core architecture of Chrome and turned it into a gateway to the cloud and yet maintained the tab metaphor to help users transition through this journey.
  • Scale through parallelism: Chrome's architecture makes each application a separate process (a minimal illustration of this isolation follows the list). This architecture allows Chrome to better tap into multi-core hardware if it gets enough help from the underlying operating system. Not choosing a multi-threaded architecture reinforces the fact that parallelism across cores is the only way to scale. I see an opportunity in designing a multi-core adaptation layer for Chrome to improve process context switching, since it still relies on a scheduler to get access to a CPU core.
  • Don't change developers' behavior: JavaScript still dominates web development. Instead of asking developers to code differently, Google actually accelerated JavaScript via the V8 virtual machine. One of the major adoption challenges of parallel computing is composing applications to utilize a multi-core architecture; that composition requires developers to acquire and apply a new skill set and write code differently.
  • Practice traditional wisdom: Java introduced a really good garbage collector that was part of the core language from day one and did not require developers to explicitly manage memory. Java also had a sandbox model for applets (its client-side runtime) that made applets secure. Google recognized this traditional wisdom and applied the same concepts to JavaScript to make Chrome secure and memory-efficient.
  • Growing up as an organization: The Chrome team collaborated with Android to pick up WebKit and did not build a rendering engine of their own (not a common thing at Google). They used their existing search infrastructure to find the most relevant pages and tested Chrome against them. This makes it a good 80-20 browser (80% of the people always visit the same 20% of the pages). This approach demonstrates a high degree of cross-pollination. Google is growing up as an organization!
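Here is the minimal illustration of the process-per-tab point promised above, written in Python rather than Chrome's C++: each "page" runs in its own OS process, so a misbehaving page can be killed without touching the others. The page workloads are invented for the demo.

```python
# A toy illustration of process isolation: one hung "tab" is terminated while
# the healthy one finishes unaffected.
import time
from multiprocessing import Process

def render_page(url: str, hang: bool = False):
    if hang:
        while True:            # a badly behaved page that never yields
            time.sleep(1)
    print(f"rendered {url}")

if __name__ == "__main__":
    tabs = {
        "good": Process(target=render_page, args=("https://example.com/good",)),
        "bad": Process(target=render_page, args=("https://example.com/bad", True)),
    }
    for p in tabs.values():
        p.start()
    time.sleep(2)
    tabs["bad"].terminate()    # the equivalent of closing a misbehaving tab
    for p in tabs.values():
        p.join()
```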

Monday, August 18, 2008

Cisco and Juniper eyeing the long tail of consumers for their second act

Very few companies have excelled in business beyond 25 years on their first act alone, a product or a business model. Some companies recognize this early on and some don't. The networking giants Cisco and Juniper seem to get it and are looking for their second act. You don't wake up one day and drastically change your business model; getting to the second act takes a conscious decision based on a long-term strategy with very focused short-term execution.

Cisco started its "human network" efforts by acquiring Linksys, and Susan Bostrom completely rebranded Cisco a couple of years back. Consumerization of the brand was a big leap for an enterprise-centric organization trying to get closer to non-enterprise consumers. A few days back Cisco announced its Q4 results, and John Chambers emphasized that Cisco would invest in adjacencies.

"..and we will use this time as an opportunity to expand our share of customer spend and to aggressively move into market adjacencies."

On the other side of the networking world, Juniper recently hired Kevin Johnson, previously president of the Platforms and Services Division at Microsoft, as its CEO. Competing with Cisco has been challenging, and Juniper has had its own share of issues in the past, but let's not forget this company started during the dot-com era, had a spectacular run, survived the bust, and kept growing. But now is probably the right time to look for the second act.

For Cisco, what could the second act be? Other than the obvious long tail of consumer-centric human network strategy I see a couple of possibilities:

1) Data Center Virtualization:

Virtualization is a fast-growing market segment that has not yet saturated. The real boundaries of data center virtualization are blurry since it is a conglomeration of server, network, and storage virtualization. Customers don't necessarily differentiate between managing servers and backing up data across data centers.

This is an adjacency that Cisco can tap into with its current investments in data center virtualization switches such as the Nexus 7000, a strong ecosystem, and a great service organization (service revenue is 20% of product revenue). In fact this was speculated when Cisco announced this switch.

This could indeed strain, and make precarious, its relationships with vendors such as IBM who OEM Cisco's switches in their data centers. Companies with large ecosystems inevitably introduce "co-opetition" when they decide to sell into adjacencies that are currently served by their partners. They will have to learn to walk a tightrope.

Virtualization with scale can lead to rich business scenarios. Imagine a network virtualization switch that is not only capable of connecting data centers at high speed for real-time mirroring and backups but can also tap into the cloud for better network analysis. Routing protocols and network topology analysis require massive parallel processing that can be delivered from the cloud. This could improve many network and real-time voice and data management scenarios that otherwise wouldn't have been possible. Cisco's partnership with a cloud vendor could lead to some interesting offerings - think of it as network virtualization on steroids.

2) Network SaaS:

Network managed services has always been an interesting business with a variety of players such as IBM, Nortel, Lucent, etc. This could be one of the adjacencies that Cisco might pursue, making it a true SaaS and not just a managed service. I won't be surprised if Cisco acquires a couple of key SaaS players in the near future.

On-demand and SaaS have traditionally been considered a software and utility play. The networking companies already support the data centers that provision SaaS services, but they could go well beyond that and provide networking SaaS that provisions, monitors, and maintains networks as a true SaaS offering and not just a managed service. This could include everything from network management to security and related services. Traditionally SIs and partners have played this role, but networking companies could see this as an adjacency and jump into it, since it is a natural extension from hardware to data center to managed services to SaaS delivery. Instead of selling to a service provider who sells services to customers, an effective SaaS player can turn the model upside down by partnering with service providers instead of selling to them, and selling to an ever-growing long tail of consumers.

Monday, August 4, 2008

Social computing in enterprise software - leveraging Twitter like microblogging capabilities

Twitter was buzzing with posts about the recent L.A. earthquake nine minutes before the AP officially broke the news. This Twitter phenomenon once again proved that unintended consequences are always larger than intended consequences. People find amusing ways of using Twitter that we would never have imagined, ranging from keeping buddies updated, to getting caught drinking after calling in sick because the boss followed their tweets, to triggering a wave of media coverage that got someone out of jail. A recent proposal to use Twitter as an emergency system met with stark criticism citing Twitter's availability issues. I don't see this as an either-or proposition; the answer is "and." Let's use Twitter for what it is worth: a great microblogging and crowdsourcing tool to tap into the wisdom of the crowd with very little overhead and almost no barrier to entry.

Enterprise software vendors should seriously consider this social computing phenomenon and leverage its capabilities by integrating such a tool into their offerings. For instance, a social CRM application could use such a tool to help salespeople effectively follow, collaborate on, and close opportunities. A customer support system could provide transparency into the defect resolution process by having service representatives tweet their progress instead of logging it in semi-static IT ticketing systems.

Following individual tweets has its obvious advantages, but correlating multiple tweets could be extremely powerful and could yield interesting nontraditional usage models, such as running prediction markets, performing sentiment analysis, or tracking a recall of salmonella-tainted tomatoes in real time.
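As a toy sketch of that correlation idea, the snippet below filters a stream of messages by topic keywords and applies a crude word-list sentiment score. The tweets and word lists are invented; a real system would consume the Twitter API and use a proper sentiment model.

```python
# A toy sketch of correlating tweets around a topic (say, a produce recall).
# All data here is made up for illustration.
KEYWORDS = {"tomato", "salmonella", "recall"}
NEGATIVE = {"sick", "warning", "avoid", "recall"}
POSITIVE = {"safe", "cleared", "resumed"}

def matches_topic(tweet: str) -> bool:
    words = set(tweet.lower().split())
    return bool(words & KEYWORDS)

def crude_sentiment(tweet: str) -> int:
    words = set(tweet.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

stream = [
    "FDA warning: avoid raw tomato until salmonella source is found",
    "Our salsa is back, tomato supply cleared and safe",
    "Lunch was great today",
]
relevant = [t for t in stream if matches_topic(t)]
print([(t, crude_sentiment(t)) for t in relevant])
```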

Thursday, July 24, 2008

Experimental economics helps solve complex business problems

How do you predict demand from your distributors? Would you try demand simulation, predictive analytics, or a complex mathematical model? Try experimental economics. Wired points us to a story (found via Techdirt) about Kay-Yut Chen, an experimental economist at HP who is solving complex demand forecast problems.

One of Chen's recent projects involved finding a way for H.P. to more accurately predict demand from its nine distributors, who collectively sell as much as $3 billion worth of H.P.'s products. The problem? Its distributors' forecasts for demand were frequently off by as much as 100 percent, wreaking havoc on H.P.'s production planning.

Chen's solution to the planning problem, which H.P. intends to test soon with one distributor, was to develop an incentive system that rewarded distributors for sticking to their forecasts by turning those forecasts into purchase commitments. In the lab, the overlap between distributors' forecasts and their actual orders using this system increased to as high as 80 percent. "That's pretty astonishing given that the underlying demand is completely random," Chen says.

Human beings are terrible at making rational decisions, and complex problems such as demand forecasting cannot really be solved by complex modeling algorithms or predictive analytics alone. Applying the economics of incentives to such problems is likely to yield better results. Freakonomics explains the creative use of the economics of incentives in great depth. Dan Ariely writes in Predictably Irrational about people predictably making irrational decisions and how that breaks the rules of traditional economics and free markets, which are purely based on supply and demand and ignore human irrationality.

There is a lesson here for enterprise software vendors: design human-centric software that supports human beings in complex decision-making processes. The good news is that I do see enterprise software converging towards social computing. Topics such as security that have been considered highly technical are being examined through a human behavior lens, ranging from cognitive psychology to the anthropology of religion.

I would welcome a range of tools that could help experimental economics gain popularity and dominance in mainstream business. For instance, behavior-based A/B testing can be set up in a lab to test hypotheses based on experimental economics, and the results of the experiment could be fed directly to a tool that reconfigures an application or a website in real time.
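A minimal sketch of that feedback loop might look like the following: assign users to variants, record a behavioral outcome, and let the better-performing variant become the live configuration. The variant names, assignment scheme, and outcome data are all hypothetical.

```python
# A minimal A/B feedback-loop sketch: deterministic bucketing, outcome
# collection, and picking the winning variant as the new configuration.
import hashlib
from collections import defaultdict

VARIANTS = ["flat_price", "incentive_price"]

def assign(user_id: str) -> str:
    bucket = int(hashlib.sha1(user_id.encode()).hexdigest(), 16) % len(VARIANTS)
    return VARIANTS[bucket]

outcomes = defaultdict(list)          # variant -> observed conversions (0/1)
for user, converted in [("u1", 1), ("u2", 0), ("u3", 1), ("u4", 1), ("u5", 0)]:
    outcomes[assign(user)].append(converted)

def conversion_rate(values):
    return sum(values) / len(values) if values else 0.0

live_config = max(outcomes, key=lambda v: conversion_rate(outcomes[v]))
print({v: conversion_rate(vals) for v, vals in outcomes.items()})
print("reconfigure application to:", live_config)
```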

Monday, July 21, 2008

SaaS platform pitfalls and strategy - Part 2

In part 1, I discussed my views on the top 10 mistakes that vendors make while designing a SaaS platform, as described in a post at GigaOM. This post, part 2, has my strategic recommendations to SaaS vendors on some important topics that are typically excluded from the overall platform strategy.

Don't simply reduce TCO, increase ROI: According to an enterprise customer survey carried out by McKinsey and SandHill this year, the buying centers for SaaS are expected to shift towards the business with less and less IT involvement. A SaaS vendor should design a platform that not only responds to the changing and evolving business needs of a customer but can also adapt to a changing macroeconomic climate to serve customers better. Similarly, a vendor should carve out a go-to-market strategy targeting the business to demonstrate increased ROI and not just reduced TCO, even if they are used to selling a highly technical component to IT.

The Long Tail: A SaaS approach enables a vendor to up-sell a solution to existing customers that is just a click away and does not require any implementation effort. A vendor should design a platform that can identify the customer's ongoing needs based on their current information consumption, usage, and challenges, and tap into a recommendation engine to up-sell them. A well-designed platform should allow vendors to keep upgrades simple, customers happy, and users delighted.

Hybrid deployment: The world is not black and white for customers; the deployment landscape is almost never SaaS-only or on-premise-only. Customers almost always end up with a hybrid approach. A SaaS platform should support integration scenarios that span from SaaS to on-premise. This is easier said than done, but if done correctly SaaS can start replacing many on-premise applications by providing a superior (non)ownership experience. A typical integration scenario could be a recruitment process that an applicant begins outside the firewall on a SaaS application, with the process gradually moving that information into an enterprise application behind the firewall to complete the new-hire workflow and provision the employee into the system. Another scenario could be to process lead-to-order on SaaS and order-to-cash on-premise.
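A hedged sketch of that recruitment hand-off: an integration job running behind the firewall pulls newly submitted applicants from the SaaS API and pushes them into the on-premise HR system. Both endpoints and field names are hypothetical placeholders, not a real vendor's API.

```python
# Illustrative SaaS-to-on-premise sync; the URLs and payload fields are
# invented for the example.
import json
import urllib.request

SAAS_API = "https://recruiting.example-saas.com/api/applicants?status=submitted"
ONPREM_API = "http://hr.internal.example.local/api/new-hires"

def fetch_new_applicants():
    with urllib.request.urlopen(SAAS_API) as resp:   # outbound call, firewall-friendly
        return json.load(resp)

def push_to_onprem(applicant):
    payload = json.dumps({"name": applicant["name"], "email": applicant["email"]}).encode()
    req = urllib.request.Request(ONPREM_API, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def sync():
    for applicant in fetch_new_applicants():
        push_to_onprem(applicant)    # completes the new-hire workflow internally

if __name__ == "__main__":
    sync()
```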

Ability to connect to other platforms: It would be a dire mistake to assume the standalone existence of any platform. Any and all platforms should have open, flexible, and high-performance interfaces to connect to other platforms. Traditionally the other platforms were standard enterprise software platforms, but now there is a proliferation of social network platforms, and a successful SaaS player will be the one who can tap into such organically growing social networking platforms. The participants of these platforms are the connectors for an organization and could speed up cross-organizational SaaS adoption across silos that have traditionally been on-premise consumers.

Built for change: Rarely is a platform designed so that it can predict the technical, functional, and business impact when a new feature is included or an existing feature is discarded. Take internationalization (i18n) as an example. The challenges associated with supporting i18n are not necessarily the resources or money required to translate the content into many languages (Facebook crowdsourced that) but designing platform capabilities that can manage content in multiple languages efficiently. Many platform vendors make a conscious choice (rightfully so) not to support i18n in early versions of the platform. However, rarely does an architect design the current platform so that it can be changed predictably in the future to include a feature that was omitted. Designing a platform for current requirements and designing for future requirements are not mutually exclusive, and a good architect should be able to draw a continuum that has change predictability.

Virtualize everything: Virtualization can insulate a platform from ever-changing delivery options and allow vendors to focus on the core to deliver value to the applications built on the platform. A platform should not be married to a specific deployment option. For instance, a vendor should be able to take the platform off Amazon's cloud and put it on a different cluster without significant effort and disruption. Trends such as cloud computing have not yet hit the point of inflection, and the deployment options will keep changing; vendors should pay close attention to the maturity curve and hype cycle and make intelligent choices based on calculated risk.

Vendors should also virtualize the core components of the platform, such as multi-tenancy, and not just limit their virtualization efforts to the deployment options. Multi-tenancy can be designed in many different ways at each layer, such as partitioning the database, shared-nothing clusters, etc. The risks and benefits of these approaches for achieving non-functional characteristics such as scalability, performance, and isolation change over time. Virtualizing the multi-tenancy allows a vendor to manage the implementation, deployment, and management of a platform independent of constantly moving building components and hence guarantee the non-functional characteristics.
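One way to read "virtualizing multi-tenancy" is to hide the tenancy scheme behind an interface so the vendor can swap a partitioned-database implementation for a shared-schema one without touching application code. The sketch below shows that idea under invented names and connection strings; it is an illustration of the design choice, not a reference implementation.

```python
# A minimal tenancy abstraction: application code asks the strategy where a
# tenant's data lives and how queries are scoped; implementations can change
# behind the interface.
from abc import ABC, abstractmethod

class TenancyStrategy(ABC):
    @abstractmethod
    def connection_for(self, tenant_id: str) -> str: ...
    @abstractmethod
    def scope_query(self, tenant_id: str, query: str) -> str: ...

class DatabasePerTenant(TenancyStrategy):
    def connection_for(self, tenant_id):
        return f"postgres://db-{tenant_id}.internal/app"
    def scope_query(self, tenant_id, query):
        return query                       # isolation comes from the separate database

class SharedSchema(TenancyStrategy):
    def connection_for(self, tenant_id):
        return "postgres://shared-db.internal/app"
    def scope_query(self, tenant_id, query):
        return f"{query} WHERE tenant_id = '{tenant_id}'"   # isolation enforced per row

def load_invoices(strategy: TenancyStrategy, tenant_id: str):
    conn = strategy.connection_for(tenant_id)
    sql = strategy.scope_query(tenant_id, "SELECT * FROM invoices")
    return conn, sql                       # a real app would execute sql on conn

print(load_invoices(DatabasePerTenant(), "acme"))
print(load_invoices(SharedSchema(), "acme"))
```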

Don't bypass IT: Instead, make friends with them and empower them to serve users better. Even if IT does not influence many SaaS purchase decisions, IT is a politically well-connected and powerful organization that can help vendors in many ways. Give IT what they really want in a platform, such as security, standardization, and easy administration, and make them mavens of your products and platform.

Platform for participation: Opening up a platform to the ecosystem should not be an afterthought; it should be a core strategy for platform development and consumption. In its early years eBay charged developers to use its API, and that inhibited growth, which later forced eBay to make it free, a decision that helped eBay grow exponentially. I would even suggest open sourcing a few components of the platform and allowing developers to use the platform the way they want, without SaaS being the only deployment option.

Platform agnostic: The programming languages, hardware and deployment options, and UI frameworks have been changing every few years. A true SaaS platform should be agnostic to these building components and provide many upstream and downstream alternatives to build applications and serve customers. This may sound obvious, but vendors do fall into the "cool technology" trap, and that devalues the platform over time due to inflexibility in adapting to a changing technology landscape.

Saturday, July 12, 2008

Make to think and think to make - Design thinking helps a start-up radio show compete with NPR's Morning Edition

The producers of the upstart radio show Takeaway worked with the d.school at Stanford to apply a design thinking approach to their show, which competes with NPR's Morning Edition. It is quite an interesting story about how a legacy media industry can discard a traditional approach and embrace design thinking to rapidly iterate on the design of a radio show.

"A three-day crash course taught the producers the basic steps of d.school innovation: observe, brainstorm, prototype, and implement; repeat as necessary."


"The program's central idea is a daily question that audiences are asked to riff upon, either by calling in or by emailing. Their responses are then woven into the rest of the show's programming."

This is not spelled out in so many words in the story, but it is a good example of user-centered and participatory design with a crowdsourcing twist to it.

"But recognizing shortcomings and criticism and iterating quickly is one of the design process's core principles. The students in a d.school course called Design + Media, who are using the show as a class project, are helping producers generate ideas and track online response. For example, they're following Twitter streams to find out which questions and other parts of the broadcast are producing the strongest reactions."


Once again this story reinforces that design is an ongoing process, and that design thinking is not about talking but about making, and about generating more ideas while making in order to change what you just made.

Monday, July 7, 2008

SaaS platform - design and architecture pitfalls - Part 1

I cannot overemphasize how critical it is to get the SaaS platform design right upfront. GigaOM has a post that describes the top 10 mistakes vendors make while designing a SaaS platform. I would argue that many of these mistakes are not specific to a SaaS platform but apply to any platform. I agree with most of the mistakes and recommendations; however, I have quite the opposite view on the rest. I also took the opportunity to think about some of the design and architectural must-have characteristics of a SaaS platform, which I will describe in part 2 of this post.

1) Failing to design for rollback

"...you can only make one tweak to your current process, make it so that you can always roll back any code changes..."

This is a universal truth for any design decision for a platform, irrespective of the delivery model, SaaS or on-premise. eBay makes a good case study for understanding the code change management process, called "trains," that can track down code in a production system for a specific defect and roll back only those changes. A philosophical mantra for architects and developers would be not to make any decisions that are irreversible. Framed positively: prototype as fast as you can, fail early and often, and don't go for a big-bang design that you cannot reverse. Eventually the cumulative efforts will lead you to a sound and sustainable design.
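One common rollback-friendly pattern (a sketch, not eBay's "trains" system) is to ship the new code path behind a flag that can be flipped off at runtime, so a bad change is rolled back by configuration rather than by redeploying. The flag store below is an in-memory dict purely for illustration.

```python
# A minimal feature-flag sketch: the new path can be disabled without a code
# change, keeping the decision reversible.
FLAGS = {"new_checkout_flow": True}

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def new_checkout(cart):
    return f"new flow charged {sum(cart)}"

def legacy_checkout(cart):
    return f"legacy flow charged {sum(cart)}"

def checkout(cart):
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)           # the proven path stays available

print(checkout([10, 20]))                  # new flow
FLAGS["new_checkout_flow"] = False         # "rollback" without touching the code
print(checkout([10, 20]))                  # legacy flow
```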

2) Confusing product release with product success

"...Do you have “release” parties? Don’t — you are sending your team the wrong message. A release has little to do with creating shareholder value..."

I would not go to the extreme of celebrating only customer success and not release milestones. Product development folks do work hard towards a release, and a celebration is a sense of accomplishment and a motivational factor that has indirect shareholder value. I would instead suggest a cross-functional celebration: invite the sales and marketing people to the release party. This helps create empathy for the people in the field whom developers and architects never or rarely meet, and it could also be an opportunity for the people in the field to mingle, discuss, and channel the customer's perspective. Similarly, include non-field people while celebrating field success. This helps developers, architects, and product managers understand their impact on the business and gives them an opportunity to get to know who actually bought and started using their products.

5) Scaling through third parties

"....If you’re a hyper-growth SaaS site, you don’t want to be locked into a vendor for your future business viability..."

I would argue otherwise. A SaaS vendor, or any other platform vendor, should really focus on its core competencies and rely on third parties for everything that is non-core.

"Define how your platform scales through your efforts, not through the systems that a third-party vendor provides."

This is partially true. SaaS vendors do want to use Linux, Apache, or JBoss and still be able to describe the scalability of their platform in the context of these external components (which happen to be open source in this case). The partial truth is that you can still use the right components the wrong way and not scale. My recommendation to a platform vendor would be to be open and tell customers why and how they are using third-party components and how it helps them (the vendor) focus on their core and hence helps customers get the best out of the platform. A platform vendor should share best practices, gather feedback from customers and peers to improve its own processes and platform, and pass that feedback on to third parties to improve their components.

6) Relying on QA to find your mistakes:

"QA is a risk mitigation function and it should be treated as such"

The QA function has always been underrated and misunderstood. QA's role extends way beyond risk mitigation. You can only fix defects that you can find, and yes, I agree that mathematically it is impossible to find all the defects. That's exactly why we need QA people. Smart and well-trained QA people think differently and find defects that developers would never have imagined. QA people don't have any code affinity or selection bias, and hence they can test for all kinds of conditions that would otherwise be missed. Though I do agree that developers should put themselves in the shoes of the QA people and make sure that they rigorously test their code, run automated unit tests and code coverage tools, and not just rely on QA people to find defects.

8) Not taking into account the multiplicative effect of failure:

"Eliminate synchronous calls wherever possible and create fault-isolative architectures to help you identify problems quickly."


Avoiding synchronous calls and using swim-lane architectures are great concepts, but a vendor should really focus on automated recovery and self-healing and not just failure detection. Failure detection can help a vendor isolate a problem and mitigate the overall impact of that failure on the system, but for a competitive SaaS vendor that's not good enough. Raising MTBF (mean time between failures) is certainly important, but lowering MDT (mean downtime) is even more important. A vendor should design a platform based on some of the autonomic computing fundamentals.
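As a hedged sketch of self-healing rather than mere detection, the snippet below shows a supervisor that restarts a worker process when it dies, which is exactly what keeps mean downtime low. The worker simply simulates a crash; a real platform would add health checks, backoff, and alerting.

```python
# A toy supervisor loop: detect the failure, restart automatically, escalate
# only after repeated failures.
import random
import sys
import time
from multiprocessing import Process

def worker():
    time.sleep(random.uniform(0.5, 1.5))
    sys.exit(1)                                    # simulated crash

def supervise(restarts: int = 3):
    for attempt in range(restarts):
        p = Process(target=worker)
        p.start()
        p.join()                                   # returns when the worker exits
        if p.exitcode != 0:
            print(f"worker died (attempt {attempt + 1}), restarting automatically")
    print("giving up and paging a human")          # escalation after repeated failures

if __name__ == "__main__":
    supervise()
```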

10) Not having a business continuity/disaster recovery plan:

"Even worse is not having a disaster recovery plan, which outlines how you will restore your site in the event a disaster shuts down a critical piece of your infrastructure, such as your collocation facility or connectivity provider."

Having a disaster plan is like posting a sign by an elevator instructing people not to use it when there is a fire. Any disaster recovery plan is, well, just a plan unless it is regularly tested, evaluated, and refined. Fire drills and post-drill debriefs are a must-have.

I will describe some of the design and architectural must-have characteristics of a SaaS platform in part 2 of this post.

Saturday, May 3, 2008

Cloud computing: adoption fears and strategic innovation opportunities

A recent CIO.com article lists the top three concerns that IT executives have regarding the adoption of cloud computing: security, latency, and SLAs. These are real concerns, but I don't see them inhibiting the adoption of cloud computing.

Adoption fears

Security: Many IT executives make decisions based on perceived security risk instead of real security risk. IT has traditionally feared the loss of control with SaaS deployments, based on the assumption that if you cannot control something, it must be insecure. I recall the anxiety around web services deployments, where people got really worked up about the security of web services because users could invoke an internal business process from outside the firewall.

IT will have to get used to the idea of software delivered from outside the firewall that gets mashed up with on-premise software before it reaches the end user. The intranet, extranet, DMZ, and internet boundaries have started to blur, and this does impose some serious security challenges, such as relying on a cloud vendor for the physical and logical security of the data, or authenticating users across firewalls by relying on the vendor's authentication schemes. But treating challenges as fears is not a smart strategy.

Latency: Just because something runs on a cloud does not mean it suffers from latency. My opinion is quite the opposite: cloud computing, done properly, has opportunities to reduce latency thanks to architectural advantages such as massively parallel processing capabilities and distributed computing. Web-based applications went through the same perception issues in the early days, and now people don't worry about latency while shopping at Amazon.com or editing a document on Google Docs served to them over a cloud. The cloud is going to get better and better, and IT has no strategic advantage in owning and maintaining data centers. In fact the data centers are easy to shut down but the applications are not, and CIOs should take any and all opportunities they get to move the data centers away if they can.

SLA: The recent Amazon EC2 meltdown and RIM's network outage created a debate around the availability of highly centralized infrastructure and its SLAs. The real problem is not a bad SLA but the lack of one. IT needs a phone number it can call in an unexpected event, and an upfront estimate of the downtime to manage expectations. Maybe I am oversimplifying, but this is the crux of the situation. The fear is not so much about 24x7 availability, since an on-premise system hardly promises that; what bothers IT the most is the inability to quantify the impact on the business in the event of non-availability of a system and to set and manage expectations upstream and downstream. The non-existent SLA is a real issue, and I believe there is a great service innovation opportunity for ISVs and partners to help CIOs with the adoption of cloud computing by providing a rock-solid SLA and transparency into the defect resolution process.
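To make the expectation-setting concrete, a small calculation like the one below translates an availability SLA into allowed downtime, which is the number IT actually needs when quantifying impact upstream and downstream. The SLA figures are generic examples.

```python
# Converting availability percentages into allowed downtime per year.
HOURS_PER_YEAR = 24 * 365

def allowed_downtime_hours(availability: float) -> float:
    return HOURS_PER_YEAR * (1 - availability)

for sla in (0.99, 0.999, 0.9999):
    print(f"{sla:.2%} availability -> {allowed_downtime_hours(sla):.1f} hours of downtime/year")
```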

Strategic innovation opportunities

Seamless infrastructure virtualization: If you have ever attempted to connect to Second Life from behind a firewall, you know that it requires punching a few holes into the firewall to let certain unique transports pass through, and that's not a viable option in many cases. This is an intra-infrastructure communication challenge. I am glad to see IBM's attempt to create a virtual cloud inside the firewall to deploy some of the regions of Second Life with seamless navigation in and out of the firewall. This is a great example of single sign-on that extends beyond network and hardware virtualization to form infrastructure virtualization with seamless security.

Hybrid systems: The IBM example also illustrates the potential of a hybrid system that combines an on-premise system with remote infrastructure to support seamless cloud computing. This could be a great start for the many organizations at the bottom of the S-curve of cloud computing adoption. Organizations should consider pushing non-critical applications onto a cloud with loose integration with on-premise systems to begin the cloud computing journey, and as the cloud infrastructure matures and some concerns are alleviated, IT could consider pushing more and more applications onto the cloud. Google App Engine is a good example of a way to start creating applications on-premise that can eventually run on Google's cloud, and Amazon's AMI catalog is expanding day by day to allow people to push their applications onto Amazon's cloud. Here is a quick comparison of Google's and Amazon's cloud computing efforts. Elastra's solution to deploy EnterpriseDB on the cloud is also a good example of how organizations can outsource IT to the cloud.

Service innovation: I see many innovation opportunities for ISVs and partners to step in as trusted middlemen and provide services to fuel cloud computing adoption. SugarCRM recently announced a reseller partnership with BT to reach out to 1.2 million business customers of BT and sell them on-premise and SaaS CRM. I expect the ecosystem around cloud computing and SaaS vendors to grow significantly in the next few years.

Thursday, April 10, 2008

How the rise of massive parallel processing capabilities is changing the dynamics of computing

The rise of the massive parallel processing capabilities that multi-core architectures bring offers plentiful opportunities and challenges. The recent $20M investment by Microsoft and Intel in parallel computing research has created quite a buzz around this topic. How well we can utilize the multi-core architecture without rewriting applications is a major challenge, since most applications are not designed to leverage the potential of multiple cores to its fullest extent. Current multi-threaded applications leverage parallel processing to some extent, but their threads do not scale beyond a few cores; the current applications won't run any slower on more cores, but they will run relatively slower compared to new applications that can leverage this parallel architecture. The best way to seize the opportunity of utilizing multiple cores is to have a single-threaded application seamlessly utilize the potential of a multi-core architecture. This does require significant work to rewrite the algorithms and middleware, but this is essentially a long tail and has a significant business value proposition.

The very concept of a single-threaded application relying on concurrency at the algorithm level is going to challenge many developers, since this is fundamentally a different way of looking at the problem. Algorithm design approaches have been changing to make algorithms explicitly aware of the available cores, so that the algorithm can dispatch data to any core without worrying about multi-threading issues such as deadlocks and locking, and let workers communicate with each other using asynchronous messages without sharing any common state. An algorithm always works better if it knows more about the data; if this were not the case, a brute-force algorithm would be the best one to solve any problem. The more you know about the data, the more you can fine-tune the algorithm, which helps you discover insights that further tighten the algorithm. Increasingly efficient processing capabilities could help deal with a large data set early on without investing too much time upfront in an algorithm design that discards certain data. When you add abundant main memory into this mix, it has profound implications: algorithms that were originally designed to access data from disk are no longer efficient now that the data they need is always available in addressable main memory with different data structures and indexes. Cryptographic algorithms are designed to make sure an attack cannot be completed in a reasonable amount of time even given plentiful resources. We should look at these design principles to do the opposite: replicate the reverse semantics in other domains to actually make use of massive parallel processing capabilities.
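
To make the "no shared state, communicate by messages" idea concrete, here is a minimal sketch using Python's multiprocessing module (the workload and numbers are made up for illustration): each worker owns its slice of the data, nothing is shared, and results flow back over a queue rather than through locks.

    # Minimal message-passing parallelism: workers share no state and report
    # results over a queue. The workload (summing squares) is purely illustrative.
    from multiprocessing import Process, Queue

    def worker(chunk, out_queue):
        # Each worker owns its chunk; no locks, no shared memory.
        out_queue.put(sum(x * x for x in chunk))

    if __name__ == '__main__':
        data = list(range(1, 1001))
        num_workers = 4
        chunk_size = len(data) // num_workers
        chunks = [data[i * chunk_size:(i + 1) * chunk_size] for i in range(num_workers)]

        results = Queue()
        procs = [Process(target=worker, args=(chunk, results)) for chunk in chunks]
        for p in procs:
            p.start()
        total = sum(results.get() for _ in procs)
        for p in procs:
            p.join()

        print('sum of squares: %d' % total)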

I foresee an innovation opportunity in new functional languages such as Erlang and Haskell, or in new runtimes for current dynamic and object-oriented languages, to tap into this potential. Companies such as RapidMind, and Google's acquisition of PeakStream, also indicate growing interest in this area. Cheap storage first, followed by massive parallel processing capabilities and main-memory computing, is going to change the computing dynamics and redefine many things in the coming years. We are ripe for the change. The whole is greater than the sum of its parts. Yes, indeed.

Monday, March 31, 2008

Research labs and innovation priorities in an IT organization

Earlier this month HP announced that HP Labs is going to focus on 20-30 large projects going forward instead of a large number of small projects. If you compare this with the top 10 strategic priorities for 2008 that Gartner announced late last year, you will find a lot of similarities, even though HP's projects are not necessarily designed to address only short-term priorities. A quick comparison:

HP : Gartner
  • Sustainability: Green IT
  • Dynamic Cloud Services: Web Platform & SOA + Real World Web
  • Information explosion: Metadata management + Business Process Modeling
“The steps we’re taking today will further strengthen Labs and help ensure that HP is focused on groundbreaking research that addresses customer needs and creates new growth opportunities for the company.”

The role of a traditional "lab" in an IT organization has changed over the last few years to focus on growth and value projects that strategically align with the company's operational, strategic, management, and product innovation priorities. Researchers have been under pressure to contribute significantly to efforts that are directly linked to the product lines. There are pros and cons to being research-oriented versus product-oriented, and it is critical that researchers balance their efforts. I firmly believe that labs should be an integral part of an organization and that anything they do should have a direct connection to the organization.

“To deliver these new, rich experiences, the technology industry must address significant challenges in every area of IT – from devices to networks to content distribution. HP Labs is now aligned to sharpen its focus on solving these complex problems so HP and its customers can capitalize on this shift.”
Traditionally, labs have been perceived as a cool place to work where you can do whatever you want without any accountability towards the company's strategy, and this poses serious credibility issues for some labs regarding their ability to contribute to the bottom line. I agree that a research organization should be shielded from the rest of the organization, or incubated to a certain extent, to protect the ongoing business from disruption and to allow researchers to focus and flourish, but eventually the efforts should be integrated well into the organization, with stakeholders having enough skin in the game to adopt, productize, and possibly commercialize what comes out of a lab. The credibility of a lab within an organization goes a long way, since the product development organizations largely control what customers actually use, at least in IT organizations. Many innovations that come out of a lab may never see the light of day if the research organization lacks the credibility to deliver what customers actually want. Innovation by itself is not very useful until it is contextualized with customers' and users' needs to solve specific problems.

Tuesday, March 25, 2008

Alaska Airlines expedites the check-in process through design-led innovation

Southwest Airlines is known to have cracked the problem of how to effectively board an aircraft, and Disney specializes in managing crowds and long lines. Add one more to this list: Alaska Airlines. Fast Company is running a story on how Alaska Airlines has been redesigning the check-in area to reduce the average check-in time at the Anchorage airport. This is a textbook example of design-led innovation and has all the design thinking and user-centered design elements: need finding, ethnography, brainstorming, rapid prototyping, and end-user validation. Alaska Airlines visited various places to learn how others manage crowds and applied those learnings in the context of their own problem, supported by contextual inquiry of the check-in agents. They built low-fidelity prototypes and refined them based on early validation.

The story also mentions that Delta is trying a similar approach at its Atlanta terminal, where passengers can see where they are going next. The mental rehearsal, or mental imagery, aspects of cognitive psychology have been successfully applied to improve athletic performance. There have been some experiments outside of sports, and this is a very good example. Imagine an airport layout where the security checkpoint is visible from the check-in line. This could make people mentally rehearse the security check while they wait for their boarding passes, so that they are more likely to complete the actual security check much faster.

What makes this story even more compelling is that they managed to satisfy customers by reducing the average wait time and still saved costs, proving that saving money and improving customer experience are not mutually exclusive. Innovation does not have to be complicated. They also had a holistic focus on experience design, where a customer's experience starts on the web and ends at the airport. Some people suggest airplane-shaped boarding areas to expedite boarding. This is an intriguing thought, and it is exactly the kind of thinking we need to break out of the traditional mindset and apply the design-thinking approach to champion the solution. I am all in for innovations that speed up check-in and boarding, as long as I don't have to wear one of those bracelets that could give people debilitating shocks!

Wednesday, March 19, 2008

User-generated content, incentives, reputation, and factual accuracy

Not all user-generated content is factually accurate, and it does not have to be that way. I don't expect Wikipedia to be completely accurate, and somehow many people have a problem with that. Traditionally, knowledge bases that require high factual accuracy upfront have been subject to slow growth due to a high barrier to entry. Wikipedia's predecessor, Nupedia, had a long upfront peer review process that hindered growth and eventually led to the current Wikipedia model as we all know it. Google Knol is trying to solve this problem by introducing incentives to promote the quality of the thoughtocracy. I haven't seen Knol beyond this mockup, and I am a little skeptical of a system that claims to bring in accuracy and wild growth at the same time. I would be happy to be proven wrong here.

For an incentive-based system it is important not to confuse factual accuracy with the popularity of the content. Popular content is not necessarily accurate. If we believe that incentives can bring in accuracy, we need to be careful to tie incentives to accuracy and not to popularity, and that is much harder to accomplish, since the incentive scheme needs to rate the content and the author based on sources and upfront fact-checking, not just traffic, which only indicates popularity. Mahalo is trying to solve the popularity problem, not the accuracy problem. There have been some attempts to try out a reputation model for Wikipedia, but the success has been somewhat underwhelming. I see many opportunities and much potential in this area, especially if you can cleverly combine reputation with accuracy.
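
A hypothetical sketch of what "rate the content on sources and fact-checking, not traffic" might look like; every signal name and weight here is invented to illustrate the separation, not drawn from any real system:

    # Hypothetical content score that deliberately ignores raw traffic.
    # Signal names and weights are invented for illustration only.

    def accuracy_score(cited_sources, verified_sources, failed_fact_checks):
        """Reward verifiable sourcing, penalize failed fact checks."""
        if cited_sources == 0:
            return 0.0
        sourcing = verified_sources / float(cited_sources)   # 0..1
        penalty = 0.2 * failed_fact_checks
        return max(0.0, sourcing - penalty)

    def incentive_payout(accuracy, base_amount=10.0):
        """Tie the incentive to accuracy; page views never enter the formula."""
        return base_amount * accuracy

    # Example: 8 of 10 sources verified, 1 failed fact check.
    score = accuracy_score(cited_sources=10, verified_sources=8, failed_fact_checks=1)
    print('accuracy: %.2f, payout: %.2f' % (score, incentive_payout(score)))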

In reality, what we need is a combination of restriction-free content creation, fact-checking, incentives, and reputation. These models are not mutually exclusive, are not necessarily required at all times, and should not be enforced for all content across all users. Guided or informative content tends to be popular irrespective of factual accuracy, since it is positioned as a guide and not as fact. People who are in the business of working off facts, such as media reporters or students working on a thesis, should watch out for content that is useful, looks reputable and current, and may appear factual but is plain wrong, and should put it through a systematic, due-diligence fact-checking process.

Friday, March 14, 2008

Ray Ozzie on service strategy and economics of cloud computing

In a recent interview by Om Malik, Ray Ozzie discusses Microsoft's service strategy and economics of cloud computing.

Desktop is not dead: He reaffirms that the desktop is not going away, but that it needs to become more and more network- and web-aware to support computing and deployment needs.

"A student today or a web startup, they don’t actually start at the desktop. They start at the web, they start building web solutions, and immediately deploy that to a browser. So from that perspective, what programming models can I give these folks that they can extend that functionality out to the edge?........There are things that the web is good for, but that doesn’t necessarily mean that for all those things that the desktop is not good anymore."

Microsoft did try to ignore the Internet in the early days, and they obviously don't want to repeat the same mistake. The desktop is certainly not going away, but there are plenty of innovation opportunities around the operating system. I am happy that Microsoft is considering a user-centric approach for the desktop instead of an application-centric one.

Economics of cloud computing: There are massive efforts already underway in this direction, and we have seen some results, such as Microsoft Live.

"I think we’re well positioned, because we have a selfish need to do these things, and because we have platform genetics. We have the capacity to invest at the levels of infrastructure that are necessary to play in this game. So I think we’ll be well positioned."

This is simply the truth that people need to recognize. Cloud computing is about scale, and scale requires large investments in infrastructure with no guaranteed near-term return. This is one of those boats that does not have an obvious early ROI, but you don't want to miss it. Microsoft will certainly be well positioned on both the consumer and supplier sides. They can run their productivity suite and other applications on the cloud and at the same time give partners and ISVs opportunities to author and run applications on the cloud.

"But, we have every reason to believe that it will be a profitable business. It’s an inevitable business. The higher levels in the app stack require that this infrastructure exists, and the margins are probably going to be higher in the stack than they are down at the bottom."

The business value proposition of composition on the cloud, the ability to build applications on this platform, is tremendous, and that's the revenue stream Microsoft can count on. Profit expectations from the street are inevitable and there is no room for loss, but raising prices at the bottom of the stack would raise the barrier to entry, while competition at the commodity level would yield thin margins and risk slow adoption. He cites Amazon's strategy of setting prices low despite an opportunity to raise them without losing customers.

Cloud computing is not a zero-sum game, but organizations will be forced to make money somewhere to sustain the infrastructure investment and ongoing maintenance, and perhaps rake in a decent profit on top of it.

Tuesday, March 11, 2008

Bottom-up software adoption – an opportunity or a threat?

I have been observing a trend where business users or information workers become more informed and educated about the range of productivity software available in the marketplace and start using it without any help or consent from IT. If taken as a threat, IT could attempt to block these applications, and frustrated users would still find a way to work around the restrictions; if taken as an opportunity, IT takes the hint and standardizes these options across the organization to speed up adoption. The latter is a win-win situation: IT gets beta users doing acceptance testing without being asked, and IT can focus on more strategic tasks, empower users, and support users' aspirations and goals by providing them with the tools they need. This trend follows the rule of the wisdom of crowds. If software is good enough, it will bubble up. Firefox is a good example: users started using it well before IT decided to include it on the standard machines given out to users.

I can understand why enterprise applications such as ERP and SCM are not likely candidates for bottom-up adoption: they require heavy upfront customization, are tightly integrated with the organization's business processes, and have complex requirements such as compliance, process integration, workflow, and access control that require IT's involvement. This is slowly changing as SaaS becomes more popular and applications can reach users directly, overcoming the adoption barriers by eliminating upfront IT involvement. Zoho People is a good example of such an application. Salesforce.com has achieved bottom-up departmental adoption despite IT's traditional claim to CRM ownership. Departmental solutions do have the drawback of becoming silos, making cross-department integration difficult, which can result in bad data quality due to redundancy and a lack of effective collaboration. To overcome some of these concerns, collaboration is a key feature in any application that is a likely candidate for bottom-up adoption. Google Apps is a good example: they introduced a feature that allows users to discover each other and potentially collaborate across departments in an organization.

Decision-making is tipping towards information workers, and many business users don't necessarily see the need for some pre-installed on-premise solutions. The cultural shift to blend personal and professional life is also making certain web-based tools their choice. If I were a vendor that finds a CIO sale a bit tricky, I would be watching this trend very closely.

Thursday, March 6, 2008

Blurring boundaries and blended competencies for retail and manufacturing supply chains

Widespread adoption of RFID in Supply Chain Management (SCM) and Supplier Relationship Management (SRM) systems diminishes the boundary between retail and manufacturing systems, and the respective competencies begin to blend as well. Today's supply chain goes beyond adding a few more warehouses or trucks. Think about the supply chain for a new Harry Potter book and you will have a completely different perspective on the timeliness and security of your supply chain orchestration.

Collaboration capabilities are the key competencies, and they become even more crucial when a supply chain is disrupted by an exception. Solutions should have the capability to handle exceptions. Some of the people I speak to in this domain tell me that a system typically does a pretty good job when things are fine, but when an exception occurs, such as a supplier backing out, people scramble to handle the disruption, and a system's ability to capture unstructured collaborative workflow in the context of structured data could go a long way. People don't want, and don't expect, the system to make decisions for them. They want systems that help and empower them to make decisions that make their supply chain leaner and smarter. They want an exception management system.
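
A minimal sketch of what "unstructured collaboration in the context of structured data" could mean in code; the fields, names, and scenario are all hypothetical, the point is simply a structured exception record that carries a free-form discussion thread alongside it:

    # Hypothetical exception record: structured supply chain data plus an
    # unstructured collaboration thread attached to it.

    class SupplyChainException(object):
        def __init__(self, exception_id, kind, purchase_order, supplier):
            self.exception_id = exception_id    # structured fields
            self.kind = kind                    # e.g. 'supplier_backed_out'
            self.purchase_order = purchase_order
            self.supplier = supplier
            self.thread = []                    # unstructured collaboration

        def add_note(self, author, note):
            """Capture the ad-hoc discussion in the context of the exception."""
            self.thread.append((author, note))

    exc = SupplyChainException('EX-1042', 'supplier_backed_out', 'PO-7781', 'Acme Corp')
    exc.add_note('planner', 'Acme cannot ship before the 15th; checking alternates.')
    exc.add_note('buyer', 'Backup supplier quoted 8% higher, need approval.')
    print('%d notes attached to %s' % (len(exc.thread), exc.exception_id))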

It would be naïve to say that retailers don't model capacity. For retailers it is not just about demand and supply but also about how to optimize shelf space, and that is part of the capacity modeling equation. Companies such as DemandTec have retail optimization solutions in this domain.

I see the supply chain as a capability and not a solution, and a well-designed supply chain could help companies achieve their just-in-time inventory mantra.

Tuesday, March 4, 2008

Open source licenses and their impact on commercialization

The choice of an open source license sparks a debate from time to time, and this time around it is about using GPL as a strategic weapon to force your competitors to share their code, versus using BSD and having faith in a proprietary solution derived from open source to reduce the barrier to entry into the market. I acknowledge the success of MySQL, but I won't attribute that entire success to the chosen license. Comparing open source licenses in the context of commercializing a database is a very narrow comparison. First of all, PostgreSQL and MySQL are not identical databases and don't have exactly the same customers; secondly, I see a database as an enabler for value added on top of it. EnterpriseDB is a great example of this value add, and I think it is very speculative to say whether it is an acquisition target or not. The real question is whether EnterpriseDB would have accomplished the same if PostgreSQL had used GPL instead of BSD.

I see plenty of opportunity in open source license innovation, and over time disruptive business models will force licenses to align with what businesses really need. The IP indemnification in GPLv3 is a classic example of how licenses evolve based on the commercial dynamics among organizations. We can expect licenses to become even more complex with wide adoption of SaaS delivery models, where a vendor is not shipping any software anymore.

People do believe in open source but may not necessarily accept that they have a legal obligation to contribute back to the open source community every time they do something interesting with it, and Richard Stallman would strongly disagree. Companies such as BlackDuck have built a successful business model on the very fact that vendors don't want to ship GPLed code. We should not fight the license; just be creative, embrace open source, and innovate!

Thursday, February 28, 2008

Business model innovation opportunities in designing SaaS channels

For SaaS, the business model challenges are far more complex and brutal than the technology or architectural challenges, and they get compounded when selling to an SMB. It has been argued that the marketplace success of complex-to-implement enterprise software is attributable, to a certain extent, to the channels, ISVs and VARs, since they step in and do the dirty work, and it is a very lucrative business for them. If the VARs are not selling it, customers probably won't buy as much. This has serious implications for SaaS as a delivery model. The fundamental benefits of SaaS, such as pay-as-you-go subscription models, try-before-you-buy, personalization instead of customization, and no physical box, are some of the factors that work against the SaaS vendor, since there aren't enough incentives for the indirect channels under the current business model.

If a vendor believes a product is so good that it does not need any (value-add) channels, the vendor can use the web as a platform for volume selling. Google recently slashed the price of Postini by 90%, a move to get rid of the channels, since they were an artificial cost barrier. Google believes it now has a much better shot at getting to the right customers at the right price. This move has upset some VARs, but it is all about your supplier being smarter than you.

This is the business model innovation that vendors should be paying attention to. Many enterprise software vendors have never sold over the web; they relied on a direct sales force driving around in their BMWs and selling to customers, and they also relied heavily on partners. When these vendors move towards a SaaS delivery model for volume selling to SMBs, they have the option to build new infrastructure or use existing infrastructure to volume-sell to customers. I also see the benefits of the web as a volume-selling platform for anything, not just software. I can imagine Amazon's e-commerce platform as a SaaS sales platform, and it is also not far-fetched for a software company in the business of providing a platform via SaaS delivery to sell SaaS software.

Many SaaS solutions originally designed for SMBs do get subscriptions from large enterprises as well, since in some cases a large enterprise's IT is fragmented into many departmental solutions. These departments do not want to deal with IT and would rather go for a SaaS delivery model even if it means limited functionality and no integration with other departments. This decentralized strategy is a nightmare for many CIOs, but in some cases loose integration can also mean higher productivity, and a CIO is willing to make that compromise. There are also behavioral issues a vendor will have to deal with: how to approach customers, and whether a customer is comfortable making a software purchase over the web where they relied on partners in the past.

This is an interesting trend that SaaS vendors should be watching, since it effectively changes the definition of an SMB. More and more knowledge workers are inclined to bypass IT if they have access to a better, easier-to-use solution. One of the new features of Google Apps targets this behavior: if you are a Google Apps user in one department, you can see who else has signed up for Google Apps (based on the domain name) and collaborate with those people. This is what I would call loose integration.
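
As a toy illustration of that domain-based discovery (the account list and function are hypothetical, not Google's actual implementation), grouping signed-up users by the domain in their email address is enough to surface colleagues from other departments:

    # Toy version of domain-based user discovery: group signed-up accounts by
    # email domain so users in different departments can find each other.
    # The account list is made up for illustration.
    from collections import defaultdict

    def colleagues_by_domain(accounts):
        groups = defaultdict(list)
        for email in accounts:
            domain = email.split('@')[-1].lower()
            groups[domain].append(email)
        return groups

    accounts = ['ann@example.com', 'bob@example.com', 'cho@other.org']
    print(colleagues_by_domain(accounts)['example.com'])
    # ['ann@example.com', 'bob@example.com']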

In a way, selling to small businesses is similar to selling to a set of individual consumers rather than to a business, since a lot of these small businesses behave like individual consumers. TurboTax is a good example for comparing channels across on-premise and SaaS delivery models. Intuit has separate sales and marketing channels for TurboTax: it partners with financial institutions such as Fidelity, Vanguard, and Bank of America to offer discounts on the online offering, and it also distributes coupons for the on-premise offering at brick-and-mortar discount stores such as Costco.

The SaaS delivery model, whether for an SMB or a large enterprise, has many channel challenges, and the concept and definition of these channels are likely to be redefined as SaaS adoption continues.

Wednesday, February 20, 2008

Scenario-based enterprise architecture - CIO’s strategy to respond to a change

Scenario-based planning is inevitable for an enterprise architect. Changing business models, organizational dynamics, and disruptive technology are some of the change agents that require an enterprise architecture strategy agile enough to respond to them. CIO.com has a post on a CIO's challenge of getting the enterprise architecture to respond to a possible change in strategic direction due to a new CEO.

For CIOs, the key question is how to turn IT into an asset and a capability that supports the business, and not an IT bottleneck that everyone wants to avoid or circumvent. Scenario-based strategic IT planning, transparent policies, and appropriate governance can keep the enterprise architecture from falling apart and build capabilities that serve business needs and provide competitive advantage.

Being tactical and strategic at the same time is what could make many CIOs successful. In my interactions with CIOs, I have found that some of their major concerns are organizational credibility and empowerment. A CIO is often seen as an inhibitor by business people, and it is the CIO's job to fix that perception. Being seen as someone who can respond to business needs quickly and proactively goes a long way towards fixing it. You cannot plan for every possible worst-case scenario, but you can at least keep your strategy nimble, put measures in place to react to the scenarios you had not planned for, and act ahead of time on the ones you did plan for.

Monday, February 11, 2008

Data encryption as a new class of DoS

Not too sure what to make of this argument. Experts from IBM Internet Security Systems, Juniper, and nCipher argue that data encryption is a new class of DoS. The post says "It's a new class of DoS attack.. If you can go in and revoke a key and then demand a ransom, it's a fantastic way of attacking a business." This does not make any sense. If someone can get your private key revoked, you have a lot more to worry about than data encryption.

It also says "Another risk is that over-zealous use of encryption will damage an organization's ability to legitimately share and use critical business data". The storage is encrypted but the access is not, so I am not sure what sharing issues the post is talking about. Leading database vendors such as Oracle provide column-level encryption, where data is encrypted before it is stored but decrypted on the fly when accessed, all of which is transparent to the user and the application. That said, only a limited set of real-time data should be encrypted, since there is a decryption overhead every time the data is accessed, and the physical and digital security of a real-time data store is much better than that of offline storage such as backup tapes. On the other hand, backups should always be completely encrypted, because they are not supposed to be accessed in real time and there is a greater risk of a tape falling off a UPS truck or being stolen by baggage handlers. In fact, Oracle once considered not allowing unencrypted backups at all.
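
A tiny sketch of the "encrypt the whole backup, keep the key management simple" point, assuming the third-party Python cryptography package (its Fernet recipe) is available; the backup contents here are a stand-in, not a real dump:

    # Encrypt a backup blob in full before it leaves the building; keep the key
    # under separate, simple key management. Assumes the third-party
    # 'cryptography' package; the backup data below is a stand-in.
    from cryptography.fernet import Fernet

    def encrypt_backup(plaintext_bytes, key):
        return Fernet(key).encrypt(plaintext_bytes)

    def decrypt_backup(ciphertext_bytes, key):
        return Fernet(key).decrypt(ciphertext_bytes)

    key = Fernet.generate_key()        # store this in your key management system
    backup = b'...full dump of the orders table...'
    token = encrypt_backup(backup, key)
    assert decrypt_backup(token, key) == backup   # round-trip check

The real-time data store, by contrast, keeps only selected sensitive columns encrypted, precisely because the decrypt-on-every-access overhead has to be paid there.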

What really matters is the organization's encryption strategy for data accessed in real time and for data that gets backed up on tape. Some simple key management solutions and the right decisions and governance can solve the supposed DoS problems being discussed. You could take any tool, use it the wrong way, and then complain about the tool itself. Encryption is just a tool and an enabler, and you have to figure out how to use it. If you look closely at the "experts" in the post, they are in the key management business and want you to believe that your keys will be revoked one day, that you might end up paying ransom and risking your data, so why not pay them now and buy their software.

Tuesday, February 5, 2008

Supply side of cloud computing

Lately most of the buzz has been around the demand side of cloud computing - Google's data centers, Microsoft Live, Amazon EC2, etc. Add one more player, Cisco, but on the supply side. Cisco has entered the supply side of cloud computing by unveiling its 15 terabit/sec switch - that's ridiculously and embarrassingly fast (found via Rough Type). Cisco recognizes the opportunities around network and data center virtualization in the rising world of ubiquitous computing. This initiative emphasizes that utility computing is not just about taking a few pieces of commodity hardware and connecting them together. That is just the tip of the iceberg. Cloud computing at its core would certainly support computing needs in a utility fashion, but data center redundancy and geographic connectivity are crucial as well. You can get a lot done when two geographically dispersed servers can transfer high-volume data at lightning speed.
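
To put 15 terabits per second in perspective, here is a quick back-of-the-envelope calculation (ignoring protocol overhead and assuming the switch's full advertised capacity) for moving a petabyte between two sites:

    # Back-of-the-envelope: how long does 1 PB take at 15 Tbit/s?
    # Ignores protocol overhead; assumes the full advertised switch capacity.

    petabyte_bits = 8 * 10 ** 15          # 1 PB = 8e15 bits (decimal units)
    switch_bits_per_sec = 15 * 10 ** 12   # 15 terabits per second

    seconds = petabyte_bits / float(switch_bits_per_sec)
    print('1 PB at 15 Tbit/s: %.0f seconds (~%.1f minutes)' % (seconds, seconds / 60))
    # ~533 seconds, roughly 9 minutes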

Now put this switch in one of those Sun black trucks and see the difference!

Saturday, February 2, 2008

Monetizing social networks and preserving privacy - an oxymoron?

How do social networks monetize their core platform and applications? It's more than a billion-dollar question, figuratively and literally. Social network companies such as Facebook do recognize the potential of an open platform for participation and a developer-friendly attitude that lets the community sip the champagne of social network data. There is a plethora of applications built on the Facebook platform, and this might be the key to monetization. The other key players have also been experimenting with their platforms and business models, but there is no killer business model, at least not yet.

Monetization efforts do ruffle some feathers along the way, since they are intertwined with other factors such as privacy, data portability, and experience design. Facebook's experience design keeps applications' users inside Facebook but at the same time gives application providers the necessary, or sometimes unnecessary, access to users' data. This has set off some debates around privacy concerns. Access to users' data and an open architecture are key to increased adoption that can potentially lead to monetization, but Facebook needs to be careful here not to piss off the users. Compare this with Google a few years back, when Google made a conscious decision to keep the search result rankings clean (do no evil), and that strategy paid off when Google started monetizing via AdSense.

Marketers argue that the spending power of Facebook's current demographics is not high, so why bother? This is true, but don't forget that when these millennials grow up to buy that 60" plasma TV, some companies want to make sure they have a brand tattooed in their heads from their earlier branding experience on such social networks. As many studies point out, millennials are not brand loyal, and that makes it even more difficult for marketers. Facebook is a great strategic brand platform for infusing brand loyalty into these kids.

Data portability is part of the longer-term vision for any social network. Applications are constrained inside a social network, but the ability to take data out in a standard format and mash it up with an application outside of Facebook has plenty of potential. Leading social and professional network providers have joined the Data Portability Group. Imagine being able to link your Facebook friends with your LinkedIn contacts and provide value on top of that. There are plentiful opportunities for social network providers to build a partner ecosystem and give partners access to data and services in a process of co-innovation. LinkedIn for the longest time resisted providing any APIs and relied on its paid subscription services. LinkedIn has tremendous potential in the data it possesses, and standardizing formats and providing services opens up many monetization opportunities. It is good to see that LinkedIn has also joined the Data Portability Group and has promised to open up APIs. Google's OpenSocial effort, partially opening up Orkut as a sandbox, and social network visualization APIs are also steps in the right direction.
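
A hypothetical sketch of that Facebook-to-LinkedIn mash-up idea: given two exported contact lists in some standard format, match them on a normalized email address. The field names and data are invented, and no real network API is being used here.

    # Hypothetical mash-up of two exported contact lists matched on a
    # normalized email address. Field names and data are invented.

    def merge_contacts(facebook_friends, linkedin_contacts):
        by_email = {c['email'].strip().lower(): c for c in linkedin_contacts}
        merged = []
        for friend in facebook_friends:
            match = by_email.get(friend['email'].strip().lower())
            if match:
                merged.append({'name': friend['name'],
                               'social': friend,
                               'professional': match})
        return merged

    facebook_friends = [{'name': 'Ann', 'email': 'ann@example.com'}]
    linkedin_contacts = [{'email': 'Ann@Example.com', 'title': 'Architect'}]
    print(merge_contacts(facebook_friends, linkedin_contacts))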

What I can conclude is that the growth of such social networks is in two directions: platform and verticals. As the platform becomes more open, we can anticipate more participation, a larger ecosystem, and service innovation. This should help companies monetize (no, no one has figured out how yet). Growth in verticals will help spur networks for specific domains such as employment, classifieds, auctions - who knows?

Monetization, experience design, and privacy cannot be separated from one another, and a few wrong strategic decisions could cause significant damage to social network providers and their users.