Friday, November 22, 2013

Email Is Now Just Another Stream

This post was also published on TechCrunch.

Only a couple of years ago, pundits were predicting an end to email. But instead of fading away, there’s been ever-increasing email volume and usage. Rather than being replaced by Facebook and Twitter streams, email is actually becoming a stream itself.

Mail systems are evolving to match the new volume of email, and users will increasingly see only algorithmically vetted emails. Some other emails may be shown below the vetted email, and the rest will flow away into temporal oblivion, just like uninteresting social posts from a few hours ago.

Implications for marketers are significant. The days of the average AOL or Yahoo! mail user scrolling through every email in their inbox are rapidly fading. Email has been especially important in e-commerce sales and customer re-engagement. For e-commerce in particular, email marketing exceeds the performance of social advertising. Large-volume email senders will need to make a greater effort to send emails that are both personalized and interesting to the recipient.

The email tsunami problem is pervasive. Several Silicon Valley folks have already committed the unfortunately termed “email suicide,” where they give up on reading unread email and start anew. Others are adding email auto-responders stating that they will not necessarily see email. New vendors such as SendGrid have helped bring on the deluge by dramatically lowering the price of sending volume email and democratizing access with simplified onboarding and easy developer APIs.

Google has added several features to Gmail in an attempt to bring some order to the chaos of email. The changes will affect both email users and marketers. With Gmail features like Priority Inbox, Gmail Tabs, and Circles, users are increasingly engaging only with algorithmically vetted email from senders they know. Priority Inbox is still only a satisfactory product; it needs to evolve to automatically mark as “important” email from senders that a recipient repeatedly opens, especially if the recipient replies. Next-generation email clients like Inky go as far as sorting email by relevance rather than date.

For marketers, sending a ton of email without any user engagement will soon become counterproductive. For each type of volume sender, a new balance will have to be found between sending numerous emails and still achieving desired “open rates” and “clickthroughs” — mechanisms by which an email provider like Google can detect whether or not an email is of interest to a user. Much like how “edge rank” increases for a Facebook post when people like, share, or comment on it, “mail rank” will be an increasingly important benchmark for email marketers to measure their effectiveness.
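To make the idea concrete, a toy “mail rank” could be computed from per-sender engagement counts. This is purely illustrative: the weights and field names below are invented for the sketch and are not any provider’s actual algorithm.

```python
def mail_rank(stats):
    """Toy per-sender engagement score from aggregate email stats.

    stats: dict with 'sent', 'opened', 'clicked', 'replied' counts.
    The weights are illustrative; real providers keep theirs secret.
    """
    if stats["sent"] == 0:
        return 0.0
    open_rate = stats["opened"] / stats["sent"]
    click_rate = stats["clicked"] / stats["sent"]
    reply_rate = stats["replied"] / stats["sent"]
    # Replies signal the strongest relationship, then clicks, then opens.
    return 1.0 * open_rate + 2.0 * click_rate + 4.0 * reply_rate

newsletter = {"sent": 1000, "opened": 150, "clicked": 30, "replied": 0}
colleague = {"sent": 20, "opened": 18, "clicked": 5, "replied": 12}
print(mail_rank(newsletter))  # low score: mostly ignored
print(mail_rank(colleague))   # high score: opened and answered
```

A bulk sender with a low open rate and no replies scores far below a personal contact, which is exactly the separation a priority inbox is trying to achieve.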

At CBS Interactive, we send over 200 million emails a month, ranging from news summaries to personalized fantasy sports updates, to an audience of 270 million unique users. We have corporate standards and systems to ensure that recipients can easily unsubscribe from unwanted emails. However, given these upcoming changes, we will need to look at overall open rates from a particular property and begin to proactively prune users that have no interest in emails we send.

The shift to email as a stream will have personal implications as well. People may have to be introduced via a mutual party rather than sending a cold email, especially if the sender has previously sent numerous emails that never received a response. Even personal emails from people you know may soon be treated like a Facebook or Twitter post, where a user either immediately responds, such as with a Facebook like or comment, or instead lets it flow into the ether. Much like social posts, senders will likely shift toward keeping email messages short and to the point.

Facebook and Twitter both have nascent but unique takes on messaging. Facebook messages that are not from one of your friends go into an “other” folder that is rarely read. Twitter direct messages can only be sent to people who follow the sender, although Twitter is experimenting with opening this up a bit.

Stream-oriented companies like Facebook and Twitter essentially charge brands to target their own customers by allowing brands to purchase promoted posts for their fans and followers. Email providers may soon sell “promoted emails” where a marketer can target a user in their priority inbox. Users may revolt, but in the end they are getting email for free, so it will be hard to complain. Email has become a stream, and as the adage goes, when you’re not paying, you’re the product.

Saturday, August 24, 2013

Rebooting the Bay Bridge: A Classic Rewrite Story

This post was written with Mathew Spolin and published in VentureBeat.

From the smallest startup to the largest multi-national company, the ground-up software rewrite is the unicorn of organizations. At some point, a software system needs to be rewritten from scratch, usually for one or more of three reasons:

  1. Architectural flaws present from the outset.
  2. External market conditions that the software cannot meet.
  3. Too much deferred maintenance, such that the software is unstable and unchangeable.

Software is sometimes hard to understand as it is abstract. However, we have the ultimate physical, real-life rewrite on our doorstep here in San Francisco: the rebuild of the eastern span of the Bay Bridge.

Architectural Flaws

The eastern span of the Bay Bridge, first opened in 1936, had a significant architectural flaw in that it could not survive a high-magnitude earthquake. The magnitude 7 Loma Prieta earthquake in 1989 caused a 76- by 50-foot section of the upper deck to collapse onto the lower deck. There was a single fatality on the bridge and the bridge was out of commission for a month, from October 17 to November 18.

Sometimes software hits an architectural wall and has to be rewritten. The current mobile and tablet phenomenon has caused many traditionally web-only companies to substantially overhaul their software in order to deliver their service to mobile devices.

The Technically Obvious “Rewrite” Proposal

Bridges are hard to build, and the more complicated the bridge, the more complicated and expensive the construction project. The existing eastern span of the Bay Bridge was nothing fancy and could have been replaced relatively easily by a modern, economical skyway at a cost of $1.1 billion.

When proposing a rewrite of software, things may seem simple at first, and engineers and managers think that they can rip out a new version relatively quickly.

The Rewrite Maelstrom

Although there was a simple alternative, there were lots of stakeholders who couldn’t get what they wanted from the existing bridge and wanted to put their imprint on the new one. Jerry Brown, then the mayor of Oakland, demanded that the new bridge be a “statement bridge,” since Oakland deserved a modern suspension bridge on par with what San Francisco has with the western span of the Bay Bridge. Biking advocates demanded a bike lane, even though there is not much to do on Yerba Buena and Treasure Island and the western span of the bridge doesn’t have one. Everyone settled on a self-anchored suspension bridge, a design that had never been attempted at the scale of the Bay Bridge.

Even though the Governor at the time, Arnold Schwarzenegger, attempted to override everyone and go back to the simpler skyway design, there were so many people and factions involved that eventually the more extensive design was greenlit.

When rewriting software, it is important to remember that there are many stakeholders in the existing solution, most of whom have pent-up requirements they could not address with the old system and would now finally like to see solved in the new one. It is also extremely difficult to remove features, as people expect a new solution to improve on the current one. This kind of requirements creep is entirely normal and should be expected, but it must be managed well, as it can lead to a great deal of analysis, sometimes paralysis, and a lot of change management.

Agreement Challenges

Even once a new design and its features are agreed upon, there will likely be additional disagreements. In the Bay Bridge’s case, Mayor Willie Brown of San Francisco and Mayor Jerry Brown of Oakland bickered over whether to put the new bridge to the north or south of the existing one. Willie Brown wanted the new bridge placed to the south of the current bridge so that its shadow would not decrease the value of prime real estate on Yerba Buena Island. Even the Navy was involved, since it was transferring the land to San Francisco and wouldn’t let Caltrans test the soil at the site. The disagreement dragged on for two years and added hundreds of millions of dollars in cost to the project.

When rewriting software, it should be expected that there will be disagreements, so it is critical that all the stakeholders meet regularly, keep the overall project goals in mind, and have the ability to engage in frank and open dialog in order to address issues.

Unforeseen Problems, Delays and Cost

The Bay Bridge had numerous delays and cost overruns, ranging from bad welds caused by rushed welders to bad bolts whose specifications the manufacturer changed after they were initially tested.

During all the debate about what type of bridge to build, and exactly where to build it, China’s building boom began consuming all available concrete and steel, driving up prices. The delays in the Bay Bridge construction ended up costing billions of dollars. The state and local agencies haggled constantly over who would pay for the overruns, leading to toll increases and debt. The simple design was estimated to cost $1.1 billion; the bridge ended up costing $6.3 billion and was delivered years late, with some potential flaws.

In large, complicated projects, unforeseen problems and delays should unfortunately be expected, even in the most padded schedule. Time and cost overruns are a fact of engineering life and are typical of large projects. In addition, these delays can cause endless thrash in organizations, as people who expected the new solution deferred maintenance and features on the existing one.

Finally, a New Bridge!

I have learned the hard way, unfortunately more than once, that rewrites are an incredibly painful process. They may initially seem simple, but end up taking far more time, money and coordination than initially envisioned. However, on the other side of the process, you end up with a brand spanking new bridge, ready for the future.

For more on the Bay Bridge project, see Wikipedia.

Thursday, August 22, 2013

How to Sell to the CIO, Part 3: Closing the Deal

This post was published in the Wall Street Journal's CIO Journal.

After almost two decades of selling enterprise infrastructure to IT organizations, I have spent the last two years on the other side of the table as the CIO/CTO of CBS Interactive. This is part three of a three-part series on how to sell to CIOs.

In part one, I provided tips for getting into an account and in part two I discussed how to navigate the sales process. Now it’s time to close the deal.


Deals are going to take as long as they take, and there is really not much a vendor can do to make all the stars align within a big enterprise. A sophisticated vendor understands that large enterprises have a lot of processes in place to close deals and an aversion to new vendors. There is generally no shortcut through legal, finance and other internal approvals.

When a vendor asks for deviations so that they can make a number or close the deal in Q4 as they promised the board, it makes them seem weak and engenders doubt. It leads everyone internally to ask, “Are they really so desperate that our $150K is a make or break for them?”

While there’s no way to tactfully ask for a deal to get expedited, a vendor should understand a company’s process and track that the deal is going well. We had a deal with VMware Inc. that slid for two quarters for various reasons. They were so patient and diligent that in the end, we took them out to dinner.


Sometimes vendors agree to a price range at the beginning of a sales process and then change the price when it’s time to close. Agreeing to terms during contract iteration and then shifting parameters around and returning something completely different for each subsequent iteration makes a vendor look untrustworthy and unreliable. We once got very far along with a vendor, showed them how to pitch to our business units, had everyone lined up to switch to this vendor, and then they started playing tactics like this and we walked away.

When the reasons behind pricing are not clearly communicated, the vendor gets blamed. We once requested a tiered price instead of a flat fee for a deal to access segmentation information ad hoc. It was a reasonable request that the vendor accommodated, but unfortunately a rumor spread through our organization that we were paying by the impression to access our own data. The pricing was so customized that we had a hard time getting buy-in. As word of the pricing debacle spread, it reflected poorly on the company. So if the pricing model is customized in order to close the deal, be sure that it is very clear why this is the case and that the information is disseminated widely.

Deals always seem to get hung up on indemnification. While everyone seems to know where a deal will end up, it often takes weeks of back-and-forth between legal and a couple of ultimatums to close. If I ever run a software company again, I will be sure to have two pricing tiers: greater indemnification at a higher price, less indemnification at a lower price. You pick, and we all save ourselves a big headache.

Reputation Matters

The biggest surprise I had upon running a large IT organization is the level of information sharing and collaboration between CIOs at different companies, particularly about vendors. Most CIOs and IT VPs are very accessible, candid, and frank with each other about what products they are using, how their vendors are doing, and how they make internal decisions.

Vendors should be aware that their performance quickly makes the rounds, both in a particular vertical and across different verticals. This type of diligence accelerates as more players across finance and legal get involved in a deal. “I heard they completely screwed up at Company X” will kill a deal even if it is on the verge of signing. Vendors need to make sure that they have happy customers.

In addition, executives often transition between companies in a vertical such as media or finance. One of my biggest internal stakeholders for a video platform had a bad experience with a potential vendor at his previous company — a cable channel without a large video-streaming offering. The vendor’s CEO reasoned that the previous company was small and that we would be treated a lot differently as a large customer. Wrong answer. A much better approach would have been to figure out exactly what went wrong at the previous account, remediate it, or at least offer a “lessons learned” on how they have improved service.

Reputation is also important globally. If a vendor does a global deal with a company but doesn’t provide good service to the satellite locations, the international offices will go out and find their own vendor, causing a lot of thrash throughout the company. A global deal is not done until it’s delivered well internationally.

Account Management

Once a deal looks like it is about to be signed, there is typically a handoff to account management. A lot of vendors fail at this step. Salespeople need to be incented to ensure that the handoff is clean. Vendors would also be wise to provide a single account-management point of contact, especially for larger organizations. Handing us an organizational chart with different contacts for professional services, technical issues, and billing makes us feel like we are not getting what we paid for.

And finally, a note of caution about growing organically within an organization: Although it can look like a vendor has traction because they are used in multiple departments, they can still be replaced. For example, in the cloud file-sharing segment, we were running Dropbox Inc., Box Inc., Google Inc.’s Google Drive, and others. When it became clear that we needed one official cloud storage vendor, our corporate IT folks took all the vendors through a rigorous process and picked a single vendor. Soon after, the other vendors were blocked at the firewall level. To stay relevant, a vendor should stay ahead of the customer with new features, provide excellent service, and be proactive if there is downtime or failures.

Sell to Yourself

The best thing a vendor can do is have staff who are domain experts at what they are selling and also staff who have worked in a decision-making capacity within enterprise IT. Sometimes that’s not possible, so I’ve shared my experiences to give vendors the inside look on how enterprise IT makes purchasing decisions and how to optimize time for everyone involved. A clear, straightforward pitch, a salient case for how your product will help the customer, and a good deal of patience go a long way when it comes to selling to the CIO.

Thursday, August 15, 2013

How to Sell to the CIO, Part 2: The Sales Process

This post was published in the Wall Street Journal's CIO Journal.

After almost two decades of selling enterprise infrastructure to IT organizations, I have spent the last two years on the other side of the table as the CIO/CTO of CBS Interactive. This is the second of a three-part series on how to sell to CIOs.

The tips I laid out in part one helped you get into an account. But getting the meeting is just the beginning. Now it’s time to sell your product.

Sell the ROI

The question enterprise IT has for the salesperson during a sales meeting is, “Will your product save or make us money and does it have low risk?” As Founder and CEO of Upstream Group Doug Weaver writes, “They’re not thinking about helping you out or what kind of day you’re having. ‘What’s in it for me?’ is the order of the day.”

A vendor needs to have a very clear and verifiable return on investment (ROI) story with measurable metrics. There are times when companies buy software or services without evaluating ROI, such as during the e-commerce push in the dot-com boom and the social craze of the past couple of years, but these are the exceptions.

An ROI story should use the right numbers. I have experienced several instances where vendors offer ROI examples that don’t add up. For example, one vendor pitched a database-as-a-service solution deployed on-premises as cheaper than Inc.’s Relational Database Service, but the comparison left out the cost of the database administrators needed to run the solution on premises. Also, many vendors don’t take into account the difference between capital expense and operating expense, a distinction that should always be included in ROI calculations.
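The kind of apples-to-apples math described above can be sketched in a few lines. All dollar figures below are invented placeholders, not real vendor pricing; the point is simply that staff costs and amortized capital expense belong in the model.

```python
def annual_cost(license_or_service, dba_salary, dbas_needed, capex, years=3):
    """Total cost of ownership per year, amortizing capital expense.

    All arguments are illustrative dollar amounts, not real pricing.
    """
    return license_or_service + dba_salary * dbas_needed + capex / years

# On-premises "database as a service": looks cheaper until you add the DBAs
# and the hardware capex that the vendor's ROI slide left out.
on_prem = annual_cost(license_or_service=50_000, dba_salary=150_000,
                      dbas_needed=2, capex=120_000)

# Managed cloud database: higher sticker price, but no DBAs and no capex.
cloud = annual_cost(license_or_service=200_000, dba_salary=0,
                    dbas_needed=0, capex=0)

print(on_prem)  # 390000.0
print(cloud)    # 200000.0
```

With the hidden costs included, the “cheaper” on-premises option nearly doubles the yearly spend, which is exactly the kind of discrepancy a CIO’s team will find.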

There is generally a significant transition and maintenance cost to adopting new technology that IT organizations consider before starting a new vendor relationship. Numerous vendors have pitched us on “reporting in the cloud,” but never consider the cost of copying vast volumes of our data to their system or the compliance and regulatory headaches that come with having our proprietary data in their hands. The reporting features need to offer more than an on-premises solution for us to even entertain the concept.

Product Fit

It is critical for vendors selling a product to an enterprise to have people with domain expertise on their team.

Vendors should be very careful about “vaporware,” products that are announced but not yet implemented. For example, once a vendor showed screenshots and demonstrations of their product working in a certain way, but when we trialed the solution, we found that the software was missing several key features. The vendor then acknowledged that the features would be coming out in six months. Unfortunately, after wasting my team’s time, we will not give them another chance any time soon.

While it’s important to cater to the customer, make sure other enterprises also need a product feature before adding every feature a company requests. One approach for vendors is to have a blanket policy that three customers must request the same feature before it is implemented (but be careful when wielding that stick as customers often do talk to each other). There are, of course, certain products that inherently need to be customized and should include a set amount of professional services. Finding the right balance will result in a product fit that works for both the vendor and the customer.

Go to the Right Person

To sell to an IT organization, convince the person who actually has the problem. When someone three levels down in my organization comes running into my office and proclaims that they found a great product that pays for itself in six months, will save us money, and that they have already trialed it and it works well, there’s a great chance that deal is going to get done.

Get the attention of the database manager, the networking director, the system administration manager, the procurement director or whoever is the appropriate person and convince them to try the product. The best way to sell to people on the software side is to have a product that they can find, deploy and use on their own without needing to talk to anyone at the vendor.

MySQL’s Marten Mickos perfected this strategy with his “15 minutes to delight” philosophy, previously unthinkable for a product like a database. Subsequent products from Atlassian Inc., New Relic and Splunk have adopted this business model, and their growth reflects this philosophy’s focus on delivering a killer product.

Once a product has already been tested, the sales process becomes much smoother. Vendors can direct their salespeople to focus on learning the organization and helping to navigate finance, legal and stakeholders’ agendas to ensure that required features are added to the roadmap. Once a product sells itself, the sales organization doesn’t waste time chasing down leads and requesting face-to-face meetings.

This advice does not apply to companies selling deep workflow solutions, like order management software, which require a lot of analysis and buy-in from multiple stakeholders and have long implementation times. These types of solutions still require “traditional” sales techniques and cycles. The best way to close a deal like this is to understand that it will take a long time and to identify an executive champion who will help push it through.

I will explain more about how to close a deal in the final part of this guide on selling to the CIO.

Thursday, August 08, 2013

How to Sell to the CIO, Part 1: The Initial Pitch

This post was published in the Wall Street Journal's CIO Journal.

After almost two decades of selling business infrastructure to technology companies, I thought I knew it all. But since spending the last two years on the other side of the table as the CIO/CTO of CBS Interactive, I realized how much I didn’t know about selling to enterprise IT.

The way to truly understand how and why an enterprise purchases technology is to understand how IT departments at medium- to large-size organizations work, how decision-making actually happens, and how vendors can avoid getting in their own way.

Based on my experience on both sides, the following guide aims to help salespeople and companies become more efficient for both their clients and themselves.

Be clear what it is you do

A pitch, whether from a small startup or a multinational corporation, should always concisely state the following:

1. Description: What the offering specifically does, how it’s different from competitors and how it can help a CIO. Ideally this includes some tangibles, such as screenshots or performance graphs.

2. Validation: What stage the product is at and who else is using it.

3. Process: How can the offering be purchased, what the typical on-ramp looks like, and how much it costs.

Here’s a mock example of a good pitch:

Super Interconnect provides a next generation fiber interconnect that is 10x the speed of existing interconnects. Super Interconnect makes it possible for your web and application servers to quickly access backend resources such as databases, thereby significantly reducing latency to customers. As you can see in the attached graph, we offer 25x the price/performance of 10gigE, and are a year ahead of other nextgen fiber interconnect technologies.

Super Interconnect is a young company, however we all come from networking companies such as Cisco and Juniper, and are well funded by top venture capitalist firms Accel and Sequoia. A few of your peers such as PepsiCo and Walt Disney Company are actively using Super Interconnect and can serve as references.

We sell our product directly to IT organizations and can work through your preferred resellers. A typical POC takes 30 days to deploy with minimal time from your staff. Super Interconnect is priced at $1K/server, a significant $/gbps savings over 10gigE.

If an introductory pitch can’t cover this basic information, IT professionals will get the sense that a meeting will likely be painful and will want to avoid it.

Also, if a company is new and doesn’t have a product yet, it’s best for the vendor to be honest and say that they are in a research phase. Instead of polling a number of CIOs, vendors might consider finding and collaborating with a domain expert that knows the next thing an enterprise will need. Hype and vision won’t sell.

Time is not the answer

Virtually every contact a CIO receives has the same ask: for a meeting or phone call. As most CIOs’ schedules are crammed, realize that coffee, lunch, drinks, dinner and events are particularly tough requests. So instead of asking for the CIO, who is not even the best decision-maker on a lot of new technology, vendors should ask to talk to the right experts in a CIO’s organization.

For startups in particular, sales and business development executives love to have big brand names and titles in their pipeline. Even when it is clear that a product is not a fit, they still push hard for a meeting in order to justify their own value to their management chain. This can reflect badly on a startup, so salespeople need to work to find the right targets and measure success not by the number of meetings, but by how many prospects are moving to the next step in the sales funnel.

Bypassing IT is not realistic

Selling directly to a line of business and bypassing IT is not realistic. IT is generally with the program, supportive of cloud applications and no longer a roadblock like in years past. And lines of business need to have their tools integrated with their company’s overarching systems as well as be in compliance with security and Sarbanes-Oxley policies.

Treating the IT department like a speedbump signals that the vendor does not value what IT thinks. Lines of business typically partner with IT, and a deal takes agreement between the two.

Understand the IT culture

While IT vendors spend their time trying to sell to IT departments, they rarely have anyone on staff who has actually worked in one. IT departments have a certain culture, and the best way to sell to them is to understand how they work.

When a vendor secures a meeting, it’s in their interest to show up early, be on their best behavior, dress and act professionally and make sure not to reschedule unless a significant life event occurs. Once a vendor has the attention of IT management, they should tell it like it is and then try to close a deal. There are quite a few vendors that don’t answer direct questions regarding features and pricing.

Startup companies need to make sure to spend more time talking about how their passion is solving a customer’s problem, than about how their idea is “awesome.” The history of the company and its various pivots can be interesting, but only within the context of how excited the founders are to solve a business problem for a customer.

Getting feedback

IT managers are generally loath to give feedback, since doing so often engenders hostility. Quite often, what a vendor is selling is not a good fit for a particular enterprise. Particularly for startup founders, learning this can be quite emotional. It is important for founders to contain that emotion and try to learn exactly how a product needs to shift in order to meet the needs of an enterprise customer.

The best way to get feedback is to ask specific questions. For example: What is it that the customer likes and doesn’t like about their current solution? How hard would it be to move to a new solution? And is it realistic that the customer would move to a new solution or will they simply wait for their existing vendor to add the missing features?

The flip side

Enterprise IT is guilty of wasting vendors’ time in order to learn what’s going on in the industry and to keep options open. A question I used to ask when I was on the vendor side of the table was, “What’s the last product you bought and what was the process like?” If no one can answer, the IT group is clearly very conservative and a vendor should come back in a year after they have found and closed some early adopters.

In the next part of “How to Sell to the CIO,” I will discuss the sales process.

Saturday, August 03, 2013

In Mastering Machine Intelligence, Google Rewrites Search Engine Rules

This post was written with Cameron Olthius and published on TechCrunch.

Google has produced a car that drives itself and an Android operating system that has remarkably good speech recognition. Yes, Google has begun to master machine intelligence. So it should be no surprise that Google has finally started to figure out how to stop bad actors from gaming its crown jewel – the Google search engine. We say finally because it’s something Google has always talked about, but, until recently, has never actually been able to do.

With the improved search engine, SEO experts will have to learn a new playbook if they want to stay in the game.

SEO Wars

In January 2011, there was a groundswell of user complaints, kicked off by Vivek Wadhwa, about Google’s search results being subpar and gamed by black hat SEO experts, people who use questionable techniques to improve search-engine results. By exploiting weaknesses in Google’s search algorithms, these characters made search less helpful for all of us.

We have been tracking the issue for a while. Back in 2007, we wrote about Americans experiencing “search engine fatigue,” as advertisers found ways to “game the system” so that their content appeared first in search results (read more here). And in 2009, we wrote about Google’s shift to providing “answers,” such as maps results and weather above search results.

Even the shift to answers was not enough to end Google’s ongoing war with SEO experts. As we describe in this CNET article from early 2012, it turns out that answers were even easier to monetize than ads. This was one of the reasons Google has increasingly turned to socially curated links.

In the past couple of years, Google has deployed a wave of algorithm updates, including Panda and Panda 2, Penguin, as well as updates to existing mechanisms such as Quality Deserved Freshness. In addition, Google made it harder to figure out what keywords people are using when they search.

The onslaught of algorithm updates has made it increasingly difficult for a host of black hat SEO techniques — such as duplicative content, link farming and keyword stuffing — to work. This doesn’t mean those techniques never work; one look at a query like “payday loans” or “viagra” proves they still do. But these techniques are now more query-dependent, meaning that Google has essentially given a pass to certain verticals that are naturally more overwhelmed with spam. For the most part, though, using “SEO magic” to build a content site is no longer a viable long-term strategy.

The New Rules Of SEO

So is SEO over? Far from it. SEO is as important as ever. Understanding Google’s policies and not running afoul of them is critical to maintaining placement on Google search results.

With these latest changes, SEO experts will need a deep understanding of the various ways a site can inadvertently be punished by Google, and of how best to fix those issues or avoid them altogether.

Here’s what SEO experts need to focus on now:

Clean, well-structured site architecture. Sites should be easy to use and navigate, employ clean URL structures that make hierarchical sense, properly link internally, and have all pages, sections and categories properly labeled and tagged.

Usable pages. Pages should be simple and clear, provide unique value, and meet the average user’s reason for coming to the page. Google wants to serve up results that satisfy a user’s search intent. It does not want to serve up results that users visit, only to click the back button and select the next result.

Interesting content. Pages need to offer more than the straight facts Google can answer above the search results, so a page needs to show more than the weather or a sports score.

No hidden content. Google sometimes assumes that hidden content is meant to game the system, so be very careful with hidden items that users can toggle on and off, and with creative pagination.

Good mobile experience. Google now penalizes sites that do not have a clean, speedy and presentable mobile experience. Sites need to stop delivering desktop web pages to mobile devices.

Duplicate content. When you think of duplicate content you probably think of content copied from one page or site to another, but that’s not the only form. Things like a URL resolving using various parameters, printable pages, and canonical issues can often create duplicate content issues that harm a site.

Markup. Rich snippets and structured data markup will help Google better understand content, as well as help users understand what’s on a page and why it’s relevant to their query, which can result in higher click-through rates.
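As a concrete sketch of what that markup looks like when generated server-side, here is a JSON-LD snippet using’s Product vocabulary; the product name and ratings are invented for illustration:

```python
import json

# An illustrative structured-data payload; the product details are made up.
snippet = {
    "@context": "",
    "@type": "Product",
    "name": "Example Widget",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89",
    },
}

# Embedded in the page, this lets crawlers read the structured facts
# alongside the visible content.
script_tag = ('<script type="application/ld+json">'
              + json.dumps(snippet) + "</script>")
print(script_tag)
```

Google can surface fields like the rating directly in search results, which is where the click-through-rate benefit comes from.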

Google chasing down and excluding content from bad actors is a huge opportunity for web content creators. Creating great content and working with SEO professionals from inception through maintenance can produce amazing results. Some of our sites have even doubled in Google traffic over the past 12 months.

So don’t think of Google’s changes as another offensive in the ongoing SEO battles. If played correctly, everyone will be better off now.

Monday, March 11, 2013

To Cloud or Not to Cloud

This post was published in the Wall Street Journal's CIO Journal.

Cloud is all the rage in IT right now, offering nearly instantaneous time to value, continual feature upgrades, and reduced cost. However, it is important to delineate the different types of cloud offerings and what should and shouldn’t run in the cloud. There are also several contractual issues CIOs should consider when dealing with cloud vendors.

Software-as-a-Service is mainstream and cost-effective

Among the very first cloud offerings, software-as-a-service (SaaS) solutions such as Inc. and ServiceNow Inc. are very compelling. The offerings are completely self-contained and turnkey, and offer rich feature sets that are continually enhanced and refined by the vendor. SaaS vendors often integrate with each other, which makes it easier to piece together a software stack.

When selecting a SaaS vendor, CIOs should ensure that the vendor has a high standard of data security and offers an interface by which data can be retrieved into internal systems. Lines of business have a tendency to start using SaaS vendors on their own, and savings and compliance can be realized by rolling up several of these point purchases into a global deal with a single vendor. An interesting new tool from Skyhigh Networks can monitor what SaaS tools are being accessed from a company’s network in order to facilitate discovery of SaaS usage.

Legacy enterprise software vendors such as IBM Corp., Oracle Corp. and SAP AG are increasingly acquiring SaaS companies in order to continue their growth beyond traditional packaged software. After such acquisitions, pricing typically increases two to three times, so in order to have a predictable experience when engaging a SaaS vendor, a CIO should consider signing multi-year deals with a set yearly increase and exit options.

Infrastructure-as-a-Service is great, but expensive

Infrastructure-as-a-service (IaaS) offerings such as Amazon Web Services are a convenient method to elastically expand your data center without buying new equipment. Inc. provides security features such as Virtual Private Cloud (VPC), which segregates your network traffic and allows access via a secured virtual private network (VPN). In addition, Amazon provides Direct Connect, which offers point-to-point access to your Amazon infrastructure.

The elasticity and self-service features of infrastructure-as-a-service make it easy for internal customers to add machines instantly without waiting for machines to be ordered, racked, and kick-started. Additional machines can be added for heightened traffic due to large events or seasonality.

In order to build out an IaaS service, it is important to lock down security with features such as VPC and VPN. From a compliance perspective, it is important to have a clear policy of what can and can’t be run on the IaaS, as well as a plan to move systems off of the IaaS and back into a data center within a set amount of time in case the systems need to be moved back due to a security breach or legal issue.

The downside of IaaS is the price. Pricing can be complicated in terms of figuring out how many and what type of instances are needed, and whether to use cheaper reserved instances that carry a time commitment. At CBS Interactive, we have a director of cloud architecture who assists business units with this type of decision making, and we continually monitor the aggregate Amazon charges in order to maintain efficiency.

IaaS charges are entirely an operating expense and cannot be depreciated as a capital expense, as is normally the case when purchasing hardware for a data center. When I first started using Amazon Web Services in its private beta a decade ago, beyond 25 or so instances it was cheaper to run your own hardware. Now that number is 200 or so instances. However, the total cost comparison should include the business agility offered to internal customers in terms of easily spinning up new machines.
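The back-of-the-envelope math behind that breakeven point can be sketched as follows. The dollar figures are assumptions chosen for illustration, not real AWS or data-center prices:

```python
import math

def breakeven_instances(cloud_per_instance, owned_per_server, fixed_monthly):
    """Smallest fleet size at which owning hardware beats renting it.

    All arguments are monthly dollar costs. Owning only wins once the
    per-server saving amortizes the fixed overhead (staff, racks,
    power contracts) of running your own data center.
    """
    saving = cloud_per_instance - owned_per_server
    if saving <= 0:
        return None  # the cloud is cheaper per box at any scale
    return math.ceil(fixed_monthly / saving)

# With assumed costs of $300/month per cloud instance, $150/month per
# owned server, and $30,000/month in fixed overhead:
print(breakeven_instances(300, 150, 30000))  # → 200
```

As fixed data-center overhead falls (or cloud prices fall), the breakeven fleet size moves, which is one way to read the shift from 25 instances to 200.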

Private clouds are emerging

At CBS Interactive, we host a large number of websites, including CNET. We are currently building out a private cloud in one of our main data centers where internal customers will benefit from CapEx amortization and have the agility of an IaaS provider. At that point, what to run where will become a simple math equation, since system administration will be comparable between our IaaS and our private cloud.

Private clouds are difficult to implement, however, as they require cobbling together vendors that offer self-service provisioning, load balancing, storage access, and database administration that leverage the existing networking and system infrastructure in a data center.

Hybrid clouds are not realistic for existing applications

Many vendors claim that hybrid clouds offer elasticity without having to move systems into a SaaS or PaaS environment. In my experience, this is not a realistic alternative. Applications that can run in a hybrid cloud are specifically designed to be split across different data centers and architectures.

Taking a legacy application and running parts of it outside of a data center with a cloud provider requires that incoming requests into that application are load balanced across the two, that data access from the cloud version to the source system is available without latency, and that some data is synchronized between the two in such a way that the application performs as expected. The amount of work to accomplish all of this on an existing legacy application typically far exceeds the benefits of elasticity.

Platform-as-a-Service has too much lock-in

Platform-as-a-service (PaaS) offerings such as Google Inc.’s App Engine, Heroku, Microsoft Corp.’s Azure, and Amazon’s Elastic Beanstalk offer the ability to drop in code modules without having to worry about features such as load balancing, scaling, user management, and database management and tuning.

While they are typically not appropriate for legacy applications, these services offer incredibly fast time to value for new applications. However, PaaS platforms are like Hotel California – it’s easy to get in and hard to leave, as the application is heavily locked into the PaaS vendor and cannot be easily moved without significant re-engineering.

If the application needs to be moved due to a compliance or legal reason, or it has grown so much that it is exorbitantly expensive to run on a PaaS, organizations can find themselves stuck between a rock and a hard place. PaaS is therefore much more suited for smaller organizations, and only applicable to larger organizations that have a strongly defined PaaS policy in place on what proprietary PaaS services may be used in order to ease a transition out of the PaaS.

In addition, given Heroku’s recent metrics issues, it is important that independent monitoring be put in place to ensure that the PaaS system is performing as promised.

Features-as-a-Service require too much access to internal systems

I am of course using the term features-as-a-service in jest. There is currently a rash of analytics, Big Data, and mobile enablement vendors pitching the enterprise with a cloud-based approach and promising the simplicity of a SaaS solution.

However, virtually all of these solutions require that either the enterprise continually copies mountains of data into the vendor’s cloud systems, or that the enterprise allow deep vendor access into internal systems such as databases. Either option causes innumerable security and compliance headaches.

Some of the vendors are beginning to shift, either only targeting the SMB market, or adding a traditional on-premise option for their software. Another emerging option is to have the vendor deploy their solution within an Amazon Virtual Private Cloud, thereby accessing secure enterprise data within that cloud, but without compromising security or compliance.

Perhaps data-oriented features-as-a-service will be more viable once an enterprise’s data shifts to the cloud onto the new breed of systems such as Amazon’s Redshift data warehouse. In the meantime, the benefits of a features-as-a-service cloud offering rarely outweigh the compliance and data cost of onboarding the vendor.

To Cloud or Not to Cloud

There are different types of cloud solutions, different types of applications suited to the cloud, and different levels of comfort within an enterprise with moving systems to the cloud. It is important to evaluate each cloud option with a well-formulated approach that looks at the overall benefit, cost, and exposure, and includes an extraction evaluation to avoid lock-in. Cloud on!

Friday, January 11, 2013

Android Challenges the iPhone in Every Category

This post was also published on CNET and VentureBeat.

The new breed of Android devices exceeds the iPhone 5 in every way, including hardware, operating system, and apps.

For the past month, I have been using an HTC Droid DNA, which has similar specs to the rumored upcoming Samsung Galaxy S4. People approach me at grocery stores, airports, coffee shops, even on the street and ask me about the phone. The device is indeed quite compelling, even from a distance.

The HTC DNA has an amazingly bright 1080p HD display with a higher resolution than Apple's iPhone 5 Retina display. The operating system is modern, with dynamic widgets that tell you at a glance what's going on. Apps such as Facebook and Twitter are equivalent to those available on iOS, and Google apps such as Google Now, voice recognition, and Google Maps are sleek and modern. This is hands down a better device than the iPhone 5, and people seem to intuitively recognize it.

What phone would I recommend for my mom? An iPhone. It's safe, predictable, and uniform. What would I recommend for anyone under 40? Definitely one of the new breeds of Android phones. Android might still be a bit quirkier than an iPhone, but it's definitely not confusing for people who interact daily with a variety of advanced technology. Samsung really nailed it in its commercial where a young woman is waiting in line for a new iPhone and it turns out she is holding the spot for her parents.

The spec is alive and well -- and killing Apple

Hardware from Samsung, HTC, LG, and others has now caught up with and eclipsed Apple's devices. Smartphones don't really have that many specs to evaluate, and each spec actually means something tangible to an average consumer. After five years of advanced smartphones, specs like screen size, screen density, screen brightness, camera speed, camera megapixels, physical dimensions, weight, amount of memory, and battery life are easily understandable and relevant to even the average smartphone consumer. Even specs like the number of processor cores and clock speed, which are typically not easy to understand, become clear when framed as "faster than the iPhone 5."

Conversely, the spec is definitely irrelevant when purchasing Apple products. There are so few products to choose from that decision making is essentially boiled down to a Goldilocks-style small/medium/large decision mainly driven by cost rather than actual features. While this is great for my mom and MG Siegler, the lack of spec-based decision making is not necessarily a good thing in a world where consumers actually understand each of the specs and would like to choose how to balance them out relative to cost. Apple has been a follower on many specs, particularly in terms of form factors, trailing the market in both 4-inch phones and 7-inch tablets.

iPhones are definitely gorgeous devices, but they are relatively uniform and monotone. Aluminum is definitely great. Conversely, I was surprised by how many women commented on the red accents on the HTC DNA, which are part of the DNA's crossbranding with Beats Audio. People like colors and variety, and they don't necessarily like having to completely cover a phone's shell and make it bulkier in order to express themselves.

Let's not forget that all of those Samsung Galaxy phones you see cost the same as an iPhone -- their owners are not bargain shoppers; they are spec and style shoppers.

The screen should actually show you something!

As mobile app developer Ralf Rottmann recently noted, the new generation of Android, 4 Jelly Bean, is a fundamentally better operating system than iOS -- better rendering, better cross-app sharing, better app/OS integration, and more polish.

But the real standout for Android is the customizability of the display. Rather than iOS static icons with embedded notifications, with Android, apps are front and center, displaying the time in different time zones, the weather, appointments, emails, texts, whatever you want in numerous themes that can completely reinvent the user interface.

Windows Phone 8, the dark horse in this race, is actually even more integrated, with a unified messaging interface that consolidates emails, texts, and Facebook messages into a single thread, and a consistent tile interface with which apps can display information on the home screens.

The operating system is not as important as the apps, and this is where Android is beginning to shine.

The cloud behind the app is more important than the app

In a world where the hardware and operating system have become commoditized, the apps are the differentiator, and more and more, the apps are a viewport into a cloud service driven by machine learning.

The vast majority of Internet users rely on Google Search, Maps, YouTube, Mail, and such, and spend more time in those apps than in the mobile operating system itself. As people are beginning to note, Google's apps are way better than Apple's. What good is Siri if it thinks "Hurricane Sandy" is a hockey team, when Google knows what's actually going on? Google Now is adding ambient awareness to Android devices, letting people know what's going on around them and what they need to do in a very personal way, with features like a notice that you need to leave for your next meeting because there is now traffic en route.

Perhaps, as is rumored off and on, Apple will start snapping up cloud services such as Waze. However, it is hard to buy and integrate a new type of product category into a large company that doesn't have it in its DNA. Competing with Google, an entrenched, dominant player in machine intelligence that recently added Ray Kurzweil to its roster, is going to be a challenging affair. Even Microsoft has a better track record than Apple of delivering large-scale cloud services such as mail, mapping, and storage.

Beyond Google's apps, the reality of the app market is that all of the applications that matter are now on Android, and it actually will soon have more apps than iOS. Dan Lyons of ReadWrite is lambasting the Silicon Valley tech press for living in an iPhone echo chamber, and he does have a point. Pundits are lauding Google Maps features on their iPhones that have been available on Android devices for literally years. Bloggers breathlessly reveal new Facebook iPhone app features such as "Find Friends Nearby" that had been available for over a month on Android.

The feedback loop of the echo chamber is that developers initially develop apps on iOS, much like the recently popular Cinemagram. However, developers like Rottmann like cool devices, and are starting to shift over to Android. In addition, developers are feeling limited by iOS user interface patterns and its skeuomorphic apps and are branching out. Like the Mac OS of the early '90s, the consistent UI across applications will likely splinter.

The numbers speak for themselves. Android has a 75 percent smartphone worldwide market share, as evidenced by the hordes of Samsung devices in use throughout Europe and Asia. While Apple is regaining market share in the U.S. with the iPhone 5, it is about to face an onslaught of 5-inch Android phones with specs that far exceed the iPhone 5's. Wall Street clearly sees a shift coming, and has hammered Apple's stock price over the past quarter.

The average consumer has moved past the days of pious, scruffy-haired, unshaven, thick-glasses-wearing dudes lecturing us on how Apple is so cool. Perhaps soon Silicon Valley will catch up. When you see someone in a cafe with a MacBook Air, iPad, and iPhone on the table in front of them, is "Think Different" really what comes to mind?

Wednesday, January 02, 2013

2013: The Internet of Things, Delivered via Smartphone

This post was also published in VentureBeat.

Virtually every electronic device has gained a smartphone-controlled equivalent over the past year. The well-known products in this category, such as the Nest thermostat and the Sonos music system, have now been joined by smartphone-controlled light bulbs, door locks, refrigerators, security systems, home theater remote controls, game consoles, weight scales, and even vacuum cleaners. Services such as teleconference systems that used to be controlled only by touch tones are now controllable by smartphones. There are even smart-device-powered telepresence robots.

Historically, these types of devices had unintuitive control panels, small, hidden buttons, and other complex interfaces. The smartphone ecosystem makes it easy for manufacturers to deliver mobile apps as control systems, and for users to intuitively control devices through a familiar interface.

The tech underneath it all

This device revolution has been powered by a new generation of cheap embedded controllers, where full web-enabled systems can be inexpensively embedded into a device. Consumer versions such as the Arduino and the Raspberry Pi have kicked off a generation of controllable devices that even includes do-it-yourself smartphone-controlled power strips.
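The control loop on such a board is surprisingly small. Below is a minimal sketch of how an embedded controller might expose a device over HTTP to a phone app; the light is simulated rather than wired to a real GPIO pin, and the /on and /off endpoints are hypothetical:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_command(path, current_state):
    """Map a request path to the light's new state (True = on)."""
    if path == "/on":
        return True
    if path == "/off":
        return False
    return current_state  # e.g. "/status" just reports the state

class ControlHandler(BaseHTTPRequestHandler):
    light_on = False  # stands in for a real GPIO pin on the board

    def do_GET(self):
        ControlHandler.light_on = handle_command(self.path,
                                                 ControlHandler.light_on)
        body = b"on" if ControlHandler.light_on else b"off"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# On the device itself, the server would run forever:
# HTTPServer(("", 8080), ControlHandler).serve_forever()
```

A phone app then just issues GET requests to the board's address, which is why a generic smartphone makes such a natural remote control.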

One quagmire many users run into is that it can be hard to add a device to a home network and then connect a smartphone to the device. It can sometimes take a while for a smartphone to find and log into a home Wi-Fi network. Sonos solves this problem by broadcasting over your network and letting users push a button to pair devices together. New peer-to-peer wireless protocols such as Wi-Fi Direct are attempting to address this problem in a more seamless manner.

There is a definite threat that hackers can gain control of a home network, and thereby control all of the devices in a home, so it is important that users secure their Wi-Fi with a password and a strong level of encryption.

This new era of smartphone control is actually much cheaper than legacy electronic controls. For example, a legacy “smarthome” light switch and an overstock 7” Android tablet each cost about $70. And iPads are far cheaper and more usable than state-of-the-art control systems such as Crestron.

Touchscreens are not applicable to every dedicated device. My personal experience waiting for clerks to use iPad-enabled point-of-sale terminals has not been positive. For whatever reason, it seems to take about twice as long for items to be entered and a receipt printed than with old-school push-button registers.

Where is this going next?

A very interesting facet of this next generation of devices is their ability to add ambient awareness. Just as the Nest thermostat learns your comings and goings and the Fitbit monitors your activity level, all of these devices will soon be able to monitor their surroundings and fetch information like the current weather for their location.

Google Now is currently providing this for Android users. Hobbyists have had a boon tinkering with the Xbox Kinect to add ambient awareness to their projects, and this type of technology is likely to be embedded in numerous devices in the near future.

Another area that may soon feel the impact of smartphone interfaces is vehicles. There have been quite a few attempts at vehicle touch interfaces, with a large level of investment, from manufacturers including Ford with its MyFord Touch panel, Tesla with its huge 17” display, and Audi with its Multimedia Interface. While these interfaces are functional, they are not familiar to users who expect iOS- and Android-style touch interfaces.

In the near future, vehicle manufacturers will likely be pairing up more with software vendors, much like Ford partnered with Microsoft for its Sync voice recognition system. Imagine a day when you can add the iOS or Android control panel as an option when purchasing a vehicle.

It is likely that every room of a home will have a 4” or 7” smart device mounted as a control panel for lights, music, and more. Soon, “flipping a light switch” will sound as archaic as “dialing a telephone”.