Blog posts from the OpSource blog
Mar 07, 2013
Five SLA Criteria to Evaluate When Public Cloud Shopping
Last December, Gartner's lead cloud IaaS analyst Lydia Leong made the now oft-repeated declaration that many leading public cloud SLAs are "practically useless." For companies considering offering enterprise IT services in the cloud, this type of public proclamation undoubtedly fuels the fire of public cloud naysayers who insist that enterprise-level services cannot be delivered via public cloud infrastructure.
Not surprisingly, at Dimension Data, we're big believers in the ability of the public cloud to deliver all types of enterprise IT and B2B SaaS applications, though Ms. Leong's comments highlight the importance of vendor selection when making your platform choice.
Given that we have conversations with clients about evaluating cloud SLAs every day, I am using this month's post to elaborate on the state of the market and the key decision criteria we recommend buyers evaluate. These points come from a whitepaper we recently released on the state of public cloud SLAs. Click here to read the entire report.
Below are five key areas of the SLA to consider when Cloud shopping for enterprise application hosting:
1. What level of uptime is the provider committing to?
The 'x nines' SLA is usually the first cited and least meaningful part of a Cloud provider's SLA. Nevertheless, it is the first key point of an SLA to identify when evaluating your options. As examples of common public Cloud SLAs, as of the time of this post, HP and Amazon offer a 99.95% SLA and Dimension Data offers a 99.99% SLA.
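To make those 'nines' concrete, here is a quick back-of-the-envelope sketch of how much downtime each commitment actually permits per month. The figures are illustrative arithmetic, not terms from any provider's contract, and assume a 30-day month:

```python
# Sketch: convert an "x nines" uptime commitment into the downtime it
# actually permits. Illustrative only; real SLAs define their own
# measurement periods and exclusions.

def allowed_downtime_minutes(uptime_pct: float,
                             period_minutes: int = 30 * 24 * 60) -> float:
    """Downtime budget for one period (default: a 30-day month)."""
    return period_minutes * (1 - uptime_pct / 100)

for pct in (99.9, 99.95, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.2f} min/month")
```

The gap is larger than the decimals suggest: 99.95% allows roughly 21.6 minutes of monthly downtime, while 99.99% allows only about 4.3 minutes.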
2. How is uptime calculated?
SLAs are typically calculated monthly, and only consider the period during which you were a client in the calculation of uptime vs. downtime. This is a relatively standard policy across public IaaS providers, though we advise clients to read the small print carefully here, as some well-known providers like Amazon Web Services (AWS) take a far different approach to this calculation, using an annual uptime calculation assuming that a
Feb 15, 2013
One thing we’ve learned in cloud is that no one ever asks for things to go slower. And that the faster we go, the higher the speed bar keeps getting raised. As a result, we’re always trying to find ways to make the Dimension Data Cloud perform ever more quickly.
Since launching Dimension Data Cloud services, we’ve opened Managed Cloud Platforms (MCPs) on 5 different continents and our clients have been putting up sites in every corner of the world. This allows them to not only serve worldwide customers with low latency, it also means they can replicate their data in a variety of locations for the greatest level of redundancy. To this end, we have been deploying multiple systems for individual clients to help them speed up both image replication and access/transfer of database data.
As we explored our options further, we decided that WAN acceleration would be beneficial for all of our clients. We believe it's a best practice to utilize multi-region replication and we wanted to make it as simple as possible. As a result, we just announced that we have installed Riverbed Steelhead WAN optimization technology in every single one of our MCPs. The results have been phenomenal. For certain applications like database replication, file synchronization, and backup and recovery, we're seeing 65 to 95% improvements.
The best part of all of this news? It comes at no additional charge. WAN acceleration is built into your outbound bandwidth charges, which are as low as $0.09/GB. (And, of course, inbound bandwidth is always free.) We believe all of our clients should take advantage of the service, and the best way to get them there is to make it as painless and transparent as possible.
So have at it. Replicate your data to your heart’s content. We’ll make sure it gets done as quickly as possible.
Feb 11, 2013
One of the benefits of our specialization in helping ISVs successfully deliver their SaaS products is that we are exposed to dozens of new SaaS architectures every month. We see everything from single-tenant legacy software solutions to the latest "stateless, self-healing, share-nothing, built to withstand massive failure" application architectures.
Given our experience, we're often asked by clients for advice on their architecture, particularly related to what should be virtualized and what should not. Not surprisingly, most new prospective clients enter the discussion assuming they need to migrate to a 100% virtualized environment when they move their application "to the cloud." The unrelenting hype about the latest public cloud offering or newest cloud feature set has IT teams neck-deep in cloud mania.
As a result, we speak with numerous companies insistent on virtualizing applications (or application tiers) that are unquestionably not ready for --or well-suited to-- virtualization in an IaaS environment. And while we ourselves are huge proponents of the benefits of virtualization, we take a very deliberate approach with our clients to ensure that they can actually leverage these benefits before running headlong into a "virtualize everything" strategy.
In this post, I'll provide a preview of the discovery process we go through to determine the recommended architecture. It's difficult to capture all of the intricacies of these discussions in one blog post, so I'm calling out the issues that most commonly lead us to recommend integrating physical servers into a client's architecture.
Questions we review with clients:
In each of these questions below, the terms "servers," "application," "environment," and "application tier" can be used interchangeably depending on which is most relevant in your situation.
Is demand for the application predictable?
Must all servers be operational 24/7/365?
Does the application primarily scale vertically (more powerful servers) or horizontally (more servers)?
Jan 08, 2013
Our recent release had a little-trumpeted feature called anti-affinity. Anti-affinity is the ability to ensure that your servers sit on distinct vmHosts within our architecture. It allows an even greater level of High Availability within your environments.
I was asked by one of our largest clients why we needed to offer this service. His question was very simple: if Dimension Data offers a 99.99% uptime SLA, then doesn't that mean I can just run one server and be sure it's up all the time? That mindset has become a common misperception in the cloud: the provider supplies all the redundancy, so I don't need to make sure my application is HA.
While we are extremely proud of the level of availability we are able to provide with a server that costs less than 50 bucks a month (which is what 6.5 cents an hour comes out to), it's important not to stop there. The fact that we provide complete hardware redundancy built in for our cheapest product is one of the things that people love about the cloud. In addition, we use vMotion, which we believe is the world's best system for moving systems in case of a hardware failure. That's why we offer such a strong SLA: if it doesn't work, you immediately get credits.
In addition to the strong SLA, though, we also offer built-in hardware-based load balancing at no additional fee. Why do you need load balancing if we're running vMotion underneath? Because hardware redundancy isn't the end-all, be-all.
Software fails as well. As much as we'd like to believe that underlying virtualization technology never ever has a problem, we know that's not true. Anti-affinity ensures that even if a vmHost gets out of sync, your application continues to run. You can use anti-affinity with our built-in, no-cost hardware load balancing features to run multiple copies of your app on completely separate hardware and software for as little as $100 a month. That's a fraction of what we used to spend to even get one isolated server with no load balancing up.
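As a sanity check on the arithmetic above, here is a quick sketch using the prices quoted in this post and an average of roughly 730 hours in a month:

```python
# Sketch: verify the per-month cost figures quoted above.
# Rates are the figures mentioned in this post; months vary in length,
# so we use the annual average of ~730 hours.

HOURLY_RATE = 0.065              # 6.5 cents per server-hour
HOURS_PER_MONTH = 365 * 24 / 12  # ~730 hours

one_server = HOURLY_RATE * HOURS_PER_MONTH
two_servers = 2 * one_server

print(f"One server:  ${one_server:.2f}/month")   # under 50 bucks
print(f"Two servers: ${two_servers:.2f}/month")  # ~$100, with load balancing at no charge
```

One server lands at about $47.45 a month, and a two-server anti-affinity pair at roughly $95, which is where the "as little as $100 a month" figure comes from.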
There's another si
Dec 10, 2012
Two things have been happening lately that have got us thinking about how people buy cloud. First, we recently launched our Dimension Data Cloud sites around the world and are working with our regions to do online sales on five continents. Second, we were having a conversation with one of our One Cloud partners about how they could accelerate their cloud uptake. They had implemented the same solution we sell directly, at the same price points, and we had worked with them to provide all sorts of sales training and marketing collateral. Yet their sales were still below their expectations.
During the conversation it came out that they were requiring their clients to make a two-year commitment to cloud. Right away, we realized the main problem. One of the things clients love about cloud is that they don't have to make anything more than a one-hour commitment. They only risk 6.5 cents if they want to try one of our cloud servers. If they ever need more or less, they can use it for just as long as they need. And if for some reason we fail to provide phenomenal service, they can turn off their usage and never pay us another dime.
It's pretty obvious why No Commit hourly pricing is good for the client. What our partner had failed to realize was how good it is for the provider as well. They forgot one of the basic tenets of cloud computing: clients keep using it not because they are locked into a contract, but because they like the service and find it useful for their business. That type of relationship, built on a user's desire to continue with the service rather than on lock-in, creates a much more positive environment. We have clients paying over 1 million dollars a year on cloud (and one over 5 million) with no commits at all. We realize they can walk at any time if we don't keep up the good work, and they know we know that.
But it’s great for us, because clients will sign-up much faster if they don't feel locked in. While a million dollar decision has to be run up to the CFO, anyone can make a
Nov 01, 2012
What a difference a year makes! In October, we received notification from Gartner that, for the first time in our company's history, we had been selected as a Leader in their Magic Quadrant (MQ) for Cloud IaaS. Gartner defines "leaders" in the MQ as distinguishing themselves by offering an excellent service, and having an ambitious future roadmap. Despite the play on words in the title, this was no surprise to us, as our team has been demonstrating to clients for the last three years that we belonged in this category, though it was exciting to see that Gartner now agrees with us as well. :)
On a serious note, we're extremely proud to be selected as one of only five companies in the world to land in the leaders quadrant, and see it as a tremendous validation of the strategy we've put in place over the last three years since we first made our public cloud available.
The report also specifically cited the uniformity of our private cloud & public cloud offerings as reasons for our competitive positioning. We speak about this compatibility often with clients and partners and highlight the fact that very few public cloud offerings include an analogous private cloud implementation that allows consistent development across the two, as well as the easy transition of workloads from private to public infrastructure. The consistency exists across the web user interface as well as our RESTful APIs, which allows our clients to make the transition from private to public infrastructure with minimal modification.
In addition, the report cited the fact that Dimension Data retains OpSource's rich history as a SaaS-focused hoster as a continued advantage for the company. We remain deeply-focused on our SaaS clients and continue to expand that customer base through our one-of-a-kind application operations services with a 99.99% SaaS application uptime SLA.
But don't just take our word for it, download the entire report here and read for yourself. We hope you enjoy it and look forward to hearing from you.
Jason Cumberland
VP, Sa
Oct 12, 2012
As I mentioned in a previous blog post, a bit more than a year ago, Dimension Data acquired OpSource. The idea behind the purchase was to integrate the two companies, capitalize on OpSource's expertise in the SaaS and Cloud markets, and make the OpSource offering available to Dimension Data's extensive global client base.
As of October 1st, the first official step of that journey has begun with the renaming of OpSource to the Cloud Solutions Business Unit (CBU for short) within Dimension Data. While it's going to be tough to leave behind the OpSource name after ten years, the opportunities that this transition affords us and our clients are extremely exciting.
Specifically, with the creation of the CBU, OpSource is being combined with two other long-standing companies in the Dimension Data portfolio. Internet Solutions in South Africa, BlueFire in Australia, and OpSource will be combined into the new Dimension Data CBU. Together, these three companies combine regional expertise with class-leading infrastructure as a service (IaaS) and hosted product offerings like hosted desktop, hosted Microsoft Exchange, and hosted Microsoft SharePoint. For OpSource clients, it means exposure to additional products like these and others in the Dimension Data portfolio as time goes on. It also means continued global expansion of our datacenter footprint as we leverage Dimension Data's global client base and demand for Cloud-based services.
It's also important to note what is not changing during this transition. For our existing and prospective clients, your support teams will remain the same, your sales teams will remain the same, the current product and service offerings remain the same as well. In fact, our team's focus within the CBU remains the same. We will continue to differentiate ourselves with our enterprise-ready private and public Cloud offering and our one-of-a-kind application operations managed services. The biggest adjustment we anticipate for our clients wi
Sep 05, 2012
I'd be willing to bet that the last time you asked a Cloud salesperson about their security story, the answer loosely mimicked the following... "At <vendor>, security is of paramount importance. We take pride in offering security that is iron-clad, enterprise-class, best-of-breed, industry-leading…"
In fact, I hear this so often myself that I'm considering implementing a "buzzword swear jar" my sales reps have to contribute to any time I observe one of these over-hyped and largely meaningless words used in front of a client. In my opinion, if we're going down that path, we might just as well say our security is "awesome," which would be more succinct and provide roughly the same amount of information.
So, if everyone is saying the same thing, as a buyer, what questions can you ask to uncover meaningful differences in Cloud security offerings?
Obviously, in the context of a short blog post, we can only scratch the surface of a complex subject like this, but below are a few points I find most compelling in OpSource's approach to Cloud security.
Many (almost all) public cloud providers took a shortcut when developing their public Cloud network architectures. OpSource chose a different path and created a Cloud network architecture that exactly mimics what is possible on traditional networks composed of physical servers.
To get specific, this approach creates three important differences in our Cloud networks:
All OpSource Cloud networks are Layer 2 hardware-based networks (not software-emulated Layer 3 networks). In addition to the security benefits further discussed below, hardware-based networks are faster, and allow us to offer a performance SLA in our Cloud (something you won't commonly find because it can't be done on software-emulated networks). This allows our clients to deploy to the Cloud without re-architecting their application to expect slow or variable network performance.
The availability of multiple, hardware-based VLANs allows clients to deploy their network architec
Jul 31, 2012
Since then, OpSource and DiData have been busy integrating the two companies and cultures (which have proven to be a very strong fit). In addition, our infrastructure teams have been hard at work quietly doubling the size of our global Cloud footprint. Since last year, we've opened new datacenters in the Netherlands; Sydney, Australia; and Johannesburg, South Africa, and are currently working on our next location, which we expect to announce before the end of this year.
In addition, we’ve developed and brought to market a private cloud offering built using the same technology that powers our public Cloud, an offering that we expect to be a huge part of our business over the next year as we expose the power of the public/private cloud combination to Dimension Data’s amazing client base.
Lastly, our product and engineering teams have continued to roll out new features and enhancements to our Cloud stack twice a month, leading to some of our more notable features like hosted MsSQL and SharePoint, enhanced load balancing, Hybrid NAS and Hybrid physical server environments ("HybridConnect") (among many others too numerous to mention).
In the midst of all this change, one thing we’ve absolutely neglected has been the OpSource blog, which is something we’ll pay more attention to going forward.
If you have questions or comments about our offerings, please leave them below or contact us directly . We look forward to hearing from you.
Jason Cumberland
Vice President, Sales
[email protected]
408-567-2000
Jun 09, 2011
We announced an alliance with VCE, a consortium of VMware, Cisco and EMC, to build public clouds for service providers. VCE sells an integrated cloud infrastructure stack called a Vblock(TM). A Vblock provides the compute, network, storage and virtualization to build a public or private cloud; however, it does not include the orchestration, billing, API and user controls that enable a service provider to build a complete pay-as-you-go public cloud service. Our partnership seeks to solve this problem by creating an integrated solution made up of OpSource running on top of a VCE Vblock.
What is exciting about this alliance is that OpSource is a "Certified Orchestration Software Provider" for the VCE Vblock platform. This means that our "OpSource Stack," made up of the software that provides the user interface (web and API) layer, integration layer (billing, accounting, user controls, etc.), and infrastructure layer (the ability to control compute, network, memory and storage resources), has been tested and certified by VCE to run on their Vblock. OpSource will manage this stack as a service just as we do for our White Label resellers.
VCE, OpSource and our ecosystem consulting, integration and infrastructure partners are working together to target service providers all over the world. This provides OpSource the ability to more efficiently scale our efforts to target service providers and differentiates us further in the market.
Read our announcement about our partnership.
Jan 11, 2011
Gartner's 2010 Magic Quadrant for Cloud IaaS and Web Hosting is out, and OpSource has been recognized this year as a “Challenger,” a nice improvement over last year’s position. Putting pressure on the incumbent providers, OpSource was recognized as having an enterprise-class cloud offering that is aggressively priced.
Strong growth, product innovation and competitive pricing drove OpSource to a spot in the 'challengers' corner on this year's Gartner Magic Quadrant, moving from its position as a niche player.
Though the report was published just a few weeks ago, there has already been a lot of talk about the MQ this year, with Derrick Harris at GigaOm criticizing Gartner's ranking system, and Lydia Leong of Gartner defending it. Whichever side you support in this debate, the reality is that the Gartner Magic Quadrant is one of the most influential benchmarks for companies seeking to evaluate vendors.
From the outside it may seem like a mystery as to how companies gain a position on the magic quadrant as a niche provider, visionary, challenger or leader. Lydia did a good job explaining the objectivity that goes into the MQ; and, I can say that this was by far the most detailed evaluation I have been through by an industry research group. Their questions and myriad surveys were detailed, quantitative as well as technical.
It took a while for Gartner to publish the 2010 report; however, Lydia has stated that the pace of innovation in the cloud IaaS market is so rapid that she and Gartner colleague Ted Chamberlin will be issuing a mid-year update rather than waiting a whole year for the next report. As one of the companies committed to this innovation, we look forward to the next report.
Sep 23, 2009
Wrecking Balls and Open PO's
Imagine this scenario. You're an IT manager and you want one of your sys admins to set up a new server in your data center. To accomplish this you do three things:
Give the sys admin the root password to every other system in the data center.
Rent a wrecking ball, put it in front of the data center and give them the keys.
Give them an Open Purchase Order with the ability to buy unlimited amounts of equipment!
Now, no IT manager would ever do this with their current data centers, but it's pretty much what every IT manager has to do with most of today's cloud environments. That's because the vast majority of clouds today are built to be used by one person with one password. If you want multiple people to access the account, they each have to use the same username and password!
That's like having every single person in your IT department using the same password for every system. I'm not security-paranoid, but even I shudder at the thought. Every single system with the same username and password? You have no way of tracking who's made what changes to the systems, or who is even allowed to make changes. Someone could install a virus across your environment and there would be no way of checking who did it, because everyone uses the same username and password!
But with cloud it's even worse than that. Not only could they infect all of your systems, they could wipe them out with a few clicks of the mouse (hence the virtual wrecking ball). You could log in to find hundreds of systems and years of work completely destroyed in about three minutes by one Windows admin with a grudge. But of course you wouldn't know who did it, because everyone uses the same username and password!
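The attribution problem above can be sketched in a few lines. This is a hypothetical illustration (the log format and account names are invented, not any provider's API): with one shared account, every destructive action in the audit trail points at the same identity, so only per-user accounts make "who did it?" answerable.

```python
# Hypothetical sketch: why shared credentials destroy auditability.
# The audit-log format here is invented for illustration.

from collections import defaultdict

def who_deleted_servers(audit_log):
    """Group destructive actions by the account that performed them."""
    culprits = defaultdict(list)
    for entry in audit_log:
        if entry["action"] == "delete_server":
            culprits[entry["account"]].append(entry["target"])
    return dict(culprits)

# Shared-account world: three admins, one identity. Attribution is impossible.
shared = [
    {"account": "admin", "action": "delete_server", "target": "web-01"},
    {"account": "admin", "action": "delete_server", "target": "db-01"},
]

# Per-user accounts: the same actions, but each one is now attributable.
per_user = [
    {"account": "alice", "action": "delete_server", "target": "web-01"},
    {"account": "bob",   "action": "delete_server", "target": "db-01"},
]

print(who_deleted_servers(shared))    # {'admin': ['web-01', 'db-01']}
print(who_deleted_servers(per_user))  # {'alice': ['web-01'], 'bob': ['db-01']}
```

In the shared case the log is technically complete but forensically useless: every entry names "admin."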
Finally, even assuming your staff has the best intentions (and most of them do), you are also giving them unlimited access to add as many systems and as much storage as they like. That's like me giving my 7th grade daughter unlimited text without a plan (like
Jun 22, 2009
Often the two biggest concerns about using Cloud resources today are the lack of latency SLAs and the difficulty of locking down sensitive data in cloud environments. These issues of performance and security are often cited as the most common reasons users either don't adopt the cloud, or, if they do use cloud resources, only use them for test/dev environments.
Interestingly enough, the root reason for the inability of cloud providers to offer an SLA on latency between different systems in the cloud and the difficulty of locking down data in the cloud is the same. It is what I call the flat network problem. The flat network problem is the underlying structural defect of the first generation of cloud systems. Essentially, in order to make the cloud as flexible as possible, all of the systems within a cloud sit on the same network.
This is fine if you want to add lots of front-end systems doing the same thing. But in a traditional two-tier architecture, putting your databases on the same network as your front-end web traffic creates all sorts of headaches. First of all, while you can secure the servers, it's generally best not to connect sensitive database servers directly to the internet.
Secondly, since all traffic between your web/application servers and your database servers must be routed over the front-end network, it is difficult if not impossible to guarantee latency between those systems. Even if they sit in the same data center, the latency can often be milliseconds instead of microseconds. That just won't work for most traditional two-tier architectures.
Now, there have been many ingenious workarounds for the increased latency between cloud-based systems. That said, what would make the cloud much more accessible for the enterprise is a way to create what I call Virtual Private Clouds within the public cloud. Essentially, this gives cloud users network-level as well as system-level control over how their infrastructure is managed. Cloud infrastructures would look much more like this:
By creating true layer t
May 14, 2009
Much has been made lately of the fact that the cloud is not enterprise-ready. Security, performance, SLAs, support, standards and management tools are all cited as reasons the cloud isn't ready for enterprise adoption.
Many vendors are proposing Private Clouds as a solution. Private Clouds are clouds that run inside enterprise data centers, operated by enterprise IT, for the use of the members of the enterprise. Basically, it's a way to virtualize a large swath of the IT data center. As is often the case with technology vendors, they think that the infrastructure technology, virtualization, is the end solution the user wants, rather than the vehicle through which their needs are met. While large-scale adoption of private virtual farms will aid in the management of the data center, it will not address the value that users are getting from true Cloud computing.
May 14, 2009
Much has been made lately of the fact that the cloud is not enterprise-ready. Security, performance, SLAs, support, standards and management tools are all cited as reasons the cloud isn't ready for enterprise adoption.
Many vendors are proposing Private Clouds as a solution. Private Clouds are clouds that run inside enterprise data centers, by enterprise IT, for the use of the members of the enterprise. Basically, it's a way to virtualize a large swath of the IT data center. As is often the case with technology vendors, they think that the infrastructure technology, virtualization, is the end solution the user wants rather than the vehicle through which the user's needs are met. While large-scale adoption of private virtual farms will aid in the management of the data center, it will not address the value that users are getting from true Cloud computing.
To understand the true value of Cloud computing, you first need to understand how the 'Cloud Generation' uses technology and why the Cloud is so attractive to that generation as an infrastructure solution. The Cloud Generation has grown up on the web. As a result, they have come to expect three core elements to their technology experience:
Immediate Availability - They do a search and get going right away.
Ubiquitous Access - They can get to their data and apps anytime, anyplace.
Sharing and Collaboration - They expect to be able to collaborate and share anything they are working on.
The current Cloud addresses those needs by providing infrastructure in a way that is far different from any past solutions.
Immediate Availability = Complete Flexibility
Cloud solutions allow users to provision resources immediately. By the time you are done reading this, you could have a server running in Amazon or an application published in Google. It's that immediate. Moreover, it's completely flexible. You can turn off services as quickly as you turn them on. Finally, you only pay for what you use, down to the hour or gigabyte. This resonates with a group that's n
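That pay-for-what-you-use model is easy to reason about in code. As a minimal sketch, the rates and the round-partial-hours-up policy below are illustrative assumptions, not any provider's actual pricing:

```python
import math

def usage_charge(hours_used, gb_stored, rate_per_hour, rate_per_gb):
    """Pay-as-you-go charge: bill only for the hours and gigabytes
    actually consumed. Rounding partial hours up is one common policy;
    both the policy and the rates here are illustrative assumptions."""
    return math.ceil(hours_used) * rate_per_hour + gb_stored * rate_per_gb

# A server run for 30.5 hours with 10 GB stored, then switched off:
charge = usage_charge(30.5, 10, 0.10, 0.15)  # 31 * 0.10 + 10 * 0.15 = 4.60
```

Turn the service off and the meter stops; next month's bill starts again from zero hours.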
Aug 29, 2008
Just in time for the third part of my Silo Busting Trilogy (don't be confused by the 4, the first post was just an overview), Sarah Lacy published her fantastic article On Demand Computing: A Brutal Slog. (Sarah, thanks for the set-up. Let me know how I can return the favor.)
For those without the patience to read her prose, Sarah basically says the world is going to On Demand, but selling this stuff is really, really hard. CEOs are flying all over the place trying to get deals done.
The natural reaction might be: isn't this the case with traditional software as well? It is for big deals, but smaller deals (still the majority of SaaS sales) are done through the channel. The network of channels for traditional ISVs is huge, from local mom-and-pop VARs, to huge resellers such as CDW, to the big integrators like Accenture. Unfortunately, none of these organizations does much for SaaS (it seems they are as addicted to up-front revenue as the traditional ISVs).
Fortunately, we are seeing a next generation of integrators focused on integrating SaaS products. Companies such as Astadia, BlueWolf, and Appirio have built burgeoning businesses around SaaS application customization and integration. The problem is that most of the focus has still been on integrating Salesforce.com.
That's where web services come in. By ensuring you have a good web services interface, you allow your app to be integrated into these solutions by these next-generation integrators. This opens whole new channels (admittedly small right now, but growing like the rest of SaaS). Integrators can either use your software as a platform on which to develop custom apps, or, more likely, integrate your app as part of a custom solution for a specific company or vertical.
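To make that concrete, here is a minimal sketch of the kind of read-only web services endpoint an integrator could build against, using only the Python standard library. The route, field names, and sample data are illustrative assumptions, not a prescribed design:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative in-app data an integrator might want to pull into
# a custom solution for a specific company or vertical.
CUSTOMERS = [{"id": 1, "name": "Acme Corp", "plan": "enterprise"}]

class ApiHandler(BaseHTTPRequestHandler):
    """Serves the app's records as JSON over a simple HTTP GET."""

    def do_GET(self):
        if self.path == "/api/customers":
            body = json.dumps(CUSTOMERS).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep demo output quiet
        pass

def make_server(port=0):
    """Bind the API; port 0 lets the OS pick a free port."""
    return HTTPServer(("127.0.0.1", port), ApiHandler)
```

A real interface would add authentication and write operations, but even this shape is enough for an integrator to embed your app's data in their own solution.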
Beyond the SI play, there is the ability to integrate your app into other SaaS applications, letting them do the hard work of sales while you grow every time they get a new customer. Intacct software has done just that with RealPage. Intacct is a critical component of RealPage, providing
Jun 30, 2008
A couple of posts ago, I spoke to the busting of the SaaS Silo with Web Services and the impact that was having on the SaaS industry. The last post spoke specifically about using Web Services to add functionality to your app. While adding cool new functionality to the app is big for the product guys and the marketing guys, the interest from the sales side seems to be driven by a whole separate set of concerns, chief among them... Integration.
According to recent research by both Saugatuck and Forrester, integration has surpassed security as the main concern for enterprise implementations of SaaS. This is actually a great sign for SaaS vendors. It means that SaaS is extending beyond the departmental sale and making true progress into the enterprise. It also means that in order to get past this increasingly common sales objection, companies need to figure out how to use Web Services to integrate their SaaS application.
While enterprise adoption of SaaS has been quite good, it's usually done at the departmental level initially. That means good SaaS apps appeal to business users with specific problems. As the adoption of those applications spreads from the department to the whole enterprise, IT gets involved. And it's logical to think IT wouldn't want a separate employee record in its Taleo system from the one it has in its payroll system. Solutions such as Boomi's Atoms help IT shops avoid that problem.
Besides integrating with legacy applications, Web Services are beginning to help companies integrate multiple SaaS applications. Up to now, the most ubiquitous integration problem, user management, has either been ignored by companies using SaaS or has had to be cobbled together by in-house teams. I can tell you, we use everything from SalesForce to NetSuite to RightNow, and we've had to put some pretty tricky things in place to (imperfectly) manage users. Now we are seeing ready-built solutions from TriCipher and Symplified that are making this easier and easier for both the SaaS vendor and the enterprise.
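The cobbled-together version of that user-management problem usually looks something like this sketch: one loop pushing the same employee record to each SaaS app's user API. The client interface here is a hypothetical stand-in, not any real vendor's API:

```python
class ProvisioningError(Exception):
    """Recorded marker for an app that failed to accept the new user."""

def provision_user(user, apps):
    """Push one employee record to every SaaS app in `apps`.

    `apps` maps an app name to any client object exposing
    create_user(user) -> user id (a hypothetical interface). Partial
    failure is the classic headache, so failures are recorded instead
    of aborting the loop, which would leave the remaining apps
    silently out of sync.
    """
    results = {}
    for name, client in apps.items():
        try:
            results[name] = client.create_user(user)
        except Exception as exc:
            results[name] = ProvisioningError(str(exc))
    return results
```

Single sign-on products like TriCipher and Symplified exist to replace exactly this kind of glue.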
Finally, the integrat
May 30, 2008
Probably the simplest thing SaaS apps can do to improve their business is to use web services to expand the functionality of their application. By integrating third-party applications in "Corporate Mash-Ups," SaaS companies can have the best of both worlds: a robust feature set and a complete focus on their core product.
Companies like SalesForce and WebEx have all shown the value of doing things like offering online ordering and billing, tracking site usage, and adding strong reporting and user management features. The problem is that all of these additional features take programming time away from the apps' core value: sales force automation and collaboration. That's fine if you have hundreds of millions in funding and eight years of development. What's the new SaaS app to do?
Fortunately, we have a new world of apps available to add that functionality. No longer is it just Google Maps and Hoover's information. There are tons of new apps you can integrate via APIs or web services. Examples include:
TriCipher - For strong identity management and integration with corporate directories.
Sabrix - For tax calculations.
PivotLink - For graphs and pivot tables.
OpSource Billing - If I don't get one corporate plug in, Richard, Kim, and Christina get mad.
Business Objects - For Crystal Reports and others.
Ribbit - For integrating cell phones into your app. If that doesn't make sense, go to their site. It's extremely cool.
This list could be ten times as long and it's growing daily. Needless to say, a lot of the "extraneous" work of creating the app can now be integrated instead of programmed, allowing your precious coders to focus on the core value you are selling to your customers. This not only keeps the R&D costs down, it allows for more robust apps to hit the market sooner.
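The "integrate instead of program" point is really about dependency shape: your checkout code calls out for the tax rate rather than owning tax tables. A hedged sketch follows; the `rate_for` interface and the flat-rate stand-in are illustrative assumptions, not Sabrix's actual API:

```python
def checkout_total(subtotal, region, tax_service):
    """Compose an external tax service into the checkout flow.

    `tax_service` is any object with rate_for(region) -> float; in
    production that would be a thin client for a vendor's API, so the
    core app never has to maintain tax logic itself.
    """
    rate = tax_service.rate_for(region)
    return round(subtotal * (1 + rate), 2)

class FlatRateTax:
    """Stand-in service; a real one would call the vendor over HTTPS."""
    def __init__(self, rate):
        self.rate = rate
    def rate_for(self, region):
        return self.rate

total = checkout_total(100.00, "CA", FlatRateTax(0.0725))  # 107.25
```

Swapping the stand-in for a real client changes one constructor call, which is the whole appeal: the coders stay on the core product.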
Up next, Silo Busting 3 - web services for enterprise integration.
May 22, 2008
It's time to grow up....and learn to play nice with others.
SaaS adoption in the enterprise has definitely increased. But with that, organizations are increasingly asking SaaS applications to start working with other SaaS applications and with the company's legacy applications as well. Recent studies by both Saugatuck and Forrester suggest that integration has surpassed security and compliance as Enterprise IT's chief concern with implementing (or growing) SaaS applications.
This is an extremely encouraging sign. It shows the acceptance of SaaS as a legitimate enterprise software solution by the majority of Enterprise IT shops. Up to now, SaaS has been primarily a departmental sale. HR departments buy Taleo for human capital management, Marketing buys Marketo for marketing analysis, and call centers buy SupportSoft to manage their ticketing. As you know from past posts, selling immediately recognizable value at the departmental level is key to a strong success story in SaaS, and we can see how that has happened.
But now these apps are growing up and reaching across the organization (growing your app is another key SaaS sales strategy). When that happens, IT is willing to accept the app's growth, but needs it to do more now. Enterprise IT doesn't want a separate employee record in Taleo from the one in their payroll system. They want to be able to correlate all this marketing data back to their sales productivity, and they want to use the same master customer record for their ERP system as for their ticketing system. And they don't want different log-ins for each employee; they want a single sign-on solution for all of their SaaS as well as on-premise apps (à la TriCipher).
So SaaS applications have to stop being silos that work just inside themselves. They need to start using web services to integrate with other SaaS apps and with legacy applications. By doing so, they'll open up three great new areas for growth:
Increased Functionality by working with other Apps
Enterprise Growth by integra
Feb 04, 2008
When Mike Mankowski sent me this blog post today, I figured, "Yeah! My running buddy David Greenfield from Altera is writing a post about me. I didn't even know he blogged."
Alas, it was a case of mistaken identity, but the post was real. This David Greenfield disagrees with my hogwash, but that's O.K., I just like getting quoted. That said, I think Mr. Greenfield's contention that function (cloud applications) and form (cloud computing) are mutually exclusive is misguided.
I was stating that the next generation of users will demand on-demand, collaborative group applications they can access anywhere and connect to in any way. This is what everything we see on the web, from SaaS to Social Networking, is driving toward. David's argument that these applications will run like existing applications, behind the firewall and on servers bought and managed by IT, is short-sighted.
Instead, I think Cloud Infrastructures will evolve with the applications that they serve. And with that evolution, IT will find a way to exert the kind of data control and security necessary to run Enterprise-critical applications. So instead of buying servers, IT will find ways to use cloud resources that give them the same type of control they had with the old models. We are already seeing that today. While an Amazon EC2 cluster is fine for a blog site, when a Web Application (or SaaS) company wants to sell, they know their cloud environment needs to be secure and robust. Hence the proliferation of certifications (SAS 70, PCI, European Safe Harbor, etc.) that have become ingrained in the DNA of SaaS applications. These are the beginnings of IT reasserting its control over cloud apps.
I see the evolution of enterprise-class Cloud computing as similar to what we saw with Client/Server. When the PC was seen as a toy, IT talked about getting apps back under central control. This was accomplished not by moving back to mainframes and minis but by evolving PC apps into Client/Server apps. Many people forget that "Servers" are just souped-up PCs with more pr
Nov 05, 2007
As many who know me know, I have not been a big fan of Google. I love the desktop search (or I did until I got a Mac with Spotlight) but am not a big fan of their corporate culture. Just because they got search right (emphasis on the past tense, but that's a later post) doesn't give them license to walk around the valley looking down their noses. (Prius anyone?) They are a notoriously difficult company to partner with and to sell to (probably the real genesis of my distaste).
I especially dislike the "Do No Evil" motto. As if other companies have the motto "Do Evil." It's like an ad campaign that asserts "Trebelicious BubbleGum has no Spider Eggs in it." (Although we know that certain telcos' bubblegum does have spider eggs.)
But lately, I'm beginning to like Google. They really do seem committed to an open web, and that is good for everyone. First was their support of Net Neutrality. Actively fighting the telcos in their effort to control what traffic they deliver is critical to the success of the Internet. To see what would happen if AT&T and Verizon got their way on Net Neutrality, one need only look at how horrible the mobile web browsing experience is (another area Google is trying to address with its gPhone initiative).
Now one could argue that Google supports Net Neutrality because they don't want to pay telcos for carrying terabits' worth of YouTube videos. Except that Google has more than enough cash to pay the telcos and serve their cafeteria meals in disposable gold happy meal boxes. If Google didn't believe in an open web, it would do just that. The truth is, while Google can afford to pay the telcos off, start-ups could not. The telcos could effectively bar a good portion of Google's next generation of competitors from the market by setting up a content tax that would be a market killer.
Google's latest salvo of course is their OpenSocial initiative. Again, Google has come down on the side of an open platform over using its muscle to promote a Google-only platform. The
Oct 18, 2007
Reading M.R. Rangaswami's recent post Where are Software's Children, I am struck by the continued belief that enterprises will continue to use installed applications through the next generation of software. That is simply not going to happen.
Mr. Rangaswami observes that the ruling class of software companies is aging and that most good young programmers and executives are going to Web 2.0, open source, and SaaS companies. He makes a number of suggestions on what the TBA ("traditional business application") companies can do to combat that trend. While Mr. Rangaswami's observation is correct, his suggestions will, in the end, be spitting in the wind.
That is because the young talent is attracted to these companies because of what they are doing: creating the next generation of applications. They have no interest in working on client/server technologies. They grew up on the web and they want to be building Web Applications on next-generation platforms. The idea that better mentoring will get these people to work on a fading technology is absurd. So the really interesting question is what the world is going to be like when these "Children" grow up.
I remember a similar shift when I first got into the business world back in the late 80's. The company I worked for did all their computing on a VAX and made very minimal use of PCs (just for word processing and some spreadsheets). I was charged with putting together a corporate training database and employee scheduling tools. I never once considered doing it on the VAX. The idea of using that technology was a complete anathema.
The same thing is happening in today's technology world. This new generation of technologists grew up online. They look at client/server computing and installed software the way I looked at the VAX. They probably realize the power of it, but would never consider using it or working on it. It's as separated from their existence as an ATM network would be to today's network engineers.
Which of course leads back to one of my favori