Cloud hosting package details:
1 CPU, 2 GB RAM - Pay As You Go
Pay only for what you use based on hourly rates.
You're only billed for CPU and RAM usage when your server is actually running. And there is no term commitment with ANY of the pricing options - you can cancel with just three days' notice at any time.
OpSource™ provides enterprise cloud and Software-as-a-Service hosting & services for Fortune 1000, Software-as-a-Service and Web companies, with hundreds of applications, millions of users and billions of transactions supported daily. By choosing OpSource, organizations large and small are free to focus their resources on building businesses rather than investing in and running IT infrastructure and support services. The company’s OpSource Cloud™ is the first cloud to bring together the flexibility, availability and community of the public cloud with the security, performance and control the enterprise demands. In addition, the market-leading OpSource On-Demand™ empowers SaaS ISVs to bring enterprise cloud solutions to their end-users by quickly and securely delivering their applications and services over the Web. Headquartered in Santa Clara, Calif., OpSource has cloud and web application delivery centers in Virginia, London and Bangalore.
12 Cloud hosting packages
($68.40 - $302.40)
User Reviews:
There are no user reviews for OpSource.
Recent posts from the OpSource blog:
Mar 07, 2013
Five SLA Criteria to Evaluate When Public Cloud Shopping
Last December, Gartner's lead cloud IaaS analyst Lydia Leong made the now oft-repeated declaration that many leading public cloud SLAs are "practically useless." For companies considering offering enterprise IT services in the cloud, this type of public proclamation undoubtedly fuels the fire of public cloud naysayers who insist that enterprise-level services cannot be delivered via public cloud infrastructure.
Not surprisingly, at Dimension Data, we're big believers in the ability of the public cloud to deliver all types of enterprise IT and B2B SaaS applications, though Ms. Leong's comments highlight the importance of vendor selection when making your platform choice.
Given that we have conversations with clients about evaluating cloud SLAs every day, I am using this month's post to elaborate on the state of the market and the key decision criteria we recommend buyers evaluate. These points come from a whitepaper we recently released on the state of public cloud SLAs. Click here to read the entire report.
Below are five key areas of the SLA to consider when Cloud shopping for enterprise application hosting:
1. What level of uptime is the provider committing to?
The 'x nines' SLA is usually the first cited and least meaningful part of a Cloud provider's SLA. Nevertheless, it is the first key point of an SLA to identify when evaluating your options. As examples of common public Cloud SLAs, as of the time of this post, HP & Amazon offer a 99.95% SLA and Dimension Data offers a 99.99% SLA.
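As a rough sketch of what those percentages mean in practice (assuming a 30-day, 43,200-minute month), the allowed downtime falls off quickly with each extra nine:

```python
# Downtime a provider may incur per month without breaching an
# "x nines" uptime SLA (assuming a 30-day, 43,200-minute month).

def allowed_downtime_minutes(uptime_pct: float, month_minutes: int = 43_200) -> float:
    """Minutes of downtime per month permitted under the given uptime percentage."""
    return month_minutes * (1 - uptime_pct / 100)

for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% -> {allowed_downtime_minutes(sla):.2f} minutes/month")
# 99.9%  -> 43.20 minutes/month
# 99.95% -> 21.60 minutes/month
# 99.99% ->  4.32 minutes/month
```

So the gap between a 99.95% and a 99.99% commitment is roughly 17 minutes of permitted downtime every month.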
2. How is uptime calculated?
SLAs are typically calculated monthly, considering only the period during which you were a client when computing uptime vs. downtime. This is a relatively standard policy across public IaaS providers, though we advise clients to read the small print carefully here, as some well-known providers like Amazon Web Services (AWS) take a far different approach to this calculation, using an annual uptime calculation assuming that a
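To see why the calculation window matters, here is a small illustration with hypothetical downtime numbers: a single 60-minute outage breaches a 99.95% monthly SLA, but all but vanishes into an annual average.

```python
# Hypothetical downtime log: minutes of downtime in each month of a year.
downtime = [0, 0, 60, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # one 60-minute outage

MONTH_MIN = 43_200            # 30-day month
YEAR_MIN = 12 * MONTH_MIN

# Monthly SLA: each month is judged on its own.
worst_month = min((MONTH_MIN - d) / MONTH_MIN for d in downtime)
# Annual SLA: the same outage is averaged over the whole year.
annual = (YEAR_MIN - sum(downtime)) / YEAR_MIN

print(f"worst single month: {worst_month:.4%}")  # 99.8611% -> breaches a 99.95% monthly SLA
print(f"annualized uptime:  {annual:.4%}")       # 99.9884% -> comfortably above 99.95%
```

Under the annual window, the client in this sketch would never qualify for a credit despite a full hour of downtime in one month.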
Feb 15, 2013
One thing we’ve learned in cloud is that no one ever asks for things to go slower. And that the faster we go, the higher the speed bar keeps getting raised. As a result, we’re always trying to find ways to make the Dimension Data Cloud perform ever more quickly.
Since launching Dimension Data Cloud services, we’ve opened Managed Cloud Platforms (MCPs) on 5 different continents and our clients have been putting up sites in every corner of the world. This not only allows them to serve worldwide customers with low latency, it also means they can replicate their data in a variety of locations for the greatest level of redundancy. To this end, we have been deploying multiple systems for individual clients to help them speed up both image replication and access/transfer of database data.
As we explored our options further, we decided that WAN acceleration would be beneficial for all of our clients. We believe it’s a best practice to utilize multi-region replication and we wanted to make it as simple as possible. As a result, we just announced that we have installed Riverbed Steelhead WAN optimization technology in every single one of our MCPs. The results have been phenomenal. For certain applications like database replication, file synchronization, and backup and recovery, we’re seeing 65 to 95% improvements.
The best part of all of this news? It comes at no additional charge. WAN acceleration is built into your outbound bandwidth charges, which are as low as $0.09/GB. (And, of course, inbound bandwidth is always free.) We believe all of our clients should take advantage of the service, and the best way to get them there would be to make it as painless and transparent as possible.
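As a back-of-the-envelope sketch: the $0.09/GB rate and 65-95% improvement range are the figures quoted above, while the 500 GB job size, 10-hour baseline, and 80% midpoint improvement are assumptions chosen purely for illustration.

```python
# Hypothetical replication job: 500 GB of database data between two regions.
GB = 500
rate_per_gb = 0.09          # outbound bandwidth, $/GB (inbound is free)
baseline_hours = 10.0       # assumed un-accelerated transfer time
improvement = 0.80          # assumed midpoint of the quoted 65-95% range

cost = GB * rate_per_gb
accelerated_hours = baseline_hours * (1 - improvement)
print(f"transfer cost: ${cost:.2f}")                      # $45.00
print(f"time: {baseline_hours}h -> {accelerated_hours}h") # 10.0h -> 2.0h
```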
So have at it. Replicate your data to your heart’s content. We’ll make sure it gets done as quickly as possible.
Feb 11, 2013
One of the benefits of our specialization in helping ISVs successfully deliver their SaaS products is that we are exposed to dozens of new SaaS architectures every month. We see everything from single-tenant legacy software solutions to the latest "stateless, self-healing, share-nothing, built to withstand massive failure" application architectures.
Given our experience, we're often asked by clients for advice on their architecture, particularly related to what should be virtualized and what should not. Not surprisingly, most new prospective clients enter the discussion assuming they need to migrate to a 100% virtualized environment when they move their application "to the cloud." The unrelenting hype about the latest public cloud offering or newest cloud feature set has IT teams neck-deep in cloud mania.
As a result, we speak with numerous companies insistent on virtualizing applications (or application tiers) that are unquestionably not ready for, or well-suited to, virtualization in an IaaS environment. And while we ourselves are huge proponents of the benefits of virtualization, we take a very deliberate approach with our clients to ensure that they can actually leverage these benefits before running headlong into a "virtualize everything" strategy.
In this post, I'll provide a preview of the discovery process we go through to determine the recommended architecture. It's difficult to capture all of the intricacies of these discussions in one blog post, so I'm calling out the issues that most commonly lead us to recommend integrating physical servers into a client's architecture.
Questions we review with clients:
In each of these questions below, the terms "servers," "application," "environment," and "application tier" can be used interchangeably depending on which is most relevant in your situation.
Is demand for the application predictable?
Must all servers be operational 24/7/365?
Does the application primarily scale vertically (more powerful servers) vs. horizontally (more servers)?
Jan 08, 2013
Our recent release had a little-trumpeted feature called anti-affinity. Anti-affinity is the ability to ensure that your servers sit on distinct vmHosts within our architecture. It allows an even greater level of High Availability within your environments.
I was asked by one of our largest clients why we needed to offer this service. His question was very simple: if Dimension Data offers a 99.99% up-time SLA, then doesn't that mean I can just run one server and be sure it's up all the time? That mindset has become a common misperception in the cloud: cloud providers provide all the redundancy, so I don't need to make sure my application is HA.
While we are extremely proud of the level of availability we are able to provide with a server that costs less than 50 bucks a month (which is what 6.5 cents an hour comes out to), it's important not to stop there. The fact that we provide complete hardware redundancy built in for our cheapest product is one of the things that people love about the cloud. In addition, we use vMotion, which we believe is the world’s best system for moving systems in case of a hardware failure. That's why we offer such a strong SLA. If it doesn't work, you immediately get credits.
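The monthly figure follows directly from the hourly rate (using an average 730-hour month):

```python
# Sanity check on the pricing claim: 6.5 cents/hour, running continuously.
hourly = 0.065
hours_per_month = 730       # average month: 8,760 hours / 12
monthly = hourly * hours_per_month
print(f"${monthly:.2f}/month")   # $47.45 -- "less than 50 bucks a month"
```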
In addition to the strong SLA though, we also offer built-in hardware-based load balancing at no additional fee. Why do you need load balancing if we're running vMotion underneath? Because hardware redundancy isn't the be-all and end-all.
Software fails as well. As much as we'd like to believe that the underlying virtualization technology never has a problem, we know that's not true. Anti-affinity ensures that even if a vmHost gets out of sync, your application continues to run. You can use anti-affinity with our built-in, no-cost hardware load balancing features to run multiple copies of your app on completely separate hardware and software for as little as $100 a month. That's a fraction of what we used to spend to even get one isolated server with no load balancing up.
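The math behind running two copies on separate hardware is simple: with independent failures (which is exactly what anti-affinity buys you), the combined failure probability is the product of the individual ones. The 99.9% per-server figure below is an assumed illustration, not our SLA:

```python
# Availability of "at least one copy up" for two servers that fail
# independently -- the independence assumption only holds if anti-affinity
# keeps the copies on distinct vmHosts.
single = 0.999                              # assumed per-server availability
at_least_one_up = 1 - (1 - single) ** 2     # both must be down simultaneously
print(f"one server:  {single:.4%}")         # 99.9000%
print(f"two servers: {at_least_one_up:.4%}") # 99.9999%
```

Without anti-affinity, both copies could land on the same vmHost, the failures would be correlated, and the product rule would no longer apply.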
There's another si
Dec 10, 2012
Two things have been happening lately that have got us thinking about how people buy cloud. First of all, we recently launched our Dimension Data Cloud sites around the world and are working with our regions to do online sales on 5 continents. Secondly, we were having a conversation with one of our One Cloud partners on how they could accelerate their cloud uptake. They had implemented the same solution we sell directly, at the same price points, and we had worked with them to provide all sorts of sales training and marketing collateral. Yet their sales were still below their expectations.
During the conversation it came out that they were requiring their clients to make a two-year commitment to cloud. Right away, we realized the main problem. One of the things clients love about cloud is they don't have to make anything more than a one-hour commitment. They only risk 6.5 cents if they want to try one of our cloud servers. If they ever need more or less, they can use it for just as long as they need it. And if for some reason we fail to provide phenomenal service, they can turn off their usage and never pay us another dime.
It's pretty obvious why No Commit hourly pricing is good for the client. What our partner had failed to realize was how good it is for the provider as well. They forgot one of the basic tenets of cloud computing, which is that clients will keep using it not because they are locked into a contract, but because they like the service and find it useful for their business. That type of relationship, built on a user’s desire to continue with the service instead of their being locked in, creates a much more positive environment. We have clients paying over 1 million dollars a year on cloud (and one over 5 million) with no commits at all. We realize they can walk at any time if we don't keep up the good work, and they know we know that.
But it’s great for us, because clients will sign-up much faster if they don't feel locked in. While a million dollar decision has to be run up to the CFO, anyone can make a
Nov 01, 2012
What a difference a year makes! In October, we received notification from Gartner that, for the first time in our company's history, we had been selected as a Leader in their Magic Quadrant (MQ) for Cloud IaaS. Gartner defines "leaders" in the MQ as distinguishing themselves by offering an excellent service, and having an ambitious future roadmap. Despite the play on words in the title, this was no surprise to us, as our team has been demonstrating to clients for the last three years that we belonged in this category, though it was exciting to see that Gartner now agrees with us as well. :)
On a serious note, we're extremely proud to be selected as one of only five companies in the world to land in the leaders quadrant, and see it as a tremendous validation of the strategy we've put in place over the last three years since we first made our public cloud available.
The report also specifically cited the uniformity of our private cloud & public cloud offerings as reasons for our competitive positioning. We speak about this compatibility often with clients and partners and highlight the fact that very few public cloud offerings include an analogous private cloud implementation that allows consistent development across the two, as well as the easy transition of workloads from private to public infrastructure. The consistency exists across the web user interface as well as our RESTful APIs, which allows our clients to make the transition from private to public infrastructure with minimal modification.
In addition, the report cited the fact that Dimension Data retains OpSource's rich history as a SaaS-focused hoster as a continued advantage for the company. We remain deeply focused on our SaaS clients and continue to expand that customer base through our one-of-a-kind application operations services with a 99.99% SaaS application uptime SLA.
But don't just take our word for it, download the entire report here and read for yourself. We hope you enjoy it and look forward to hearing from you.
Jason Cumberland
VP, Sales
Oct 12, 2012
As I mentioned in a previous blog post, a bit more than a year ago, Dimension Data acquired OpSource. The idea behind the purchase was to integrate the two companies, capitalize on OpSource's expertise in the SaaS and Cloud markets, and make the OpSource offering available to Dimension Data's extensive global client base.
As of October 1st, the first official step of that journey has begun with the renaming of OpSource to the Cloud Solutions Business Unit (CBU for short) within Dimension Data. While it's going to be tough to leave the OpSource name behind after ten years, the opportunities that this transition affords us and our clients are extremely exciting.
Specifically, with the creation of the CBU, OpSource is being combined with two other long-standing companies in the Dimension Data portfolio: Internet Solutions in South Africa and BlueFire in Australia. Together, these three companies combine regional expertise with class-leading infrastructure as a service (IaaS) and hosted product offerings like hosted desktop, hosted Microsoft Exchange, and hosted Microsoft SharePoint. For OpSource clients, it means exposure to additional products like these and others in the Dimension Data portfolio as time goes on. It also means continued global expansion of our datacenter footprint as we leverage Dimension Data's global client base and demand for Cloud-based services.
It's also important to note what is not changing during this transition. For our existing and prospective clients, your support teams will remain the same, your sales teams will remain the same, the current product and service offerings remain the same as well. In fact, our team's focus within the CBU remains the same. We will continue to differentiate ourselves with our enterprise-ready private and public Cloud offering and our one-of-a-kind application operations managed services. The biggest adjustment we anticipate for our clients wi
Sep 05, 2012
I'd be willing to bet that the last time you asked a Cloud salesperson about their security story, the answer loosely mimicked the following... "At <vendor>, security is of paramount importance. We take pride in offering security that is iron-clad, enterprise-class, best-of-breed, industry-leading…"
In fact, I hear this so often myself that I'm considering implementing a "buzzword swear jar" my sales reps have to contribute to any time I observe one of these over-hyped and largely meaningless words used in front of a client. In my opinion, if we're going down that path, we might just as well say our security is "awesome," which would be more succinct and provide roughly the same amount of information.
So, if everyone is saying the same thing, as a buyer, what questions can you ask to uncover meaningful differences in Cloud security offerings?
Obviously, in the context of a short blog post, we can only scratch the surface of a complex subject like this, but below are a few points I find most compelling in OpSource's approach to Cloud security.
Many (almost all) public cloud providers took a shortcut when developing their public Cloud network architectures. OpSource chose a different path and created a Cloud network architecture that exactly mimics what is possible on traditional networks composed of physical servers.
To get specific, this approach creates three important differences in our Cloud networks:
All OpSource Cloud networks are Layer 2 hardware-based networks (not software-emulated Layer 3 networks). In addition to the security benefits further discussed below, hardware-based networks are faster, and allow us to offer a performance SLA in our Cloud (something you won't commonly find because it can't be done on software-emulated networks). This allows our clients to deploy to the Cloud without re-architecting their application to expect slow or variable network performance.
The availability of multiple, hardware-based VLANs allows clients to deploy their network architec
Jul 31, 2012
Since then, OpSource and DiData have been busy integrating the two companies and cultures (which have proven to be a very strong fit). In addition, our infrastructure teams have been hard at work quietly doubling the size of our global Cloud footprint. Since last year, we’ve opened new datacenters in the Netherlands; Sydney, Australia; and Johannesburg, South Africa, and are currently working on our next location, which we expect to announce before the end of this year.
In addition, we’ve developed and brought to market a private cloud offering built using the same technology that powers our public Cloud, an offering that we expect to be a huge part of our business over the next year as we expose the power of the public/private cloud combination to Dimension Data’s amazing client base.
Lastly, our product and engineering teams have continued to roll out new features and enhancements to our Cloud stack twice a month, leading to some of our more notable features like hosted Microsoft SQL Server and SharePoint, enhanced load balancing, Hybrid NAS, and Hybrid physical server environments ("HybridConnect"), among many others too numerous to mention.
In the midst of all this change, one thing we’ve absolutely neglected has been the OpSource blog, which is something we’ll pay more attention to going forward.
If you have questions or comments about our offerings, please leave them below or contact us directly. We look forward to hearing from you.
Jason Cumberland
Vice President, Sales
[email protected]
408-567-2000
Jun 09, 2011
We announced an alliance with VCE, a consortium of VMware, Cisco and EMC, to build public clouds for service providers. VCE sells an integrated cloud infrastructure stack called a Vblock™. A Vblock provides the compute, network, storage and virtualization to build a public or private cloud; however, it does not include the orchestration, billing, API and user controls that enable a service provider to build a complete pay-as-you-go public cloud service. Our partnership seeks to solve this problem by creating an integrated solution made up of OpSource running on top of a VCE Vblock.
What is exciting about this alliance is that OpSource is a "Certified Orchestration Software Provider" for the VCE Vblock platform. This means that our "OpSource Stack," made up of the software that provides the user interface (web and API) layer, integration layer (billing, accounting, user controls, etc.), and infrastructure layer (ability to control compute, network, memory and storage resources), has been tested and certified by VCE to run on their Vblock. OpSource will manage this stack as a service just as we do for our White Label resellers.
VCE, OpSource and our ecosystem consulting, integration and infrastructure partners are working together to target service providers all over the world. This provides OpSource the ability to more efficiently scale our efforts to target service providers and differentiates us further in the market.
Read our announcement about our partnership.
Jan 11, 2011
Gartner's 2010 Magic Quadrant for Cloud IaaS and Web Hosting is out, and OpSource has been recognized this year as a “Challenger,” a nice improvement over last year’s position. Putting pressure on the incumbent providers, OpSource was recognized as having an enterprise-class cloud offering that is aggressively priced.
Strong growth, product innovation and competitive pricing drove OpSource to a spot in the 'challengers' corner on this year's Gartner Magic Quadrant, moving from its position as a niche player.
Though the report was published just a few weeks ago, there has already been a lot of talk about the MQ this year, with Derrick Harris at GigaOm criticizing Gartner’s ranking system, and Lydia Leong of Gartner defending it. Whatever side you support in this debate, the reality is that the Gartner Magic Quadrant is one of the most influential benchmarks for companies seeking to evaluate vendors.
From the outside it may seem like a mystery as to how companies gain a position on the magic quadrant as a niche provider, visionary, challenger or leader. Lydia did a good job explaining the objectivity that goes into the MQ; and, I can say that this was by far the most detailed evaluation I have been through by an industry research group. Their questions and myriad surveys were detailed, quantitative as well as technical.
It took a while for Gartner to publish the 2010 report; however, Lydia has stated that the pace of innovation in the cloud IaaS market is so rapid that she and Gartner colleague Ted Chamberlin will be issuing a mid-year update rather than waiting a whole year for the next report. As one of the companies committed to this innovation, we look forward to the next report.
Sep 23, 2009
Wrecking Balls and Open PO's
Imagine this scenario. You're an IT manager and you want one of your sys admins to set up a new server in your data center. To accomplish this you do three things:
1. Give the sys admin the root password to every other system in the data center.
2. Rent a wrecking ball, put it in front of the data center, and give them the keys.
3. Give them an Open Purchase Order with the ability to buy unlimited amounts of equipment!
Now, no IT Manager would ever do this with their current data centers, but it's pretty much what every IT manager has to do with most of today's cloud environments. That's because the vast majority of clouds today are built to be used by one person with one password. If you want multiple people to access the account, they each have to use the same username and password!
That's like having every single person in your IT department use the same password for every system. I'm not security-paranoid, but even I shudder at the thought. Every single system with the same username and password? You have no way of tracking who's made what changes to the systems, or who is even allowed to make changes to them. Someone could install a virus across your environment, and there would be no way of checking who did it, because everyone uses the same username and password!
But with cloud it's even worse than that. Not only could they infect all of your systems, they could wipe them out with a few clicks of the mouse (hence the virtual wrecking ball). You could log in to find hundreds of systems and years of work completely destroyed in about 3 minutes by one Windows admin with a grudge. But of course you wouldn't know who did it, because everyone uses the same username and password!
Finally, even assuming your staff has the best intentions (and most of them do), you are also giving them unlimited access to add as many systems and as much storage as they like. That's like me giving my 7th grade daughter unlimited text without a plan (like
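The alternative this post argues for can be sketched as per-user credentials plus an audit trail. This is a hypothetical illustration, not the OpSource API; the usernames, roles, and `perform` helper are invented for the example:

```python
import datetime

# user -> role; in a real system these would be individually issued credentials
users = {"alice": "admin", "bob": "operator"}
audit_log = []   # every attempt is recorded and attributable to one person

def perform(user: str, action: str) -> bool:
    """Allow destructive actions only for admins; log every attempt."""
    allowed = users.get(user) == "admin" or action == "reboot"
    stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    audit_log.append((stamp, user, action, "allowed" if allowed else "denied"))
    return allowed

perform("bob", "delete-server")    # denied -- and logged under bob's own name
perform("alice", "delete-server")  # allowed -- attributable to alice alone
```

With per-user accounts, the "who did it?" question the post keeps returning to always has an answer in the log.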
Jun 22, 2009
The two biggest concerns about using Cloud resources today are often the lack of latency SLAs and the difficulty of locking down sensitive data in cloud environments. These issues of performance and security are often cited as the most common reasons users either don't adopt the cloud, or, if they do use cloud resources, the reason they only use them for test/dev environments.
Interestingly enough, the root reason for the inability of cloud providers to offer latency SLAs between different systems in the cloud and the difficulty of locking down data in the cloud is the same. It is what I call the flat network problem. The flat network problem is the underlying structural defect of the first generation of cloud systems. Essentially, in order to make the cloud as flexible as possible, all of the systems within a cloud sit on the same network.
This is fine if you want to add lots of front-end systems doing the same thing. But in a traditional two-tier architecture, putting your databases on the same network as your front-end web traffic creates all sorts of headaches. First of all, while you can secure the servers, it's generally best not to directly connect sensitive database servers to the internet.
Secondly, since all traffic between your web/application servers and your database servers must be routed over the front-end network, it is difficult if not impossible to guarantee latency between those systems. Even if they sit in the same data center, the latency can often be milliseconds instead of microseconds. That just won't work for most traditional two-tier architectures.
Now, there have been many ingenious workarounds for the increased latency between cloud-based systems. That said, what would make the cloud much more accessible for the enterprise is a way to create what I call Virtual Private Clouds within the public cloud. Essentially, it gives cloud users network-level as well as systems-level control over how their infrastructure is managed. Cloud infrastructures would look much more like this:
By creating true layer t
May 14, 2009
Much has been made lately of the fact that the cloud is not enterprise-ready. Security, performance, SLAs, support, standards and management tools are all cited as reasons the cloud isn't ready for enterprise adoption.
Many vendors are proposing Private Clouds as a solution. Private Clouds are clouds that run inside enterprise data centers, operated by enterprise IT, for the use of the members of the enterprise. Basically, it's a way to virtualize a large swath of the IT data center. As is often the case with technology vendors, they think that the infrastructure technology, virtualization, is the end solution the user wants, rather than the vehicle through which their needs are fulfilled. While large-scale adoption of private virtual farms will aid in the management of the data center, it will not address the value that users are getting from true Cloud computing.
To understand the true value of Cloud computing, you first need to understand how the 'Cloud Generation' uses technology and why the Cloud is so attractive to that generation as an infrastructure solution. The Cloud Generation has grown up on the web. As a result they have come to expect three core elements to their technology experience:
Immediate Availability - They do a search and get going right away.
Ubiquitous Access - They can get to their data and apps anytime, anyplace.
Sharing and Collaboration - They expect to be able to collaborate and share anything they are working on.
The current Cloud addresses those needs by providing infrastructure in a way that is far different from any past solution.
Immediate Availability = Complete Flexibility
Cloud solutions allow users to provision resources immediately. By the time you are done reading this, you could have a server running in Amazon or an application published in Google. It's that immediate. Moreover, it's completely flexible: you can turn off services as quickly as you turn them on. Finally, you pay only for what you use, down to the hour or the gigabyte. This resonates with a group that's n
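The pay-for-what-you-use model described above can be sketched as a simple usage-based bill. The hourly and per-gigabyte rates here are hypothetical, chosen only to show the arithmetic of billing for actual runtime rather than a flat monthly fee:

```python
# Usage-based billing: charge only for hours a server actually ran
# plus gigabytes actually consumed.
HOURLY_RATE = 0.095   # hypothetical $/server-hour
GB_RATE = 0.15        # hypothetical $/GB

def monthly_bill(server_hours: float, gigabytes: float) -> float:
    return server_hours * HOURLY_RATE + gigabytes * GB_RATE

# A dev server run only during business hours (8 h x 22 days)
# costs a fraction of an always-on month (24 h x 30 days).
part_time = monthly_bill(8 * 22, 40)   # 176 hours
always_on = monthly_bill(24 * 30, 40)  # 720 hours
print(f"part-time: ${part_time:.2f}, always-on: ${always_on:.2f}")
```

The ability to switch servers off, and stop paying the moment you do, is exactly the "complete flexibility" the post is describing.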
Aug 29, 2008
Just in time for the third part of my Silo Busting Trilogy (don't be confused by the 4; the first post was just an overview), Sarah Lacy published her fantastic article On Demand Computing: A Brutal Slog. (Sarah, thanks for the set-up. Let me know how I can return the favor.)
For those without the patience to read her prose, Sarah basically says the world is going On Demand, but selling this stuff is really, really hard. CEOs are flying all over the place trying to get deals done.
The natural reaction might be: isn't this the case with traditional software as well? It is for big deals, but smaller deals (still the majority of SaaS sales) are done through the channel. The network of channels for traditional ISVs is huge, from local mom-and-pop VARs, to huge resellers such as CDW, to the big integrators like Accenture. Unfortunately, none of these organizations does much for SaaS (it seems they are as addicted to up-front revenue as the traditional ISVs).
Fortunately, we are seeing a next generation of integrators focused on integrating SaaS products. Companies such as Astadia, BlueWolf, and Appirio have built burgeoning businesses around SaaS application customization and integration. The problem is that most of the focus has still been on integrating Salesforce.com.
That's where web services come in. By ensuring you have a good web services interface, you allow your app to be integrated into these solutions by these next-generation integrators. This opens up whole new channels (admittedly small right now, but growing like the rest of SaaS). Integrators can either use your software as a platform on which to develop custom apps, or, more likely, integrate your app as part of a custom solution for a specific company or vertical.
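A minimal sketch of such a web-services surface, here a single read-only JSON endpoint using Python's standard WSGI machinery. The record and its field names are hypothetical, standing in for whatever data a SaaS app would expose to an integrator:

```python
import json

# A single WSGI endpoint returning one record of a hypothetical SaaS app
# as JSON -- the kind of interface a next-generation integrator builds against.
def customer_api(environ, start_response):
    record = {"id": 42, "name": "Acme Corp", "plan": "on-demand"}  # hypothetical data
    body = json.dumps(record).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Served with `wsgiref.simple_server.make_server("", 8000, customer_api)`, the endpoint returns machine-readable data any integrator can consume, regardless of their stack; the point of the post is that this interface, not your UI, is what opens the channel.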
Beyond the SI play, there is the ability to integrate your app into other SaaS applications, allowing them to do the hard work of sales while you grow every time they get a new customer. Intacct software has done just that with RealPage. Intacct is a critical component of RealPage, providing