The Past Several Weeks in IT…

On the whole, the past few weeks have not been good for IT.  Let’s take a look at some of the top news items:

Heartbleed:
What was it?
Heartbleed is an OpenSSL vulnerability that allowed hackers to pull random data from the memory of servers running specific versions of OpenSSL (1.0.1 through 1.0.1f).  This had the potential to reveal passwords, credit card numbers, encryption keys, etc. – and had been a problem since March 2012.  The Register has a good analysis of the bug.
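For reference, the affected versions can be checked mechanically. A minimal sketch based on the public advisory's range of 1.0.1 through 1.0.1f (fixed in 1.0.1g); the helper name and the deliberately simplified version parsing are my own:

```python
# Quick check of whether an OpenSSL version string falls in the
# Heartbleed-vulnerable range (1.0.1 through 1.0.1f, per the public
# advisory; fixed in 1.0.1g).  Parsing is deliberately simplified.
import re

def is_heartbleed_vulnerable(version: str) -> bool:
    # Match strings like "1.0.1", "1.0.1a", ..., "1.0.1f"
    m = re.match(r"1\.0\.1([a-z]?)$", version.strip())
    if not m:
        return False   # other branches (0.9.8, 1.0.0) were not affected
    letter = m.group(1)
    return letter == "" or letter <= "f"   # 1.0.1g and later are patched

print(is_heartbleed_vulnerable("1.0.1e"))  # vulnerable
print(is_heartbleed_vulnerable("1.0.1g"))  # patched
```

In practice you would feed this the output of `openssl version` on each server, and patching (not just checking) is the real fix.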

What was the immediate effect?

If you’re running a website that was using a vulnerable OpenSSL version, you probably spent most of this week patching SSL and updating your certificates.  You may have also sent out emails to your users asking them to change their potentially compromised passwords.

How did it affect me?

Fortunately, all I needed to do was verify that I wasn’t at risk and change a few passwords.  I did get to revisit my own advice, and found that updates from external vendors were a lot easier to find on Twitter than through either their websites or customer support.

What will the long term effect be?

Once the dust settles, the questions will be: if a bug this serious went unnoticed for over 2 years, what other problems are out there?  How secure can you actually be on the internet?  Internet resource providers that need to support secure traffic will have to rely on more than encryption alone.

End of XP Support:
What was it?
Microsoft’s most successful desktop operating system.

What was the immediate effect?

Microsoft will no longer be releasing patches, but XP computers will still be used.

How did it affect me?

It didn’t.  I rebuilt my last XP laptop at home as a Linux machine a couple of years ago, and we’ve long since upgraded the XP desktops at Heroix.

What will the long term effect be?

If someone finds a security bug in XP as big as Heartbleed was for OpenSSL, Microsoft won’t provide a patch.  There may be third party vendors supplying patches, but it’s on the XP owners to find and apply them.  Another effect may be users not even bothering to replace XP – between tablets and smartphones, they may not need to.

HBO Go Crashes:
What was it?
A Cloud based SaaS that allows subscribers to view HBO programming in real time from internet devices.

What was the immediate effect?

A lot of angry users on Twitter complaining that they couldn’t watch the Game of Thrones season premiere, along with bad publicity for HBO in particular and Cloud services in general.

How did it affect me?

What is this Game of Thrones of which you speak?

What will the long term effect be?

There has been speculation that the reason HBO Go crashed was that subscriptions were shared across multiple devices, which had the effect of multiplying demand several times over.  Given that this is not the first time HBO Go has seen problems with higher than anticipated demand on its services, they either need to find a way to scale up their resources, or find a way to limit the number of users allowed per subscription.
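A per-subscription limit could be enforced with something as simple as tracking active devices per account. A toy sketch of the idea; the class, the names, and the stream limit are illustrative assumptions, not HBO’s actual design:

```python
# Toy sketch of per-subscription concurrent-stream limiting -- one way
# a streaming service could cap the "shared account" demand multiplier.
class StreamLimiter:
    def __init__(self, max_streams: int = 3):
        self.max_streams = max_streams
        self.active = {}          # subscription_id -> set of device_ids

    def start_stream(self, subscription_id: str, device_id: str) -> bool:
        devices = self.active.setdefault(subscription_id, set())
        if device_id in devices:
            return True           # already streaming on this device
        if len(devices) >= self.max_streams:
            return False          # over the per-account limit
        devices.add(device_id)
        return True

    def stop_stream(self, subscription_id: str, device_id: str) -> None:
        self.active.get(subscription_id, set()).discard(device_id)

limiter = StreamLimiter(max_streams=2)
print(limiter.start_stream("acct1", "roku"))    # allowed
print(limiter.start_stream("acct1", "ipad"))    # allowed
print(limiter.start_stream("acct1", "laptop"))  # refused -- limit reached
```

The real engineering problem is doing this at scale across session servers, but the accounting itself is not complicated.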

End of Ubuntu One
What was it?
Cloud based file storage from Canonical, the company behind Ubuntu, with both free and paid options.

What was the immediate effect?
Users need to start making plans to download their files and keep them elsewhere by 7/31/14, or lose the data.

How did it affect me?

My files were already backed up elsewhere – but I will lose access to them over the internet.

What will the long term effect be?

If you weren’t using Ubuntu One, probably not much.  But, if you were, then you’re probably reconsidering Cloud storage as a long term backup option.

50 Years of the Mainframe
What is it?
Mainframe computers were – and are – the backbone of data processing and ERP for large organizations.

What was the immediate effect?
Good publicity for IBM – especially now that IBM is offering a mainframe Cloud server.

How did it affect me?
No effect – I haven’t really done anything with a mainframe since college.

What will the long term effect be?
On its own, 50 years of anything is a considerable achievement.  In IT, given that XP was considered unsupportably old at 13, the 50 year lifespan for mainframes is a testament to the flexibility, reliability and backwards compatibility of the platform.  Mainframes are not going away anytime soon – that is, as long as they can still find people who have the skills to run them.

What does this all add up to?

1) “Secure” internet connections for the past couple of years weren’t secure.

2) There are still a lot of XP computers in use, and they won’t be getting security patch updates.

3) Cloud technologies sometimes don’t scale the way they should.

4) Cloud vendors sometimes drop services with short notice.

5) The rock solid computers at the core of many large organizations are still going strong – but they’re running out of people who know how to run them.

These are not insurmountable problems – patches will be applied, lessons will be learned, and IT as a whole will live on to make new and better mistakes with technologies that have not even been dreamed of yet.  IBM may even recruit enough hardcore mainframe fanatics to address its staffing problems for another generation.  The point isn’t to look for an absolutely flawless technology, but to find a pretty good one, keep a close eye on it to make sure it’s working as expected, and find and fix the problems as they crop up.  Because, no matter how many problems are fixed, nothing will ever be completely bug free.

HealthCare.gov and the other state level health exchanges are only one part of US health care reform.  Another aspect of health care reform that is much less publicized is the Health Information Technology for Economic and Clinical Health (HITECH) Act – which is designed to build an electronic health records (EHR) infrastructure.  The program is implemented using a carrot and stick approach: there are currently incentive payments for implementing EHR, but health care providers who don’t comply by 2015 will face “payment adjustments” on Medicare claims.  In order to qualify for the incentive payments, providers must not only implement EHR, but also prove that they’re getting “meaningful use” from it.

Beyond the financial incentives and hoops that providers need to jump through to attain said incentives, the advantages of using EHR will be compelling when they’re fully implemented.  My doctor will no longer need to rely on my admittedly bad memory to know exactly what a specialist diagnosed after a referral.  I would be able to access the full gory saga of breaking my wrist from emergency room X-ray to OT referrals, schedule whatever follow ups were needed, and know exactly what my co-pays would be.  And researchers would be able to use sanitized records in big data analysis for policy planning and medical research.

We’re not there yet, and there are a lot of road blocks between now and then:

1) Doctors are not IT professionals.
And I don’t want them to be.  I want them to spend their time reading the latest and greatest developments in the world of medicine.  Most providers will end up hiring EHR consultants – to help with this, the Office of the National Coordinator for Health Information Technology (ONC) website provides multiple spreadsheets and documents that give an excruciatingly detailed outline of how to set up a practice for EHR and how to evaluate EHR consultants.  The Centers for Medicare & Medicaid Services (CMS) also provides a list of certified EHR software, with automated guides that score the software based on how well it meets CMS criteria for incentive payments.  Even with this help, though, implementing EHR will be a long, detailed, time consuming job for any doctor or office manager tasked with it.

2) EHR application usability is often not very good.
Health care professionals concentrate on treating patients, not on entering information about what they just did into an application.  EHR applications that are not easy to use will ultimately not be used.  According to the Healthcare Information and Management Systems Society (HIMSS):

Electronic medical record (EMR) adoption rates have been slower than expected in the United States, especially in comparison to other industry sectors and other developed countries. A key reason, aside from initial costs and lost productivity during EMR implementation, is lack of efficiency and usability of EMRs currently available. Achieving the healthcare reform goals of broad EMR adoption and “meaningful use” will require that efficiency and usability be effectively addressed at a fundamental level.

While the ONC does not provide any guidance on EHR application usability, HIMSS has several resources that can help providers evaluate applications for usability before the applications are adopted.

3) Specialization and Interoperability
As technology and standards evolve, providers may find that the EHR software they’ve implemented is no longer satisfactory.  A lack of interoperability between departments may be a deciding factor for hospitals, while specialists sometimes find that the applications lack necessary features.  Given the evolving technology and the complexity of selecting software that can meet a wide spectrum of requirements, providers need to be prepared for the possibility of either migrating to a new EHR platform or re-implementing one from scratch.

4) HIPAA
Hospitals and other large providers may already have an IT infrastructure that can be scaled up to meet EHR needs, but many health care providers, especially small practices, will be implementing a new IT infrastructure to handle EHR.  In most cases, a new infrastructure built from scratch would be an ideal fit for a Cloud implementation.

There is one problem: Cloud providers need to prove they are HIPAA compliant.  HIPAA carries steep fines, even for violations where the “Individual did not know (and by exercising reasonable diligence would not have known) that he/she violated HIPAA”, and healthcare providers are justifiably wary of handing patient records to an offsite provider.  In order to use a Cloud provider for EHR, the Cloud provider must sign a HIPAA Business Associate Agreement (BAA), which ensures that EHRs will be managed securely and accessible only by approved entities, and that the provider agrees to be audited to ensure compliance.  Many mainstream Cloud vendors (Amazon, Microsoft, etc.) offer BAAs, and some EHR vendors’ applications are already Cloud based.

Even with a BAA, however, healthcare providers may still be reluctant to trust HIPAA compliance in the Cloud.  The Cloud may also not be the best option if there is an existing infrastructure, if the systems are critical, or if there is no stable, high-bandwidth internet connection.  In these cases, virtualization can provide the necessary capacity, redundancy and security control to support an EHR system.


Microsoft and Your Privacy

Last week Microsoft admitted that it had snooped into a user’s Hotmail account in order to pursue an investigation into the theft of Microsoft’s intellectual property.  Microsoft’s Deputy General Counsel & VP, Legal and Corporate Affairs, John Frank, points out in a blog post that, firstly, Microsoft was within its rights based on the Terms of Service the user agreed to when he registered for the account, and secondly, that it couldn’t get a court order because it owned the servers hosting the data:

Courts do not, however, issue orders authorizing someone to search themselves, since obviously no such order is needed. So even when we believe we have probable cause, there’s not an applicable court process for an investigation such as this one relating to the information stored on servers located on our own premises.

In order to address end user privacy concerns stemming from this incident, the blog post goes on to assure users that Microsoft will only view user data in circumstances that would have warranted a court order, that those circumstances will be evaluated by a legal team outside the investigation, that it will look only at data pertinent to the investigation, and that it will report the number of such searches, and the number of users affected, in its annual report.

Personally, I expect that anything I send outside my LAN is no longer private.  If it’s not over a VPN, it can be sniffed out.  If it’s stored on someone else’s server, someone else could read it.  Mind you, I don’t expect anyone is itching to expose my favorite scalloped turnip recipe, but SaaS provided banking or tax information is potentially snoop-worthy.  I seldom read all the way through Terms of Service agreements, and when I have in the past, it’s basically confirmed my suspicions: if it’s a free service, I give up rights to the information; and if I’m paying for it, I have a limited expectation of privacy – the provider shouldn’t be reading my data directly, but the metadata describing my usage is being collected and analyzed.

Now that Microsoft has demonstrated the privacy limits for free services, the question is: what are the limits for paid services?

Microsoft’s first argument for free services was that the user agreed that Microsoft owned the data.  That is not the case if you’re paying for a Cloud implementation.  In section 2c of the Windows Azure Agreement:

Ownership of Customer Data. Except for Software we license to you, as between the parties, you retain all right, title, and interest in and to Customer Data. We acquire no rights in Customer Data, other than the right to host Customer Data within the Services, including the right to use and reproduce Customer Data solely as necessary to provide the Services.

Microsoft’s second argument was that they could not get a search warrant to search themselves, even though a warrant was justified.  What would it take to justify a warrant for Cloud data?  In section 1b of the Agreement, Microsoft defines an acceptable use of the Azure Cloud:

Acceptable use. You may use the Product only in accordance with this Agreement. You may not reverse engineer, decompile, disassemble, or work around technical limitations in the Product, except to the extent that applicable law permits it despite these limitations. You may not disable, tamper with, or otherwise attempt to circumvent any billing mechanism that meters your use of the Product. You may not rent, lease, lend, resell, transfer, or sublicense the Product or any portion thereof to or for third parties.

The legalese is vague enough to allow for interpretation as technology develops, and that could lead to unintentional violations.  If a Cloud consumer came up with an innovative, wildly effective way of circumventing the “technical limitations” of the product, would violation of the terms of service justify a warrant?  What if a subscription to a Cloud based SaaS application began to look a lot like reselling Cloud services?

The lesson here is that, at a minimum, you should read the Terms of Service carefully.  And given the vagaries of legal language, get a lawyer to review any agreement for services if you’re going to do something a bit beyond the mainstream.

Managing Cloud Migration

Federal agencies have a “Cloud First” mandate from the Office of Management and Budget (OMB) which specifies that, if a “secure, reliable, cost-effective cloud option exists”, it should be used by default.  Since the definition of “secure” can be a moving target, the Federal Risk and Authorization Management Program (FedRAMP) was established to provide an approval stamp and ongoing certification for Cloud service providers.  If all the cogs in the machine work as designed, an agency using a FedRAMP approved vendor should not have to worry about Cloud related security concerns, or about excessive red tape when contracting with that vendor.

That being said, there are still the “reliable” and “cost-effective” parts of the mandate to establish.  Even if a solution is rock solid secure, it’s no good if it’s not effective, or if it ends up costing more than a previously existing solution.  Just as with any IT project, “reliable” and “cost-effective” depend as much on how well the project is managed as on how good your programmers are.

Part of the problem is the mindset that implementation failure is due to bad coding and a lack of resources: if you just had better code, or a few more GB of RAM, your site would be a marvel of efficiency and cost savings.  Back in the real world, the problem is more often that the individual components of your solution work up to specification on their own, but when you put them together you hit walls – say, a database connection from HealthCare.gov to Homeland Security or the IRS doesn’t connect efficiently, and slows down the application.

HealthCare.gov, of course, is different from “Cloud First” mandated applications.  It was, and is, a sprawling application that covers multiple agencies and was intended to be accessed by millions of external users simultaneously.  Federal agencies moving their internal applications to the Cloud should have far less complexity to deal with than HealthCare.gov did, and the transition should be smooth and flawless.  Right?

Not necessarily.  Federal Computer Week (FCW) reported on an executive briefing by Wolf Tombe, CTO of Customs and Border Protection (CBP), outlining mistakes that were made in their Email-as-a-Service Cloud implementation.  FCW quotes Tombe’s summary of the experience as:

Tombe… said the agency did not specify with the vendor how the migration to cloud email would occur, nor did it contractually demand visibility into the vendor’s cloud infrastructure.

Upon signing the contract, Tombe said, the agency learned the vendor would initially be able to migrate only about 100 users per week to the cloud. A server blade failure soon after led to a total system outage, getting CBP’s email-as-a-service offering off to a terrible start.

“We should have known we were in for trouble,” Tombe said. “It wasn’t what we signed up for, and we’re still not seeing the cost realization you’d expect for cloud. It was a custom infrastructure built for us — not a managed service.”

CBP’s experience with its attempted Cloud migration led to changes in its Cloud acquisition strategy.  Some of the new guidelines were requirements for Cloud vendors, while others were internal requirements to ensure a successful Cloud implementation:

  • Start with small, low visibility applications rather than trying to migrate large scale, agency wide applications.
  • Make sure mission owners are committed to the implementation.
  • Set a reasonable level of expected cost savings, and prove that those savings are being achieved.

Focusing on the mission owners and their experience with migrated applications is important in establishing the “reliable” and “cost-effective” portion of the Cloud mandate.  Are Cloud based applications working just as well as the locally hosted ones?  Are there any problems integrating the as yet unmigrated applications with the newly migrated applications?  If the Cloud based applications are not as reliable as locally hosted applications, do the savings justify the move?  If not, can you back out and return to your previous application?  These questions are all much more manageable with small, low visibility applications.

However, organizations seldom have applications that are completely standalone.  A web processing application and an email system may have links to each other, and moving one to the Cloud without the other can cause that integration to break down.  As HealthCare.gov demonstrated, trying to integrate disparate components requires careful management to make sure that the individual components work together as a whole.  Planning for how to re-integrate applications after migration should take place well before the first application is migrated to the Cloud.

Considering the Cloud? Part 6: When You Shouldn’t Use the Cloud

So far in this series, we’ve looked at the available options for Cloud computing, along with budgeting and performance tuning for a Cloud implementation.  While it’s not difficult to make a persuasive case for moving your infrastructure to the Cloud, there are some scenarios where the Cloud is not the right option.  Concerns related to security, portability, economics and system criticality can limit the benefit you would receive from moving to the Cloud.


Security

As we discussed in part 3, a Community or Public Cloud will be hosted remotely.  The vendor providing your Cloud services will have copies of your server images, applications, and data.  Additionally, your data and applications may physically exist on devices that are shared with other Cloud customers.  The Cloud vendor should have infrastructure level security measures in place to ensure that data is secure and isolated from other customers.

However, even if the vendor’s security is sufficiently strong, in order to be compliant you need to be able to prove that the vendor meets or exceeds regulations.  Make sure the vendor provides enough detail to satisfy auditors, and updates security measures as new threats are found and new regulations are implemented.

Application access via the internet can also be a security concern even if the connection between the Cloud application and user’s browser is encrypted.  It is very difficult, but not impossible, to decrypt traffic.  Traffic patterns can be analyzed.  Users very often create passwords that are far less secure than they should be.  And, last but not least, users could be accessing the site through infected browsers that compromise data confidentiality.  These issues may not be significant if the client and server are both behind a firewall, but over the open internet they can lead to a security breach.


Portability

Another issue is portability.  Vendors can and do go out of business, or get acquired by other vendors.  Before porting an application to a Cloud vendor, make sure you have a plan for what would happen if that vendor were no longer available.  Rebuilding an IaaS application with a new vendor is relatively straightforward, but moving SaaS or PaaS applications could require rebuilding them according to the new vendor’s requirements.


Economics

There may also be less of an economic benefit to the Cloud if you have very large servers running very heavy workloads 24×7.  At that point, the per hour cost of a Cloud based server can end up being higher than the amortized cost of an in-house enterprise server.  Additionally, the one time cost of a hardware upgrade may be significantly smaller than the ongoing cost of additional Cloud resources.
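A back-of-the-envelope comparison makes this concrete. The sketch below weighs a metered hourly rate against an amortized in-house server; all the prices and the amortization period are made-up assumptions, not vendor quotes:

```python
# Back-of-the-envelope break-even comparison for a server running 24x7.
# All figures here are illustrative assumptions, not real vendor pricing.
HOURS_PER_MONTH = 730   # average hours in a month

def monthly_cloud_cost(hourly_rate: float,
                       utilization_hours: float = HOURS_PER_MONTH) -> float:
    return hourly_rate * utilization_hours

def monthly_inhouse_cost(hardware_price: float, amortization_months: int,
                         monthly_power_and_admin: float) -> float:
    return hardware_price / amortization_months + monthly_power_and_admin

cloud = monthly_cloud_cost(1.50)                 # a $1.50/hr instance, 24x7
inhouse = monthly_inhouse_cost(20000, 36, 400)   # $20k server over 3 years
print(f"cloud: ${cloud:,.2f}/mo vs in-house: ${inhouse:,.2f}/mo")
```

With these made-up numbers the in-house server wins; the point is that for steady 24×7 workloads the comparison is worth actually doing rather than assuming the Cloud is cheaper.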

Critical Systems

Finally, there are some critical systems which are just not suited to Cloud applications.  As per NIST’s Cloud Computing Synopsis and Recommendations:

Safety-critical systems, both hardware and software, are a class of systems that are usually regulated by government authorities. Examples are systems that control avionics, nuclear materials, and medical devices. Such systems typically incur risks for a potential of loss of life or loss of property.
Such systems inherit “pedigree” as a byproduct of the regulations under which they are controlled, developed, and tested. Because of the current lack of ability to assess “pedigree” of one of these systems within a cloud (due to many distinct subcomponents that comprise or support the cloud), employing cloud technologies as the host for this class of applications is not recommended…

The lack of visibility into the exact hardware underlying the Cloud makes it unsuitable for “safety-critical” systems.  The Cloud hardware may meet or exceed the required specifications, but there is no way to ensure that it does, or that the underlying hardware might not change without notice.  If your organization has business critical applications that have similar stringent requirements, you would be best served by controlling the hardware directly.

This is the last post in the Considering the Cloud series.  While the Cloud can suit many IT needs, it is not appropriate for all of them – and even when it is suitable, finding the appropriate application service, deployment option, and then a vendor to meet those needs can be a daunting process.  The purpose of these posts was to provide a framework of the currently available options, and we will be posting updates to the series as Cloud technology evolves.

Much of the source material for these posts was taken from the previously referenced Cloud Computing Synopsis and Recommendations from NIST, and I would also highly recommend the European Commission’s Unleashing the Potential of Cloud Computing in Europe, which has a different perspective on Cloud Computing.

Previous posts in our Considering the Cloud series:
Part 1: Terms of Service
Part 2: Application Models
Part 3: Deployment Models
Part 4: Enterprise Level Considerations
Part 5: Performance

Considering the Cloud? Part 5: Performance

Applications running on traditional bare metal servers in local data centers generally have linear rules for performance optimization:  the slowest step in an application determines the speed of the application, and optimizing that step will speed up performance.  If your CPU is spiking, add another processor.  If disk I/O slows down your application, optimize the application’s I/O, or get faster drives.
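The "find the slowest step" rule can be applied mechanically by timing each stage of an application. A minimal sketch; the stage names and the sleep/compute workloads are placeholders standing in for real I/O and processing:

```python
# Minimal sketch of locating an application's bottleneck by timing each
# stage.  The stages here are placeholders: sleeps stand in for I/O and
# a sum stands in for CPU work.
import time

def timed(stage_fn) -> float:
    start = time.perf_counter()
    stage_fn()
    return time.perf_counter() - start

stages = {
    "fetch":   lambda: time.sleep(0.05),    # stand-in for slow disk/network I/O
    "compute": lambda: sum(i * i for i in range(50_000)),
    "store":   lambda: time.sleep(0.005),
}

timings = {name: timed(fn) for name, fn in stages.items()}
bottleneck = max(timings, key=timings.get)
print(f"slowest stage: {bottleneck}")
```

Once the slowest stage is known, the traditional rule applies: optimize that stage (or its hardware) and the whole application speeds up.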

Unfortunately, these rules can be obscured when the same application is transferred to a Public Cloud.  Performance will vary depending on the options you’ve chosen for your instance, and on the underlying performance optimizations implemented by the vendor.  An InfoWorld comparison test of 8 Public Cloud vendors found that similar offerings from different vendors had significantly different performance on benchmark tests, and a later, more in-depth test on Amazon’s EC2 cloud found significant performance issues when the proper sizing was not chosen for a Cloud instance.

On top of that, Cloud instances configured to automatically scale up resource use in response to performance slowdowns can cover up problems with application design or configuration – leading to a situation where applications fail regardless of the resources they’re given.

Performance testing before full scale implementation can provide the opportunity to benchmark your application, and determine the effects of resource allocation, application performance tuning, and overall application design before you rely on it to scale up automatically when you go live.  In the best case scenario you would be able to experiment with scaling your application on one vendor, and then extrapolate those results to other vendors and look for the best deal.  However, there is no apples-to-apples comparison between Cloud vendor offerings, so the results of Cloud performance testing will be unique to each vendor.  The number of vendors you test will vary depending on time, budget, and how many make your CIO’s shortlist.

However, Public Cloud performance doesn’t end with optimizing your application in the Cloud – there are several additional performance factors that are not present in locally hosted applications:

1) Network Latency
Every I/O operation has some latency, however small it may be.  Moving an application from a local LAN based server to a remote internet accessed server will increase the amount of network latency for the application.  For web sites that were already being accessed by the public over the internet, this might not be an issue – in fact, if your Cloud vendor has a better network infrastructure, it could work in your favor.

2) Noisy Neighbors
From the Cloud vendor’s perspective, your server instances are Virtual Machines (VMs) on a host that is shared with multiple other VMs.  Cloud vendors have strict software protocols to make sure that each instance gets its allocated resources and is strictly segregated from other instances on the same host.  That being said, it is possible that another instance’s need for additional resources could cause a slowdown on your server until the resources are redistributed appropriately.  InfoWorld’s EC2 test showed significant degradation in performance during periods when the EC2 hardware was heavily used.

3) Vendor Outages
Hidden though it may be, the Cloud is still built on hardware, and hardware fails.  The data centers hosting the hardware are subject to natural disasters.  The operators in the data centers can, and do, make mistakes.  If your servers are at your site, you can troubleshoot a problem, correct it, and take steps to make sure it doesn’t happen again.  If your servers are in a remotely hosted location, you rely on the vendor to manage problems and, hopefully, make sure they don’t happen again.  About all you can do in the event of a vendor outage is try to determine what went wrong, make sure you receive whatever benefits you are due from your SLA, and, if it happens too often, make plans to port your application elsewhere.
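The network latency factor in (1) can at least be measured before you commit to a vendor. A rough sketch that times a plain TCP connect; the host and port would be your application's actual endpoints, and a real test would take many samples rather than one:

```python
# Rough sketch of measuring connection latency to a remote server by
# timing a TCP connect.  A real latency test would take many samples
# against the application's actual endpoints and look at percentiles.
import socket
import time

def connect_latency_ms(host: str, port: int = 443,
                       timeout: float = 5.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass                      # connection established, then closed
    return (time.perf_counter() - start) * 1000  # milliseconds
```

Comparing these numbers from your users' locations against the same measurement to your current LAN server gives a first estimate of the latency a Cloud move would add.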

In the next part of this series we’ll look at reasons why you may not want to use the Cloud.

Previous posts in our Considering the Cloud series:

Part 1: Terms of Service
Part 2: Application Models
Part 3: Deployment Models
Part 4: Enterprise Level Considerations
Part 6: When you shouldn’t use the Cloud

Considering the Cloud? Part 4: Enterprise Level Considerations

There are two fundamental reasons to use Cloud computing: saving money and improving performance.  You reserve the resources you estimate will be enough to meet your performance needs, and then pay extra for additional on-demand resources if they’re needed.  And, of course, the rates for on-demand resources are higher than the rates for reserved resources, so the most cost-effective configuration is to reserve just enough capacity without reserving too much.

The tricky part of reserving Cloud resources is that every aspect of the Cloud is metered.  It’s not just a matter of selecting how much bandwidth, memory, CPU and storage you need.  What OS do you need to run?  Do you need databases?  Or a web server?  On a scale of Light, Medium or Heavy, how much utilization do you expect your instances to have?  How much storage do you need?  How much I/O will that storage see?  Do you want all your resources in one geographic location, or in multiple locations?

In order to simplify Cloud management, Cloud providers usually package resources into typical usage cases and, in the cases of IBM SmartCloud and Amazon Web Services, provide calculators to help you estimate your monthly expenses based on sample use cases.  For example, using the IBM calculator to compare a high-availability configuration to a small web application configuration gives the following details:

                    High Availability    Web Application
Virtual Machines    14                   5
CPUs                40                   20
Storage             1 TB                 2 TB
Storage IO          526 million          237 million
Network IO          5 TB                 173 GB
Cost Estimate       $3,148/month         $2,377/month

The cost estimate application lets you adjust the values used for the configurations, which can provide an idea as to where costs might begin to add up rapidly. For example, if you decide you would like a 10 TB storage package, the High Availability package price increases to $4,171/month. If you select a 50 TB storage package, the cost increases even more dramatically to $8,082/month.

The AWS Simple Monthly Calculator provides similar options, but also lets you decide which of Amazon’s data centers you would like to host your application, including the option of hosting the application across multiple data centers for the sake of redundancy.  Keep in mind that while the redundancy provided by multiple data centers will keep your site running in the event of an outage, you are paying for twice the Cloud resources – and if data needs to be replicated between sites, there will be additional costs for that as well.

The takeaway is that everything in the Cloud is metered, and any increase in I/O, storage needs or web traffic can increase your costs if you have not reserved enough resources and have to fall back on on-demand pricing. Make sure you know your application’s performance parameters, budget resources accordingly, and keep an eye on the application to see if you need to increase its resources.
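To make the metering concrete, here is a toy billing model in Python. The rates and the on-demand premium are hypothetical placeholders, not any vendor’s actual pricing; the point is only how usage beyond a reservation drives up the bill:

```python
# Toy metered-billing model.  All rates and the on-demand premium are
# hypothetical placeholders, not any vendor's actual pricing.
RESERVED_RATE = {"storage_tb": 100.0, "network_tb": 120.0}  # $/unit/month
ON_DEMAND_MULTIPLIER = 1.5  # assumed premium for usage beyond the reservation

def monthly_cost(reserved, actual):
    """Sum per-resource charges; usage beyond the reservation is billed
    at the assumed on-demand premium."""
    total = 0.0
    for resource, rate in RESERVED_RATE.items():
        booked = reserved.get(resource, 0.0)
        used = actual.get(resource, 0.0)
        total += booked * rate                              # reserved capacity
        total += max(0.0, used - booked) * rate * ON_DEMAND_MULTIPLIER
    return total

reserved = {"storage_tb": 2, "network_tb": 1}
# Usage exactly matches the reservation:
print(monthly_cost(reserved, {"storage_tb": 2, "network_tb": 1}))  # 320.0
# A traffic spike pushes network transfer to 3 TB:
print(monthly_cost(reserved, {"storage_tb": 2, "network_tb": 3}))  # 680.0
```

In this sketch a 2 TB overage in network transfer adds more to the bill than the entire reserved configuration cost, which is exactly the kind of surprise that monitoring your actual usage is meant to catch.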

In the next part of this series, we’ll take a look at performance issues in Cloud computing.

Additional Posts in our Considering the Cloud series:

Part 1: Terms of Service
Part 2: Application Models
Part 3: Deployment Models
Part 5: Performance
Part 6: When you shouldn’t use the Cloud

Considering the Cloud? Part 3: Deployment Models

If you’ve made it this far, you’ve gotten past the Terms of Service and have decided what type of Cloud you want to use: Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS).  The next thing to consider is exactly where you want your Cloud deployed.

The “where” of Cloud deployment involves two considerations:  1) Do you want to share your hardware with other Cloud clients, and 2) Do you want your hardware to be locally or remotely hosted?  In an ideal world, with enough time and money for a custom implementation, a private, locally hosted Cloud would be the first choice.  However, most organizations do not have the requisite funding, are not subject to the overriding security concerns of organizations like the CIA, and must therefore weigh the pros and cons of both sharing hardware in the Cloud and working with remotely hosted servers.

Let’s look at sharing hardware first.  In a post last June, I discussed the National Institute of Standards and Technology (NIST) Definition of Cloud Computing, which described 4 categories of deployment models for Clouds:  Private, Community, Public, and Hybrid.  These categories refer to whether, and to what extent, a Cloud infrastructure is shared.  Sharing infrastructure typically means that clients are assigned a virtual server on hardware that hosts multiple virtual servers.  The individual virtual servers are segregated by the virtualization software, but the underlying infrastructure is shared, so resource contention (e.g. “noisy neighbors”) is a possibility, as it is in any virtualized environment.

In Private Clouds, the hardware is exclusively used by one organization, and is typically hosted locally by that organization.  As may be expected, this is at the more expensive end of the Cloud price range, but it buys you complete control of your hardware, and the accompanying security and compliance benefits.

In Community Clouds, a group of related organizations share a cloud – for example, the shared municipal infrastructure in Melrose, MA.  Community Clouds are much more cost-effective than Private Clouds, while still being able to focus closely on security and compliance issues for specific groups of clients – for example, see the US GSA’s list of approved Cloud Service Providers for a current list of federally approved vendors compliant with FedRAMP standards.

The Public Cloud is the most inexpensive model.  Public Clouds can span several large-scale data centers, and may offer the advantage of being able to geographically distribute your servers.  If a natural disaster wipes out an East Coast data center, your traffic could be seamlessly redirected to a West Coast data center.  Additionally, the pay-as-you-go model for Public Clouds can be a mixed blessing – either offering an economical way to pay only for the resources you need with a learn-as-you-go, do-it-yourself implementation, or pushing you toward implementation consultants who can navigate the dizzying array of cloud options.

The Hybrid Cloud model is a combination of any of the other Cloud models – for instance, a small Private Cloud for proprietary data combined with a Public Cloud implementation for publicly accessible web sites, or a university Community Cloud that uses Public Cloud based SaaS for fundraising.

The exact model you use will be based on your budget, your organizational needs, and your security concerns.  For any remotely hosted resource, make sure that you know exactly where the provider’s hardware is hosted and, if at all possible, that they have alternate sites in the event that your primary server instances go down.  Yes, they have an SLA obligation to keep your site up, and yes, you will get some money back from them if they don’t, but it isn’t likely to be enough to offset the business losses you’re liable to suffer in the event of an outage.
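To see why an SLA payout rarely covers the damage, here is a rough comparison with made-up numbers (the monthly fee echoes the sample configuration earlier in this series; the hourly revenue figure is purely hypothetical):

```python
# Rough comparison of an SLA credit to business losses, using made-up
# numbers: a 4-hour outage on a $2,377/month service, for a business
# that loses $1,000/hour of revenue while the site is down.
monthly_fee = 2377.00
outage_hours = 4
hours_per_month = 30 * 24

# Assume a generous credit: refund the outage window pro rata.
credit = monthly_fee * outage_hours / hours_per_month
lost_revenue = 1000.00 * outage_hours

print(round(credit, 2), lost_revenue)   # 13.21 vs 4000.0
```

Even with a generous pro-rata credit, the refund is two orders of magnitude smaller than the business loss in this sketch, which is why alternate sites matter more than compensation clauses.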

I’ll wrap up Cloud deployment models on that note.  The next post in this series will take a look at NIST’s overview of how enterprise level computing issues are manifested in Cloud environments.

Additional Posts in our Considering the Cloud series:

Part 1: Terms of Service
Part 2: Application Models
Part 4: Enterprise Level Considerations
Part 5: Performance
Part 6: When you shouldn’t use the Cloud

Considering the Cloud? Part 2: Application Models

If you are planning on moving to the Cloud, you need to make decisions about exactly which of your applications you will be moving.   The National Institute of Standards and Technology (NIST) has developed a Definition of Cloud Computing that outlines the currently available application options: Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS) – I gave a brief overview of each of these in a post from last June.

Keep in mind that you don’t need to commit all your resources to one type of Cloud application – you can pick and choose resources according to your needs.  For example, you might move your website to a PaaS vendor while using Exchange on IaaS for your mail and a SaaS CRM application.

Before you make any decisions on which model of cloud you would like to use, first examine the applications you’re currently using – are they performing satisfactorily?  Are there any problems with the applications?  Do you anticipate problems with them in the future?  If you anticipate sales growth, will your CRM be able to keep up with the new customers?  If you’re rolling out a new web application, will your web servers be able to handle the extra load?

If an application you’re using is performing well, and you anticipate that it will continue to perform well but may need more resources, then IaaS can help by providing extra resources based on, and priced by, demand.  IaaS also has the benefit of allowing you to use the same server platform you’re currently using, making it easier to move applications to and from the cloud.  The one drawback of IaaS compared to other cloud technologies is that, as with any virtual server, you still have to install and administer the OS and applications, and may well need to provide licensing for both.

However, if you have applications that need to be replaced, SaaS applications have the advantage of offloading application development, maintenance and support to the vendor, and vendors may also provide training materials to get your organization up to speed quickly.

There are downsides to SaaS as well.  Applications may not be as customizable as you would like, and if you want to move to a different application, you need to export the information you’ve configured for your company and then import it into the new application.

PaaS addresses the customization problems found in SaaS – since you develop the application, it can be customized as much as needed.  Depending on the platform used to develop the application, it may be very easy to transfer the application to a new PaaS vendor.  The drawbacks to PaaS include needing to allocate resources to application development, and having to provide training for the applications.

Another factor to consider when determining which Application model meets your needs is how that model is deployed.  In Part 3, we’ll look at the different Cloud deployment models, and what you need to consider when choosing where your cloud is deployed.

Additional Posts in our Considering the Cloud series:

Part 1: Terms of Service
Part 3: Deployment Models
Part 4: Enterprise Level Considerations
Part 5: Performance
Part 6: When you shouldn’t use the Cloud

Considering the Cloud? Part 1: Terms of Service

Depending on who you’re talking to, the Cloud is either the greatest innovation in computing since the internet, or overhyped software as a service.  Part of the problem is, according to NIST, that:

Cloud computing is a developing area and its ultimate strengths and weakness are not yet fully researched, documented and tested.
Attempts to describe cloud computing in general terms, however, have been problematic because cloud computing is not a single kind of system, but instead spans a spectrum of underlying technologies, configuration possibilities, service models, and deployment models.

In an attempt to clarify the possibilities and potential problems for organizations considering cloud computing, NIST has created a Cloud Computing Synopsis and Recommendations Special Publication which outlines typical commercial terms of service, the different types of clouds, when they are best used, and considerations for using cloud computing.

Even if you aren’t considering a move to the cloud, the analysis of the typical terms of service is useful for any outsourced resource.  The terms break down into the services the provider promises, limitations to the service agreement, and the terms you, the customer, agree to abide by.

The first part, the provider’s service promise, is often referred to as a service level agreement (SLA), with terms like “99.9% uptime”, compensation for not meeting that target, and agreements to protect the confidentiality and integrity of your data.  When examining the SLA terms, make sure you understand the provider’s definition of “uptime” – if they measure service in 5-minute intervals and service is out for less than 5 minutes, they may not count it as an outage.  “Uptime” also may not mean that all resources are available – for example, if you have a web site with a backend database and the web site is up but the database is down, it may not count as an outage.
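A quick calculation shows how little downtime “99.9% uptime” actually allows, and how a 5-minute measurement interval can fail to register short outages (the outage durations below are invented for illustration):

```python
# How much downtime does "99.9% uptime" permit in a 30-day month?
minutes_per_month = 30 * 24 * 60            # 43,200 minutes
allowed_downtime = minutes_per_month * 0.001
print(allowed_downtime)                     # 43.2 minutes per month

# If the provider only measures availability in 5-minute intervals,
# outages shorter than one interval may never register as downtime.
outage_minutes = [4, 3, 4, 2]               # four short, real outages
measured = sum(m for m in outage_minutes if m >= 5)
print(sum(outage_minutes), measured)        # 13 real minutes, 0 measured
```

In that scenario your users lost 13 minutes of service, but the provider’s own accounting shows a perfect month.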

The compensation for an outage is also often in the form of a service credit, so if you’re so dissatisfied with a provider that you want to move elsewhere, the compensation is lost.  Also, if you don’t notice an outage and claim a credit for it, the provider isn’t going to point it out to you – monitoring your system’s availability yourself is the only way to know whether they’re living up to their end of the agreement.
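Even a minimal external probe is enough to start keeping your own availability record.  This sketch uses only the Python standard library; the URL and check interval are placeholders to adjust for your own site:

```python
# Minimal availability probe: request the site on a schedule and log
# anything that doesn't come back as HTTP 200.  The URL and interval
# are placeholders -- adjust them for your own site.
import time
import urllib.request

SITE = "https://www.example.com/"   # placeholder URL
INTERVAL_SECONDS = 60               # placeholder check interval

def check(url, timeout=10):
    """Return (is_up, detail) for a single HTTP check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200, "HTTP %d" % resp.status
    except Exception as exc:        # DNS failure, timeout, HTTP error, etc.
        return False, str(exc)

def monitor(url=SITE, interval=INTERVAL_SECONDS):
    """Probe url forever, printing a timestamped UP/DOWN line per check."""
    while True:
        up, detail = check(url)
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        print("%s %s %s" % (stamp, "UP" if up else "DOWN", detail))
        time.sleep(interval)

# monitor()  # runs until interrupted
```

The resulting log is crude, but it is an independent record you can put next to the provider’s uptime numbers when you file a credit claim.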

The limitations to the provider’s agreement are primarily that they are allowed to have scheduled outages, which are not counted against uptime.  Check the provider’s history here – scheduled maintenance can sometimes run over its window and cause outages.  Of course, in the event of a natural disaster or other unavoidable event, the SLA is no longer in effect.  In that case, if your provider has multiple geographically distributed data centers and your data is replicated across them, you may still be able to maintain service.

As a customer, you also agree to terms – primarily that you will use the services legally, with properly licensed software, and pay on time.  If, for some reason, the provider thinks you have violated these terms, they can terminate your account and delete all of your data.  So make sure you’ve got a backup copy of all the data you have on their servers – partially in case their business office transposes the numbers on your credit card, and partially in the event of a natural disaster.

Finally, keep in mind that the service provider usually reserves the right to change the terms of your contract with advance notice.  This can include changes to any part of the agreement, including provided services, which could in turn affect the performance of your site.  To cover this contingency, it’s a good idea to have a backup plan for moving your services either back to your own site or to a different service provider.

Additional Posts in our Considering the Cloud series:

Part 2: Application Models
Part 3: Deployment Models
Part 4: Enterprise Level Considerations
Part 5: Performance
Part 6: When you shouldn’t use the Cloud