Cloud: Open Business Transformation

A lot of attention this week has been focused on comments made in an interview with Barclays Bank on how their use of Linux gave them streamlined, focused development environments, and how Linux and Open Source played to their internal strengths, getting them to first base quicker and saving a huge amount of budget in the process.

Barclays are a thought leader: a great company who understand how changing the ethos and supporting their people to deliver platforms and technologies keeps them one step ahead of many in the same vertical industry. It’s one reason Red Hat work with them and why we put so much effort into supporting their ambitions.

So if Barclays can harness Cloud openly and benefit from it, can other businesses use it to transform not just their architectures but also the underlying ethos that glues IT processes and practices together? Adopting Open Cloud lets them re-invent or re-imagine what they are able to achieve. Agility has always been a cornerstone of Cloud; elasticity goes beyond availability and also describes methodologies of provisioning that fit both governance and appetite. Using Open Cloud to develop hybrid methodologies and to build new structures around your internal capabilities may yet be seen as the smartest move for shrewd CIOs facing ever-increasing demands from both internal and external customers.

OpenShift has given voice to Red Hat’s ability to work with forward-thinking developers at every level, across every business vertical, and to demonstrate openly, in three clicks, how to break through existing barriers to Cloud application deployment and management. Tie in JBoss (the underlying glue behind OpenShift) and you have an agile, structured, world-class development environment ready for application deployment. Organisations migrating from WebLogic and WebSphere are also realising that JBoss is a more advantageous platform than just a JRE stablemate.

Barclays aren’t the first, and they may be one of the most vocal and supportive, but the world is waking up to the fact that if you are sensible, if you understand business process and the bottom-line effects on your business, you avoid vendor lock-in and you think Open.

We’ve worked very hard over the last five years to build a solution set of technologies, as part of engagement at the customer level, to help existing enterprise customers often drowning in older, less flexible legacy platforms to start thinking openly. The Red Hat Pathways engagement model is proven to work and is a great starting point when you begin to consider how you re-imagine and re-focus your business process methodology around harnessing the best of Open Source. Never is this more critical than in the decision making around Cloud. The video below gives you a brief snapshot of what it is and how it can be engaged with.

Below you’ll also find two more online resources to help you think about why being Open can dramatically increase your agility and flexibility at every level of your business using Red Hat Open Hybrid Cloud.

Red Hat – Get more out of your Open Cloud

Red Hat – Enterprise PaaS

Podcast: Red Hat Acquire ManageIQ


I’ve been promising to record and release this quick podcast with my take on our acquisition of ManageIQ. I don’t speak for Red Hat on this, nor do my views or opinions matter in any way. I look at things from a technology adoption and technology abstraction layer perspective, and at how it impacts and enhances our abilities in Cloud.

If you’re not aware of ManageIQ I’m presuming you’ve had your head buried in the sand for the last three years.

Needless to say, when I found out about ManageIQ becoming part of Red Hat (subject to all the usual shareholder stuff) I was beaming from ear to ear. ManageIQ are an amazing group of people who really understand the granularity of cloud, the flexibility you need to demonstrate when dealing with elastic architecture, and the need to get under the hood and deliver. Quite simply, they demonstrate maturity, depth and excellence of the highest order when it comes to engineering solutions across heterogeneous cloud platforms and technologies.


Combine that with a virtualisation layer (RHEV/KVM), a storage platform (Gluster – Red Hat Storage), a PaaS platform (OpenShift) and CloudForms, and you are effectively delivering an entire orchestration piece that no other vendor, VMware included, can currently compete with. Seventy-two hours on, to the minute, I am STILL smiling.

Here’s my take on it – listen now on iTunes or Stitcher or click the link to the podcast to listen to it in your browser.

Come back in 2013 for more content – and remember, I love hearing your feedback and your news. Better stories come out of collaboration. 43,000+ downloads of my podcasts since August (that’s nearly 580 man-days of listening if you stacked each episode end to end) is flattering, but I can do better – with your help.

Happy Christmas from my family to yours. Have a peaceful festive period and thanks for listening to my work and reading my articles during 2012.

Download the podcast here in MP3 format only

Podcast: Bill Bauman – the RHEV God

Folks, we have a real treat for you today: a podcast from Bill Bauman. The guy is about as good as it gets when you want to talk about virtualisation. A righteous dude and a very good friend. Apologies for the photo above; Bill is on my right, whilst I look like someone pumped me up. I’m offering the excuses of jetlag, good Scotch and a bad camera angle.

Recorded in Barcelona on IBM’s stand, talking about RHEV and IBM Flex Systems. If you’ve an interest in virtualisation topology, I/O architecture planning and the future of proper virtual platform computing, you need to listen to this.

You’ll also need the slidedeck to accompany the podcast which you can grab here in PDF format.

Download the podcast here in MP3 and OGG formats

Podcast: Cutting through the confusion

I was stood at GigaOM in Holland last week and got involved in a heated discussion over ITIL as a standard in cloud. I tried to point out that ITIL is a framework, not a standard. I took mental notes while I was there, and the result is this short podcast where I can rant and let off steam. The power of the microphone is sometimes awesome, and it lets me educate as well as show my enthusiasm for what we’re doing here in Cloud.

I also take time out to talk about the London Developer Day we’re hosting at London South Bank University on the 1st of November (that’s next week!), so if you haven’t registered you need to do so right now.

Download the podcast here in MP3 and OGG formats

Value Add – Tough Love in Cloud

There is no doubting that a lot of enterprise organisations and institutions, who have for many years been wholly reliant on silo-led computing platform architecture, feel a little overwhelmed (or underwhelmed, in some parts) by Cloud. Cloud: the buzzword du jour, the spin, the undefined re-invention of IT. I see it a lot, and I hear it more. There seems to be a “Tough Love” battle of hearts and minds, where the positioning of new IT enablement and design becomes more than technology refresh or even attrition; Cloud becomes just part of the paradigm shift to doing more with less, or getting more for your dollar as you plan and procure your IT spend. It could even, if you outsource some of your current IT, mean you spend less with your incumbent provider, as you are able to identify requirements and platforms internally with the people who understand your business best – your current staff, rather than hired consultants at arm’s length.

Make no mistake: Cloud will also let service providers and industry service providers increase profitability by creating elastic, easily consumable cloud services that become stock catalogue items, selling themselves without sales people needing to push the hard sell. If a provider has the right services, they become an asset and a building block for growth – providing people want them. Where demand is met with intelligent solutions in Cloud, there is a marriage made in heaven.

Last year, before I transitioned into this role as one of two Red Hat “Cloud Evangelists”, I worked alongside the EMEA sales team in Cloud as their technical solutions architect, helping providers stand up Red Hat platforms for customers to burst out to or to bring enterprise workloads to. It was enlightening, because here was a software and services company working with the provider channel to build context and extensibility into providers, rather than just providing an OS or middleware capability. Real-world business engineering (or re-engineering, if you’d prefer to view it in that context) for provider and enterprise customer alike: a two-way, open, non-vendor-locked-in example of how we envisage those long-term hybrid and public workloads transitioning to Cloud. And then, on the back of it, building the provisioning and engagement model to help customers slot in as and when they felt the demand and push to do so. Getting over the “tough love” argument by making Cloud business as usual and easy to consume, for both the consumer of services and the provider.

Tough Love – The Provider Angle

Service provision, at any tier, can be defined as taking a blended approach of solutions and services that customers want or need to contract. With Cloud it’s been hard for the service tier. A massive over-emphasis on the hypervisor, on provisioning and management, and on the self-service element of the equation has left many with an expensive overhead in the form of ongoing licensing and ownership costs of proprietary technologies and layered or tiered infrastructures. Ken Hess and Jason Perlow of ZDNet explored this when discussing Hyper-V vs VMware, and a lot of other analysts are now realising that at some point that most basic cost of Cloud in the public or hybrid tier has to be passed on in the form of the contractual cost to the customer.
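To make that pass-through concrete, here’s a back-of-envelope sketch in Python. Every figure in it is hypothetical, invented purely for illustration, not real vendor pricing; the point is only how per-host licensing overhead compounds into the price a provider has to charge per customer VM.

```python
# Hypothetical model: a provider's per-host costs surface in the annual
# price charged per customer VM. All numbers are illustrative only.

def price_per_vm(hosts, vms_per_host, licence_per_host,
                 other_costs_per_host, margin=0.25):
    """Annual price per VM needed to cover total cost plus margin."""
    total_cost = hosts * (licence_per_host + other_costs_per_host)
    total_vms = hosts * vms_per_host
    return total_cost * (1 + margin) / total_vms

# Proprietary stack: heavy per-host licence; open stack: subscription only.
proprietary = price_per_vm(hosts=100, vms_per_host=20,
                           licence_per_host=4000, other_costs_per_host=6000)
open_stack = price_per_vm(hosts=100, vms_per_host=20,
                          licence_per_host=800, other_costs_per_host=6000)

print(f"per-VM/year: proprietary = {proprietary:.0f}, open = {open_stack:.0f}")
```

The licence line item never disappears; it just resurfaces, marked up, in the customer’s contract.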

They are also missing a point. It’s not just about the provision of Cloud; it’s about what you need to do with it when you get there as a customer: your development and deployment of architectures and infrastructures, your hidden ownership charges and your management layer on top. It would be great, and somewhat overdue, for the likes of GigaOM, Gartner and Forrester, whose advice and guidance is read and given credence by many, to start thinking outside the box and do more than just tickle Cloud ownership. There isn’t one credible ongoing analyst piece around the service provider tier, and frankly, when I talk to people (people being customers and decision makers), the positioning of left and right mystical fluffy quadrants needs to align itself to physically adaptable IT planning and positioning, not just thought leadership and marketing budgets.

Service providers who build open infrastructures on KVM, and in the past on Xen (although we now see KVM as the de facto standard), and who understand the need to bring open components such as CloudForms and OpenShift into the mix, are at a major advantage. They are better armed to offer customers a customisable on-ramp to Cloud adoption at a pace that meets the appetite of sceptical CIOs, but that also reacts accordingly when the consumption of, and demand for, services from that fledgling customer increases at speed. That flexibility and capability with the likes of Red Hat at an engineering level, matched and married to a software stack capability across storage, the hypervisor (RHEV/KVM), the secure capabilities afforded by SELinux and sVirt, middleware OpenJRE power in the form of JBoss, and Gluster giving them the unstructured, kick-ass big data story, all wrapped up with their own ability to ride on the back of CloudForms (and Deltacloud by association), means an immediate IaaS capability. Then, as customers who are already smart enough to be using OpenShift Origin to build out a sandpit PaaS test capability, or who have used OpenShift on AWS, start to demand hosted PaaS, that provider is able to deliver it with aplomb.

Bolt-on capability = revenue. The providers who think outside the box attract and retain customers longer, and become an essential part of the food chain of Cloud.

Tough Love – The Enterprise / Institutional Customer

It’s hard enough sometimes to run an enterprise environment at the best of times. The driving factors that push and promote ever-increasing attention to the needs of the customers and consumers of your platforms and architecture are only beaten by the fact that, from an accountancy perspective, there is little to no elasticity in budgets that need to match, or at least demonstrate an affinity for, ambitions around elastic cloud. Now add on a new-found skill as CIO: contract negotiation at the most granular level. Signing an SLA with your Cloud provider is only made easier when you know 1) what you are signing up to, and 2) what problem you are trying to solve by engaging with the provider.

Bryan Che of Red Hat writes brilliantly about his “2nd Tenet of Evaluating Products – You Have to Know What Problem You Want To Solve”. If it’s the only thing you click on in this article then I recommend you do so, as it’s both thought provoking and influential in its steering as a guidance piece. Bryan correctly argues that comparing two given cloud products or services depends on understanding the problem that the consumption or procurement of that service will solve. You can’t evaluate until that argument is understood and examined.

When we talk about Open Cloud, the understanding is that succeeding and getting the best out of compute capability affords an enterprise something very clear: independent, capable, assured performance, married with a commitment to a flexible future as you grow.

An open provider who demonstrates that the tough love in Cloud is part of their problem, not yours, is the one who can give you the flexibility and the core belief to get to the start line (never mind the finish line). The good news is that the smartest way for a provider to achieve that goal is to base their platform capabilities on Red Hat Cloud technologies.

It’s not just about the hypervisor and management – if anyone tells you it is, then it’s time to talk to someone who understands the pressures and needs of your expected IT delivery programme. Make sure they’re open, and make sure they use a certified, supported open infrastructure married to an upstream that just happens to have millions of pairs of eyes examining its every release and move.

It pays to be open – but genuinely, it’s the toughest love, and the most responsible you can be, when delivering future computing.

Podcast: Rhys talks Cloud

Today I am releasing part two of a podcast I recorded with Rhys Oxenham last week. In this second instalment of a podcast that’s proved very popular, Rhys talks about CloudForms and some of the real-world engineering we’ve been working on with partners.

Rhys talks about how CloudForms solves some of the end to end problems of Cloud provisioning and platform management. For you guys looking at the newly released Red Hat OpenStack Preview this could be really important for you to listen to.

I am recording two new podcasts today, with Jon Masters and Duncan Doyle. Jon, whom I’ve known for nearly twelve years, is a leading light in the ARM porting world and a long-time Red Hat stalwart; he recently gave one of the best attended and best appreciated Summit talks in Boston. Duncan and I share a common love of everything JBoss, so both should be a lot of fun and I’ll bring them to you as soon as I can.

Download part two here in MP3 format or OGG format

Red Hat release OpenStack Preview

OpenStack Technology Preview Available from Red Hat
by: Cloud Computing Team – Written by Gordon Haff (reproduced here verbatim)

The OpenStack Infrastructure-as-a-Service (IaaS) cloud computing project has been much in the news. April’s formation of the forthcoming OpenStack Foundation put in place a governance structure to help encourage open development and community building. Red Hat, along with AT&T, Canonical, HP, IBM, Nebula, Rackspace, and SUSE, is a Platinum member of that foundation. The foundation announcement was quickly followed by a well-attended OpenStack Conference that clearly demonstrated the size and enthusiasm of the OpenStack developer community.

That’s not to say that OpenStack’s work is done. Anything but! The structure and community are now largely in place to form the foundation for development of robust OpenStack products that meet the requirements of a wide range of businesses. However, that development and work doesn’t just happen by itself.

Red Hat was actively involved in the project even before the foundation announcement; we are the #3 contributor to the current “Essex” release. This surprised some commentators given that it exceeded the contributions of vendors who had been louder about their alignment with the project. However, Red Hat’s relatively quiet involvement was fully in keeping with our focus on actual code contributions through upstream communities. With the formation of the OpenStack Foundation and its open governance policies, these contributions have only accelerated.

In parallel, we’ve also begun the task of making OpenStack suitable for enterprise deployments. This means bringing the same systematic engineering and release processes to OpenStack that Red Hat has for products such as Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, Red Hat CloudForms, and JBoss Enterprise Middleware.

For example, these enterprise products have well defined lifecycles over which subscriptions can deliver specific types and levels of support. Upgrade paths between product versions are established and tested. Products have hardware certifications from leading server and storage vendors, certification and support of multiple operating systems including Windows, and the experience and personnel to provide round-the-clock SLAs.

In short, stability, robustness, and certifications are key components of enterprise releases. The challenge—one that Red Hat has years of experience meeting—is to achieve the stability and robustness that enterprises need without sacrificing the speed of upstream innovation.

We’re now taking an important step in the development of an enterprise-ready version of OpenStack with the release of a Technology Preview. Red Hat frequently uses Technology Previews to introduce customers to new technologies that it intends to introduce as enterprise subscription products in the future.

Technology Preview features provide early access to upcoming product innovations, enabling customers to test functionality, and provide feedback during the development process. We’re doing all this because OpenStack will be an important component of Red Hat’s open, hybrid cloud architecture.

Here’s where it fits:

OpenStack is an IaaS solution that manages a hypervisor and provides cloud services to users through self-service. Perhaps the easier way to think of OpenStack, however, is that it lets an IT organization stand up a cloud that looks and acts like a cloud at a service provider. That OpenStack is focused on this public cloud-like use case shouldn’t be surprising; service provider Rackspace has been an important member of OpenStack and uses code from the project for its own public cloud offering.

This IaaS approach differs from the virtualization management offered by Red Hat Enterprise Virtualization, which is more focused on what you can think of as an enterprise use case. In other words, Red Hat Enterprise Virtualization supports typical enterprise hardware such as storage area networks and handles common enterprise virtualization feature requirements such as live migration.

Both OpenStack and Red Hat Enterprise Virtualization may manage hypervisors and offer self-service – among other features – but they’re doing so in service of different models of IT architecture and service provisioning.

Red Hat CloudForms provides open, hybrid cloud management on top of infrastructure providers.

These “cloud providers” may be an on-premise IaaS like OpenStack or a public IaaS cloud like Amazon Web Services or Rackspace. They may be a virtualization platform (not just a hypervisor) like Red Hat Enterprise Virtualization or VMware vSphere. CloudForms even plans to support physical servers as cloud providers in the future.

CloudForms allows you to build a hybrid cloud that spans those disparate resources. Equally important, though, CloudForms provides for the construction and ongoing management of applications across this hybrid infrastructure. It allows IT administrators to create Application Blueprints (for both single- and multi-tier/VM applications) that users can access from a self-service catalog and deploy across that hybrid cloud under policy.
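To illustrate the catalog-plus-policy idea in the paragraph above, here’s a toy model in Python. This is emphatically not the real CloudForms API; every class, field and provider name is invented purely to show the shape of the concept: a blueprint describes the application tiers, policy constrains where it may run, and deployment fans the tiers out across whichever registered provider was chosen.

```python
# Toy model (NOT the CloudForms API) of a self-service catalog where an
# Application Blueprint deploys to a registered provider, gated by policy.
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    name: str
    tiers: list                                   # e.g. ["web", "app", "db"]
    allowed_providers: set = field(default_factory=set)  # policy constraint

@dataclass
class Provider:
    name: str
    kind: str                                     # "iaas", "virt", "public"

def deploy(blueprint, provider):
    """Place one VM per tier on the provider, if policy allows it."""
    if provider.name not in blueprint.allowed_providers:
        raise PermissionError(f"policy forbids {blueprint.name} on {provider.name}")
    return [f"{blueprint.name}-{tier}@{provider.name}" for tier in blueprint.tiers]

catalog = [Blueprint("webshop", ["web", "app", "db"], {"openstack", "rhev"})]
providers = {p.name: p for p in [Provider("openstack", "iaas"),
                                 Provider("rhev", "virt"),
                                 Provider("ec2", "public")]}

vms = deploy(catalog[0], providers["rhev"])
print(vms)
```

The same blueprint would deploy unchanged to the "openstack" provider, while an attempt to land it on "ec2" is refused by policy; that separation of application description from placement decision is the point.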

Finally, Platform-as-a-Service (PaaS) capabilities on the infrastructure of your choice are delivered by Red Hat OpenShift PaaS. Unlike a PaaS that is limited to a specific provider, OpenShift PaaS can run on top of any appropriately provisioned infrastructure whether in a hosted or on-premise environment.

This allows organizations to not only choose to develop using the languages and frameworks of their choice but to also select the IT operational model that is most appropriate to their needs. The provisioning and ongoing management of the underlying infrastructure on which OpenShift PaaS runs is where virtualization, IaaS, and cloud management solutions come in.

OpenStack is therefore part of a portfolio of Red Hat cloud offerings which, in concert with Red Hat Enterprise Linux, JBoss Enterprise Middleware, Red Hat Storage, and other offerings, provides broad choice to customers moving to the cloud. Cloud is a major shift in the way that computing is operated and delivered. It’s not a shift that can be implemented with a single point product.

Find out more:

We’ve been working in the OpenStack community for a while now and can see its potential. Our focus has been on making OpenStack a great product for enterprises to use, just as we did with Linux. In the future, we plan to release a commercial version of OpenStack for enterprise customers. But today, we invite you to download a preview of that product and try it out for free. Follow the link to the download site here and fill out the form (you will need a redhat.com account; if you don’t have one, don’t worry – we offer the option to create one).

Requirements:

Red Hat OpenStack Preview only works with Red Hat Enterprise Linux 6.3 or higher. You’ll need a Red Hat Enterprise Linux subscription for each server you install with Red Hat OpenStack.

The OpenStack Word Mark and OpenStack Logo are either registered trademarks / service marks or trademarks / service marks of OpenStack, LLC, in the United States and other countries and are used with OpenStack LLC’s permission. CloudForms and OpenShift are trademarks of Red Hat.

It’s all about the Enterprise PaaS roadmap

Today Red Hat launched its Enterprise roadmap for Platform as a Service to the press and analysts. It’s been a labour of love internally for a long time, with our teams and management working intensively to put together a structured offering that really could hit the market with a value proposition and a lifecycle for enterprise customers.

OpenShift is a game changer in Platform as a Service (PaaS). Historically, the definition of PaaS has been shrouded in architectural frameworks, scalability concerns and application/source syncing challenges that present a raft of APIs and behaviour changes to developers – changes you could perceive as less than friendly, or that don’t meet your or my definition of open. Certainly it’s not the greatest experience when a new stack greets you with a list of service definitions, frameworks and capabilities.

OpenShift is different. For starters, there’s a message here for the analysts and technology press: it’s written by developers, for developers. Please don’t lose focus on the importance of this. There’s a reason why the popularity of OpenShift since we launched it in May 2011 has been somewhat stellar. We’re providing an end user experience of being able to focus on what matters – your code. Removing the handcuffs and the shackles and allowing people to get to work faster, not worrying about the VMs, or the change control, or how to get servers online and built. A cursory search of the Twittersphere will drown the average researcher in plaudits from the development community, who have realised that a three-stage push to Cloud really is redefining how you can take leaps and bounds into the ecosystem.

Let’s not over-egg the pudding here. This blog isn’t a marketing stall that sets out to look purely down the gun of the Cloud technologist and aim Red Hat flavoured solutions scattergun style. What we’re doing is fundamentally different: a paradigm shift that offers you an application platform in the Cloud whilst managing the stack for you, automating the painful stuff that hinders technology growth and slows down the rate of application development and Cloud provisioning. As I said before, developed by developers, for developers, to deliver the capabilities they need whilst also tacitly improving the developer experience in the process. As we reach the point in the technology curve where Cloud matures, it becomes even more obvious that the solutions we describe right now, that we’re making available today, are THE ecosystem of choice, not the simple automation of a provider’s framework or a clutch of badly documented APIs.


Given that we come from an Enterprise background, with RHEL the supported prizefighter out there in Linux environments globally, it’s screamingly obvious that once you lift the hood of OpenShift you see all the goodness, strengths and maturity of RHEL underneath. There’s support for standard operating and development environments, as well as all the ultra-tenacious stuff that the analysts in Cloud now realise is the kingpin: faster application scaling and higher efficiency by virtue of OpenShift’s ability to support two-tier multi-tenancy from the get-go. For the bean counters, that means you’re reducing your costs out of the box. Proper portability of applications and development environments; industry-leading security by virtue of control groups, as well as sVirt and SELinux out of the box (security as an aspect of design, not a retrofit); and here’s the magic sauce: multi-tenancy capability at the operating system tier, not at the virtualisation layer, unlike other offerings out there.

As you move to embrace a true hybrid Cloud model, you have to acknowledge as technologists that your support frameworks and application model will have to stretch to conform to different models, with different hypervisor types and SLAs enforced on you as end-user adopters, while you are still expected to offer the same level of service and conformity to your users and customers. A core realisation in OpenShift’s design specification was that if you develop an application for PaaS, you are going to face flux in the ever-changing underlying hypervisor or provider technologies. Minimising the adverse effects this would have on PaaS environments in hybrid cloud therefore became a design factor: to maintain service regardless of operating environment, to maintain security and segregation in multi-tenant environments, and to move that away from the underpinning virtualisation layer. Down to basics: if you think of a battlefield planner who has to come up with a fabric that will cope and perform to the same level no matter how hostile the weather or the neighbourhood in a conflict zone, then OpenShift is the body armour of choice for the Cloud soldier going into battle.
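That design principle, insulating the platform from hypervisor and provider flux, can be sketched in a few lines. To be clear, the class and method names below are invented for illustration; they are not OpenShift’s real internals. The idea is simply that the platform codes against one narrow node contract, so churn in the infrastructure underneath never ripples up into the application layer.

```python
# Sketch of the insulation principle only: the platform depends on a narrow
# node contract, not on any particular hypervisor. Names are illustrative.

class Node:
    """Minimal contract every infrastructure backend must satisfy."""
    def start_gear(self, app):
        raise NotImplementedError

class KvmNode(Node):
    def start_gear(self, app):
        return f"{app} running on KVM gear"

class Ec2Node(Node):
    def start_gear(self, app):
        return f"{app} running on EC2 gear"

def place(app, node):
    # The platform never asks *what* the node is, only that it honours
    # the contract - hypervisor churn stays below this line.
    return node.start_gear(app)

print(place("shop", KvmNode()))
print(place("shop", Ec2Node()))
```

Swap the backend and the call site above it never changes; that is the property that lets a PaaS survive provider flux.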

Bryan Che is the Product Marketing Manager and thought leader at Red Hat on all things Cloud, an MIT graduate and an amazing fount of knowledge when it comes to virtualisation, Cloud and reinventing how we need to embrace change. He has contributed an article today which explores further how the development ecosystem and our JBoss core strengths can scale to handle multiple applications and multi-tenancy in Cloud. Follow this link to read the article, and while you’re there check out his other Tenets of Cloud articles, which are thought provoking and a great armoury to keep personally as you tackle objections and shape your own path in Cloud.

How to avoid Aasholes

Those of you who have been reading my stuff for almost a decade, or using the security tooling I’ve been writing and bringing to the market for longer than that, will know that I have a passion for security as an accepted, business-as-usual practice. That extends from perimeter security through to application-level security, and the discipline of being intelligent about your management and change controls around every aspect of your deployment, be it on-premise, in a third-party hosted datacentre, or in hybrid/public Cloud.

One of the reasons for finally joining Red Hat is that here is a company, grown in every aspect of its operation, that is relied upon by the largest brands and institutions to handle our financial transactions, our wellbeing and the processing of our needs as consumers. I can be picky about who I work for; I do this for the love, not remotely for the money, and whoever I work with has to be able to add to what I bring to the table around the whole security value-add. Nowhere is that more intrinsic to what we do as an industry than in Cloud. There is literally nowhere to hide. Security through obscurity is not a practical approach, and a zero-day exploit, a badly coded application or a drop-in privilege escalation can be the difference between a Cloud environment succeeding or failing, and a platform collapsing like a pack of cards.

A conversation I often have with friends in the security space is one of understanding risks. Mark Cox, who runs the Security Response team at Red Hat, is someone I’ve known for over a decade and talk to very regularly. He runs a blog outside Red Hat which is crammed full of illustrations of the maturity of security controls in the Red Hat release and engineering space (see this report from December on vulnerabilities, advisories and our responses as a vendor for RHEL). Mark’s team work very closely with the engineering teams in Westford and globally to ensure that our appetite for risk (given we’re the platform people rely on to go to work) is entirely focused around visible responses in lightning-fast windows.

So why is the title of this article talking about Aasholes, and what is an Aashole?

For starters, I’d have loved to have coined the description, to be the one adding this to the Cloud vernacular, but unfortunately I can’t take the praise for it. Fred Pinkett, the popular blogger, came up with it, and it’s the perfect word to describe a potential or actual security hole in a PaaS, SaaS or IaaS environment. I point you with genuine admiration to his article from June 2011 as a primer on the very basic needs and structures as you build your own Aashole Protection System (let’s just refer to it going forward as an APS).

An APS can take many forms, but there is one thing I always try to get across to people – those of you who have sat and listened to me at conferences or across a table will have heard me bang on about controls and mindset, from deployment and beyond. I have long been a major fanboi of the Cloud Security Alliance, and I work closely with their founder Jim Reavis (watch for an upcoming announcement soon from the CSA about working with Red Hat). Since 2009 I’ve been responsible for signing off and accrediting some of the largest Linux deployments in the most dangerous and critical parts of national and international infrastructure and in the defence sector (or defense, for the majority of you reading this article, appreciating you already think I spelt datacentre wrong earlier). I would not have been able to do so without being able to take often badly written and badly managed higher-level design documents and cross-reference them against the freely developed and distributed Cloud Security Alliance control matrices, or CCMs. I cannot stress heavily enough, or place enough emphasis on, why these are critical to get on your personal radar if you don’t already know what I am talking about.

Here are some pointers on why you should already be aware of them – or, better, using them!

1) These controls are free! If you haven’t got a copy – get a copy.
2) If you read them, and you build and deploy with them in mind, you are going to have a very boring life – but you’ll be able to rely on your own deployed controls to avoid an Aashole incident.
3) They are a living, breathing document that changes over time – make sure you check for updates, as the CSA community has more strength in depth than any blue-chip security consultancy, pen-testing outfit or managed services organisation.
4) Working with them when designing your Cloud – working out which apps you can and can’t move to a Cloudy environment, and how you fit into legislative governance requirements and audit needs (PCI-DSS, ISO 27001/2, SAS 70, HIPAA etc.) – will save your organisation tens of thousands of dollars.
5) Using the CSA CCM matrices alongside proven segregation controls such as sVirt and SELinux templates in RHEL / RHEV deployments will give you the strongest industry controls that you can find. Belt and braces.
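To make that last point less abstract: sVirt works by stamping each running guest’s SELinux context with a unique MCS category pair, so two guests sharing a pair would be a red flag. As a rough sketch, assuming made-up sample labels of the kind `ps -eZ | grep qemu` might show on a RHEL host:

```python
# Made-up sample sVirt process labels for two running guests; on a real host
# you would collect these from `ps -eZ`. sVirt appends a unique MCS category
# pair (e.g. c392,c662) to each guest's SELinux context to isolate it.
sample_labels = [
    "system_u:system_r:svirt_t:s0:c392,c662",
    "system_u:system_r:svirt_t:s0:c104,c321",
]

def mcs_categories(label):
    """Pull the MCS category pair off the end of an sVirt process label."""
    return label.rsplit(":", 1)[-1]

pairs = [mcs_categories(label) for label in sample_labels]
# Every guest should carry a distinct pair -- that is the isolation guarantee
assert len(set(pairs)) == len(pairs)
print(pairs)  # -> ['c392,c662', 'c104,c321']
```

A check like this, fed from real process listings, is exactly the kind of deployed control you can point an auditor at when walking through the CCM.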

So: you have the Cloud Security Alliance freely propagating and educating more than any other body in the world around standards adoption and around building security in as a cornerstone of your application and provisioning environment; you have a healthy fear of a pink slip / P45 / never working again because you’re an Aashat (I am claiming this one, Fred – sorry); and more than anything you take pride in what you do, whether as an individual in your team or as a solo warrior in your organisation’s Cloud efforts.

Now, if you didn’t read Tim Kramer’s article I posted last week on Security in the Cloud, please go read it now. We’re all about playing safe and being sensible. Nobody wants to be the Aashat who didn’t go the extra mile.

Last but not least, we hope to have an interview in Podcast form with Jim Reavis from the CSA that we’ve been trying to get in the can for three weeks, but we keep missing each other’s diaries and travel schedules. If you’re in Germany and you want to go and hear him speak, he’s at the CSA conference in Frankfurt next week – details here.

You can also listen to a podcast I recently recorded with Gordon Haff and Ellen Newlands when I was in Boston, around the whole Cloud Security piece, in MP3 and OGG formats by following those links.

The Red Hat Security Knowledgebase

Mark Cox has asked me to point out that we have a Security Knowledgebase, now publicly available for the first time from access.redhat.com, containing a depth of information that, aligned with the CSA controls, gives you as a practitioner or administrator security in depth and the ability to work with us to move to Cloud even more securely. Alongside the cookbooks that are available on request (please feel free to ask me for more info), we hope that you find these massively useful.

Just in case anyone reading this has a sight impairment and uses a text-to-speech / Festival-type converter, I hope you didn’t have a heart attack listening to the transcription of this article. Sometimes, to get a very serious critical point across, you have to bow to the influence of others – and Fred Pinkett wrote the book on this.

How Fluffy is Portability in the Cloud ?

I fly around a lot for a living – it’s as much a pro as it is a con – but one of the cool things about having that “downtime” in airport lounges and in the air is time away from noise, and time to catch up on reading articles and white papers from across the industry: sources both open and proprietary, from all points east and west of the commercial doctrines that I’ve spent my working life trying to make more reflective of the realities of being open and transparent. I often find myself (and I’ve been known to be frowned at by other passengers) avidly shaking my head as I scrawl short, punchy, rude words across printouts of articles I’ve thrown in my flight bag, mouthing them quietly as I do it like an idiot savant who just happens to have every gadget known to man accompanying him on a long-haul flight.

Having a reliance on an iPad and a 10″ Android 4.0 ICS tablet hasn’t helped, as I don’t really want to use a third-party proprietary app to dump a page to a PDF so I can “read it later”, and bookmarking something to read at 38,000 ft doesn’t tend to work well without WiFi (most routes I travel don’t have inflight WiFi). So I’m back in 1993 technology territory with my highlighter pen / marker and a sheaf of paper that I try not to confuse with boarding passes (yes, I know you can get an app for that – I have one, but I’m anal about organisation) and the other travel ephemera that I carry.

It was during a period of melancholy catch-up, on a flight to Finland to speak at a Cloud Tour Red Hat were organising, that I found myself reading three or four journalists’ takes on what Cloud application portability actually is. It’s a scary world where, if you took their output, broke it down to core essential bullets, and took out the chaff and the hyperbole, all four disagreed with each other as to the true definition of application portability in Cloud. These are thought leaders, so, having my own ideas, I went away and talked to some of the crew in the Cloud BU at Red Hat.

Now, the cool thing about working for any Open Source company is not only that they are often meritocracy-based organisations, but that there is no such thing as a wrong answer – only a different methodology or construct to get to a reasoned answer. We just announced our most successful trading year and a $1.13 billion turnover, which, considering the care and conviction the good and the great at Red Hat put into what we do, isn’t surprising. So I was looking for a definition from the movers and shakers at Westford, our offices just outside Boston, where the engineering pulse of Red Hat has a strong beat. Given that I’d read (and scrawled over in luminous yellow marker) four different takes on portability, it was all the more surprising that those I approached came back with 95% the same answer, even though each was asked separately, to a man, and without knowing that I was consulting others across a massive business unit.

Taking this counsel on board, I then looked into why so many educated thought leaders, whose editorial is enjoyed and respected, each had their own opinion and direction. None of it wrong by definition, but certainly different paths to different conclusions.

It bugged me. It bugged me because some of the editorial and reasoning I read slightly misunderstood portability, taking it to mean re-deploying applications to specific clouds with engineered boundaries, challenges and varying standards. That’s not portability – that’s porting in the traditional sense. If you have to make your application fit a variety of vendor SLAs or engineering tenets, then you’re just adding workload and change-control schema onto your existing workload and process-control management, whilst only slightly avoiding the lock-in issue of being stuck with one Cloud provider partner.

The issues around the lack of Cloud standards, and their impact on interoperable stacks across Cloud providers, again become moot if you consider that true portability doesn’t start the day you finish your application and only then think about where you are going to deploy it – and about the major modifications or changes you’ll have to make to get it to Cloud safely and with confidence, and to then be able to deploy it on premise, on a local stack, or even on another provider. Scalability, the mandatory and additional security controls needed to deploy securely, and the ability to orchestrate, define scripts and provision all become parts of the portability piece that get talked about in hushed tones, added to the project plan as “costs”, with a guesstimate made as to how long they will add to the deployment / dev piece. And then start thinking about maintaining slightly different SVN/Git/CVS repositories for slightly varying versions of the same app or package, plus the necessary security and patching controls, if you’ve had to dovetail your apps into a container-type Cloud environment.

A journalistic appreciation of the issue that appeared in December then confounded me by making the sweeping statement that the only way application portability would happen is by embracing a PaaS offering from a specific vendor or using a proprietary platform (in this case Stackato). That’s answering 25% of her own question with 10% of the answer – not necessarily the wrong answer, but not one I’m sure a lot of enterprises wanted to read, given that doing this right first time out isn’t hard as long as you use open standards and technologies to do it.

The annoying thing for me was that a few weeks before the article that made me frown and scrawl rude words on it with a marker pen at 38,000 feet (almost spilling my Duerrs and ginger ale), a blog article written by a good friend of mine, James Labocki, had appeared (@jameslabocki if you want to follow his Twitter feed, which I do advise, as he’s one of our brightest and best Cloud gods). The article gave a short but kickass editorial on portability for the layman, and on CloudForms, the Red Hat IaaS piece that we’ve been polishing and getting ready to package for release this year.

At this point I’m going to say if you want to avoid me ranting further on a Friday afternoon and you want to really be able to get the portability piece then I’m going to pass you into the safe care of James whose article you can read here.

Footnote: If you’re a journalist wanting to write about portability, then I also suggest that you read this post and follow James’ blog – he’s right on the money, and you won’t go far wrong using the right sources.

Addendum: I should have posted a link to the authoritative bible on the topic! Bryan Che’s Tentenet.net started out with a deep dive into the autonomy and power that can evolve from understanding what we mean by open – feel free to read and share; here’s the link.