13-10-15

VMworld Europe Day Two

Today is pretty much the day the whole conference springs to life. All the remaining delegates join the party with the TAM and Partner delegates. The Solutions Exchange opened for business and there’s just a much bigger bustle about the place than there was yesterday.

The opening general session was hosted by Carl Eschenbach, and credit to him for getting straight in there and talking about the Dell deal. I think most are scratching their heads wondering what this means in the broader scheme of things, but Carl reassured the delegates that it would still be ‘business as usual’, with VMware acting as an independent entity. That’s not strictly true: they’re still part of the EMC Federation, which is being acquired by Dell, so not exactly the same.

Even Michael Dell was wheeled out to give a video address to the conference to try and soothe any nerves, giving one of those award ceremony ‘sorry I can’t be there’ speeches. Can’t say it changed my perspective much!

The event itself continues to grow. This year there are 10,000 delegates from 96 countries and a couple of thousand partners.

Into the guts of the content: first up were Telefonica and Novamedia. The former are a pretty well known European telco, and the latter are a multinational lottery company. The gist of the chat was that VMware solutions (vCloud, NSX, etc.) have allowed both companies to bring new services and solutions to market far quicker than previously. In Novamedia’s case, they built 4 new data centres and had them up and running within a year. I was most impressed by the comment from Jan of Novamedia: ‘Be bold, be innovative, be aggressive’. A man after my own heart!

VMware’s reasonably new CTO Ray O’Farrell then came out with Kit Colbert to discuss the ideas behind cloud native applications and support for containers. I’ll be honest at this point and say that I don’t get the container hype, but that’s probably due in no small part to my lack of understanding of the fundamentals and the use cases. I will do more to learn more, but for now, it looks like a bunch of isolated processes on a Linux box to me. What an old cynic!
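
As it happens, the ‘isolated processes’ quip isn’t far from the truth. The sketch below (illustrative only, assuming Linux and root privileges) calls the kernel’s unshare() via ctypes, which is the primitive container runtimes build on: drop a process into a fresh namespace and it gets its own private view of things like the hostname.

```python
import ctypes
import subprocess

# Namespace flag from <sched.h>; container runtimes combine several of
# these (UTS, PID, mount, network...) plus cgroups and an image filesystem.
CLONE_NEWUTS = 0x04000000  # private hostname/domainname

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# Move this process into its own UTS namespace (requires root).
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (try running as root)")

# The hostname change below is invisible to the rest of the machine --
# the same trick, multiplied across namespace types, is a 'container'.
subprocess.run(["hostname", "demo-container"], check=True)
subprocess.run(["hostname"], check=True)  # prints: demo-container
```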

VMware have taken two approaches to supporting containers. The first is to extend vSphere with vSphere Integrated Containers (VIC), and the second is the Photon platform. The issue with containerised applications is that the vSphere administrator has no visibility into them; a container host just looks and acts like any other VM. With VIC, there are additional plug-ins for the vSphere Web Client that allow the administrator to see which containers are running, on which host, and how they are performing. All of this management layer is invisible and non-intrusive to the developer.

The concept of ‘jeVM’ was discussed, which is ‘just enough VM’ – a smaller footprint for container based environments. Where VIC is a Linux VM on vSphere, the Photon platform is essentially a microvisor on the physical host, serving up resource to containers running Photon OS, which is a custom VMware Linux build. The Photon platform comprises two pieces – a controller and the platform itself. The former will be open sourced in the next few weeks (aka free!), but the platform itself will be subscription only from VMware. I’d like to understand how that breaks down a bit better.

vRealize Automation 7 was also announced, which I had no visibility of, so that was a nice surprise. There was a quick demo with Yangbing Li showing off a drag and drop canvas for advanced service blueprints. I was hoping this release would do away with the need for the Windows IaaS VM(s), but I’m reliably informed this is not the case.

Finally, we were treated to a cross cloud vMotion, which was announced as an industry first. VMs were migrated from a local vSphere instance to a vCloud Air DC in the UK and vice versa. This is made possible by ‘stretching’ the Layer 2 network between the host site and the vCloud Air DC. This link also includes full encryption and bandwidth optimisation. The benefit here is that, again, it’s all managed from a familiar place (the vSphere Web Client), and the cross cloud vMotion is just the migration wizard with a couple of extra choices for source and destination.

I left the general session with the overriding feeling that VMware really are light years ahead in the virtualisation market, not just in on premises solutions but hybrid too. They’ve embraced all cloud providers, and the solutions are better for it. Light years ahead of Microsoft in my opinion, and VMware have really raised their game in the last couple of years.

My first breakout session of the day was Distributed Switch Best Practices. This was a pretty good session as I’ve really become an NSX fanboy in the last few months, and VDSes are the bedrock of moving packets between VMs. As such, I noted the following:-

  • DV port group still has a one to one mapping to a VLAN
  • There may be multiple VTEPs on a single host. A DV port group is created for all VTEPs
  • DV port group is now called a logical switch when backed by VXLAN
  • Avoid single point of failure
  • Use separate network devices (i.e switches) wherever possible
  • Up to 32 uplinks possible
  • Recommend 2 x 10 Gbps links, rather than lots of 1 Gbps
  • Don’t dedicate physical uplinks to management when connectivity is limited; enable NIOC instead
  • VXLAN compatible NIC recommended, so hardware offload can be used
  • Configure PortFast and BPDU Guard on upstream switch ports, as the DVS does not run STP
  • Always try to pin traffic to a single NIC to reduce risk of out of order traffic
  • VTEP traffic uses only a single uplink in an active/passive configuration
  • Use source based hashing – a good spread of VM traffic and simple configuration (see the sketch after this list)
  • It’s a myth that VM traffic visibility is lost with NSX
  • NetFlow, port mirroring and VXLAN ping can all test connections between VTEPs
  • Traceflow introduced with NSX 6.2
  • Packets are specially tagged for monitoring, reporting back to NSX controller
  • Traceflow is in the vSphere Web Client
  • Host level packet capture from the CLI
  • Capture at VDS port group, vmknic or uplink level, and export as pcap for Wireshark analysis
  • Use DFW
  • Use jumbo frames
  • Mark DSCP value on VXLAN encapsulation for Quality of Service
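
On the source based hashing point, here’s a toy Python illustration (uplink names are made up) of why this style of teaming both spreads VMs across NICs and keeps any one VM’s frames on a single NIC, avoiding out-of-order delivery:

```python
# Source-based uplink selection, in the spirit of the VDS source-MAC-hash
# teaming policy: the hash pins each source MAC to exactly one uplink
# (so no frame reordering across NICs), while many VMs spread roughly
# evenly across the available uplinks.
import hashlib

UPLINKS = ["vmnic0", "vmnic1"]  # hypothetical 2 x 10 Gbps uplinks

def pick_uplink(source_mac: str) -> str:
    digest = hashlib.sha256(source_mac.lower().encode()).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

for mac in ("00:50:56:aa:00:01", "00:50:56:aa:00:02", "00:50:56:aa:00:03"):
    print(mac, "->", pick_uplink(mac))  # stable mapping per source MAC
```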

For my final session of the day, I attended The Practical Path to NSX and Network Virtualisation. At first I was a bit dubious about this session, as the first 20 minutes or so just went over old ground of what NSX is and what all the pieces are, but I’m glad I stayed with it, as I got a few pearls of wisdom from it.

  • Customer used NSX for PCI compliance: move VMs across the data centre and keep security intact. No modification to the network design, and it had to work with existing security products
  • Defined security groups for VMs based on role or application
  • Used the NSX API for custom monitoring dashboards (see the sketch after this list)
  • Use tagging to classify workloads into the right security groups
  • Used distributed objects, vRealize for automation and integration into Palo Alto and Splunk
  • Classic brownfield design
  • Used NSX to secure Windows 2003 by isolating VMs, applying firewall rules and redirecting Windows 2003 traffic to Trend Micro IDS/IPS
  • Extend the DC across sites over Layer 3 using encapsulation, but presented to the admin as the same logical switch
  • Customer used NSX for metro cluster
  • Traceflow will show which firewall rule dropped the packet
  • vROps shows NSX health and also logical and physical paths for troubleshooting
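
On the custom dashboards point, a hedged sketch of what polling the NSX Manager REST API looks like from Python. The hostname and credentials are placeholders, and the endpoint shown (controller inventory on NSX-v) is illustrative – check the API guide for your NSX version before relying on any path:

```python
# Minimal NSX-v REST API poll, the raw material for a custom dashboard.
# Requires the third-party 'requests' library (pip install requests).
import requests

NSX_MANAGER = "https://nsx-manager.example.local"  # hypothetical manager

resp = requests.get(
    f"{NSX_MANAGER}/api/2.0/vdn/controller",  # controller inventory (illustrative)
    auth=("admin", "changeme"),               # basic auth, as NSX Manager expects
    verify=False,                             # lab only: self-signed certificate
)
resp.raise_for_status()
print(resp.text)  # XML describing controller nodes and their status
```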

It was really cool to see how NSX could be used to secure Windows 2003 workloads that could not be upgraded but still needed to be controlled on the network. I must be honest, I hadn’t considered this use case, and better still, it could be done with a few clicks in a few minutes with no downtime!

NSX rocks!

12-10-15

VMworld Europe Day One

Today saw the start of VMworld Europe in Barcelona, with the first day being primarily for partners and TAM customers (usually some of the bigger end users). However, that doesn’t mean the place is quiet – far from it! There are plenty of delegates already milling around, and I saw a lot of queues around the breakout sessions and also for the hands on labs.

As today was partner day, I had already booked my sessions on the day they were released. I know how quickly these sessions fill, and I didn’t want the hassle of queuing up outside and hoping that I would get in. The first session was around what’s new in Virtual SAN. There have been a lot of press inches given to the hyper converged storage market in the last year, and I’ve really tried to blank them out. Now the FUD seems to have calmed down, it’s good to be able to take a dispassionate look at all the different offerings out there, as they all have something to give.

My first session was with Simon Todd and was titled VMware Virtual SAN Architecture Deep Dive for Partners. 

It was interesting to note the strong number of customers deploying VSAN. There was a mention of 3,000 globally, which isn’t bad for a product that you could argue has only just reached a major stage of maturity. There was the usual gratuitous customer logo slide, one of which was of interest to me: United Utilities, who deal with water related things in the north west, are a major VSAN customer.

There were other technical notes, such as VSAN being an object based file system, not a distributed one. One customer has 14PB of storage over 64 nodes, and the limitation to further scaling out that cluster is a vSphere related one, rather than a VSAN related one.

One interesting topic of discussion was whether or not to use passthrough mode for the physical disks. What this boils down to is the amount of intelligence VSAN can gather from the disks if they are in passthrough mode. Basically, there can be a lot of ‘dialog’ between the disks and VSAN if there isn’t a controller in the way. I have set it up on IBM kit in our lab at work, and I had to set it to RAID0 as I couldn’t work out how to set it to passthrough. Looks like I’ll have to go back to that one! To be honest, I wasn’t getting the performance I expected, and that looks like it’s down to me.

VSAN under the covers seems a lot more complex than I thought, so I really need to have a good read of the docs before I go ahead and rebuild our labs.

There was also an interesting thread on troubleshooting. There are two fault types in VSAN – degraded and absent. A degraded state is when (for example) an SSD is wearing out; while it will still work for a period of time, performance will inevitably suffer and the part will ultimately go bang. An absent state is where a temporary event has occurred, with the expectation that it will be recovered from quickly. Examples include a host in maintenance mode or a network connection going down, and the distinction affects how the VSAN cluster behaves.

There is also now the ability to perform some proactive testing, to ensure that the environment is correctly configured and performance levels can be guaranteed. These steps include a ‘mock’ creation of virtual machines and a network multicast test. Other helpful troubleshooting items include the ability to blink the LED on a disk so you don’t swap out the wrong one!

The final note from this session was the availability of the VSAN assessment tool, which is a discovery tool run on a customer site, typically for a week, that gathers existing storage metrics and provides sizing recommendations and cost savings using VSAN. This can be requested via a partner, so in this case, Frontline!

The next session I went to was Power Play: What’s New With Virtual SAN and How To Be Successful Selling It. Bit of a mouthful, I’ll agree, and as I’m not much of a sales or pre-sales guy there wasn’t a massive amount of takeaway for me from this session, but Rory Choudhari took us through the current and projected revenues for the hyperconverged market, and they’re mind boggling.

This session delved into the value proposition of Virtual SAN, mainly in terms of costs (both capital and operational) and the fact that it’s simple to set up and get going with. He suggested it could live in harmony with the storage teams and their monolithic frames; I’m not so sure myself. Not from a tech standpoint, but from a political one. It’s going to be difficult in larger, more bureaucratic environments.

One interesting note was Oregon State University saving 60% using Virtual SAN as compared to refreshing their dedicated storage platform. There are now nearly 800 VSAN production customers in EMEA, and this number is growing weekly. Virtual SAN 6.1 also brings with it support for Microsoft and Oracle RAC clustering. There is support for OpenStack, Docker and Photon, and the product comes in two versions.

If you need an all flash VSAN and/or stretched clusters, you’ll need the Advanced version. For every other use case, Standard is just fine.

After all the VSAN content I decided to switch gears and attend an NSX session called Disaster Recovery with NSX, SRM and vRO with Gilles Chekroun. Primarily this session concentrated on the features in the new NSX 6.2 release, namely the universal objects now available (distributed router, switch, firewall) which span datacentres and vCenters. With cross vCenter vMotion, VMware have really gone all out removing vCenter as the security or functionality boundary to using many of their products, and it’s opened a whole new path of opportunity, in my opinion.

There are currently 700 NSX customers globally, with 65 paying $1m or more for their deployments. This is not just licensing costs, but also integration with third party products such as Palo Alto, for example. Release 6.2 has 20 new features and introduces the concept of primary and secondary sites. The primary site hosts an NSX Manager appliance and the controller cluster, and secondary sites host only an NSX Manager appliance (so no controller clusters). Each site is aware of things such as distributed firewall rules, so when a VM is moved from one site to another, the security settings are preserved.

Locale IDs have also been added to provide the ability to ‘name’ a site and use the ID to direct routing traffic down specific paths, either locally on that site or via another site. The key takeaway from the session was that DR is typically slow, complex and expensive, with DR tests often only being invoked annually. By providing network flexibility between sites and binding in SRM and vRO for automation, some of these issues go away.

In between times I sat the VCP-CMA exam for the second time. I sat the beta release of the exam and failed it, which was a bit of a surprise as I thought I’d done quite well. Anyway, this time I went through it, some of the questions from the beta were repeated, I answered most in the same way, and this time I passed easily with 410/500. This gives me the distinction of now holding a full house of current VCPs – cloud, desktop, network and datacenter virtualisation. Once VMware Education sort out the cluster f**k that is the Advanced track, I hope to do the same at that level.

Finally I went to a quick talk called 10 Reasons Why VMware Virtual SAN Is The Best Hyperconverged Solution. Rather than go chapter and verse on each point I’ll list them below for your viewing pleasure:-

  1. VSAN is built directly into the hypervisor, giving data locality and lower latency
  2. Choice – you can pick your vendor of choice (HP, Dell, etc.) and either pick a validated, pre-built solution or ‘roll your own’ from a list of compatible controllers and hard drives on the VMware HCL
  3. Scale up or scale out, don’t pay for storage you don’t need (typically large SAN installations purchase all forecasted storage up front) and grow as you go by adding disks, SAS expanders and hosts up to 64 hosts
  4. Seamless integration with the existing VMware stack – vROps adapters already exist for management, integration with View is fully supported etc
  5. Get excellent performance using industry standard parts. No need to source specialised hardware to build a solution
  6. Do more with less – achieve excellent performance and capacity without having to buy a lot of hardware, licencing, support etc
  7. If you know vSphere, you know VSAN. Same management console, no new tricks or skills to learn with the default settings
  8. 2000 customers using VSAN in their production environment, 65% of whom use it for business critical applications. VSAN is also now third generation
  9. Fast moving road map – version 5.5 to 6.1 in just 18 months, much faster rate of innovation than most monolithic storage providers
  10. Future proof – engineered to work with technologies such as Docker etc

All in all a pretty productive day – four sessions and a new VCP for the collection, so I can’t complain. Also great to see and chat with friends and ex-colleagues who are also over here, which is yet another great reason to come to VMworld. It’s 10,000 people, but there’s still a strong sense of community.

10-08-15

VCIX-NV Exam Experience

Last Thursday I went over to Leeds to sit the VCIX-NV exam. Obviously regular readers will know I haven’t been using NSX all that long (around 6 weeks, I’d say) and I’ve already managed to get the VCP out of the way, so I figured I needed a new challenge! As usual, no exam questions are listed here, per the NDA, but if you’re thinking of doing this exam any time soon, I’d recommend it. Advanced exams are always a tough but rewarding experience.

The exam itself, as per the blueprint, is 18 questions with a selection of subtasks. Passing score is 300 out of 500 and obviously you can score points even when you don’t fully meet all question requirements. Total time allowed is 225 minutes, although I didn’t spend a lot of time clock watching until the end.

I’ve read a lot of people complaining about latency issues, but I didn’t really see that during my sitting. I have a level of expectation that there will be latency anyway, and it wasn’t so severe that it made much of a difference to me getting things done. I did have an issue with low colour depth on the screen, which is obviously a known issue as it was listed on the exam start screen. Again, it didn’t prevent me performing any tasks, so I elected against disconnecting and reconnecting as recommended – I’m always paranoid that something bad will go wrong the second time around!

The exam itself is very faithful to the blueprint, but as the blueprint is so wide in scope and there are only 18 questions, some areas were not covered at all, which you’d sort of expect. There was certainly nothing in there that I thought was not fair game.

About half way through I had a major issue where a host stopped responding. After informing the proctor and some phone calls to and fro between the test centre, Pearson and VMware, it was decided it was my fault and so it wouldn’t be fixed. I wasn’t sure I agreed with that assessment, but as things turned out it worked in my favour, in a crazy way. Up to that point I’d been going quite slowly and not managing my time very well (a constant failing of mine when sitting VCAP/VCIX exams), so having 20 minutes out of the room while the host issue was looked at lit a fire under me to get things done more quickly once I went back in. In the end, the dead host had no effect on any of the other tasks I had to do (and I should add that Pearson did give me the time back on the exam timer).

I did miss one question out that I was saving to the end, but I ran out of time to come back to it. After hitting the finish button with seconds left, I got my score report back on Friday night (thanks again Josh @ VMware for pushing the scoring through) and much to my surprise and utter relief I passed with 300/500. Right on the limit, but a pass is a pass and the exam has helped me identify areas I need to strengthen, so a win-win all around.

In terms of study materials, let me recommend the following:-

The Hands On Lab environment is very similar to the exam environment, and working through each exercise several times until you have it down pat is a really effective way of preparing for the exam. Remember during the exam that you can score points in a variety of ways, so make sure to read the question and complete as many tasks as you can – this was basically the key to me just about getting over the line. Even if it’s only one sub task out of three or four, if you can complete it, do it and add it to your total.

Finally, get to your exam centre in plenty of time, stay relaxed and don’t be intimidated! No idea what is next for me exam wise, I think I’ll probably have a breather and wait until the new VCIX-DCV and DTM are released, probably towards Christmas/New Year time.

01-07-15

Achievement Unlocked – VCP-NV

A little bit after the fact, but last Friday I sat and passed the VCP-NV exam, leaving me just the VCP-CMA short of a full house of VCPs (and that beta result is pending). Even though I’ve only had a few weeks getting hands on with NSX in the hands on labs, I think it’s a tribute to how simple the product is to pick up and run with that I found most aspects of it pretty straightforward to understand.

I went over the ICM course notes which I had and also watched Jason Nash’s excellent Pluralsight videos. Although not everything about the product is covered in these videos, they’re an excellent primer, with some networking fundamentals refreshers as well as the building blocks of NSX and how to deploy them. There are still a couple of areas that I’m not totally sure about (SNAT and DNAT, for example, and where to apply these rules) and I also seem to have a bit of a mental block around when MAC addresses change in transit, but I’m sure I’ll get there in the end.
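
For my own benefit as much as anyone’s, here’s a toy Python sketch of the SNAT/DNAT distinction (all addresses made up): SNAT rewrites the source of outbound traffic, while DNAT rewrites the destination of inbound traffic. (The MAC question is separate: source and destination MACs are rewritten at every routed hop, whereas IPs only change where NAT is applied.)

```python
# Toy packets as dicts; a real NAT rule also matches on ports/protocol.
def snat(pkt):
    # Outbound: hide the internal source behind the edge's public address.
    return {**pkt, "src": "203.0.113.10"}

def dnat(pkt):
    # Inbound: steer traffic hitting the edge's public address to an
    # internal server.
    return {**pkt, "dst": "172.16.10.5"}

outbound = {"src": "172.16.10.5", "dst": "198.51.100.7"}
inbound  = {"src": "198.51.100.7", "dst": "203.0.113.10"}

print(snat(outbound))  # {'src': '203.0.113.10', 'dst': '198.51.100.7'}
print(dnat(inbound))   # {'src': '198.51.100.7', 'dst': '172.16.10.5'}
```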

As NSX is still fresh in my mind and we’re hoping to join a VMware Lighthouse program in the UK, I’ve already booked my VCIX-NV exam for early August, which should give me plenty of time to crystallise the problems I’ve had as listed above. I actually enjoy the Advanced exams more than the VCP type exams as it appeals to the way I work and I prefer being hands on with products, rather than answering conceptual questions about the product.

The exam itself is 125 questions over 125 minutes and, as usual, is very faithful to the blueprint. Even before I’d got to the end I felt confident that I’d done enough to pass, even though I’d been probed on some of my problem areas. In the end I passed reasonably comfortably, and I look forward to sitting the VCIX in August!

21-05-15

IP Expo Manchester – Day 2 Review

Today was the second and final day of the inaugural IP Expo event at G-Mex Manchester Central. I’d spent a lot of time wandering around the solution hall yesterday, so today I wanted to spend more time in the sponsor sessions. The problem is that I would often find myself bumping into a familiar face, and by the end of a catch up, 30 minutes had gone!

One thing I’d like to express before I go through the session reports is that I wish vendors would stop this “avoid lock in” bullshit. Every vendor does lock in of one form or another. After all, how can you expect a good revenue stream for a product if it’s too easy for a customer to take their stuff elsewhere? I’m not saying vendor lock in is a bad thing, as long as there is a degree of interoperability which means you do have a path out if you need it. But please, don’t sell me something proprietary and then tell me it avoids vendor lock in – it makes my blood boil.

Anyway, onto the show…

Microsoft Mobile First, Cloud First Transformation – James Akrigg

Much to my disappointment, I missed all of this session bar the last 10 minutes. I’ll not go into the details as to why, but some aspects of the event organisation could be filed under “could do better”. Not all bad though. Anyway, I caught some integration with Cortana and Business Intelligence. Impressive stuff and makes you realise how much consumer tech is now appearing in “business” products as users start to expect the same experience on both sides.

One thing I will say about Microsoft is that I’ve been pretty impressed with how they’ve reinvented themselves. It’s not easy to change the path of the company that is so big and has massive cash cows like Windows and Office. In many ways, it makes me wonder how much further along they’d be today if they’d ditched Steve Ballmer a lot sooner.

The full house signs also demonstrated that Microsoft is still more than relevant in today’s IT landscape. Reports of their demise have been premature, and hats off to them for reinventing themselves as a cloud/mobile company. Yes, there are a lot of updates about Windows 10 and Office 2016, but they seem to me to be on a par with Azure and other platform announcements these days.

The true foundation for the Software Defined Enterprise – John J. Ryan – VMware

John Ryan from VMware – not walking like an Egyptian..

Next up was John Ryan from VMware to take us through a “how we got here” session and also how VMware were driving the SDDC market forward.

Key points included:-

  • We’re now in the mobile/cloud era after mainframe and client server eras
  • Cloud management platform is automation, operations and business intelligence wrapping over the “traditional” virtual infrastructure of compute, storage and networking
  • Control of data centre automated by software (management and security)
  • Foundation of Software Defined Enterprise means handling complex tasks in a simple way
  • vSphere provides the capability to virtualise applications, desktops, servers, databases etc – the best general purpose hypervisor around
  • Hadoop Big Data extensions, certified support for SAP HANA amongst newer features
  • Container support in vSphere 6, integrated OpenStack
  • Instant clone for desktop workloads, radically improve your VDI provisioning
  • 4x scalability in vSphere 6 (vCPU, vRAM, etc.)
  • Photon is a special VMware edition of Linux to run containers, open sourced along with Lightwave (identity)
  • Long distance vMotion up to 150 milliseconds latency
  • Enabled by cross vCenter vMotion, which is also new in vSphere 6
  • Use cases for long distance vMotion include follow the sun, disaster avoidance
  • Fault tolerance now supports up to 4 vCPUs
  • Needs 10 gig infrastructure however
  • Content library can store templates, ISOs, OVAs. Subscribe and replicate content (bit like System Center)
  • Recent tasks and right click improved in Web client, more intuitive workflows (i.e. a bit more like the “fat” client!)
  • Virtual SAN extended to use hybrid models, including storage arrays
  • Virtual volumes (VVols) changes the storage paradigm
  • Automated policy management
  • All industry partners will support virtual volumes, some natively in the array and some via virtual appliances with VASA

One cloud, One security – VMware / Trend Micro – Peter Bury and Stephen Porter

Peter Bury – VMware

From the previous session I hot footed it across the hall to the next session, which mainly dealt with what NSX is at a high level and how that fits into the SDDC message. This was a decent presentation that illustrated the problem of applying security out on physical network devices when you might want to segment VMs away from each other within the same cluster – the “old” networking model created in vSphere with standard vSwitches just wasn’t very flexible.

Key points from this session:-

  • Old design methodology meant sending VM traffic out of the cluster and letting the edge firewall deal with it (tromboning)
  • Firewall rules get massive over time as nobody removes them when a service is deprovisioned, for fear of breaking something. The secondary impact is slowing down the firewall as it has to churn through hundreds of rules
  • Not a fluid design for agile changes
  • This design is virtual but it’s not cloud
  • Enterprises want rapid elasticity, roll out services as the organisation demands
  • If IT is too slow, company will go out to the public cloud
  • Intelligence is built into software for security, firewall etc. Physical networks become “dumb carriers”
  • Load balancing, routing, switching, firewalls, access control lists in software, as part of the hypervisor stack.
  • Context in workloads is achieved by baking in features into the hypervisor
  • New model enables wrapper around workloads to provide security
  • API allows trusted partners to provide their expertise in the micro segmentation of virtual machines
  • Anti-malware, anti-virus, intrusion protection etc
  • Agentless design from Trend
  • All policy driven, so policy follows the VM wherever it goes (see the sketch after this list)
  • Moved from scheduled scan to real time security
  • Dashboard available for vCenter/vRealise Operations Manager
  • Security rules can be applied before patches are ready. A Heartbleed rule was available in a couple of hours; patches for the same vulnerability take days or weeks
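
To make the ‘policy follows the VM’ idea concrete, here’s a toy Python model (group names and rules invented for illustration): rules key off logical security groups rather than IPs or network location, so a vMotion to another host or site changes nothing about what the VM may talk to.

```python
# Rules are expressed between security groups (derived from VM tags),
# never between addresses, so they survive any relocation of the VM.
POLICY = {
    ("web", "app"): "allow tcp/8443",
    ("app", "db"):  "allow tcp/3306",
}

def evaluate(src_group: str, dst_group: str) -> str:
    # Default deny, as in a micro-segmented distributed firewall.
    return POLICY.get((src_group, dst_group), "deny")

print(evaluate("web", "app"))  # allow tcp/8443 - wherever either VM runs
print(evaluate("web", "db"))   # deny - no direct web-to-db path
```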

It’s time to upgrade from backup to business continuity – Fifosys

The next session covered the topic of business continuity. One interesting point on this was that BC is not always a “site down” issue. Sometimes it can be a key LOB application that has gone for a lie down and you need to have a strategy for bringing that back so the business can keep functioning properly.

This was also a leader into introducing us to the Datto appliance, which I must admit was new to me. There seems to be a burgeoning market now in hardware appliances that keep some data local and then move “cold” blocks out to the cloud where it’s cheaper to store. This product works along similar lines, but with backup images.

Key takeaways:-

  • Disaster recovery hampered by slow, manual processes, including getting off site backups, restoring from tape etc
  • 13% of Fifosys survey responders don’t take tapes off site
  • 61% still using tape
  • 52% of SMBs do not have a BC plan as they don’t view IT as critical to the business
  • 45% of downtime caused by human error according to Oracle User Group
  • BC is not just failure of a site, but key business systems
  • Assign a financial cost to an outage to justify a BC plan – there are plenty of simple equations out there you can use (see the sketch after this list)
  • Impact time has a direct effect on the costs of an outage
  • BC should be simple and automated
  • Sub 1 hour recovery is a must
  • Reduce reliance on staff for BC
  • BC reduces operating risk, don’t see it as a cash waste
  • DR tests should not be time consuming or impactful
  • Traditional designs use active/passive or active/active data centres, which can be prohibitively expensive
  • Datto appliance performs backups and replicates to the cloud
  • Image based backup every 5 minutes
  • Can restore to the Datto appliance as it has a baked in hypervisor
  • Agent based backup
  • Screen shot verification is an automated daily DR test
  • Uses inverse chain technology
  • Datto protects 100 PB globally
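
On the financial cost point, one of the simple equations alluded to might look like the following (a sketch with made-up figures, not from the session):

```python
# Rough downtime cost: (lost revenue + idle staff + recovery effort) per
# hour, multiplied by the hours of impact. Figures are illustrative only.
def outage_cost(revenue_per_hr: float, staff_per_hr: float,
                recovery_per_hr: float, hours: float) -> float:
    return (revenue_per_hr + staff_per_hr + recovery_per_hr) * hours

# A 4-hour outage for a business losing £5,000/hr in revenue, with
# £1,200/hr of idle staff and £300/hr of recovery effort:
print(outage_cost(5000, 1200, 300, 4))  # 26000.0 - easy to weigh against BC spend
```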

Cisco’s Intercloud Strategy – Bruno Oliveira

Cisco have a “cloud of clouds” strategy called Intercloud, which is an interesting concept. By the presenter’s own admission, there are still odd pieces of it not quite ready yet (mainly the Cloud Market Place option), but in a similar way to vCAC/vRA it does its best to be a technology agnostic solution (with Cisco’s wrapper around it, naturally!).

Again the key takeaway is freedom of choice and the flexibility to move workloads around to internal or external clouds as economics and performance requirements dictate.

Key points:-

  • Uber now biggest taxi company in the world but don’t own taxis
  • AirBnB don’t own hotels
  • Digital disruption caused by these types of companies
  • 50 billion objects connected to the Internet by 2020
  • Unified workload management. Any VM any cloud is the essence of Intercloud
  • Keep data in country as opposed to AWS etc where you may not know where it is
  • 55% companies turning to the cloud to lower costs
  • Global cloud of clouds using VMware, OpenStack, etc. Cisco validated architecture
  • Cisco want to wrap around all these disparate services so customer sees it as their infrastructure
  • 160 InterCloud partners
  • 60 providers and resellers
  • InterCloud fabric is the software wrapper around this environment
  • InterCloud marketplace coming, both for internal and external clouds
  • Fabric provides end user and administration portals
  • Can move workloads from one provider to another (Azure to vCloud Air, vCA to AWS, etc.)
  • Cloud usage collector can be attached to physical network kit to accurately measure cloud service consumption so CIOs can “really” see what external services are being consumed

A new approach to optimising the WAN with Citrix CloudBridge – Al Taylor, CloudDNA

My final session of the conference was around the Citrix CloudBridge 11 appliance. Folks who know me know I’m not so much a Citrix guy, but I try to be as agnostic as possible and try to avoid “drinking the Kool-Aid”. At the end of the day, I don’t believe it pays to close yourself off from any vendor as you never know when they’ll have a niche product or solution that will come to your rescue.

I actually enjoyed this session the most of all the ones I saw over the two days. There was something vaguely punky about the presenter, and his enthusiasm for the CloudBridge device really shone, whereas some of the other presenters went through the motions a little bit. I’m not denigrating them, but perhaps that’s the difference between a true techie and a non-techie speaker.

Anyway, CloudBridge is a Citrix appliance that is intended for use over constrained bandwidth to improve XenDesktop / XenApp user experience (amongst other use cases such as video and Lync).

Key session points:-

  • CloudDNA are the only dedicated Citrix cloud networking practice in the UK
  • NetScalertaylor.com for NetScaler blog
  • ILoveNetScaler.com news aggregate and weekly newsletter
  • CloudBridge is like a WAN repeater
  • Acceleration and compression, amongst other things
  • Video optimisation for Lync etc
  • HDX analysis for CloudBridge to get full visibility of all bottlenecks
  • Feeds back into Desktop Director for quick and simple performance analysis
  • 64 channels in ICA traffic
  • Prioritise channel traffic to ensure performance for the user
  • Tolly Report on CloudBridge gives WAN optimisation report of optimised vs non-optimised
  • Virtual appliance or piece of hardware
  • Branch office in a box, can run ThinPrint on the hardware if need be
  • Virtual WAN binds multiple links together and uses policies to decide which traffic goes down which link. Not bonded (see the sketch after this list)
  • Encrypt the paths between the two end points
  • Send packets based on application needs
  • Active bi-directional probing
  • 600 applications optimised out of the box, not just Citrix centric
  • Faster time to deploy branch offices
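
As a thought experiment, the policy-based path selection described for Virtual WAN might be modelled like this in Python (entirely illustrative – link and application names are made up):

```python
# Two independent links used side by side (not bonded); a policy maps
# each traffic class to a preferred path, with a default for the rest.
LINKS = {"mpls": {"latency_ms": 20}, "vdsl": {"latency_ms": 45}}

POLICY = {
    "ica":   "mpls",  # interactive desktop traffic wants the low-latency path
    "video": "vdsl",  # bulk streaming can take the cheaper link
}

def pick_link(app: str) -> str:
    return POLICY.get(app, "vdsl")  # everything else defaults to the cheap path

for app in ("ica", "video", "backup"):
    print(app, "->", pick_link(app), LINKS[pick_link(app)])
```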

In all, I enjoyed the event and it was nice to see this type of event in my neck of the woods as these things tend to be London only. Hopefully there was enough interest to make the show a bit bigger next year (they’ve already published similar dates for 2016) and get some “proper” representation from the heavy hitters (VMware, Microsoft) rather than being a desk on part of a partner’s stand.

14-02-14

Are we finally seeing VMware 2.0?

I’ve been keeping a close eye on some of the news coming out of VMware Partner Exchange (PEX) this week and it left a bit of an impression on me. So much so I decided to write about it, in a change from our usual programming of study guides for VCAP. We’ll get back to that, don’t worry, but I wanted to impart my opinion on this topic because I think it’s important and wonder what other people think.

VMware as a company grew exponentially in the 2000s by introducing x86 virtualisation to the market, something which was a game changer as it meant we could put dozens of servers on one physical piece of tin, saving a lot of time and money and making admin’s lives a lot easier. I remember the first time I saw vMotion at a demo and my instinct was to be cynical and say it was all smoke and mirrors, but no, it was the real deal and so was the company and the technology.

Fast forward a few more years, and as the company grew and was acquired by EMC, it started to look to broaden its solution stack to become a much richer software company. In 2009 they acquired SpringSource, in 2010 Zimbra and in 2011 SlideRocket. This was back in the day when I was still working for a VMware Partner, and I did wonder at the time what the value was to VMware of acquiring these companies and their technologies. In the case of Zimbra, for example, it seemed like a solution looking for a problem. Let’s be frank, the on premise e-mail platform war was won years ago by Microsoft Exchange, and even if you try and “cloudify” Zimbra, you’re still facing stiff competition from the likes of Google Apps and Office 365.

For me, in many ways they made the classic business mistake – forgetting what you’re good at. If you look at the technology business, the companies that do best have a fairly narrow focus: they know what they do best and stick to it. There’s no harm in spreading yourself across different technologies or industries, but you must remain true to what originally put you where you are. Look at Apple and Oracle as examples of companies that may have dabbled in a couple of additional technologies, but in the end have remained strong and successful by focusing on a couple of product lines and executing them really well to become market leaders.

Take Microsoft as an example of a company that tried to spread itself too thinly. They’ve made highly successful desktop operating systems and productivity suites for years, but that wasn’t enough and the problems and eye watering costs accrued from the likes of XBox, Windows Phone, Surface and Bing are well documented. In many ways, Microsoft still doesn’t know what it wants to be, but continues to execute on the Windows platform (including Hyper-V) and Office year on year. If Microsoft had not had so much cash in the bank to fund these failures, they’d have gone under years ago. Windows and Office still provide the financial engine that drives Microsoft.

Which brings me neatly to my point about VMware. In 2013, Zimbra was sold to Telligent Systems, SlideRocket was sold to ClearSlide and SpringSource was hived off to Pivotal (source – Wikipedia). The Zimbra announcement at least was done relatively quietly in my view, and represented the epiphany VMware must have had that they were carrying too much baggage that was non-strategic to the core business. Is it coincidence that these activities have occurred since Pat Gelsinger became CEO? I’ll let you decide that.

So at PEX, a lot of focus was put on two emerging technologies – NSX and VSAN. The first, for those that don’t know, is network virtualisation. This is big news and will again see VMware disrupting this market too. Cisco have already made noises about the impact network virtualisation will have on their hitherto successful core business of network tin. If that moves into the software stack, they’ve got troubles.

VSAN is a new product which basically accumulates and aggregates local storage on ESXi hosts and presents it as shared storage. There are more features than that, but this is the basic premise – lower cost, simpler deployments and one vendor less to deal with if proprietary storage platforms are in use. Again, it’s very early days and the storage market is highly competitive right now (Nutanix being the obvious example).

I’ve seen and heard a lot of criticism about VMware in recent years, some of it justified and some not. I’ve heard remarks that they’re a busted flush and Hyper-V will take over. For me, putting the focus back on core virtualisation products is entirely the right move to make and will fundamentally keep VMware relevant and a market leader in the industry for the next decade. Now that “vanity” projects have been spun off and sold, the company can keep a narrower focus and keep doing what it does best – virtualisation.

As always, your views are welcomed.