Jul 02

Chance to win VMworld 2015 conference pass via Infinio

Today it came to my knowledge that Infinio will once more give away one full VMworld 2015 conference pass. VMworld will be held August 30 – September 3 at the Moscone Center in San Francisco. Infinio did the same thing last year, which I published a blog post about here, and the promotion runs from July 3, 2015 until 11:59 pm ET July 16, 2015.

Just use the following link or click the banner below for your chance to win the free conference pass.

2015-07-InfinioVMworldPass-250x250

The winner will be announced Friday, July 17th, and by entering to win you accept the following terms and conditions.

I would really like to thank Infinio for this great way of sponsoring the virtualisation community. This is not an advertising blog post paid for by my blog sponsor Infinio; it's just a way for you as a VDCX56 blog reader to get a chance to take advantage of this great offer from Infinio.

Jul 01

Meet NPX 001 – Josh Odgers


  • Name: Josh Odgers
  • NPX Number: 001
  • Employer: Nutanix
  • Blog URL: http://www.joshodgers.com
  • Twitter: @josh_odgers
  • Virtualisation background: Working with VMware as a focus since ~2006
  • Storage Background: Shared storage experience with a wide range of technologies from EMC, Netapp, IBM, HP since early 2000s with a more NAS focus since 2010.
  • Hyper Converged Infrastructure background: Started working with Nutanix mid 2013. Before joining, I investigated HCI, and Nutanix specifically, and quickly formed the opinion that HCI would become a significant part of the virtualization/storage market and that Nutanix would be a great place to take the next step up and continue to challenge myself.
  • Future of HCI and other emerging technologies: Without a doubt, HCI should and will be a major percentage of the storage market in the coming years. The trend is already clear, with major players such as EMC, NetApp and VMware following Nutanix's lead with HCI offerings, as well as numerous small start-ups entering the market.
  • Value of NPX: NPX is proof of “X”-level skills across multiple platform types (e.g. SAN/NAS/HCI), hypervisors (ESXi, KVM, Hyper-V) and the associated management tool sets. Even if I didn’t work for or partner with Nutanix, the NPX would be an excellent challenge for any senior/consulting architect. While NPX does require knowledge of the Nutanix platform, it’s everything around the platform (e.g. applications/network/datacenter) that still needs expert-level guidance to ensure a consistent and high-quality outcome for the customer – this is what NPX is all about.
  • What made you go for NPX: The main driver for me starting on a journey to achieve an “X”-level certification such as VCDX and now NPX is to obtain more knowledge/experience. Being a double VCDX was great preparation for NPX, where I not only improved my VMware/Nutanix skills but also added another, equally important skill set in KVM/Acropolis. I am of the opinion that, whether anyone likes it or not, the days of vSphere dominating the market are coming to a close, and skills outside of VMware products are becoming increasingly critical for customers.
  • Advice to people looking into the NPX track: Anyone who is currently a VCDX or aspires to be one is a great candidate. Even if your day-to-day focus is not Nutanix, the skills and experience you will gain will help ensure your skill set is well rounded, covering traditional 3-tier architectures and multiple hypervisors/management solutions.
  • Other: The last thing I wanted to say is that there have been questions from a small minority about why a certification like NPX is relevant when Nutanix has dramatically simplified the datacenter. The answer is simple: NPX is about the total solution, not just one component (i.e. vSphere or NDFS). Nutanix does not create operating systems like Windows/Linux; Nutanix creates invisible infrastructure which supports things like hypervisors, VMs, applications and containers. Without skill sets in dependencies such as the network, as well as hypervisors, VMs, applications and management tools, the Nutanix platform's value will not be maximized. NPX is about ensuring candidates have the skills/abilities to minimize risk and maximize value for customers, regardless of the application, hypervisor etc.

Back to the NPX section

 

Jul 01

vSphere 6.0 – Performance Best Practices Guide Released

This is just a very short blog post, but I know that a lot of you have been waiting for this one.

VMware recently released an updated version of their popular Performance Best Practices guide and this version addresses vSphere 6.0.
You can download the PDF here, and by doing so you'll make sure you have something to read for the next few hours since the paper is actually 98 pages long.

The Performance Best Practices guide is intended for vSphere administrators who want to tune their environment for best performance, but it is also useful for everyone who needs an understanding of what you can achieve and do in terms of performance tuning in a vSphere 6.0 environment.

If you are still running vSphere 5.5 you can download the Performance Best Practices guide for vSphere 5.5 here.

Jun 29

Upgrade to vSphere ESXi 6.0 via Nutanix PRISM

So one of the cool things with the Nutanix PRISM interface is that more and more software components can be managed from this single interface. I created a post a while back where I showed how you can easily upgrade the Nutanix Operating System, and you can find it here.

This blog post will be about upgrading the ESXi hosts from 5.5 to 6.0 via Nutanix PRISM. Since I'm using the vCenter Server Appliance (VCSA), I really don't want to use a Windows Server based virtual machine (VM) just to run vSphere Update Manager (VUM) for upgrading and patching purposes.

Follow the below steps to perform the upgrade:

  • Verify the existing ESXi host version, in this case via the vSphere Client. Yeah, I know :)
    esxi-version1
  • Log in to Nutanix PRISM (using either the cluster FQDN or IP address, or one of the Controller Virtual Machine (CVM) FQDNs or IP addresses).
  • Click the wheel icon in the upper right corner and select “Upgrade Software”
    Screen Shot 2015-06-24 at 14.09.54
  • Select the “Hypervisor” tab and you’ll see the current hypervisor version. In my case ESXi 5.5.0 Screen Shot 2015-06-24 at 14.10.21
  • Click “Upload a Hypervisor binary”
  • Select your JSON file and the ESXi binary. In my case VMware-VMvisor-Installer-6.0.0-2494585.x86_64.iso.
  • Click Upload now
    Screen Shot 2015-06-24 at 14.10.48
  • When the upload is finished, select the “Upgrade -> Upgrade Now” option
    Screen Shot 2015-06-24 at 14.10.57
  • Fill in the vCenter Server details and click upgrade:
    • vCenter IP Address
    • Username
    • Password
      Screen Shot 2015-06-24 at 14.11.12
  • Sit back and watch the upgrade process or do something else:)
    upgrade1
  • Click close when the upgrade process is finished.
    upgrade2
  • Verify the ESXi version via the vSphere Client as well
    esxi-version2
  • And select Upgrade Software to verify the ESXi host version from within PRISM as well
    Screen Shot 2015-06-24 at 14.09.54
    upgrade4
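As a complement to checking the version in the vSphere Client, you can run `vmware -v` in an ESXi shell or SSH session before and after the upgrade. Below is a small helper to parse that output; the parsing function is my own sketch, not part of any VMware or Nutanix tooling:

```python
import re

def parse_esxi_version(vmware_v_output):
    """Extract (version, build) from the output of 'vmware -v',
    e.g. 'VMware ESXi 6.0.0 build-2494585' -> ('6.0.0', 2494585)."""
    match = re.search(r"VMware ESXi (\d+\.\d+\.\d+) build-(\d+)", vmware_v_output)
    if not match:
        raise ValueError("Unexpected 'vmware -v' output: %r" % vmware_v_output)
    return match.group(1), int(match.group(2))

# After the one-click upgrade every host should report 6.0.0
print(parse_esxi_version("VMware ESXi 6.0.0 build-2494585"))
```

Running this against each host's output is a quick way to confirm the whole cluster landed on the same build.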

This completes the blog post about how you can upgrade your hypervisor, in this case VMware ESXi, from within the Nutanix management UI, PRISM.

Jun 24

Hypervisor resource utilization question


When you determine the amount of resources you need for your virtualised environment you need to include several things such as:

  • Hypervisor CPU requirements
  • Hypervisor RAM requirements
  • Hypervisor disk requirements
  • Hypervisor CPU and/or RAM requirement per running VM
  • VM CPU requirements
  • VM RAM requirements

On top of this, some hypervisors allow for resource oversubscription for CPU and/or RAM, and that must also be taken into account. I have written a blog post about my experience over the years in terms of vCPU to physical core oversubscription, which can be found here.
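To illustrate how the inputs above combine into a host count, here is a minimal sizing sketch. All numbers (oversubscription ratio, hypervisor and per-VM overheads) are made-up example values for illustration, not recommendations:

```python
import math

def hosts_needed(vm_count, vcpus_per_vm, ram_gb_per_vm,
                 cores_per_host, ram_gb_per_host,
                 vcpu_per_core_ratio=4.0,
                 hypervisor_ram_gb=8.0, ram_overhead_per_vm_gb=0.2):
    """Estimate hosts needed for a VM population, taking the worst
    of the CPU-bound and RAM-bound requirements."""
    # CPU: total vCPUs divided by (physical cores * oversubscription ratio)
    hosts_cpu = (vm_count * vcpus_per_vm) / (cores_per_host * vcpu_per_core_ratio)
    # RAM: usable host RAM after the hypervisor's own footprint,
    # with a small per-VM memory overhead added to each VM
    usable_ram = ram_gb_per_host - hypervisor_ram_gb
    hosts_ram = vm_count * (ram_gb_per_vm + ram_overhead_per_vm_gb) / usable_ram
    return math.ceil(max(hosts_cpu, hosts_ram))

# e.g. 200 VMs with 2 vCPU / 8 GB each on 16-core, 256 GB hosts
print(hosts_needed(200, 2, 8, 16, 256))
```

Note this is deliberately simplistic; it ignores N+1 failover capacity, NUMA boundaries and per-hypervisor differences, which is exactly why real-world utilisation data matters.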

I have a pretty good understanding of the “Hypervisor CPU and/or RAM requirement per running VM” figures for VMware ESXi since that's the hypervisor I have been working with for the past 10+ years, and during that time I have collected a lot of resource utilisation data.

Since I now work with KVM, Microsoft Hyper-V and VMware ESXi, I'm in the process of building up the same level of data for KVM and Microsoft Hyper-V. I know there are documents available that tell you what figures to use, but nothing is more important than real-world data.

There's a first time for everything, and this is the first time I ask my blog readers for feedback :)
If you have 2 minutes to spare, it would be awesome if you could fill out the very, very short survey regarding hypervisor resource utilisation for your KVM, Microsoft Hyper-V and/or VMware ESXi environment.

The results will be published as a separate blog post and hopefully a lot of people will find the information useful.

Thanks

Jun 23

Can’t connect to vCenter Server – connection timeout


So a few days back I ran into a problem I had never seen before while connecting to vCenter Server version 5.5 U2d.
Neither the vSphere Web Client nor the vSphere Client managed to connect to the vCenter Server; I just received connection timeout messages. I actually tried to log in, using the vSphere Client, from the Windows Server 2012 R2 based virtual machine (VM) running the vCenter Server application. That also failed.
The following steps were taken during my investigation:
  • Verified all vCenter Server related services were up and running
  • Verified that the anti-virus installed in the VM was not blocking connections.
  • Verified that the local firewall did not block our connections.
  • Investigated the vCenter Server log files:
    • The only thing i could find was the following in the Inventory Service log file:
      • ProviderManagerServiceImpl Error fetching resources: com.vmware.vim.query.server.store.exception.ResourceRetrievalException: org.apache.http.conn.HttpHostConnectException: Connection to 443 refused
  • Verified the Windows Server Event logs and found the following in the System event log:
    • Event: 4227 Tcpip with level warning:
      • TCP/IP failed to establish an outgoing connection because the selected local endpoint was recently used to connect to the same remote endpoint. This error typically occurs when outgoing connections are opened and closed at a high rate, causing all available local ports to be used and forcing TCP/IP to reuse a local port for an outgoing connection. To minimize the risk of data corruption, the TCP/IP standard requires a minimum time period to elapse between successive connections from a given local endpoint to a given remote endpoint.
    • Event: 23 Eventlog with level Error
      • The event logging service encountered an error (res=1500) while initializing logging resources for channel Microsoft-Windows-PushNotification-Platform/Admin
    • Event 25 Eventlog with level Info
      • The event logging service encountered a corrupt log file for channel Microsoft-Windows-FileServices-ServerManager-EventProvider/Operational. The log was renamed with a .corrupt extension.
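Event 4227 points to ephemeral port exhaustion: connections to the same remote endpoint were being opened and closed faster than TIME_WAIT released local ports. A back-of-the-envelope way to reason about it is below; the dynamic port range 49152–65535 and a 240-second TIME_WAIT are commonly cited Windows defaults, not values verified on this system (check yours with `netsh int ipv4 show dynamicport tcp`):

```python
def max_sustained_connection_rate(port_range_size=16384, time_wait_seconds=240):
    """Rough upper bound on outgoing connections/second to a single
    remote endpoint before the ephemeral port pool is exhausted.

    Each closed connection ties up its local port for time_wait_seconds,
    so the pool drains when new connections outpace port recycling.
    """
    return port_range_size / time_wait_seconds

# With the assumed defaults: 16384 ports over 240 s, i.e. roughly
# 68 connections/second sustained before ports start being reused
rate = max_sustained_connection_rate()
```

Anything polling the vCenter services aggressively enough to exceed that rate could produce exactly this symptom.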

Based on my findings i decided to restart all the vCenter Server related services but that didn’t help.

I then decided to reboot the VM, and that actually made all the event log entries and the vCenter Server Inventory Service error disappear, and I could connect to the vCenter Server.

I have asked the customer to create SRs with both Microsoft and VMware so we can get an understanding of why this happened. I will update the post with additional information when I get some feedback.

Jun 16

vSphere 6.0 Security Hardening Guide

Yesterday, 2015-06-15, VMware released the vSphere 6.0 security hardening guide. I know a lot of you have been waiting for this one, so it's nice that it's now available. As usual, read through the doc (an Excel document) and find out the security hardening configuration required for your company and/or your customer. Download the vSphere 6.0 security hardening guide here.

Below is a collection of the available VMware Security Hardening Guides in case you need an older vSphere version, or maybe vRealize Automation (vRA), formerly known as vCloud Automation Center (vCAC).

You can read more and download additional information on the VMware Security Hardening Guides web page.

Jun 12

Nutanix Erasure Coding


Those of you who run the Nutanix platform or have seen Nutanix presentations know that the Nutanix Distributed FileSystem (NDFS) stores data using replication factor (RF) 2 or 3. This means that the file system stores either 2 or 3 copies of the same data to deliver availability, and this is a customer decision and configuration. Replication factor is a high-performance operation without the traditional RAID penalty.

To make usable-capacity calculations easy you can:

  • Multiply your RAW capacity by 0.5 (50%) when using RF2
    • This means we could store data pieces X & Y according to the following in a 4 node Nutanix cluster.
      Nutanix Node Data copy
      1 X
      2 Y
      3 X
      4 Y
  • Multiply your RAW capacity by 0.33 (33%) when using RF3
    • This means we could store data pieces X & Y according to the following in a 4 node Nutanix cluster.
      Nutanix Node Data copy
      1 X
      2 XY
      3 XY
      4 Y
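As a toy model, the placement in the two tables above can be reproduced with a few lines of code. This is purely my own illustration of how copies spread across nodes, not how NDFS actually places extents:

```python
def place_replicas(pieces, nodes, rf):
    """Toy placement matching the tables above: copy i of piece p
    goes to node p + i*(nodes // rf), wrapping around the cluster.
    Returns {node_number: [pieces stored on that node]}."""
    stride = nodes // rf
    layout = {n: [] for n in range(1, nodes + 1)}
    for p, piece in enumerate(pieces):
        for i in range(rf):
            node = (p + i * stride) % nodes + 1
            layout[node].append(piece)
    return layout

# RF2 on 4 nodes: X lands on nodes 1 and 3, Y on nodes 2 and 4
print(place_replicas(["X", "Y"], 4, 2))
# RF3 on 4 nodes: X on nodes 1-3, Y on nodes 2-4
print(place_replicas(["X", "Y"], 4, 3))
```

The point of the model is simply that each copy of a piece always lives on a different node, which is what lets the cluster survive node failures.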

Up until today Nutanix has offered Compression and Deduplication as space saving features, and on top of the space savings these features also deliver additional performance in many cases.

During the Nutanix .Next conference (June 8 – 10) the Nutanix Erasure Coding (EC) feature was announced. Erasure coding works at the Nutanix Extent Group level. This will help you save additional storage capacity, and the below table helps you get an understanding of how much space you can actually save.

Nutanix Nodes RAW TB RF2 TB EC TB
A B P * 60 30 40
A B P U 80 40 ∼53
A B C P U 100 50 75
A B C D P U 120 60 96
A B C D E P U 140 70 112

* 3-node clusters are not supported for EC but it can be enabled for testing purposes if needed.

Nutanix Nodes explanation:

  • A = Nutanix Node used for data
  • B = Nutanix Node used for data
  • C = Nutanix Node used for data
  • D = Nutanix Node used for data
  • P = Nutanix Node used for Parity
  • U = Unused, i.e. a Nutanix node avoided by the data and parity placement

So EC means you don't have to size for your logical requirement times 2 for RF2 and/or your logical requirement times 3 for RF3. You can use the below overhead factors instead:

  • 3 Nutanix Nodes = 1.5
  • 4 Nutanix Nodes = 1.5
  • 5 Nutanix Nodes = 1.33
  • 6 Nutanix Nodes = 1.25
  • 7 Nutanix Nodes = 1.25
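Combining the overhead factors above with the RAW figures from the table (20 TB RAW per node), the usable-capacity math can be sketched like this:

```python
# EC overhead factor by cluster size, taken from the list above
EC_OVERHEAD = {3: 1.5, 4: 1.5, 5: 1.33, 6: 1.25, 7: 1.25}

def usable_tb(nodes, raw_tb_per_node=20):
    """Return (RAW, RF2 usable, EC usable) in TB for a cluster."""
    raw = nodes * raw_tb_per_node
    rf2 = raw / 2                      # RF2 stores every piece twice
    ec = round(raw / EC_OVERHEAD[nodes], 1)
    return raw, rf2, ec

# 5 nodes: 100 TB RAW -> 50 TB usable with RF2, ~75 TB with EC
print(usable_tb(5))
```

The results line up with the RF2 TB and EC TB columns in the table above (the ~53 for 4 nodes comes out as 80/1.5 ≈ 53.3).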

I have tested EC in my lab, and on top of the space savings I had with compression, EC added an additional 22.5% space savings, which is really great. Additional tests will be added when I get the time.

You activate EC on a container level via PRISM by highlighting the Container -> clicking Update -> clicking Advanced and ticking the check box “ERASURE CODING”

Screen Shot 2015-06-11 at 13.05.10

Another way would have been to use the ncli available in each CVM:

  • ncli container edit name=<ctr_name> erasure-code="X/Y"

The Erasure Coding option is also available when creating a new Nutanix Container.

As of Nutanix Operating System (NOS) version 4.1.3, released by Nutanix 2015-06-10, the Erasure Coding feature is available as a tech preview feature, meaning you should not use it in production systems.

 

 

Jun 11

Nutanix Operating System 4.1.3 released

Yesterday, 2015-06-10, Nutanix released a new version of the Nutanix Operating System (NOS), 4.1.3, and even though it's just a minor release compared to the previous 4.1.2, I would like to present some of the new things available. If you attended the Nutanix .Next conference June 8 – June 10, I'm sure you heard a lot about the new features, and several people have already asked me when these will be available.

Well, below is a short version of the release notes for 4.1.3 and as you can see, some of the features are already available as tech previews for you.

  • Support for vSphere 6.0
  • Image Service for Acropolis
  • Synchronous Replication (SyncRep) for Hyper-V Clusters
  • Data at Rest Encryption for NOS clusters with Acropolis and Hyper-V hosts
  • Prism Web Console/Prism Central Timeout setting
  • Erasure Coding – Tech Preview
  • Acropolis VM High Availability – Tech Preview
  • Acropolis Volume Management (in-guest iSCSI support) – Tech Preview

Important: Don't use the tech preview features in production systems, or for storage used or data stored in production systems.

Screen Shot 2015-06-11 at 06.25.34

Jun 11

Nutanix Community Edition available

Approximately one month ago I wrote a blog post about the Nutanix Community Edition announcement, which can be read here. Back then you could sign up for the public beta, and I'm now happy to announce that the public beta is available.

Get your own CE code by making a comment on this blog post, sending me a tweet @magander3, or using the Author page to drop me an email.

Useful links for Nutanix CE:

In the previous blog post I did not include any information about the features included, so below is an impressive list of some of the features available in CE:

  • Acropolis Hypervisor
  • Analytics
  • API framework, usable for:
    • Automation
    • Development
    • Orchestration
  • Asynchronous DR
  • Compression
  • De-duplication
  • Erasure Coding
  • Shadow Cloning
  • Replication factor (RF) 1 and 2.
    • Single server (RF=1)
    • Three servers (RF=2)
    • Four servers (RF=2)
  • Self-Healing

Screen Shot 2015-06-11 at 04.31.45

The same hardware requirements as mentioned in my announcement blog post still apply and include:

  • Server – 1, 3 & 4 servers (yes, you don't need several servers to start using Nutanix CE)
  • CPU – Intel CPUs, 4 cores minimum, with VT-x support
  • Memory – 16GB minimum
  • Storage Subsystem – RAID 0 (LSI HBAs) or AHCI storage sub-systems
  • Hot Tier (SSD) – One SSD per server minimum, ≥ 200GB per server
  • Cold Tier (HDD) – One HDD per server minimum, ≥ 500GB per server
  • Networking – Intel NICs
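If you want to pre-check a lab server against these minimums before installing, a trivial checker could look like this. It is my own sketch based on the list above; the CE installer performs its own validation:

```python
def meets_ce_minimums(cores, vtx, ram_gb, ssd_gb, hdd_gb):
    """Check a single server against the Nutanix CE minimums listed above.
    Returns (overall_ok, per-requirement results)."""
    checks = {
        "cpu": cores >= 4 and vtx,  # Intel, 4+ cores, VT-x support
        "ram": ram_gb >= 16,        # 16 GB minimum
        "ssd": ssd_gb >= 200,       # hot tier: >= 200 GB SSD
        "hdd": hdd_gb >= 500,       # cold tier: >= 500 GB HDD
    }
    return all(checks.values()), checks

ok, details = meets_ce_minimums(cores=8, vtx=True, ram_gb=32, ssd_gb=400, hdd_gb=1000)
```

The per-requirement dictionary makes it easy to see which component of an old lab box falls short.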

 
