StevenPoitras.com – Thoughts and ramblings on everything
http://stevenpoitras.com

Advanced Nutanix – PowerShell Edition: Ordered PD Restore
Tue, 21 Oct 2014 | http://stevenpoitras.com/2014/10/advanced-nutanix-powershell-edition-ordered-pd-restore/

In this post I provide a sample script on how you can use the Nutanix Powershell CMDlets and PowerCLI to restore PDs in a specified order.
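
The function expects the PowerCLI and Nutanix Cmdlets snap-ins to be loaded (it checks for both and bails out if they’re missing); assuming both are installed, you can load them up front like so:

# Load the snap-ins the function depends on
Add-PSSnapin VMware.VimAutomation.Core
Add-PSSnapin NutanixCmdletsPSSnapin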

Here’s the function which can be called to perform an ordered PD restore:

############################################################
##
## Script: Ordered PD Restore
## Author: Steven Poitras
## Description: Restore and power on PDs in desired order
## Language: PowerShell
##
############################################################
function NTNX-PD-Recovery {
<#
.NAME
	NTNX-PD-Recovery
.SYNOPSIS
	Recover Nutanix PDs to the target site and restore them in a specified order
.DESCRIPTION
	Recover Nutanix PDs to the target site and restore them in a specified order
.NOTES
	Authors:  The Dude
	
	Logs: C:\Users\<USERNAME>\AppData\Local\Temp\NutanixCmdlets\logs
.LINK
	www.nutanix.com
.EXAMPLE
    NTNX-PD-Recovery -pdToRestore "TestPD1","TestPD2" -pathPrefix "/mypath"
  	NTNX-PD-Recovery -pdToRestore "PD1","PD2","PD3" -pathPrefix "/mypath" -nxIP "99.99.99.99" -nxUser "admin" -vcIP "99.99.99.99" -vcUser "mydomain\admin"
#> 
	Param(
    	[parameter(mandatory=$true)][array]$pdToRestore,
		
		[parameter(mandatory=$true)]$pathPrefix,
		
		[parameter(mandatory=$false)]$nxIP,
		
		[parameter(mandatory=$false)]$nxUser,
		
		[parameter(mandatory=$false)]$nxPassword,
		
		[parameter(mandatory=$false)]$vcIP,
		
		[parameter(mandatory=$false)]$vcUser,
		
		[parameter(mandatory=$false)]$vcPassword,
		
		[parameter(mandatory=$false)][bool]$replaceVMs

	)
  	begin{
		# Clear window
		clear
		
		# Placeholder for VMs
		$global:restoredVMs = @()
		
		# Defaults
		[bool]$success = $true
		[int]$sleepTime = 10
		[int]$maxRetry = 5
		
		# If values not set use defaults
		if ($nxUser -eq $null) {
			Write-Host "No Nutanix user passed, using default..."
			$nxUser = "admin"
		}
		
		# A [bool] param defaults to $false, so check whether the caller actually passed one
		if (-not $PSBoundParameters.ContainsKey('replaceVMs')) {
			[bool]$replaceVMs = $false
		}
		
		# Make sure required snap-ins are installed / loaded
		$loadedSnappins = Get-PSSnapin

		if ($loadedSnappins.name -notcontains "VMware.VimAutomation.Core") {
			Write-Host "PowerCLI snap-in not installed or loaded, exiting..."
			break
		}

		if ($loadedSnappins.name -notcontains "NutanixCmdletsPSSnapin") {
			Write-Host "Nutanix snap-in not installed or loaded, exiting..."
			break
		}
		
		# Check formatting for path prefix
		if ($pathPrefix.StartsWith("/") -eq $false) {
			Write-Host "Path prefix is not in format '/path', reformatting..."
			$pathPrefix = "/$pathPrefix"
		}
		
		# Check for connection and if not connected try to connect to Nutanix Cluster
		if ($nxIP -eq $null) { # Nutanix IP not passed, gather interactively
			$nxIP = Read-Host "Please enter an IP or hostname for the Nutanix cluster: "
		}

		if ($(Get-NutanixCluster -Servers $nxIP -ErrorAction SilentlyContinue).IsConnected -ne "True") {  # Not connected
			# If no password, get password for Nutanix user
			if ($nxPassword -eq $null) {
				$nxPassword = Read-Host "Please enter a password for Nutanix user ${nxUser}: " -AsSecureString
			}
			
			Write-Host "Connecting to Nutanix cluster ${nxIP} as ${nxUser}..."
			$nxServerObj = Connect-NutanixCluster -Server $nxIP -UserName $nxUser -Password $(if ($nxPassword.GetType().Name -eq "SecureString") `
				{([Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($nxPassword)))} `
				else {$nxPassword}) -AcceptInvalidSSLCerts
		} else {  # Already connected to server
			Write-Host "Already connected to server ${nxIP}, continuing..."
		}

		# Check for connection and if not connected try to connect to vCenter Server
		if ($vcIP -eq $null) { # VC IP not passed, gather interactively
			$vcIP = Read-Host "Please enter an IP or hostname for the vCenter Server: "
		}
		
		if ($($global:DefaultVIServers | where {$_.Name -Match $vcIP}).IsConnected -ne "True") {
			# If no VC user passed prompt for username
			if ($vcUser -eq $null) {
				$vcUser = Read-Host "Please enter an admin user for vCenter: "
			}
			
			if ($vcPassword -eq $null) {
				$vcPassword = Read-Host "Please enter a password for vCenter user ${vcUser}: " -AsSecureString
			}
			
			# Connect to vCenter Server
			$vcServerObj = Connect-VIServer $vcIP -User $vcUser -Password $(if ($vcPassword.GetType().Name -eq "SecureString") `
				{([Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($vcPassword)))} `
				else {$vcPassword})
		} else {  #Already connected to server
			Write-Host "Already connected to server ${vcIP}, continuing..."
		}
		
  	}
  	process{
		# Get PD objects once
		$pds = Get-NTNXProtectionDomain
		
		# Make sure input PDs exist
		$global:foundPDs = $pds |? {$pdToRestore -contains $_.name}
		$notFoundPDs = $pdToRestore |? {$global:foundPDs.name -notcontains $_}
		
		Write-Host "Found $($global:foundPDs.length) PDs of $($pdToRestore.length) expected!"
		
		if ($global:foundPDs.Length -lt $pdToRestore.length) {
			Write-Host "Could not find the following PDs: $($notFoundPDs -join ",") exiting..."
			$success = $false
			break
		}
		
		# For each PD restore and power on
		$global:foundPDs | %{
			$currentPD = $_
			Write-Host "Current Protection Domain: $($currentPD.name)"
			
			# Get Protection Domain VMs
			$pdVMs = $currentPD.vms
			Write-Host "Protection domain contains VMs: " $(($pdVMs | select -expand vmName) -join ",")
			
			# Check to see if any previously restored VMs exist similar name
			$existingVMs = @()
			
			# Create regex for searching for VMs
			[regex]$vmRegex = '(?i)(' + (($($pdVMs.vmName) | foreach {[regex]::escape($_)}) -join "|") + ')'
			
			# Search for existing VMs
			$existingVMs = Get-NTNXVM | where {$_.vmName -match $vmRegex -And $_.vdiskFilePaths -match $pathPrefix}
			
			# If VMs exist at current path, exit
			if ($existingVMs.length -gt 0) {
				Write-Host "Found $($existingVMs.length) matching VMs at current path, exiting..."
				$success = $false
				return	
			}

			# Get PD snapshots
			$pdSnapshots = $currentPD | Get-NTNXProtectionDomainSnapshot -SortCriteria ascending
			
			if ($pdSnapshots.length -gt 0) {
				Write-Host "Protection domain has $($pdSnapshots.length) snapshots, selecting latest..."
			} else { # No snapshots
				Write-Host "Protection domain has 0 snapshots, exiting..."
				$success = $false
				break
			}

			# Try to restore PD using latest snapshot
			Write-Host "Restoring Protection domain $($currentPD.name)"
			Restore-NTNXEntity -Name $currentPD.name -PathPrefix $pathPrefix -Replace $replaceVMs -SnapshotId $pdSnapshots[0].snapshotId

			# Sleep for $sleepTime
			Write-Host "Sleeping for $sleepTime seconds to allow for PD restoration and VM registration..."
			sleep $sleepTime
			
			# Set retry counter
			[int]$retryInt = 1
			
			while ($retryInt -le $maxRetry) {
				$matchedVMs = Get-NTNXVM | where {$_.vmName -match $vmRegex -And $_.vdiskFilePaths -match $pathPrefix}
				
				Write-Host "Found VMs: $(($matchedVMs | select -expand vmName) -join ",")"
			
				# Try to get VM objects from PowerCLI for VMs in PD
				$vmObjects = Get-VM |? {$matchedVMs.vmName -eq $_.Name}
				
				Write-Host "Found the following VM objects in VC: $(($vmObjects | select -expand name) -join ",")"
				
				# If all VM objects are found
				if($vmObjects.length -ge $matchedVMs.length -and $vmObjects.length -ne 0) {
					Write-Host "$($vmObjects.length) VMs registered of $($matchedVMs.length) expected!"
					
					# Add VMs to aggregate array for cleanup later
					Write-Host "Adding $($vmObjects.length) objects to aggregate array..."
					$global:restoredVMs += $vmObjects
					
					# Break loop
					break
				} else {
					# No VM objects found
					Write-Host "Attempt $retryInt/$maxRetry, sleeping for $sleepTime seconds to allow for VM registration..."
					$retryInt += 1
					sleep $sleepTime
				}
			}
			
		}
		
  	}
  	end{
		# Finish
		if ($success) {
    		Write-Host "Restored the following PDs: $(($global:foundPDs | select -expand name) -join ",")"
  		} else {
			# Something went wrong
			Write-Host "Something went wrong, please check console errors and logs"
		}
	}
}

Advanced Nutanix – PowerShell Edition: Produce vDisk Report
Tue, 22 Jul 2014 | http://stevenpoitras.com/2014/07/advanced-nutanix-powershell-edition-produce-vdisk-report/

In this post I provide a sample script on how you can use the Nutanix Powershell CMDlets to produce a report for every VM and vDisk on the Nutanix platform.

The script will report the following for each VM/vDisk:

VM_Name, vDisk_Name, Container, Replication_Factor, Used_Capacity_in_GB, Container_Compression, Container_Fingerprinting, vDisk_Fingerprinting,vDisk_On_Disk_Dedup, Container_On_Disk_Dedup, PD_Name, CG_Name, Snapshots

Here’s how to use the script:

# Create array of clusters to pass to function
[array]$clusters = @(
	@("99.99.99.98","admin","blah"),
	@("99.99.99.99","admin","blah")
)

NOTE: you can also do this without hardcoding passwords by using the following:
[array]$clusters = @(
	@("99.99.99.98","admin",$(read-host "Please enter the prism user password:" -AsSecureString)),
	@("99.99.99.99","admin",$(read-host "Please enter the prism user password:" -AsSecureString))
)

# Run function
NTNX-vDisk-Report -Clusters $clusters -OutputCSVLocation "X:\TESTING\" -CSVPrefix "testReport"

Here’s an example function to produce a vDisk report:

############################################################
##
## Script: Produce Nutanix vDisk Report
## Author: Steven Poitras
## Description: Produce Nutanix vDisk Report
## Language: PowerShell
##
############################################################
function NTNX-vDisk-Report {
<#
.NAME
	NTNX-vDisk-Report
.SYNOPSIS
	This function generates a report per VM/vDisk
.DESCRIPTION
    This function generates a report per VM/vDisk
.NOTES
	Authors:  The Dude
.LINK
	www.nutanix.com
.EXAMPLE
  NTNX-vDisk-Report -Clusters $clusters -OutputCSVLocation "X:\TESTING\" -CSVPrefix "testReport"
#>
	Param(

		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[array]$Clusters,
		
		[Parameter(Mandatory=$True)]
		[string]$OutputCSVLocation,
		
		[Parameter(Mandatory=$False)]
		[string]$CSVPrefix
	)

	BEGIN {
		# Don't need to get CVM stats
		$cvmPostfix = "-CVM"
		
		# Create array to store results data
		$results = @()
		
		# Connect to each cluster
		$Clusters | %{
			$server = $_[0]
			$user = $_[1]
			$password = $_[2]
			
			Write-Host "Connecting to $server"
			Connect-NutanixCluster -Server $server -UserName $user -Password $(if ($password.GetType().Name -eq "SecureString") `
				{([Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($password)))} `
				else {$password}) -AcceptInvalidSSLCerts
	
		}
		
		# Get list of VMs excluding Nutanix CVMs and vDisk objects
		$vms = Get-NTNXVM | where {$_.vmName -notmatch $cvmPostfix}
		$vdisks = Get-NTNXVDisk
		
		Write-Host "Found $($vms.length) VMs and $($vdisks.length) vDisks"
		
		if ([string]::IsNullOrEmpty($CSVPrefix)) {
			$CSVPrefix = "Report"
		}
		
		# Output CSV
		$fOutputCSV = "$OutputCSVLocation$CSVPrefix-$(Get-Date -Format Hmm-M-d-yy).csv"
		
		# Get list of containers
		$containers = Get-NTNXContainer
	} 
	PROCESS {
		$vms | %{ # For each VM
			$l_vm = $_
			$vmName = $l_vm.vmName
			$vDiskNames = $l_vm.vdiskNames
			
			Write-Progress -Activity "Getting stats for" `
				-Status "VM: $vmName with vDisks: $vDiskNames" -PercentComplete  (($vms.IndexOf($l_vm) / $vms.Length) * 100)
			
			$vDiskNames | %{ # For each vDisk
				$l_vDiskName = $_
				$l_vDisk = $vdisks | where {$_.name -match $l_vDiskName}

				$l_container = $containers | where {$_.id -match $l_vDisk.containerId}
				
				[string]$vDiskName = $l_vDisk.nfsFileName
				[string]$container = $l_vDisk.containerName
				$replicationFactor = $l_container.replicationFactor
				[Double]$usedCap = [Math]::Round($l_vDisk.usedLogicalSizeBytes / 1GB,3)
				[string]$snapshots = $l_vDisk.snapshots
				[string]$pdName = $l_vm.protectionDomainName
				[string]$cgName = $l_vm.consistencyGroupName
				[string]$compression = $l_container.compressionEnabled
				[string]$fingerprint = $(if ($l_vDisk.fingerPrintOnWrite -match "none") {
					"Use container setting"} else {$l_vDisk.fingerPrintOnWrite})
				[string]$diskDedup = $(if ($l_vDisk.onDiskDedup -match "none"){
					"Use container setting"} else {$l_vDisk.onDiskDedup})
				[string]$conFingerprint = $l_container.fingerPrintOnWrite
				[string]$conDiskDedup = $l_container.onDiskDedup
				
				# Add object to results array (one row per vDisk)
				$results += New-Object PSCustomObject -Property @{
					VM_Name = $vmName
					vDisk_Name = $vDiskName
					Container = $container
					Replication_Factor = $replicationFactor
					Used_Capacity_in_GB = $usedCap
					Snapshots = $snapshots
					PD_Name = $pdName
					CG_Name = $cgName
					vDisk_Fingerprinting = $fingerprint
					vDisk_On_Disk_Dedup = $diskDedup
					Container_Compression = $compression
					Container_Fingerprinting = $conFingerprint
					Container_On_Disk_Dedup = $conDiskDedup
				}
			}

		}
	}
	END {
		# Append/write results to CSV
		Write-Host "Exporting results to CSV: $fOutputCSV"
		$results | Select-Object VM_Name, vDisk_Name, Container, Replication_Factor, `
				Used_Capacity_in_GB, Container_Compression, Container_Fingerprinting, `
				vDisk_Fingerprinting,vDisk_On_Disk_Dedup, Container_On_Disk_Dedup, `
				PD_Name, CG_Name, Snapshots |
				Export-Csv $fOutputCSV -NoTypeInformation -Append -Force
	}
}
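
To skim the results afterwards you can pull the CSV back in; a quick example (the path is whatever file the function wrote):

# Show the largest vDisks first
Import-Csv "X:\TESTING\testReport-1049-7-22-14.csv" | Sort-Object {[double]$_.Used_Capacity_in_GB} -Descending | Format-Table -AutoSize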

If you have any requests or questions feel free to reach out!

Advanced Nutanix: Using Nutanix PowerShell CMDlets to Manually Fingerprint vDisks
Wed, 09 Jul 2014 | http://stevenpoitras.com/2014/07/advanced-nutanix-nutanix-powershell-fingerprint-vdisks/

As explained in the Elastic Dedupe Engine section of the Nutanix Bible, Nutanix uses a fingerprint (SHA-1 hash) to find and remove duplicate data.

In this post I provide a sample script on how you can use the Nutanix Powershell CMDlets and remote SSH to manually fingerprint vDisks matching a specific search term.  This comes in handy in cases where you want to fingerprint / dedupe VMs which may have been provisioned prior to enabling dedupe on the container.

Here’s an example script (inputs will need to be modified to match your environment):

############################################################
##
## Script: Manually fingerprint vdisks
## Author: Steven Poitras
## Description: Manually fingerprint vdisks matching a 
##	specific search term
## Language: PowerShell
##
############################################################

# Import SSH module to connect to Nutanix via SSH
# source: http://www.powershelladmin.com/w/images/a/a5/SSH-SessionsPSv3.zip
Import-Module SSH-Sessions

# Data Inputs
$server = '99.99.99.99'
$user = "your prism user"
$password = read-host "Please enter the prism user password:" -AsSecureString
$searchString = "searchstring"
$end_offset_mb = "12288" #12 GB

# SSH inputs
$sshUser = 'nutanix'
$keyFile = 'path to your openssh_key'

# Connect to Nutanix Cluster
Connect-NutanixCluster -Server $server -UserName $user -Password ([Runtime.InteropServices.Marshal]::PtrToStringAuto([Runtime.InteropServices.Marshal]::SecureStringToBSTR($password))) -AcceptInvalidSSLCerts

# Get vdisks matching a particular VM search string
$vdisks = Get-VDisk | where {$_.nfsFileName -match $searchString}

# Extract the vdisk IDs from the vdisk name strings
$vdiskIDs = $vdisks | select name | %{ $_ -replace ".*:" -replace "}.*"}

# Find containers where vdisks reside
$containerIDs = $vdisks.containerId | select -uniq

# For each container make sure fingerprinting is enabled
$containerIDs | %{
	# If fingerprinting is disabled for the container turn it on
	if ($(Get-Container -Id $_).fingerPrintOnWrite -eq 'off') {
		Set-Container -Id $_ -FingerprintOnWrite 'ON'
	} else {
		# Fingerprinting already enabled
	}
}
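
# Optional sanity check (assumes the container objects expose the fields used above):
# confirm fingerprinting is now on for the affected containers
$containerIDs | %{ Get-Container -Id $_ | select id, fingerPrintOnWrite }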

# Connect to Nutanix cluster via SSH
New-SshSession -ComputerName $server -Username $sshUser -KeyFile $keyFile

# For each vdisk ID add fingerprints
$vdiskIDs | %{
	Invoke-SshCommand -InvokeOnAll -Command "source /etc/profile > /dev/null 2>&1; `
	vdisk_manipulator -vdisk_id=$_ --operation=add_fingerprints -end_offset_mb=$end_offset_mb"
}

This is the first of many more Nutanix Powershell examples to come!

Advanced Nutanix: Create SCVMM Cluster
Fri, 21 Mar 2014 | http://stevenpoitras.com/2014/03/advanced-nutanix-create-scvmm-cluster/

Having worked with Hyper-V a lot lately I frequently need to stand up and modify SCVMM clusters.  In this post I include some code snippets of how to create a SCVMM cluster and register Nutanix SMB shares to the cluster.

# Data inputs
$clusterName = 'TM5CLU01'
$clusterNodes = 'NTNX-130', 'NTNX-131', 'NTNX-132', 'NTNX-133' # DNS or IPs of hosts
$clusterIPs = '99.99.99.99' # IP used for failover cluster
$nutanixShare = '\\nutanix-130\NTNX-NFS-DEFAULT' # SMB share point

# Create failover cluster
New-Cluster -Name $clusterName -Node $clusterNodes -NoStorage -IgnoreNetwork 192.168.5.0/24 -StaticAddress $clusterIPs

# Add cluster to SCVMM
$cred = Get-SCRunAsAccount -Name "Superstevepo"
Add-SCVMHostCluster -Name $clusterName -Reassociate $True -EnableLiveMigration $True -Credential $cred

# Add Nutanix SMB share to cluster
$cluster = Get-SCVMHostCluster -Name $clusterName
Register-SCStorageFileShare -VMHostCluster $cluster -FileSharePath $nutanixShare

Here are a few other snippets to add and remove nodes from an existing cluster:

# Add new nodes to existing cluster
$clusterName = 'TM5CLU01'
$nodesToAdd = 'NTNX-130','NTNX-131'
Add-ClusterNode -Name $nodesToAdd -Cluster $clusterName -NoStorage

# Removes node(s) from existing cluster
$clusterName = 'TM5CLU01'
$nodesToRemove = 'NTNX-130','NTNX-131'
Remove-ClusterNode -Name $nodesToRemove -Cluster $clusterName
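
To confirm membership after adding or removing nodes, the FailoverClusters module can list the current node state:

# Verify current cluster membership and node state
Get-ClusterNode -Cluster $clusterName | Format-Table Name, State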

Happy Hyper-V’ing :)

Configure SQL Server AlwaysOn Availability Group
Thu, 13 Feb 2014 | http://stevenpoitras.com/2014/02/configure-sql-db-availability-group/

In the following post I’ll go over how to configure a Microsoft SQL Server AlwaysOn Availability Group.  NOTE: Before configuring, make sure the Failover Cluster is correctly installed and the SQL servers are identically configured.  Also, make sure the computer accounts exist on each SQL server and have been granted CONNECT access on the endpoint (I show images of this below).  To learn more about SQL Server on Nutanix, check out the Microsoft SQL Server on Nutanix Best Practices!

This will be a three part series covering the following:

  • Failover Cluster Configuration
  • SQL Server AlwaysOn Availability Group Configuration – You’re here!
  • Adding Databases to an AlwaysOn Availability Group

Let’s get started!

Configure DRS anti-affinity rules to ensure the VMs aren’t on the same host

021314_2201_ConfigureSQ1.png

For each SQL Server go to the SQL Server Configuration Manager, right-click on the running SQL Server instance and select ‘Properties’

021314_2201_ConfigureSQ2.png

Navigate to the ‘AlwaysOn High Availability’ tab and check ‘Enable AlwaysOn Availability Groups’, then click ‘Ok’.  You will need to restart the SQL Server service and perform this on all SQL servers in the AlwaysOn Availability Group

021314_2201_ConfigureSQ3.png

Open SQL Server Management Studio, expand ‘AlwaysOn High Availability’ and select ‘New Availability Group Wizard…’  Here is a good reference: http://technet.microsoft.com/en-us/library/hh403415.aspx

021314_2201_ConfigureSQ4.png

Type in the name for the AlwaysOn Availability Group and click ‘Next’

021314_2201_ConfigureSQ5.png

Select the Databases which you’d like to protect and click ‘Next’. NOTE: The recovery model must be full and a full backup must have been performed previously.

021314_2201_ConfigureSQ6.png

Click on ‘Add Replica…’

021314_2201_ConfigureSQ7.png

Enter the connection details for the other SQL Server and click ‘Connect’

021314_2201_ConfigureSQ8.png

The secondary SQL server will now appear.  Here is a good reference: http://technet.microsoft.com/en-us/library/hh213088.aspx

021314_2201_ConfigureSQ9.png

Check the ‘Automatic Failover…’ and ‘Synchronous Commit’ check boxes

021314_2201_ConfigureSQ10.png

On the Endpoints tab verify the SQL endpoints are correct

021314_2201_ConfigureSQ11.png

On the Backup Preferences tab make sure ‘Prefer Secondary’ is selected

021314_2201_ConfigureSQ12.png

On the Listener tab select ‘Create an availability group listener’ and enter the DNS name, ports (1433 used here) and select ‘Static IP’ for Network Mode.  Here is a good reference: http://technet.microsoft.com/en-us/library/hh213080.aspx

021314_2201_ConfigureSQ13.png

Add the static IP and click ‘Ok’

021314_2201_ConfigureSQ14.png

Click ‘Next’

021314_2201_ConfigureSQ15.png

Make sure the initial data synchronization is set to ‘Full’ and enter a share where the data can be staged. In this example I used a Microsoft DFS share. Click ‘Next’

021314_2201_ConfigureSQ16.png

Ensure the validation tests are successful and click ‘Next’

021314_2201_ConfigureSQ17.png

Click ‘Finish’

021314_2201_ConfigureSQ18.png

In my case I had an error joining the availability group on my secondary server

021314_2201_ConfigureSQ19.png

After looking at the logs it was clear that there were logon failures occurring on both the primary and secondary servers.  The logon error will look similar to “Database Mirroring login attempt by user ‘<DOMAIN>\<USER>.’ failed with error: ‘Connection handshake failed. The login ‘<DOMAIN>\<USER>’ does not have CONNECT permission on the endpoint. State 84.”

021314_2201_ConfigureSQ20.png

After validating that the accounts used by the availability group existed on the servers, I ran the following SQL command to grant the account CONNECT access to the endpoint. NOTE: This will need to be performed on both SQL servers for the connecting server’s account. The SQL syntax is similar to the following:  GRANT CONNECT ON ENDPOINT::Hadr_endpoint TO [SPLAB\svc_sqlserver];  An example is shown below

021314_2201_ConfigureSQ21.png
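
If you’d rather script this than run it in Management Studio, Invoke-Sqlcmd (from the SQL Server PowerShell tools) can push the same grant to each replica; the instance and account names here are placeholders:

# Grant the partner's service account CONNECT on the mirroring endpoint (run for each replica)
Invoke-Sqlcmd -ServerInstance "SQLNODE1" -Query "GRANT CONNECT ON ENDPOINT::Hadr_endpoint TO [SPLAB\svc_sqlserver];"
Invoke-Sqlcmd -ServerInstance "SQLNODE2" -Query "GRANT CONNECT ON ENDPOINT::Hadr_endpoint TO [SPLAB\svc_sqlserver];"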

After resolving this, the logs showed the connection for the availability group was successful

021314_2201_ConfigureSQ22.png

The Availability Group Dashboard now shows the replica and database status

021314_2201_ConfigureSQ23.png

Voilà!

Microsoft Failover Cluster Configuration with Nutanix
Thu, 13 Feb 2014 | http://stevenpoitras.com/2014/02/microsoft-failover-cluster-configuration-nutanix/

In the following post I’ll go over the Microsoft Failover Cluster configuration for VMs running on Nutanix leveraging a file share based quorum (Nutanix SMB share).  NOTE: Before configuring, make sure the Failover Clustering feature is installed on all servers that will be clustered.  To learn more about SQL Server on Nutanix, check out the Microsoft SQL Server on Nutanix Best Practices!

This will be a three part series covering the following:

  • Failover Cluster Configuration – You’re here!
  • SQL Server AlwaysOn Availability Group Configuration
  • Adding Databases to an AlwaysOn Availability Group

Let’s get started!

Open ‘Failover Cluster Manager’ and click on ‘Create Cluster…’

021314_1731_FailoverClu1.png

Click ‘Next’

021314_1731_FailoverClu2.png

Browse for the servers which will be added to the cluster, click ‘Ok’

021314_1731_FailoverClu3.png

The servers for the cluster will be shown below, click ‘Next’

021314_1731_FailoverClu4.png

Select ‘Yes…’ to run the cluster validation, then click ‘Next’

021314_1731_FailoverClu5.png

Click ‘Next’

021314_1731_FailoverClu6.png

Select ‘Run all tests…’ and click ‘Next’

021314_1731_FailoverClu7.png

Click ‘Next’

021314_1731_FailoverClu8.png

If some tests fail check the failed tests by clicking on ‘View Report’.  In my example these servers are both mounting NDFS locally using the private IP (192.168.5.2) which gave a duplicate IP error message.  This can be ignored, and the cluster IP should be used if using Nutanix, or the IP of an external DFS server (preferred).

Click ‘Finish’

021314_1731_FailoverClu9.png

Type in a Cluster Name and click ‘Next’.  NOTE: the Cluster Name should be no more than 15 characters for NetBIOS

021314_1731_FailoverClu10.png

Un-check ‘Add eligible storage to the cluster’ and click ‘Next’.  Since we’re using a file share based quorum we don’t need to add any storage to the cluster.  If we were using iSCSI we could add the iSCSI LUN to the cluster for the quorum

021314_1731_FailoverClu11.png

Click ‘Finish’

021314_1731_FailoverClu12.png

Right click on the Cluster and navigate to ‘Configure Cluster Quorum Settings…’

021314_1731_FailoverClu13.png

Click ‘Next’

021314_1731_FailoverClu14.png

Select the ‘Add or change the quorum witness’ radio button and click ‘Next’

021314_1731_FailoverClu15.png

Select ‘Configure a file share witness…’ and click ‘Next’

021314_1731_FailoverClu16.png

Enter a File Share path; in this case I used an SMB-accessed NDFS container where I created a Quorum folder.  It is recommended to create a new NIC and attach it to each SQL server to allow it to communicate with the Nutanix cluster IP or the IP of the DFS server.  This can be validated by navigating to the share using Windows Explorer.

Click ‘Next’

021314_1731_FailoverClu17.png

Here’s an example with an actual path

failovercluster_local

Click ‘Next’

021314_1731_FailoverClu19.png

Click ‘Finish’

021314_1731_FailoverClu20.png

You can now see the file share quorum has been added and the status should be ‘Online’

021314_1731_FailoverClu21.png
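
For reference, the same witness configuration can be scripted with the FailoverClusters module (the cluster name and share path here are placeholders):

# Point the cluster quorum at a file share witness on the NDFS container
Set-ClusterQuorum -Cluster "MYCLUSTER" -NodeAndFileShareMajority "\\nutanix\ctr1\Quorum"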

Click on the cluster and select ‘Properties’.  Configure a static IP address for the cluster and click ‘Ok’

021314_1731_FailoverClu22.png

Click ‘Yes’

021314_1731_FailoverClu23.png

You will then be prompted that the IP is online, click ‘Ok’

021314_1731_FailoverClu24.png

Voilà, you now have a failover cluster prepped and ready for SQL AlwaysOn or another clustered service.  I’ll be posting shortly on how to configure a SQL Server AlwaysOn cluster on Nutanix!

Now let’s configure a SQL AlwaysOn Availability Group and protect the databases, HERE!

Advanced Nutanix: SQL Server on Nutanix Best Practices Released!
Sat, 09 Nov 2013 | http://stevenpoitras.com/2013/11/advanced-nutanix-sql-server-nutanix-best-practices-released/

Recently I’ve been in the trenches with Microsoft SQL Server on the Nutanix platform and am pleased to finally announce the public release of the Microsoft SQL Server on Nutanix Best Practices!

In this post I’ll talk about some of the key best practices to optimize MSSQL performance on Nutanix and cover some of the performance results.  The post below is a small excerpt from the document, so be sure to check it out!

Here’s the link to the SQL Server on Nutanix Best Practices document: LINK

Solution Overview

Below we show a high-level overview of the Microsoft SQL Server on Nutanix solution.  A key thing to highlight here is the ability to handle both MSSQL workloads and other workloads including VDI, app/data servers, and other Microsoft services like SharePoint and Exchange.

SQL-overview

Best Practices Checklist

In reality it’s simple: to get optimal performance of MSSQL on Nutanix you don’t need to change anything on Nutanix.  Just keep a minimum of 4-6 disks on each SQL server and all of the ILM, tiering, caching, etc. is handled by NDFS.  Like one of my earlier posts on DFS, just keep it simple :)

Now, there are a lot of configurable parameters for MSSQL and items that need to be tuned within the application.

Here’s a high-level checklist of the SQL Server tunings:

General

  • Perform a current state analysis to identify workloads and sizing
  • Spend time up front to architect a solution that meets both current and future needs
  • Design to deliver consistent performance, reliability, and scale
  • Don’t undersize, don’t oversize, right size
  • Start with a PoC, test, optimize, iterate, scale

Core Components

MSSQL

Performance and Scalability

  • Utilize multiple drives for TempDB Log/Data and Database Log/Data
    • Start with a minimum of 2 for small environments or 4 for larger environments
    • Look for PAGEIOLATCH_XX contention and scale number of drives as necessary
  • Utilize a 64KB NTFS allocation unit size for MSSQL drives
  • Enable locked pages in memory for the MSSQL Server service account (NOTE: if this setting is used the VM’s memory must be locked; only applies with memory > 8GB)
  • TempDB Data Files
    • Set TempDB size between 1 and 10% of instance database sizes
    • If number of cores < 8
      • # of cores = # of data files
    • If number of cores > 8
      • Use 8 data files to begin with
      • Look for contention for in-memory allocation (PAGELATCH_XX) and scale 4 files at a time until contention is eliminated
  • Database Data files
    • Size appropriately and enable AUTOGROW respective to database growth
    • Do not AUTOSHRINK data and log files
    • At a maximum keep below 80% of disk capacity utilization
    • Use multiple data files and drives
      • Look for contention for in-memory allocation (PAGELATCH_XX), if contention increase number of files
      • Look for I/O subsystem contention (PAGEIOLATCH_XX), if contention, spread the data files across multiple drives
  • Trace flags (a quick verification snippet follows this list)
    • Implement trace flag 1118 at startup to remove single page allocations
    • Implement trace flag 834 to enable large pages (for tier-1 performance)
  • Utilize the MSSQL Server Best Practices Analyzer (BPA) to identify potential issues
  • Utilize fast file initialization
  • Scale number of MSSQL VMs vs. number of MSSQL instances per VM
  • More memory = higher performance, if seeing memory pressures, increase VM memory
  • Utilize a dedicated disk for Microsoft Page File
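
As referenced in the trace flags item above, a quick way to verify flags after setting them at startup is DBCC TRACESTATUS; a sketch via Invoke-Sqlcmd (the instance name is a placeholder):

# List globally enabled trace flags; 1118 (and 834 if set) should appear in the output
Invoke-Sqlcmd -ServerInstance "SQLNODE1" -Query "DBCC TRACESTATUS(-1);"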

Availability

  • In most cases vSphere HA will provide an adequate level of availability and uptime for non-mission critical/tier-1 applications
  • For mission critical/tier-1 applications:
    • MSSQL 2012: utilize AlwaysOn availability groups (preferred)
    • MSSQL 2008 and prior: utilize log shipping or clustered MSSQL using MSCS clusters
  • Take consistent database snapshots/backups, frequency should be derived from required RPOs
  • Leverage native or 3rd party tools to manage backups (eg. Microsoft System Center Data Protection Manager (DPM), etc.)

Manageability

  • Standardize, monitor and maintain
  • Leverage a MSSQL application monitoring solution (eg. System Center, etc.)
  • Create standardized MSSQL VM Templates
  • Utilize consistent disk quantities and layout schemes for MSSQL VMs
  • Join the MSSQL Server to the domain and use Active Directory for authentication
  • Leverage Contained Database Authentication (MSSQL 2012)
  • Use named instances for MSSQL database instances, even when only planning a single instance per VM
  • For named instances,  ensure application compatibility with dynamic ports, otherwise set instance to use a fixed port

VMware vSphere

  • Follow VMware performance best practices
  • Avoid CPU core oversubscription (for tier-1 workloads)
  • For small MSSQL VMs keep vCPUs <= the number of cores per each physical NUMA node
  • For wide MSSQL VMs size vCPUs to align with physical NUMA boundaries and leverage vNUMA
  • Keep vCPU numbers easily divisible by NUMA node sizes for easy scheduling
  • Leave Hyperthreading sharing at the default policy (Any)
  • Enable ‘High Performance’ host power policy
  • Lock MSSQL VM memory (for tier-1 workloads)
  • Size MSSQL VM memory using the following calculation (see the worked example after this list):
    • VM Memory = SQL Server Max Memory + ThreadStack + OS Memory + VM Overhead
    • ThreadStack = SQL Max Worker Threads * 2MB (for x64)
  • Use paravirtual SCSI controllers and VMXNET3 NICs
  • Use resource pools with correct share allocation
  • Use DRS anti-affinity rules to keep MSSQL VMs apart
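
As a quick worked example of the memory sizing calculation above (the numbers are illustrative, not a recommendation):

# 64 GB SQL max memory, 512 worker threads on x64, 4 GB for the OS, ~1 GB VM overhead
$sqlMaxMemGB   = 64
$threadStackGB = (512 * 2MB) / 1GB   # 512 threads * 2 MB each = 1 GB
$osMemGB       = 4
$overheadGB    = 1
$vmMemoryGB    = $sqlMaxMemGB + $threadStackGB + $osMemGB + $overheadGB   # = 70 GB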

Nutanix

  • Use a single container
  • Utilize appropriate model based upon compute and storage requirements
    • Ideally keep working set in SSD and capacity within node
    • Choose a model which can ideally fit the full database on a single node.  NOTE: for larger databases which cannot fit on a node, ensure there is ample bandwidth between nodes
    • Utilize higher memory node models for I/O heavy MSSQL workloads
  • Create a dedicated consistency group with the MSSQL VMs and applications
  • Leverage ‘Application Consistent Snapshots’ on the consistency group to invoke VSS when snapshotting

Supporting Components

Network

  • Utilize and optimize QoS for NDFS and database traffic
  • Use low-latency 10GbE switches
  • Utilize redundant 10GbE uplinks from each Nutanix node
  • Ensure adequate throughput between Nutanix nodes and MSSQL VMs
  • Check for any pause frames which could impact replication and VM communication

Active Directory

  • Utilize AD based authentication for MSSQL servers

OS and Application Updates

  • Schedule updates to be applied outside business hours to avoid performance impacts
  • Stagger updates in phases

Performance Testing

For performance testing we utilized a mix of SQLIO, SQLIOSim and HammerDB to simulate workloads and test NDFS I/O performance for MSSQL.  To learn more on how the testing was performed or to automate, check out my earlier post on Automating SQLIO Benchmarking with Powershell or check out the best practices doc.  The script is pretty awesome, I used it to automate over 20,000 SQLIO test runs, made my life easy :)

For the SQLIO testing a single VM was leveraged to find a node’s performance.  These numbers give a single node’s performance which can then be extrapolated by the number of nodes (NOTE: minimum cluster size is 3).

The figure below shows the SQLIO IOPS and throughput performance based upon the block size.  Results showed the single SQLIO VM was able to facilitate ~35,000 random IOPS with an 8 KB block size and ~16,000 IOPS with a 64 KB block size.  Sequential I/O peaked with a 512 KB block size at ~1,200 MBps (1.2 GBps) and was ~950 MBps (0.95 GBps) with a 64 KB block size.

sqlio_agg_iops2

The figure below shows the SQLIO operation latency for read and write workloads based upon the block size.  Results showed read latency stayed consistently under 1 ms (in the microseconds for 8 KB and 64 KB block sizes) and at ~1 ms for 512 KB.  Write latency was consistent at 1 ms for 8 and 64 KB block sizes and ~5 ms for 512 KB.

sqlio_lat_blocksize


The Controller VM (CVM) hosting the VM running the SQLIO workload (CVM-B – highlighted with * below) peaked at ~91% CPU utilization during the testing.  All other CVM CPU utilizations stayed consistently at ~15-20%, which is expected during idle operation.  Aggregate cluster CPU utilization peaked at 22%.

This highlights the ability to incrementally scale out the number of SQL servers and maintain performance as the number of databases/instances scale, as well as the ability to eliminate any noisy-neighbor scenarios.

sqlio_cvm_cpu


To learn more check out the best practices document which has a lot more details and test information! :)

Just DFS it!
Wed, 06 Nov 2013 | http://stevenpoitras.com/2013/11/just-dfs/

I see a lot of questions from customers and others around how to handle user data and replication – in this post I’ll go over some considerations around user and data management and why DFS can be an ideal choice.

First off, what are my options?

Most of the time, customers have the following considerations:

  • Local File Server
  • Array Based (CIFS/SMB)
  • Embedded VDI (View Persona, Personal vDisk)
  • DFS

Each of the above has its own applicable scenarios, but each comes with caveats.  Local file servers are site specific and have limited HA capabilities in the case of a site outage.  Array based solutions are high performing (…initially) but inflexible and expensive, and embedded VDI can be a configuration nightmare and forces vendor lock-in.  So what to do?

Follow the KISS principle!

I’ve gone through a ton of complex datacenter and system designs, and one very fundamental principle of a successful deployment and operation is not making things more complex than they need to be.

When it comes down to it there are very simple things needed by any file/user data storage platform:

  • Manageability – I have to be able to build AND manage it
  • Availability – Data needs to be available
  • Performance – I don’t want to spend a lot of time waiting
  • Scalability – I need to be able to increase storage capacities

Guess what, we can have all of these with DFS …and Nutanix :)

Windows DFS – A Primer

We’re all aware of Windows file servers and have likely been using them for ages.  DFS advances these capabilities and provides IT with something that can be deployed and managed for the enterprise.  Deployment of DFS is done by installing the File Services role with the DFS features.  You can also learn more from Microsoft here: http://technet.microsoft.com/en-us/library/cc732006.aspx
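
On Windows Server 2012 and later the role install itself is a one-liner; a minimal sketch using the ServerManager module:

# Install DFS Namespaces, DFS Replication and the management tools
Install-WindowsFeature FS-DFS-Namespace, FS-DFS-Replication -IncludeManagementTools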

DFS can be broken down into two main components (we’ll go into each further below):

  • DFSN – The DFS Namespace Component
  • DFSR – The DFS Replication Component

DFSN

DFSN creates a domain based namespace (\\mydomain.com\DATA) through which clients can access shares.  Each namespace is composed of one or many DFS file servers, each hosting storage and data.  A key piece of DFSN is that the namespace might consist of multiple file servers dispersed across multiple geographical locations, yet when accessed by clients it appears as a single share.

When accessing a namespace share the client’s DFS file server is chosen based upon targets which can be based upon cost, local site or randomly.  If a client were to move from one site to another and the referral cache duration (time to cache referrals for namespace) has expired the client’s DFS target would then be a different (preferably local) DFS file server.

Below we show an example of how DFSN looks to a client:

DFS-highlevel


DFSR

DFSR is the replication capability of DFS and allows admins to create replication topologies and schedules.  This allows both for high availability in the case of a DFS file server, or site, failure and for the ability to perform I/Os on a local DFS file server rather than going over the WAN.  This can be used to replicate user data from the branch to a failover site or datacenter as well as for bits/software distribution between sites.

DFSR has a few main topologies:

  • Hub and Spoke
  • Full Mesh
  • Custom

The hub and spoke topology is similar to your normal branch office (spoke) and datacenter (hub) replication topology.  This topology is great for branch office user data failover where the user data and virtual desktop might be hosted at a branch office, and in the case of a site outage be failed over to a datacenter.  This topology is also normally used for data collection for central backups of multiple sites.  With this model remote spoke sites would replicate data to the hub site where the data would then be centrally backed up.

Below we show an example of what this looks like:

DFS-hubspoke
 

The full mesh model replicates between all participating peers and is best suited for smaller replication groups.  This can also be used for peer replication between a few sites where they will protect each other or to replicate data between a few sites.

Below we show an example of what this looks like:

DFS-mesh


Custom replication topologies can be configured for cases where full mesh or hub and spoke models don’t fit.

In conclusion…

Is DFS the answer for every user data issue or data replication decision?  No.  However, it can be a good starting point and a solution to consider for multiple scenarios.  I’ve used and deployed it in instances ranging from VDI user home folder redirection to ISO and software replication between sites and vDisk replication for Citrix PVS.

Remember, follow the KISS principle

The post Just DFS it! appeared first on StevenPoitras.com.

]]>
http://stevenpoitras.com/2013/11/just-dfs/feed/ 0
Automating SQLIO Benchmarking with Powershell
Thu, 26 Sep 2013 | http://stevenpoitras.com/2013/09/automating-sqlio-benchmarking-powershell/

Recently I’ve been doing a great deal of SQLIO benchmarking on Nutanix for an upcoming SQL on Nutanix Best Practices document (hush hush :P).  In this post I’ll go over how I automated these SQLIO tests using Powershell to drive the testing and automatically output to a consumable CSV.

The function takes parameters specified by the user, automates the test runs and pipes the output to a CSV specified by the user.  I just used this to automate over 2,000 SQLIO runs on Nutanix (still running as we speak!).

Also, here’s an awesome post by Jose Barreto which motivated me to automate and served as a foundation for running SQLIO via Powershell: LINK.  The article also has a plethora of information on how to utilize SQLIO to identify performance bottlenecks and breaking points.

Here’s an example of calling the function (below).  NOTE: you can also utilize loops in Powershell (this is what I do using a hash table of various test params) to drive multiple iterations.

  • NTNX-Run-SQLIO -Iterations 10 -Duration 30 -RorW R -RandOrSeq random -BlockSize 512 -Threads $_ -OutstandingOps 1 -TargetFile F:\test.dat -TargetType file -OutputCSV X:\results.csv

The script

Now for the good stuff, here’s the Powershell script (formatting and tabbing stripped; I removed the code plugin because it was causing long load times):

function NTNX-Run-SQLIO {
<#
.NAME
	NTNX-Run-SQLIO
.SYNOPSIS
	This function does automated SQLIO runs
.DESCRIPTION
    This function does automated SQLIO runs
.NOTES
	Authors:  VMwareDude
.LINK
	www.nutanix.com
.PARAMETER Iterations
  This parameter specifies the number of iterations to perform the test
.PARAMETER Duration
  This parameter specifies the duration of each SQLIO run
.PARAMETER RorW
  This parameter specifies if the test is read (R) or write (W).  Applicable values are R or W.
.PARAMETER RandOrSeq
  This parameter specifies if the test is random or sequential.  Applicable values are random or sequential.
.PARAMETER BlockSize
  This parameter specifies the blocksize of the IOs.  Applicable values are 8,64 or 512.
.PARAMETER Threads
  This parameter specifies the number of threads to use
.PARAMETER OutstandingOps
  This parameter specifies the number of outstanding ops to use
.PARAMETER TargetFile
  This parameter specifies the target file or param file to use (eg. test.dat)
.PARAMETER TargetType
  This parameter specifies the type of target param or file.  Applicable values are param or file.
.PARAMETER OutputCSV
  This parameter specifies the CSV file to output (eg. X:\results.csv)
.EXAMPLE
  NTNX-Run-SQLIO -Iterations 10 -Duration 30 -RorW R -RandOrSeq random -BlockSize 512 -Threads $_ -OutstandingOps 1 -TargetFile F:\test.dat -TargetType file -OutputCSV $gOutputCSV
#>
	Param(
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[int]$Iterations,

        [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[int]$Duration,
	  
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[ValidateSet("R","W")]
		[string]$RorW,
	  	
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[ValidateSet("random","sequential")]
		[string]$RandOrSeq,
				
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[ValidateSet("8","64","512")]
		[string]$BlockSize,
		
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[string]$Threads,
		
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[string]$OutstandingOps,
		
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[string]$TargetFile,
		
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[ValidateSet("param","file")]
		[string]$TargetType,

		[Parameter(Mandatory=$False,ValueFromPipeline=$True)]
		[string]$TargetNum,
		
		[Parameter(Mandatory=$True,ValueFromPipeline=$True)]
		[string]$OutputCSV
	)

  BEGIN {
		# Create array to store results data
		$fResults = @()
		
		# Perform string formatting for params
		$fSecs = "-s$Duration"
		$fRorW = "-k$RorW"
		$fRandOrSeq = "-f$RandOrSeq"
		$fBlockSize = "-b$BlockSize"
		$fThreads = "-t$Threads"
		$fOps = "-o$OutstandingOps"
		$fTarget = if ($TargetType -eq "param") {"-F$TargetFile"} else {$TargetFile}
		$resultsCSV = $OutputCSV
		
	} PROCESS {
        1..$Iterations | %{

		    Write-Host "Running SQLIO with the following params: " $fSecs $fRorW $fRandOrSeq $fBlockSize $fThreads $fOps $fTarget
		
		    $r = C:\SQLIO\SQLIO.EXE $fSecs $fRorW $fRandOrSeq $fBlockSize $fThreads $fOps -LS -BN $fTarget

		
		    # Parse command output.  NOTE: The output varies when using a param file with multiple files.
		    if ($TargetType -eq "param") {
	            # Parse multi file output
                if ($TargetNum -eq 2) {
                    # Formatting for 2 disk
	                $iops = $r.Split("`n")[14].Split(":")[1].Trim() 
	                $mbps = $r.Split("`n")[15].Split(":")[1].Trim() 
	                $lat = $r.Split("`n")[18].Split(":")[1].Trim() 
                } elseif ($TargetNum -eq 4) {
                    # Formatting for 4 disk
	                $iops = $r.Split("`n")[18].Split(":")[1].Trim() 
	                $mbps = $r.Split("`n")[19].Split(":")[1].Trim() 
	                $lat = $r.Split("`n")[22].Split(":")[1].Trim() 
                } elseif ($TargetNum -eq 6) {
                    # Formatting for 6 disk
	                $iops = $r.Split("`n")[22].Split(":")[1].Trim() 
	                $mbps = $r.Split("`n")[23].Split(":")[1].Trim() 
	                $lat = $r.Split("`n")[26].Split(":")[1].Trim() 
                }
		    } else {
	            # Parse file output
	            $iops = $r.Split("`n")[10].Split(":")[1].Trim() 
	            $mbps = $r.Split("`n")[11].Split(":")[1].Trim() 
	            $lat = $r.Split("`n")[14].Split(":")[1].Trim() 
		    }
		
		    $fResults += New-Object PSObject -Property @{
	            File = $TargetFile
                NumDisks = $TargetNum
	            ReadWrite = $RorW
	            RanSeq = $RandOrSeq
	            BlockSize = $BlockSize
	            NumThreads = $Threads
	            OutstandingOps = $OutstandingOps
	            IOPS = $iops
	            Throughput = $mbps
	            Latency = $lat
	            Timestamp = $(Get-Date -Format "yyyy-MM-dd HH:mm:ss")
		    }

            #Print results to display
            write-host "Target: $fTarget IOPS: $iops Throughput: $mbps Latency: $lat"
        }
		
	} END {
		# Append/write results to CSV
		$fResults | Export-Csv $OutputCSV -NoTypeInformation -Append -Force
	}
}


Driving script with params

Here’s an example of how to automate this for all possible test variations.  This will take an array of files, perform every test and combination on them and output results to CSV for consumption.

$gOutputCSV = "X:\TESTING\results2.csv"
$gIterations = 5
$gDuration = 30
$gVarIterations = 30
$gFiles = ("H:\test.dat","file"),("L:\test.dat","file"),("P:\test.dat","file"),("T:\test.dat","file")
$gBlockSizes = "8","64","512"
$gRorW = "R","W"
$gRandOrSeq = "random","sequential"

# For each file
$gFiles | %{
    # Set file vars
    $tFile = $_[0]
    $tFileType = $_[1]

    # For each block size
    $gBlockSizes | %{
        # Set block size
        $tBlockSize = $_

        # For both R and W
        $gRorW | %{
            $tRorW = $_

            # For random and sequential
            $gRandOrSeq | %{
                $tRandOrSeq = $_

                # Perform test iterations
                1..$gVarIterations | %{

                    $tVarInt = $_

                    #Run var ops test
                    NTNX-Run-SQLIO -Iterations $gIterations -Duration $gDuration -RorW $tRorW -RandOrSeq $tRandOrSeq -BlockSize $tBlockSize -Threads 1 -OutstandingOps $tVarInt -TargetFile $tFile -TargetType $tFileType -OutputCSV $gOutputCSV

                    #Run var thread test
                    NTNX-Run-SQLIO -Iterations $gIterations -Duration $gDuration -RorW $tRorW -RandOrSeq $tRandOrSeq -BlockSize $tBlockSize -Threads $tVarInt -OutstandingOps 1 -TargetFile $tFile -TargetType $tFileType -OutputCSV $gOutputCSV

                    #Run var thread and ops test
                    NTNX-Run-SQLIO -Iterations $gIterations -Duration $gDuration -RorW $tRorW -RandOrSeq $tRandOrSeq -BlockSize $tBlockSize -Threads $tVarInt -OutstandingOps $tVarInt -TargetFile $tFile -TargetType $tFileType -OutputCSV $gOutputCSV
                }
            }
        }
    }
}

Advanced Nutanix: Splunk Forwarder Deployment
Thu, 19 Sep 2013 | http://stevenpoitras.com/2013/09/advanced-nutanix-splunk-universal-forwarder-deployment/

Recently I’ve become a huge fan of Splunk and its capabilities and have put together the Splunk on Nutanix Reference Architecture.  In this post I’ll cover how to automatically deploy the Splunk Universal Forwarder on the Nutanix Controller VM to allow Splunk to index and search Nutanix logs.

By deploying Splunk universal forwarders on my Nutanix CVMs I can forward my Nutanix logs over to Splunk and allow myself to do some interesting log and data analysis.  You can also correlate these to release versions, outages and performance runs and get some very interesting data.

Here we show an example of searching Nutanix log data in Splunk:

Splunk_SearchStargate

Here are the steps to deploy the universal forwarder on the CVM (NOTE: You can put all of this together into a single script; however, I’ve chosen to keep the steps separate to show the various stages of deployment and to keep things clear):

  1. Download Splunk universal forwarder rpm
  2. SCP forwarder rpm to ~/tmp on a Nutanix CVM where you’ll be running the commands
  3. Copy bits to other CVMs
    • for i in `svmips`;do scp ~/tmp/<Rpm Name> $i:/home/nutanix/tmp/;done
    • Example: for i in `svmips`;do scp ~/tmp/splunkforwarder-6.0-178852-linux-2.6-x86_64.rpm $i:/home/nutanix/tmp/;done
  4. Install package (NOTE: This will install to /opt/splunkforwarder)
    • for i in `svmips`;do ssh $i sudo rpm -i ~/tmp/<Rpm Name>;done
    • Example: for i in `svmips`;do ssh $i sudo rpm -i ~/tmp/splunkforwarder-6.0-178852-linux-2.6-x86_64.rpm;done
  5. Start Splunk forwarder
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk start --answer-yes --no-prompt --accept-license;done
  6. Enable forwarder to start on boot
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk enable boot-start;done
  7. Add the forward server
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add forward-server <Forward Server>:9997 -auth admin:<Local Password>;done
    • Example: for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add forward-server 10.2.133.196:9997 -auth admin:changeme;done
  8. Add monitors
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/<Log to Monitor> -sourcetype <Log Type>;done
    • Example: for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/stargate.FATAL -sourcetype stargate.FATAL;done

Here are some interesting log files to monitor and feed into Splunk (includes the commands to add the monitor):

  • stargate.FATAL
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/stargate.FATAL -sourcetype stargate.FATAL;done
  • stargate.ERROR
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/stargate.ERROR -sourcetype stargate.ERROR;done
  • stargate.WARNING
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/stargate.WARNING -sourcetype stargate.WARNING;done
  • stargate.INFO
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/stargate.INFO -sourcetype stargate.INFO;done
  • genesis.out
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/genesis.out -sourcetype genesis.out;done
  • zookeeper.out
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/zookeeper.out -sourcetype zookeeper.out;done
  • cassandra system log
    • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk add monitor /home/nutanix/data/logs/cassandra/system.log -sourcetype cassandra_system;done
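
To confirm the forwarders are wired up to the indexer, you can list each CVM’s forward-server configuration with the same loop idiom:

  • for i in `svmips`;do ssh $i sudo /opt/splunkforwarder/bin/splunk list forward-server;done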

Happy Splunking! :)
