harbar.net component based software & platform hygiene

Enabling multiple OUs and avoiding credential touch up with the MIMSync “toolset” for SharePoint Server 2016

posted @ Thursday, August 25, 2016 7:56 AM | Feedback (0)

As many of you are aware there is a “toolset” published on GitHub which provides one way to get up and running using Microsoft Identity Manager 2016 (MIM) for profile synchronization with Active Directory. This Windows PowerShell module and its exported MA configurations basically provision a base capability more or less akin to what shipped with SharePoint 2013’s User Profile Synchronization capability.

I’m not much of a fan of this Module or its approach. Seriously, if a customer is going down the road of implementing MIM they had better be sure they have the right skills in place – and the right skills won’t be using this toolkit. Furthermore, the default property mappings etc. are, well, defaults. The less said the better, frankly.

But of course there is the ye olde upgrade consideration. Customers who were using UPS need something to make it easier to move to MIM, and of course there are also many without MIM experience (perhaps this year’s greatest understatement so far!). So there is a need for the Module despite its faults, both conceptually and in terms of the implementation (which are not the fault of the coder but consequences of bugs in the products).

Unfortunately, due to the nature of the SharePoint Connector and bugs with the MIM PowerShell cmdlets, the current version only supports a single container selection and also requires that the MA exports be “fixed” – i.e. edited with the details of the customer domain.

With the MIM hotfix rollup 4.3.2195.0 and changes to the module, both of these issues can be avoided. It’s not a full “fix” and it’s not without a downside. However, it is a much more practical “quick start”, especially for customers with privileged access management policies enforced for administrators. The primary benefit though is multiple container selection – the most common complaint I’ve received.

Once the hotfix update is installed on the MIM Sync box, the Install-SharePointSyncConfiguration function of the SharePointSync.psm1 module needs to be altered.

Basically, I’ve changed a parameter name, removed the credential touch up work, and added a call to the MIM Set-MIISADMAConfiguration cmdlet after preparing the -Partitions parameter value. I’ve also updated the version check.

function Install-SharePointSyncConfiguration
{
<#
.Synopsis
   Configures the Synchronization Service for SharePoint User Profile Synchronization
.DESCRIPTION
   Imports the ADMA and SPMA configurations into the MIM Synchronization Service and configures them for the target forest and SharePoint farm, supporting multiple OU (container) selections.
.EXAMPLE
   Install-SharePointSyncConfiguration -Path C:\SharePointSync -ForestDnsName litware.ca -ForestCredential (Get-Credential LITWARE\Administrator) -OrganizationalUnits 'ou=Litwarians,dc=Litware,dc=ca' -SharePointUrl http://SharePointServer:5555 -SharePointCredential (Get-Credential LITWARE\Administrator)
.EXAMPLE
    $spProps = @{
        Path                 = 'C:\Temp\SharePointSync'
        ForestDnsName        = 'litware.ca'
        ForestCredential     = New-Object PSCredential ("LITWARE\administrator", (ConvertTo-SecureString 'J$p1ter' -AsPlainText -Force))
        OrganizationalUnits   = 'ou=Legal,dc=Litware,dc=ca;ou=Litwarians,dc=Litware,dc=ca'
        SharePointUrl        = 'http://cmvm38386:9140'
        SharePointCredential = New-Object PSCredential ("LITWARE\administrator", (ConvertTo-SecureString 'J$p1ter' -AsPlainText -Force))
    }
    Install-SharePointSyncConfiguration @spProps -Verbose

#>
    [CmdletBinding()]
    [OutputType([int])]
    Param
    (
        # Path to the configuration XML files
        [Parameter(Mandatory=$true, Position=0)]
        $Path,

        # DNS name of the Active Directory forest to synchronize (ie - litware.ca)
        [Parameter(Mandatory=$true, Position=1)]
        $ForestDnsName,

        # Credential for connecting to Active Directory
        [Parameter(Mandatory=$true, Position=2)]
        [PSCredential]
        $ForestCredential,

        # OU(s) to synchronize to SharePoint (semi-colon delimited)
        [Parameter(Mandatory=$true, Position=3)]
        $OrganizationalUnits,

        # URL for SharePoint
        [Parameter(Mandatory=$true, Position=4)]
        [Uri]    
        $SharePointUrl,

        # Credential for connecting to SharePoint
        [Parameter(Mandatory=$true, Position=5)]
        [PSCredential]
        $SharePointCredential,

        # Flow Direction for Profile Pictures
        [Parameter(Mandatory=$false, Position=6)]
        [ValidateSet('Export only (NEVER from SharePoint)', 'Import only (ALWAYS from SharePoint)')]
        [String]
        $PictureFlowDirection = 'Export only (NEVER from SharePoint)'
    )

    #region Pre-requisites
    if (-not (Get-SynchronizationServiceRegistryKey))
    {
        throw "The Synchronization Service is not installed on this computer.  Please install the MIM Synchronization Service on this computer, or run this script on a computer where the MIM Synchronization Service is installed." 
    }

    if (-not (Get-Service -Name FimSynchronizationService))
    {
        throw "The Synchronization Service is installed but not running.  Please start the MIM Synchronization Service before running this script (Start-Service -Name FimSynchronizationService).  If the service fails to start please see the event log for details." 
    }

    if ((Test-SynchronizationServicePermission) -eq $false)
    {
        throw "The current user must be a member of the Synchronization Service Admins group before this command can be run.  You may need to logoff/logon before the group membership takes effect."
    }

    $MimPowerShellModuleAssembly = Get-Item -Path (Join-Path (Get-SynchronizationServicePath) UIShell\Microsoft.DirectoryServices.MetadirectoryServices.Config.dll)
    if ($MimPowerShellModuleAssembly.VersionInfo.ProductMajorPart -eq 4 -and
        $MimPowerShellModuleAssembly.VersionInfo.ProductMinorPart -eq 3 -and 
        $MimPowerShellModuleAssembly.VersionInfo.ProductBuildPart -ge 2195)
    {
        Write-Verbose "Sufficient MIM PowerShell version detected (>= 4.3.2195): $($MimPowerShellModuleAssembly.VersionInfo.ProductVersion)"
    }
    else
    {
        throw "SharePoint Sync requires MIM PowerShell version 4.3.2064 or greater (this version is currently installed: $($MimPowerShellModuleAssembly.VersionInfo.ProductVersion). Please install the latest MIM hotfix."
    }
    #endregion

    ### Load the Synchronization PowerShell snap-in
    Import-Module -Name (Join-Path (Get-SynchronizationServicePath) UIShell\Microsoft.DirectoryServices.MetadirectoryServices.Config.dll) 

    Write-Verbose "Contacting AD to get the partition details"
    $RootDSE                = [ADSI]"LDAP://$ForestDnsName/RootDSE"
    $DefaultNamingContext   = [ADSI]"LDAP://$($RootDSE.defaultNamingContext)"
    $ConfigurationPartition = [ADSI]"LDAP://$($RootDSE.configurationNamingContext)"

    Write-Verbose "Configuring the Active Directory Connector"
    Write-Verbose "  AD Forest:               $ForestDnsName"
    Write-Verbose "  AD OU:                   $OrganizationalUnit"
    Write-Verbose "  AD Credential:           $($ForestCredential.UserName)" 
    Write-Verbose "  AD Naming Partition:     $($RootDSE.defaultNamingContext)"
    Write-Verbose "  AD Config Partition:     $($RootDSE.configurationNamingContext)"

   
    $admaXmlFilePath = Join-Path $Path MA-ADMA.XML
    [xml]$admaXml = Get-Content -Path $admaXmlFilePath
    $admaXml.Save("$admaXmlFilePath.bak")

    ### Fix up the Domain partition
    $domainPartition = Select-Xml -Xml $admaXml -XPath "//ma-partition-data/partition[name='DC=Litware,DC=com']"
    $domainPartition.Node.name = $DefaultNamingContext.distinguishedName.ToString()
    $domainPartition.Node.'custom-data'.'adma-partition-data'.dn = $DefaultNamingContext.distinguishedName.ToString()
    $domainPartition.Node.'custom-data'.'adma-partition-data'.name = $ForestDnsName
    $domainPartition.Node.'custom-data'.'adma-partition-data'.guid = (New-Object guid $DefaultNamingContext.objectGUID).ToString('B').ToUpper() 
    $domainPartition.Node.filter.containers.inclusions.inclusion = $DefaultNamingContext.distinguishedName.ToString()
    $domainPartition.Node.filter.containers.exclusions.exclusion = $ConfigurationPartition.distinguishedName.ToString()

    ### Fix up the Configuration partition
    $configPartition = Select-Xml -Xml $admaXml -XPath "//ma-partition-data/partition[name='CN=Configuration,DC=Litware,DC=com']"
    $configPartition.Node.name = $ConfigurationPartition.distinguishedName.ToString()
    $configPartition.Node.'custom-data'.'adma-partition-data'.dn = $ConfigurationPartition.distinguishedName.ToString()
    $configPartition.Node.'custom-data'.'adma-partition-data'.name = $ForestDnsName
    $configPartition.Node.'custom-data'.'adma-partition-data'.guid = (New-Object guid $ConfigurationPartition.objectGUID).ToString('B').ToUpper() 
    $configPartition.Node.filter.containers.inclusions.inclusion = "CN=Partitions," + $ConfigurationPartition.distinguishedName.ToString()
   
   
    
    $admaXml.Save($admaXmlFilePath)
    
    Write-Verbose "Importing the Synchronization Service configuration"
    Write-Verbose "  Path: $Path"
    Import-MIISServerConfig -Path $Path -Verbose    

    # requires 3092179
    $Partitions = "$($RootDSE.defaultNamingContext);$($RootDSE.configurationNamingContext)"
    Set-MIISADMAConfiguration -MAName ADMA -Credentials $ForestCredential -Forest $ForestDnsName -Partitions $Partitions -Container $OrganizationalUnits -Verbose

    
    Write-Verbose "Configuring the SharePoint Connector"
    Write-Verbose "  SharePoint URL:          $SharePointUrl"
    Write-Verbose "  SharePoint Host:         $($SharePointUrl.Host)"
    Write-Verbose "  SharePoint Port:         $($SharePointUrl.Port)"
    Write-Verbose "  SharePoint Picture Flow: $PictureFlowDirection"
    Write-Verbose "  SharePoint Protocol:     $($SharePointUrl.Scheme)"
    Write-Verbose "  SharePoint Credential:   $($SharePointCredential.UserName)"
    Set-MIISECMA2Configuration -MAName SPMA -ParameterUse 'connectivity' -HTTPProtocol $SharePointUrl.Scheme -HostName $SharePointUrl.Host -Port $SharePointUrl.Port -PictureFlowDirection $PictureFlowDirection -Credentials $SharePointCredential -Verbose

    Write-Verbose "Publishing the Sync Rules Extension DLL to the Sychronization Service extensions folder"      
    Publish-SynchronizationAssembly -Path (Join-Path $Path SynchronizationRulesExtensions.cs) -Verbose
 
    Write-Warning "======================================================================================="
    Write-Warning "IMPORTANT: the SP MA must be opened and closed to refresh the extensible connector"
    Write-Warning "           Use Start-SynchronizationServiceManager to open the Sync Manager tool, then"
    Write-Warning "           ->Management Agents"
    Write-Warning "           ->SPMA (double click)"
    Write-Warning "           ->Click OK three times"
    Write-Warning "======================================================================================="
}##Closing: function Install-SharePointSyncConfiguration

Now we can call this bad boy as before, but with a semi-colon delimited list of OUs to sync with. If we only want to sync with a single OU that’s fine also.

$WorkingPath = "c:\SP16MIMBase"                           # Path to the MA files and Module
$ForestDnsName = "fabrikam.com"                           # DNS name of the Forest
$SyncAccountName = "FABRIKAM\sppsync"                     # Account to use within the SP MA (dirsync rights)
$CentralAdminUrl = "https://spca.fabrikam.com"            # Url of Central Administration
$FarmAdminAccount = "FABRIKAM\Administrator"              # A Farm Administrator (to connect to CA)
$PictureFlow = 'Export only (NEVER from SharePoint)'      # Picture Flow - 'Export only (NEVER from SharePoint)' or 'Import only (ALWAYS from SharePoint)'

# Semi-colon delimited list of DNs of containers to Sync
$OrganizationalUnits = 'OU=Fabrikam Users,DC=fabrikam,DC=com;OU=Ent Users,DC=fabrikam,DC=com;OU=Legal,DC=fabrikam,DC=com'  

# Credential Requests
$SyncAccountCreds = Get-Credential $SyncAccountName
$FarmAccountCreds = Get-Credential $FarmAdminAccount


Import-Module $WorkingPath\SHSharePointSync.psm1 -Force

Install-SharePointSyncConfiguration -Path $WorkingPath -Verbose `
                                    -ForestDnsName $ForestDnsName `
                                    -ForestCredential $SyncAccountCreds `
                                    -SharePointUrl $CentralAdminUrl `
                                    -SharePointCredential $FarmAccountCreds `
                                    -PictureFlowDirection $PictureFlow `
                                    -OrganizationalUnits $OrganizationalUnits 

OK, so what about that downside? Well, sadly due to bugs in the Set-MIISADMAConfiguration cmdlet (or the Management API it’s a wrapper for) we can’t properly select the containers for the configuration partition. In the updated module I include the partition, but no containers within it. This means the entire partition is selected within the AD MA:

[Screenshot: the AD MA container selection, with the entire configuration partition selected]

The only container we need here is the CN=Partitions container. You can’t add that to the -Container parameter because of the bug – you will get an error saying the container doesn’t exist in the partition – thus the only option is to leave it out. That means the entire partition will be selected.

But that’s the only downside, and it’s not a very big one. It doesn’t mean we are syncing a bunch of crap to the metaverse. It does mean we are pushing a bunch of extra stuff to the AD connector space though (705 extra objects in a single domain forest with default schema plus Exchange – basically negligible in that scenario). Of course we can go and clean this up – which requires us to enter the password (!!). It’s a trade off, but for customers who are likely to be using this tool, as opposed to those who will set it up properly, probably one worth making. I can go ahead and execute the Run Profiles and it works just great.

Wait, except for on initial deployment – something that none of the documentation deems it necessary to mention – because the SP MA has a Rules Extension added upon its initial configuration, it requires a refresh before it will function. If you don’t do this the initial run will result in the following error:

[Screenshot: the run error shown before the SP MA has been refreshed]

The simple (age old ILM) trick to sort this is to open the SPMA – double click it, then click OK, click OK again on the Connectivity page, wait on the egg timer, and then click OK once more. That will force a refresh. Now you are good to go.

Of course there’s lots of other things about this toolkit which are “sub optimal” but it’s all about perspective. Regardless of your viewpoint on the removal of UPS, the reality is UPS provided a SharePoint Admin friendly UI for configuring complex multi-domain, multi-forest, import/export scenarios. It’s not the intent of the toolkit to replicate that. It’s intended as a starter solution to get people up and running. For things like additional domains and so forth there is a point of diminishing returns here. Those customers will be doing it “properly” anyway and not faffing about with an import of a lamer default MA configuration. Similarly those doing it properly will be updating the configuration when they move their tested MA configs through Dev – Test – UAT – Production. (Yes, really. People do this!). And that’s without even getting into the whole “Classic Provisioning” thing. So bottom line is they won’t be using this toolkit.

However for those customers looking for a quick and dirty way to get up and running, or for SharePoint practitioners wanting to get started with MIM, these tweaks improve things in the two areas they most complain about.

Another tip is for those that have got their panties in a wad over things like UserAccountControl and so on. Do it on a dev box, and export your MA. You can then use the toolkit to move it into a fresh setup.  You don’t have to use the sample MA configurations provided.

Before I go I have to address another common question regarding the GitHub repo: “why aren’t you doing pull requests on this stuff?”. Well, it’s a very long story, but the short and sweet of it is that I have zero interest in doing free work for Microsoft which masquerades as “open source”, especially when there is no actual commitment or leadership. I especially don’t wish to do it when the contribution workflow is hopeless and the publication mechanism is at odds with the standard Windows PowerShell approach. If it’s not going to be done right…. you get the idea.

 

s.

Important Update for SharePoint folks: Hotfix Rollup for Microsoft Identity Manager 2016

posted @ Tuesday, August 23, 2016 1:33 AM | Feedback (0)

Back in the middle of March, Microsoft released a Hotfix Rollup for Microsoft Identity Manager 2016 (MIM). This hotfix rollup is version 4.3.2195.0. This is an extremely important build for those leveraging MIM for profile synchronization with SharePoint Server 2016. You can get the bits over at KB3134725.

There are numerous articles out there suggesting that you should install build 4.3.2064.0. Don’t! 4.3.2195 is the fix package you need. Make this part of your base build of the MIM Sync server.
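If you’re not sure which build a MIM Sync box is running, checking the version of the Sync Service binaries will tell you. A minimal sketch, assuming the default installation path (this is the same DLL the toolset module checks):

# Check the installed MIM Sync build; expect 4.3.2195.0 or later
$syncDll = 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\UIShell\Microsoft.DirectoryServices.MetadirectoryServices.Config.dll'
(Get-Item -Path $syncDll).VersionInfo.ProductVersion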

However, if you already have MIM Sync set up and you want to apply this patch, make sure to follow the instructions. The installer will not update the configuration file – because if it did it would break the configuration of the existing ECMA2 MAs (the SharePoint Connector is an ECMA2 MA).

Why is this important? Well, aside from the previous fixes, which effectively form the required baseline, there are a number of elements critical to a successful implementation of the SharePoint Connector:

  • The AD MA can now handle multiple partitions
  • New Windows PowerShell cmdlets
  • Run Profile fixes
  • Export Only ECMA2 MAs now actually work

In addition, for those who are actually doing things properly and using Declarative Provisioning, there are a number of important fixes to the MIM Service and Portal components also.

Get Patch Happy

s.

Zero Down Time Patching in SharePoint Server 2016

posted @ Thursday, August 04, 2016 7:48 PM | Feedback (0)

Zero Downtime Patching (ZDP) in SharePoint Server 2016 has a marketing heavy silly name, but it's actually sweetness on a stick.

Whilst I hate the name, it is accurate in respect to the basics of the new patching process and the changes made in 2016 to support it. Now, as to whether a customer would actually perform real world patching operations with such an expectation is another matter entirely. Here's a hint: they wouldn't. There's a lot more to patching an environment than updating the bits of the software. Or there should be, otherwise you shouldn't be running the environment. Alas, this of course is not something Microsoft can have much influence over.

Regardless, this is another one of those SharePoint 2016 "small" things with huge impact, especially for those who actually own and operate large on premises SharePoint deployments. It's all good.

Even better is that it's really simple and straightforward. Like all good features. No fuss, no mess. It just works.

But what isn't good is the legion of utter misinformation and claptrap promoted by those in positions of responsibility in the community who should know better. Myths such as MinRole being required, misnaming of the components, and all the usual "we played with it on a simple VM rig and think we're experts" rubbish. You know the drill.

Luckily, chaps who actually know how it works, and what the real guidance should be, have put together a nice little video explanation and demonstration of this rather impressive change in SharePoint. They also cover guidance for critical planning aspects such as Distributed Cache.

This is the one source of information on ZDP. Forget everything else. You can be confident of this source, and the fact they will take responsibility for any future updates.

Go check it out over at TechNet: https://technet.microsoft.com/EN-US/library/mt767550(v=office.16).aspx.

Very well played to Bob, Neil and Karl who put this together.

One question on this topic I get a lot at events and so on is: "Will we ever see this back ported to SharePoint 2013?". Now, I don't work for Microsoft so I can't answer that definitively, but I know enough to state it's basically not going to happen. If you want the goodness of the new mechanism, you should be planning to upgrade to SharePoint 2016 - which is effectively the new baseline for all future build iteration and servicing approaches. Yes, of course the upgrade choice is more complicated, but in respect to ZDP it is the answer.

SharePoint 2016 Nugget #2: Distributed Cache Size in MinRole Farms

posted @ Friday, April 15, 2016 7:07 PM | Feedback (1)

In SharePoint 2013, the Distributed Cache size is set to half of ten percent of the total RAM on the server. This means that on a server with 8GB RAM, the Cache Size (the allocation for data storage) is 410MB. Another 410MB is used for the overhead of running the Cache.

This is a reasonable default as the system has no way of knowing which other services will be provisioned onto the server. And of course by default in SharePoint 2013 every machine in the farm will host Distributed Cache, unless you build your farm properly using the -SkipRegisterAsDistributedCacheHost switch.
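For example, when joining a SharePoint 2013 server to an existing farm – a minimal sketch, where the server name, database name and passphrase are placeholders:

# Join the farm without registering this server as a Distributed Cache host
Connect-SPConfigurationDatabase -DatabaseServer "SQL01" `
                                -DatabaseName "SharePoint_Config" `
                                -Passphrase (ConvertTo-SecureString "pass@word1" -AsPlainText -Force) `
                                -SkipRegisterAsDistributedCacheHost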

The problem is that for anything other than a dev/test/hacknslash/demo box this is a silly number, and nowhere near enough for the Distributed Cache to actually be of any use in terms of social data.

With SharePoint 2016, if you continue to build the farm using the old approach – i.e. using a role of Custom and/or -ServerRoleOptional – this behaviour is retained and the default cache size will be half of ten percent of total RAM:

[Screenshot: the default cache size with an old style farm build]

However if instead you build your farm using MinRole and add servers of role DistributedCache to the farm, we now provision a much more reasonable and useful default:

[Screenshot: the default cache size on a DistributedCache role server]

The size is now half of 80 percent of the total RAM, so on a box with 8GB this will be 3276MB. That’s much more like it.
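If you want to confirm what’s been provisioned on a given cache host, the AppFabric configuration cmdlets will show the allocation. A quick sketch, run on the cache host itself (22233 is the default cache port):

# Load the SharePoint managed cluster configuration, then read this host's config
Use-CacheCluster
Get-AFCacheHostConfiguration -ComputerName $env:COMPUTERNAME -CachePort 22233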

This is because we know that this server and any others of the same role will only ever host Distributed Cache and the Web Application Service. Optionally it could also host Request Management and the Claims to Windows Token Service. Neither of those services will ever consume a significant amount of RAM. Whilst they shouldn’t be there (more on that in a future post), the impact is negligible.

Yet another reason why MinRole isn’t the bag of nails community “celebrities” seem so keen to make it out to be. And another reason why a MinRole farm will batter a SharePoint 2013 farm in terms of performance and throughput out of the gate – much more appropriate defaults for the critical system configuration that so many customers have ignored with SharePoint 2013.

The devil is in the details. MinRole… kinda tasty. I think I’ll scoff me one right now…


SharePoint 2016 Nugget #1: Topology Service in MinRole Farms

posted @ Monday, April 04, 2016 8:43 PM | Feedback (3)

Whilst I have some much more in depth coverage of SharePoint 2016 coming soon, this is the first in a mini series of “nuggets” – tidbits of information on the new release. Unlike with previous releases I decided against publishing a lot of material whilst the product was in public preview and to wait until the RTM. This decision was driven by a number of factors I won’t bore you with.

Many will be of the opinion that not a great deal has changed in SharePoint 2016. That is somewhat true, especially in respect to visible end user or administrator capabilities. However there are a significant number of small but important things to be aware of, and this series will catalogue many of them over the next few weeks. This doesn’t mean I won’t also be providing some “all up” coverage in the near future.

In previous versions of SharePoint (i.e. 2010 and 2013) the Application Discovery and Load Balancing Service (aka the Topology Service) was deployed to every server in the farm. This service amongst other things is responsible for maintaining a list of addresses of service application endpoints. These are used for Web Application <-> Service Application and Service Application <-> Service Application communication. In the previous versions, it would be extremely likely to go to another machine even if the requested service was running locally. It was a basic round robin style approach.

In SharePoint 2016 this has been changed to always “prefer local”. In other words, if the requested service is running locally, go there instead of hopping over to another machine. If that isn't possible then it will go to a remote server. This change was implemented after the Product Group was able to measure the positive impact of the “prefer local” model whilst running SharePoint Online. And the change has been baked into SharePoint 2016. This is one of a number of small but significant changes under the hood, which mean SharePoint 2016 can perform significantly better than SharePoint 2013. This is *exactly* the sort of engineering improvement needed across the product in broader terms.

The “prefer local” model is used within both farms leveraging MinRole Server Roles, and farms where every machine is of role Custom (aka Opting out of MinRole, or a 2013 style topology).

In addition to the “prefer local” model, the Topology service in SharePoint 2016 is no longer deployed on every server in the farm. It is only deployed to the Application, Search and Custom roles. In a MinRole farm there is no need to have the IIS Web Application present on servers of role DistributedCache or WebFrontEnd so it’s not deployed. Note the term IIS Web Application - don't confuse this with a SharePoint Web Application, whose counterpart in IIS is a Web Site. Here’s a WebFrontEnd in my farm after the servers have been joined to the farm, but *before* any service applications have been created (with MinRole, service instances are provisioned automatically when service applications are created):

[Screenshot: the SharePoint Web Services IIS Web Site on a WebFrontEnd, with no Topology application]

No-one wants to be pointing and clicking around across a bunch of boxes (and hitting that annoying WPI pop up!). The following Windows PowerShell will display the IIS Web Applications under the SharePoint Web Services IIS Web Site for every server in the farm:

 # Show the Service Application Endpoints across the farm
 # ignores Outgoing Email and Database servers
 ForEach ($Server in (Get-SPServer | ? {$_.Role -ne "Invalid"})) {
    Write-Host "$($server.Address): $($server.Role)"
    Invoke-Command $Server.Address { 
            Import-Module WebAdministration;
            Get-ChildItem "IIS:\Sites\SharePoint Web Services" | 
            ? {$_.NodeType -eq "application"} |
            Select Name | Format-Table -HideTableHeaders
    } 
 }

And here’s the output of the above (I’ve filtered to show only one machine per role):

DistributedCache
----------------
SecurityTokenServiceApplication

WebFrontEnd
-----------
SecurityTokenServiceApplication

Application
-----------
SecurityTokenServiceApplication
Topology                       

Search
------
SecurityTokenServiceApplication
Topology                       

Custom
------
SecurityTokenServiceApplication
Topology

But what about after some service applications have been created? No big surprises here, the endpoints are only deployed on the roles that are hosting the service instances. Just like SharePoint 2013, but due to the topology model that MinRole enforces it’s worth covering a few aspects of this.

On the DistributedCache role we don’t care about endpoints. We’ll never use them. Even if we were (stupidly) running Request Management on that role we still would never need them. We certainly don’t want to advertise services to other roles.

On the WebFrontEnd role, if we need to call the low latency service applications like BDC, MMS, UPA etc we prefer local but we can go to another machine if necessary. If we need to call “not so low” latency apps such as Word Automation or PowerPoint Automation etc we go across to the Application server as we did in SharePoint 2013. On this role there is no desire to advertise services to the other roles.

Likewise on the Application role, we will also go local if we can or otherwise via the Topology service Application Addresses.  This is an important aspect to really understanding MinRole. This is why all those service instances are deployed on the Application role as well as the WebFrontEnd role. In addition we do wish to advertise our services to other roles.

It seems a little strange at first, but this is a key part of changing the approach to Farm Topology. MinRole in many ways could be: the making of min changes for max effect.

On the Search role, we’ll pretty much go via the Topology service in the uncommon scenarios where we need to call out to the other service instances. The only endpoints on the Search role are, wait for it, the Search ones!

Here’s the output after every service application has been created in the farm:

DistributedCache
----------------
SecurityTokenServiceApplication

WebFrontEnd
-----------
0e2a2a21d68f48e5ac6e5212386c68a1
65bac1dee73a45d980e8a727ecded730
7e5c51e5c9644e20acfb0ebfe9e1932c
86e16014fa714b2482fe61558f47f4c8
872ceab28a38461781b492e8d0cd246b
8e1c5639ec3f4ef988064f210f23fd46
9abc0e641df14da283b794ca5951a54c
a8f3b8c8fab44a1a98fb66d85865362f
da9a134d441e4d109ad9b76beee38fc5
SecurityTokenServiceApplication 

Application
-----------
0e2a2a21d68f48e5ac6e5212386c68a1
343877e231ee4bac8a6ad9450fd9a3c9
65bac1dee73a45d980e8a727ecded730
7e5c51e5c9644e20acfb0ebfe9e1932c
86e16014fa714b2482fe61558f47f4c8
8e1c5639ec3f4ef988064f210f23fd46
a8f3b8c8fab44a1a98fb66d85865362f
bf5d1dc0e3964dd99a96542d20b7f097
da9a134d441e4d109ad9b76beee38fc5
SecurityTokenServiceApplication 
Topology                        

Search
------
74fbbb18d6fe49e6bdec58bdca9178f2
d55ae60a84b649c28b9022baedecb2fa
SecurityTokenServiceApplication 
Topology                        

Custom
------
SecurityTokenServiceApplication
Topology        

Ahh, I love me some GUIDs, can’t get enough of ‘em. Could be worse, could be Octet strings. Let’s sort that out. Unfortunately this means using the SharePoint Snap-in. Sadly that’s not something that’s been fixed in SharePoint 2016!

$Credential = Get-Credential $ShellAdminAccountName
ForEach ($Server in (Get-SPServer | ? {$_.Role -ne "Invalid"})) {
    echo ""
    echo (Get-SPServer -Identity $Server | % {$_.Role})
    echo "-----------------------"
    icm $Server.Address { 
            Import-Module WebAdministration
            Add-PSSnapin -Name "Microsoft.SharePoint.PowerShell"
            $Apps = Get-ChildItem "IIS:\Sites\SharePoint Web Services" | ? {$_.NodeType -eq "application"} | Select Name
            ForEach ($app in $apps) {
                If ($app.Name -eq "SecurityTokenServiceApplication" -or $app.Name -eq "Topology") { $app.Name }
                Else { (Get-SPServiceApplication | ? {$_.Id -eq $app.Name}).DisplayName }
            } 
    } -Credential $Credential -Authentication Credssp
}

Note, there’s lots of aliases and such like in the above script. You wouldn’t really ever do this for real, it’s just for demonstration purposes. Here’s the output:

DistributedCache
-----------------------
SecurityTokenServiceApplication

WebFrontEnd
-----------------------
Machine Translation Service Application
App Management Service Application
Secure Store Service Application
User Profile Service Application
Visio Graphics Service Application
Managed Metadata Service Application
Access Service Application
Business Data Connectivity Service Application
Subscription Settings Service Application
SecurityTokenServiceApplication

Application
-----------------------
Machine Translation Service Application
PowerPoint Automation Service Application
App Management Service Application
Secure Store Service Application
User Profile Service Application
Managed Metadata Service Application
Business Data Connectivity Service Application
Word Automation Service Application
Subscription Settings Service Application
SecurityTokenServiceApplication
Topology

Search
-----------------------
Search Service Application
Search Administration Web Service for Search Service Application
SecurityTokenServiceApplication
Topology

Custom
-----------------------
SecurityTokenServiceApplication
Topology

And that’s all there is to it. Pretty simple stuff. But again, these seemingly trivial changes have a significant positive impact on overall farm performance. As always with SharePoint, the devil is in the detail. While much legacy gunk is still in the product, there are some extremely smart cookies working hard to make it better for all of us. Respect due.


s.

Distributed Cache Service Identity: Turning the Playbook into real Tools

posted @ Sunday, April 03, 2016 12:18 PM | Feedback (0)

A couple of weeks ago I posted about the Playbook Imperative and Changing the Distributed Cache Service Identity, which generated a lot of interest and feedback regarding the “tooling approach” presented. The original intention of the post was to articulate the importance of understanding the playbook when performing operational service management of SharePoint farms. I had never intended to show “how to do it” in terms of creating tooling in Windows PowerShell. The PowerShell examples were created purely to demonstrate the playbook and were deliberately done in a way that meant the focus was on the tasks being performed rather than the plumbing that governed execution. Similarly I deliberately picked the service account example as it is simple enough to not get bogged down in (too much) detail but with enough gotchas and variance to demonstrate why the playbook is paramount. It also didn’t hurt that it’s a “pattern” that wasn’t previously documented.

At any rate, many folks have asked “why do it like that?”, “why use Invoke-Expression?”, “what’s up with all the individual scripts?”, and so on. Again the point was that these are examples. Examples to demonstrate the playbook without any extraneous considerations. I even stressed in the article that “the implementation choice is a lifestyle preference”. The idea being that the information and basic scripts can be “leveraged” in your tooling implementation of choice. No matter, tons of you asked for a tool. Who knew that Distributed Cache was such a popular thing?! Everyone I normally speak to hates it with a passion! :)

So how would one go about turning the playbook into some form of real tool? It’s all dead simple, but based on the feedback it’s worth walking through the “method”…

Of course the tooling choice should only be made based upon some requirements and design constraints (remember those? :)). It’s all very well to start out with something very desirable, like Desired State Configuration, but by jumping on that you immediately create some pretty hefty constraints and indeed add additional pre-requisites into the picture. As I mentioned in the earlier post I have some DSC resources built from the same playbook; they don’t take long to create. But of course such resources mean you have to use DSC and have it set up ready to go in your farm. Very few non Azure customer farms are managed by DSC. It’s not how it should be, but it is how it is.

So let’s state some requirements.

  1. Firstly I want my tool to be useable in any SharePoint 2013 or SharePoint 2016 deployment. I only care about these two versions because they are the only two which include Distributed Cache.
  2. I want my tool to work regardless of how many machines are running Distributed Cache. 0, 1, 2, 3 or more. It should just take care of business.
  3. Also, the tool should work (i.e. be executable) from any server in the farm, regardless of whether the server is hosting Distributed Cache or not. I shouldn’t have to (as the operator) deal with Remoting onto the appropriate machine if I want to do something there. This is one of the biggest flaws of the native *-SPDistributedCache* cmdlets (because they can’t introduce a dependency on Remoting).
  4. I want the tool to be quickly installable on any SharePoint Farm I may have the pleasure of operating.
  5. I want the tool to follow as much of the PowerShell mantra as possible, to follow the rules as it were, unlike so many of the SharePoint cmdlets!
  6. I want the tool to work with a MinRole deployment, so I don’t need an alternative version when working with farms using MinRole.

Given these requirements we can also detail some constraints:

  1. PowerShell Remoting is required to be available on every server in the farm
  2. The operator is expected to know the playbook!

OK, so constraint number two is pretty risky. But hey, if you want the Big Magic Button for SharePoint Farms keep dreaming. It is never going to happen.

Because of all of the above, the best choice is a PowerShell Module. They are easy to create and test, but more importantly easy for people to get and use. They have no substantial dependencies, and the ones they do have can be avoided.

Thus, I have moved things around a little bit from the previous scripts; in a nutshell, instead of a bundle of scripts there is now a function for each activity. Each of them can be used independently (if the operator is doing something else with Distributed Cache and knows what they are doing) or following a very simple operational run book. Much of the modification is in respect to the Remoting aspects. Instead of calling Invoke-Expression to run a script, I build ScriptBlocks where necessary and Invoke those. This increases the overall plumbing but provides a better tool and also reduces the number of functions required.
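As an illustration of the pattern (not the module’s actual internals), building a ScriptBlock and invoking it on a target server looks something like this – $cacheHostName and $Credential are placeholders:

# Illustrative only: run a cache size change on a remote cache host via Remoting
$scriptBlock = {
    param($CacheSizeInMB)
    Add-PSSnapin -Name "Microsoft.SharePoint.PowerShell"
    Update-SPDistributedCacheSize -CacheSizeInMB $CacheSizeInMB
}
Invoke-Command -ComputerName $cacheHostName -Credential $Credential `
               -Authentication Credssp -ScriptBlock $scriptBlock -ArgumentList 500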

The main change is to the logic which actually does the change of the service account. It takes the playbook outlined a step further and caters for all my requirements above and all the potential deployment scenarios it may be run in. By no means is it bullet proof, but it’s certainly adequate if the operator knows what they are doing.

The end result is a handful of functions I can use to execute the run book for changing the service account, as follows:

$ManagedAccountName = "FABRIKAM\sppservices"
$ShellAdminAccountName = "FABRIKAM\spshelladmin"
$CacheSize = 500

$Credential = Get-Credential $ShellAdminAccountName

Get-DistributedCacheStatus -Credential $Credential -CacheHostConfiguration
Update-DistributedCacheIdentity -ManagedAccountName $ManagedAccountName -Credential $Credential
Update-DistributedCacheSize -CacheSizeInMB $CacheSize -Credential $Credential
Get-DistributedCacheStatus -Credential $Credential -CacheHostConfiguration

And of course the other functions can be used as well if necessary.

Note: the reality is the scripts I originally presented were modified from a PowerShell Module I had already created many moons ago for the MCSM program and have been using since. I’ve pruned it down significantly for this “version” as my original module contains a whole bunch of other Distributed Cache stuff not related to the topic at hand.

The Module is available on the PowerShell Gallery. If you have WMF 5.0 installed and an Internet connected machine, you can install it simply by running this:

Install-Module -Name SharePointDistributedCache

If the SharePoint boxes don’t have Internet you can do this on another machine, or use Save-Module to bring over the files; then you can just copy them to C:\Program Files\WindowsPowerShell\Modules and you are good to go. Dead simple, just like any other Module. If you want to verify the module is available run:

Get-Module -ListAvailable
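For the offline route, a minimal sketch (the download path is an example):

# On an Internet connected machine with WMF 5.0...
Save-Module -Name SharePointDistributedCache -Path C:\Temp
# ...then copy the resulting SharePointDistributedCache folder to
# C:\Program Files\WindowsPowerShell\Modules on each SharePoint server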

Note that the module does not require WMF 5 to work, only for the PowerShell Gallery hook-up. You can import the module on earlier versions with:

Import-Module SharePointDistributedCache

That’s all there is to it. Nothing complicated, merely a question of taking the scripts and turning them into Advanced Functions and then creating a Module containing them. PowerShell tool making basics at the end of the day. There are certainly some improvements I know I should make, and it’s clearly obvious there are other flaws, but hopefully this module is helpful to those who got in touch following the previous post. I encourage everyone working with SharePoint to build tools, not scripts!

 

s.

The Playbook Imperative and Changing the Distributed Cache Service Identity

posted @ Monday, March 21, 2016 5:06 PM | Feedback (3)

Introduction

One of the most common challenges facing those operating production SharePoint environments is the “missing playbook”. Even for deployments where operational service management (OSM) skills are strong it is impossible to deliver quality operational service without the playbook. It’s generally pretty uncommon for practitioners to factor OSM considerations into the design, or at least to do it well. Indeed, in many cases it is also impossible to do so completely as so much about the environment will not be known or understood prior to broad platform adoption.

Whilst the playbook is imperative for any system, there is no better product than SharePoint to highlight this factor. The issue is that the vast majority of SharePoint deployments don’t have one. And that’s a shocking, if unsurprising, state of affairs.

This article uses a worked example to demonstrate the importance of the playbook, how to define it and then how to create the run book for operations staff. To demonstrate I will focus on everyone’s second favourite SharePoint Service Instance, the delightful and simple to manage Distributed Cache! :) I’ve picked this one as it’s relatively simple but with a catalogue of gotchas, and it also now, thanks to SharePoint 2016, has some version variance.

 

The Playbook

I work a lot with customers in troubleshooting situations or otherwise in a reactive nature due to faults or issues cropping up in the environment. In virtually all cases they could be avoided by better OSM. The vast majority of deployments simply don’t have any. That’s a tough situation to resolve because it will involve a lot of time and energy and probably doesn’t fit into the structure defined by outsourcing agreements, company procedures and so on. What is always virtually non-existent is the playbook.

But what is the playbook? Simply put it’s the definition of what must be done in a given situation. A classic example of not following the playbook with SharePoint is deleting a Site Subscription before deleting the Site Subscription Profile Configuration. In this case you end up with orphaned data inside the UPA which cannot be removed in a supported fashion. Sounds simple, but the problem is the playbook isn’t documented so hundreds of people have made this mistake. In this example the playbook would include details of the order in which things should be deleted when you want to delete a Site Subscription. Note that the playbook doesn’t define how to perform the tasks. That’s a run book. More on run books later.

SharePoint is a complex product. When you add to that poor product implementation in some areas, missing and woeful documentation, poor management APIs, poor Windows PowerShell cmdlet implementation, and a staggering amount of misinformation and worst practices out there across the interwebz, the importance of the playbook should be clear. All products have strange behaviours, things to be aware of and accommodate, etc. In addition, the sheer variance in the size and topology of customer farms means the playbook remains a significant challenge. It’s okay if we need to do some slightly whacky things to manage an environment. However, if we are not provided the information on the what, we can’t do it.

It’s somewhat interesting how often Microsoft’s marketing folks talk about how their experience running SharePoint Online can be baked into the on premises product for everyone else to benefit from. That is a good thing, no doubt about it. However, what would actually be of vastly more benefit to on premises customers would be to document some of their operational service management lessons and some of their playbook.

Now we are comfortable with the idea of the playbook, let’s apply it to a worked example.

 

Changing the Distributed Cache Service Account

This is a perfect example of how not to do things! We have a fairly straightforward requirement. To change the service account used by the Distributed Cache Service. We may want to do this because we made a mistake when deploying the farm originally, or perhaps we are rationalising service accounts. It’s a fundamental operational service procedure, the problem is it’s not actually as straightforward as it should be.

I’m also using this example as it’s something that none of the common scripting solutions out there accommodate, and even the superb xSharePoint doesn’t yet provide. It’s also something that I am forever being asked about. So it’s an excuse to get the scripts out there.

In SharePoint 2013 when we either create the farm, or join a server to a farm, we can use the -SkipRegisterAsDistributedCacheHost switch. We should do this. If we don’t, every server in the farm will run Distributed Cache and we’ll need to fix that up. So using the switch we have zero servers in the farm running Distributed Cache.

However, due to the “special” nature of this service instance, we cannot change the service account before one machine is running it. Thus we have to start it on a server. We can’t do that from Services on Server or with Start-SPServiceInstance. The reason for all three of these things is that the service instance does not yet exist in the farm. We must run Add-SPDistributedCacheServiceInstance (on a server we wish to run Distributed Cache). This will take around two minutes to do its thing.
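In other words, something like this, run locally on the chosen server:

# Run on the server which will become the first Distributed Cache host;
# expect this to take around two minutes
Add-SPDistributedCacheServiceInstance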

However, it will always run as the Farm Account by default. It’s effectively baked into the Provision() method of the service instance and there is no possibility of changing it prior to starting the service instance on one machine. This is, frankly, crap. Especially so given that SharePoint’s built in health analyser will warn us about running services as this account. The product’s default behaviour is to configure defaults at odds with its own deployment best practice. Note this crappy behaviour is so crappy that Microsoft has addressed it in SharePoint 2016. More on that later.

Great, so now we have Distributed Cache up and running on a server. Let’s go ahead and change its account. As pretty much everyone knows we can’t do this via Configure Service Accounts. Even though it will allow us to select the service and then a Managed Account, as soon as we hit OK we’ll see this delightful screen.

[Screenshot: the error dialog shown by Configure Service Accounts, referring to “Sharepoint Powershell commandlets”]

This, folks, is just lame. We all understand how easy it would be to filter the service on the previous screen. And what’s up with “Sharepoint Powershell commandlets”? Yes, something went wrong indeed. So wrong that my Live Writer spell checker is all red and squiggly right now. The three minutes it would take to change this resource string would be well spent.

But no matter, we can easily find the details on how to do this via Windows PowerShell; it’s even documented by Microsoft over at Manage the Distributed Cache Service in SharePoint 2013. Sure, the all-on-one-line thing is a bit silly, but you’ll find this snippet all over the web in various forms. Thus the first simple piece of the playbook. The service account change must be performed using Windows PowerShell.

$farm = Get-SPFarm
$cacheService = $farm.Services | where {$_.Name -eq "AppFabricCachingService"} 
$accnt = Get-SPManagedAccount -Identity domain_name\user_name 
$cacheService.ProcessIdentity.CurrentIdentityType = "SpecificUser" 
$cacheService.ProcessIdentity.ManagedAccount = $accnt 
$cacheService.ProcessIdentity.Update() 
$cacheService.ProcessIdentity.Deploy()
    

However, this script will not work unless you run it on a machine hosting and running the Distributed Cache service instance, a pretty important detail that is not mentioned. Actually it will complete just fine if the machine isn’t hosting Distributed Cache or indeed if it is but the service isn’t started. But it will leave your Cache Cluster broken. There are also a number of other considerations about using this ‘pattern’ to change the service account which we’ll cover later. For now, the second and third parts of the playbook. The service account change must be performed on a server which runs Distributed Cache. And the service account change must take place on a server with the Distributed Cache service running.
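Given those parts of the playbook, a simple guard before running the account change script is worth the few lines – a sketch:

# Bail out unless this server hosts an Online Distributed Cache service instance
$dcInstance = Get-SPServiceInstance -Server $env:COMPUTERNAME |
    Where-Object {$_.TypeName -eq "Distributed Cache" -and $_.Status -eq "Online"}
if (-not $dcInstance)
{
    throw "Run the account change on a server with Distributed Cache running."
}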

Let’s assume we are building a fresh farm and we have joined all our servers to the farm using -SkipRegisterAsDistributedCacheHost, and then added one Distributed Cache server using Add-SPDistributedCacheServiceInstance. At this point we should change the service account using the above script. Then, when we add the second and third and so on Distributed Cache servers they will pick up the correct identity. This is an important factor in the playbook of topology deployment. Let’s say we have an array of servers and we loop through that to provision the correct service instances. We need to build into that the service account change. Or be willing to change it later across all servers if we want to maintain the beauty of our scripts! Thus the fourth part of the playbook. For new SharePoint 2013 farms, change the service account after deploying the first Distributed Cache server, and before deploying the additional Distributed Cache servers.

Of course if we care about availability (note I said availability, not high availability) we need more than one Distributed Cache server. Actually there really isn’t much point to a “distributed” cache if it’s only on one server. But we really should have three. Yes. Three. Not two. Three. AppFabric, the real software that Distributed Cache provides a wrapper for, has a cluster quorum model. This means that three hosts are the minimum optimal configuration. However, SharePoint’s implementation does NOT use this quorum; the ConfigDB holds host information. Nevertheless, you will get the best performance and reliability from three or more servers. Further, if you only have two, you will hit issues when attempting to gracefully shut down any single server (if you do that properly using the AppFabric cmdlets and NOT the SharePoint one, which doesn't work). None of that is important to the playbook for changing the service account, but it is extremely important more generally.

Ok, so we’ll have three servers in our farm running Distributed Cache. When we deployed our farm we did as above and changed the service account after adding the first. Thus all three machines have the correct identity and life is good. However, in the future we may need to change the service account again. Now what?

We could go ahead and run that script again, right? Wrong. It will never work. If you try and run that script on a server in a multiple server Distributed Cache cluster, it will blow up in your face with a TCP port 22234 is already in use error:

[Screenshot: the TCP port 22234 is already in use error]

What will also have happened is that the server you run the script on will no longer be part of the Cache Cluster and you will only now have two servers in the cluster. The other two servers by the way are still using the old service account. You have just broken your Distributed Cache by running the script provided in the official product documentation. Nice.

[Screenshot: the Cache Cluster now showing only two hosts]

What makes this worse is that many experts will tell you when you see this to run another script provided to “repair” a broken Cache Host, which actually means deleting it. And that’s just bad advice, period. The reality is you should never have tried to change the account like this in a multi server Distributed Cache setup.

Now, out there on the interwebz there is a virtual ton of misinformation about the little account change script. Much of it claims that the last line, which calls Deploy() on ProcessIdentity, is not required. It absolutely is required. If you don’t call Deploy() things will not work. Just because some Windows PowerShell completes without error does not mean it has worked! This is especially true with SharePoint PowerShell!

We can run the script without calling Deploy() but it won’t have any effect and will leave our Cache Cluster broken. A broken Cache Cluster is not good. It’s broken! :) What that means is items won’t be put in it. Bad!

There are many things we could try, such as stopping the service instances, running the service account script, then starting the service instances. That will appear to work, and the AppFabric service will be running as the correct account. However, the Service Status of each Cache Host will be UNKNOWN. And that means that the Distributed Cache will crash within five minutes and it won’t be working.

Similarly, we could unprovision the service instance (yes, that’s different than merely stopping it), run the service account script, then provision the service instances. The exact same result. The AppFabric service will be running as the new account, but the service status of each Cache Host will be UNKNOWN. Your cluster is broken.

The bottom line is that any such attempts will fail, even though on first glance they seem to have worked. If the service status of your cache cluster hosts reports UNKNOWN, your Distributed Cache is broken and items will not be cached.

[Screenshot: cache hosts reporting a Service Status of UNKNOWN]

Obviously, this is no good. It’s not part of the playbook to have changed the account, but be left with a broken Cache Cluster!

What we need to do is actually pretty simple. Yup, that’s right. We must remove the second and third servers, so that we only have one. Then run the account change script. Then we can add back the other two servers. This is the way to change the service account for Distributed Cache when you have more than one server in the Cache Cluster. Not documented anywhere else until now. Makes you wonder how many broken Cache Clusters are out there, huh! Or rather just how few people are bothered by the wrong account being used.

The fifth part of the playbook. The service account change must take place when only a single server is in the Cache Cluster.
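A sketch of that sequence, with each command run on the server in question:

# 1. On the second and third cache hosts: remove them from the Cache Cluster
Remove-SPDistributedCacheServiceInstance

# 2. On the single remaining cache host: run the account change script shown
#    earlier, and give it the roughly eight minutes it needs (see below)

# 3. Back on each removed server: add it to the Cache Cluster again
Add-SPDistributedCacheServiceInstance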

But wait, there is more. If you run the service account change script and it takes less than about eight minutes, something is wrong. Because what Deploy() is doing (and why it’s mandatory) is effectively exporting the configuration, removing the server as a Distributed Cache host, applying the account change, and then re-adding it with the same configuration. Whilst the PowerShell looks like the same “pattern” as you would use to change any other service account in SharePoint, Deploy() is overridden inside the Distributed Cache service instance with its own special sauce. One of the things this also does is to stop the AppFabric Cache Host, but it can often fail at that point and yet fall through, causing an error. This is another cause of the TCP Port 22234 is already in use error. Bad code. Not tested thoroughly enough.

There is no other pattern we can use. We can’t stop or unprovision the service instance as that will leave our Cache Cluster borked. We could choose to make use of the AppFabric cmdlets, but alas doing so is unsupported, that’s not a road we can travel down. We shouldn’t be “hacking” it, we have to tell both SharePoint and AppFabric about the change and to do that, we should use the mechanisms provided by the product, whilst accepting and working around their liabilities. We only have one choice to change the identity. Now this has some pretty serious implications. We are effectively removing all cache hosts. What that means is the sixth part of the playbook. The service account change will result in data loss as all cache hosts must be removed.

This is a significant planning consideration – i.e. when will this change take place, and plan on it not needing to be done often! It also means there is absolutely no point whatsoever in doing a “graceful” shutdown of any server, because we need to shut them all down.

There’s also one more very important thing to note here. We have no choice but to run the account change script when a single server is running Distributed Cache. Because of this, any previous change we have made to the Cache Size will be thrown away on the machines we remove before running the script. Let’s say for example we previously changed our Cache Size to 1200MB. Once we have changed the account and re-added the removed servers, our Cache Host configuration will look like this:

[Image: Cache Host configuration, with one server at the 1200MB Cache Size and the other two at the default]

As you can see, only one server has our desired Cache Size. The other two servers have the default size (which in this example is 410MB on servers with 8GB RAM). Microsoft strongly recommend that the Cache Size be the same for all servers in the Cache Cluster, and that is why Update-SPDistributedCacheSize sets it that way. I on the other hand state it as a requirement. I've seen way too many messed up Distributed Caches to mince my words on this particular topic. Therefore, we need to fix up the Cache Size once the account change has been made – the seventh part of the playbook. The service account change requires that any cache size configuration be re-applied.

Doing this is actually very straightforward (in comparison to the service identity at least). I need to stop all the Distributed Cache service instances, change the Cache Size, and finally restart all the Distributed Cache hosts. This time stopping and starting is entirely acceptable: we do NOT need to unprovision and provision, nor do we need to remove and add any hosts.

All good. Well, not all good. But it is what it is. What about the recently released SharePoint 2016?

Well the good news is that they have refined the service instance plumbing so that the Distributed Cache service instance exists prior to a server being added as a Distributed Cache host. This means we can set the service identity before we ever add a Distributed Cache host. This works both when using MinRole (by specifying a -ServerRole of DistributedCache) and when using -ServerRoleOptional. Therefore, we can change up the order of our farm provisioning solution to configure the service account before we join any additional machines to the farm. If we are using MinRole, the first server in the farm should be of role Web or Application, which also means -SkipRegisterAsDistributedCache is no longer necessary. However, if you are deploying a server of role Custom, you will still want to include it to prevent Distributed Cache from being provisioned on that server. If 'opting out' of MinRole by using -ServerRoleOptional, then we can continue to use -SkipRegisterAsDistributedCache.

However, there is an important detail. In this scenario, we do not call Deploy() on the ProcessIdentity! If we do it will fail with a CacheHostInfo is null error, which of course is expected at this point. This is the one time that not calling Deploy() is OK. Our service identity is updated, and when we later add a Distributed Cache host that identity will be picked up. This is actually one of the best side benefits to MinRole. Another part of our playbook. For new SharePoint 2016 farms, the service account should be changed after farm creation, and before deploying any Distributed Cache servers.
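
For illustration, here's a minimal sketch of that change on a new SharePoint 2016 farm, before any Distributed Cache host exists (the managed account name is an example). Note the absence of a Deploy() call:

Add-PSSnapin -Name Microsoft.SharePoint.PowerShell

# On a new SP2016 farm with no Distributed Cache hosts yet, the service
# instance already exists, so the identity can be set up front
$Farm = Get-SPFarm
$DistributedCacheService = $Farm.Services | Where-Object { $_.TypeName -eq "Distributed Cache" }
$DistributedCacheService.ProcessIdentity.CurrentIdentityType = "SpecificUser"
$DistributedCacheService.ProcessIdentity.ManagedAccount = Get-SPManagedAccount "FABRIKAM\sppservices"
$DistributedCacheService.ProcessIdentity.Update()
# Do NOT call Deploy() here - with no cache host it fails with "CacheHostInfo is null".
# The identity is picked up when a Distributed Cache host is later added.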

The not so good news is that changing the service account at a later date, once the Distributed Cache service is provisioned, remains exactly the same as with SharePoint 2013. This makes total sense.

Phew. Hopefully now the importance of the playbook is obvious. Even though what we are doing is relatively simple, the devil is in the detail. Our playbook for this operational aspect is as follows.

The Distributed Cache service account change…

  • must be performed using Windows PowerShell
  • must be performed on a server which runs Distributed Cache
  • must take place on a server with the Distributed Cache service running
  • For new SharePoint 2013 farms, the service account should be changed after deploying the first Distributed Cache server, and before deploying the additional Distributed Cache servers
  • must take place when only a single server is in the Cache Cluster
  • will result in data loss as all cache hosts must be removed
  • requires that any cache size configuration be re-applied
  • For new SharePoint 2016 farms, the service account should be changed after farm creation, and before deploying any Distributed Cache servers

If we didn’t have this playbook, we’d have no good chance of creating the run book, or the scripts to implement the run book. Because we have it, we can produce a run book and scripts much more easily, as we have our essential details and we won’t waste time thrashing out hacks that semi-work or have environment specifics hard-wired into them.

 

The Run book

Given our playbook above we can easily produce a run book and its implementation. We know it needs to be Windows PowerShell, and we also know that we will either be required to run everything on a server running Distributed Cache or via Windows PowerShell Remoting. From an OSM perspective we really should have adopted Remoting, as it’s such a common requirement within SharePoint to be able to run commands on a specific server. In addition, many organisations have a machine in the farm for Shell Access. Now the implementation choice is a lifestyle preference. We could use an uber-script, a set of advanced functions, or Desired State Configuration. It’s not the objective of this article to get into that discussion, and there is no single right answer for every situation. I normally try to avoid plumbing in my PowerShell examples as often it can distract from the key concepts being described. In this case though, we really should be using Remoting. This is after all an article with OSM as its focus. I assume you are comfortable with the setup and configuration required for Remoting.
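
As a pointer only, here's one possible CredSSP setup to support the -Authentication Credssp calls used below (the domain wildcard is an example; many organisations will prefer to push this via Group Policy):

# On the machine the controller script runs from (the Remoting "client")
Enable-WSManCredSSP -Role Client -DelegateComputer "*.fabrikam.com" -Force

# On each SharePoint server that will receive remote commands
Enable-WSManCredSSP -Role Server -Force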

My manager tells me this stuff needs to be run by any monkey they hire in ops! Just kidding, I work for myself :) But I’m inventing a couple of sensible requirements here. I want a modular solution which I can run from a “controller” script. I also want my solution to work no matter how many Distributed Cache servers I have in the Farm. To do this, I’m going to break out the key functionality I need into three scripts:

  1. Set-DistributedCacheIdentity.ps1 – runs on a Distributed Cache host and configures the service account
  2. Remove-DistributedCache.ps1 – removes a server as a Distributed Cache host
  3. Add-DistributedCache.ps1 – adds a server as a Distributed Cache host

I also need another three scripts to deal with the Cache Size:

  1. Update-DistributedCacheSize.ps1 – runs on a Distributed Cache host and sets the cache size
  2. Stop-DistributedCacheCluster.ps1 – stops all Distributed Cache service instances in the farm
  3. Start-DistributedCacheCluster.ps1 – starts all Distributed Cache service instances in the farm

Obviously this could be done in “less” but by building tools rather than scripts we have a far more robust and reusable (superior) end result. There’s a bunch of additional things to build into production scripts such as exception handling, but in order to keep this focused on the task at hand, I’ve neglected that stuff (well that’s my excuse and I’m sticking to it!). Let’s take a look at the “code”:

Set-DistributedCacheIdentity.ps1

<#
.SYNOPSIS
    Sets the Distributed Cache Service Identity
    

.DESCRIPTION

    spence@harbar.net
    25/06/2015
    



.NOTES
    File Name  : Set-DistributedCacheIdentity.ps1
    Author     : Spencer Harbar (spence@harbar.net)
    Requires   : PowerShell Version 3.0  
.LINK
.PARAMETER ManagedAccountName
    The name of the Managed Account to run the Distributed Cache service as

#>
[CmdletBinding()]
#region PARAMS
param (  
    [Parameter(Mandatory=$true,
               ValueFromPipelineByPropertyName=$true,
               Position=0)]
    [ValidateNotNullorEmpty()]
    [String]
    $ManagedAccountName
) 
#endregion PARAMS

begin {
    Add-PSSnapin -Name Microsoft.SharePoint.PowerShell
}

process {
    try {
        # Safety first: if this box still has a bunch of open connections on the
        # cluster port, Deploy() will blow up. Wait (up to 5 x 30 seconds) for it to quiesce.
        $Count = 0
        $MaxCount = 5
        While ( ($Count -lt $MaxCount) -and ((Get-NetTCPConnection -LocalPort 22234 -ErrorAction SilentlyContinue).Count -gt 3) ) {
            Write-Output "$(Get-Date -Format T) : Waiting on port to free up..."
            Start-Sleep 30
            $Count++
        }
        If ((Get-NetTCPConnection -LocalPort 22234 -ErrorAction SilentlyContinue).Count -gt 3) {
            Write-Output "$(Get-Date -Format T) : Port still in use, exiting..."
            Exit
        }

        # Grab the farm and managed account...
        $Farm = Get-SPFarm
        $ManagedAccount = Get-SPManagedAccount $ManagedAccountName

        # Set the service account...
        Write-Output "$(Get-Date -Format T) : Configuring Distributed Cache Service Account..."
        Write-Output "$(Get-Date -Format T) : This will take approximately 8 minutes..."
        $DistributedCacheService = $Farm.Services | Where-Object { $_.TypeName -eq "Distributed Cache" }
        $DistributedCacheService.ProcessIdentity.CurrentIdentityType = "SpecificUser"
        $DistributedCacheService.ProcessIdentity.ManagedAccount = $ManagedAccount
        $DistributedCacheService.ProcessIdentity.Update() 
        $DistributedCacheService.ProcessIdentity.Deploy()
    }
    catch {
        Write-Output "ERROR: We failed during changing the Distributed Cache Service Account."
        $_.Exception.Message
        Exit
    }
}

end {
    Write-Output "$(Get-Date -Format T) : Distributed Cache Service Account Updated!"
}

#EOF

All I do here is a little bit of semi-defensive “programming”: if the machine we are running on has a bundle of open connections on the cluster port, then when we call Deploy() it will blow up. I avoid that by exiting. If something fails during the identity change, we will add the removed hosts back into the cluster. It doesn’t cover all bases, but it’s better than nothing in the event of failure here. This leaves the configuration as it was (except for the cache size, which will need to be re-set).

Remove-DistributedCache.ps1

<#
.SYNOPSIS
    Removes host from cache cluster
    

.DESCRIPTION

    spence@harbar.net
    25/06/2015
    
.NOTES
    File Name  : Remove-DistributedCache.ps1
    Author     : Spencer Harbar (spence@harbar.net)
    Requires   : PowerShell Version 3.0  
.LINK

#>
[CmdletBinding()]
#region PARAMS
param () 
#endregion PARAMS

begin {
    Add-PSSnapin -Name Microsoft.SharePoint.PowerShell
    $server = $env:ComputerName
}

process {
    try {
        
        Write-Output "$(Get-Date -Format T) : Removing server $server as Distributed Cache host..."
        Remove-SPDistributedCacheServiceInstance -ErrorAction Stop
   
        $Count = 0
        $MaxCount = 5
        While ( ($Count -lt $MaxCount) -and (Get-NetTCPConnection -LocalPort 22234 -ErrorAction SilentlyContinue).Count -gt 0) {
            Write-Output "$(Get-Date -Format T) : Waiting on port to free up..."
            Start-Sleep 30
            $Count++
        }
        Write-Output "$(Get-Date -Format T) : Removed server $server as Distributed Cache host!"
    }
    catch {
        Write-Output "ERROR: We failed during removing cache host cluster on $server."
        $_.Exception.Message
        Exit
    }
}

end {}

#EOF

Dead simple, this one: it removes the host from the Cache Cluster, then waits (up to two and a half minutes) for the cluster port to free up. This is something we should do to ensure later actions play nice.

Add-DistributedCache.ps1

<#
.SYNOPSIS
    Adds host to cache cluster
    

.DESCRIPTION

    spence@harbar.net
    25/06/2015
    
.NOTES
    File Name  : Add-DistributedCache.ps1
    Author     : Spencer Harbar (spence@harbar.net)
    Requires   : PowerShell Version 3.0  
.LINK

#>
[CmdletBinding()]
#region PARAMS
param () 
#endregion PARAMS

begin {
    Add-PSSnapin -Name Microsoft.SharePoint.PowerShell
    $server = $env:ComputerName
}

process {
    try {
        
        Write-Output "$(Get-Date -Format T) : Adding server $server as Distributed Cache host..."
        Add-SPDistributedCacheServiceInstance -ErrorAction Stop
        Write-Output "$(Get-Date -Format T) : Added server $server as Distributed Cache host!"
    }
    catch {
        Write-Output "ERROR: We failed during adding cache host cluster on $server."
        $_.Exception.Message
        Exit
    }
}

end {}

#EOF

As simple as it gets, we just add the server as a Distributed Cache server. Just like when we first created the Farm topology.

Update-DistributedCacheSize.ps1

<#
.SYNOPSIS
    Updates the Distributed Cache Service Cache Size


.DESCRIPTION

    Stops all Cache Hosts, then updates the size, starts all cache hosts


    spence@harbar.net
    25/06/2015
    
.NOTES
    File Name  : Update-DistributedCacheSize.ps1
    Author     : Spencer Harbar (spence@harbar.net)
    Requires   : PowerShell Version 3.0  
.LINK
.PARAMETER CacheSizeInMB
    The cache size in MB to apply to every Cache Host
.PARAMETER Credential
    The credential used for the remote CredSSP calls

#>
[CmdletBinding()]
#region PARAMS
param (
    [Parameter(Mandatory=$true,
               ValueFromPipelineByPropertyName=$true,
               Position=0)]
    [ValidateNotNullorEmpty()]
    [Int]
    $CacheSizeInMB,

    [Parameter(Mandatory=$false,
               ValueFromPipelineByPropertyName=$true,
               Position=1)]
    [ValidateNotNullorEmpty()]
    [PSCredential]
    $Credential
)
#endregion PARAMS

begin {
    Write-Output "$(Get-Date -Format T) : Initiated Distributed Cache Service Cache Size change."
    Add-PSSnapin -Name "Microsoft.SharePoint.PowerShell"
    $StopDistributedCacheScript = "$PSScriptRoot\Stop-DistributedCacheCluster.ps1"
    $StartDistributedCacheScript = "$PSScriptRoot\Start-DistributedCacheCluster.ps1"
}

process {
    try {
        # Get the servers running DC
        $DistributedCacheServers = Get-SPServer | `
                                   Where-Object {($_.ServiceInstances | ForEach TypeName) -eq "Distributed Cache"} | `
                                   ForEach Address

        # Stop service instances (the call operator copes with spaces in the path)
        & $StopDistributedCacheScript

        # Update the Cache Size
        Write-Output "$(Get-Date -Format T) : Changing Distributed Cache Service Cache Size..."
        Invoke-Command -ComputerName $DistributedCacheServers[0] -Credential $Credential -Authentication Credssp `
                       -ArgumentList $CacheSizeInMB `
                       -ScriptBlock {
                            Add-PSSnapin -Name "Microsoft.SharePoint.PowerShell"
                            Update-SPDistributedCacheSize -CacheSizeInMB ($args[0])
                        }
        # Start service instances
        & $StartDistributedCacheScript
    }
    catch {
        Write-Output "ERROR: We failed during changing the Distributed Cache Size."
        $_.Exception.Message
        Exit
    }
}

end {
    Write-Output "$(Get-Date -Format T) : Completed Distributed Cache Service Cache Size Change!"
}

This one basically stops all Distributed Cache service instances in the farm, sets the Cache Size, and then restarts the service instances. To do so it calls the following two scripts, which may be useful in other situations as well.

Stop-DistributedCacheCluster.ps1

[CmdletBinding()]
param ()

begin {
    # Load the snap-in when run standalone (no-op when called from Update-DistributedCacheSize.ps1)
    Add-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
}

process {
    Write-Output "$(Get-Date -Format T) : Stopping Distributed Cache Service Instance on all servers..."
    Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Distributed Cache" } | Stop-SPServiceInstance -Confirm:$false | Out-Null
    While (Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Distributed Cache" -and $_.Status -ne "Disabled" }) {
        Start-Sleep -Seconds 15
    }
    Write-Output "$(Get-Date -Format T) : All Distributed Cache Service Instances stopped!"
}

Start-DistributedCacheCluster.ps1

[CmdletBinding()]
param ()

begin {
    # Load the snap-in when run standalone (no-op when called from Update-DistributedCacheSize.ps1)
    Add-PSSnapin -Name Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
}

process {
    Write-Output "$(Get-Date -Format T) : Starting Distributed Cache Service Instance on all servers..."
    Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Distributed Cache" } | Start-SPServiceInstance | Out-Null
    While (Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Distributed Cache" -and $_.Status -ne "Online" }) {
        Start-Sleep -Seconds 15
    }
    Write-Output "$(Get-Date -Format T) : All Distributed Cache Service Instances started!"
}

 

With these six scripts I can use a “controller” script Update-DistributedCacheServiceIdentity.ps1, to wrap everything together. I also have another script that reports on the status of the AppFabric cluster.

<#
.SYNOPSIS
    Updates the Distributed Cache Service Identity


.DESCRIPTION

    Deals with the requirement to only call Deploy() on the service identity when 
    only a SINGLE server is running DC. Deploy() is required to correctly set the
    identity, and have a healthy Cache Cluster.


    spence@harbar.net
    25/06/2015
    
.NOTES
    File Name  : Update-DistributedCacheServiceIdentity.ps1
    Author     : Spencer Harbar (spence@harbar.net)
    Requires   : PowerShell Version 3.0  
.LINK
.PARAMETER ManagedAccountName
    The name of the Managed Account to run Distributed Cache as
.PARAMETER Credential
    The credential used for the remote CredSSP calls

#>
[CmdletBinding()]
#region PARAMS
param (
    [Parameter(Mandatory=$true,
               ValueFromPipelineByPropertyName=$true,
               Position=0)]
    [ValidateNotNullorEmpty()]
    [String]
    $ManagedAccountName,

    [Parameter(Mandatory=$false,
               ValueFromPipelineByPropertyName=$true,
               Position=1)]
    [ValidateNotNullorEmpty()]
    [PSCredential]
    $Credential
)
#endregion PARAMS

begin {
    Write-Output "$(Get-Date -Format T) : Initiated Distributed Cache Service Identity Change."
    Add-PSSnapin -Name "Microsoft.SharePoint.PowerShell"
    $RemoveDistributedCacheScript = "$PSScriptRoot\Remove-DistributedCache.ps1"
    $AddDistributedCacheScript = "$PSScriptRoot\Add-DistributedCache.ps1"
    $DistributedCacheIdentityScript = "$PSScriptRoot\Set-DistributedCacheIdentity.ps1"
}

process {
    
    # Get the servers running DC
    $DistributedCacheServers = Get-SPServer | `
                               Where-Object {($_.ServiceInstances | ForEach TypeName) -eq "Distributed Cache"} | `
                               ForEach Address

    If ($DistributedCacheServers.Count -eq 0) {
        
        # No servers running DC
        Write-Error -Message "No servers in this farm are running Distributed Cache!"
    }
    ElseIf ($DistributedCacheServers.Count -eq 1) {
        
        # A single server running DC, update the service account
        Invoke-Command -ComputerName $DistributedCacheServers -FilePath $DistributedCacheIdentityScript `
                       -ArgumentList $ManagedAccountName `
                       -Credential $Credential -Authentication Credssp
    }
    ElseIf ($DistributedCacheServers.Count -gt 1) { 

        # More than one DC server, remove all but one
        For ($i=0; $i -le ($DistributedCacheServers.Count - 2); $i++) {
            Invoke-Command -ComputerName $DistributedCacheServers[$i] -FilePath $RemoveDistributedCacheScript `
                           -Credential $Credential -Authentication Credssp
        }

        # Update service account on remaining DC server
        $ChangeIdentityServer = Get-SPServer | `
                                Where-Object {($_.ServiceInstances | ForEach TypeName) -eq "Distributed Cache"} | `
                                ForEach Address

        Invoke-Command -ComputerName $ChangeIdentityServer -FilePath $DistributedCacheIdentityScript `
                       -ArgumentList $ManagedAccountName `
                       -Credential $Credential -Authentication Credssp
       
        # Add (back) the remaining servers
        ForEach ($DistributedCacheServer in $DistributedCacheServers) {
            If ($DistributedCacheServer -ne $ChangeIdentityServer) {
                Invoke-Command -ComputerName $DistributedCacheServer -FilePath $AddDistributedCacheScript `
                               -Credential $Credential -Authentication Credssp
            }
        }
    }
}

end {
    Write-Output "$(Get-Date -Format T) : Completed Distributed Cache Service Identity Change!"
}

This basically checks if I am on a Farm with a single Distributed Cache server, in which case I simply update the account. Otherwise I implement the required model: removing all but one Distributed Cache server, updating the account, and adding back the removed servers.
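
The Get-DistributedCacheStatus.ps1 script referenced above isn't listed in this post. A minimal sketch of what such a read-only status script might look like is below; the parameter name is an assumption based on the $false the controller passes, and the Get-CacheHost output matches the sample output shown shortly:

# Get-DistributedCacheStatus.ps1 (sketch) - reports AppFabric Cache Cluster health
# Read-only use of the AppFabric cmdlets; configuration changes remain unsupported
[CmdletBinding()]
param (
    # Assumption: $false from the controller suppresses per-host configuration detail
    [Parameter(Position=0)]
    [Bool]
    $IncludeHostConfig = $false
)

Import-Module DistributedCacheAdministration
Use-CacheCluster
Get-CacheHost

if ($IncludeHostConfig) {
    Get-CacheHost | ForEach-Object {
        Get-CacheHostConfig -ComputerName $_.HostName -CachePort $_.PortNo
    }
}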

Now I can also add another layer to integrate with the Cache Size script and the AppFabric status script and have some fancy admin inputs. For example:

<#
.SYNOPSIS
    Controller for DC
    

.DESCRIPTION


    spence@harbar.net
    25/06/2015
    
.NOTES
    File Name  : Controller.ps1
    Author     : Spencer Harbar (spence@harbar.net)
    Requires   : PowerShell Version 3.0  
.LINK

#>


try {
    Clear-Host

    # the account to run the remote commands, a Shell Admin or the Install Account
    $ShellAdminAccountName = "FABRIKAM\administrator"

    # the account we wish to run DC as
    $ServiceAccountName = "FABRIKAM\sppservices"

    # one of the boxes running DC
    $DcServer = "FABSP04"

    # the cache size
    $CacheSize = 500

    # Get creds
    $Creds = Get-Credential -UserName $ShellAdminAccountName `
                            -Message "Please provide the password for the $ShellAdminAccountName account"

 
    
    Invoke-Command -ComputerName $DcServer -FilePath ".\Get-DistributedCacheStatus.ps1" `
                   -ArgumentList $false `
                   -Credential $Creds -Authentication Credssp

    .\Update-DistributedCacheServiceIdentity.ps1 -ManagedAccountName $ServiceAccountName -Credential $Creds
    
    Invoke-Command -ComputerName $DcServer -FilePath ".\Get-DistributedCacheStatus.ps1" `
                   -ArgumentList $false `
                   -Credential $Creds -Authentication Credssp

    .\Update-DistributedCacheSize.ps1 -CacheSizeInMB $CacheSize -Credential $Creds
   
}
catch {
    Write-Output "OOOPS! We failed during DC_CONTROLLER on $($env:ComputerName)."
    $_
    Exit
}

#EOF

Here’s a sample output from running the above to change the service account:

PSComputerName : FABSP04
RunspaceId     : c72e518e-48ee-4fcc-a272-5341eef301d0
HostName       : FABSP04.fabrikam.com
PortNo         : 22233
ServiceName    : AppFabricCachingService
Status         : Up
VersionInfo    : 3[3,3][1,3]

PSComputerName : FABSP04
RunspaceId     : c72e518e-48ee-4fcc-a272-5341eef301d0
HostName       : FABSP05.fabrikam.com
PortNo         : 22233
ServiceName    : AppFabricCachingService
Status         : Up
VersionInfo    : 3[3,3][1,3]

PSComputerName : FABSP04
RunspaceId     : c72e518e-48ee-4fcc-a272-5341eef301d0
HostName       : FABSP06.fabrikam.com
PortNo         : 22233
ServiceName    : AppFabricCachingService
Status         : Up
VersionInfo    : 3[3,3][1,3]

15:55:24 : Initiated Distributed Cache Service Identity Change.
15:55:37 : Removing server FABSP04 as Distributed Cache host...
15:55:40 : Waiting on port to free up...
15:56:10 : Waiting on port to free up...
15:56:40 : Waiting on port to free up...
15:57:10 : Waiting on port to free up...
15:57:40 : Removed server FABSP04 as Distributed Cache host!
15:57:55 : Removing server FABSP05 as Distributed Cache host...
15:57:57 : Removed server FABSP05 as Distributed Cache host!
15:58:11 : Waiting on port to free up...
15:58:41 : Waiting on port to free up...
15:59:11 : Waiting on port to free up...
15:59:41 : Waiting on port to free up...
16:00:11 : Configuring Distributed Cache Service Account...
16:00:11 : This will take approximately 8 minutes...
16:08:35 : Distributed Cache Service Account Updated!
16:08:49 : Adding server FABSP04 as Distributed Cache host...
16:08:55 : Added server FABSP04 as Distributed Cache host!
16:09:08 : Adding server FABSP05 as Distributed Cache host...
16:09:14 : Added server FABSP05 as Distributed Cache host!
16:09:14 : Completed Distributed Cache Service Identity Change!
PSComputerName : FABSP04
RunspaceId     : d3d5273b-0053-41dd-8c52-f309339260fb
HostName       : FABSP04.fabrikam.com
PortNo         : 22233
ServiceName    : AppFabricCachingService
Status         : Up
VersionInfo    : 3[3,3][1,3]

PSComputerName : FABSP04
RunspaceId     : d3d5273b-0053-41dd-8c52-f309339260fb
HostName       : FABSP05.fabrikam.com
PortNo         : 22233
ServiceName    : AppFabricCachingService
Status         : Up
VersionInfo    : 3[3,3][1,3]

PSComputerName : FABSP04
RunspaceId     : d3d5273b-0053-41dd-8c52-f309339260fb
HostName       : FABSP06.fabrikam.com
PortNo         : 22233
ServiceName    : AppFabricCachingService
Status         : Up
VersionInfo    : 3[3,3][1,3]

16:09:31 : Initiated Distributed Cache Service Cache Size change.
16:09:31 : Stopping Distributed Cache Service Instance on all servers...
16:10:01 : All Distributed Cache Service Instances stopped!
16:10:01 : Changing Distributed Cache Service Cache Size...
16:10:17 : Starting Distributed Cache Service Instance on all servers...
16:10:48 : All Distributed Cache Service Instances started!
16:10:48 : Completed Distributed Cache Service Cache Size Change! 
Note the all-important status of Up for each Cache Host. Also note how long it takes to do everything in a Farm with three Distributed Cache servers – around 16 minutes.

Note: I have a DSC version of this tooling and it took less than an hour to create from the above. Whilst DSC is certainly *the* way to go, it is not yet ready for prime time with SharePoint and of course has considerable platform dependencies. The point here is if you make modular tools, it’s a snap to move them over to DSC resources.

Now, with these assets my run book can be:

  1. Log on to a SharePoint Server with a Shell Admin account
  2. Run Update-DistributedCacheServiceIdentity.ps1, providing the name of the Managed Account and Credentials
  3. Run Update-DistributedCacheSize.ps1, providing the Cache Size and Credentials

Pretty simple. Now in the real world that run book would have a bunch of other things such as “don’t be running this whilst Windows Server Automatic Maintenance (TIWorker.exe) is running” and so on, but they are entirely superfluous to the concepts being discussed here.
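
If you'd rather encode that particular check than leave it as run book prose, a cheap pre-flight sketch:

# Bail out if Windows Server Automatic Maintenance (TiWorker.exe) is busy on this server
if (Get-Process -Name TiWorker -ErrorAction SilentlyContinue) {
    Write-Output "Automatic Maintenance is running; try again later."
    Exit
}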

 

Conclusion

As is often the case with a complex product like SharePoint and its myriad foundational technologies, even simple things like changing a service account can have distinct challenges and “devil in the detail”. Because of this, understanding the playbook is paramount to running successful operational service management for SharePoint farms. Unfortunately, this area of guidance is sorely lacking, and indeed the practice discipline itself is very immature across the industry. I deliberately picked a very simple (in the scheme of things) example to demonstrate the “playbook imperative”; there are a multitude of much more complex scenarios which SharePoint administrators face frequently. Sadly, most “experts” bail after a project is complete and never truly get to understand the real world of OSM. Far too frequently, customers also fail to invest in this area adequately.

As more and more of our administration models move to Windows PowerShell, perhaps one of the best things out of Redmond in the last ten years, it is long past time that SharePoint the product, and SharePoint practitioners in general, embraced the idea of tools, not scripts. This after all is a driving force of the PowerShell Manifesto and a primary benefit of PowerShell. Not the ability to run a cmdlet, but to combine them together in interesting ways. Just like a band sounds pretty terrible when just one guy is beating the drums, but when the rest of the rhythm section joins in, beautiful things happen. Building tools is all about understanding the playbook. In some cases, you simply must have one. In others it will merely save you days of effort when building the tools.

I implore Microsoft to invest in providing better documentation and guidance, and pretty please share at least some of their operational service management experiences from operating the planet’s biggest and baddest SharePoint.

Oh, and I’ve shown you how you can change the service account for Distributed Cache, so that it works. In a real server farm instead of a single box.

Get your tools on.

 

s.

Configuring Kerberos Constrained Delegation with Protocol Transition and the Claims to Windows Token Service using Windows PowerShell

posted @ Tuesday, June 02, 2015 9:05 PM | Feedback (0)

Recently I’ve done a few pieces of work with SharePoint 2013 Business Intelligence and I have also delivered the “legendary”* Kerberos and Claims to Windows Service talk a few times this year. This reminded me to post my Windows PowerShell snippets for the required Active Directory configuration.

This topic area is perhaps one of the most misunderstood areas of SharePoint Server, and there is an utterly staggering amount of misinformation, out of date information, single server documentation and good old fashioned 100% bullshit out there. That’s a surprise with SharePoint stuff, huh?

Every guide or document out there that I could find talks to configuring Delegation using Active Directory Users and Computers (ADUC). They all also reference configuring Local Security Policy manually, or via Group Policy (without providing the details).

Of course there’s nothing wrong with doing it that way, and it sure makes for a better explanation of the concepts. However back in 2009 when we were working on pre-release materials I put together some Windows PowerShell to achieve the same configuration. So here they are in all their very simple glory.

* “Legendary” – I don’t know about that so much, but the Kerberos talks and in particular the AuthN+Z module of the SharePoint 2007, 2010 and 2013 MCM programs were recently described to me as such by five different SharePoint luminaries with rock solid credibility. Those people know who they are.

Every time I give this talk I get hassled for the “magic scripts”. They aren’t magic, but they always seem to surprise people as there is a misconception that delegation settings cannot be set using Windows PowerShell!

As you should be aware, in order to configure identity delegation for a Web Application in Claims mode within SharePoint Server 2010 or 2013 we must configure Kerberos Constrained Delegation with Protocol Transition. No ifs, no buts. It’s the only way it can work because in Claims mode there is no identity with which to perform either impersonation, basic delegation or true Constrained Delegation using Kerberos.

Thus, we make use of a component of Windows Identity Foundation, the Claims to Windows Token Service (C2WTS), to mock real delegation using a Windows Logon Token. C2WTS itself makes use of Service For User (S4U). S4U does NOT perform real delegation; it cannot, because there are no user credentials to delegate. It instead grabs a bunch of SIDs for the user (in this case a service identity). What all this means is that there is a hard requirement to use Protocol Transition. Protocol Transition is named in the UI of ADUC as “Use any authentication protocol”.

Thus, in order to set things up, our settings in Active Directory for the C2WTS service identity and the application pool identity of the service application endpoint must be configured to perform Kerberos Constrained Delegation using Protocol Transition to the back end services.

In the example below I am allowing the C2WTS account to delegate to SQL Server Database Services and SQL Server Analysis Services using the SPNs which already exist on their service accounts. I of course repeat the exact same configuration on the application pool identity of the service application endpoint.

[Screenshot: the Delegation tab in ADUC, showing constrained delegation to the SQL Server and Analysis Services SPNs with “Use any authentication protocol” selected]

In order to complete this configuration using ADUC we are told we must create a “mock” or “fake” Service Principal Name (SPN) on the accounts first. Otherwise the Delegation tab in the account properties does not show up.

The reality is we can easily configure the attributes we are interested in using ADUC in Advanced Features mode, or ADSIEdit. However, there must be an SPN for the delegation to succeed. So it’s not a “mock” SPN at all. It’s not just about exposing the delegation tab. We must have an SPN!

It’s a complete breeze to configure the same settings using the Active Directory module for Windows PowerShell.

  • The services to delegate to are exposed by the AD schema extended attribute msDS-AllowedToDelegateTo. This can be manipulated using the standard Set-ADUser –Add pattern. As can the SPN itself. 
     
  • The setting for Protocol Transition is actually a UserAccountControl attribute. Its enumeration is ADS_UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION, or 16777216 (note that ADS_UF_TRUSTED_FOR_DELEGATION, 524288, is the Kerberos-only flag). Remember this attribute is a cumulative bitmask. But the thing is we DON’T need to care! We don’t need some stinky “library” or utility function to manage the bitmask stuff or any of that noise. It can all be handled with the Set-ADAccountControl cmdlet with the –TrustedToAuthForDelegation parameter.
     
  • Note TrustedToAuthForDelegation == Protocol Transition, –TrustedForDelegation == Kerberos Only

And that’s it. Two cmdlets basically. A complete snap. Now as always, there’s some slinging needed to do this neatly for real requirements and perform end to end configuration. Here’s the Windows PowerShell script I use for basic setups:

<#
    Configures accounts in Active Directory to support identity delegation
    spence@harbar.net
    February 16th 2009

    1. Configures SPNs for SQL DB and SQL AS
       - does not check for duplicates
    2. Configures SPNs for SharePoint service identities
       (C2WTS and Service App Endpoint Identity)
    3. Configures Kerberos Constrained Delegation with 
       Protocol Transition to SPNs in #2
    
#>
Import-Module ActiveDirectory

## VARS
$sqlDBaccount = "sqldb"
$sqlASaccount = "sqlas"
$c2wtsAccount = "c2wts"
$servicesAccount = "sppservices"

$c2wtsSpn = "SP/c2wts"
$servicesSpn = "SP/Services"
$sqlDbSpns = @("MSSQLSvc/fabsql1.fabrikam.com:1433", "MSSQLSvc/fabsql1:1433")
$sqlAsSpns = @("MSOLAPSvc.3/fabsql1.fabrikam.com", "MSOLAPSvc.3/fabsql1")
$delegateToSpns = $sqlDbSpns + $sqlAsSpns
## END VARS

$delegationProperty = "msDS-AllowedToDelegateTo"

Write-Host "Configuring SPNs for SQL Server Services..."
$account = Get-ADUser $sqlDBaccount
$sqlDbSpns | %  {Set-AdUser -Identity $account -ServicePrincipalNames @{Add=$_}}
$account = Get-ADUser $sqlASaccount
$sqlAsSpns | %  {Set-AdUser -Identity $account -ServicePrincipalNames @{Add=$_}}

function ConfigKCDwPT($account, $spn) {
    $account = Get-ADUser $account
    $account | Set-ADUser -ServicePrincipalNames @{Add=$spn}
    $account  | Set-ADObject -add @{$delegationProperty=$delegateToSpns}
    Set-ADAccountControl $account -TrustedToAuthForDelegation $true
}

Write-Host "Configuring KCDwPT for C2WTS and Services Account..."
ConfigKCDwPT $c2wtsAccount $c2wtsSpn
ConfigKCDwPT $servicesAccount $servicesSpn

Write-Host "KCDwPT configuration complete!"
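
To sanity-check the result, something like this will show the attributes the script just set (using the c2wts account from the variables above):

# Verify the SPN, delegation targets and protocol transition flag landed
Get-ADUser c2wts -Properties ServicePrincipalName, "msDS-AllowedToDelegateTo", TrustedToAuthForDelegation |
    Select-Object Name, TrustedToAuthForDelegation, ServicePrincipalName, "msDS-AllowedToDelegateTo"

TrustedToAuthForDelegation should report True, and msDS-AllowedToDelegateTo should contain the four SQL SPNs.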

OK, so that’s the AD account configuration settings all taken care of. What about the C2WTS itself?

If we run C2WTS as its default identity, LocalSystem, we don’t need to do anything. But that’s a really stupid configuration. Why? Because in a real farm you have more than one machine running C2WTS. That means multiple points of configuration (on each computer object in AD). In addition, any mistakes you make during configuration (say you fat-finger the SPN) require a machine restart for corrections to take effect. Thus there is a compromise between manageability, configuration approach and security.

The reality is that the security element of the compromise is completely null and void from a technical or information security perspective. The old arguments about TCB are now completely out-dated, and besides were invented by people who didn’t know information security and were designed for single server solutions! However, if you are unlucky enough to work with those customers with out-dated security policies it remains part of the compromise on those grounds alone.

Everyone else with any sense will change the identity to a named service account. If we do this, we also have to grant additional User Rights Assignments to the account in order for it to be able to call S4U. These are Act as part of the Operating System and Impersonate a Client after Authentication. The account must also be a member of the local Administrators group on each server on which it runs. All of this can be done via Computer Management and Local Security Policy, or properly via Group Policy.

However it’s also a complete snap to configure this stuff using Windows PowerShell, making use of an old school utility NTRights from the Windows Server Resource Kit or the Carbon library. Here’s the script:

<#
    Configures C2WTS service identity with appropriate user rights
    spence@harbar.net
    February 16th 2009

    1. Configures Local Admins membership
    2. Configures User Rights Assignments using NTRights 
       (update with path to WSRK)
    3. Configures User Rights Assignments using Carbon 
       (http://sourceforge.net/projects/morgansource/files/
       Third-Party-Sources/Carbon-1.6.0.zip/download)
    
#>
asnp Microsoft.SharePoint.PowerShell

## VARS
$user = "fabrikam\c2wts"
$CarbonDllPath = "C:\Tools\Carbon-1.6.0\Carbon\bin\Carbon.dll"
## END VARS


# adds user to local admins group
NET LOCALGROUP Administrators $user /ADD

# sets up the necessary local user rights assignments using NTRights
C:\Tools\rk\NTRights.exe +r SeImpersonatePrivilege -u $user
C:\Tools\rk\NTRights.exe +r SeTcbPrivilege -u $user

# sets up the necessary local user rights assignments using Carbon
[Reflection.Assembly]::LoadFile($CarbonDllPath)
[Carbon.Lsa]::GrantPrivileges($user, "SeImpersonatePrivilege")
[Carbon.Lsa]::GrantPrivileges($user, "SeTcbPrivilege")

Note we do NOT have to grant the c2wts account Log on as a Service, as this User Rights Assignment is granted when we change the service identity within SharePoint.

On a related note, I’ve also been asked for my snippets for managing the C2WTS process identity. TechNet has incorrect scripts for this work, which will only ever work on a single server farm (ooops!). Here’s how to change it properly, and also how to reset it back to LocalSystem (properly!).

<#
    Configures C2WTS service identity 
    spence@harbar.net
    February 16th 2009

    1. Sets dependency
    2. Sets desired process identity
#>
asnp Microsoft.SharePoint.PowerShell

## VARS
$accountName = "FABRIKAM\c2wts"
$serviceInstanceType = "Claims to Windows Token Service"
## END VARS

sc.exe config c2wts depend=CryptSvc

# configure to use a managed account
# Should use farm, otherwise in multi server farm you have an array of objects!
$farmServices = Get-SPFarm
$c2wts = $farmServices.Services | Where {$_.TypeName -eq $serviceInstanceType}
$managedAccount = Get-SPManagedAccount $accountName
$c2wts.ProcessIdentity.CurrentIdentityType = "SpecificUser";
$c2wts.ProcessIdentity.ManagedAccount = $managedAccount
$c2wts.ProcessIdentity.Update();
$c2wts.ProcessIdentity.Deploy();
$c2wts.ProcessIdentity 

# reset to local system
# Should use farm, otherwise in multi server farm you have an array of objects!
$farmServices = Get-SPFarm
$c2wts = $farmServices.Services | Where {$_.TypeName -eq $serviceInstanceType}
$c2wts.ProcessIdentity.CurrentIdentityType=0; #LocalSystem
$c2wts.ProcessIdentity.Update();
$c2wts.ProcessIdentity.Deploy();
$c2wts.ProcessIdentity 

Note I use this script to also configure the missing dependency on the Windows Service itself. We can of course start the C2WTS easily as well:

# start c2wts on server(s)
$servers = @("FABSP1", "FABSP2")
foreach ($server in $servers)
{
    Get-SPServiceInstance -Server $server | Where {$_.TypeName -eq $serviceInstanceType} | Start-SPServiceInstance
}

Nice and easy. No pointy clickity click click or “Working on it…” needed. The entire end to end configuration in Windows PowerShell takes less than 90 seconds.

 

 

s.

Insight to what’s going on, information keeps us strong

posted @ Sunday, April 05, 2015 1:25 PM | Feedback (1)

…what you don’t know can hurt you bad, take it from me you’ll be walkin’ around sad.

Great tune, but Terry Lewis’ bass can’t help you or your customers when Office 365 hits the skids. Most of you will now be familiar with the common valid arguments against “cloud” services such as Office 365, particularly those from the enterprise. However one of the common invalid arguments is around service availability and reliability. I can’t count the number of times I have had this conversation with customers over the last two years or so. In almost all cases it’s a completely pointless discussion. However it does also point to a seriously important latent and festering customer concern.

Seriously, the idea that the Contoso Corporation or the outsourced service delivery organisation they sub to can operate and maintain a better level of service than Office 365 is a joke. Of course it’s healthy for customers to be somewhat cynical – let’s face it, ten years ago if you had suggested that Microsoft (yes them of LAN Manager fame) could operate a large scale software as a service hosting operation you’d be laughed out of your job in a nanosecond. But a lot happens in a decade of “change”. Microsoft easily has the best service level of any of the players in this space. And the software they are offering as a service is far more complex operationally. No one else is even close, they aren’t even playing the same game. Especially in the SharePoint space where the nearest “competitor” is a laughably pathetic comparison in terms of operational agility and plain old fashioned skilled engineering. Microsoft are Serena Williams, the competition is any other WTA player.

However, even with those facts, which are easily discovered and proven, the discussions still come up. “Can they meet our availability targets?”, “How often does it go bang?”, “What happens when they mess up their SSL certificate renewals?”, and the ever amusing “can they handle our throughput targets?”.  When those same questions are asked of the alternative, which in this context is some “internal” IT, the general response is “hmm, good point, do we measure that?”. And that’s what this post is all about. Measurement.

Office 365 has a financially backed SLA. Which is a really good thing. So good in fact that now pretty much everyone expects it. But just like any other SLA, it’s as much use as a chocolate fireguard without measurement. And this is the area where Office 365 today has a significant operational weakness. There is nowhere near enough transparency around operations.

As someone who has been involved in the building from scratch of two of the largest managed operations infrastructures in Europe I am plenty familiar with just how difficult it is to do this well. Even more so when the actual service is evolving at breakneck pace and new features are being added practically on a weekly basis. Being hard isn’t an excuse though, it’s a fundamental infrastructure pre-requisite to doing managed services. Period.

We all know that outages happen; indeed customers in general are OK with that – which is just as well, because they have to be. What *really* gets their goat up is not knowing about them, and having to spend much more money than they will ever get back on the SLA fielding support calls from angry end users who can’t do their jobs due to the outage.

I’m sure you’ve all seen the Office 365 service health dashboard. Yeah, that. It is useful, but it’s not what is needed in terms of transparency nor in execution. For all the major Office 365 service outages that have made the headlines over the past 18 months or so, customers reported that the dashboard often didn’t include any information on the outages, and furthermore Microsoft have generally been poor at detailing root causes (something they are getting better at on a monthly basis).

Now of course, it’s pretty stupid to use the same service to deliver monitoring information that you are monitoring, and that’s not what Office 365 do, even if it appears that way in the user interface. But there is also an ownership consideration for customers. You can’t just buy a cloud service and expect the promised utopia. Just like government it’s up to us to hold them to account for their actions.

But wait. If one is to go “all in” with Office 365 does that mean we have to invest in expensive, complex, hard to manage OSM tooling so we will know if we’re getting the service we paid for?

No.

I’m sure plenty are familiar with standard web site uptime trackers and “pingers”. What if there was an exceptionally easy way to hit up your Office 365 services with an external and independent monitoring service that concentrates on the core information? And it came from a source that you could absolutely, without any doubt whatsoever trust. That would be sweet right? How about if it was *really* cheap?

[Image: Office 365 Monitor]

Office 365 Monitor does exactly that. And it’s awesome.


I don’t do ISV product posts. Unless I’m slagging off Anti Virus vendors :). It’s mainly because there aren’t many good ones. Certainly not ones I'm willing to put my name behind. This one is an exception, and more importantly – this is a service that every Office 365 customer on the planet should be using. It’s that important. It’s good for customers, it’s good for Microsoft and it’s good for what matters most, our users.

Office 365 Monitor gives you the basic ability to monitor (24x7) SharePoint Online sites and Exchange Online Mailboxes, providing you with email or SMS notifications of outages. That’s the basic service offering. Over time additional Office 365 resources will likely be added. It doesn’t sound like much, but that’s part of the beauty of the solution – it focuses on what’s important, rather than featureitis. Clean, simple and smart.

It’s a complete breeze to set up: simply sign in with your tenant logon and add resources to monitor via an x-app consent model. No extra usernames or passwords, just the way it should be done.

The probes and alerts will be telling you of outages before your users trash your helpdesk, and in many cases before Office 365 itself knows there is one. It will also alert you in real time when the service is restored.

There are also additional premium features which provide historical data on both outages and health checks – which give response times and status. There will be more features in this area coming soon. Data exports, cross tenant comparisons, averages – all that good stuff you always wanted your MOM to give you but never had the time to set up. Plus a comparison to the Office 365 SLA everyone is so keen on talking about.

Now, does this solve all of the things I was rambling about in the longwinded pre-amble? No, of course not. Microsoft themselves need to do a much better job on transparency and reporting. Which is something that they are working hard on (yes, they really are). The important point here is no matter what they do, independent monitoring will always be necessary. Furthermore, if such tooling as Office 365 Monitor forces Microsoft to invest more in OSM – that’s right on the money.

Did I mention it’s easy to set up? One form. Done. Boom. It takes about 45 seconds. Sound a bit tricky? Check out the video over on YouTube for a feature walkthrough and setup example.

There’s no setup.exe, there’s no hardware, and there’s no cost. Yup, that’s right – it’s free.

I spend a significant portion of my time with customers helping them navigate the operational reality of the cloud services they have purchased or wish to purchase. No matter what feature offering or customisation angst is in play, the truth on the ground is paramount. Office 365 Monitor provides me with a service that greatly assists in this area, the single biggest IT consideration aside from security for software as a service. Every Office 365 customer needs this service. Period.

Seriously if you are working with Office 365 in any capacity, quit watching that online training course on AngularJS, and go check out Office365mon.com. Be in the know, be in control. Get the knowledge.

Updated ULS Viewer

posted @ Saturday, August 23, 2014 11:44 AM | Feedback (0)

If you haven’t already grabbed it, just a quick note to let you know that Microsoft put an update of the ULS Viewer tool out recently. For quite a while the tool had been removed from code.msdn.microsoft.com and those who had “lost” a copy had to resort to annoying others to get it.

ULS Viewer, as I’ve written previously, is an essential tool for working with SharePoint. The new version has a number of tweaks, including viewing across a farm rather than having to configure that up manually.

Go get it!

ULS Viewer download

Bill’s post on the update

 

Props to Dan in particular!

 

s.

Support for SQL Server Always On Async Replication with SharePoint 2013

posted @ Thursday, March 20, 2014 10:19 PM | Feedback (0)

One of the most significant “IT Pro” or infrastructure related announcements at the recent SharePoint Conference in Las Vegas was related to a change in supportability for using SQL Server Always On for SharePoint databases, and in particular the use of Asynchronous replication in Business Continuity Management (BCM) scenarios.

This is a HUGE deal. Of course, it’s not sexy, it doesn’t directly provide SharePoint IT Pros with a new tool in their belt, and it doesn’t expand deployment scenarios like the announcement relating to 1TB site collections in Office 365. However it is perhaps the single most important piece of infrastructure related information in the entire show for customers and partners operating real farms.

I tend to avoid pimping such announcements unless they have real impact; this is one of the cases where the “news” deserves much broader exposure. Let me be entirely clear, I was in no way involved in this change, nor in any of the work done to achieve the end goal. My only, and extremely minor, contribution was to provide feedback over the last year or so that such supportability would be greatly appreciated and would extend the ability of organisations to implement appropriate and durable Operational Service Management for SharePoint deployments. My only role here is to help get the word out.

Extreme props are due here to the SharePoint Product Group, and specifically the Office 365 Customer Advisory Team (CAT), for making this happen. When community naysayers or indeed customers complain unfairly about Microsoft’s commitment to on-premises and the SharePoint IT Pro in particular, it really gets my goat up. Sometimes the criticism is valid, but that is very much the exception to the rule, and this work is a demonstration of just how deeply committed Microsoft is to its customers and the partners that support them in the marketplace.

Here follows a Q&A on the details of the supportability change, and its impact on your designs or implementations.

Q. Which databases support which Always On replication modes?

A.

| Database | Sync Supported | Async Supported | Comments |
| --- | --- | --- | --- |
| Central Admin Content | Yes | No | Farm specific database |
| App Management | Yes | Yes |  |
| BDC | Yes | Yes |  |
| Farm Configuration | Yes | No | Farm specific database |
| Content | Yes | Yes |  |
| Managed Metadata | Yes | Yes |  |
| PerformancePoint | Yes | Yes |  |
| PowerPivot | Not Tested | Not Tested | TBD, goal is to be supported. Additional work in progress |
| Project | Yes | Yes |  |
| Search Analytic Reporting | Yes | No | See Search notes below |
| Search Admin | Yes | No | See Search notes below |
| Search Crawl | Yes | No | See Search notes below |
| Search Links | Yes | No | See Search notes below |
| Secure Store | Yes | Yes |  |
| State Service | Yes | No | Farm specific database |
| Subscription Settings | Yes | Yes |  |
| Translation Services | Yes | Yes |  |
| UPA Profile | Yes | Yes |  |
| UPA Social | Yes | Yes |  |
| UPA Sync | Yes | No | Backup and restore or recreate, see UPA notes below |
| Usage | Yes – NR | No | Farm specific and unsupported for attach DR. Could be used for data mining only. |
| Word Automation | Yes | Yes |  |

Commentary: that’s a lot of green “yes”! A few notes are present. Clearly farm specific databases do not support Async, and to do so would be pointless in a DR scenario anyway, as they store transient or dynamically generated data. There are a couple of stand-outs which require further consideration (Search and UPA), addressed below in more detail.

 

Q. Which version of SharePoint was tested?

A. The testing was carried out on SharePoint 2013 April 2014 CU (Post SP1). Get your farms patched people!

Q. Can I implement Always On Async replication on a version of SharePoint 2013 prior to the tested version?

A. While we have every reason to believe that the async replication capabilities will work just fine on a version of SharePoint 2013 prior to the tested version, we do not recommend it. This does not make it unsupported, however; just consider that in the event of raising a support case, your customer is likely to be asked to install the SharePoint 2013 April CU if the support case is in any way related.

Q. Can I implement Always On Async replication on a version of SharePoint prior to SharePoint 2013?

A. We have not tested or considered support for prior versions of SharePoint, and as such the official stance here has to be: unsupported on versions prior to SharePoint 2013.

Q. What has changed in the product to support Always On Async replication in SharePoint 2013?

A. Nothing has fundamentally changed in the data transfer or connection layer. We have added a new property to the SPDatabase object: AlwaysOnAvailabilityGroup, which is populated when you execute the new PowerShell command Add-DatabaseToAvailabilityGroup.

 

Q. What new commands have been added to support Always On in SharePoint 2013?

A. There are three new commands:

Add-DatabaseToAvailabilityGroup – adds a database to an availability group, by database and availability group name

Remove-DatabaseFromAvailabilityGroup – removes a database from an availability group, with options to prevent data loss on the secondaries and to force the removal if needed

Get-AvailabilityGroupStatus – interrogates a known availability group and reports which SQL nodes it uses and their replication status
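
By way of illustration only, a sketch using these cmdlets (the availability group and database names are made up; parameter names follow the descriptions above):

Add-PSSnapin Microsoft.SharePoint.PowerShell

# Register a content database with an availability group and check replication status
Add-DatabaseToAvailabilityGroup -AGName "SPAG01" -DatabaseName "WSS_Content_Intranet"
Get-AvailabilityGroupStatus -Identity "SPAG01"

# Later, take it out again, keeping the data on the secondaries
Remove-DatabaseFromAvailabilityGroup -AGName "SPAG01" -DatabaseName "WSS_Content_Intranet" -KeepSecondaryData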

Q. Are there any known limitations when using availability groups in sync or async mode?

A. There are multiple reports of being unable to create new databases against listener names, or of difficulties patching farms, when using Sync and Async replication modes. Work is ongoing to document the limitations on TechNet in the coming months.

 

Special considerations for the UPA Sync database

In the table above, backup and restore (or recreate) are noted as the appropriate steps to take in a BCM scenario for this particular database. This database has its schema and some data provisioned at the point of starting the User Profile Synchronization service instance. In addition, when updates are applied, any changes to its schema are only provisioned following a service instance restart. This means that Async replication mode is not appropriate and indeed offers no value. There is a compromise to be made with the backup/restore approach, which involves other components such as encryption keys and certificates. For many customers this is a significant burden and they will choose the “cleanest” approach, which is to simply recreate and then provision the UPS service instance. Be aware however that this approach requires a reconfiguration of any Sync connections, filters, and additional work with custom property mappings.
In the table above, backup and restore (or recreate) are noted as the appropriate steps to take in a BCM scenario for this particular database. This database has it’s schema and some data provisioned at the point of starting the User Profile Synchronization service instance. In addition when updates are applied any changes to it’s schema are only provisioned following a service instance restart. This means that Async replication mode is not appropriate and indeed offers no value. There is a compromise to be made between the backup/restore approach, which involves other components such as encryption keys and certificates. For many customers this is a significant burden and they will chose the “cleanest” approach which is to simply recreate and then provision the UPS service instance. Be aware however that this approach requires a reconfiguration of any Sync connections, filters and additional work with custom property mappings.

 

 

Special considerations for the Search service application databases.

You will note that the search databases are specifically called out as not supported for Async replication. This is due to the requirement to maintain synchronization between the search index files on disk and the search databases. With async replication this coordination cannot be guaranteed and the possibility of search index corruption or certainly instability is extremely high.

Thus the question remains, what to do about search?

If you cast your mind back to SharePoint 2010 we had a couple of options.

1. For a high degree of search freshness you could crawl read only databases on the DR side or crawl the production farm from a DR search service application

2. Use a log shipped copy of the search admin database to recreate the search service in DR and re-crawl the content, this has the advantage of bringing over the search configuration but the index needs to be rebuilt

3. Backup and Restore of the search service application. This is a high fidelity restore but may be unacceptable due to an extended restore time.

Two fundamental differences exist between SharePoint 2010 and SharePoint 2013 in this regard. In SharePoint 2013, several enhancements have been made to support tenant and site level search administration, and this complicates the search DR story. These enhancements, while surfaced at the site collection and web level, actually have their configuration stored in the search administration database. The second key change is the use of the search engine to process and manage analytical data, augmenting the search index and the relevance of search results based on usage statistics and click-throughs. This information resides inside the search index itself.

These changes give us a challenge for DR – The information in the search administration database can be retained for DR by recreating the service application from a copy of the production search admin database – similar to how we did this task in SP2010. The downside is that we cannot constantly update the DR side and so this has to be done at the point of failover followed by a full crawl. The analytics information however cannot be replicated in anyway except by a full service application backup and restore. This means we have different options for search DR in Sp2013 each with its own pros and cons.

 

Option 1: Crawl read-only DBs in DR
Advantage: Can maintain a high degree of search content freshness at the point of failover.
Disadvantage: Cannot maintain search configuration settings below service app level; also loses the analytical influences from production.

Option 2: Recreate the service app in DR from the admin database
Advantage: Brings across all the search configuration settings.
Disadvantage: Have to re-crawl all content; also loses the analytical influences from production.

Option 3: Search service application backup and restore
Advantage: Brings across all the search settings and the analytical influences.
Disadvantage: May take a while to get search operational; search is offline until the restore completes.

Depending on your requirements for search you will need to make a selection from the above, or alternatively use a combination. For example, if your primary need for search is to discover content with a high degree of freshness, you could select the first option. At the same time, on the off chance that failing back to production is not possible, perhaps due to a major incident, you could also take a backup from production periodically, allowing a full fidelity restore in DR after a trigger period of a number of hours after failover. A sketch of such a periodic backup follows below.
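
Such a backup can be taken with Backup-SPFarm scoped to the search service application. This is a sketch only; the -Item path depends on the display name of your service application, so use -ShowTree first to confirm it, and the file share is hypothetical.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Confirm the exact item path for your farm before scoping the backup
Backup-SPFarm -ShowTree

# Full backup of just the search service application to a file share
Backup-SPFarm -Directory \\BackupServer\Backups -BackupMethod Full `
    -Item "Farm\Shared Services\Shared Services Applications\Search Service Application"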

SharePoint CAT is working on a more in-depth whitepaper describing the steps for search DR; it will be published in due course.

 

So there you go. If you are in the business of designing or providing BCM for enterprise SharePoint deployments, you will surely agree that the importance of these changes cannot be overstated. If you want more information on the background of this space, and to re-live the actual announcement, head on over to Channel 9 (yes, it's for IT geeks as well) and watch my good buddy, Neil Hodgkinson, deliver the goods in his excellent presentation on BCM for SharePoint 2013.

I’m outta here like I just failed over my data centre.

s.

Online Workshop: SharePoint Advanced Infrastructure: Distributed Cache

posted @ Wednesday, February 19, 2014 10:26 AM | Feedback (1)

Audience: SharePoint Administrators, Infrastructure Architects and Support Professionals.

The esteemed Microsoft Certified Master certification is no longer obtainable... but you can still get master-class mentoring through our collection of Advanced Workshops. Delivered by one of the world's foremost SharePoint authorities, this workshop is a rare opportunity to learn from a recognised master in the field.

This module provides 360-degree coverage of Distributed Cache, the new foundational and prerequisite service instance in SharePoint 2013, which is an implementation of Windows Server AppFabric Caching and provides in-memory caching across a farm.

Understand the background of this service, its architecture and usage within SharePoint 2013, along with design constraints and acceptable usage scenarios. Get to grips with the topology trade-offs for implementation, including the vendor guidance and the practical reality from real-world deployments. Core configuration and administration practices will be covered, along with how to manage updates; a sketch of the basic administration cmdlets is shown below. This module will also delve into troubleshooting a misbehaving and/or broken cache instance.
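
By way of a taster, day-to-day administration is driven by a handful of cmdlets. The following is a minimal sketch, and the cache size value is purely illustrative; the right size depends on your farm topology and available memory.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Add or remove the Distributed Cache service instance on the current server
Add-SPDistributedCacheServiceInstance
Remove-SPDistributedCacheServiceInstance

# Resize the cache (in MB); run after stopping the service on all cache hosts
Update-SPDistributedCacheSize -CacheSizeInMB 2048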

Clearing up some of the myths and outdated information on a key service which commonly causes confusion and operational headaches, this module delivers all-up, “level 400” material for the SharePoint administrator, Infrastructure Architect or support professional.

Sign up over at http://www.combined-knowledge.com/Webinars/advanced_inf_workshop.html

Da Big Daddy: SharePoint Conference 2014

posted @ Monday, January 20, 2014 12:32 PM | Feedback (0)

[Updated 19/02 with session timeslots and an additional session]

Vegas, March, 10,000 SharePoint people. What could possibly go wrong?!

The big daddy is back. SharePoint Conference 2014 promises to be another great event, and I am once again happy to be speaking at the biggest SharePoint conference on the planet. I get asked a lot about which conferences are worth the money and so on, and of course, being the official show with the associated travel and accommodation expenses, SPC is often in the “requires justification” bucket. Seriously, there is no debate: it’s worth it. If you are in the SharePoint business you absolutely have to be there. If you are a customer, there is an incredible wealth of material and networking which will benefit your deployments. Seriously consider attending. Yes, I know it’s Vegas, with its pros (I’ve been told there are some!) and its not inconsiderable cons. However, forget all that and concentrate on the conference itself and you will be very happy with the return, and have some fun to boot.

I’ll be again delivering three sessions this year, the abstracts of which I’ve copied below and added a little side commentary. In addition to these sessions, I will generally be loitering around the Infrastructure related tracks and available at the expo hall and Ask the Experts events.

406: Comprehensive User Profile Synchronization
Thursday, March 6, 2014, 12:00 PM-1:15 PM - Lido 3001-3103
Discover the changes and new capabilities in the foundational service for user discovery in deployments of SharePoint Server 2013, and get the real deal on configuring User Profile Synchronization in this demo and best practices heavy session. This session will cover the architecture of the User Profile Service Application and the new AD Direct Mode, and provide a walkthrough of the configuration requirements and setup. We will also focus on the related architectural considerations for high availability, scalability and geographic deployments. Also covered will be general UPA related best practices in terms of synchronization, policy and privacy, and leveraging social features inside the enterprise.

This is the return of the highly popular UPS session. Hmm, not too sure how I feel about this topic after so long, but as it’s the content you requested I will try my best to retain the original flavour but also include some new and/or updated content. Remember though, no matter the technology, I will be imparting the reality of identity management!

411: Office 365 identity federation using Windows Azure and Windows Azure Active Directory
Tuesday, March 4, 2014, 9:00 AM-10:15 AM - Bellini 2001-2106
Windows Azure provides a compelling solution for deploying infrastructure for directory synchronization and identity federation with Office 365, while avoiding the cost of an on-premises implementation plus required operations. This session will present an end-to-end scenario leveraging Windows Azure and Windows Azure Active Directory, along with the Active Directory Sync tool and Active Directory Federation Services (ADFS) to provide the richest identity experience with Office 365. We'll also share and show tips and tricks to help automate deployment and accelerate your Office 365 Identity integration.

Brand new content exclusively for the SharePoint Conference. I’ve been told it’s a bad idea to debut content at SPC. That’s never stopped me before! This promises to be a fun session; I just hope the “live” aspects of the demonstration don’t fail me. The end to end scenario should provide a very interesting approach to dealing with the identity federation pieces of O365 for small to mid-sized customers.

418: Subordinate integrity: Certificates for SharePoint 2013
Wednesday, March 5, 2014, 5:00 PM-6:15 PM  - Lido 3001-3103
Certificates play an increasingly important role in SharePoint 2013. As well as protecting data, they are one of the fundamental building blocks of service interaction between service applications and across products such as Lync and Exchange. With the introduction of Office Web Apps 2013 and Workflow Manager as well as the Cloud App Model, certificates are now effectively a pre-requisite for a complete SharePoint deployment. This session will de-mystify which types of certificate are needed, where they are used and how best customers can manage the certificate lifecycle for SharePoint 2013, Office Web Apps and Workflow Manager. Also covered will be client browser considerations and approaches to automating configuration with Windows PowerShell, along with certificate requirements for Office 365.

Brand new for SharePoint Conference. Another debut session, this time on the somewhat murky and clearly confusing topic of Certificates. I get asked about this topic all the time by customers and SharePoint practitioners, and also outwith the SharePoint domain completely. It’s great to see this type of content make it to the show, which brings me onto…

356: Designing, deploying, and managing Workflow Manager farms
Wednesday, March 5, 2014, 10:45 AM-12:00 PM - Lido 3001-3103
Workflow Manager is a new product that provides support for SharePoint 2013 workflows. This session will look at the architecture and design considerations that are required for deploying a Workflow Manager farm. We will also examine the business continuity management options to provide high availability and disaster recovery scenarios. If you want a deep dive on how Workflow Manager works, then this is the session for you.

This one is a co-present with my good friend, and fellow MCA and MCSM Wictor Wilén, which will dive into the IT Pro focused considerations for deployment of Workflow Manager after some architecture and design coverage. This should be a fun session and is commonly requested material in the marketplace both from customers and SharePoint practitioners.

There’s been a bunch of hoopla on the tubez about “is on premises dead” and all that hogwash. SPC shows the commitment Microsoft and SharePoint have to the customers with which they won the data centre, both on premises and on the journey to the cloud. Take a look at the other sessions over on the official web site. Something for everyone, and tons of practical real world sessions as well. It promises to be a great event.

Viva Las Vegas.

 

s.

Updated Antivirus for SharePoint 2013 options

posted @ Tuesday, August 20, 2013 11:42 PM | Feedback (0)

Just a quick note to let you know I’ve updated my Antivirus and SharePoint 2013 post with the details of all the currently available options. Instead of the *single* option we had shortly after RTM, there are now four options, with hopefully another one in the near future.

 

s.

SharePoint & Exchange Forum: somewhere in the Baltic Sea!

posted @ Thursday, August 08, 2013 9:36 AM | Feedback (0)

I’m delighted to announce that I will again be speaking at the excellent SharePoint and Exchange Forum (SEF), coming up September 30th through October 2nd. This year, the 10th anniversary, will be a little bit extra special as it’s taking place on board the Silja Symphony, a (rather large) cruise ship running between Stockholm and Helsinki.

SEF is always an excellent event, with a great crowd, top quality speakers, great networking and superb evening entertainment.

Head on over to the SEF website to find out more, I look forward to seeing you at the event!

 

s.