harbar.net component based software & platform hygiene

European Collaboration Summit follow up: Tutorial Scripts and Identity Manager Demos

posted @ Tuesday, June 26, 2018 10:28 PM | Feedback (0)

Many thanks to everyone who attended the European Collaboration Summit in Mainz, Germany, last month. It’s safe to say that the event overall was a runaway success and yes, we have already started planning for the 2019 edition!

At the event, I promised to publish some additional resources. These are a little later than I had hoped but with a new job and a variety of “more important” things on a rather large to-do list, the delay was inevitable. At any rate, this post serves as a landing page for these resources.

Tutorial Attendees: SharePoint 2016 Automation Toolkit

For those who attended the Infrastructure PowerClass hosted by Thomas Vochten and me, the scripts we showed and discussed for building out the On Premises portions of the Hybrid Labs are over here on GitHub. This toolkit has been used to build thousands of customer farms and has also been used at other pre/post conference events such as Microsoft Ignite. More details and so on can be found in the readme. I am not offering support for this toolkit. I will answer questions on it if they are easy to deal with quickly. But I will not be engaging in any debates about its worth, nor any religious arguments about DSC! :)

The 2013 version will be published over the next couple of weeks.

Don’t worry if you are not familiar with GitHub. Within the repository page, there is the ability to download the files as a ZIP file – without having to do the whole “install some tools, clone, open” dance that developers love so much! :)

 

Identity Manager & SharePoint Server Demos

During my session on User Profiles there wasn’t enough time to complete the demonstrations. As promised, I’ve put together a 90 minute video of the complete demos, with some additional explanations and remarks. This is hosted over on YouTube. Apologies in advance for the slightly annoying sound quality. I had, erm, a slight ‘accident’ with my mic stand earlier today that will mean a trip to the music store in the near future – drat! :)

 

Identity Manager & SharePoint Server Scripts

All of the scripts used within the above demos, and indeed at other conferences, are also available over here on GitHub.


I hope some of you find these resources useful and once again, thanks for helping make #CollabSummit 2018 the single best community technology event on the planet!


 

I’m outta here like I dropped a couple tables in a production database!

s.

RPC Server Unavailable when creating a SharePoint Farm… the curse of dodgy legacy NetDOM!

posted @ Monday, June 25, 2018 1:15 PM | Feedback (0)

Every so often a real blast from the past comes back to haunt me. Usually it’s some obscure “infrastructure” gubbins – you know, the sort of thing that 80% of so-called IT Pros knew in 1999. These days though? Not so much.

With SharePoint in particular there is a whole boat load of legacy. Not that legacy is bad. Lots of it is awesome. That’s why the product remains so successful. On the other hand, some of it is real, real, real nasty! :)

It always seems to come in waves. Over the last two weeks I’ve had six emails regarding problems creating the first server in a SharePoint Farm. The ye olde “An error occurred while getting information about the user SPFarm at server fabrikam.com: The RPC server is unavailable”.

Naturally, there are a whole bunch of wild goose chases out there on the interwebz about how to potentially resolve this issue. Most of them are complete claptrap. Or even worse, a ‘support bot’ automated answer, something like “are you running as an administrator?” :)

Now this old chestnut can have a multitude of root causes and the API that raises this exception isn’t very clever about that – it just bubbles it back up the stack. As you might imagine, when SharePoint was first developed nobody was sat there running through all the various deployment scenarios and fire testing every code path to ensure a nice neat experience for the next 20+ years. But more importantly, 17-18 years ago idempotency, automation-first management and state independence were not so much of a thing as they are now. Let’s say we do it via PowerShell using New-SPConfigurationDatabase. That isn’t doing one thing. It’s doing a WHOLE BUNCH of things. All masked via the Server Side OM. The same is true if we use the Configuration Wizard. After all, they are both simply masks or wrappers for the OM.

The real problem with this error (and others like it), when creating a Farm, is that it only partly fails. The Configuration database is created on the SQL Server. It’s sitting there pretty happy. Indeed, it will have 711 Objects. New-SPConfigurationDatabase has sorta kinda worked. Even though it’s raised an exception. But it’s not really worked, and as soon as we go to the next stage of farm creation (typically Initialize-SPResourceSecurity) that will fail as well, with the error “Cannot perform security configuration because the configuration database does not exist.  You must create or join to a configuration database before doing security configuration.”

Interesting. Most people will actually check if the Farm exists after New-SPConfigurationDatabase by calling Get-SPFarm. This is mainly because the very first “build a farm using PowerShell” posts included this. However, in this case it is entirely worthless, because Get-SPFarm will report that the farm exists and that it is Online.

The sample script that virtually everyone has been following since SharePoint 2010 is entirely flawed. There is zero point in calling Get-SPFarm at this point. When New-SPConfigurationDatabase works, it appears to be a nice little bit of defensive coding, and it does actually deal with some other errors that can occur. But it won’t help in the case of bubbled-up exceptions from dependencies.
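For illustration, the pattern in question looks roughly like this – the database names, server name and credentials below are placeholders, not a recommendation:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Placeholder values throughout
New-SPConfigurationDatabase -DatabaseName "SP_Config" `
                            -DatabaseServer "sql.fabrikam.com" `
                            -AdministrationContentDatabaseName "SP_AdminContent" `
                            -Passphrase (Read-Host "Passphrase" -AsSecureString) `
                            -FarmCredentials (Get-Credential "FABRIKAM\spfarm")

# The "check" everyone copies from the original 2010-era posts.
# In the RPC failure case this is worthless: the half-created farm
# reports as existing and Online.
$Farm = Get-SPFarm -ErrorAction SilentlyContinue
if ($Farm -eq $null) { throw "Farm creation failed" }
```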

What we ACTUALLY should be doing here is restarting the PowerShell session entirely. Indeed, if you try and do anything which would otherwise be sensible, like removing and adding the Snap In – it will likely crash the process anyway. Indeed, if you just leave it sitting there for a bit, the process will crash all on its own. How do you like those apples?

This is an absolutely critical thing to realise about the Administration OM and the PowerShell that wraps it. This is why delivering true desired state configuration for SharePoint is currently impossible. The only way to deal with this failure is to catch the exception ourselves. Neither the OM nor the PowerShell cmdlets are designed or developed to be leveraged in an automation-first approach. That doesn’t mean we can’t get pretty close, but as soon as you start delving into the details, it becomes obvious just how much work is required to do all the things the back end OM doesn’t. And how exactly does one create a DSC resource that can detect a failed process, know exactly when it failed, and kick off another one to carry on? Yeah, exactly.

Again, SharePoint itself doesn’t handle the exception it just bubbles it back. It does no clean up. Our box is shafted. So to speak. Until we delete the database, and of course restart (or create a new) PowerShell session. Thus, in order to deal with this properly, we’d need to catch the specific exception and handle it appropriately. Do some remoting and kill the database (using SQL Server PowerShell), then restart our session. In other words clean up after the mess that SharePoint has made for us.
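A rough sketch of what “cleaning up after SharePoint” might look like – the database name, SQL server name and remoting details are illustrative assumptions, not a tested recipe:

```powershell
try {
    # Farm parameters splatted from earlier in the script (illustrative)
    New-SPConfigurationDatabase @FarmParams -ErrorAction Stop
}
catch {
    # SharePoint did no clean up: kill the half-built Config DB via SQL remoting
    Invoke-Command -ComputerName "sql.fabrikam.com" -ScriptBlock {
        Import-Module SqlServer
        Invoke-Sqlcmd -Query @"
ALTER DATABASE [SP_Config] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SP_Config];
"@
    }
    # The current PowerShell session is toast and will likely crash on its
    # own anyway - fix the root cause, start a NEW session, then retry.
    throw
}
```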

But of course we also need to fix the root cause of the problem before retrying.

“The RPC server is unavailable” is one of the classic, generic, “we have no idea what happened, throw this” errors. Now that we live in a modern, transformed, cloud-first world, we only have one generic error: “access denied” :). But back in the day, when error types were understood, we had lots of them.

Sometimes the RPC server really is unavailable. But when creating a SharePoint Farm there is a 98% chance your machine has multiple Network Interfaces, and the prioritised (default) NIC cannot route to the SQL Server. It looks like this.

(Screenshot: the network adapter priority list, with “Ethernet1” at the top.)

In this example “Ethernet1” is a shared network to a backup device. The customer is using this network to separate farm traffic from management traffic. Is that a good idea? Well that’s a post for another time (or perhaps not!) but the thing is it’s extremely common. Even in IaaS lots of people do this.

If I move the “10net” network to the top of the list – hey presto, no more “RPC server is unavailable”. It’s a common gotcha. The trouble is most people don’t know there even IS a NIC order, never mind where in the UI to configure it. For over a decade, in all but one case of this error where I’ve been involved either as escalation or hands on, the cause has been the NIC order. It’s in my checklist before deploying farms – not that I really do that anymore, but whatever, that checklist doesn’t get updated much.

The good news is we can fix it with PowerShell, decent PowerShell….
Set-NetIPInterface -InterfaceIndex <index> -InterfaceMetric <new metric>
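For example (the interface index and metric values here are illustrative – list your own first):

```powershell
# Lower metric = higher priority; see which NIC currently wins
Get-NetIPInterface -AddressFamily IPv4 |
    Sort-Object InterfaceMetric |
    Format-Table InterfaceIndex, InterfaceAlias, InterfaceMetric

# Promote the NIC that can actually route to the SQL Server
# (index 12 is illustrative)
Set-NetIPInterface -InterfaceIndex 12 -InterfaceMetric 5
```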

It of course shouldn’t be this way – the reason it is, is because the SharePoint APIs (especially the core Farm Admin ones) rely on some real legacy. In this case, what old timers refer to as the NetDOM stack. That’s an in-joke related to an old utility used to hack AD things, back before there was an RTM AD and Microsoft were still doing NetBIOS. Now, it works. But that is the depth of a product like this. A rabbit’s warren of more or less every API made by Microsoft from around 1996 through to today. It would take a brave (and quite possibly barking mad) person indeed to make the decision to re-architect and re-implement the core admin stack.

Anyway the point of this post was to:

  • document the 98% case cause and resolution (fix the NIC order, even if only temporarily whilst the farm is built) so I don’t have to actually explain it ever again (I hope!)
  • provide a worked example of the importance of understanding how things are actually built, rather than merely the mask of veneer that so many products have today.

Not to be all Donald Rumsfeld, but remember with most things in life, and especially SharePoint - The more you know, the more you know you don’t know. You know?! :)

s.

Welcome to our family!

posted @ Wednesday, May 16, 2018 12:10 PM

The building block of every community is a family. Welcome to our family.

See you in Mainz!

Resolving Catastrophic Distributed Cache Failures on VMWare vSphere or ESX guest virtual machines

posted @ Friday, April 27, 2018 6:48 PM | Feedback (0)

Ahh, Distributed Cache, everybody’s favourite SharePoint service instance, the most reliable and trouble-free implementation since User Profile Synchronization. I jest of course; it’s the most temperamental element of the current shipping release, not to mention the most ridiculous false dependency ever introduced into the product, and should be killed as soon as possible. However, it is extremely important to a SharePoint Farm in terms of both functionality and ensuring maximum performance. Even in simple deployments, the Search and LogonToken related caches alone can provide ~20% performance and throughput improvements.

But what to do when it’s busted? Once Distributed Cache gets out of whack all bets are off and you are best throwing away the cache cluster and starting again, cleanly. This procedure has been documented by yours truly in the past. But it doesn’t help if, no matter how many times you follow this procedure to the letter, the service doesn’t stay up. Your Cache Hosts report “DOWN” or “UNKNOWN”, and the Windows Service is either stopped, or faults immediately after starting.

This occurs often in misconfigured VMWare environments (vSphere and ESX). And it’s all down to the relationship between the guest configuration and the AppFabric configuration. As you hopefully know, SharePoint server is not supported with Dynamic Memory. Dynamic Memory is Microsoft speak for dynamically adjusting the RAM allocated to a guest. In VMWare this is actually the default configuration. The system estimates the working set for a guest based on memory usage history. Whilst the common statement is that dynamic memory will cause “issues with Distributed Cache” it’s a bit more serious than this in the real world.

Generally speaking, Guest VMs are provisioned by an infrastructure team responsible for the VM platform, and in almost all cases the request to reserve all memory will either not be understood or be ignored. This leads to the VM being configured with the defaults. If you are in control of setting up the guests then life is easy: you just ensure that the “Reserve all guest memory (All locked)” option is selected in the guest properties BEFORE you even install the SharePoint bits.
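If you do have rights on the vSphere side, the same setting can be applied with VMware PowerCLI – a sketch, assuming the PowerCLI module is installed, with placeholder vCenter and guest names:

```powershell
Connect-VIServer -Server "vcenter.fabrikam.com"

# Reserve all of the guest's memory - equivalent to ticking
# "Reserve all guest memory (All locked)" in the guest properties
$vm = Get-VM -Name "SP2016-APP1"
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -MemReservationGB $vm.MemoryGB
```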

(Screenshot: the guest’s memory settings, with “Reserve all guest memory (All locked)” ticked.)

However, if you are in a situation where this was not configured originally, and a SharePoint server on that guest is added to the farm running Distributed Cache, things will go south quickly, leading to a broken Distributed Cache. Even if you go back and select the option later, it’s hosed. The last machine which joins the cache cluster basically takes over. Just like with changing the service account – which resets the cache size. The last one wins. Or in this case, totally hoses your farm.

One of the reasons the Distributed Cache and Dynamic Memory don’t get along, aside from complete ignorance, is that the Cache Cluster Configuration is not modified based on the memory resource management scheme in place (AppFabric includes this support, but it’s not exposed via the SharePoint configuration set).

The default Cache Partition Count varies for different memory resource management schemes. So, once we change the memory scheme, we need to alter this configuration as well. If we install fresh on a correctly configured new guest, it’s set automatically.

In this situation we need to deal with it ourselves, and as always there is zero (until now) documentation on this issue. This info has been used to support hundreds of customers via CSS cases over the last few years (I don’t know why it hasn’t been published properly).

Here’s the fix:

1. You need a Cache Cluster – it doesn’t need to be working (!) but if you’ve deleted all the service instances and so on, you need to bring back a broken Distributed Cache first.

2. Use the following PowerShell to export the Cache Config to a file on disk

Use-CacheCluster
Export-CacheClusterConfig c:\distCacheConfig.xml

3. Edit the XML file so that the partitionCount is 32:

<caches partitionCount="32">

4. Save the XML file

5. Use the following PowerShell to ensure the Cache Cluster is stopped and import the modified configuration, and then Start the Cache Cluster (this time it will actually start!)

Stop-CacheCluster
# should report all hosts down

Import-CacheClusterConfig C:\distCacheConfig.xml
Start-CacheCluster

6. Once you’ve done this you can use the standard tooling to report on the cluster, or use my PowerShell module; each host in the cluster will report “UP” and you can interrogate the individual caches to verify they are being populated. You will of course need to modify your cache size if you have changed it previously, or wish to. Do it now before it starts being used!
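Steps 1–5 can be rolled together like so – a sketch which assumes the exported XML uses the usual dataCacheConfig/caches layout; verify the element names against your own export before running it:

```powershell
$path = "C:\distCacheConfig.xml"

# Export the current (broken) cluster configuration
Use-CacheCluster
Export-CacheClusterConfig $path

# Patch partitionCount to 32 in the exported XML
[xml]$config = Get-Content $path
$config.dataCacheConfig.caches.partitionCount = "32"
$config.Save($path)

# Stop, re-import and restart - this time it will actually start
Stop-CacheCluster        # should report all hosts down
Import-CacheClusterConfig $path
Start-CacheCluster
```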

 

Happy days!

 

s.

Update: Using WMU to transfer NEFs to iOS for editing etc.

posted @ Thursday, April 19, 2018 12:52 PM | Feedback (0)

In a previous post I showed Using Nikon Wireless Utility with the Nikon D500 on iOS to download NEFs to iOS, which worked for a while. However Nikon made some significant updates to SnapBridge – making it actually much better in terms of connection ease, reliability and remote photography. It’s still not much of a bridge to your snaps though – it still restricts you to the silly 2MB files and it only transfers JPEGs, which means you need to be shooting them. Rubbish. These updates also break the procedure detailed previously.

I fiddled around with qDslrDashboard – but alas this is not available in the UK iOS Store. And even on a Windows PC I could not get the machine to connect to the D500 or D850 wireless.

It turns out it’s so much easier now – still using WMU. BUT you need two devices. So a hoop jump, but here’s the updated procedure.

  1. Pair your camera with SnapBridge on device one (I'm using an iPhone) as normal.
  2. Click Remote Photography or Download Pictures – it doesn’t matter, as long as you do something to initiate the Wireless connection on the camera.  Click OK to enable the camera Wi-Fi, and wait whilst it’s started. When the Join screen appears, do NOT click Join.

    Note the Wi-Fi Network name, which will be the camera model_serial number, e.g. D850_6666666

    Leave device one on this screen.

    2018-05-10_09-38-27
  3. On device two (I’m using an iPad Pro) connect to the camera Wi-Fi using the regular iOS Settings. The password you can view from the camera menus. It will be NIKONDXXX, where XXX is your camera model. You only need to enter this the first time you connect.

    IMG_0424
  4. Once connected, open up WMU, click View Photos, then Pictures on Camera – then you can browse, select and download images from the camera cards.

    IMG_0425

 

And of course these are now available on device two for editing in Lightroom mobile or whatever. Happy days!

IMG_0428


Remember to charge your batteries!!!! :) And have a spare one or three in your pocket. Especially with the D850 these are some BIG ASS NEFs!

s.

Using PowerShell to import Profile Photos when using Active Directory Import and SharePoint Server 2013/2016/2019

posted @ Friday, February 02, 2018 7:02 PM | Feedback (2)

One of the most common requests I have received over the last couple years has been how to leverage PowerShell to get User Photos from Active Directory (or any other location really) into the SharePoint User Profile Store. With the removal of User Profile Synchronization (UPS) in SharePoint 2016 this need has increased significantly. For most mid market customers this is a key requirement, and implementing Microsoft Identity Manager (MIM) for this purpose is not practical.

I did spend a whole bunch of time before the release of SharePoint 2016 attempting to convince the powers that be that Active Directory Import (ADI) should include this capability, to ease the upgrade pain and so forth. Alas, the goal was to remove UPS, not address the gaps left by its removal.

At any rate, if the business requirements can be met by ADI, with the exception of User Photos, then MIM is absolutely NOT the right solution. From an architectural perspective, an operational management perspective, and most importantly a cost perspective, it’s just daft. Thus I have always suggested that for this requirement a simple PowerShell script, regularly scheduled, is the appropriate approach.

Now in essence such a script is simple. We iterate Active Directory, get the photos from thumbnailPhoto, then put them in the Profile Pictures folder within the My Site host. Depending upon operational requirements we can enhance this basic capability with logging and caching of the images on the file system and so forth.

Active Directory provides us with two key APIs for this work. We can use the basic APIs exposed nicely thru the ActiveDirectory PowerShell module, or we can use DirSync. DirSync is more complicated, but vastly preferable as it provides a change log as well as much more efficient operations generally. It’s actually how ADI works under the covers (another reason why it’s farcical from a technical perspective why ADI doesn't include this capability). So there’s nothing to install on the box, whereas with the AD PowerShell we have to install the RSAT tools and import the module which is generally not done on a SharePoint server.  We do however require Replicating Directory Changes in order to make use of the change log – use the very same account you use to perform ADI and you’re all set.
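As a minimal sketch of the simpler approach (the RSAT ActiveDirectory module rather than DirSync; the LDAP filter and cache path are placeholders):

```powershell
Import-Module ActiveDirectory

# Everyone with a photo set - adjust the filter to your own scoping rules
$users = Get-ADUser -LDAPFilter "(thumbnailPhoto=*)" -Properties thumbnailPhoto

foreach ($user in $users) {
    # Cache each photo on the file system before pushing to the My Site host
    $file = Join-Path "C:\PhotoCache" "$($user.SamAccountName).jpg"
    [System.IO.File]::WriteAllBytes($file, $user.thumbnailPhoto)
}
```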

But then we come to SharePoint! :) As always it gets more “interesting”….

Firstly, the “Profile Pictures” folder. That bad boy is not created until the very first profile photo is created. And it sits within the “User Photos” folder, i.e. https://mysitehost.fabrikam.com/User Photos/Profile Pictures. Somebody, somewhere, somewhen thought this was clever. It is not. Anyway, no biggie – we need to accommodate checking it exists and, if not, creating it.

Secondly, the file name of the images, before they are “translated” into usable images by Update-SPProfilePhotoStore. This is a real problem because they include a “default partition ID”. There is no public API to discover this value. There is a way to get it, but that involves calling a legacy web service which is unsupported (this is how Microsoft themselves do it in the MIM SharePoint Management Agent). The good news is that this GUID is the same on every SharePoint 2013/2016/2019 deployment. So we can just slap it in as a variable and use it to build the filename. But we need to be aware that this means the solution will only work with a non-partitioned UPA, and of course there is a possibility this GUID may change in the future.
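Both wrinkles can be sketched as follows. The GUID below is the value widely reported as the default partition ID – treat it as an assumption and verify it on your own farm – and the URL, domain, username and file name pattern are illustrative:

```powershell
$web = Get-SPWeb "https://mysitehost.fabrikam.com"
$list = $web.Lists["User Photos"]

# "Profile Pictures" doesn't exist until the first photo is created
$folder = $list.RootFolder.SubFolders | Where-Object { $_.Name -eq "Profile Pictures" }
if ($null -eq $folder) {
    $folder = $list.RootFolder.SubFolders.Add("Profile Pictures")
}

# Widely reported default partition ID - verify before relying on it
$partitionId = "0c37852b-34d0-418e-91c6-2ac25af4be5b"

# Pre-translation file name pattern (illustrative): <partition ID>_<DOMAIN>_<username>.jpg
$fileName = "{0}_{1}_{2}.jpg" -f $partitionId, "FABRIKAM", "jdoe"
$bytes = [System.IO.File]::ReadAllBytes("C:\PhotoCache\jdoe.jpg")
$folder.Files.Add($fileName, $bytes, $true) | Out-Null
$web.Dispose()
```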

We also still need to run Update-SPProfilePhotoStore once the import is complete, to create the three thumbnails that SharePoint uses.

We also need to ensure that both the import script and Update-SPProfilePhotoStore are run on a machine hosting the User Profile Service service instance. The latter will not raise any exception if it is run elsewhere, it merely does nothing and quits!

With all this said in terms of brief explanation, the basic script from Adam Sorenson can be found over at https://gallery.technet.microsoft.com/SharePoint-User-Profile-928b39c0.  You can update the initial variables to suit your environment. You will also need a “dummy” DNLookup.xml as described over at https://blogs.technet.microsoft.com/adamsorenson/2017/05/24/user-profile-picture-import-with-active-directory-import/.

Now yes, there are lots of things “wrong” with this script. Some are the fault of SharePoint as previously discussed, and others are more regarding operational management. But the point of posting is that this is a starting point from which you can build out what you need. For example you may not wish to cache images on the file system (although for large environments that is a good idea). Further, you would look to clean up the PowerShell and/or make it into a module and so on. Also watch out for the LDAP filter that’s used; it may not reflect your requirements. I would also remove the horrible legacy API for alternative credentials. Otherwise, this script is actually more or less identical to mine, except that I split out the gathering of photos to support URLs as well as thumbnails, and I used the “private” API for the “partition ID” as I was looking for a way to encourage the powers that be to provide a public API (and needed a solution for Multi Tenant).

If there is enough interest I will publish my PowerShell Module but this script is all you need to get started…

 

s.

Configuring a Partitioned UPA in SharePoint 2016 with Active Directory Import

posted @ Tuesday, August 08, 2017 3:46 AM | Feedback (0)

Introduction

For about a year now I’ve been plagued by people asking me how to configure a partitioned User Profile Application (UPA) in SharePoint Server 2016, and perform successful profile import using Active Directory Import (ADI). Every few weeks someone asks for the configuration, and it basically got to the point where it made sense to post this article to which I can refer folks.

Now, I am not going to provide all up coverage here. I expect you to be familiar with the fundamental concepts of SharePoint Multi-Tenancy. You can head over to my other articles here (2010), here (2013), and on TechNet for that information. All I am going to do in this article is outline the changes in the configuration of the UPA and ADI because of the removal of UPS from SharePoint 2016. I’ll also point out a couple of significant gotchas which are imperative to plan for.

Having said that, it would be remiss not to state categorically that multi-tenancy is absolutely NOT something most customers should be doing. Neither I nor Microsoft recommend this deployment approach in any way whatsoever, because virtually no one has the time, money or expertise to implement all of the things outwith the SharePoint product which are necessary to be successful. Indeed, one of the most significant non-Microsoft deployments of multi-tenant SharePoint which used to exist no longer does – mainly because that vendor decided it was not worth it. You’ve been warned.

 

What’s different about SharePoint 2016?

Only one thing basically, but it’s a significant element. As you will be aware User Profile Synchronization (UPS) is no longer part of SharePoint Server. In 2010 and 2013 UPS was used to perform profile synchronization with a partitioned UPA. This was the canonical and recommended deployment approach.

We set up a directory structure that looked something like this:

(Screenshot: the OU structure – a “Customers” OU containing one child OU per tenant.)

A “base” OU (in this case Customers) which included an OU per tenant. We then configured UPS to sync with the base OU, and it used the SiteSubscriptionProfileConfig to match up each tenant to the child OUs using a simple string match.

As UPS is no longer available, we must use Active Directory Import to perform synchronization. For a long time during pre-release versions of SharePoint 2016, it was not possible to configure this due to a bug. This led to statements such as “sync with a partitioned UPA is not supported”. This was never the case, and it was simply a few bugs that were resolved with the RTM of the product.

On the face of it pretty simple. However, there is some “interesting” configuration required in order to get things working correctly, and also extremely important planning considerations around how you manage synchronization operations which were not present in previous versions.

 

What’s the same as previous versions?

Pretty much everything. Creating Site Subscriptions, their Tenant Administration sites and provisioning initial member sites is identical. We don’t need to change our tenant provisioning scripts in this respect.

Just like previous versions we need to tell the UPA about these new Subscriptions as they are created. If we browse to a Tenant Administration site, and click the Manage User Profile Application link we will see the following error.

(Screenshots: the “modern” access denied experience.)

This is totally expected at this point, and is no different from SharePoint 2010 or 2013, except for the rendering of the “modern” access denied experience :). This is basically one of those generic “access denied” exceptions which bubbles up and causes this rather lame UI. What is actually the case at this stage is that the UPA has no Subscriptions (Tenants) and therefore it is impossible to display this page. If we look at the UPA, we can see there are no Subscriptions (Tenants in UPA terminology!) configured.

(Screenshot: the UPA showing no Tenants configured.)

We add the new Subscription to the UPA in the exact same way we did with previous versions, using Add-SPSiteSubscriptionProfileConfig. It’s worth pointing out that prior to RTM of SharePoint Server 2016 there was a bug which prevented this command from succeeding. This was fixed for RTM. We pass in a -SynchronizationOU, which is a STRING not a DN. In a real deployment, we would also pass in the configuration for the MySites. For example:

# Add this subscription to the Partitioned UPA
$UpaProxyName = "User Profile Service Application Proxy"
$MemberSiteUrl = "https://oracle.fabrikam.com"
$TenantSyncOU = "Oracle"

$Sub = Get-SPSiteSubscription $MemberSiteUrl
$UpaProxy = Get-SPServiceApplicationProxy | Where-Object {$_.Name -eq $UpaProxyName}

Add-SPSiteSubscriptionProfileConfig -Identity $Sub -SynchronizationOU $TenantSyncOU `
                                -MySiteHostLocation "$MemberSiteUrl/my" `
                                -MySiteManagedPath "$MemberSiteUrl/my/personal/" `
                                -ProfileServiceApplicationProxy $UpaProxy

Once that is complete, we can see the Subscription (Tenant) has been added to the UPA:

(Screenshot: the Subscription now listed in the UPA.)

And if we go ahead and click Manage User Profile Application from Tenant Admin, we no longer see the request access screen and we can view the profiles and so forth as expected:

(Screenshot: Manage User Profile Application rendering correctly from Tenant Admin.)

So now we have the Partitioned UPA configured, and added our Subscription to it. We would of course provision more than one subscription. Everything we have done up till now is identical to how it was done with SharePoint 2010 and 2013.

 

Configuring Synchronization Connections

Now we need to actually deal with getting profiles into these “partitions” of the UPA. In theory, we could use the Central Administration UI to add a new synchronization connection, hook that up to the Customers OU, then perform a sync. In theory. In practice, creating Synchronization Connections for a Partitioned UPA does not work, and is 100% unsupported.

If we try to configure using the UI and then go back and edit the connection, we will see that the changes are not persisted within the container selection. Yes, really. This is “by design”.

More importantly however, sync runs won’t work – with no profiles being imported to the UPA. We will see something similar to this in the ULS:

UserProfileADImportJob:ImportDC -- Regular DirSync scan: successes '0', failures '0', ignored '113', total duration '21', external time in Profile '0', external time in Directroy '16' (times in milliseconds)

Check out the spelling of “directroy” at the end of that trace log! :) Wicked!

In order to create our connections, we MUST use the currently undocumented, and rather esoteric, Add-SPProfileSyncConnection PowerShell cmdlet. But it’s not that straightforward, as multi-tenancy brings along a specific pattern to its usage.

Now, this cmdlet has a chequered history with a lot of problems. Some of those (such as the ability to exclude disabled accounts) have been fixed in SharePoint 2016. But it is important to note that the behaviour of the cmdlet – which is counter to all PowerShell naming and best practices – will be confusing.

With UPS we used to add one container to the Sync connection, (e.g. Customers). With ADI we must add one container for each subscription we wish to sync. (e.g. Microsoft, Oracle, Amazon). We can NOT use a single container.

The PowerShell below creates an initial sync connection using the OU for one of the subscriptions.

$UpaName = "User Profile Service Application"
$Upa = Get-SPServiceApplication | Where-Object {$_.Name -eq $UpaName}
$ForestName = "fabrikam.com"
$DomainName = "FABRIKAM"
$AdImportUserName = "spupi"
$AdImportPassword = Read-Host "Please enter the password for the AD Import Account" -AsSecureString
$SyncOU = "OU=Microsoft,OU=Customers,DC=fabrikam,DC=com"

Add-SPProfileSyncConnection -ConnectionSynchronizationOU $SyncOU `
                            -ConnectionUseDisabledFilter $True `
                            -ProfileServiceApplication $Upa `
                            -ConnectionForestName $ForestName `
                            -ConnectionDomain $DomainName `
                            -ConnectionUserName $AdImportUserName `
                            -ConnectionPassword $AdImportPassword

Note that the -ConnectionSynchronizationOU is a DN. Note also that the -ConnectionUserName must not include DOMAIN\ - it’s merely the username itself.  Don’t ask! It’s what it is.

Once we have done this we can go ahead and perform a Synchronization run and the profiles for the Microsoft subscription will be imported into the Microsoft partition of the UPA.

To add other subscriptions, we repeat the use of Add-SPProfileSyncConnection, once for each additional subscription. Whilst it’s called Add-, what is actually happening here is that the existing profile connection is being updated to include the additional containers.

$SyncOU = "OU=Oracle,OU=Customers,DC=fabrikam,DC=com"
Add-SPProfileSyncConnection -ConnectionSynchronizationOU $SyncOU `
                            -ConnectionUseDisabledFilter $True `
                            -ProfileServiceApplication $Upa `
                            -ConnectionForestName $ForestName `
                            -ConnectionDomain $DomainName `
                            -ConnectionUserName $AdImportUserName `
                            -ConnectionPassword $AdImportPassword

$SyncOU = "OU=Adobe,OU=Customers,DC=fabrikam,DC=com"
Add-SPProfileSyncConnection -ConnectionSynchronizationOU $SyncOU `
                            -ConnectionUseDisabledFilter $True `
                            -ProfileServiceApplication $Upa `
                            -ConnectionForestName $ForestName `
                            -ConnectionDomain $DomainName `
                            -ConnectionUserName $AdImportUserName `
                            -ConnectionPassword $AdImportPassword

$SyncOU = "OU=Amazon,OU=HR,OU=Customers,DC=fabrikam,DC=com"
Add-SPProfileSyncConnection -ConnectionSynchronizationOU $SyncOU `
                            -ConnectionUseDisabledFilter $True `
                            -ProfileServiceApplication $Upa `
                            -ConnectionForestName $ForestName `
                            -ConnectionDomain $DomainName `
                            -ConnectionUserName $AdImportUserName `
                            -ConnectionPassword $AdImportPassword

Now when we run another Full Import, all subscriptions will get the appropriate profiles imported.

At the end of the day, we get a number of Subscriptions (tenants) and a bunch of profiles in each:

image

And that’s all there is to it…. Kind of… Sort of…

 

Implications

 

There are a couple of BIG considerations to be aware of with this approach. You will have to manage this stuff – or build tooling to manage it. If you get it wrong, the wrong objects will end up in the wrong tenants, and that’s not good.

Firstly, there is no way to easily manage the sync connection. You can’t use the UI to add/remove containers. It’s all PowerShell. Remove-SPProfileSyncConnection works just fine. But you have to match up exactly what you want removed, and you must always include the same connection and credential parameters with every call. If you accidentally mess up the connection, say by removing a container – then those objects will no longer be synced. We don’t have a Get-SPProfileSyncConnection – although you could build one “easily” enough.
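As a starting point for such tooling, here is a minimal sketch of what a home-grown “Get-” equivalent might look like, enumerating the UPA’s connections via the UserProfileConfigManager API. The site URL is an illustrative assumption, and the containers themselves aren’t surfaced through a public property, so this only shows connection-level details.

```powershell
# Hedged sketch of a home-grown "Get-SPProfileSyncConnection".
# Assumes the SharePoint Management Shell on a farm server; the site URL
# below is an illustrative placeholder.
Add-PSSnapin -Name "Microsoft.SharePoint.Powershell" -ErrorAction SilentlyContinue

$site = Get-SPSite "https://onedrive.fabrikam.com"
$context = Get-SPServiceContext -Site $site
$configManager = New-Object Microsoft.Office.Server.UserProfiles.UserProfileConfigManager($context)

# Enumerate the sync connections known to the UPA
$configManager.ConnectionManager | Select-Object DisplayName, Type
```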

Secondly, and more importantly, even though we are adding the DN of a container to the sync connection, it is used only for initial configuration – it serves no purpose during sync operations. Consider the following domain structure. See all those Microsoft OUs scattered around – all three of them? See that Amazon OU which is not within the Customers OU?

image

When we perform a sync, the objects within ALL of the Microsoft OUs will be imported into the Microsoft partition of the UPA! Yes, really! And the Amazon OU, even though it’s not in the Customers OU, will also be imported. If we had two Amazon OUs, they would both be imported.

The only thing that governs whether a container is parsed is the configuration made with Add-SPSiteSubscriptionProfileConfig. If the container’s name matches, wherever it sits in the domain, it will be synced. Remember we CANNOT use a DN for this value; it must be a plain string.
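For context, a hedged sketch of how that configuration is typically established – the URLs and the proxy lookup here are illustrative assumptions, not values from this environment:

```powershell
# Associate a tenant (site subscription) with a container *name* – a plain
# string such as "Microsoft", never a DN. URLs/lookups are illustrative.
$sub = (Get-SPSite "https://microsoft.fabrikam.com").SiteSubscription
$upaProxy = Get-SPServiceApplicationProxy | Where-Object { $_.TypeName -like "*Profile*" }

Add-SPSiteSubscriptionProfileConfig -Identity $sub `
                                    -ProfileServiceApplicationProxy $upaProxy `
                                    -MySiteHostLocation "https://my.fabrikam.com" `
                                    -SynchronizationOU "Microsoft"
```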

Adding the DN of one container via Add-SPProfileSyncConnection – a container whose name happens to match one within the SiteSubscriptionProfileConfig – is merely the trigger to make ADI aware of it. It doesn’t actually matter where that container is, as long as one or more CN=Foo matches up to a Foo in the SiteSubscriptionProfileConfig.

This is extremely important.

It means we actually have much more flexibility in our domain structure – something which many hosters asked for – although this is not why it’s like this :).

However, it also means it can get quite confusing and very dangerous. It means that using customer names for OUs might not be the smartest move – you really need to think through which structure best fits your needs, and be totally aware of the implications should you have many OUs with the same name as part of the solution. You must be really on top of planning and governing the AD used for your profile import in hosting scenarios.

None of this is, of course, how it should be. But it is how it is. And hopefully the explanation above helps those of you who are looking to implement hosting scenarios with SharePoint 2016.

Ahh, the joys of user profile sync. Peace and B wild. May U live 2 see the dawn.

User Profile Photo Import from thumbnailPhoto using MIM and the SharePoint Connector

posted @ Saturday, May 13, 2017 8:37 PM | Feedback (0)

When leveraging Microsoft Identity Manager (MIM) and the SharePoint Connector for User Profile Synchronization, some customers have a requirement to import profile pictures from the thumbnailPhoto attribute in Active Directory.

This post details the correct way of dealing with this scenario, whilst retaining the principle of least privilege. The configuration that follows is appropriate for all of the following deployments:

  • SharePoint 2016, MIM 2016, and the MIM 2016 SharePoint Connector
  • SharePoint 2013, MIM 2016, and the MIM 2016 SharePoint Connector
  • SharePoint 2013, FIM 2010 R2 SP1 and the FIM 2010 R2 SharePoint Connector

    * Note: you can also use MIM or FIM 2010 R2 SP1 with SharePoint 2010, although this is not officially supported by the vendor.

Before we get started it is important to understand that if the customer requirement is to be able to import basic profile properties from Active Directory with the addition of profile photos, then MIM/FIM is almost certainly the wrong choice. SharePoint’s Active Directory Import capability alongside some simple PowerShell or a console application will deliver this functionality with significantly less capital and operational cost.

However, many customers are dealing with more complicated identity synchronization requirements and thumbnailPhoto is merely one of the elements required. Due to some bizarre behaviour of SharePoint’s ProfileImportExportService web service, previous vendor guidance on this capability has been inaccurate, and indeed yours truly has provided dubious advice on this topic in the past.

Most enterprise identity synchronization deployments have stringent requirements regarding the access levels granted to the variety of accounts used. This is just as it should be: there is no credible identity subsystem which allows more privilege than necessary to get the job done. Naturally, a system providing a “hub” of identity data should be as secure as possible. Because of this security posture, many customers have complained about the level of access “required” by the account used within the SharePoint Connector (Management Agent). In some cases customers have refused to deploy, or have used alternative means to deal with thumbnailPhoto. It’s no small deal for those customers.

 

What is the issue?

Assume that MIM Synchronization is configured using an Active Directory MA, and a SharePoint MA with the Export option for Picture Flow Direction*. The account used by the SharePoint MA is added to the SharePoint Farm Administrators group as required. We then perform an initial full synchronization. MIM Synchronization successfully exports 217 profiles to the UPA.

image

image

*Note: 218 is the Farm Administrator plus the 217 new profiles.
The Import option for Picture Flow Direction, whilst available in the UI and PowerShell, is not implemented and therefore won’t do anything.

We will however notice some rather puzzling results for the profile pictures. The Profile Pictures folder is correctly created within the My Site Host Site Collection’s User Photos Library. However, only some of the profile pictures will be created – in this example 112 of them. What happened to the other 105?

image

image

The numbers will actually vary: I can run this test scenario hundreds of times (and believe me, I have!) and get different numbers each time. However, roughly half the pictures are created each time.

This is the problem which has led to incorrect guidance. It really is quite a puzzler. Obviously, some files are created, and thus logic suggests that the account calling the web service has the appropriate permissions. If the permissions were wrong, surely no files would be created. Alas, this is SharePoint after all, and sometimes it really isn’t worth the cycles! Bottom line: there is an issue with the web service. That’s not something which can easily be resolved.

The documentation for the FIM 2010 R2 SP1 SharePoint Connector, the previous version of the currently shipping release, remains the best documentation available. It notes:

When you configure the management agent for SharePoint 2013, you need to specify an account that is used by the management agent to connect to the SharePoint 2013 central administration web site. The account must have administrative rights on SharePoint 2013 and on the computer where SharePoint 2013 is installed.

If the account doesn’t have full access to SharePoint 2013 and the local folders on the SharePoint computer, you might run into issues during, for example, an attempt to export the picture attribute.

If possible, you should use the account that was used to install SharePoint 2013.

 

This, to a SharePoint practitioner, is clearly poor guidance. Whilst it’s true the MA account must connect to Central Administration, that means it must be a Farm Administrator. There is no requirement for the account to have other administrative rights on the SharePoint Farm, and there is no requirement for any machine rights on any machine in the SharePoint Farm except for WSS_ADMIN_WPG, which is granted when adding a Farm Administrator. And certainly no access to the local file system of a SharePoint server is needed. Furthermore, there is no scenario whereby the SharePoint Install account should ever be used for runtime operations of any component, anywhere, in any farm!  Of course, this is material authored by the FIM folks, and there is no reason to expect them to be entirely familiar with the identity configuration of SharePoint, especially given that the topic is confusing to most SharePoint folks as well!

When I delivered the “announce” of the MIM MA at Microsoft Ignite last fall, I made a point of this issue by stating that the Farm Account should be used by the SharePoint MA if importing from thumbnailPhoto. This is also incorrect guidance. In my defence, at the time we had worked for a couple of weeks to try and get to the bottom of the issue, and ran out of time before the session. Thus, to show it all working, there was little choice. It’s pretty silly to do a reveal of something if the demo doesn't work.

Using the Farm Account for anything other than the Farm is a bad idea. In this case, it’s extremely dubious, as in a real-world deployment the account’s password will need to be known by the MIM administrator. Internal security compliance of any large corporation is simply not going to accept that.

Others have suggested that the SP MA account be added to the Full Control Web Application User Policy for the My Site Host web application. Or rather, that the GrantAccessToProcessIdentity() method of the web application is used, which results in the above policy. That guidance is also inherently very bad. A large number of deployments now make use of a single Web Application, and granting the MA account Full Control over it is patently a bad idea. Furthermore, such configuration allows unfettered access to the underlying content databases (which store the users’ My Sites, remember!) and provides Site Collection Administrator and Site Collection Auditor rights on the My Site host.

 

The Workaround

So, we don’t want to use the Install Account, we don’t want to use the Farm Account, and we don’t wish to configure an unrestricted policy.

The answer to this conundrum is to configure a brand-new Permission Policy to which we will add a User Policy for the SharePoint MA account. This enables all the pictures to be created, without granting any more permissions than necessary.

image

The Grant Permissions for this policy are: Add Items, Edit Items, Delete Items, View Items and Open Site. No more, no less.

Then we add a new User Policy for the Web Application hosting the My Site host Site Collection, for the SharePoint MA account, with this Policy Level:

image

Now at this point if we perform another Full Synchronization we have a problem. As far as MIM Synchronization is concerned the previous export worked flawlessly. It thinks all the pictures are present. This is because the ProfileImportExportService didn’t report any exceptions. The failures have been lost to the great correlation ID database in the sky. Gone forever. If we search the SharePoint MA’s Connector Space within MIM, we will see the photo data present and correct. There are zero updates to make.

Of course, the idea is to correctly configure all of this before we perform the initial Full Synchronization. However, if you are following along, we can “fix” this by deleting the SharePoint MA’s Connector Space, and then performing a Full Synchronization. This will force a fresh Export to SharePoint (there is no need to delete the AD Connector Space).

Once the Full Synchronization has completed, we will see the correct number of items within the User Photos Library (one item is the Profile Pictures folder, the other 217 are images).

image

Of course, we would also need to run Update-SPProfilePhotoStore at this point to generate the three images for each profile, and delete the initial data (the files with GUIDs for filenames).
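That step looks like the following – the My Site host URL is an illustrative placeholder:

```powershell
# Create the three thumbnail sizes for each imported photo and clean up the
# original exported files. The URL is an illustrative placeholder.
Update-SPProfilePhotoStore -MySiteHostLocation "https://onedrive.fabrikam.com" `
                           -CreateThumbnailsForImportedPhotos $true
```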

 

But wait, there is more!

As you may be aware the UPA does not fully understand Claims identifiers for internal access control. This is why we must enter Windows Classic style identifiers for UPA permissions and administrators.

Whilst we can create the new Permission Policy with Central Administration, we cannot create a new User Policy using a Windows Classic identifier. Whatever we enter in the UI will be transformed into a claims identifier. For this to work, the policy must be as shown in the screenshot above (FABRIKAM\spma) – using a Classic identifier. And yes, I do feel stupid calling a NetBIOS username “classic”, but I am not in charge of naming anything :)

In order to configure the policy correctly for this use case we must use PowerShell. Which is actually just fine, because we don’t really want to be using the UI anyway. We can also combine all this work into a simple little script to create both the Permission Policy and the User Policy, as shown below.

Add-PSSnapin -Name "Microsoft.SharePoint.Powershell"

# update these vars to suit your environment
$WebAppUrl = "https://onedrive.fabrikam.com"
$PolicyRoleName = "MIM Photo Import"
$PolicyRoleDescription = "Allows MIM SP MA to export photos to the MySite Host."
$GrantRightsMask = "ViewListItems, AddListItems, EditListItems, DeleteListItems, Open"
$SpMaAccount = "FABRIKAM\spma"
$SpMaAccountDescription = "MIM SP MA Account"


# do the work
$WebApp = Get-SPWebApplication -Identity $WebAppUrl
# Create new Permission Policy Level
$policyRole = $WebApp.PolicyRoles.Add($PolicyRoleName, $PolicyRoleDescription)
$policyRole.GrantRightsMask = $GrantRightsMask
# Create new User Policy with the specified account
$policy = $WebApp.Policies.Add($SpMaAccount, $SpMaAccountDescription)
# Bind the Permission Policy to the User Policy
$policy.PolicyRoleBindings.Add($policyRole)
# Commit
$WebApp.Update()

 

Summary

There you have it. How to use a least privilege account for the SharePoint MA, and successfully import thumbnailPhoto from Active Directory. In summary, the required steps are:

  1. Create an account in Active Directory for use by the SharePoint MA (e.g. FABRIKAM\spma)
  2. Add the account as a SharePoint Farm Administrator using Central Administration or PowerShell
  3. Create the Permission Policy and User Policy for the account using the PowerShell above
  4. Configure the SharePoint MA with this account, and select Export as the Picture Flow Direction. If you are using the MIMSync toolkit the thumbnailPhoto attribute flow is already taken care of. If you are not, obviously you will need to configure the necessary attribute flow
  5. Perform Synchronization operations
  6. Execute Update-SPProfilePhotoStore once Synchronization is complete to create the thumbnail images used by SharePoint

Now of course, we have added the SP MA account as a Farm Administrator. As such it could be used to do just about anything to the farm. Least privilege is always a compromise and in this case the farm administrator is a requirement of the ProfileImportExportService – a SharePoint product limitation. Therefore, this approach is the best compromise available, and one that has already been accepted by security compliance in three enterprise customers using MIM and the SharePoint Connector. The bottom line is that if you don’t trust your MIM administrators, or indeed your SharePoint ones, you’ve got bigger security problems than a couple of accounts!

Also, none of this explains why, without the policy configuration or overly aggressive permissions, the web service creates some pictures but not others. But life is just too short to worry about that rabbit hole!

Finally, it is always good to remember the mantra of the SharePoint Advanced Certification programs, “just because you can, doesn’t mean you should”. This post is not intended to promote the use of MIM for just the profile photo. Furthermore, using thumbnailPhoto in AD for photos is just one approach of many. For some organisations, especially the larger ones, this would be a spectacularly stupid implementation choice, and of course in many others Active Directory is not the master source of identity anyway.

 

s.

Microsoft Identity Manager 2016 Service Pack 1 is now available!

posted @ Thursday, October 06, 2016 8:33 PM | Feedback (3)

Today, Microsoft released Service Pack 1 for Microsoft Identity Manager 2016 (MIM). This is an extremely important release for SharePoint practitioners who are looking to leverage MIM for User Profile Synchronization with SharePoint Server 2016.

This Service Pack provides a significantly streamlined deployment process – no more hotfix rollups (well, for the time being :)). This is important for those leveraging just the Synchronization Service, but also for those working with declarative provisioning using the MIM Portal and Service – SharePoint Server 2016 support is also included, as is support for SQL Server 2016.

Service Pack 1 can be downloaded today from the Volume Licensing Service Center or from MSDN Subscriber Downloads. Go get it.

https://blogs.technet.microsoft.com/enterprisemobility/2016/10/06/microsoft-identity-manager-2016-service-pack-1-is-now-ga/

Please be aware that installation of Service Pack 1 requires an uninstall of MIM 2016 and then an installation of MIM 2016 SP1 – after backing up the databases relevant to the components you are working with. It is not an in-place “upgrade”.

[UPDATE] On November 8th, Microsoft announced an “in place” upgrade path and build.

Once you have upgraded, your build will be 4.4.1237.0. Observant readers will have a little chuckle at the string resource used to describe the product. :)

image

Don’t forget to re-install the SharePoint Connector after upgrading the MIM Sync service.

I just completed a full upgrade of a fully operational MIM 2016 setup with the SharePoint Connector, AD and SQL MAs plus the MIM Portal (on SharePoint 2013) – it took around 25 minutes from start to finish, including fixing up the MIM Sync database name (which still can’t be changed unless using an unattended install). This time included performing a full sync run. So all pretty straightforward. I see no value in using SharePoint 2016 for the portal, but will try that at a later time to see if there are any issues to be concerned about.

Important Note: If you are using the “MIMSync toolkit” for SharePoint 2016 you will also need to update the module, as it has a (rather pathetic) version check which is broken. As we’ve gone from 4.3.xxxx to 4.4.xxxx it can’t handle it. We need to update lines 81 and 82. And don’t forget to reload the module after making the changes.

    $MimPowerShellModuleAssembly = Get-Item -Path (Join-Path (Get-SynchronizationServicePath) UIShell\Microsoft.DirectoryServices.MetadirectoryServices.Config.dll)
    if ($MimPowerShellModuleAssembly.VersionInfo.ProductMajorPart -eq 4 -and
        $MimPowerShellModuleAssembly.VersionInfo.ProductMinorPart -eq 4 -and
        $MimPowerShellModuleAssembly.VersionInfo.ProductBuildPart -ge 1237)
    {
        Write-Verbose "Sufficient MIM PowerShell version detected (>= 4.4.1237): $($MimPowerShellModuleAssembly.VersionInfo.ProductVersion)"
    }
    else
    {
        throw "SharePoint Sync requires MIM PowerShell version 4.4.1237 or greater (this version is currently installed: $($MimPowerShellModuleAssembly.VersionInfo.ProductVersion)). Please install the latest MIM hotfix."
    }

This is a pretty horrible implementation, as the error message (>=) doesn’t reflect the condition (-eq). But it’s an easy fix. At a later stage this will get updated to be actually sustainable across hotfixes and service packs.

s.

Using Nikon Wireless Utility with the Nikon D500 on iOS to download NEFs to iOS

posted @ Wednesday, September 14, 2016 4:52 PM | Feedback (3)

As most Nikon D500 users are aware, one of the “features” of the camera is “SnapBridge”, Nikon’s attempt to do something useful with wireless and smartphone connectivity. The iOS app is pretty awful: there’s the connection dance for Bluetooth, and then the silly switch to wireless for remote photography and image download. Admittedly, part of that connection dance is the fault of iOS, which restricts apps from interfacing with core device settings directly. The whole thing is just painful, slow and a battery killer. I attempted to use it to download and share one image at an air show recently. Not only could I not get it to function, merely trying battered my battery even with only 10 minutes of use, so towards the end of the day I was getting dangerously close to no power – which is not where you want to be when a Typhoon jet is flying aerobatics in front of you at 600mph!

So, basically a total disaster for Nikon’s first (real) attempt to do something decent. Sure, being able to add GPS to images via the phone and do clock sync is very useful. But really, I have to destroy the battery for this? Yeah, not happening. I can take a picture with my phone to give me the GPS and, you know, I can set the clock pretty easily!

But it gets worse. The remote photography stuff – i.e. controlling the camera from the phone – is rubbish also. Not only can it only take small JPEGs, you can’t do much but select the focus point and hit the shutter. No control of aperture, shutter speed, ISO or, well, anything basically. So you can’t be more than a couple of feet away from the camera. Not exactly remote.

And it gets worse. The primary use case of this app is to allow you to get images from your camera onto your phone. You know, so that you can stick them on the interwebz and be all social. SnapBridge. Good name, but it doesn’t include a bridge for your snaps! It can only do the silly small files which you aren’t shooting, and you’ll never remember to switch to that QUAL mode.

Now, if the D500 didn’t include this feature I would have still bought the camera. I bought it for taking images. But it is pretty disappointing that this promised “additional” feature is so shockingly lame. Nikon are rock solid at understanding how shooters shoot, delivering first class camera ergonomics and controls (at least on the pro bodies). But when it comes to how someone would use this app, it’s a total shambles. How they thought this was ready for prime time is a complete mystery.

BUT! there is something you can do whilst waiting on Nikon to get their act together.

There is another app called the Nikon Wireless Mobile Utility. It’s designed for consumer bodies such as the D750, D600, D7200 etc.

We can use this to download NEF files from the camera.

Now, the bad news is we have to jump through some hoops and use it in conjunction with SnapBridge. So it’s not seamless, and it’s nowhere near as easy as it should be. But it works, and if you are looking to edit on an iPad Pro or similar, this is the best solution right now.

Here’s how to set it up.

Before starting make sure you have downloaded and installed the Wireless Mobile Utility (WMU) from the App Store.

1. Connect the D500 using SnapBridge Bluetooth pairing as normal. Yes, this can be quite painful and slow. But it is a straightforward process.

2. Once you are connected in SnapBridge, tap the Camera icon at the bottom.

3. Tap Download Selected Pictures:

1.

We aren’t going to use this, but we need to connect via Wi-Fi in order to use WMU, and this is initiated inside SnapBridge.

4. Wait for ages watching the spinning progress icon!

5. When the Switch from Bluetooth to Wi-Fi for connection to D500 dialog appears, tap Yes.

2

6. Wait for ages (again) watching the spinning progress icon! This step basically turns on the camera’s Wi-Fi.

7. On the Wi-Fi has been enabled on the camera dialog, tap Go.

3

8. From the SnapBridge settings screen, tap Settings in the top left. This takes you “back” to the main Settings iOS app.

9. Scroll up to the top to Wi-Fi, tap it and select the D500 camera which will now show up in the list of available networks. Once the network has a tick, we are connected.

4

10. Now open the Nikon WMU app without closing SnapBridge, and tap the gear icon, followed by Connection Status. You will see the camera connection.

5

11. Tap Settings, followed by Done to return to the WMU home screen.

12. Tap View Photos.

6

13. Tap Pictures on D500.

7

14. You will see all the photos on the camera.

15. Tap Select and then the photos you want on the phone to select them.

IMG_0662

16. Tap Download, followed by Yes.

9

17. Wait whilst they are downloaded.

18. Once the download is complete, they will be in your WMU Camera Roll and in the photo library on the iPhone. A NEF can’t be viewed in Photos, but it can be in Photoshop, Lightroom etc. Here is one of my NEFs open in Lightroom for iPhone.

IMG_0667

Obviously from here you can save the final file out as JPEG and share on the interwebz.

Yes, it’s a mess. But it works. And you can edit your NEFs from Photoshop Mobile etc. on the iOS device.

Make sure your camera battery is charged! :)

s.

Enabling multiple OUs and avoiding credential touch up with the MIMSync “toolset” for SharePoint Server 2016

posted @ Thursday, August 25, 2016 7:56 AM | Feedback (1)

As many of you are aware there is a “toolset” published on GitHub which provides one way to get up and running using Microsoft Identity Manager 2016 (MIM) for profile synchronization with Active Directory. This Windows PowerShell Module and exported MA configurations basically provisions a base capability more or less akin to what shipped with SharePoint 2013’s User Profile Synchronization capability.

I’m not much of a fan of this Module or its approach. Seriously, if a customer is going down the road of implementing MIM they had better be sure they have the right skills in place – and the right skills won’t be using this toolkit. Furthermore, the default property mappings etc. are, well, defaults. The less said the better, frankly.

But of course there is ye olde upgrade consideration. Customers who were using UPS need something to make it easier to move to MIM, and of course there are also many without MIM experience (perhaps this year’s greatest understatement so far!). So there is a need for the Module despite its faults, both conceptually and in terms of the implementation (which are not the fault of the coder but consequences of bugs in the products).

Unfortunately due to the nature of the SharePoint Connector and bugs with MIM PowerShell cmdlets, the current version only supports a single container selection and also requires that the MA exports are “fixed” – i.e. with the details of the customer domain.

With the MIM hotfix rollup 4.3.2195.0 and changes to the module, both of these issues can be avoided. It’s not a full “fix” and it’s not without a downside. However, it is a much more practical “quick start”, especially for customers with privileged access management policies enforced for administrators. The primary benefit though is multiple container selection – the most common complaint I’ve received.

Once the hotfix update is installed on the MIM Sync box, the Install-SharePointSyncConfiguration function of the SharePointSync.psm1 module needs to be altered.

Basically, I’ve changed a parameter name, removed the credential touch up work, and added a call to the MIM Set-MIISADMAConfiguration cmdlet after preparing the -Partitions parameter value. I’ve also updated the version check.

function Install-SharePointSyncConfiguration
{
<#
.Synopsis
   Configures the Synchronization Service for SharePoint User Profile Synchronization
.DESCRIPTION
   Long description
.EXAMPLE
   Install-SharePointSyncConfiguration -Path C:\SharePointSync -ForestDnsName litware.ca -ForestCredential (Get-Credential LITWARE\Administrator) -OrganizationalUnits 'ou=Litwarians,dc=Litware,dc=ca' -SharePointUrl http://SharePointServer:5555 -SharePointCredential (Get-Credential LITWARE\Administrator)
.EXAMPLE
    $spProps = @{
        Path                 = 'C:\Temp\SharePointSync'
        ForestDnsName        = 'litware.ca'
        ForestCredential     = New-Object PSCredential ("LITWARE\administrator", (ConvertTo-SecureString 'J$p1ter' -AsPlainText -Force))
        OrganizationalUnits   = 'ou=Legal,dc=Litware,dc=ca;ou=Litwarians,dc=Litware,dc=ca'
        SharePointUrl        = 'http://cmvm38386:9140'
        SharePointCredential = New-Object PSCredential ("LITWARE\administrator", (ConvertTo-SecureString 'J$p1ter' -AsPlainText -Force))
    }
    Install-SharePointSyncConfiguration @spProps -Verbose

#>
    [CmdletBinding()]
    [OutputType([int])]
    Param
    (
        # Path to the configuration XML files
        [Parameter(Mandatory=$true, Position=0)]
        $Path,

        # DNS name of the Active Directory forest to synchronize (ie - litware.ca)
        [Parameter(Mandatory=$true, Position=1)]
        $ForestDnsName,

        # Credential for connecting to Active Directory
        [Parameter(Mandatory=$true, Position=2)]
        [PSCredential]
        $ForestCredential,

        # OU(s) to synchronize to SharePoint (semi-colon delimited)
        [Parameter(Mandatory=$true, Position=3)]
        $OrganizationalUnits,

        # URL for SharePoint
        [Parameter(Mandatory=$true, Position=4)]
        [Uri]    
        $SharePointUrl,

        # Credential for connecting to SharePoint
        [Parameter(Mandatory=$true, Position=5)]
        [PSCredential]
        $SharePointCredential,

        # Flow Direction for Profile Pictures
        [Parameter(Mandatory=$false, Position=6)]
        [ValidateSet('Export only (NEVER from SharePoint)', 'Import only (ALWAYS from SharePoint)')]
        [String]
        $PictureFlowDirection = 'Export only (NEVER from SharePoint)'
    )

    #region Pre-requisites
    if (-not (Get-SynchronizationServiceRegistryKey))
    {
        throw "The Synchronization Service is not installed on this computer.  Please install the MIM Synchronization Service on this computer, or run this script on a computer where the MIM Synchronization Service is installed." 
    }

    if ((Get-Service -Name FimSynchronizationService -ErrorAction SilentlyContinue).Status -ne 'Running')
    {
        throw "The Synchronization Service is installed but not running.  Please start the MIM Synchronization Service before running this script (Start-Service -Name FimSynchronizationService).  If the service fails to start please see the event log for details." 
    }

    if ((Test-SynchronizationServicePermission) -eq $false)
    {
        throw "The current user must be a member of the Synchronization Service Admins group before this command can be run.  You may need to logoff/logon before the group membership takes effect."
    }

    $MimPowerShellModuleAssembly = Get-Item -Path (Join-Path (Get-SynchronizationServicePath) UIShell\Microsoft.DirectoryServices.MetadirectoryServices.Config.dll)
    if ($MimPowerShellModuleAssembly.VersionInfo.ProductMajorPart -eq 4 -and
        $MimPowerShellModuleAssembly.VersionInfo.ProductMinorPart -eq 3 -and 
        $MimPowerShellModuleAssembly.VersionInfo.ProductBuildPart -ge 2195)
    {
        Write-Verbose "Sufficient MIM PowerShell version detected (>= 4.3.2195): $($MimPowerShellModuleAssembly.VersionInfo.ProductVersion)"
    }
    else
    {
        throw "SharePoint Sync requires MIM PowerShell version 4.3.2195 or greater (this version is currently installed: $($MimPowerShellModuleAssembly.VersionInfo.ProductVersion)). Please install the latest MIM hotfix."
    }
    #endregion

    ### Load the Synchronization PowerShell snap-in
    Import-Module -Name (Join-Path (Get-SynchronizationServicePath) UIShell\Microsoft.DirectoryServices.MetadirectoryServices.Config.dll) 

    Write-Verbose "Contacting AD to get the partition details"
    $RootDSE                = [ADSI]"LDAP://$ForestDnsName/RootDSE"
    $DefaultNamingContext   = [ADSI]"LDAP://$($RootDSE.defaultNamingContext)"
    $ConfigurationPartition = [ADSI]"LDAP://$($RootDSE.configurationNamingContext)"

    Write-Verbose "Configuring the Active Directory Connector"
    Write-Verbose "  AD Forest:               $ForestDnsName"
    Write-Verbose "  AD OU:                   $OrganizationalUnits"
    Write-Verbose "  AD Credential:           $($ForestCredential.UserName)" 
    Write-Verbose "  AD Naming Partition:     $($RootDSE.defaultNamingContext)"
    Write-Verbose "  AD Config Partition:     $($RootDSE.configurationNamingContext)"

   
    $admaXmlFilePath = Join-Path $Path MA-ADMA.XML
    [xml]$admaXml = Get-Content -Path $admaXmlFilePath
    $admaXml.Save("$admaXmlFilePath.bak")

    ### Fix up the Domain partition
    $domainPartition = Select-Xml -Xml $admaXml -XPath "//ma-partition-data/partition[name='DC=Litware,DC=com']"
    $domainPartition.Node.name = $DefaultNamingContext.distinguishedName.ToString()
    $domainPartition.Node.'custom-data'.'adma-partition-data'.dn = $DefaultNamingContext.distinguishedName.ToString()
    $domainPartition.Node.'custom-data'.'adma-partition-data'.name = $ForestDnsName
    $domainPartition.Node.'custom-data'.'adma-partition-data'.guid = (New-Object guid $DefaultNamingContext.objectGUID).ToString('B').ToUpper() 
    $domainPartition.Node.filter.containers.inclusions.inclusion = $DefaultNamingContext.distinguishedName.ToString()
    $domainPartition.Node.filter.containers.exclusions.exclusion = $ConfigurationPartition.distinguishedName.ToString()

    ### Fix up the Configuration partition
    $configPartition = Select-Xml -Xml $admaXml -XPath "//ma-partition-data/partition[name='CN=Configuration,DC=Litware,DC=com']"
    $configPartition.Node.name = $ConfigurationPartition.distinguishedName.ToString()
    $configPartition.Node.'custom-data'.'adma-partition-data'.dn = $ConfigurationPartition.distinguishedName.ToString()
    $configPartition.Node.'custom-data'.'adma-partition-data'.name = $ForestDnsName
    $configPartition.Node.'custom-data'.'adma-partition-data'.guid = (New-Object guid $ConfigurationPartition.objectGUID).ToString('B').ToUpper() 
    $configPartition.Node.filter.containers.inclusions.inclusion = "CN=Partitions," + $ConfigurationPartition.distinguishedName.ToString()
   
   
    
    $admaXml.Save($admaXmlFilePath)
    
    Write-Verbose "Importing the Synchronization Service configuration"
    Write-Verbose "  Path: $Path"
    Import-MIISServerConfig -Path $Path -Verbose    

    # requires 3092179
    $Partitions = "$($RootDSE.defaultNamingContext);$($RootDSE.configurationNamingContext)"
    Set-MIISADMAConfiguration -MAName ADMA -Credentials $ForestCredential -Forest $ForestDnsName -Partitions $Partitions -Container $OrganizationalUnits -Verbose

    
    Write-Verbose "Configuring the SharePoint Connector"
    Write-Verbose "  SharePoint URL:          $SharePointUrl"
    Write-Verbose "  SharePoint Host:         $($SharePointUrl.Host)"
    Write-Verbose "  SharePoint Port:         $($SharePointUrl.Port)"
    Write-Verbose "  SharePoint Picture Flow: $PictureFlowDirection"
    Write-Verbose "  SharePoint Protocol:     $($SharePointUrl.Scheme)"
    Write-Verbose "  SharePoint Credential:   $($SharePointCredential.UserName)"
    Set-MIISECMA2Configuration -MAName SPMA -ParameterUse 'connectivity' -HTTPProtocol $SharePointUrl.Scheme -HostName $SharePointUrl.Host -Port $SharePointUrl.Port -PictureFlowDirection $PictureFlowDirection -Credentials $SharePointCredential -Verbose

    Write-Verbose "Publishing the Sync Rules Extension DLL to the Synchronization Service extensions folder"
    Publish-SynchronizationAssembly -Path (Join-Path $Path SynchronizationRulesExtensions.cs) -Verbose
 
    Write-Warning "======================================================================================="
    Write-Warning "IMPORTANT: the SP MA must be opened and closed to refresh the extensible connector"
    Write-Warning "           Use Start-SynchronizationServiceManager to open the Sync Manager tool, then"
    Write-Warning "           ->Management Agents"
    Write-Warning "           ->SPMA (double click)"
    Write-Warning "           ->Click OK three times"
    Write-Warning "======================================================================================="
}##Closing: function Install-SharePointSyncConfiguration

Now we can call this bad boy as before, but with a semi-colon delimited list of OUs to sync with. If we only want to sync with a single OU that’s fine also.

$WorkingPath = "c:\SP16MIMBase"                           # Path to the MA files and Module
$ForestDnsName = "fabrikam.com"                           # DNS name of the Forest
$SyncAccountName = "FABRIKAM\sppsync"                     # Account to use within the SP MA (dirsync rights)
$CentralAdminUrl = "https://spca.fabrikam.com"            # Url of Central Administration
$FarmAdminAccount = "FABRIKAM\Administrator"              # A Farm Administrator (to connect to CA)
$PictureFlow = 'Export only (NEVER from SharePoint)'      # Picture Flow - 'Export only (NEVER from SharePoint)' or 'Import only (ALWAYS from SharePoint)'

# Semi-colon delimited list of DNs of containers to Sync
$OrganizationalUnits = 'OU=Fabrikam Users,DC=fabrikam,DC=com;OU=Ent Users,DC=fabrikam,DC=com;OU=Legal,DC=fabrikam,DC=com'  

# Credential Requests
$SyncAccountCreds = Get-Credential $SyncAccountName
$FarmAccountCreds = Get-Credential $FarmAdminAccount


Import-Module $WorkingPath\SHSharePointSync.psm1 -Force

Install-SharePointSyncConfiguration -Path $WorkingPath -Verbose `
                                    -ForestDnsName $ForestDnsName `
                                    -ForestCredential $SyncAccountCreds `
                                    -SharePointUrl $CentralAdminUrl `
                                    -SharePointCredential $FarmAccountCreds `
                                    -PictureFlowDirection $PictureFlow `
                                    -OrganizationalUnits $OrganizationalUnits 

OK, so what about that downside? Well, sadly due to bugs in the Set-MIISADMAConfiguration cmdlet (or the Management API it’s a wrapper for) we can’t properly select the containers for the configuration partition. In the updated module I include the partition, but no containers within it. This means the entire partition is selected within the AD MA:

2016-08-25_07-10-34

The only container we need here is the CN=Partitions container. You can’t add it to the –Container parameter because of the bug – you will get an error saying the container doesn’t exist in the partition – so the only option is to leave it out. That means the entire partition will be selected.

But that’s the only downside, and it’s not a very big one. It doesn’t mean we are syncing a bunch of crap to the metaverse. It does mean we are pushing a bunch of extra stuff to the AD connector space though (705 extra objects in a single domain forest with default schema plus Exchange – basically negligible in that scenario). Of course we can go and clean this up – which requires us to enter the password (!!). It’s a trade-off. For customers who are likely to be using this tool, as opposed to those who will set it up properly, it’s probably a trade-off worth making. I can go ahead and execute the Run Profiles and it works just great.

Wait, except on initial deployment – something none of the documentation deems it necessary to mention. Because the SP MA has a Rules Extension added during its initial configuration, it requires a refresh before it will function. If you don’t do this, the initial run will result in the following error:

2016-08-25_07-22-31

Simple (age old ILM) trick to sort this is to open the SPMA – double click it, then click OK, click OK again on the Connectivity page, wait on the egg timer, and then click OK once more. That will force a refresh. Now you are good to go.

Of course there’s lots of other things about this toolkit which are “sub optimal” but it’s all about perspective. Regardless of your viewpoint on the removal of UPS, the reality is UPS provided a SharePoint Admin friendly UI for configuring complex multi-domain, multi-forest, import/export scenarios. It’s not the intent of the toolkit to replicate that. It’s intended as a starter solution to get people up and running. For things like additional domains and so forth there is a point of diminishing returns here. Those customers will be doing it “properly” anyway and not faffing about with an import of a lamer default MA configuration. Similarly those doing it properly will be updating the configuration when they move their tested MA configs through Dev – Test – UAT – Production. (Yes, really. People do this!). And that’s without even getting into the whole “Classic Provisioning” thing. So bottom line is they won’t be using this toolkit.

However for those customers looking for a quick and dirty way to get up and running, or for SharePoint practitioners wanting to get started with MIM, these tweaks improve things in the two areas they most complain about.

Another tip is for those that have got their panties in a wad over things like UserAccountControl and so on. Do it on a dev box, and export your MA. You can then use the toolkit to move it into a fresh setup.  You don’t have to use the sample MA configurations provided.

Before I go I have to address another common question regarding GitHub: “why aren’t you doing pull requests on this stuff?”. Well, it’s a very long story, but the short and sweet of it is that I have zero interest in doing free work for Microsoft which masquerades as “open source”, especially when there is no actual commitment or leadership. I especially don’t wish to do it when the contribution workflow is hopeless and the publication mechanism is at odds with the standard Windows PowerShell approach. If it’s not going to be done right…. you get the idea.

 

s.

Important Update for SharePoint folks: Hotfix Rollup for Microsoft Identity Manager 2016

posted @ Tuesday, August 23, 2016 1:33 AM | Feedback (2)

Back in the middle of March, Microsoft released a Hotfix Rollup for Microsoft Identity Manager 2016 (MIM). This hotfix rollup is version 4.3.2195.0. This is an extremely important build for those leveraging MIM for profile synchronization with SharePoint Server 2016. You can get the bits over at KB3134725.

There are numerous articles out there suggesting that you should install build 4.3.2064.0. Don’t! 4.3.2195 is the fix package you need. Make this part of your base build of the MIM Sync server.

However, if you already have MIM Sync set up and you want to apply this patch, make sure to follow the instructions. The installer will not update the configuration file – because if it did, it would break the configuration of the existing ECMA2 MAs (the SharePoint Connector is an ECMA2 MA).

Why is this important? Well, aside from rolling up the previous fixes, which effectively formed the required baseline, there are a number of elements critical to a successful implementation of the SharePoint Connector:

  • The AD MA can now handle multiple partitions
  • New Windows PowerShell cmdlets
  • Run Profile fixes
  • Export Only ECMA2 MAs now actually work
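
That first fix, multi-partition support in the AD MA, is what the toolkit above leans on. As a hedged sketch (the forest name, account, and OU below are illustrative placeholders, not a recommendation), with the hotfix applied a single call can select both the domain and configuration partitions:

```powershell
# Sketch only: requires MIM Sync build 4.3.2195.0 or later.
# All names below are placeholders for illustration.
$Partitions = 'DC=fabrikam,DC=com;CN=Configuration,DC=fabrikam,DC=com'
Set-MIISADMAConfiguration -MAName ADMA `
                          -Credentials (Get-Credential 'FABRIKAM\sppsync') `
                          -Forest 'fabrikam.com' `
                          -Partitions $Partitions `
                          -Container 'OU=Fabrikam Users,DC=fabrikam,DC=com' `
                          -Verbose
```

On earlier builds the same call would only configure a single partition, which is exactly what broke the SharePoint Connector scenario.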

In addition, for those who are actually doing things properly and using Declarative Provisioning, there are a number of important fixes to the MIM Service and Portal components as well.

Get Patch Happy

s.

Zero Down Time Patching in SharePoint Server 2016

posted @ Thursday, August 04, 2016 7:48 PM | Feedback (2)

Zero Downtime Patching (ZDP) in SharePoint Server 2016 has a marketing heavy silly name, but it's actually sweetness on a stick.

Whilst I hate the name, it is accurate in respect to the basics of the new patching process and the changes made in 2016 to support it. Now, whether a customer would actually perform real world patching operations with such an expectation is another matter entirely. Here's a hint: they wouldn't. There's a lot more to patching an environment than updating the bits of the software. Or there should be, otherwise you shouldn't be running the environment. Alas, this is of course not something Microsoft can have much influence on.

Regardless this is another one of those SharePoint 2016 "small" things with huge impact, especially for those who actually own and operate large on premises SharePoint deployments. It's all good.

Even better is that it's really simple and straightforward. Like all good features. No fuss, no mess. It just works.

But what isn't good is the legion of utter misinformation and claptrap promoted by those in positions of responsibility in the community who should know better. Myths such as MinRole is required, misnaming the components, and all the usual "we played with it on a simple VM rig and think we're experts" rubbish. You know the drill.

Luckily, chaps who actually know how it works, and what the real guidance should be, have put together a nice little video explanation and demonstration of this rather impressive change in SharePoint. They also cover guidance for critical planning aspects such as Distributed Cache.

This is the one source of information on ZDP. Forget everything else. You can be confident of this source, and the fact they will take responsibility for any future updates.

Go check it out over at TechNet: https://technet.microsoft.com/EN-US/library/mt767550(v=office.16).aspx.

Very well played to Bob, Neil and Karl who put this together.

One question on this topic I get a lot at events and so on is: "Will we ever see this backported to SharePoint 2013?". Now, I don't work for Microsoft so I can't answer that definitively, but I know enough to state it's basically not going to happen. If you want the goodness of the new mechanism, you should be planning to upgrade to SharePoint 2016 – which is effectively the new baseline for all future build iteration and servicing approaches. Yes, of course the upgrade choice is more complicated, but in respect to ZDP it is the answer.

SharePoint 2016 Nugget #2: Distributed Cache Size in MinRole Farms

posted @ Friday, April 15, 2016 7:07 PM | Feedback (1)

In SharePoint 2013, the Distributed Cache size is set to half of ten percent of the total RAM on the server. This means that on a server with 8GB RAM, the Cache Size (the allocation for data storage) is 410MB. Another 410MB is used for the overhead of running the Cache.

This is a reasonable default as the system has no way of knowing which other services will be provisioned onto the server. And of course by default in SharePoint 2013 every machine in the farm will host Distributed Cache, unless you build your farm properly using the –SkipRegisterAsDistributedCacheHost switch.
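
For reference, that switch is passed when the farm is created (or when a server joins an existing one). A minimal sketch, assuming placeholder server, database, and account names:

```powershell
# Build a SharePoint 2013 farm without registering this server as a
# Distributed Cache host. Database names, server names, passphrase and
# credentials below are all placeholders.
New-SPConfigurationDatabase -DatabaseName 'SP_Config' `
                            -DatabaseServer 'SQL01' `
                            -AdministrationContentDatabaseName 'SP_AdminContent' `
                            -Passphrase (ConvertTo-SecureString 'FarmPassphrase1!' -AsPlainText -Force) `
                            -FarmCredentials (Get-Credential 'FABRIKAM\spfarm') `
                            -SkipRegisterAsDistributedCacheHost
```

Servers joining the farm with Connect-SPConfigurationDatabase take the same switch.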

The problem is that for anything other than a dev/test/hacknslash/demo box this is a silly number, and nowhere near enough for the Distributed Cache to actually be of any use in terms of social data.

With SharePoint 2016, if you continue to build the farm using the old approach – i.e. using a role of Custom and/or –ServerRoleOptional – this behaviour is retained and the default cache size will be half of ten percent of total RAM:

image

However if instead you build your farm using MinRole and add servers of role DistributedCache to the farm, we now provision a much more reasonable and useful default:

image

The size is now half of 80 percent of the total RAM, so on a box with 8GB this will be 3276MB. That’s much more like it.
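
The arithmetic behind both defaults is simple enough to sanity-check. A quick sketch (Python used here purely as a calculator; SharePoint’s own rounding may differ by a megabyte):

```python
def cache_size_mb(total_ram_mb: int, fraction: float) -> float:
    """Distributed Cache data allocation: half of `fraction` of total RAM."""
    return total_ram_mb * fraction / 2

# An 8GB (8192MB) server:
legacy  = cache_size_mb(8192, 0.10)  # 2013 / Custom role: ten percent, halved
minrole = cache_size_mb(8192, 0.80)  # 2016 DistributedCache role: 80 percent, halved

print(f"Custom/2013 default: {legacy:.1f} MB")
print(f"MinRole default:     {minrole:.1f} MB")
```

That lands at roughly 410MB versus roughly 3276MB on the same box, which is the difference between a cache that is useless for social data and one that actually earns its keep.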

This is because we know that this server and any others of the same role will only ever host Distributed Cache and the Web Application Service. Optionally it could also host Request Management and the Claims to Windows Token Service. Neither of those services will ever consume a significant amount of RAM. Whilst they shouldn’t be there (more on that in a future post), the impact is negligible.

Yet another reason why MinRole isn’t the bag of nails community “celebrities” seem so keen to make it out to be. It’s also another reason why a MinRole farm will batter a SharePoint 2013 farm in terms of performance and throughput out of the gate – much more appropriate defaults for critical system configuration that so many customers have ignored with SharePoint 2013.

The devil is in the details. MinRole… kinda tasty. I think I’ll scoff me one right now…

WP_20160415_004

SharePoint 2016 Nugget #1: Topology Service in MinRole Farms

posted @ Monday, April 04, 2016 8:43 PM | Feedback (3)

Whilst I have some much more in depth coverage of SharePoint 2016 coming soon, this is the first in a mini series of “nuggets” – tidbits of information on the new release. Unlike with previous releases I decided against publishing a lot of material whilst the product was in public preview and to wait until the RTM. This decision was driven by a number of factors I won’t bore you with.

Many will be of the opinion that not a great deal has changed in SharePoint 2016. That is somewhat true, especially in respect to visible end user or administrator capabilities. However there are a significant number of small but important things to be aware of, and this series will catalogue many of them over the next few weeks. This doesn’t mean I won’t also be providing some “all up” coverage in the near future.

In previous versions of SharePoint (i.e. 2010 and 2013) the Application Discovery and Load Balancing Service (aka the Topology Service) was deployed to every server in the farm. This service amongst other things is responsible for maintaining a list of addresses of service application endpoints. These are used for Web Application <-> Service Application and Service Application <-> Service Application communication. In the previous versions, it would be extremely likely to go to another machine even if the requested service was running locally. It was a basic round robin style approach.

In SharePoint 2016 this has been changed to always “prefer local”. In other words, if the requested service is running locally, go there instead of hopping over to another machine. If that isn't possible, it will go to a remote server. This change was implemented after the Product Group was able to measure the positive impact of the “prefer local” model whilst running SharePoint Online, and it has been baked into SharePoint 2016. This is one of a number of small but significant changes under the hood which mean SharePoint 2016 can perform significantly better than SharePoint 2013. This is *exactly* the sort of engineering improvement needed across the product in broader terms.

The “prefer local” model is used within both farms leveraging MinRole Server Roles, and farms where every machine is of role Custom (aka Opting out of MinRole, or a 2013 style topology).

In addition to the “prefer local” model, the Topology service in SharePoint 2016 is no longer deployed on every server in the farm. It is only deployed to the Application, Search and Custom roles. In a MinRole farm there is no need to have the IIS Web Application present on servers of role DistributedCache or WebFrontEnd so it’s not deployed. Note the term IIS Web Application - don't confuse this with a SharePoint Web Application whose counterpart in IIS is a Web Site. Here’s a WebFrontEnd in my farm after the servers have been joined to the farm, but *before* any service applications have been created (with MinRole service instances are provisioned automatically when service applications are created):

image

No-one wants to be pointing and clicking around across a bunch of boxes (and hitting that annoying WPI pop up!). The following Windows PowerShell will display the IIS Web Applications under the SharePoint Web Services IIS Web Site for every server in the farm:

 # Show the Service Application Endpoints across the farm
 # ignores Outgoing Email and Database servers
 ForEach ($Server in (Get-SPServer | ? {$_.Role -ne "Invalid"})) {
    Write-Host "$($server.Address): $($server.Role)"
    Invoke-Command $Server.Address { 
            Import-Module WebAdministration;
            Get-ChildItem "IIS:\Sites\SharePoint Web Services" | 
            ? {$_.NodeType -eq "application"} |
            Select Name | Format-Table -HideTableHeaders
    } 
 }

And here’s the output of the above (I’ve filtered to show only one machine per role):

DistributedCache
----------------
SecurityTokenServiceApplication

WebFrontEnd
-----------
SecurityTokenServiceApplication

Application
-----------
SecurityTokenServiceApplication
Topology                       

Search
------
SecurityTokenServiceApplication
Topology                       

Custom
------
SecurityTokenServiceApplication
Topology

But what about after some service applications have been created? No big surprises here, the endpoints are only deployed on the roles that are hosting the service instances. Just like SharePoint 2013, but due to the topology model that MinRole enforces it’s worth covering a few aspects of this.

On the DistributedCache role we don’t care about endpoints. We’ll never use them. Even if we were (stupidly) running Request Management on that role we still would never need them. We certainly don’t want to advertise services to other roles.

On the WebFrontEnd role, if we need to call the low latency service applications like BDC, MMS, UPA etc we prefer local but we can go to another machine if necessary. If we need to call “not so low” latency apps such as Word Automation or PowerPoint Automation etc we go across to the Application server as we did in SharePoint 2013. On this role there is no desire to advertise services to the other roles.

Likewise on the Application role, we will also go local if we can or otherwise via the Topology service Application Addresses.  This is an important aspect to really understanding MinRole. This is why all those service instances are deployed on the Application role as well as the WebFrontEnd role. In addition we do wish to advertise our services to other roles.

It seems a little strange at first, but this is a key part of changing the approach to Farm Topology. MinRole in many ways could be summed up as: making min changes for max effect.

On the Search role, we’ll pretty much go via the Topology service in the uncommon scenarios where we need to call out to the other service instances. The only endpoints on the Search role, are wait for it, the Search ones!

Here’s the output after every service application has been created in the farm:

DistributedCache
----------------
SecurityTokenServiceApplication

WebFrontEnd
-----------
0e2a2a21d68f48e5ac6e5212386c68a1
65bac1dee73a45d980e8a727ecded730
7e5c51e5c9644e20acfb0ebfe9e1932c
86e16014fa714b2482fe61558f47f4c8
872ceab28a38461781b492e8d0cd246b
8e1c5639ec3f4ef988064f210f23fd46
9abc0e641df14da283b794ca5951a54c
a8f3b8c8fab44a1a98fb66d85865362f
da9a134d441e4d109ad9b76beee38fc5
SecurityTokenServiceApplication 

Application
-----------
0e2a2a21d68f48e5ac6e5212386c68a1
343877e231ee4bac8a6ad9450fd9a3c9
65bac1dee73a45d980e8a727ecded730
7e5c51e5c9644e20acfb0ebfe9e1932c
86e16014fa714b2482fe61558f47f4c8
8e1c5639ec3f4ef988064f210f23fd46
a8f3b8c8fab44a1a98fb66d85865362f
bf5d1dc0e3964dd99a96542d20b7f097
da9a134d441e4d109ad9b76beee38fc5
SecurityTokenServiceApplication 
Topology                        

Search
------
74fbbb18d6fe49e6bdec58bdca9178f2
d55ae60a84b649c28b9022baedecb2fa
SecurityTokenServiceApplication 
Topology                        

Custom
------
SecurityTokenServiceApplication
Topology        

Ahh, I love me some GUIDs, can’t get enough of ‘em. Could be worse, could be Octet strings. Let’s sort that out. Unfortunately this means using the SharePoint Snap-in. Sadly that’s not something that’s been fixed in SharePoint 2016!

$Credential = Get-Credential $ShellAdminAccountName
ForEach ($Server in (Get-SPServer | ? {$_.Role -ne "Invalid"})) {
    echo ""
    echo (Get-SPServer -Identity $Server | % {$_.Role})
    echo "-----------------------"
    icm $Server.Address { 
            Import-Module WebAdministration
            Add-PSSnapin -Name "Microsoft.SharePoint.PowerShell"
            $Apps = Get-ChildItem "IIS:\Sites\SharePoint Web Services" | ? {$_.NodeType -eq "application"} | Select Name
            ForEach ($app in $apps) {
                If ($app.Name -eq "SecurityTokenServiceApplication" -or $app.Name -eq "Topology") { $app.Name }
                Else { (Get-SPServiceApplication | ? {$_.Id -eq $app.Name}).DisplayName }
            } 
    } -Credential $Credential -Authentication Credssp
}

Note, there’s lots of aliases and such like in the above script. You wouldn’t really ever do this for real, it’s just for demonstration purposes. Here’s the output:

DistributedCache
-----------------------
SecurityTokenServiceApplication

WebFrontEnd
-----------------------
Machine Translation Service Application
App Management Service Application
Secure Store Service Application
User Profile Service Application
Visio Graphics Service Application
Managed Metadata Service Application
Access Service Application
Business Data Connectivity Service Application
Subscription Settings Service Application
SecurityTokenServiceApplication

Application
-----------------------
Machine Translation Service Application
PowerPoint Automation Service Application
App Management Service Application
Secure Store Service Application
User Profile Service Application
Managed Metadata Service Application
Business Data Connectivity Service Application
Word Automation Service Application
Subscription Settings Service Application
SecurityTokenServiceApplication
Topology

Search
-----------------------
Search Service Application
Search Administration Web Service for Search Service Application
SecurityTokenServiceApplication
Topology

Custom
-----------------------
SecurityTokenServiceApplication
Topology

And that’s all there is to it. Pretty simple stuff. But again, these seemingly trivial changes have a significant positive impact on overall farm performance. As always with SharePoint, the devil is in the detail. As much as legacy gunk is still in the product, there are some extremely smart cookies working hard to make it better for all of us. Respect due.

I will leave you with this:

cola1

s.