harbar.net component based software & platform hygiene

Configuring Kerberos Constrained Delegation with Protocol Transition and the Claims to Windows Token Service using Windows PowerShell

posted @ Tuesday, June 02, 2015 9:05 PM | Feedback (0)

Recently I’ve done a few pieces of work with SharePoint 2013 Business Intelligence and I have also delivered the “legendary”* Kerberos and Claims to Windows Service talk a few times this year. This reminded me to post my Windows PowerShell snippets for the required Active Directory configuration.

This topic area is perhaps one of the most misunderstood areas of SharePoint Server, and there is an utterly staggering amount of misinformation, out of date information, single server documentation and good old fashioned 100% bullshit out there. That’s a surprise with SharePoint stuff, huh?

Every guide or document out there that I could find talks to configuring Delegation using Active Directory Users and Computers (ADUC). They all also reference configuring Local Security Policy manually, or via Group Policy (without providing the details).

Of course there’s nothing wrong with doing it that way, and it sure makes for a better explanation of the concepts. However back in 2009 when we were working on pre-release materials I put together some Windows PowerShell to achieve the same configuration. So here they are in all their very simple glory.

* “Legendary” – I don’t know about that so much, but the Kerberos talks and in particular the AuthN+Z module of the SharePoint 2007, 2010 and 2013 MCM programs were recently described to me as such by five different SharePoint luminaries with rock solid credibility. Those people know who they are.

Every time I give this talk I get hassled for the “magic scripts”. They aren’t magic, but they always seem to surprise people as there is a misconception that delegation settings cannot be set using Windows PowerShell!

As you should be aware, in order to configure identity delegation for a Web Application in Claims mode within SharePoint Server 2010 or 2013 we must configure Kerberos Constrained Delegation with Protocol Transition. No ifs, no buts. It’s the only way it can work because in Claims mode there is no identity with which to perform impersonation, basic delegation, or true Constrained Delegation using Kerberos.

Thus, we make use of a component of Windows Identity Foundation, the Claims to Windows Token Service (C2WTS), to mock real delegation using a Windows Logon Token. C2WTS itself makes use of Service For User (S4U). S4U does NOT perform real delegation, it cannot because there are no user credentials to delegate. It instead grabs a bunch of SIDs for the user (in this case a service identity). What all this means is that there is a hard requirement to use Protocol Transition. Protocol Transition is named in the UI of ADUC as “Use any authentication protocol”.

Thus, in order to set things up, our settings in Active Directory for the C2WTS service identity and the application pool identity of the service application endpoint must be configured to perform Kerberos Constrained Delegation using Protocol Transition to the back end services.

In the example below I am allowing the C2WTS account to delegate to SQL Server Database Services and SQL Server Analysis Services using the SPNs which already exist on their service accounts. I of course repeat the exact same configuration on the application pool identity of the service application endpoint.


In order to complete this configuration using ADUC we are told we must create a “mock” or “fake” Service Principal Name (SPN) on the accounts first. Otherwise the Delegation tab in the account properties does not show up.

The reality is we can easily configure the attributes we are interested in using ADUC in Advanced Features mode, or ADSIEdit. However, there must be an SPN for the delegation to succeed. So it’s not a “mock” SPN at all. It’s not just about exposing the Delegation tab. We must have an SPN!
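Before creating anything it is worth checking what SPNs are already registered on the accounts. A quick sketch using the AD module (the account name here is illustrative):

```powershell
# List the SPNs currently registered on an account
Get-ADUser "c2wts" -Properties servicePrincipalName |
    Select-Object -ExpandProperty servicePrincipalName
```

The old school `setspn -L fabrikam\c2wts` gets you the same answer from a plain command prompt.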

It’s a complete breeze to configure the same settings using the Active Directory module for Windows PowerShell.

  • The services to delegate to are exposed by the AD schema extended attribute msDS-AllowedToDelegateTo. This can be manipulated using the standard Set-ADUser –Add pattern. As can the SPN itself. 
  • The setting for Protocol Transition is actually a userAccountControl flag. Its enumeration is ADS_UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION, or 16777216 (0x1000000). Remember this attribute is a cumulative bitmask. But the thing is we DON’T need to care! We don’t need some stinky “library” or utility function to manage the bitmask stuff or any of that noise. It can all be handled with the Set-ADAccountControl cmdlet with the –TrustedToAuthForDelegation parameter.
  • Note TrustedToAuthForDelegation == Protocol Transition, –TrustedForDelegation == Kerberos Only
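If you want to see the equivalence for yourself, you can read the raw bitmask and test the bit directly. A sketch (the account name is illustrative; 0x1000000 is the ADS_UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION flag):

```powershell
# Test the Protocol Transition bit on the raw userAccountControl bitmask
$uac = (Get-ADUser "c2wts" -Properties userAccountControl).userAccountControl
if ($uac -band 0x1000000) { "Protocol Transition is enabled" }

# ...but Set-ADAccountControl handles the bitmask for us:
Set-ADAccountControl -Identity "c2wts" -TrustedToAuthForDelegation $true
```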

And that’s it. Two cmdlets basically. A complete snap. Now as always, there’s some slinging needed to do this neatly for real requirements and perform end to end configuration. Here’s the Windows PowerShell script I use for basic setups:

    <#
    Configures accounts in Active Directory to support identity delegation
    February 16th 2009

    1. Configures SPNs for SQL DB and SQL AS
       - does not check for duplicates
    2. Configures SPNs for SharePoint service identities
       (C2WTS and Service App Endpoint Identity)
    3. Configures Kerberos Constrained Delegation with
       Protocol Transition to SPNs in #2
    #>
Import-Module ActiveDirectory

$sqlDBaccount = "sqldb"
$sqlASaccount = "sqlas"
$c2wtsAccount = "c2wts"
$servicesAccount = "sppservices"

$c2wtsSpn = "SP/c2wts"
$servicesSpn = "SP/Services"
$sqlDbSpns = @("MSSQLSvc/fabsql1.fabrikam.com:1433", "MSSQLSvc/fabsql1:1433")
$sqlAsSpns = @("MSOLAPSvc.3/fabsql1.fabrikam.com", "MSOLAPSvc.3/fabsql1")
$delegateToSpns = $sqlDbSpns + $sqlAsSpns

$delegationProperty = "msDS-AllowedToDelegateTo"

Write-Host "Configuring SPNs for SQL Server Services..."
$account = Get-ADUser $sqlDBaccount
$sqlDbSpns | % {Set-ADUser -Identity $account -ServicePrincipalNames @{Add=$_}}
$account = Get-ADUser $sqlASaccount
$sqlAsSpns | % {Set-ADUser -Identity $account -ServicePrincipalNames @{Add=$_}}

function ConfigKCDwPT($account, $spn) {
    $account = Get-ADUser $account
    $account | Set-ADUser -ServicePrincipalNames @{Add=$spn}
    $account | Set-ADObject -Add @{$delegationProperty=$delegateToSpns}
    Set-ADAccountControl $account -TrustedToAuthForDelegation $true
}

Write-Host "Configuring KCDwPT for C2WTS and Services Account..."
ConfigKCDwPT $c2wtsAccount $c2wtsSpn
ConfigKCDwPT $servicesAccount $servicesSpn

Write-Host "KCDwPT configuration complete!"

OK, so that’s the AD account configuration settings all taken care of. What about the C2WTS itself?

If we run C2WTS as its default identity, LocalSystem, we don’t need to do anything. But that’s a really stupid configuration. Why? Because in a real farm you have more than one machine running C2WTS. That means multiple points of configuration (on each computer object in AD). In addition any mistakes you make during configuration (say you fat finger the SPN) require a machine restart for corrections to take effect. Thus there is a compromise between manageability, configuration approach and security.

The reality is that the security element of the compromise is completely null and void from a technical or information security perspective. The old arguments about TCB are now completely out-dated, and besides were invented by people who didn’t know information security and were designed for single server solutions! However, if you are unlucky enough to work with those customers with out-dated security policies it remains part of the compromise on those grounds alone.

Everyone else with any sense will change the identity to a named service account. If we do this, we also have to grant additional User Rights Assignments to the account in order for it to be able to call S4U. These are Act as part of the Operating System and Impersonate a Client after Authentication. The account must also be a member of the local Administrators group on each server it runs on. All of this can be done via Computer Management and Local Security Policy, or properly via Group Policy.

However it’s also a complete snap to configure this stuff using Windows PowerShell, making use of an old school utility NTRights from the Windows Server Resource Kit or the Carbon library. Here’s the script:

    <#
    Configures C2WTS service identity with appropriate user rights
    February 16th 2009

    1. Configures Local Admins membership
    2. Configures User Rights Assignments using NTRights
       (update with path to WSRK)
    3. Configures User Rights Assignments using Carbon
    #>
asnp Microsoft.SharePoint.PowerShell

$user = "fabrikam\c2wts"
$CarbonDllPath = "C:\Tools\Carbon-1.6.0\Carbon\bin\Carbon.dll"

# adds user to local admins group
NET LOCALGROUP Administrators $user /ADD

# sets up the necessary local user rights assignments using NTRights
C:\Tools\rk\NTRights.exe +r SeImpersonatePrivilege -u $user
C:\Tools\rk\NTRights.exe +r SeTcbPrivilege -u $user

# sets up the necessary local user rights assignments using Carbon
Add-Type -Path $CarbonDllPath
[Carbon.Lsa]::GrantPrivileges($user, "SeImpersonatePrivilege")
[Carbon.Lsa]::GrantPrivileges($user, "SeTcbPrivilege")

Note we do NOT have to set the c2wts account to Logon as a Service, as this User Rights Assignment is granted when we change the service identity within SharePoint.

On a related note, I’ve also been asked for my snippets for managing the C2WTS process identity. TechNet has incorrect scripts for this work, which will only ever work on a single server farm (ooops!). Here’s how to change it properly, and also how to reset it back to LocalSystem (properly!).

    <#
    Configures C2WTS service identity
    February 16th 2009

    1. Sets dependency
    2. Sets desired process identity
    #>
asnp Microsoft.SharePoint.PowerShell

$accountName = "FABRIKAM\c2wts"
$serviceInstanceType = "Claims to Windows Token Service"

sc.exe config c2wts depend= CryptSvc

# configure to use a managed account
# Should use farm, otherwise in multi server farm you have an array of objects!
$farmServices = Get-SPFarm
$c2wts = $farmServices.Services | Where {$_.TypeName -eq $serviceInstanceType}
$managedAccount = Get-SPManagedAccount $accountName
$c2wts.ProcessIdentity.CurrentIdentityType = "SpecificUser"
$c2wts.ProcessIdentity.ManagedAccount = $managedAccount
# persist the change and push it to every server in the farm
$c2wts.ProcessIdentity.Update()
$c2wts.ProcessIdentity.Deploy()

# reset to local system
# Should use farm, otherwise in multi server farm you have an array of objects!
$farmServices = Get-SPFarm
$c2wts = $farmServices.Services | Where {$_.TypeName -eq $serviceInstanceType}
$c2wts.ProcessIdentity.CurrentIdentityType = 0 # LocalSystem
# persist the change and push it to every server in the farm
$c2wts.ProcessIdentity.Update()
$c2wts.ProcessIdentity.Deploy()

Note I use this script to also configure the missing dependency on the Windows Service itself. We can of course start the C2WTS easily as well:

# start c2wts on server(s)
$servers = @("FABSP1", "FABSP2")
foreach ($server in $servers) {
    Get-SPServiceInstance -Server $server | Where {$_.TypeName -eq $serviceInstanceType} | Start-SPServiceInstance
}

Nice and easy. No pointy clickity click click or “Working on it…” needed. The entire end to end configuration in Windows PowerShell takes less than 90 seconds.




Insight to what’s going on, information keeps us strong

posted @ Sunday, April 05, 2015 1:25 PM | Feedback (0)

…what you don’t know can hurt you bad, take it from me you’ll be walkin’ around sad.

Great tune, but Terry Lewis’ bass can’t help you or your customers when Office 365 hits the skids. Most of you will now be familiar with the common valid arguments against “cloud” services such as Office 365, particularly those from the enterprise. However one of the common invalid arguments is around service availability and reliability. I can’t count the number of times I have had this conversation with customers over the last two years or so. In almost all cases it’s a completely pointless discussion. However it does also point to a seriously important latent and festering customer concern.

Seriously, the idea that the Contoso Corporation or the outsourced service delivery organisation they sub to can operate and maintain a better level of service than Office 365 is a joke. Of course it’s healthy for customers to be somewhat cynical – let’s face it, ten years ago if you had suggested that Microsoft (yes them of LAN Manager fame) could operate a large scale software as a service hosting operation you’d be laughed out of your job in a nanosecond. But a lot happens in a decade of “change”. Microsoft easily has the best service level of any of the players in this space. And the software they are offering as a service is far more complex operationally. No one else is even close, they aren’t even playing the same game. Especially in the SharePoint space where the nearest “competitor” is a laughably pathetic comparison in terms of operational agility and plain old fashioned skilled engineering. Microsoft are Serena Williams, the competition is any other WTA player.

However, even with those facts which are easily discovered and proven the discussions still come up. “Can they meet our availability targets?”, “How often does it go bang?”, “What happens when they mess up their SSL certificate renewals?”, and the ever amusing “can they handle our throughput targets?”.  When those same questions are asked of the alternative, which in this context is some “internal” IT, the general response is “hmm, good point, do we measure that?”. And that’s what this post is all about. Measurement.

Office 365 has a financially backed SLA. Which is a really good thing. So good in fact that now pretty much everyone expects it. But just like any other SLA, it’s as much use as a chocolate fireguard without measurement. And this is the area where Office 365 today has a significant operational weakness. There is nowhere near enough transparency around operations.

As someone who has been involved in the building from scratch of two of the largest managed operations infrastructures in Europe I am plenty familiar with just how difficult it is to do this well. Even more so when the actual service is evolving at breakneck pace and new features are being added practically on a weekly basis. Being hard isn’t an excuse though, it’s a fundamental infrastructure pre-requisite to doing managed services. Period.

We all know that outages happen, indeed customers in general are OK with that – which is just as well because they have to be. What *really* gets their goat is not knowing about them, and having to spend much more money than they will ever get back on the SLA fielding support calls from angry end users who can’t do their jobs due to the outage.

I’m sure you’ve all seen the Office 365 service health dashboard. Yeah, that. It is useful but it’s not what is needed in terms of transparency nor in execution. For all major Office 365 service outages that have made the headlines over the past 18 months or so, customers reported that the dashboard often didn’t include any information on the outages, and furthermore Microsoft have generally been poor at detailing root causes (something they are getting better at on a monthly basis).

Now of course, it’s pretty stupid to use the same service to deliver monitoring information that you are monitoring, and that’s not what Office 365 do, even if it appears that way in the user interface. But there is also an ownership consideration for customers. You can’t just buy a cloud service and expect the promised utopia. Just like government it’s up to us to hold them to account for their actions.

But wait. If one is to go “all in” with Office 365 does that mean we have to invest in expensive, complex, hard to manage OSM tooling so we will know if we’re getting the service we paid for?


I’m sure plenty are familiar with standard web site uptime trackers and “pingers”. What if there was an exceptionally easy way to hit up your Office 365 services with an external and independent monitoring service that concentrates on the core information? And it came from a source that you could absolutely, without any doubt whatsoever trust. That would be sweet right? How about if it was *really* cheap?


Office 365 Monitor does exactly that. And it’s awesome.

I don’t do ISV product posts. Unless I’m slagging off Anti Virus vendors :). It’s mainly because there aren’t many good ones. Certainly not ones I'm willing to put my name behind. This one is an exception, and more importantly – this is a service that every Office 365 customer on the planet should be using. It’s that important. It’s good for customers, it’s good for Microsoft and it’s good for what matters most, our users.

Office 365 Monitor gives you the basic ability to monitor (24x7) SharePoint Online sites and Exchange Online Mailboxes, providing you with email or SMS notifications of outages. That’s the basic service offering. Over time additional Office 365 resources will likely be added. It doesn’t sound like much but that’s part of the beauty of the solution – it focuses on what’s important, rather than featureitis. Clean, simple and smart.

It’s a complete breeze to setup by simply signing in with your tenant logon and adding resources to monitor via a x-app consent model. No extra username or passwords, just the way it should be done.

The probes and alerts will be telling you of the outages before your users trash your helpdesk and in many cases before Office 365 itself knows there is one. It will also alert you in real time when the service is restored.

There are also additional premium features which provide historical data on both outages and health checks – which give response times and status. There will be more features in this area coming soon. Data exports, cross tenant comparisons, averages – all that good stuff you always wanted your MOM to give you but never had the time to set up. Plus a comparison to the Office 365 SLA everyone is so keen on talking about.

Now, does this solve all of the things I was rambling about in the longwinded pre-amble? No, of course not. Microsoft themselves need to do a much better job on transparency and reporting. Which is something that they are working hard on (yes, they really are). The important point here is no matter what they do, independent monitoring will always be necessary. Furthermore if such tooling as Office 365 Monitor forces Microsoft to invest more in OSM – that’s right on the money.

Did I mention it’s easy to setup? One form. Done. Boom. It takes about 45 seconds. Sound a bit tricky? Check out the video over on YouTube for a feature walkthrough and setup example.

There’s no setup.exe, there’s no hardware, and there’s no cost. Yup, that’s right – it’s free.

I spend a significant portion of my time with customers helping them navigate the operational reality of the cloud services they have purchased or wish to purchase. No matter what feature offering or customisation angst is in play, the truth on the ground is paramount. Office 365 Monitor provides me with a service that greatly assists in this area, the single biggest IT consideration aside from security for software as a service. Every Office 365 customer needs this service. Period.

Seriously if you are working with Office 365 in any capacity, quit watching that online training course on AngularJS, and go check out Office365mon.com. Be in the know, be in control. Get the knowledge.

Updated ULS Viewer

posted @ Saturday, August 23, 2014 11:44 AM | Feedback (0)

If you haven’t already grabbed it, just a quick note to let you know that Microsoft put an update of the ULS Viewer tool out recently. For quite a while the tool had been removed from code.msdn.microsoft.com and those who had “lost” a copy had to resort to annoying others to get it.

ULS Viewer, as I’ve written previously, is an essential tool for working with SharePoint. The new version has a number of tweaks including viewing across a farm, rather than having to manually configure that up.

Go get it!

ULS Viewer download

Bill’s post on the update


Props to Dan in particular!



Support for SQL Server Always On Async Replication with SharePoint 2013

posted @ Thursday, March 20, 2014 10:19 PM | Feedback (0)

One of the most significant “IT Pro” or infrastructure related announcements at the recent SharePoint Conference in Las Vegas was related to a change in supportability for using SQL Server Always On for SharePoint databases, and in particular the use of Asynchronous replication in Business Continuity Management (BCM) scenarios.

This is a HUGE deal. Of course, it’s not sexy, it doesn’t directly provide SharePoint IT Pros with a new tool in their belt, and it doesn’t expand deployment scenarios like the announcement relating to 1TB site collections in Office 365. However it is perhaps the single most important piece of infrastructure related information in the entire show for customers and partners operating real farms.

I tend to avoid pimping such announcements unless they have real impact, this is one of the cases where the “news” deserves much broader exposure. Let me be entirely clear, I was in no way involved in this change, nor in any of the work done to achieve the end goal. My only, and extremely minor, contribution was to provide feedback over the last year or so that such supportability would be greatly appreciated and extend the ability of organisations to implement appropriate and durable Operational Service Management for SharePoint deployments. My only role here is to help get the word out.

Extreme props are due here to the SharePoint Product Group, and specifically the Office 365 Customer Advisory Team (CAT) for making this happen. When community naysayers or indeed customers complain unfairly about Microsoft’s commitment to on-premises and the SharePoint IT Pro in particular it really gets my goat up. Sometimes the criticism is valid, but that is very much the exception to the rule, and this work is a demonstration of just how deeply committed Microsoft is to its customers and the partners that support them in the marketplace.

Here follows a Q&A on the details of the supportability change, and its impact on your designs or implementations.

Q. Which databases support which Always On replication modes?



A.

    Database                     Sync Supported   Async Supported   Notes
    --------------------------   --------------   ---------------   -----
    Central Admin Content        Yes              No                Farm specific database
    App Management               Yes              Yes
    Farm Configuration           Yes              No                Farm specific database
    Managed Metadata             Yes              Yes
    (database name missing)      Not Tested       Not Tested        TBD, goal is to be supported; additional work in progress
    Search Analytics Reporting   Yes              No                See Search notes below
    Search Admin                 Yes              No                See Search notes below
    Search Crawl                 Yes              No                See Search notes below
    Search Links                 Yes              No                See Search notes below
    Secure Store                 Yes              Yes
    State Service                Yes              No                Farm specific database
    Subscription Settings        Yes              Yes
    Translation Services         Yes              Yes
    UPA Profile                  Yes              Yes
    UPA Social                   Yes              Yes
    UPA Sync                     Yes              No                Backup and restore or recreate, see UPA notes below
    (database name missing)      Yes – NR         No                Farm specific and unsupported for attach DR; could be used for data mining only
    Word Automation              Yes              Yes

Commentary: that’s a lot of green “yes”! A few notes are present. Clearly farm specific databases do not support Async, and to do so would be pointless in a DR scenario anyway as they store transient or dynamically generated data. There are a couple of stand outs which require further considerations (Search and UPA) which will be addressed below in more detail.


Q. Which version of SharePoint was tested?

A. The testing was carried out on SharePoint 2013 April 2014 CU (Post SP1). Get your farms patched, people!
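If you’re not sure what a farm is running, here’s a quick sketch to check before relying on the tested configuration:

```powershell
# Report the farm build number to compare against the CU you expect
asnp Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
(Get-SPFarm).BuildVersion
```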

Q. Can I implement Always On Async replication on a version of SharePoint 2013 prior to the tested version?

A. While we have every reason to believe that the async replication capabilities will work just fine on a version of SharePoint 2013 prior to the tested version, we do not recommend it. This does not make it unsupported however, just consider that in the event of raising a support case your customer is likely to be asked to install the SharePoint 2013 April CU if the support case is in any way related.

Q. Can I implement Always On Async replication on a version of SharePoint prior to SharePoint 2013?

A. We have not tested or considered support for prior versions of SharePoint and as such the official stance here has to be, unsupported on versions prior to SharePoint 2013.

Q. What has changed in the product to support Always On Async replication in SharePoint 2013?

A. Nothing has fundamentally changed in the data transfer or connection layer. We have added a new property to the SPDatabase object: AlwaysOnAvailabilityGroup, which is populated when you execute the new PowerShell cmdlet Add-DatabaseToAvailabilityGroup.


Q. What new commands have been added to support Always On in SharePoint 2013?

A. There are three new cmdlets:

Add-DatabaseToAvailabilityGroup – adds a database to an availability group by database and availability group name

Remove-DatabaseFromAvailabilityGroup – removes a database from an availability group, with options to prevent data loss on the secondaries and also to force the removal if needed

Get-AvailabilityGroupStatus – allows you to interrogate a known availability group and check which SQL Server nodes it spans and their replication status.
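As a sketch of how the three hang together (the availability group and database names are illustrative, and the parameter names shown are assumptions based on the cmdlet descriptions above):

```powershell
# Add a content database to an existing availability group, check the group,
# then remove the database again - all names here are illustrative
Add-DatabaseToAvailabilityGroup -AGName "SP-AG01" -DatabaseName "WSS_Content_Intranet"
Get-AvailabilityGroupStatus -Identity "SP-AG01"
Remove-DatabaseFromAvailabilityGroup -AGName "SP-AG01" -DatabaseName "WSS_Content_Intranet"
```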

Q. Are there any known limitations when using availability groups in sync or async mode?

A. There are multiple reports of being unable to create new databases against listener names, or of difficulties patching farms when Sync and Async replication modes are used together. Work is on-going to document the limitations on TechNet in the coming months.


Special considerations for the UPA Sync database
In the table above, backup and restore (or recreate) are noted as the appropriate steps to take in a BCM scenario for this particular database. This database has its schema and some data provisioned at the point of starting the User Profile Synchronization service instance. In addition, when updates are applied, any changes to its schema are only provisioned following a service instance restart. This means that Async replication mode is not appropriate and indeed offers no value. There is a compromise to be made with the backup/restore approach, which involves other components such as encryption keys and certificates. For many customers this is a significant burden and they will choose the “cleanest” approach, which is to simply recreate and then provision the UPS service instance. Be aware however that this approach requires a reconfiguration of any Sync connections, filters and additional work with custom property mappings.



Special considerations for the Search service application databases.

You will note that the search databases are specifically called out as not supported for Async replication. This is due to the requirement to maintain synchronization between the search index files on disk and the search databases. With async replication this coordination cannot be guaranteed, and the possibility of search index corruption, or at the very least instability, is extremely high.

Thus the question remains, what to do about search?

If you cast your mind back to SharePoint 2010 we had a couple of options.

1. For a high degree of search freshness you could crawl read only databases on the DR side or crawl the production farm from a DR search service application

2. Use a log shipped copy of the search admin database to recreate the search service in DR and re-crawl the content; this has the advantage of bringing over the search configuration, but the index needs to be rebuilt

3. Backup and Restore of the search service application. This is a high fidelity restore but may be unacceptable due to an extended restore time.

Two fundamental differences exist between SharePoint 2010 and SharePoint 2013 in this regard. In SharePoint 2013 several enhancements have been made to support tenant and site level search administration and this complicates the search DR story. These enhancements while surfaced at the site collection level and web level actually have their configuration stored in the search administration database. The second key change is the use of the search engine to process and manage analytical data, augmenting the search index and relevance of search results based on usage statistics and click throughs. This information resides inside the search index itself.

These changes give us a challenge for DR. The information in the search administration database can be retained for DR by recreating the service application from a copy of the production search admin database – similar to how we did this task in SP2010. The downside is that we cannot constantly update the DR side, and so this has to be done at the point of failover, followed by a full crawl. The analytics information however cannot be replicated in any way except by a full service application backup and restore. This means we have different options for search DR in SP2013, each with its own pros and cons.





Crawl Read Only DBs in DR
  Pros: Can maintain a high degree of search content freshness at point of failover.
  Cons: Cannot maintain search configuration settings below service app level. Also loses analytical influences from production.

Recreate service app in DR from Admin Database
  Pros: Brings across all the search configuration settings.
  Cons: Have to re-crawl all content. Also loses analytical influences from production.

Search service application backup and restore
  Pros: Brings across all the search settings and the analytical influences.
  Cons: May take a while to get search operational. Search is offline until the restore completes.

Depending on your requirements for search you will need to make a selection from the above. Alternatively you could use a combination. For example, if your primary need for search was to discover content with a high degree of freshness then you could select the first option. At the same time – just on the off chance that failing back to production was not possible, perhaps due to a major incident – you could also take a backup from production periodically, to be able to restore in DR and recover full fidelity, perhaps after a trigger period of a number of hours after failover.

SharePoint CAT is working on a more in depth whitepaper to describe the steps for Search DR in more detail. This whitepaper will be published in due course.


So there you go. If you are in the business of designing or providing BCM for enterprise SharePoint deployments, you will surely agree the importance of these changes cannot be overstated. If you want more information on the background of this space, and to re-live the actual announcement, head on over to Channel 9 (yes, it’s for IT geeks as well) and watch my good buddy, Neil Hodgkinson, deliver the goods in his excellent presentation on BCM for SharePoint 2013.

I’m outta here like I just failed over my data centre.


Online Workshop: SharePoint Advanced Infrastructure: Distributed Cache

posted @ Wednesday, February 19, 2014 10:26 AM | Feedback (1)

Audience: SharePoint Administrators, Infrastructure Architects and Support Professionals.

The esteemed Microsoft Certified Master certification is no longer obtainable... but you can still get master-class mentoring through our collection of Advanced Workshops. Delivered by one of the world's foremost SharePoint authorities, this workshop is a rare opportunity to learn from a recognised master in the field.

This module provides 360 coverage of Distributed Cache, the new foundational and pre-requisite service instance in SharePoint 2013 which is an implementation of Windows Server AppFabric Caching, and provides in memory caching across a farm.

Understand the background of this service, its architecture and usage within SharePoint 2013 along with design constraints and acceptable usage scenarios. Get to grips with the topology trade-offs for implementation, including the vendor guidance and practical reality from real world deployments. Core configuration and administration practices will be covered, along with how to manage updates. This module will also delve into troubleshooting a misbehaving and/or broken cache instance.

Clearing up some of the myths and out-dated information on a key service which commonly causes confusion and operational headaches, this module delivers all-up, “level 400” material for the SharePoint administrator, Infrastructure Architect or support professional.

Sign up over at http://www.combined-knowledge.com/Webinars/advanced_inf_workshop.html

Da Big Daddy: SharePoint Conference 2014

posted @ Monday, January 20, 2014 12:32 PM | Feedback (0)

[Updated 19/02 with session timeslots and an additional session]

Vegas, March, 10,000 SharePoint people. What could possibly go wrong?!

The big daddy is back. SharePoint Conference 2014 promises to be another great event, and I am once again happy to be speaking at the biggest SharePoint conference on the planet. I get asked a lot about which conferences are worth the money and so on, and of course, being the official show, with the associated travel and accommodation expenses, SPC is often in the “requires justification” bucket. Seriously, there is no debate: it’s worth it. If you are in the SharePoint business you absolutely have to be there. If you are a customer there is an incredible wealth of material and networking which will benefit your deployments. Seriously consider attending. Yes, I know it’s Vegas – with its pros (I’ve been told there are some!) and its not inconsiderable cons. However, forget all that and concentrate on the conference itself and you will be very happy with the return, and have some fun to boot.

I’ll be again delivering three sessions this year, the abstracts of which I’ve copied below and added a little side commentary. In addition to these sessions, I will generally be loitering around the Infrastructure related tracks and available at the expo hall and Ask the Experts events.

406: Comprehensive User Profile Synchronization
Thursday, March 6, 2014, 12:00 PM-1:15 PM - Lido 3001-3103
Discover the changes and new capabilities in the foundational service for user discovery in deployments of SharePoint Server 2013, and get the real deal on configuring User Profile Synchronization in this demo and best practices heavy session. This session will cover the architecture of the User Profile Service Application and the new AD Direct Mode, and provide a walkthrough of the configuration requirements and setup. We will also focus on the related architectural considerations for high availability, scalability and geographic deployments. Also covered will be general UPA related best practices in terms of synchronization, policy and privacy, and leveraging social features inside the enterprise.

This is the return of the highly popular UPS session. Hmm, not too sure how I feel about this topic after so long, but as it’s the content you requested I will try my best to retain the original flavour while also including some new and/or updated content. Remember though, no matter the technology, I will be imparting the reality of identity management!

411: Office 365 identity federation using Windows Azure and Windows Azure Active Directory
Tuesday, March 4, 2014, 9:00 AM-10:15 AM - Bellini 2001-2106
Windows Azure provides a compelling solution for deploying infrastructure for directory synchronization and identity federation with Office 365, while avoiding the cost of an on-premises implementation plus required operations. This session will present an end-to-end scenario leveraging Windows Azure and Windows Azure Active Directory, along with the Active Directory Sync tool and Active Directory Federation Services (ADFS) to provide the richest identity experience with Office 365. We'll also share and show tips and tricks to help automate deployment and accelerate your Office 365 Identity integration.

Brand new content exclusively for the SharePoint Conference. I’ve been told it’s a bad idea to debut content at SPC. That’s never stopped me before! This promises to be a fun session, I just hope the “live” aspects of the demonstration don’t fail me. The end to end scenario should provide a very interesting approach to dealing with the identity federation pieces of O365 for small to mid sized customers.

418: Subordinate integrity: Certificates for SharePoint 2013
Wednesday, March 5, 2014, 5:00 PM-6:15 PM  - Lido 3001-3103
Certificates play an increasingly important role in SharePoint 2013. As well as protecting data, they are one of the fundamental building blocks of service interaction between service applications and across products such as Lync and Exchange. With the introduction of Office Web Apps 2013 and Workflow Manager as well as the Cloud App Model, certificates are now effectively a pre-requisite for a complete SharePoint deployment. This session will de-mystify which types of certificate are needed, where they are used and how best customers can manage the certificate lifecycle for SharePoint 2013, Office Web Apps and Workflow Manager. Also covered will be client browser considerations and approaches to automating configuration with Windows PowerShell, along with certificate requirements for Office 365.

Brand new for SharePoint Conference. Another debut session, this time on the somewhat murky and clearly confusing topic of Certificates. I get asked about this topic all the time by customers and SharePoint practitioners, and also outwith the SharePoint domain completely. It’s great to see this type of content make it to the show, which brings me onto…

356: Designing, deploying, and managing Workflow Manager farms
Wednesday, March 5, 2014, 10:45 AM-12:00 PM - Lido 3001-3103
Workflow Manager is a new product that provides support for SharePoint 2013 workflows. This session will look at the architecture and design considerations that are required for deploying a Workflow Manager farm. We will also examine the business continuity management options to provide high availability and disaster recovery scenarios. If you want a deep dive on how Workflow Manager works, then this is the session for you.

This one is a co-present with my good friend, and fellow MCA and MCSM Wictor Wilén, which will dive into the IT Pro focused considerations for deployment of Workflow Manager after some architecture and design coverage. This should be a fun session and is commonly requested material in the marketplace both from customers and SharePoint practitioners.

There’s been a bunch of hoopla on the tubez about “is on premises dead” and all that hogwash. SPC shows the commitment Microsoft and SharePoint has to the customers with which it won the data centre, both on premises and on the journey to the cloud.  Take a look at the other sessions over on the official web site. Something for everyone, and tons of practical real world sessions as well. It promises to be a great event.

Viva Las Vegas.



Updated Antivirus for SharePoint 2013 options

posted @ Tuesday, August 20, 2013 11:42 PM | Feedback (0)

Just a quick note to let you know I’ve updated my Antivirus and SharePoint 2013 post, with the details of all the current available options. Instead of the *single* option we had shortly after RTM, there are now four options with hopefully another one in the near future.



SharePoint & Exchange Forum: somewhere in the Baltic Sea!

posted @ Thursday, August 08, 2013 9:36 AM | Feedback (0)

I’m delighted to announce that I will again be speaking at the excellent SharePoint and Exchange Forum (SEF), coming up September 30th through October 2nd. This year, the 10th anniversary, will be a little bit extra special as it’s taking place on board Silja Symphony – a (rather large) cruise ship running between Stockholm and Helsinki.

SEF is always an excellent event, with a great crowd, top quality speakers, great networking and superb evening entertainment.

Head on over to the SEF website to find out more, I look forward to seeing you at the event!



Article: Workflow Manager Farms for SharePoint 2013 Part Four: End to End Configuration using Domain CA issued Certificates

posted @ Friday, August 02, 2013 9:53 AM | Feedback (0)

In the previous parts of this article we covered the core concepts and critical considerations, creating a Workflow Manager Farm using Auto Generated Certificates and converting that farm to use Domain CA issued certificates.

This part will cover the end to end configuration of a Workflow Manager Farm using Domain CA issued certificates. This is of particular importance to those who have an organisational policy in force which prohibits the use of self signed or auto generated certificates. Whilst we can change an existing farm which uses auto generated certificates to use Domain CA issued certificates, it is NOT possible to do so for the Outbound Signing certificate.

It is imperative to be familiar with the material covered earlier, in particular part one. I assume you are, so make sure to have read it before continuing!

Workflow Manager Farms for SharePoint 2013 Part Four: End to End Configuration using Domain CA issued certificates

This part wraps up the article. In the future I may post more, especially around operational service management considerations for Workflow Manager farms, if the community feels this material has been useful.



Article: Workflow Manager Farms for SharePoint 2013 Part Three: Switching an existing farm to use Domain CA issued certificates

posted @ Wednesday, July 31, 2013 5:11 AM | Feedback (0)

In the previous parts of this article we covered the core concepts along with high availability, certificate and SharePoint considerations for Workflow Manager Farms, and the end to end configuration using Auto Generated Certificates. If you are not familiar with this material, make sure to read it before continuing as I assume you have done so!

This part will cover switching the existing Workflow Manager farm to using Domain CA issued certificates.

Whilst this part is intended as primarily step by step configuration guidance, I will take the opportunity to also explain a few things which didn’t make sense to cover in part one, particularly in the realm of Domain CA issued certificates.

As we are taking the environment from the end of part two and changing it up to make use of Domain CA issued certificates, it’s essential you are familiar with the sample scenario environment and configuration detailed in part two! This part is also useful to demonstrate the tasks necessary when updating the Workflow Manager (and Service Bus) configuration.

There is a significant constraint in Workflow Manager configuration: whilst it is possible to update the Outbound Signing certificate to a CA issued certificate, workflows will get “stuck” in their initial stage after doing so. If your organisational policy dictates that only CA issued certificates should be used, then you must initially create the Workflow Manager farm with the correct certificates, as detailed in part four.

Workflow Manager Farms for SharePoint 2013 Part Three: Switching an existing farm to use Domain CA issued certificates


Article: Workflow Manager Farms for SharePoint 2013 Part Two: End to End Configuration using Auto Generated Certificates and NLB

posted @ Saturday, July 27, 2013 2:12 AM | Feedback (0)

This second part will cover the deployment of a highly available, SSL, Workflow Manager Farm for SharePoint 2013 using auto generated certificates and Network Load Balancing. As discussed in part one, this is the most suitable deployment model for the majority of SharePoint On-premises customers. In addition it is also the easiest way to deploy for production.

Whilst this part is intended as primarily step by step configuration guidance, I will take the opportunity to also explain a few things which didn’t make sense to cover in part one.

Workflow Manager Farms for SharePoint 2013 Part Two: End to End Configuration using Auto Generated Certificates and NLB



Article: Workflow Manager Farms for SharePoint 2013 Part One: Core Concepts, High Availability, Certificate and SharePoint considerations

posted @ Friday, July 26, 2013 6:07 AM | Feedback (0)

There’s not a lot of high quality documentation for Workflow Manager 1.0. What exists is generally accurate, however it’s the key missing information and lack of detail which presents challenges in the field. During the initial content development work for the MCSM: SharePoint it became clear there is a very large gap with respect to actually implementing the high level deployment guidance provided by the vendor. Following recent discussions in the MCSM: SharePoint community more generally, the topic again raised its head and led to the publication of this article.

This guide is an attempt to help address that gap. The objective is not to cover background conceptual or architectural aspects of Workflow Manager or Service Bus specifically, as these are well covered already elsewhere. Rather the objective is to provide clear and accurate explanation along with repeatable deployment and configuration details for common scenarios faced by the SharePoint practitioner.

This article will focus on the deployment and configuration of a highly available, SSL, Workflow Manager Farm for use with SharePoint 2013. In most test and lab environments a single server Workflow Manager farm is deployed and is sufficient. Pretty much everything you will find will talk to this scenario with scant mention of doing it “properly in production”.

This article is in four parts, due to its length, and also to avoid switching between exposition and configuration:

  1. Core Concepts, High Availability, Certificate and SharePoint considerations
  2. End to End Configuration using auto generated certificates and NLB
  3. Switching an existing farm to use Domain CA issued certificates
  4. End to End Configuration using Domain CA issued certificates

For those comfortable with the technology this first part will be enough. The other three parts will include lots of nice Windows PowerShell, and some not so nice Windows PowerShell!



Configuring a Dedicated “Crawl Front End” with Request Management

posted @ Wednesday, July 24, 2013 12:06 AM | Feedback (2)

I keep getting asked about how to use Request Management in SharePoint 2013 to configure a dedicated “crawl front end”. In other words how to use RM to ensure that your search crawl traffic gets sent to a specific machine or machines in the farm, which do not serve end user requests.

Hopefully you already know that by simply turning on RM on the Web Servers in your farm, with no additional configuration, you get health based routing for free. And this is health based routing that actually works, unlike the default configuration of the most popular “intelligent” load balancers, all of which require additional scripting to make them “intelligent”. But you may want to leverage RM to do other things, and “crawl front ends” are the best example of a reasonably common requirement.

If you are not familiar with RM, head on over to my Request Management article series to find out about it before doing anything with the scripts in this post!

It’s all very simple, but the number of times I’ve been asked about it means it makes sense to do a quick post on the required configuration. Now, before we get into the details I want to make a couple of things very clear.

  1. Just because you can, doesn’t mean you should™!
    Whilst I will show you how to set it up, this does not mean “Spence says you should use RM for configuring a crawl front end." RM is simply one of the tools available to us in SharePoint 2013 that we can choose to use. The call as to whether to use it or not depends to a large degree on the specifics of your deployment. Furthermore if you are not comfortable with RM, then this is not the approach you are looking for. Remember, it’s a tool. Another tool for the same job is the hosts file on your server file system. It’s your decision to make, not mine! :)
  2. Complexity is the enemy of everything!
    The vast majority of on-premises customers don’t need a dedicated crawl front end in the first place. They make the farm topology more complicated, expensive and increase the operational service management burden. None of those things are good! “Architects” that over-complicate farm designs for customers by deploying such things really get on my wick. Only deploy dedicated crawl front end(s) if you really need them. Some folks do. Most don’t and would be better off by increasing the resources available to their existing web servers. Oh, and whilst I'm in rant mood, just say no to the idea of “crawling yourself” by running the Web Application Service on the box running the crawler. That’s just stupid. Period.
  3. Remember that a “front end” is complete and utter claptrap!
    Yup, “Web Front End”, “Crawl Front End”. Stupid terms which simply won’t die. Everything is a web server. That’s it. Nothing more to see here. I was always very disappointed with Microsoft when they didn’t ship a role called “Web Back End” in SharePoint 2007! :)  “Web Front End” is a term made up by people that don’t (or didn’t) understand HTTP. But of course the trouble is everyone uses them (even me, when I'm tired or otherwise not concentrating). The stupidest term in SharePoint (at least since the “Single Sign On” service got renamed :).


Okay, cool. Disclaimers are always fun. With that out of the way let’s get into some details.


Host Named Site Collections

The fundamental thing here to understand is that Request Management only really works with Host Named Site Collections. That means a single Web Application in the farm, hosting all Site Collections. Microsoft’s preferred logical architecture design for SharePoint 2013, and incidentally the way we should have been doing it all along if the product had been up to scratch.

Whilst it is possible to hack things together to make RM work with Host Header Web Applications, I will NOT be detailing that configuration, ever. It’s not something you should even consider.

So, Host Named Site Collections it is. That means if you are not using them, this approach can NOT be used to configure your crawl front ends. That means you need to look at the alternative approaches such as the properties in the Search Service Application, or better yet, good old name resolution.
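
For reference, the name resolution route is nothing more than a hosts file entry on the server running the crawler, pointing the crawled host name(s) at the web server of your choice. A minimal sketch (the IP address below is made up for illustration):

# %SystemRoot%\System32\drivers\etc\hosts on the server running the crawler
# all crawl traffic for these host names goes to the web server at this address
192.168.10.21    webapp
192.168.10.21    cool.fabrikam.com

No RM, no machine pools, no rules. The trade-off is that it’s per-server plumbing which is easy to forget about when the topology changes.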


Starting the Request Management Service Instance on Web Servers

Request Management should be started on all Web Servers in the farm. You can do this via Services on Server or with the Start-SPServiceInstance Windows PowerShell cmdlet. I created a helper script which basically does all the work, regardless of the Farm you are deploying to.

# Start the RM service instance on all machines in the farm which run the WA service instance
# we could of course simply pass in an array of servers

# Gets a list of server names for servers running a given service instance type
function GetServersRunningServiceInstance ($svcType) {
    return Get-SPServiceInstance | ? {$_.TypeName -eq $svcType -and $_.Status -eq "Online"} | select Server | % {($_.Server.Name)}
}

# Starts a service instance on a given server and waits until it is online
function StartServiceInstance ($svcType, $server) {
    Write-Host("Starting " + $svcType + " on " + $server + "...")
    $svc = Get-SPServiceInstance -Server $server | where {$_.TypeName -eq $svcType}
    $svc | Start-SPServiceInstance
    while ($svc.Status -ne "Online") {
        Start-Sleep 2
        $svc = Get-SPServiceInstance $svc
    }
    Write-Host("Started " + $svcType + " on " + $server + "!")
}

$waSvcType = "Microsoft SharePoint Foundation Web Application"
$rmSvcType = "Request Management"

# loop thru servers and start RM..
GetServersRunningServiceInstance $waSvcType | % {StartServiceInstance $rmSvcType $_}


Configuring Request Management for dedicated crawl front end(s)

The first step is to get the RM Settings for our Web Application. We will pass this into other cmdlets later on. I like to output the objects to validate things are working.

# grab and display the settings
$waUrl = "http://webapp/"
$wa = Get-SPWebApplication $waUrl
$rmSettings = $wa | Get-SPRequestManagementSettings
$rmSettings

Obviously the Web App address is NOT http://webapp/ unless we have modified the Internal URL. It will be a server name, and you should update that variable to reflect your setup.

Now we will configure a Machine Pool which includes the servers we wish to act as crawl front ends. We do this by creating an array of server names, and then pass that into the Add-SPRoutingMachinePool cmdlet.

# Create a Machine Pool for the dedicated crawl front end
$crawlFrontEnds = @("Server1", "Server2")
$mpName = "Crawl Front End Machine Pool"
$mp1 = Add-SPRoutingMachinePool -RequestManagementSettings $rmSettings -Name $mpName -MachineTargets $crawlFrontEnds
# validate settings
Get-SPRoutingMachinePool -RequestManagementSettings $rmSettings -Name $mpName  

Now we need to create a Rule Criteria which will be evaluated to see if the incoming request is from the search crawler. This is the “magic” here (if it can be called such!). We will match the HTTP User Agent against that provided by the crawler.

$userAgent = "Mozilla/4.0 (compatible; MSIE 4.01; Windows NT; MS Search 6.0 Robot)"
$critera = New-SPRequestManagementRuleCriteria -Property UserAgent -MatchType Equals -Value $userAgent

Note the $userAgent variable there. It must match the crawler’s User Agent; the one above is the default with SharePoint 2013. It’s stored in a registry key, and there are plenty of reasons why it may need to be changed. Make sure to check yours; it’s at

HKLM\SOFTWARE\Microsoft\Office Server\15.0\Search\Global\Gathering Manager\UserAgent

Of course, you could read the value directly in Windows PowerShell and shove it into the –Value parameter.
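
As a sketch of that, assuming the default registry location quoted above (and run on a server with the search components installed):

# Read the crawler's User Agent from the registry rather than hard-coding the string
$regPath = "HKLM:\SOFTWARE\Microsoft\Office Server\15.0\Search\Global\Gathering Manager"
$userAgent = (Get-ItemProperty -Path $regPath -Name UserAgent).UserAgent
# $userAgent can then be passed to the -Value parameter of New-SPRequestManagementRuleCriteria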

Lastly we create a Routing Rule using the above Criteria and Machine Pool. I also add this rule to Execution Group 0 to ensure that it will always fire.

$ruleName = "Serve Crawler requests from Crawl Front End Machine Pool"
$rule = Add-SPRoutingRule -RequestManagementSettings $rmSettings -Name $ruleName -ExecutionGroup 0 -MachinePool $mp1 -Criteria $critera
# validate
Get-SPRoutingRule -RequestManagementSettings $rmSettings -Name $ruleName

That’s it! One Machine Pool, one Routing Rule. It doesn’t get much simpler. Well unless you are editing a hosts file :).

Now we can kick off a Search Crawl and see it working by viewing the RM ULS (remember to view the ULS on the Web Servers and not the Crawler!). You will see a whole bunch of URI Mappings happening across those boxes, for example:

Mapping URI from 'http://webapp:80/robots.txt' to 'http://server2/robots.txt'

Mapping URI from 'http://webapp:80/_vti_bin/sitedata.asmx' to 'http://server2/_vti_bin/sitedata.asmx'

Mapping URI from 'http://cool.fabrikam.com:80/robots.txt' to 'http://server2/robots.txt'

Mapping URI from 'http://lame.fabrikam.com:80/robots.txt' to 'http://server2/robots.txt'

Mapping URI from 'http://cool.fabrikam.com:80/_vti_bin/sitedata.asmx' to 'http://server1/_vti_bin/sitedata.asmx'

Mapping URI from 'http://lame.fabrikam.com:80/_vti_bin/sitedata.asmx' to 'http://server2/_vti_bin/sitedata.asmx'

Mapping URI from 'http://cool.fabrikam.com:80/_vti_bin/publishingservice.asmx' to 'http://server1/_vti_bin/publishingservice.asmx'

So we can sit staring at the ULS forever whilst the crawler does its thing, but the important thing is that it’s working.

But wait. On my demo rig I have a couple of other web apps there – cool and lame – those are host header web apps, and it’s mapping those requests as well. That’s not so cool, because the response will be for the HNSC not the Path Based Sites. See I told you RM was designed for HNSC only. Don’t mix and match, weird shit will happen. You’ll get weird search results and users won’t be able to find a whole bunch of stuff:




There you have it. Configuring dedicated crawl front ends using Request Management. If you are in a Host Named Site Collection world, life is good. It’s very simple. Still stuck in the world of Host Headers, then just say no to Request Management, period. Enjoy managing your crawler requests!


Shooting Roland Garros

posted @ Thursday, June 20, 2013 9:36 AM | Feedback (1)

After posting a bunch of “pics” from our recent trip to Roland Garros on Facebook and Flickr, I have been inundated with questions and comments about how I achieved these results.

These have come both from close friends and acquaintances along with random SharePoint people, it seems photography is much more interesting than SharePoint (duh! obviously). Incredibly even other tennis photographers have shown interest, sadly at this point no pro player reps have been in touch (go figure!). Folks have said, “you should blog that stuff”.

So here are the details for those interested, my tips and tricks for shooting at Roland Garros. For the English and American readers that’s the “French Open”, a professional tennis tournament which takes place in late May early June each year on the outskirts of Paris.

It’s somewhat interesting how little information is out there about shooting tennis. There are lots of general shooting sports tutorials and the like, including for beginners an excellent, very recent video class over at kelbytraining.com. However most of these will focus very much on American sports, and mostly team sports at that. Whilst the vast majority of what they talk about remains relevant, there are some additional specific considerations for tennis which I will touch on as I go through my experiences. Also I will cover some of the more practical aspects which are certainly as important as camera settings and the like.

I’m splitting this into a couple of parts as it’s reasonably long. This part will talk about preparation and equipment. The 2nd part will talk about logistics, the grounds and stadia, making shots and post production. It’s a bit rough and ready, I certainly haven’t spent as much time or asked anyone else to test/review as I would do with my SharePoint posts. :)


Practice and Preparation

I’m a trained photographer, I studied it at college as I was in visual arts all the way through school. Fundamentals of good shots such as how to hold the camera properly, settings and “technique” I have down cold. But none of that really matters if you don’t practice and prepare.

I am not saying you don’t need to learn these things; certainly when shooting pro tennis you ideally want to be in manual mode, or at the very least shutter priority mode. If you have a DSLR and shoot in auto or program mode, you likely won’t make many decent images at an event like this, except for a few based entirely on luck. And if you don’t know how to hold your camera properly, it doesn’t matter which settings you use, you will likely be disappointed with the results.

However preparation for the event will make a world of difference to your results. Having an idea of what shots you are looking for, and what to avoid are key. How you will approach shooting the action is something you can practice in front of the television. Just like Moose says, you can learn to shoot wildlife in your backyard so that when you are on safari you are not learning those things and just concentrating on getting great shots. Same here, shooting a small event or even your friends playing will help enormously. You definitely don’t want to be fiddling with your camera settings when the best players on the planet are in action. You need to have that stuff down before the main event.

I didn’t shoot a local event (there are none) or shoot my friends playing. But I have shot tennis before and I did practice camera control with televised tennis so that these things became muscle memory for the most part.

Think about how much you’ve invested in equipment. If you can’t spend a few hours practicing because you are “too busy” or “don’t have enough time” then again, you will be disappointed with the results. No-one is too busy or doesn’t have enough time. It’s your choice when and what you spend your time on. We all wish there were more hours in the day, but if you manage your time badly it doesn’t matter if there are 48 hours in a day, there still won’t be enough.

As I mentioned in the intro there is a general dearth of good advice for shooting tennis. In many respects tennis is the worst thing to shoot, as it effectively presents all of the main challenges to good shots in combination. You need a fast shutter speed and subject isolation, you need to avoid distracting environmental elements (advertising hoardings, spectators) and distracting on-court elements (line judges, ball kids), and you are often at the wrong angle to the action. Add to that restrictions on the gear you can take with you, and things like the weather, and you are all set for a disappointing week (and missing some great tennis as a regular spectator) if you aren’t ready for business.

However there is some great stuff out there. Specifically over on youtube is a two hour long lecture by commercial sports snapper Chris Nicholson who talks thru his experiences shooting the US Open. This is one of the excellent B&H Photo events. Furthermore Chris has a great book called Photographing Tennis. These are both exceptional resources, highly recommended.

Along with this, check out the sheer mountain of tennis action shots in the newspapers and magazines, all of which are one click away in Bing or Google image search, or on any of the wire services’ or stock libraries’ web sites. Checking out others’ work is a fundamental way to get ideas and into a mind-set of what you want to shoot.

Perhaps rather obviously then the key tip here, is just like in any other endeavour, preparation is paramount.



Equipment

No modern article on photography would be complete without a section dedicated to equipment! It’s just the way it is. Photographers seem to be intrinsically addicted to gear, and this has certainly increased during my lifetime as the move from film to digital has become effectively complete. Even the most right brained photographer can talk your ear off on the subject. Tip: unless you are really interested, never ask a photographer what equipment they use and why. :)

The good news is you can take great shots as a spectator at an outdoor tennis event with what would be generally considered consumer grade equipment. With the exception of my camera body, pretty much all of my shots were taken with that stuff. If you know how to use them well, consumer grade bodies and lenses are perfect for the event. This is especially important because aside from the obvious cost consideration, fundamental logistics such as carrying your gear and getting the gear inside the event effectively rule out “pro spec” for the spectator.

A good shooter can take great shots with the crappiest equipment. Give Rafael Nadal a wooden racquet and he will still kick your ass. The fundamental fact is that the better equipment makes it *easier*, *faster* and more convenient to take a greater percentage of keepers. As for lenses, the better the glass the more likelihood of the results you are looking for. But a great lens in the hands of a bad photographer doesn’t make a good shot. Thus don’t get too hung up on gear, it’s likely what you have will be fine, or you can invest a little to fill a gap. No big spend necessary. Spending time (above) and good technique are the best investments you can make.

Camera Body

In almost all cases, you will be taking the camera you have. Unless you are rich you don’t have the luxury of choosing the body to take to the event. The good news is pretty much any recent DSLR body will do the job. They will if used correctly all deliver stunning results. You can’t really buy a “bad” DSLR these days. Again the “better” all depends on your level and other factors such as ergonomics.

The only thing here really is that it’s a DSLR. Point and shoots, or anything that doesn’t have a viewfinder, are in my opinion bad candidates. They likely don’t have the focal length needed. Sure, a Nikon 1 series with the uber lens may work, but really a DSLR is what you need for this sort of thing. In five years it might be different.

I shot the whole time with a Nikon D800. (Wait, what, a D800? – I can hear the fotoistas chuckling already.) Why? Simple: it’s the only DSLR camera I own. It’s not for sports. Its sweet spot is portraits and landscapes, where resolution is important and burst rate is not; it’s a camera often used on a tripod. It’s almost the anti-sports camera. All over fotoista forums you will read about this camera being slagged off as terrible for sports. Take a look at my shots and judge for yourself if you think you can’t make good tennis images with a D800.

If you are thinking of a new body and wish to shoot sports, there are a couple of characteristics you need to consider.

Ergonomics

This is key for any photography, but especially important for sports. Does it feel good in your hands? Are you comfortable with it? Can you access key functions easily and quickly? Does it have or can you customise it so you can use a back focus button? Does it have or can you add a battery grip for portrait shooting and/or increased frame rate?

Frame Rate (aka burst speed, aka frames per second)

This actually isn’t that important, but it’s something most would consider a prerequisite. It all depends on how you shoot. For example the Nikon D800 has a “poor” max FPS of 4 in full frame mode; the most it can do is 6 FPS with a battery grip and in DX mode. A top end Nikon can do 11 FPS, a top end Canon 14 FPS. That’s what the pros use. But the pros don’t have to worry about making the spectator next to them think they have an automatic weapon (that’s what 14 FPS sounds like). And besides, pros don’t actually pay for their own cameras, or if they do it’s like us SharePoint people writing off our 32GB laptops as a business expense.

So 4 FPS is slow, but 6 FPS is actually good enough. Would I be better off with a D300S, which can do 8? Sure. But it’s not that big of a deal. Here’s the real deal. Shooting in burst mode is one approach, some call it “spray and pray”, the idea being that you just shoot a ton of images in the hope that you get some good ones. This works, but it also means you are spending a lot more time in front of your computer after the shoot sorting the wheat from the chaff. And unless you are going for a specific effect it becomes more about luck than skill. Remember photography is an art and a science. I’m not saying I never use burst mode, but 80% of my images from Roland Garros are from single clicks.

That’s why poor FPS is not a deal breaker. I’ll talk more about other factors that allow you to make single click images later. If you still think that you need high FPS to make decent tennis images, ask yourself how come those pictures of Ivan Lendl or Martina Navratilova are so sharp, or for that matter the shots of Borg and Connors, or Billie Jean King. I think you see my point: the oldtimers didn’t have high FPS (they would have called 5 FPS fast anyway), and with 14 FPS a roll of film is done pretty quick. BTW if you are shooting tennis on film, all the best, but you should probably go somewhere else for tips. :)
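If you like numbers, the film-era arithmetic is easy to sketch. A minimal Python snippet (the helper name is mine, purely for illustration):

```python
def seconds_per_roll(fps, frames=36):
    """How long a classic 36-exposure roll of film lasts in continuous burst."""
    return frames / fps

for fps in (4, 6, 11, 14):
    print(f"{fps:>2} FPS empties a roll in {seconds_per_roll(fps):.1f} seconds")
```

At 14 FPS you would burn a whole roll in well under three seconds, which is the point: the film generation had to make single clicks count.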

What is more important in this regard is buffer speed and shutter lag. As the camera takes shots it has to write them to the memory card, and if you do use spray and pray mode this can become an issue. You need fast memory cards, and also consider whether you are shooting in RAW or JPG, or both. If you have a high FPS camera but put a cheapo card in it, you won’t get decent FPS! Shutter lag can also be a problem: there is a delay between you pressing the shutter button and the shutter actually firing. Shorter is better, but the exact figure doesn’t really matter (e.g. a D70 has a 124ms lag, a D800 42ms); you just need to be used to it, so your timing is in line, in order to make the best shots.

Autofocus speed

Better cameras tend to have faster autofocus. This of course is actually a combination of camera and lens, but a slow autofocusing camera such as D70 or similar will be very frustrating. This is another area where the D800 is excellent.

Sensor Size

Full frame or crop sensor (APS-C)? Does it matter? Well, yes and no. I used the D800, which is a full frame camera. Full frame cameras are usually better at high ISO, and this is still important even when shooting outdoors during the day (I’ll cover that later). However, as a spectator, having a crop sensor is actually an advantage. It’s likely you don’t have the expensive top end lens, so getting closer to the action is easier. Sensor size technically alters field of view, not focal length, but in this context it can be of great use. For example on a DX camera (Nikon speak for APS-C) a 300mm focal length becomes effectively 450mm. You can get closer to the action without having to spring for the really expensive glass.

The good news here is if you have a full frame camera, it likely has a crop mode. The D800 can shoot in DX mode, and this is what I used often (more details later). Crop mode images are also smaller so write to the card faster and download to your computer faster.
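The crop-factor arithmetic is trivial but worth internalising. A quick Python sketch (the function name is mine, just for illustration; 1.5 is the Nikon DX factor, Canon APS-C is roughly 1.6):

```python
def effective_focal_length(focal_mm, crop_factor=1.5):
    """Field-of-view equivalent focal length on a crop sensor (Nikon DX = 1.5x)."""
    return focal_mm * crop_factor

print(effective_focal_length(300))        # a 300mm zoom frames like a 450mm in DX
print(effective_focal_length(200, 1.6))   # 200mm on Canon APS-C frames like a 320mm
```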

So that’s about it. You don’t need top end, you don’t need mega FPS. You can get good results with pretty much any modern DSLR. The real thing is knowing your camera and its controls well. I’ll talk more about settings later but there is one more thing that’s important about your camera as a spectator at such an event.

Paris, late spring, what could possibly go wrong? Yup you guessed it, the weather.

When it’s nice it’s stunningly beautiful. When it’s raining and you aren’t in the city taking shots of the street life, it’s awful! We were there for three days and for one of them it was raining. And sometimes when it comes down, it really comes down. Now obviously when it’s chucking it down you aren’t shooting and your gear is away in its bag. But they play in light drizzle. Having a well weather sealed camera is an advantage, as is one that is well made. You don’t want to be covering up your camera as soon as it starts spitting, especially if the action is still happening. You also want to be confident that minor knocks and bangs won’t hurt or break your camera. Having a D800 is a benefit here, as it’s built like a brick house; a D300S would similarly be good. The lesser bodies will be OK in this regard, but you have to be that bit more careful, both in the wet and around the grounds.

Incidentally, when the rain comes, usually so does the dark. Many say that when shooting outdoors you don’t need good high ISO performance. Not true if you live in north west Europe. :) Remember, for tennis we need a fast shutter speed. If using consumer grade lenses, that means we need good ISO performance at least up to ISO 2000, another area where the D800 excels. These pros will often play on in murky conditions far better than a thousand dollar camera can shoot in.
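To see why murky light pushes the ISO up so fast, the exposure arithmetic can be sketched with the standard APEX relation (2^EV = N²/t at ISO 100). The helper below is mine, and the scene EV figures are rough rules of thumb, not measurements:

```python
import math

def required_iso(aperture, shutter_s, scene_ev100):
    """ISO needed to expose a scene of brightness scene_ev100 (EV at ISO 100)
    given aperture N and shutter time t, via the APEX relation 2^EV = N^2 / t."""
    settings_ev = math.log2(aperture ** 2 / shutter_s)
    return 100 * 2 ** (settings_ev - scene_ev100)

# f/5.6 at 1/1000s in bright overcast (roughly EV 12): ISO in the mid hundreds
print(round(required_iso(5.6, 1 / 1000, 12)))
# same settings under dark rain clouds (roughly EV 10): ISO north of 3000
print(round(required_iso(5.6, 1 / 1000, 10)))
```

Every lost stop of light doubles the required ISO, so a dim, drizzly afternoon blows straight past ISO 2000 on a slow consumer zoom.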

Lenses

This will be the area that causes the most procrastination. Of course it depends on what you want to shoot, let’s cover the players in action first.

Our objective here is to get as close to the action as possible and fill the frame.

If you want a full body shot of a tennis pro with arms, legs and racquet extended, from the stands at a pro event, that means 300mm; 450mm would be even better. You probably also want a zoom rather than a prime. Depending upon your position in the stands and the angle of view, you will be zooming as the player moves whilst retaining focus (or trying to).


Roger Federer. 1/1250s at f/5.6 ISO 320 300mm Full Frame using a Nikkor 28-300. No post production crop. This is why we need 300mm. This image will be used again as we cover other shooting aspects later in the article.

So that means 300mm, or a zoom that can go to 300mm. That’s a problem, because we also want a fast shutter speed, which means you need a wide aperture. A Nikon 300mm f/2.8 costs $6,000. It weighs 2.9kg. It’s expensive, it’s heavy, and not a lens you own. But here’s the thing, none of that matters, because you won’t be able to get it into the event anyway.

So what can we do? Well, we have to suck it up and use a “consumer” zoom. Nikon has two – a 70-300 f/4.5-5.6 and a 28-300 f/3.5-5.6. Both of these are full frame lenses so will work on both full frame and crop sensor cameras; on a crop sensor (or in crop mode) they are effectively 450mm at f/5.6. Canon has equivalents. They are smaller, lighter and cheaper. I’ve used both the Nikons above for tennis, but chose the 28-300 for this trip as I wanted to be able to go wider for other shots and also have one lens for walking around the city if we chose to do that. The downside of course is the slow aperture: it means you have to raise your ISO and can’t blur backgrounds as well. This is the compromise we have to accept if we want to shoot without a credential and at a reasonable cost.

For DX shooters there are other options, all zooms that reach 300mm and are smaller, lighter and less expensive. Whichever one you get, it does 300mm at f/5.6. You are good to go.

What about the 70-200, with or without a teleconverter, I hear you ask. Well, that’s $3,000, and not kit I owned at the time. Will I try it in the future? Sure. But the reality is the 70-200 is a big heavy professional lens, and it still doesn’t reach 450mm without losing stops.

Another compromise is autofocus speed. I agonised over the 70-300 versus the 28-300, knowing that neither is anywhere close to a 70-200. In the end it wasn’t an issue; other technique more than makes up for it, and the 28-300 is very good regardless. One key thing that can slow down autofocus speed, or rather the ability to grab focus, is VR (IS in Canon speak). Shooting at the shutter speed necessary for tennis you don’t care about VR, so it’s turned off. Not an issue. Forget about VR for shooting tennis action shots.

I also had with me a wide angle lens (a 16-35 f/4 VR); these are useful for shooting the environment (the stadia, the grounds, people etc). I didn’t use it as much as I would have liked, mainly due to the poor weather, but make sure you have something like this or a zoom that can go reasonably wide. Remember you can always use your feet, so a fixed 35mm will often be all you need for these shots.

I also had a 24-120 f/4. I used this quite a bit on the outside courts and when the weather got poor. The extra stop of light certainly helped on the outside courts – in DX mode this is effectively a 180mm f/4. And we can always crop in post-production, although that should be the exception rather than the rule (more on that later).

Some of my shots were with a 24-70 f/2.8. Great for environmental or full court action, and worthy of consideration in place of the wide zoom, but for action it will require extreme cropping. I won’t be taking this lens to a pro tennis event again unless I skip the wide angle. I was really just experimenting with it late in the day as the light faded, after I had already made most of my good shots.

Bear in mind that each day I only took three lenses, on the second day (when most of the tennis took place) I made do with two. You won’t be changing often (pros don’t ever change - they have more than one body). The reality is 80% of the time and 80% of the shots I made were with the 28-300. The fewer lenses you have the better.

Teleconverters? Forget about them unless you own pro glass. A Nikon teleconverter doesn’t fit on a 70-300 or a 28-300 anyway; you can’t physically attach one. A 3rd party teleconverter may fit, but even if it does you are losing stops again, so you may be able to reach 420mm (630mm in DX) but now you are at f/8 and your shutter speed is too slow. FORGET about teleconverters unless you own pro glass.

Hoods and filters. You might be tempted to use a hood to protect against glare and to protect the front element. You can, but be aware that the hood may annoy the spectator next to you, and that’s not cool. They paid for their ticket just the same as you. If they can’t see the tennis because of your lens hood, remove it. The zoom will be bad enough as it is. When you sit down as a photographer and the guy in front of you is tall, it pisses you off. Guess what: when a person who wants to watch the tennis sits down next to a guy with big glass, they have exactly the same reaction.

As for filters, if you use a UV or NC filter to protect your front element that’s your call. I won’t get into the debate here, it won’t make any difference one way or another as far as shooting tennis is concerned.

Circular polarizers may be useful if the sun is really strong, to prevent the court washing out. But for clay courts this won’t be an issue, and the shadows on the clay are actually a cool part of the image. CPs are really only useful on hard courts like at the Australian or US Opens. They can be of use around the grounds, but on this trip I never used one due to the weather.

Other stuff you need.

Memory cards. Simple: you need large, high speed ones. Don’t skimp on cards. It’s easy – Lexar 1000x CompactFlash, or Lexar or SanDisk 95x SD cards. I used 32GB; smaller is no problem but make sure you have plenty, formatted and ready to go. Many people will tell you to shoot in JPEG to increase your burst/buffer rate. I shot in RAW plus JPG Fine the whole time, for reasons I’ll get into later, and it wasn’t a problem at any time. Just keep your eye on the frames remaining on the camera, and change cards when you can, during a changeover or other break. Don’t let the camera get to zero before changing! Cameras lie about how many shots you have left on the card!

Batteries: make sure you have a charged backup battery for your camera. If it’s cold, put the spare in your jeans pocket or otherwise close to your body so it doesn’t lose its charge. You don’t want to be the sucker who ran out of battery. How do you avoid that?

Use a battery grip. I shot virtually the whole time with a battery grip attached to the camera, set to use the grip batteries first, then the camera battery. This is good for other reasons. With the grip you can shoot in portrait mode (vertically) much better, as you have additional focus and shutter controls. Plus, on a D800 and other Nikon bodies (although not the D600), it increases your FPS in DX mode (to 6 in my case). This is another reason to buy a “better” body: does it have an optional battery grip? Another benefit of the grip is extra support if you shoot with your left eye to the viewfinder (the body rests in the nook of your shoulder and chest).

With the grip filled with lithium AA batteries I shot three days of tennis and thousands of images without ever going to the camera battery itself. I did shoot some without the grip, and I did charge the camera batteries overnight, but that was just for safety. Do have enough AA spares if you use a grip! I had one set in the grip and another set of spares, and they are now both done. If it hadn’t rained for almost a day I’d have been eating into the battery in the camera and losing FPS.

If you do care about FPS, remember to keep your eye on the battery indicator: as soon as the AAs are done, you will be back at the camera’s native FPS. Plan to change the batteries during a changeover or between matches. Nikons have an excellent info screen on the back which shows battery and FPS mode; so do Canons. Use it!

Lithium AAs are what you need. There are fancy rechargeable ones, especially for flashes and battery grips, but they are expensive if you are not a pro photographer. Just get lithium. No tree huggers here. Remember also to tell the camera you are using lithium in the grip, otherwise it will report the charge incorrectly and also not up the FPS.

And don’t forget the best way to avoid running out of battery – shoot less! :)

Strap – you need one, but forget about the rapid sling type thing; you need a regular neck strap. You won’t be walking around needing a sling strap. Out and about in downtown Paris, sure – at Roland Garros, no, just don’t do it!

Most of the time you will be sat in the stands with the camera strap around your neck. It’s all you need. Don’t use the crap one that comes with your camera. It’s crap, it’s uncomfortable, and it has “steal me” or “I’m a douchebag showing off that my camera is great” written in large yellow or red letters on it. Get a $40 replacement. You need the strap for when you might have to stand briefly, either to stretch your legs at the changeover or to let another spectator pass.

When you are walking around a pro tennis event, your camera should be in your bag. Don’t be a muppet! There are 30 thousand people each day in the first week of Roland Garros, and they’ve all come to enjoy the tennis. You do not want to be the guy bumping into folks with your gear, and you want your gear to be safe. Unless you are shooting, your camera should be in your bag.

Monopod – forget it. You can’t use one, and you don’t need one. Pros use them because a $6000 lens is very heavy. You are not a pro. Forget monopods.

But you do need a bag. A good one: one that allows you to get your gear in and out quickly, can accommodate all your gear, and preferably is not too large. I used the excellent Think Tank Retrospective 7. Other brands have similar bags, but this is definitely the one I recommend. Perfect. It doesn’t look much like a camera bag, is roomy and is quick to access. It’s also comfortable, even with quite some weight in it after a long day, and it’s not too large to be bothersome to you or other spectators. It fits neatly under your seat. The only downside, again, is the weather. Be prepared: your bag should have a rain cover. Also make sure you have something to place on the wet concrete beneath the seat or around you. A poncho or a large food freezer bag does the trick.

And finally (at last) the single best accessory you can have at an event like this is the superb Hoodman Loupe. This is a loupe for your LCD which allows you to see that thing in bright sunlight (yes, despite the rain a lot of the time it was beautiful weather). It also magnifies a little bit, which makes it very easy to check if you are getting sharp images.

I have my camera set so that when I hit the centre button on the multi selector it will zoom the display to 100% on the focus point. I can also scroll around and zoom out a little. This with the loupe allows me to check focus very fast, if my shots are blurred I can make the necessary adjustments. It’s also useful to let others see your shots. Camera LCDs are a great innovation, but in bright sunlight they are worthless! Also at 3” across, everything looks sharp even when it’s not.

Get the Hoodman Loupe! It’s easily the best $100 you can spend. I rate this as the single best investment I’ve made in photo gear in the last two years. I also use their replacement eyepiece (at $30) which better fits your eye and blocks stray light. The loupe you can use with any camera, the eyepieces only work with cameras that allow you to change them.

Phew, that’s a lot of gear talk, but basically all you need is a camera, one lens, the basics taken care of (card, battery and strap), a bag, and if you are a “pro” :) an LCD loupe.

Coming in part two will be coverage of logistics, the grounds and stadia, making shots and post production.

SharePoint Evolution Conference 2013

posted @ Sunday, March 31, 2013 3:03 PM | Feedback (0)

In just over two weeks time we are back in London, for the fifth year, with the SharePoint Evolution Conference 2013. Simply the best SharePoint event outside of North America, with the best speakers, the best content, and the best entertainment, this year’s conference promises to live up to past events.

It will be a little less stressful this year, returning to a regular content schedule with a few surprises thrown in! Aside from mature content on both SharePoint 2013 and SharePoint 2010, the conference features four of the five MCAs, twelve MCMs and a boatload of MVPs. It promises to be a lot of fun, and I encourage you to think about attending this, the best SharePoint conference of 2013.

I’ll be presenting the following sessions, which will all feature brand new content exclusively for the Evolution conference:

IT104: Request Management with SharePoint 2013
SharePoint 2013 includes a compelling new Request Management service, which gives much greater control over how incoming requests are processed. Learn about how Request Management enables request routing to ensure server health, request prioritization, and the implementation of server machine pools for large deployments. Understand the key components of the Request Manager and RM Routing, RM Pools and Execution Groups. This session will demonstrate how to configure Request Management for a number of key deployment scenarios.

IT110: Understanding Service Application Federation for SharePoint 2013 (with Bill Baer)
SharePoint 2013 provides architects with a compelling model for service publishing and federation, opening up exciting new approaches to farm design. This session will cover what’s new in SharePoint 2013 and how Service Application Federation plays out in the real world, based upon early enterprise adopters. Learn how to approach the design of an enterprise services farm, provide true scalability, and discover the constraints for each service application which can be published, including global deployment considerations. Related aspects such as security, high availability and performance will also be covered. This session will be split 60/40 between lecture and demonstrations.

IT115: Rational Guide to User Profile Synchronization in SharePoint Server 2013
Discover the changes and new capabilities in the foundational service for Social deployments of SharePoint Server 2013 and get the real deal on configuring User Profile Synchronization in this demo and best practices heavy session. This session will cover the architecture of the User Profile Service Application, the new AD Direct Mode, and provide a walkthrough of the configuration requirements and setup. Understand the key architectural considerations in terms of high availability, scalability and geographic deployments. Also covered will be general UPA related best practices in terms of synchronization, policy and privacy, and leveraging social features inside the enterprise. This session will be split 60/40 between demonstrations and lecture.


I look forward to seeing you in London at the event, and remember, Tuesday night is NOT to be missed! :)