Does Microsoft Support Multiple ADFS Instances on One AD Domain?

Recently I was working on a SharePoint project for a client with offices throughout the globe, with key offices in the UK, North America and Australia.

They had one SharePoint environment in each region.

For the Australian SharePoint farm we wanted to start authenticating users via ADFS.

Our client’s only ADFS environment was in the UK, though, and one of my key concerns was how long it would take to authenticate users against the UK environment given the high network latency – and the fact that pretty much nothing can be done about that latency.

One option that I wanted to explore was standing up a new ADFS environment locally within Australia; however, there was some doubt as to whether Microsoft actually supported having more than one ADFS environment connected to the same AD DS domain.

I checked with my colleagues and they confirmed that it would work fine, but the key question remained… does Microsoft support this scenario?

So I contacted Microsoft Support, and as expected (though to my pleasure, I now had something official) this was the response:

“Yes, Microsoft supports multiple ADFS farms in one domain in different sites. A multiple ADFS farm scenario will work only if the environment meets the following conditions:
1) The service names for the ADFS farms must be different for each site (location).
2) You cannot federate the same application with two farms in the same domain, i.e. the ADFS farms should be configured with different applications.
3) You can have only one ADFS farm in a site (location).”
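
For illustration, a minimal sketch of standing up the second (Australian) farm with the Windows Server 2012 R2 ADFS cmdlets might look like the following – the service name, display name and credential are all hypothetical, and the key point is simply that the federation service name differs from the UK farm’s:

# Hypothetical sketch only: stand up a second ADFS farm in the same AD DS domain.
# The federation service name must differ from the existing (UK) farm's name.
$cred = Get-Credential -Message "ADFS service account"
$thumbprint = (Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like "*adfs-au.contoso.com*" }).Thumbprint
Install-AdfsFarm `
    -FederationServiceName "adfs-au.contoso.com" `
    -FederationServiceDisplayName "Contoso Australia" `
    -CertificateThumbprint $thumbprint `
    -ServiceAccountCredential $cred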

How To Catch Key Security Issues Before Your Application Is Built

When is the best time to catch security issues with your application? Before you’ve built it of course!

If you’re responsible for an application from a technical design perspective, the last thing you want is to get all the way into the security testing process, find out that there is a major issue with your application, and have to redesign the whole thing!

Well, now your job is even easier, thanks to the Microsoft Threat Modeling Tool and the newly released 2014 version.

This tool lets you map out your application’s flows, and it will then automatically analyze them for security issues.

To be frank, I’m not a massive fan of the UI appearance, but I am certainly willing to put up with it because it definitely does a quality job.

Now, whilst I would like this to be a set-and-forget type of deal – i.e. just put the design in, make sure it says there are no issues and then give it the green light – in this interesting world we live in, the bad guys are always evolving. So I’ll still continue to manually review designs and use the tool as an initial quick up-front check.

Go grab the tool here now – it’s free!

[Screenshot: Microsoft Threat Modeling Tool]

Now You Too Can Break Into Security Cameras… Just Like In The Movies

What do the following movies have in common?

  • Speed
  • Entrapment
  • Ocean’s 11

Well, I think they were pretty enjoyable movies, and classic 90s Sandra Bullock and Catherine Zeta-Jones were certainly easy on the eye. The key scene they have in common, though, is this: security cameras being compromised – e.g. the ‘bad guys’ breaking into a camera and then having footage play on a loop so that they can have their way.

Is this really possible though? How hard is it to compromise a security camera? Well, surprisingly and disconcertingly, it turns out that cameras from a number of vendors are trivial to break into.

It seems the vendors of these cameras haven’t fully thought through the implications of their products being connected to the internet 24×7, and have therefore not placed much emphasis on software security during their quality control processes. The more cynical view (and perhaps the more accurate one) is that in an effort to make a quick buck they don’t really care. Admittedly, if an attacker is determined and well funded enough, they will find a way to break into any system online. However, what a recent security research whitepaper reveals is that these cameras suffer from vulnerabilities that are trivial to exploit, such as:

  • Blind evaluation of arbitrary code
  • Broken access controls

If more cutting-edge exploits were required, then fine. But these?? Come on. These are not challenging for a script kiddie to pull off. I think we all expect a lot better security around a product that has the word ‘security’ or ‘surveillance’ in its name.

If you’d like the link to the security paper, please just contact me by posting a comment.

Is It Possible for an Attacker To Break Out of a VM?

I love VMs – they make life so much easier in many regards, from development and spiking new technologies through to providing elastic production solutions.

Of course, they do have their frustrations – such as getting performance right and the occasional corrupt VM.

However, let’s consider a question of security – is it possible to break out of a VM and get direct access to the host? Well, if you’ve been following the popular security blogs then you’ll know that yes – it has at least been possible in the past. How has it been done, though? Well, there’s a great paper here. It is slightly old (2009), though it demonstrates an interesting technique.

Also, back in June 2012 there was a vulnerability warning published by US-CERT that you can read about here.

If you’re security paranoid/skeptical like me, then knowing that nothing is ever 100% secure, you’d probably expect that there are still new ways to do it. However, I haven’t come across any recent papers that illustrate new VM escape attacks.

So what does this really show and what is the point of posting this?

Well, it highlights the need to practice defense in depth – i.e. it’s important to avoid assuming that an attacker can never break out of a VM, and that it’s therefore OK to be blasé about additional security mechanisms.

The Content Database Support and Remote BLOB Storage Myth

There’s a popular myth that keeps popping up that I wanted to post about.

Why is it so popular?
Well, because it seems intuitive if you aren’t working with SharePoint on a regular basis. If you are, then I’m sure you don’t think this… and if you did, well, shortly you’ll know the truth.

So here’s the myth
“We don’t need to split our content across separate content databases because if we need more than 200GB support for each database we will [1] move subsites around to different site collections in different databases or [2] use remote blob storage and put it all on file shares… then we’ll have a very small content database size.”

Why is this a myth?
Let’s address the second part of the statement first – “[2] use remote blob storage and put it all on file shares… then we’ll have a very small content database size”. This is a myth because the content database will still not be supported by Microsoft. The reason is that the actual database size itself PLUS the content offloaded and stored on file shares both count toward the 200GB limit (or 4TB if you meet additional requirements). This means that even if you had a 1GB database and 225GB offloaded onto file shares for this content database, you’re actually at 226GB and therefore not supported if you do not meet the additional requirements. If you do meet those requirements and have a 1GB database with 4.5TB offloaded onto file shares for a specific content database, you’re at 5.5TB of content and, again, not supported.
From http://technet.microsoft.com/en-us/library/cc262787.aspx: “If you are using Remote BLOB Storage (RBS), the total volume of remote BLOB storage and metadata in the content database must not exceed this limit.”
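
If you want to sanity-check where a database stands, a rough sketch along these lines totals the database size plus the offloaded BLOBs – the database name and file share path are hypothetical, and the BLOB location depends on how your RBS provider is configured:

# Rough sketch: total a content database's size plus its RBS BLOBs on the file share
$db = Get-SPContentDatabase "WSS_Content_Intranet"        # hypothetical database name
$dbGB = $db.DiskSizeRequired / 1GB
$blobGB = (Get-ChildItem "\\fileserver\rbs\WSS_Content_Intranet" -Recurse -File |
    Measure-Object -Property Length -Sum).Sum / 1GB
"Total content size: {0:N1} GB (general limit: 200 GB)" -f ($dbGB + $blobGB)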

Now let’s address the first part of the statement – “we will [1] move subsites around to different site collections in different databases”. This also is a myth, because although it is doable, that doesn’t make it a good idea. Do you remember that old Chris Rock line… “You can drive your car with your feet if you want to, but that don’t make it a good, [expletive] idea”? Yes? Well, it’s the same here. So why isn’t it a good idea? Because subsites are contained within a site collection, and there is a close relationship between a site collection and its subsites. Objects such as site columns and content types are associated with and shared by subsites. If you want to move an individual subsite, then you have to consider how you are going to move these shared objects as well – and this is where it gets tricky, because a number of objects are difficult to move – for example, workflow history and approval field values. Even if you investigate using third-party tools to move your subsites, you will likely encounter issues.
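
To make this concrete, the out-of-the-box route is Export-SPWeb/Import-SPWeb, and even a straightforward sketch like the one below (the URLs and paths are hypothetical) won’t carry across objects such as workflow history:

# Sketch only: export a subsite and import it into a site collection in another content database.
# Note: objects such as workflow history and approval field values do not survive the trip.
Export-SPWeb -Identity "http://intranet/finance/reports" `
    -Path "D:\Exports\reports.cmp" -IncludeVersions All -IncludeUserSecurity
Import-SPWeb -Identity "http://archive/sites/finance" `
    -Path "D:\Exports\reports.cmp" -IncludeUserSecurity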

Ok I get it, but what do you recommend… what is the fix?
Essentially, the architecture of the SharePoint environment should be considered carefully up front, as much as possible, in conjunction with Microsoft’s published software boundaries and limits.

You may be thinking, well, surely Microsoft should just raise their support limits even higher, or that it should be possible to move subsites around with full fidelity. I understand this point and was guilty of thinking it myself when I first started working with SharePoint. As time went on, though, I asked myself… is there any other product that offers all of the functionality SharePoint does and has comparable supportability limits? Frankly, I couldn’t think of any. Besides, given that there is full transparency around the supportability limits and a wealth of information on TechNet to make it clear (at least to IT pros) what to do and what not to do, I’m happy with this, at least for now.

Is It Possible to Integrate Chris21 with SharePoint?

Chris21 is a surprisingly popular system for a large number of clients I work with, so I wanted to comment on a solution that may help unlock value for your company if you are using it.

What I’ve found working with a number of companies is that their Active Directory is often out of date (surprise, surprise), yet Chris21 is – and must be – up to date, because they are using it as their payroll system.

Wouldn’t it be great if you could automatically update Active Directory based on information in Chris21? For example, the Manager and Contact Details of each employee might finally be accurate!

The flow-on effect of this is that you can then have accurate SharePoint profiles. This can be a good approach when you’re initially setting up your SharePoint environment, because you don’t need everyone to update their profiles manually. Perhaps not an issue in a very small company, but for larger ones this is important. Then, if you want to empower your staff and need to keep fields in Chris21 up to date easily, you can let users update their user profile fields and sync the data back into Chris21! This is pretty cool if you ask me.
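
While I can’t share the real solution (more on that below), a purely illustrative sketch of the AD side of this – assuming a hypothetical CSV export from Chris21 with made-up column names – might look something like:

# Illustrative sketch only: update AD from a hypothetical Chris21 export (column names are invented)
Import-Module ActiveDirectory
Import-Csv "D:\Exports\chris21-employees.csv" | ForEach-Object {
    $manager = Get-ADUser -Filter "EmployeeID -eq '$($_.ManagerId)'"
    Set-ADUser -Identity $_.SamAccountName `
        -Manager $manager `
        -OfficePhone $_.Phone `
        -Title $_.PositionTitle
}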

Now, let’s take this solution a step further – what if you could stop sending those email or paper-based Leave Request forms around and leverage a web-based/electronic form? What if you already have a web-based Leave Request form, but it can’t really do much because the data just sits inside SharePoint and that’s it? How about, instead of the form just sitting there, you leverage it so that once a manager has approved a Leave Request, it is sent to Chris21, where the leave balance is automatically adjusted… wouldn’t that save a lot of time?

Well, the good news is that all of this actually is possible. A while back I was fortunate enough to work with one of the few people I’ve met who had a good handle on both SharePoint and Chris21, and we implemented a solution to do this very successfully. As much as I would like to comment on the specifics, I’m unable to, because the detailed solution is confidential IP owned by the company I work for. However, I thought this post might still have value in at least mentioning what is possible in terms of the integration, because looking around the web now, there is very sparse content on this.

Essentially, the high-level approach is that you can achieve the integration through the Chris21 web services – which do require additional licensing from Frontier. The Chris21 web services only got us so far, though, so ultimately we ended up writing additional custom code, which worked solidly, though it was quite a headache to write and might well have taken double or triple the time/budget had it not been for the expertise of a colleague who had previously worked on a similar project.

Looking back on dollars invested vs. returned – does it seem worth it? For the companies that I’ve worked with, yes – namely because they were going to be ‘stuck’ with Chris21 for the next few years, they had staff members manually entering and processing leave requests in Chris21, and AD was way, way out of date. Could it be worth it for you? Well, if you have staff manually entering data into Chris21, or use it as your primary source of truth for employee details, then it may very well be.

InfoPath Retired & Sneak Peek of New MS Forms Tech To Be Given in March

Well, it’s official. Microsoft have just announced the retirement of InfoPath, and that they have been working on a new forms technology, of which a sneak peek will be given to the public at the SharePoint Conference in March.

For those in the know, or those developing with InfoPath on a regular basis, this was not really a surprise. Ever since the 2007 release of InfoPath, fewer and fewer investments had been made in the product – in contrast to other Microsoft technologies – causing people to speculate that Microsoft might move in a different direction for its enterprise forms technology.

This different direction is to add new forms capabilities into SharePoint, Word and Access.

So where to next then?

The official position of Microsoft is to keep creating InfoPath forms because the product is still supported, and a migration plan will be announced.

In my opinion this makes sense for companies using InfoPath for simple forms that are not heavily used in workflows.

For companies that have more advanced form requirements, such as those used heavily in workflows, you may want to consider a technology such as K2 Forms or Nintex Forms.

SharePoint 2013: Search Suggestions Not Working After Configuration

A simple trap to avoid when configuring SP2013 search: once you’ve set up search suggestions, they won’t automatically show when you perform a search.

The reason for this is that behind the scenes there is a timer job that performs the processing of the search suggestions you’ve added.

To have the suggestions appear immediately, you need to run this command in PowerShell:

Start-SPTimerJob -Identity "prepare query suggestions"
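
If that identity doesn’t resolve on your farm, a sketch like this can locate the job by display name instead and start it:

Get-SPTimerJob | Where-Object { $_.DisplayName -like "*Query Suggestions*" } | Start-SPTimerJob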

Architectural Mistakes to Avoid #1 – Interstate Stretched Farm

In discussions with IT Pros at client sites, I have a few times seen them start off designing their farm to handle performance requirements for interstate users (e.g. Brisbane, Sydney, Melbourne) by placing the core of the farm in Sydney, with one web front end in Brisbane and another in Melbourne. Essentially, an architecture that looks like this:

[Diagram: SP Architecture - Unsupported]

What’s the challenge here?

The challenge is that, technically, it won’t be supported by Microsoft, because what has essentially been created here is a stretched farm with a packet latency of > 1ms between the WFEs (W), App Servers (A) and SQL Servers (S). So why isn’t an environment like this supported? Because it will cause performance problems: all the internal farm servers need to communicate with one another quickly. To get an idea of how significantly performance will be degraded, the typical statistic quoted is a 50% drop per 1ms of delay – ouch!

Also, I have occasionally heard the claim that, yes, it is possible to ping from Sydney to Melbourne in < 1ms. Well, with the help of Physics 101 we can show that this cannot be the case. Enter Wolfram Alpha to save us some time – let’s check how long it would take a beam of light to travel from Sydney to Melbourne (just in one direction, not bouncing back again):

[Screenshot: Wolfram Alpha light travel time calculation]

2.38ms. How about light sent through fibre? 3.34ms. What does this mean? In the absolute optimal case, it would take at least 3.34ms for data to be sent from Sydney to Melbourne – and in reality longer, because of routing overhead and network congestion. And this is why an interstate stretched farm such as this cannot be supported by Microsoft.
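
You can reproduce the back-of-the-envelope numbers yourself; here’s a quick sketch assuming a straight-line distance of roughly 713 km and a fibre refractive index of about 1.4 (both assumptions):

# Back-of-the-envelope: one-way light travel time, Sydney to Melbourne
$distanceKm = 713                        # approximate straight-line distance (assumption)
$cKmPerMs = 299792 / 1000                # speed of light: ~299.792 km per millisecond
$vacuumMs = $distanceKm / $cKmPerMs      # ~2.38 ms in a vacuum
$fibreMs = $vacuumMs * 1.4               # ~3.3 ms through fibre (refractive index ~1.4, assumption)
"{0:N2} ms in vacuum, {1:N2} ms through fibre" -f $vacuumMs, $fibreMs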

So how do we fix the supportability issue?

To get the farm back into a usable (and supported) state, we basically need to drop the idea of web front ends in Brisbane and Melbourne. All requests for users in Brisbane and Melbourne are then routed through Sydney.

[Diagram: SP Architecture - Supported]

The other solution here, if you really must stretch the farm across data centres (usually for cheap(er) and simple(r) Disaster Recovery), is to ensure that the data centres are in the same city – e.g. Sydney CBD to Mascot. Note that this doesn’t address the original concern, though – improving performance for interstate users.

How do we improve the performance for interstate users in a publishing (e.g. intranet / public website) scenario?

If you’re having performance issues where users in Brisbane and Melbourne are performing heavy reads of content and few writes – e.g. in an intranet scenario – then you’ll want to ensure that you are using the SharePoint publishing caches aggressively. This will give you a dramatic performance boost, because SharePoint won’t constantly be fetching data out of SQL and then trying to render it; users will just get a straight HTML dump of pages.
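
Page output caching is switched on per site collection under Site Collection Administration; the companion BLOB cache for static assets lives in each web application’s web.config. As a sketch (the path is hypothetical, and for farm-wide changes you’d normally prefer SPWebConfigModification over direct edits):

# Sketch: enable the SharePoint BLOB cache by editing a web application's web.config
$webConfig = "C:\inetpub\wwwroot\wss\VirtualDirectories\80\web.config"   # hypothetical path
[xml]$config = Get-Content $webConfig
$blobCache = $config.configuration.SharePoint.BlobCache
$blobCache.SetAttribute("enabled", "true")
$blobCache.SetAttribute("location", "D:\BlobCache\15")   # keep the cache off the system drive
$config.Save($webConfig)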

How do we improve the performance for interstate users in a collaboration scenario?

The most popular solution employed here is WAN optimization (WanOp) devices, such as those made by Riverbed and Silver Peak. These devices can not only cache data/content at each branch (i.e. Brisbane and Melbourne), but also perform compression and de-duplication to minimize the number of bytes actually sent to the client. These capabilities beyond simple caching are required because, in a collaboration scenario, the content is typically changing regularly.

Of course, Windows 7 and Windows 8 client machines also have Microsoft BranchCache built in, which provides similar capabilities to the WanOp devices, though it does have limitations (e.g. it only works with Windows devices). Here are some further details on BranchCache:

  • http://technet.microsoft.com/en-au/network/dd425028.aspx
  • http://www.enterprisenetworkingplanet.com/windows/article.php/3896131/Simplify-Windows-WAN-Optimization-With-BranchCache.htm

Of course, the overall number of servers and their specifications need to be determined during the SharePoint infrastructure design process (e.g. in the above diagram, for a reasonably sized office it would be wise to add at least one more WFE for performance and high availability). However, hopefully I’ve at least shown you one critical design mistake to avoid.

Weird Hack: Play Pong and Snake in Super Mario World!

Recently I came across an article highlighting how someone had exploited in-game objects to turn the classic 90s Super Mario World game into Pong and Snake… incredible! Essentially, the game can be exploited into running arbitrary code.

Here’s a screenshot:

[Screenshot: Super Mario World]

The original article is here, and the YouTube video is embedded there as well. The video is a bit slow to get going, so you may want to jump straight to the 1:30 mark to avoid waiting.