Difference between Refiners Active and Latent

Recently I was trying to set up search for a client and two options for the Refinable property caught my eye:

Refinable

Let me explain what these options actually do in real life.

When you set it to Yes – Latent, the property is marked as refinable but is not active. You would do this at design time, without worrying about performance, since the property is not active yet. (Frankly speaking, I fail to imagine such a situation.)

When you actually move to production you can set the property to Yes – Active, which means it is fully active and available as a refiner.

Leave a comment

Filed under Uncategorized

SharePoint Versions are not records

During a migration planning exercise for a departmental site on SharePoint 2007 to a new site structure in SharePoint 2010, I came across an interesting use of versions to store records.

The user had no version retention options set on some libraries because they didn’t want old versions removed, as particular versions represented a signed agreement at a point in time. They might now be on version 8, for instance, which had amendments, but version 4 was the original signed version agreed with the customer.

A most unusual way of using version handling, and not advisable given that most SharePoint implementations would impose retention of a set number of major and minor versions to control storage space. The recommended approach would be to declare the agreed copy as a record, copy it to a records centre or create a PDF/XPS copy for later referral.
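
If you do go down the record-declaration route, here is a minimal PowerShell sketch of declaring an existing document as an in-place record; the site URL, library and file name are placeholders, and it assumes in-place records management is activated on the site collection.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# In-place records management lives in Microsoft.Office.Policy.dll
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Office.Policy") | Out-Null

# Placeholder site, library and document
$web  = Get-SPWeb "http://intranet.contoso.com/sites/contracts"
$item = $web.Lists["Agreements"].RootFolder.Files["Signed Agreement v4.docx"].Item

# Declare the signed copy as a record so later edits can't quietly replace it
[Microsoft.Office.RecordsManagement.RecordsRepository.Records]::DeclareItemAsRecord($item)

$web.Dispose()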

The moral of the story: if you are discussing version handling with your users, be sure to point out that it exists to let them recall an old version for comparison, or revert to a version before unwanted changes were made, and is not a record-keeping process.

This also reminded me of another record-keeping faux pas, unrelated to SharePoint, where a senior exec had an Outlook rule to copy everything to Deleted Items on arrival; he would then move a message back to the Inbox if it required attention. An IT policy to clear Deleted Items on Outlook exit removed all his emails….

 

Leave a comment

Filed under Uncategorized

Getting the PivotViewer Extension for Microsoft SQL Server Reporting Services working

At the SharePoint Conference this year, I watched … session on DIY BI in which the PivotViewer extension piqued my curiosity. I’d used the PivotViewer in the past and created my own examples of using it to analyse SharePoint User Profiles; the most painful thing about that experience was generating the zoom images and XML, which just seemed too cumbersome. I was hoping the extension from the BI Labs would take out the pain and demonstrate the value of this BI tool.

I decided to install this on my Windows 7 machine, which is running a local copy of SharePoint. Although Windows 7 isn’t listed as a supported operating system, it worked fine, but not without some headaches along the way. In the process I stumbled upon the same problems as others, and some new ones, so I thought I’d share them so you have a better experience.
If you want some more detail on this tool, Kasper de Jonge’s blog on the subject provides further insights.

Firstly, you need to download the project and unzip it to the SharePoint server (or, in my case, the laptop). There is a Word document detailing the steps that is really good, but I struggled with the following points during the setup and then the example deployment:

  • I didn’t have PowerPivot installed, which is a pre-requisite; you’ll need to run the SQL Server setup to deploy this. Once you have PowerPivot installed, you need to deploy it to your web application, as this is not done by default, otherwise you’ll get “Could not load type ‘Microsoft.AnalysisServices.SharePoint.Integration.ReportGalleryView’”.
  • Don’t be tempted to install SQL Server 2012 Reporting Services rather than SQL 2008 SSRS. Although you can create a service and point the database to SQL Server 2008, when you go to the service configuration page it pops up a correlation ID warning that it’s not the correct version of SQL. And when you uninstall it, it also removes the SharePoint SSRS add-in, which you will need to re-install by running the rsSharePoint.msi file from the SharePoint install.
  • When you configure Reporting Services in SharePoint, it may complain that it must use a secure connection. To fix this, change the key <Add Key="SecureConnectionLevel" Value="0"/> in the rsreportserver.config file from 2 to 0 and then restart Reporting Services (see the PowerShell sketch after this list).
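
To save hunting through the config file by hand, here is a minimal PowerShell sketch that flips the SecureConnectionLevel value and restarts the service; the file path and service name assume a default SQL Server 2008 SSRS instance, so adjust them for your own install.

# Path and service name are assumptions for a default MSSQLSERVER instance;
# named instances use MSRS10.<InstanceName> and ReportServer$<InstanceName>
$configPath = "C:\Program Files\Microsoft SQL Server\MSRS10.MSSQLSERVER\Reporting Services\ReportServer\rsreportserver.config"

[xml]$config = Get-Content $configPath

# Locate the SecureConnectionLevel setting wherever it sits in the file
$node = $config.SelectSingleNode("//Add[@Key='SecureConnectionLevel']")
if ($node -ne $null) {
    $node.SetAttribute("Value", "0")    # 2 requires SSL, 0 allows plain HTTP
    $config.Save($configPath)
}

# Restart Reporting Services so the change takes effect
Restart-Service "ReportServer"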

Now we can start deploying the example:

  • The first step of the sample deployment is to run BICollectionDBProvision.exe, which uses an example parameter of -scristiandt\powerpivot. If you installed PowerPivot with the default settings, you will have an instance of SQL Server Analysis Services called “POWERPIVOT”; do not be fooled by the example parameter, as it is not referring to that Analysis Services instance but to a SQL Server instance. So provide the server name and instance of the SQL Server 2008 Database Engine.
  • You’ll probably hit a connection issue at this point. To fix this you need to enable TCP/IP for network and client connections in SQL Server Configuration Manager.
  • The next step is to run .\installBIPivot.ps1, which worked successfully and deployed the solutions.
  • The sample builder, which deploys a sample report, people images and a PowerPivot spreadsheet, all worked well, but it didn’t add entries to the “BI Collection Administration” list, so I had to add these manually by copying the values from the SampleData/application.xml file.
  • Now I could add the Silverlight application generator file; remember to set the height to 800, otherwise you don’t see errors, and errors are exactly what I got: a “500 internal server error”, as did Peter, who commented on Cristian Petculescu’s blog about this project. The report worked fine and I checked the parameter configuration, then spent a while trying to call the SSRS report in the browser with the AccountManager parameter, but as this was in the form of an MDX value you can’t use the query string, only a form submit, due to the “&” character.
    It turned out to be an issue with the AccountManagerEx inventory entry, as the AccountManager inventory entry in the SharePoint list worked fine.

Hopefully you’ve now managed to configure everything as I did in which case you will see this:

I don’t know if the people images are meant to be there, but they are not in the sample report.

This diagram illustrates how the build process works for the sample:

What I haven’t figured out is where the properties are stored, or whether these are generated on the fly by the data browser rather than the application generator.

And I thought this was going to be simple compared to creating all the XML files and zoom images. Maybe it will be for my next BI report.

Leave a comment

Filed under Business Intelligence, SharePoint 2010

My thoughts on SPC11

This year’s SharePoint conference was the one in between major releases, so I wasn’t expecting anything new and exciting. To get the most out of the conference I followed @marcykellar’s 11 tips to get the most out of the conference and planned to attend sessions with speakers I knew were good, and to build on/extend my existing personal network. I achieved all my goals for the conference, after a rocky start, so I put keyboard to MSWord to summarise my experience.

Caught up with the NZ community and other SharePoint people at the AirNZ lounge (Debbie, Bendon, Paul, Grant, Steve, Jullian) for a pre-conference chat and to make new acquaintances. I didn’t get upgraded, as everyone else had also applied to do the same thing and had higher membership status than me, so I didn’t get much sleep.

Not booking transportation from LAX beforehand was a big mistake. We ended up on a Super Shuttle in the end, but it took us two and a half hours after landing to get to our hotels. After registering at the conference and catching up for lunch, we made plans for the evening and then it was into the conference at full tilt.

The keynote summed up SharePoint 2010 in a phrase: “Productivity Delivered”. My feeling is this is definitely what Microsoft have done, but businesses haven’t yet utilised all the productivity features, judging by the people I talked to and the sessions covering fundamentals that people didn’t seem to know. The demo of a terabyte farm failing over in 4 seconds or so under a load test using Denali was impressive. They also announced that BCS for web services would be available in the next update to Office 365.

The sessions I went to in the first few days were probably the wrong choice, as I picked them around areas of SharePoint where I thought I had good but not in-depth experience; unfortunately these sessions didn’t go any deeper than what I already knew, which was disappointing, although reassuring about my own skill set.

I decided to take advantage of the free Microsoft certifications, so I booked a slot on Tuesday after lunch (it turned out so did everyone else). I bought the exam book from one of the exhibitors and spent every moment between sessions reading it. I passed the configuration exam, although I was disappointed that I didn’t get 100%, but I guess with only four hours of revision and my experience it’s a reasonable achievement. These free exams were a great idea and I hope they make a reappearance in Vegas.

The sessions on document composition were my biggest takeaway, as this is an underused gem in SharePoint and Office integration that I don’t see regularly implemented but that can have big returns. John Peltonen’s session introduced the concepts (and the fact that Disneyland has an army of 200 cats they let out at night to keep the mice down). The basic concept is to use Content Controls in MSWord and then, through code, populate these with content from SharePoint lists, External Lists, other documents, forms etc. One takeaway from this session is that the OOXML SDK has a tool to simplify the process of writing the code that makes the edits. John also showed a cool idea using co-authoring to extract content from a document on the fly when someone places a particular tag; in this case creating a task to populate a task list. Scot Hillier’s session on generating business documents, also on this subject, was great as it showed another real-world example of document composition and some decomposition.

Rafal Lukawiecki’s session covered using BI components, an area I’m fairly familiar with. I’d heard he was worth seeing and wasn’t disappointed, as his humour had the whole room laughing; you need that with what could be a very boring subject, the highlight being when he showed his website with photographs he’d taken of a location in the US (totally off topic). He showed the new SQL Server data connector to Bing Maps, which is a quick way to show your data graphically. He also mentioned a new tool released for showing SQL data in a PivotViewer, which I’m going to investigate some more.

Monday night’s RED party was the place to be at the Heat Ultra Lounge. Chance to catch up with some of the speakers, people I’d connected to on Twitter like Erica Toelle and let off some steam on the dance floor. Getting in at 1am though does make the following day tiring but that was the earliest I got in from then on.

The new version of Duet was something I wanted to get a deep dive on, although the sessions were all high level. One of the people on the Duet SAP stand went through a personal demo for me so I could bounce around ideas and see how solutions can be built. This solution now uses External Content Types heavily, which is a great abstraction layer compared to past versions and is therefore easily understood by SharePoint professionals. The SAP integration includes a user mapping tool and an application that enables the SAP experts to build ECT definitions using a UI to identify and configure the SAP entities; this means the SAP expert does not need to understand SharePoint concepts.

Not having to queue for Disneyland rides, what can I say? Walking around with three other grown men and no kids was also a bonus, if not a little creepy. I got my spinning R2D2 and R2D2 Mickey Mouse ears that I’d missed out on during my other five visits to parks around the world, including this one. Also picked up a “build your own car” for my son’s Christmas present, or is it mine?

The sessions on HTML5 and jQuery were packed, so thankfully I’d gathered a lot of information at Tech-Ed in NZ this year. Just a word of caution: don’t use these tools to customise the ribbon, and use them sparingly elsewhere as required, otherwise you’ll spend a lot of time on cross-browser compatibility.

Leave a comment

Filed under SharePoint 2010

Sensibly govern use of folders in SharePoint

It’s the age-old question when designing your Information Architecture and governance plan: should you allow users to create folders in document libraries and lists? I worked for an organisation that banned folders and didn’t fully explain to users how to leverage metadata; in SharePoint 2007, with no Managed Metadata and no hierarchy navigation filters in libraries, this affected adoption. As the network drives were shut down, users had to store documents in SharePoint, so they simply opened the library in Explorer view and dragged in folders; the folder restriction was simply an interface change, not an event handler.

Personally I think there is a place for folders, which Mikhail Dikov covers well in his blog post “SharePoint folders need more love”.

So now that you’re convinced folders are a good thing, we still need to keep some control, otherwise users will create five-level-deep folder structures with no metadata, just as they did on the file system.

The solution I’ve built here uses an event receiver to allow folders to be created at the root level of a document library or list, but no deeper, i.e. one folder level only.

Following Karine Bosch’s blog on creating event receivers, http://karinebosch.wordpress.com/walkthroughs/event-receivers-walkthrough1/, I created an ItemAdding receiver that prevents sub-folders of folders from being created.

The following code snippet checks whether the item being created is the Folder content type or a content type inherited from Folder (itemContentType.StartsWith(spFolderContentType)). If it is a folder, the root folder of the library is compared to the parent folder (derived via the Remove call on the item URL), and if they do not match the error message is returned. Items do have a ParentFolder property, but this doesn’t appear to be set until the item has been added to the library.

public override void ItemAdding(SPItemEventProperties properties)
{
    string spFolderContentType = "0x0120";
    string itemContentType = properties.AfterProperties["ContentTypeId"].ToString();

    // Is this a folder, check if the content Type starts with the base Folder ID?
    if (itemContentType.StartsWith(spFolderContentType))
    {
        //Get the web site for this list
        using (SPWeb web = properties.OpenWeb())
        {
            // get the list
            SPList list = web.Lists[properties.ListId];
            SPFolder rootFolder = list.RootFolder;

            string folderURL = properties.AfterUrl;
            string folderName = properties.AfterProperties["Title"].ToString();
            string folderParentURL = folderURL.Remove(folderURL.Length - folderName.Length - 1, folderName.Length + 1);

            // Is the new folder being created in the root of the library
            if (folderParentURL != rootFolder.Url)
            {
                properties.ErrorMessage = "Folders can only be created in root of this library/list";
                properties.Status = SPEventReceiverStatus.CancelWithError;
                properties.Cancel = true;
            }
            else
            {
                base.ItemAdding(properties);
            }
        }
    }
    else
    {
        base.ItemAdding(properties);
    }
}

The following “Error” message is displayed to the user if they try to create a folder in another folder:

This isn’t a very nice message, as it implies that an error occurred when actually a policy was applied. You will probably want to provide better feedback explaining that it is not an error and where to read the policy.

And then I thought I was done, until I tried to test Explorer view, at which point I discovered that the AfterProperties are not supplied in the ItemAdding event when someone copies or creates a new item directly in Explorer view, and that the ItemUpdating event is also called. This thread covers the same question but the proposed answer doesn’t work:

http://social.msdn.microsoft.com/Forums/en/sharepointdevelopment/thread/8712648e-cf09-4f7b-ab13-1c6aacdf588a

So now what? Well, I said a few bad words about the inconsistencies of SharePoint and then attempted to find an answer. I decided to ignore ItemAdding, as the add was essentially already complete and the item was now doing an update, and to focus on building a solution around ItemUpdating.

Within the ItemUpdating function I could access the ListItemId, so I was able to retrieve the list item object and its properties to evaluate whether it was a top-level folder. Interestingly (or annoyingly), I couldn’t use the item Title or Name, as the item only included the base content type properties, but FileLeafRef provided the value I needed to build the parent URL string.

So ignore the code above; this is what works, with a tweak to ItemAdding to check whether the AfterProperties exist and, if not, presume it was an Explorer view update and ignore the item:


public override void ItemAdding(SPItemEventProperties properties)
{
    string spFolderContentType = "0x0120";
    bool isFolder = false;
    string itemTitle = "";

    //Get the web site for this list
    using (SPWeb web = properties.OpenWeb())
    {
        // Need to handle explorer view not containing AfterProperties
        if (properties.AfterProperties["ContentTypeId"] != null)
        {
            string itemContentType = properties.AfterProperties["ContentTypeId"].ToString();

            if (itemContentType.StartsWith(spFolderContentType))
            {
                isFolder = true;
                itemTitle = properties.AfterProperties["Title"].ToString();
            }
        }

        // Is this a folder, check if the content Type starts with the base Folder ID?
        if (isFolder)
        {
            // get the list
            SPList list = web.Lists[properties.ListId];
            SPFolder rootFolder = list.RootFolder;

            string folderURL = properties.AfterUrl;
            string folderName = itemTitle;
            string folderParentURL = folderURL.Remove(folderURL.Length - folderName.Length - 1, folderName.Length + 1);

            // Is the new folder being created in the root of the library
            if (folderParentURL != rootFolder.Url)
            {
                properties.ErrorMessage = "Folders can only be created in root of this library/list";
                properties.Status = SPEventReceiverStatus.CancelWithError;
                properties.Cancel = true;
            }
            else
            {
                base.ItemAdding(properties);
            }
        }
        else
        {
            base.ItemAdding(properties);
        }
    }
}


public override void ItemUpdating(SPItemEventProperties properties)
{
    string spFolderContentType = "0x0120";
    bool isFolder = false;
    string itemTitle = "";
    SPItem CurrentListItem = null;

    //Get the web site for this list
    using (SPWeb web = properties.OpenWeb())
    {
        // Need to handle explorer view not containing AfterProperties
        if (properties.AfterProperties["ContentTypeId"] == null)
        {
            SPList CurrentList = web.Lists[properties.ListId];
            CurrentListItem = CurrentList.GetItemById(properties.ListItemId);

            string itemContentType = CurrentListItem["ContentTypeId"].ToString();

            if (itemContentType.StartsWith(spFolderContentType))
            {
                itemTitle = CurrentListItem["FileLeafRef"].ToString();
                isFolder = true;
            }
        }

        // Is this a folder, check if the content Type starts with the base Folder ID?
        if (isFolder)
        {
            // get the list
            SPList list = web.Lists[properties.ListId];
            SPFolder rootFolder = list.RootFolder;

            string folderURL = properties.AfterUrl;
            string folderName = itemTitle;
            string folderParentURL = folderURL.Remove(folderURL.Length - folderName.Length - 1, folderName.Length + 1);

            // Is the new folder being created in the root of the library
            if (folderParentURL != rootFolder.Url)
            {
                properties.ErrorMessage = "Folders can only be created in root of this library/list";
                properties.Status = SPEventReceiverStatus.CancelWithError;
                properties.Cancel = true;

                CurrentListItem.Delete();
            }
            else
            {
                base.ItemUpdating(properties);
            }
        }
        else
        {
            base.ItemUpdating(properties);
        }
    }
}

When the user tries to add a folder through the explorer view, they will get the following message after changing the folder name:

When the Explorer view is refreshed, the item doesn’t exist. The user doesn’t see the returned ErrorMessage, so they have no idea why this happened; it might be prudent to send them an email, or even better an instant message, to tell them what happened.

This isn’t the most robust code and I’m sure there are other ways of comparing the parent folder strings so don’t use this in production without some further testing and due diligence.
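
If you want to try the receiver against a single library without a feature receiver, a minimal PowerShell sketch for binding it is below; the site URL, library name, assembly full name and class name are all placeholders for whatever your own project produces.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder values - substitute your own site, library, assembly and class
$web  = Get-SPWeb "http://intranet.contoso.com"
$list = $web.Lists["Shared Documents"]

$assembly = "FolderGovernance, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx"
$class    = "FolderGovernance.FolderDepthReceiver"

# Bind both events used above
$list.EventReceivers.Add([Microsoft.SharePoint.SPEventReceiverType]::ItemAdding, $assembly, $class)
$list.EventReceivers.Add([Microsoft.SharePoint.SPEventReceiverType]::ItemUpdating, $assembly, $class)
$list.Update()
$web.Dispose()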

Leave a comment

Filed under Custom Development, Fixes / Solutions

User Profiles in a Silverlight PivotViewer

After seeing the PivotViewer used for the Cricket World Cup and SharePoint sites, I thought it would be a great way of viewing user profiles in SharePoint. As always, I had a great idea without the time, or the in-depth technical knowledge of PivotViewer, to implement it, until now.

My aim was to create a Pivot view that could be updated daily or weekly, letting users see people information in a graphical form that can be used both for identifying groups of individuals with similar characteristics, such as location and/or skills, and for displaying histograms of the number of people that fit the selected filters, i.e. how many people are in each department.

Here’s the result with how I did it to follow:

Zooming in on an individual shows the picture in full and the profile properties I mapped.

The process involves creating the image collection and xml definitions that the Pivot Viewer uses to display the content and filtering, sorting and grouping controls.

For my first run, I’m keeping it simple and just including the person’s Name, description (About Me), department and link to My Site. This meant I could focus on the solution rather than working out how to get each property and format it.

To start with, I needed to export the User Profile pictures (presuming this is the main source; you may have another) to the file system. As I intend to automate the build of these files to support scheduling, I also needed to use the command-line tools to generate the deep zoom images. I decided to use PowerShell, as it’s my new best friend and quick to prototype.

In my first run, I created one PowerShell file to export the images and the CXML file, but discovered that the CXML must reference the correct image ID created by the command-line tool DZCollection.exe, so I split the file in two. The process is now as follows:

  1. Use PowerShell to export the images – get an enumerator over the UserProfileManager
  2. Export each profile image, if it exists, to the file system
  3. Execute the command-line tools to generate the deep zoom images and collections
  4. Use PowerShell to export the properties – enumerate again over the profiles, looking up the image ID from the XML file created in step 3
  5. Create the CXML file using a template

Powershell script for exporting images:

$ver = $host | select version
if ($ver.Version.Major -gt 1) { $Host.Runspace.ThreadOptions = "ReuseThread" }
Add-PsSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
Set-Location $home
Start-SPAssignment -Global

$LocalPath = "C:\Development\UserProfilePivot\ProfilePictures"
$MySiteUrl = "http://intranet.contoso.com/my/"

$site = Get-SPSite $MySiteUrl
$siteWeb = Get-SPWeb -Site $site
Write-Host "My Site retrieved:" $site

$context = Get-SPServiceContext($site)
$pm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)
$AllProfiles = $pm.GetEnumerator()

foreach ($profile in $AllProfiles)
{
    $photourl = $profile[[Microsoft.Office.Server.UserProfiles.PropertyConstants]::PictureUrl].Value
    if ($photourl) {
        $personName = $profile.DisplayName
        Write-Host "Exporting: " $personName

        # Download file
        $photoFile = $siteWeb.GetFile($photourl)
        $binary = $photoFile.OpenBinary()
        $stream = New-Object System.IO.FileStream(($LocalPath + "/" + $personName + ".jpg"), [System.IO.FileMode]::Create)
        $writer = New-Object System.IO.BinaryWriter($stream)
        $writer.Write($binary)
        $writer.Close()
    }
}

Stop-SPAssignment -Global

If you have a large collection of profiles, I would use the paging overload of GetEnumerator to specify the start row and page size; a rough sketch follows.
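
A minimal sketch of that paging approach; the (index, count) overload is the one referred to above, but I haven’t verified its exact signature, so treat the parameter order as an assumption.

# Page through profiles in batches rather than enumerating everything at once;
# the (index, count) overload and its parameter order are assumed here
$pageSize = 500
$total = $pm.Count
for ($start = 0; $start -lt $total; $start += $pageSize)
{
    foreach ($profile in $pm.GetEnumerator($start, $pageSize))
    {
        # ... same export logic as the script above ...
        Write-Host "Processing:" $profile.DisplayName
    }
}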

Create the deep zoom images and collection

I created a command-line file for this piece that calls the two tools available in the SDK:

cd "C:\Program Files (x86)\Microsoft Expression\Deep Zoom Composer\DZSamples"

DZConvert.exe C:\Development\UserProfilePivot\ProfilePictures\*.jpg C:\Development\UserProfilePivot\ProfilePictures\dzimages

DZCollection.exe C:\Development\UserProfilePivot\ProfilePictures\dzimages C:\Development\UserProfilePivot\UserPicturecollection.xml

Export the properties to create the CXML file

To keep it simple, I created the CXML file first by hand (you could use the Excel tool) and then just cloned the XML nodes in PowerShell (I got this tip from here: http://powershell.com/cs/blogs/tobias/archive/2009/02/02/xml-part-2-write-add-and-change-xml-data.aspx).

The template looked like this:

<?xml version="1.0" encoding="utf-8"?>
<Collection Name="User Profiles" SchemaVersion="1.0" xmlns="http://schemas.microsoft.com/collection/metadata/2009" xmlns:p="http://schemas.microsoft.com/livelabs/pivot/collection/2009" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <FacetCategories>
    <FacetCategory Name="Department" Type="String" />
  </FacetCategories>
  <Items ImgBase="ProfilePictures/UserPicturecollection.xml">
    <Item Img="#51" Id="51" Href="http://intranet.contoso.com" Name="Person Name">
      <Description>About Me</Description>
      <Facets>
        <Facet Name="Department">
          <String Value="Executive" />
        </Facet>
      </Facets>
    </Item>
  </Items>
</Collection>

Here’s the PowerShell script that generates the updated file:

$ver = $host | select version
if ($ver.Version.Major -gt 1) { $Host.Runspace.ThreadOptions = "ReuseThread" }
Add-PsSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
Set-Location $home
Start-SPAssignment -Global

$LocalPath = "C:\Development\UserProfilePivot"
$MySiteUrl = "http://intranet.contoso.com/my/"
$XMLFileLoc = "C:\Development\UserProfilePivot\UserProfilePivot.xml"
$PicCollFileLoc = "C:\Development\UserProfilePivot\UserPicturecollection.xml"

Write-Host "Load existing XML file: " $XMLFileLoc
$UserProfileXML = New-Object XML
$UserProfileXML.Load($XMLFileLoc)
Write-Host "XML file loaded"

# Template Item node that will be cloned for each profile
$cloneItem = @($UserProfileXML.Collection.Items.Item)[0]

Write-Host "Load collection XML file: " $PicCollFileLoc
$PicCollXML = New-Object XML
$PicCollXML.Load($PicCollFileLoc)
Write-Host "XML file loaded"

$site = Get-SPSite $MySiteUrl
$siteWeb = Get-SPWeb -Site $site
Write-Host "My Site retrieved:" $site

$context = Get-SPServiceContext($site)
$pm = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)
$AllProfiles = $pm.GetEnumerator()

foreach ($profile in $AllProfiles)
{
    $photourl = $profile[[Microsoft.Office.Server.UserProfiles.PropertyConstants]::PictureUrl].Value
    if ($photourl) {
        $personName = $profile.DisplayName
        $personDept = $profile[[Microsoft.Office.Server.UserProfiles.PropertyConstants]::Department].Value
        $personAbout = $profile[[Microsoft.Office.Server.UserProfiles.PropertyConstants]::AboutMe].Value
        $personURL = $profile[[Microsoft.Office.Server.UserProfiles.PropertyConstants]::PersonalSpace].Value
        Write-Host "Exporting: " $personName

        # Find the user's image reference in the collections file
        $ImagePath = "ProfilePictures/dzimages/" + $personName.Replace(" ", "%20") + ".xml"
        $ImageID = $PicCollXML.Collection.Items.I | Where-Object { $_.Source -eq $ImagePath }

        if ($ImageID) {
            $ProfileCount = $ImageID.Id
            $newperson = $cloneItem.Clone()
            $newperson.Img = "#" + $ProfileCount
            $newperson.Id = $ProfileCount

            if ($personURL) {
                $newperson.Href = "http://intranet.contoso.com" + $personURL
            } else {
                $newperson.Href = "http://intranet.contoso.com"
            }

            $newperson.Name = $personName
            $newperson.Description = $personAbout

            # Assume we know the order of the Facets
            $FacetDept = @($newperson.Facets.Facet)[0]
            $FacetDept.String.Value = $personDept

            $UserProfileXML.Collection.Items.AppendChild($newperson) > $null
        }
    }
}

Write-Host "Save XML file back"
$UserProfileXML.Save($XMLFileLoc)

Stop-SPAssignment -Global

In brief, I open the template CXML file and create a clone of the Item element. I then loop through all the profiles and, if they have a picture, find the corresponding element in the collection XML file using a Where-Object clause on the Items node. If a matching image is found, I get the user properties and start updating the cloned node. The Department value was tricky to reference, as you need to get the sub-element first (in this case Facets.Facet[0]) rather than just referencing it through the XML tree. Last of all, remember to append your cloned node and, when you’ve reached the last profile, write it all back to file.

Some improvements are needed as follows:

  • Only process users whose profiles were updated since the routine last ran
  • Add some more facets such as skills, past projects etc.
  • Re-use the same CXML file and look up each user’s entry by Name and update it
  • Enumerate in batches using the paging overload
  • Include users without an image and create a dummy profile picture instead
  • Bigger images to start with would work better
  • Remove the Bing search result link as it’s probably irrelevant in a corporate environment
  • My script could always be improved, I expect, and it may be possible to use FIM to do the export somehow, but this works for now

The final step is to upload all the files into a SharePoint document library, add the Silverlight PivotViewer control to the page and reference the CXML file. Unfortunately my VM gave me the “Your client does not support opening this list with Windows Explorer” error and I gave up trying to fix it, so a PowerShell sketch for uploading the files follows instead.
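
As Explorer view wasn’t cooperating, here is a minimal sketch of pushing the generated files into the library through the object model instead; the site URL and library name are placeholders, and the dzimages sub-folders would need the same treatment folder by folder so the relative paths in the CXML still resolve.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder site and library
$web    = Get-SPWeb "http://intranet.contoso.com/sites/pivot"
$folder = $web.GetFolder("PivotCollection")

# Upload the top-level files into the library root
Get-ChildItem "C:\Development\UserProfilePivot" | Where-Object { -not $_.PSIsContainer } | ForEach-Object {
    $bytes = [System.IO.File]::ReadAllBytes($_.FullName)
    $folder.Files.Add($_.Name, $bytes, $true) | Out-Null   # $true = overwrite
}

$web.Dispose()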

You can find out more about the Pivot Viewer here along with all the downloadable examples and tools:

http://www.silverlight.net/learn/data-networking/pivot-viewer/pivotviewer-control

As always, use the code as-is.

Leave a comment

Filed under PowerShell, SharePoint 2007, SharePoint 2010, Silverlight, User Profiles

Create an External Content Type against large SQL Server table with no Code

The Problem

I had a large dataset stored in SQL Server that I wanted to use in SharePoint 2010 as a lookup field against a contract record. Due to the external data throttling limit in SP2010, I can’t browse through more than 2000 records. This blog describes how I managed this with some SQL stored procedures and XSLTListView changes to page through 2,800 rows in the database without modifying SharePoint’s throttling limits.

I started by creating an External Content Type and pointing it directly to my table; follow this MSDN article for how to do this: http://msdn.microsoft.com/en-us/library/ee557243.aspx. Initially I thought that the “All Items” view only selected a small number of rows, so displaying in batches of 25 would work, but when you try to display the list you get a correlation ID error; I would have expected “You selected too many items” rather than an error.

I then did some further investigation and discovered that external data has throttling applied by SharePoint. This can be overridden using PowerShell, but that can have a performance impact and mostly relates to coded solutions: http://blogs.msdn.com/b/bcs/archive/2010/02/16/bcs-powershell-introduction-and-throttle-management.aspx (a quick sketch of those cmdlets follows). Turning the throttling off did show my list, but that wasn’t the ideal option for a production environment.
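
For reference, a minimal sketch of inspecting and relaxing the BCS database throttle with the cmdlets from the linked post; the maximum and default values are placeholders, and as noted above I wouldn’t leave raised limits in place on a production farm.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Get the Business Data Connectivity service application proxy
$proxy = Get-SPServiceApplicationProxy | Where-Object { $_.TypeName -like "*Business Data Connectivity*" }

# Inspect the current database item throttle (the 2000-item limit hit above)
$throttle = Get-SPBDCThrottleConfig -Scope Database -ThrottleType Items -ServiceApplicationProxy $proxy
$throttle

# Raise the limits (placeholder values), or disable enforcement entirely
Set-SPBDCThrottleConfig -Identity $throttle -Maximum 1000000 -Default 10000
# Set-SPBDCThrottleConfig -Identity $throttle -Enforced:$false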

Steps to my solution

The solution is based on 2 SQL Server stored procedures, an External Content Type and a customised XSLTListView.

First create a stored procedure to select rows in batches and test by sending row numbers and limit sizes.

The SP has two paging parameters, one for the row number (called PageNo) and the other for the page limit, plus an optional supplier name filter. The PageNo parameter will be used later to create a filter when picking a supplier as a lookup in a list.

USE [Gel_ExternalData]
GO

/****** Object: StoredProcedure [dbo].[getSuppliers] Script Date: 09/05/2011 10:17:46 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:      Alan Marshall
-- Create date: 05/04/2011
-- Description: Get Vendors for External Content Type
-- =============================================
ALTER PROCEDURE [dbo].[getSuppliers]
    -- Add the parameters for the stored procedure here
    @SupplierName nvarchar(200) = null,
    @pageNo int = 1,
    @limit int = 200
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    IF @pageNo IS null
        SET @pageNo = 1;

    DECLARE @startIndex int
    --SET @startIndex = ((@pageNo - 1) * @limit) + 1
    SET @startIndex = @pageNo

    DECLARE @endIndex int
    SET @endIndex = @startIndex + @limit

    DECLARE @SupplierNameLookup nvarchar(200) = null
    IF @SupplierName IS NOT null
        SET @SupplierNameLookup = '%' + @SupplierName + '%';

    WITH [CTE] AS (
        SELECT ROW_NUMBER() OVER (ORDER BY [MAILING_NAME]) AS [RowNumber]
              ,[ADDRESS_NUMBER]
              ,[MAILING_NAME]
              ,[LEGAL_ENTITY]
              ,[ADDRESS_LINE_2]
              ,[ADDRESS_LINE_3]
              ,[ADDRESS_LINE_4]
              ,[CITY]
              ,[POSTAL_CODE]
              ,[COUNTRY]
        FROM [Gel_ExternalData].[dbo].Suppliers
        WHERE [MAILING_NAME] LIKE ISNULL(@SupplierNameLookup, [MAILING_NAME])
    )
    SELECT * FROM [CTE]
    WHERE [CTE].[RowNumber] >= @startIndex
      AND [CTE].[RowNumber] <= @endIndex;
END

This may not be the most efficient method, but I’m not a T-SQL expert and I found this approach on another blog.

Create a stored procedure to select a single row

USE [Gel_ExternalData]
GO

/****** Object: StoredProcedure [dbo].[getSuppliersItem] Script Date: 09/05/2011 16:21:42 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:      <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
ALTER PROCEDURE [dbo].[getSuppliersItem]
    -- Add the parameters for the stored procedure here
    @AddressNumber int,
    @pageNo int = 1,
    @limit int = 200
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.

    WITH [CTE] AS (
        SELECT ROW_NUMBER() OVER (ORDER BY [MAILING_NAME] DESC) AS [RowNumber]
              ,[ADDRESS_NUMBER]
              ,[MAILING_NAME]
              ,[LEGAL_ENTITY]
              ,[ADDRESS_LINE_2]
              ,[ADDRESS_LINE_3]
              ,[ADDRESS_LINE_4]
              ,[CITY]
              ,[POSTAL_CODE]
              ,[COUNTRY]
        FROM [Gel_ExternalData].[dbo].[Suppliers]
        WHERE [ADDRESS_NUMBER] = @AddressNumber
    )
    SELECT * FROM [CTE]
END

This SP must have a unique identifier that will be used when the user chooses to view an item in the list. In my case this is ADDRESS_NUMBER.
You’ll have noticed I have a hard-coded sort field, which is needed to page through the items. A parameter could be added for this later, but that is outside the scope of this blog at the moment.

Create an external content type

  1. Open External Content Types in SharePoint Designer
  2. Select new content type button on the ribbon
  3. Provide a name for your content type
  4. Use Generic list although you can use contacts etc but I suspect you’ll hit the hard limit in Outlook.
  5. Select to link to external data source
  6. Add a connection and select SQL Server
    1. Note: you must have direct access at this point even if you intend to use Secure Store Service later. I built this solution on a single server so didn’t need Kerberos.
    2. I noticed the error messages are vague if the connection is unsuccessful; for instance, I got an “unable to connect” message even though I had only got the database name wrong, and I thought it was a credentials issue.
  7. Open up Routines in the returned database view
  8. Right click on the getSuppliers SP and select Read operation
  9. Enter the display name, note that this appears as the name of the default view in SharePoint when you create a list
  10. Your identifier must be Int32 as I found others don’t work well.
  11. You now need to specify the settings for the filters
  12. Click on the link to add a filter for your identifier (ADDRESS_NUMBER)
  13. Provide a sensible name and set this as a Comparison filter type as below:

  1. Set PageNo to be a comparison. Don’t be tempted to select PageNumber as the FilterType.
  2. Set Limit to be of type limit and 200
  3. Click Next and Map all the fields or rename as required. You need to select Map to Identifier for your unique column.

  1. If you renamed the identifier in the filters, you need to select it here for the ADDRESS_NUMBER
  2. Click Finish
  3. Select the paged items stored procedure and Read Items operation
  4. Set up the parameters as before with the SupplierName configured as below as this field is used for searching in the lookup picker:

  1. Set PageNo to be a comparison
  2. Set Limit as type Limit
  3. Click Next and then set the fields you want in the Show in Picker and rename to the same as the read item operation

  1. Click Finish and then Save the ECT.
  2. Now you can create a list and forms using the button on the ribbon
  3. Before you browse to the list, you need to set permissions on the BDC definition
    1. Open Central admin and navigate to Manage Services
    2. Select your Business Data Connectivity Service
    3. Locate the definition you just created and select Set Permissions from the drop down.

  1. Add administrators and Users with appropriate permissions
  2. Navigate to your new list and select modify view in the ribbon
  3. Set the data source filters.
    1. PageNo = {dvt_firstrow}
    2. Limit =25

  1. Set the Item limit to match the limit above

I then used SQL Profiler to see what calls SharePoint was making to the stored procedures.

Start up SQL Profiler and capture events for the database, then check the values being passed in.

So everything seemed to be working, but wait, what’s happening on page 2? It sent the correct stored procedure parameters, i.e. @PageNo=26 and @limit=25, but it showed only one record. I ran the SQL procedure again, pasting it from the profiler, and it returned 25 records, so what happened?

It turns out that the XSLTListView on the view actually queries all the data and then, using XSL, selects only the rows it is interested in. Apart from this seeming wildly inefficient, it needs to do this to create the filter drop-downs over the whole dataset.
When you look at the XSLTListView XSL stylesheet, you will find the following line:

<xsl:param name="AllRows" select="/dsQueryResponse/Rows/Row[$EntityName = '' or (position() &gt;= $FirstRow and position() &lt;= $LastRow)]"/>

What this does is select items from row 26 to row 52, 26 being the first item of the second page and the last row being FirstRow + the limit + 1. So it then selected only row 26 from the returned SQL data, as there were no more rows.

This was easily fixed by doing the following:
<xsl:param name="AllRows" select="/dsQueryResponse/Rows/Row"/>

Note: remember to remove all the ghost references in SharePoint Designer, otherwise it won’t save your changes.

This new XPath query selects all the rows that come back, because there will only ever be 25, or whatever the limit is. This fixed page 2, as I could now see all the records, but at the bottom of the page there wasn’t a right arrow to go to the next page, so I moved my attention to the Navigation XSL template and the CommandFooter.

There were two values that were essentially incorrect for my solution: the last row and the row count. The first row would correctly start at 26 for the second page, but the last row would also be 26 and the row count returned 26. So I needed to make the XSLT think that the last row was potentially the first row + the page limit, making 51, and that the total rows in my table were at least more than those currently displayed.

I created two new variables: one to set the last row to be the first row (26) + the current row count, and a new row count that is always more than the potential page content, made by adding the row count to the LastRow value.

<xsl:template name="CommandFooter">
    <xsl:param name="FirstRow" select="1"/>
    <xsl:param name="LastRow" select="1"/>
    <xsl:param name="dvt_RowCount" select="1"/>

    <xsl:variable name="TrueLastRow">
        <xsl:number value="$FirstRow + $dvt_RowCount"/>
    </xsl:variable>
    <xsl:variable name="Truedvt_RowCount">
        <xsl:number value="$LastRow + $dvt_RowCount"/>
    </xsl:variable>

    <xsl:if test="$FirstRow &gt; 1 or $dvt_nextpagedata">
        <xsl:call-template name="Navigation">
            <xsl:with-param name="FirstRow" select="$FirstRow" />
            <xsl:with-param name="LastRow" select="$TrueLastRow"/>
            <xsl:with-param name="dvt_RowCount" select="$Truedvt_RowCount"/>
        </xsl:call-template>
    </xsl:if>

The next variable which prevented the next-page option from displaying correctly was RowTotalCount, which is passed into the XSLT by the calling web part code. As I can’t retrieve how many rows are actually in my database table, I fudged this and put in 1 million as the value. Essentially this means that LastRowValue will always be equal to LastRow.

<xsl:template name="Navigation">
    <xsl:param name="FirstRow" select="1"/>
    <xsl:param name="LastRow" select="1"/>
    <xsl:param name="dvt_RowCount" select="1"/>

    <xsl:variable name="TrueRowTotalCount" select="1000000" />
    <xsl:variable name="LastRowValue">
        <xsl:choose>
            <xsl:when test="$EntityName = '' or $LastRow &lt; $TrueRowTotalCount">
                <xsl:value-of select="$LastRow"/>
            </xsl:when>
            <xsl:otherwise>
                <xsl:value-of select="$TrueRowTotalCount"/>
            </xsl:otherwise>
        </xsl:choose>

The final result is I got to records 2003 – 2028 as shown below before I got bored clicking next:

If you try this out you will notice that the last page is blank when you get to the end of your data, but you could probably add a count to the /Row XPath and show a message and a back button instead.
Filtering won’t work either, as it filters over just the 25 items shown and not the whole dataset.
Sorting could be fixed by passing dvt_sortfield and dvt_sortdir through to your stored procedures and handling them there.

To complete the solution I created a document library and added my Suppliers List as an “External Data” column:

And when I upload a document I can search through the list of suppliers to pick the one I want. You can remove the PageNo option by setting its type to Limit instead of Comparison. Notice that this does return a warning about too many results.

Leave a comment

Filed under Fixes / Solutions, Integration, Out-Of-Box Development, SharePoint 2010

Can your perfect IA be implemented in SharePoint 2010

I worked with a customer recently on designing the Information Architecture for a Contracts Management site. This consisted of identifying document templates, metadata, workflows, notifications and retention policies.
The IA was depicted in a Visio diagram, as it gives the end user something to visualise how the features of SharePoint will be used to deliver their solution; a screenshot of this is shown. A spreadsheet accompanies this to define all the content types, site columns, libraries and lists, workflows etc. using the Microsoft Services Ready Excel template.
I digress from the main topic of this post, which is how your perfect IA might not be implementable in SharePoint 2010 as you had planned.

The basic plan for the IA was to use the document centre template for each business area, with a document set to encapsulate the metadata about the contract and its related documents and to provide a means of approving and reviewing contract information as a single entity. It’s important when designing an IA for SharePoint to have a depth of understanding around the document management features, so one of the things I was very conscious of when doing the design was the limitations and boundaries of document sets, which are essentially folders on steroids, with some initial guidance from Paul Culmsee. On completion of the IA I handed it over to a SharePoint developer (me) to implement, thinking I’d made allowances for known features.

So where did my design fall down? External Lists.

Each contract had a supplier name attached to it and this list could be sourced from JDE. In my design I specified an External List that would retrieve data from a SQL table populated with data from JDE. This would mean all contracts would be attached to a known supplier, used consistently for all contracts and for other content in the future. This makes perfect sense from an IA perspective.
First of all, the external list was too long, so I hit the 2000-item limit and wrote some stored procedures to do paging (I talk about this in another blog). The next issue I encountered was that Business Data columns can’t be added to content types, which meant I had to add the column to the library in the document centre instead. I could add it as a lookup field, but then you don’t get to pick from a list by filtering. I went back and updated the IA documents, but didn’t need to change my overall plan.
Then came the content query web part, which in the wireframes showed a roll-up, at the top of the site, of all the contract document sets I was the owner of, selecting by content type. Unfortunately it doesn’t like Business Data columns. If I added a display value in the Item Style, the CQWP would request the field in the Edit Web Part properties but wouldn’t accept the name of the field. Doh. I shouted at the screen and complained at my own ability to design something I couldn’t implement.

In the end I had to create a list in SharePoint and have the lookup column attached to the document set. Later we’ll have to hook up the list to JDE by using a transformation probably in SQL Server Integration Services.

The moral of the story: unless you have used a feature in SharePoint before, make sure you can implement the key components of your IA design before creating the perfect solution.

Diagram of the Contracts IA created in Visio. You could use other tools such as Balsamiq mockups but I find the use of SharePoint Icons helpful to explain the features and relationships that will be implemented in SharePoint.

Leave a comment

Filed under SharePoint 2010

SharePoint 2010 Web Analytics under the covers

I encountered an issue with Web Analytics not processing usage, which was evident through the lack of hits in the usage reports in Central Administration and site collections. Fortunately I got the problem fixed, which I’ll share with you in a moment; in the process I did some digging into how the Web Analytics process works and the reporting databases, which may be of interest.

I started with the high level view provided in Microsoft Enterprise Content Management (ECM) Team Blog http://blogs.msdn.com/b/ecm/archive/2010/03/21/introducing-web-analytics-in-sharepoint-2010.aspx.

I’ve interpreted visually how web analytics get collected; the process looks something like this:

These are the steps that occur:

  • A user requests a page and the action gets picked up by the Web Analytics service that runs on each SharePoint server in your farm (this may be a server-side or JS call, but I haven’t investigated).
  • The Web Analytics service logs this in the “.usage” files in the location specified in Central Administration.
  • A timer job called “Microsoft SharePoint Foundation Usage Data Import”, which by default runs every 30 minutes, imports the logs into the staging database (see the PowerShell sketch after this list for checking these jobs).
  • Each night the “Microsoft SharePoint Foundation Usage Data Processing” timer job runs and transforms the data into the reporting database; from my investigation, this populates the “WAClickFact” (fact) table.
    • This timer job also runs a number of stored procedures to aggregate the data into other tables (WAClickAggregationByDate, WATrafficAggregationByDate, WASearchTrafficAggregationByDate, etc.) that are displayed in the reports. Note: running this job manually does not seem to execute the latter part of this process.
  • The last run time of the import from staging and the aggregation is logged in the Settings table in the Reporting database.
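
To check when these two timer jobs last ran, or to kick off the import on demand, here is a minimal PowerShell sketch; it matches on the display names quoted above rather than the jobs’ internal names, which I haven’t listed.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Find the usage import and processing jobs by display name
$usageJobs = Get-SPTimerJob | Where-Object {
    $_.DisplayName -like "Microsoft SharePoint Foundation Usage Data*"
}

# See when each job last ran
$usageJobs | Select-Object DisplayName, LastRunTime

# Kick off the 30-minute import job on demand; the nightly processing job can be
# started the same way, but as noted above that does not run the aggregation
# stored procedures
$usageJobs | Where-Object { $_.DisplayName -like "*Import*" } | Start-SPTimerJob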

In my case, the data for hits was being populated into the fact table in the reporting database, but the aggregation tables were missing data. The problem was that database files were missing, which is when I discovered that part of the timer job processing task creates new files for the aggregation tables every 4 or 5 days (this may be based on size), and does so to improve performance by partitioning the tables. The reason for the missing files? Not enough disk space was available, and as the routine never attempts to create the files again, it fails until you manually create the file.

Microsoft assisted in locating the missing file, which was logged in the diagnostic logs once we set Web Analytics logging to Verbose. We could then create the files manually using the script below and leave the overnight jobs to run. Thankfully this processed all the missing aggregations and we lost no data, so much thanks to Microsoft’s support team.

Use this SQL statement to find the file groups and check for any with zero files.

SELECT f.*,g.groupname, g.status AS FGStatus FROM sysfiles f LEFT OUTER JOIN sysfilegroups g ON f.groupid= g.groupid

Use the following SQL to get the file location and create a new file.

use DBName
go

DECLARE @DBFilePath NVARCHAR(2000)

SELECT @DBFilePath = LEFT(filename, LEN(filename) - CHARINDEX(N'\', REVERSE(filename)) + 1) FROM sysfiles WHERE RIGHT(filename, 3) = 'mdf'

IF NOT EXISTS (SELECT 1 FROM sysfiles f INNER JOIN sysfilegroups g ON f.groupid = g.groupid WHERE g.groupname = 'SharePoint_WebAnalytics_ReportingAggregation20101128')

EXEC('ALTER DATABASE showFGDB ADD FILE (
NAME = ''SharePoint_WebAnalytics_ReportingAggregation20101128'',
FILENAME = ''' + @DBFilePath + 'SharePoint_WebAnalytics_ReportingAggregation20101128.ndf'')
TO FILEGROUP SharePoint_WebAnalytics_ReportingAggregation20101128')

By Alan Marshall

Twitter: @pomealan

Principal Architect

Gen-i NZ

5 Comments

Filed under Admin / Config, Fixes / Solutions, SharePoint 2010, Uncategorized

“No valid proxy can be found” when publishing content type in SharePoint 2010

I found a number of posts with possible fixes for this problem, many of them relating to the Content Type hub URL in the Managed Metadata Service, e.g.:

http://social.technet.microsoft.com/Forums/en/sharepoint2010setup/thread/d9efe46c-7d55-4f51-9e09-c41ff4d40bda

http://charliedigital.com/2010/01/06/sharepoint-2010-content-type-publishing-setup/

However, I received this error and the Content Type Hub URL in the Managed Metadata Service appeared to be fine. It turned out that a new host header for the content type hub site had been created in IIS and an alternate access mapping added in SharePoint. If I accessed the content type hub site using the original host header (the one specified as the Content Type Hub in the Managed Metadata Service) and tried to publish a content type, everything worked fine. But doing the same thing using the newer host header resulted in the “No valid proxy can be found” message.

So if you get the “No valid proxy can be found” message and the Content Type Hub URL in the Managed Metadata Service looks OK, take a look at your host headers in IIS and your alternate access mappings in SharePoint; a quick PowerShell check is sketched below.
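
A quick way to see what SharePoint thinks the hub’s URLs are is to list the alternate access mappings for the hub’s web application and compare them against the hub URL configured in the Managed Metadata Service; a minimal sketch, with the web application URL as a placeholder:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder: the web application that hosts your content type hub
$webApp = Get-SPWebApplication "http://contenttypehub.contoso.com"

# List every alternate access mapping (zone, incoming and public URLs) for it;
# publish from the URL that matches the hub URL registered in the Managed Metadata Service
Get-SPAlternateURL -WebApplication $webApp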

Ian Docking – Senior Technical Consultant

1 Comment

Filed under Admin / Config, Fixes / Solutions, SharePoint 2010, SharePoint 2010 Foundation