Category Archives: Fixes / Solutions

Fixes for issues and solved problems

Sensibly govern use of folders in SharePoint

It’s the age-old question when designing your Information Architecture and governance plan: should you allow users to create folders in document libraries and lists? I worked for an organisation that banned folders but didn’t fully explain to users how to leverage metadata; in SharePoint 2007, with no Managed Metadata and no hierarchy navigation filters in libraries, this hurt adoption. As the network drives were shut down, users had to store documents in SharePoint, so they simply opened the library in Explorer view and dragged in folders; the folder restriction was only an interface change, not an event handler.

Personally I think there is a place for folders, which Mikhail Dikov covers well in his blog post SharePoint folders need more love.

So now you’re convinced that folders are a good thing, we still need to keep some control, otherwise users will create five-level-deep folder structures with no metadata, just as they did on the file system.

The solution I’ve built here uses an event receiver that allows folders to be created at the root level of a document library or list, but no deeper, i.e. one folder level only.

Following Karine Bosch’s blog on creating event receivers, http://karinebosch.wordpress.com/walkthroughs/event-receivers-walkthrough1/, I created an ItemAdding receiver that prevents sub-folders of folders being created.

The following code snippet checks whether the item being created is the Folder content type or a content type inherited from Folder (itemContentType.StartsWith(spFolderContentType)). If it is a folder, the root folder of the library is compared to the parent folder (derived by the Remove call), and if they do not match the receiver returns an error message. Items do have a ParentFolder property, but this doesn’t appear to be set until the item has been added to the library.

public override void ItemAdding(SPItemEventProperties properties)
{
    string spFolderContentType = "0x0120";
    string itemContentType = properties.AfterProperties["ContentTypeId"].ToString();

    // Is this a folder? Check if the content type starts with the base Folder ID
    if (itemContentType.StartsWith(spFolderContentType))
    {
        // Get the web site for this list
        using (SPWeb web = properties.OpenWeb())
        {
            // Get the list and its root folder
            SPList list = web.Lists[properties.ListId];
            SPFolder rootFolder = list.RootFolder;

            string folderURL = properties.AfterUrl;
            string folderName = properties.AfterProperties["Title"].ToString();
            string folderParentURL = folderURL.Remove(folderURL.Length - folderName.Length - 1, folderName.Length + 1);

            // Is the new folder being created in the root of the library?
            if (folderParentURL != rootFolder.Url)
            {
                properties.ErrorMessage = "Folders can only be created in root of this library/list";
                properties.Status = SPEventReceiverStatus.CancelWithError;
                properties.Cancel = true;
            }
            else
            {
                base.ItemAdding(properties);
            }
        }
    }
    else
    {
        base.ItemAdding(properties);
    }
}

The following “Error” message is displayed to the user if they try to create a folder in another folder:

This isn’t a very nice message, as it implies that an error occurred when really a policy was applied. You will probably want to provide better feedback explaining that it is not an error, and where to read the policy.

And then I thought I was done, until I tried to test Explorer view, at which point I discovered that the AfterProperties are not supplied in the ItemAdding event when someone copies or creates a new item directly in Explorer view, and that the ItemUpdating event is also called. This thread covers the same question, but the proposed answer doesn’t work:

http://social.msdn.microsoft.com/Forums/en/sharepointdevelopment/thread/8712648e-cf09-4f7b-ab13-1c6aacdf588a

So now what? Well, I said a few bad words about the inconsistencies of SharePoint and then attempted to find an answer. I decided to ignore ItemAdding for this case, since by that point the add had essentially already completed and an update was in progress, and to focus on building a solution around ItemUpdating.

Within the ItemUpdating function I could access the ListItemId, so I was able to retrieve the list item object and its properties to evaluate whether it was a top-level folder. Interestingly (or annoyingly), I couldn’t get the item Title or Name (the item just included the base content type properties), but FileLeafRef provided the value I needed to build the parent URL string.

So ignore the code above; this is what works, with a tweak to ItemAdding to check whether the AfterProperties exist and, if not, presume this was an Explorer view update and ignore the item:


public override void ItemAdding(SPItemEventProperties properties)
{
    string spFolderContentType = "0x0120";
    bool isFolder = false;
    string itemTitle = "";

    // Get the web site for this list
    using (SPWeb web = properties.OpenWeb())
    {
        // Need to handle Explorer view not containing AfterProperties
        if (properties.AfterProperties["ContentTypeId"] != null)
        {
            string itemContentType = properties.AfterProperties["ContentTypeId"].ToString();

            // Is this a folder? Check if the content type starts with the base Folder ID
            if (itemContentType.StartsWith(spFolderContentType))
            {
                isFolder = true;
                itemTitle = properties.AfterProperties["Title"].ToString();
            }
        }

        if (isFolder)
        {
            // Get the list and its root folder
            SPList list = web.Lists[properties.ListId];
            SPFolder rootFolder = list.RootFolder;

            string folderURL = properties.AfterUrl;
            string folderName = itemTitle;
            string folderParentURL = folderURL.Remove(folderURL.Length - folderName.Length - 1, folderName.Length + 1);

            // Is the new folder being created in the root of the library?
            if (folderParentURL != rootFolder.Url)
            {
                properties.ErrorMessage = "Folders can only be created in root of this library/list";
                properties.Status = SPEventReceiverStatus.CancelWithError;
                properties.Cancel = true;
            }
            else
            {
                base.ItemAdding(properties);
            }
        }
        else
        {
            base.ItemAdding(properties);
        }
    }
}


public override void ItemUpdating(SPItemEventProperties properties)
{
    string spFolderContentType = "0x0120";
    bool isFolder = false;
    string itemTitle = "";
    SPListItem CurrentListItem = null;

    // Get the web site for this list
    using (SPWeb web = properties.OpenWeb())
    {
        // Explorer view items come through without AfterProperties,
        // so fetch the item itself and inspect its properties instead
        if (properties.AfterProperties["ContentTypeId"] == null)
        {
            SPList CurrentList = web.Lists[properties.ListId];
            CurrentListItem = CurrentList.GetItemById(properties.ListItemId);

            string itemContentType = CurrentListItem["ContentTypeId"].ToString();

            // Is this a folder? Check if the content type starts with the base Folder ID
            if (itemContentType.StartsWith(spFolderContentType))
            {
                itemTitle = CurrentListItem["FileLeafRef"].ToString();
                isFolder = true;
            }
        }

        if (isFolder)
        {
            // Get the list and its root folder
            SPList list = web.Lists[properties.ListId];
            SPFolder rootFolder = list.RootFolder;

            string folderURL = properties.AfterUrl;
            string folderName = itemTitle;
            string folderParentURL = folderURL.Remove(folderURL.Length - folderName.Length - 1, folderName.Length + 1);

            // Is the new folder being created in the root of the library?
            if (folderParentURL != rootFolder.Url)
            {
                properties.ErrorMessage = "Folders can only be created in root of this library/list";
                properties.Status = SPEventReceiverStatus.CancelWithError;
                properties.Cancel = true;

                // The folder already exists by this point, so remove it
                CurrentListItem.Delete();
            }
            else
            {
                base.ItemUpdating(properties);
            }
        }
        else
        {
            base.ItemUpdating(properties);
        }
    }
}

When the user tries to add a folder through Explorer view, they will get the following message after changing the folder name:

When the Explorer view is refreshed, the item doesn’t exist. The user never sees the returned ErrorMessage, so they have no idea why this happened; it might be prudent to send them an email, or even better an instant message, telling them what happened.
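A helper along these lines could do the email part from within the receiver. This is a minimal sketch only, assuming outgoing email is configured for the farm and that the Microsoft.SharePoint and Microsoft.SharePoint.Utilities namespaces are referenced; the subject and body wording are my own placeholders, not from the original solution:

// Minimal sketch: email the user whose action triggered the event.
// Assumes outgoing email is configured in Central Administration.
private void NotifyUser(SPItemEventProperties properties)
{
    using (SPWeb web = properties.OpenWeb())
    {
        SPUser user = web.EnsureUser(properties.UserLoginName);
        if (!string.IsNullOrEmpty(user.Email))
        {
            string subject = "Folder removed by policy"; // placeholder wording
            string body = "Folders may only be created in the root of this " +
                          "library/list, so the folder you created in Explorer " +
                          "view was removed.";           // placeholder wording

            // Returns false if the mail could not be sent
            SPUtility.SendEmail(web, false, false, user.Email, subject, body);
        }
    }
}

Calling this just before CurrentListItem.Delete() in ItemUpdating would at least tell the user where their folder went.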

This isn’t the most robust code, and I’m sure there are other ways of comparing the parent folder strings, so don’t use this in production without some further testing and due diligence.

Leave a comment

Filed under Custom Development, Fixes / Solutions

Create an External Content Type against a large SQL Server table with no code

The Problem

I had a large dataset stored in SQL Server that I wanted to use in SharePoint 2010 as a lookup field against a contract record. Due to the external data throttling limit in SP2010, I couldn’t browse through more than 2000 records. This post covers how I managed to page through 2800 rows in the database, with some SQL stored procedures and XSLTListView changes, without modifying SharePoint’s throttling limits.

I started by creating an External Content Type and pointing it directly at my table; follow this MSDN article for how to do this: http://msdn.microsoft.com/en-us/library/ee557243.aspx. Initially I thought that the “All Items” view would only select a small number of rows, so displaying in batches of 25 would work, but when you try to display the list you get a correlation ID error; I would have expected a “You selected too many items” message rather than an error.

I then did some further investigation and discovered that external data has throttling applied by SharePoint. This can be overridden using PowerShell, but that can have a performance impact and mostly relates to coded solutions: http://blogs.msdn.com/b/bcs/archive/2010/02/16/bcs-powershell-introduction-and-throttle-management.aspx. Turning throttling off did show my list, but that wasn’t the ideal option for a production environment.

Steps to my solution

The solution is based on 2 SQL Server stored procedures, an External Content Type and a customised XSLTListView.

First, create a stored procedure to select rows in batches, and test it by sending different row numbers and limit sizes (a small test harness is sketched after the procedure).

The SP has two paging parameters, one for the row number (called pageNo) and the other for the page limit, plus an optional supplier name. The name parameter will be used later to create the search filter when the ECT is used as a lookup in a list.

USE [Gel_ExternalData]
GO

/****** Object: StoredProcedure [dbo].[getSuppliers] Script Date: 09/05/2011 10:17:46 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:      Alan Marshall
-- Create date: 05/04/2011
-- Description: Get Vendors for External Content Type
-- =============================================
ALTER PROCEDURE [dbo].[getSuppliers]
    -- Add the parameters for the stored procedure here
    @SupplierName nvarchar(200) = null,
    @pageNo int = 1,
    @limit int = 200
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    IF @pageNo IS null
        SET @pageNo = 1;

    DECLARE @startIndex int
    --SET @startIndex = ((@pageNo - 1) * @limit) + 1
    SET @startIndex = @pageNo

    DECLARE @endIndex int
    SET @endIndex = @startIndex + @limit

    DECLARE @SupplierNameLookup nvarchar(200) = null
    IF @SupplierName IS NOT null
        SET @SupplierNameLookup = '%' + @SupplierName + '%';

    WITH [CTE] AS (
        SELECT ROW_NUMBER() OVER (ORDER BY [MAILING_NAME]) AS [RowNumber]
            ,[ADDRESS_NUMBER]
            ,[MAILING_NAME]
            ,[LEGAL_ENTITY]
            ,[ADDRESS_LINE_2]
            ,[ADDRESS_LINE_3]
            ,[ADDRESS_LINE_4]
            ,[CITY]
            ,[POSTAL_CODE]
            ,[COUNTRY]
        FROM [Gel_ExternalData].[dbo].Suppliers
        WHERE [MAILING_NAME] LIKE ISNULL(@SupplierNameLookup, [MAILING_NAME])
    )
    SELECT * FROM [CTE]
    WHERE [CTE].[RowNumber] >= @startIndex
    AND [CTE].[RowNumber] <= @endIndex;
END

This may not be the most efficient method, but I’m not a T-SQL expert and I found this approach on another blog.
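To sanity-check the paging behaviour outside SharePoint, you can call the procedure directly from a small ADO.NET harness. This is a minimal sketch under the assumption that the database is reachable with integrated security; the connection string is a placeholder for your own environment:

using System;
using System.Data;
using System.Data.SqlClient;

class PagingTest
{
    static void Main()
    {
        // Placeholder connection string; point this at your own server
        string connStr = "Server=.;Database=Gel_ExternalData;Integrated Security=SSPI;";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("dbo.getSuppliers", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@SupplierName", DBNull.Value); // no name filter
            cmd.Parameters.AddWithValue("@pageNo", 26);                 // start row for page 2
            cmd.Parameters.AddWithValue("@limit", 25);

            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["RowNumber"], reader["MAILING_NAME"]);
                }
            }
        }
    }
}

With @pageNo = 26 and @limit = 25 this should return rows 26 through 51, matching what SharePoint sends for the second page of the view later in this post.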

Create a stored procedure to select a single row

USE [Gel_ExternalData]
GO

/****** Object: StoredProcedure [dbo].[getSuppliersItem] Script Date: 09/05/2011 16:21:42 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

-- =============================================
-- Author:      <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
ALTER PROCEDURE [dbo].[getSuppliersItem]
    -- Add the parameters for the stored procedure here
    @AddressNumber int,
    @pageNo int = 1,
    @limit int = 200
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.

    WITH [CTE] AS (
        SELECT ROW_NUMBER() OVER (ORDER BY [MAILING_NAME] DESC) AS [RowNumber]
            ,[ADDRESS_NUMBER]
            ,[MAILING_NAME]
            ,[LEGAL_ENTITY]
            ,[ADDRESS_LINE_2]
            ,[ADDRESS_LINE_3]
            ,[ADDRESS_LINE_4]
            ,[CITY]
            ,[POSTAL_CODE]
            ,[COUNTRY]
        FROM [Gel_ExternalData].[dbo].[Suppliers]
        WHERE [ADDRESS_NUMBER] = @AddressNumber
    )
    SELECT * FROM [CTE]
END

This SP must have a unique identifier that will be used when the user chooses to view an item in the list; in my case this is ADDRESS_NUMBER.
You’ll have noticed I have a hard-coded sort field, which is needed to page through the items. A parameter could be added for this later, but that’s outside the scope of this post for the moment.

Create an external content type

  1. Open External Content Types in SharePoint Designer.
  2. Select the new content type button on the ribbon.
  3. Provide a name for your content type.
  4. Use Generic List; you can use Contacts etc., but I suspect you’ll hit the hard limit in Outlook.
  5. Select to link to an external data source.
  6. Add a connection and select SQL Server.
    1. Note: you must have direct access at this point, even if you intend to use the Secure Store Service later. I built this solution on a single server so didn’t need Kerberos.
    2. I noticed the error messages are vague if the connection is unsuccessful; for instance, I got an “unable to connect” message even though I had only got the database name wrong, and initially thought it was a credentials issue.
  7. Open up Routines in the returned database view.
  8. Right-click on the getSuppliersItem SP and select the Read Item operation.
  9. Enter the display name; note that this appears as the name of the default view in SharePoint when you create a list.
  10. Your identifier must be Int32, as I found others don’t work well.
  11. You now need to specify the settings for the filters.
  12. Click on the link to add a filter for your identifier (ADDRESS_NUMBER).
  13. Provide a sensible name and set this as a Comparison filter type as below:
  14. Set PageNo to be a comparison. Don’t be tempted to select PageNumber as the FilterType.
  15. Set Limit to be of type Limit with a value of 200.
  16. Click Next and map all the fields, renaming as required. You need to select Map to Identifier for your unique column.
  17. If you renamed the identifier in the filters, you need to select that name here for ADDRESS_NUMBER.
  18. Click Finish.
  19. Select the paged-items stored procedure (getSuppliers) and the Read Items operation.
  20. Set up the parameters as before, with SupplierName configured as below, as this field is used for searching in the lookup picker:
  21. Set PageNo to be a comparison.
  22. Set Limit as type Limit.
  23. Click Next, then set the fields you want to Show in Picker and rename them to match the Read Item operation.
  24. Click Finish and then save the ECT.
  25. Now you can create a list and forms using the button on the ribbon.
  26. Before you browse to the list, you need to set permissions on the BDC definition:
    1. Open Central Administration and navigate to Manage Service Applications.
    2. Select your Business Data Connectivity Service.
    3. Locate the definition you just created and select Set Permissions from the drop-down.
  27. Add administrators and users with appropriate permissions.
  28. Navigate to your new list and select Modify View in the ribbon.
  29. Set the data source filters:
    1. PageNo = {dvt_firstrow}
    2. Limit = 25
  30. Set the Item Limit to match the limit above.

I then used SQL Profiler to see what calls SharePoint was making to the stored procedures. Start up SQL Profiler, capture events for the database, and check the values being passed in.

So everything seemed to be working, but wait, what’s happening on page 2! It sent the correct stored procedure parameters, i.e. @PageNo=26 and @limit=25, but it showed only one record. I ran the SQL procedure again, pasting it from the profiler, and it returned 25 records, so what happened?

It turns out that the XSLTListView actually queries all the data and then uses XSL to select only the rows it is interested in. Apart from this seeming wildly inefficient, it needs to do this to create the filter drop-downs over the whole dataset.
When you look at the XSLTListView XSL stylesheet, you will find the following line:

<xsl:param name="AllRows" select="/dsQueryResponse/Rows/Row[$EntityName = '' or (position() &gt;= $FirstRow and position() &lt;= $LastRow)]"/>

What this does is select items from row 26 to row 52: 26 being the first item of the second page, and the last row being FirstRow + the limit + 1. So from the returned SQL data it selected row 26 and nothing more, as there were no further rows.

This was easily fixed by doing the following:
<xsl:param name="AllRows" select="/dsQueryResponse/Rows/Row"/>

Note: remember to remove all the ghost references in SharePoint Designer, otherwise it won’t save your changes.

This new XPath query selects all the rows that come back, because there will only ever be 25 (or whatever the limit is). This fixed page 2, as I could now see all the records, but at the bottom of the page there wasn’t a right arrow to go to the next page, so I moved my attention to the Navigation XSL template and the CommandFooter.

There were two values that were essentially incorrect for my solution: the last row and the row count. The first row would correctly start at 26 for the second page, but the last row would also be 26 and the row count returned 26. So I needed to make the XSLT think that the last row was potentially the first row plus the page limit, making 51, and that the total rows in my table were at least more than currently displayed.

I created two new variables: one to set the last row to be the first row (26) plus the current row count, and a new row count that is always more than the potential page content, made by adding the row count to the LastRow value.

<xsl:template name="CommandFooter">
    <xsl:param name="FirstRow" select="1"/>
    <xsl:param name="LastRow" select="1"/>
    <xsl:param name="dvt_RowCount" select="1"/>
    <xsl:variable name="TrueLastRow">
        <xsl:number value="$FirstRow + $dvt_RowCount"/>
    </xsl:variable>
    <xsl:variable name="Truedvt_RowCount">
        <xsl:number value="$LastRow + $dvt_RowCount"/>
    </xsl:variable>
    <xsl:if test="$FirstRow &gt; 1 or $dvt_nextpagedata">
        <xsl:call-template name="Navigation">
            <xsl:with-param name="FirstRow" select="$FirstRow"/>
            <xsl:with-param name="LastRow" select="$TrueLastRow"/>
            <xsl:with-param name="dvt_RowCount" select="$Truedvt_RowCount"/>
        </xsl:call-template>
    </xsl:if>

The next value preventing the next-page option from displaying correctly was RowTotalCount, which is passed into the XSLT by the calling web part code. As I can’t retrieve how many rows are actually in my database table, I fudged this and put in one million as the value. Essentially this means that the value for LastRowValue will always be equal to LastRow.

<xsl:template name="Navigation">
    <xsl:param name="FirstRow" select="1"/>
    <xsl:param name="LastRow" select="1"/>
    <xsl:param name="dvt_RowCount" select="1"/>
    <xsl:variable name="TrueRowTotalCount" select="1000000"/>
    <xsl:variable name="LastRowValue">
        <xsl:choose>
            <xsl:when test="$EntityName = '' or $LastRow &lt; $TrueRowTotalCount">
                <xsl:value-of select="$LastRow"/>
            </xsl:when>
            <xsl:otherwise>
                <xsl:value-of select="$TrueRowTotalCount"/>
            </xsl:otherwise>
        </xsl:choose>
    </xsl:variable>

The final result is that I got to records 2003 to 2028, as shown below, before I got bored clicking next:

If you try this out, you will notice that the last page is blank when you get to the end of your data, but you could probably add a count to the /Row XPath and show a message and a back button instead.
Filtering won’t work either, as it filters over just the 25 items shown and not the whole dataset.
Sorting can be fixed by passing dvt_sortfield and dvt_sortdir and handling them in your stored procedures.

To complete the solution I created a document library and added my Suppliers List as an “External Data” column:

And when I upload a document I can search through the list of suppliers to pick the one I want. You can remove the PageNo option by setting its type to Limit instead of Comparison. Notice that this does return a warning about too many results.

Leave a comment

Filed under Fixes / Solutions, Integration, Out-Of-Box Development, SharePoint 2010

SharePoint 2010 Web Analytics under the covers

I encountered an issue with Web Analytics not processing usage, evident from the lack of hits in the usage reports in Central Administration and site collections. Fortunately I got the problem fixed, which I’ll share with you in a moment; in the process I did some digging into how the Web Analytics process works and the reporting databases, which may be of interest.

I started with the high-level view provided on the Microsoft Enterprise Content Management (ECM) Team Blog: http://blogs.msdn.com/b/ecm/archive/2010/03/21/introducing-web-analytics-in-sharepoint-2010.aspx.

I’ve interpreted the process visually; how web analytics get collected looks something like this:

These are the steps that occur:

  • A user requests a page and the action gets picked up by the Web Analytics service that runs on each SharePoint server in your farm (this may be a server-side or JS call, but I haven’t investigated).
  • The Web Analytics service logs this in the “.usage” files in the location specified in Central Administration.
  • A timer job called “Microsoft SharePoint Foundation Usage Data Import”, which by default runs every 30 minutes, imports the logs into the staging database.
  • Each night the “Microsoft SharePoint Foundation Usage Data Processing” timer job runs and transforms the data into the reporting database; from my investigation, this populates the “WAClickFact” (fact) table.
    • This timer job also runs a number of stored procedures to aggregate the data into other tables (WAClickAggregationByDate, WATrafficAggregationByDate, WASearchTrafficAggregationByDate, etc.) that are displayed in the reports. Note: running this job manually does not seem to execute this latter part of the process.
  • The last run times of the import from staging and of the aggregation are logged in the Settings table in the reporting database; a quick way to inspect this is sketched below.
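For that last point, a crude check is simply to dump the Settings table. A minimal sketch, assuming read access to the reporting database; the connection string and database name are placeholders, and I’m not assuming any particular column layout, so it prints whatever is there:

using System;
using System.Data.SqlClient;

class CheckAnalyticsSettings
{
    static void Main()
    {
        // Placeholder: point this at your Web Analytics reporting database
        string connStr = "Server=.;Database=WebAnalyticsReporting;Integrated Security=SSPI;";
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("SELECT * FROM [dbo].[Settings]", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Dump every column of every row, since the exact schema isn't documented here
                while (reader.Read())
                {
                    for (int i = 0; i < reader.FieldCount; i++)
                    {
                        Console.Write("{0}={1}  ", reader.GetName(i), reader[i]);
                    }
                    Console.WriteLine();
                }
            }
        }
    }
}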

In my case, the hit data was being populated into the fact table in the reporting database, but the aggregation tables were missing data. The problem was that database files were missing, which is when I discovered that part of the timer job’s processing creates new files for the aggregation tables every four or five days (this may be based on size), done to improve performance by partitioning the tables. The reason for the missing files? Not enough disk space was available, and as the routine never attempts to create the files again, it fails until you manually create the file.

Microsoft assisted in locating the missing file, which was logged in the diagnostic logs once we set Web Analytics logging to Verbose. We could then create the files manually using the script below and leave the overnight jobs to run. Thankfully this processed all the missing aggregations and we lost no data, so much thanks to Microsoft’s support team.

Use this SQL statement to find any groups and check for ones with zero files.

SELECT f.*,g.groupname, g.status AS FGStatus FROM sysfiles f LEFT OUTER JOIN sysfilegroups g ON f.groupid= g.groupid

Use the following SQL to get the file location and create a new file.

use DBName
go

DECLARE @DBFilePath NVARCHAR(2000)

SELECT @DBFilePath = LEFT(filename, LEN(filename) - CHARINDEX(N'\', REVERSE(filename)) + 1)
FROM sysfiles WHERE RIGHT(filename, 3) = 'mdf'

IF NOT EXISTS (SELECT 1 FROM sysfiles f INNER JOIN sysfilegroups g ON f.groupid = g.groupid
    WHERE g.groupname = 'SharePoint_WebAnalytics_ReportingAggregation20101128')

EXEC('ALTER DATABASE showFGDB ADD FILE (
NAME = ''SharePoint_WebAnalytics_ReportingAggregation20101128'',
FILENAME = ''' + @DBFilePath + 'SharePoint_WebAnalytics_ReportingAggregation20101128.ndf'')
TO FILEGROUP SharePoint_WebAnalytics_ReportingAggregation20101128')

By Alan Marshall

Twitter: @pomealan

Principal Architect

Gen-i NZ

5 Comments

Filed under Admin / Config, Fixes / Solutions, SharePoint 2010, Uncategorized

“No valid proxy can be found” when publishing content type in SharePoint 2010

I found a number of posts with possible fixes for this problem, many of them relating to the Content Type Hub URL in the Managed Metadata Service, e.g.

http://social.technet.microsoft.com/Forums/en/sharepoint2010setup/thread/d9efe46c-7d55-4f51-9e09-c41ff4d40bda

http://charliedigital.com/2010/01/06/sharepoint-2010-content-type-publishing-setup/

However, I received this error even though the Content Type Hub URL in the Managed Metadata Service appeared to be fine. It turned out that a new host header for the content type hub site had been created in IIS and an alternate access mapping added in SharePoint. If I accessed the content type hub site using the original host header (the one specified as the Content Type Hub in the Managed Metadata Service) and tried to publish a content type, everything worked fine. But doing the same thing using the newer host header resulted in the “No valid proxy can be found” message.

So if you get the “No valid proxy can be found” message and the Content Type Hub URL in the Managed Metadata Service looks OK, take a look at your host headers in IIS and your alternate access mappings in SharePoint.

Ian Docking – Senior Technical Consultant

1 Comment

Filed under Admin / Config, Fixes / Solutions, SharePoint 2010, SharePoint 2010 Foundation

What is really fast in returning results from a list based query ?


Is it SPList with SPQuery, with or without an index, the Lists web service, Search, or PortalSiteMapProvider?

I found some really surprising results from tests done on very large lists with 100K items:

PortalSiteMapProvider performs well with indexing and even better without indexing:

And it also performs well when using an ID field:
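For reference, a PortalSiteMapProvider query looks roughly like the sketch below. This assumes code running in a web context on a publishing site; the list name and CAML query are illustrative only, not taken from the original tests:

using System.Web;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing.Navigation;

// Minimal sketch: query a list through PortalSiteMapProvider so results
// are served from its object cache rather than a direct database hit.
public static SiteMapNodeCollection GetCachedItems(SPWeb web)
{
    PortalSiteMapProvider provider = PortalSiteMapProvider.WebSiteMapProvider;
    PortalWebSiteMapNode webNode =
        provider.FindSiteMapNode(web.ServerRelativeUrl) as PortalWebSiteMapNode;

    // Illustrative CAML: fetch the item with ID 1 from a list called "Tasks"
    SPQuery query = new SPQuery();
    query.Query = "<Where><Eq><FieldRef Name='ID'/><Value Type='Counter'>1</Value></Eq></Where>";

    return provider.GetCachedListItemsByQuery(webNode, "Tasks", query, web);
}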

Search comes second, as it uses the crawled index, but the overhead is that you have to wait for the crawl interval (usually 30 minutes) and managed properties need to be configured.

Without these considerations:

To load a page with 100 items per page takes around 16 seconds.

Deleting just one item from a 100K list can take about a minute.

The total memory consumption for a list with 2 million items under the root will be 4.5 GB.

Leave a comment

Filed under Admin / Config, Fixes / Solutions, SharePoint 2007

Search Administration – Could not connect to server

If you have the query server and index server roles on different servers in the farm and you get the following error showing in the ‘Server Status’ area of the Search Administration page….

Could not connect to server yourservernamehere for application ‘yourSSPnamehere’. This error might occur if the server is not responding to client requests, the firewall or proxy configuration is preventing the server from being contacted, or the search administration Web service is not running on the server.

…it could be because .NET 3.5 SP1 was installed prior to installing SharePoint 2007 SP2.

We encountered this a little while ago at a client site (SharePoint 2007, including the October 2009 cumulative updates, installed on servers running Windows Server 2003) and it turned out to be caused by corruption of the self-issued certificate used by the Office Server Web Services.

Following the steps in this Microsoft support article to create a new self-issued certificate fixed it for us: http://support.microsoft.com/kb/962928

The symptoms we were experiencing weren’t exactly as described in the support article, however. We could get to the Search Settings page without any problem, and while error event ID 6482 was being logged, it certainly wasn’t every minute. The event description was also more along the lines of ‘object not found’ than ‘application server administration job failed’. Search results were being returned successfully. In fact, apart from the error message on the Search Admin page, there was no obvious indication of any problem at all.

If you prefer, before jumping in and creating a new certificate, you can check whether the self-issued SSL certificate is working OK by opening a browser on the app/index server and typing in the following addresses:

http://yourwfename:56737/yourSSPname/Search/SearchAdmin.asmx
https://yourwfename:56738/yourSSPname/Search/SearchAdmin.asmx

…substituting in your WFE server name(s) and SSP name as appropriate.

Then do the same on the WFE(s), substituting in the app/index server name.

If you get to the SearchApplicationWebService page in each case, the servers are talking to each other OK. Otherwise you may have a problem.
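If you’d rather script the check than click through browsers, a quick probe along these lines works too; a minimal sketch, with the server and SSP names being the same placeholders as above:

using System;
using System.Net;

class SearchAdminProbe
{
    static void Main()
    {
        // Self-issued certificates fail validation by default; accept them
        // for this quick test only - never do this in production code
        ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

        // Placeholder URLs: substitute your server names and SSP name
        string[] urls =
        {
            "http://yourwfename:56737/yourSSPname/Search/SearchAdmin.asmx",
            "https://yourwfename:56738/yourSSPname/Search/SearchAdmin.asmx"
        };

        foreach (string url in urls)
        {
            try
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.UseDefaultCredentials = true; // run as an account with rights to the service
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine("{0} -> {1}", url, response.StatusCode);
                }
            }
            catch (WebException ex)
            {
                Console.WriteLine("{0} -> FAILED: {1}", url, ex.Message);
            }
        }
    }
}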
 
At the client mentioned above, browsing to both addresses from the app/index server to the WFE worked fine, but the https address didn’t work going from the WFE to the app/index server.

Ian Docking – Senior Technical Consultant

Leave a comment

Filed under Admin / Config, Fixes / Solutions, SharePoint 2007

Replicating SPThemes.XML Changes to Multiple FrontEnds

After being asked to create a new custom SharePoint 2007 theme for an environment with three front-end servers for an internal project, I soon noticed (post-deployment) that the Add/Remove item changes to the SPThemes.xml file performed by the FeatureReceiver were not replicated properly to all of the web front-end servers. After some investigation I found that within the FeatureReceiver code I needed to execute the AddThemeItem/RemoveThemeItem function calls from inside the FeatureInstalled/FeatureUninstalling events rather than the FeatureActivated/FeatureDeactivating events.

This is because in a SharePoint farm with multiple servers, the FeatureActivated/FeatureDeactivating events only get fired on a single server in the farm!

By putting the AddThemeItem/RemoveThemeItem function calls in the FeatureInstalled/FeatureUninstalling events instead, the code will get fired on every server in the farm.

Quick Example:

public class FeatureReceiver : SPFeatureReceiver
{
    // Fired on a single server only, so SPThemes.xml is not touched here
    public override void FeatureActivated(SPFeatureReceiverProperties properties) { }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }

    // Fired on every server in the farm, so each web front-end
    // updates its local SPThemes.xml
    public override void FeatureInstalled(SPFeatureReceiverProperties properties)
    {
        AddThemeItem();
    }

    public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
    {
        RemoveThemeItem();
    }
}

Vimal Naran – Senior Developer

1 Comment

Filed under Fixes / Solutions, SharePoint 2007, WSS 3.0

Windows Server 2008 firewall ports on a multi-server SharePoint farm

I ran into this problem a few weeks ago while installing and configuring a customer’s SharePoint 2007 farm comprising an application/index server and a single web front-end on Windows Server 2008.

I was unable to view or configure the services (Office SharePoint Search in my case) on the web front-end. Trying to do so via ‘Services on Server’ in Central Admin (Operations tab) resulted in the following error message:

An unhandled exception occurred in the user interface.Exception Information: Could not connect to server xxxxxxxx. This error might occur if the firewall or proxy configuration is preventing the server from being contacted.

A TechNet article on configuring Windows Firewall with Advanced Security mentioned that all ports used by web applications, including Central Admin, needed to be opened. After finding all of the port numbers as per the article (from the list of web apps in Central Admin) and making sure they were open in both servers’ firewalls, we still got the same error message.

We fixed the problem by opening ports 56737 and 56738. These ports are used by Office Server Web Services. You can see this site and its port numbers in IIS Manager, but there’s no sign of it in the list of web apps in Central Admin, of course.
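To confirm the ports are actually reachable from the other server before digging further, a simple TCP probe is enough; a minimal sketch, with the server name a placeholder for the machine hosting Office Server Web Services:

using System;
using System.Net.Sockets;

class PortCheck
{
    static void Main()
    {
        string server = "yourappserver"; // placeholder server name
        foreach (int port in new int[] { 56737, 56738 })
        {
            try
            {
                using (TcpClient client = new TcpClient())
                {
                    client.Connect(server, port);
                    Console.WriteLine("Port {0} open", port);
                }
            }
            catch (SocketException)
            {
                Console.WriteLine("Port {0} blocked or closed", port);
            }
        }
    }
}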

A lot of posts about Windows firewall configuration mention opening the ports for ‘all of the web apps’ but don’t mention the Office Server Web Services.

Ian Docking – Senior Technical Consultant

Leave a comment

Filed under Admin / Config, Fixes / Solutions, SharePoint 2007

Fix errors in Search Administration page

I was recently asked to look at a search issue reported by one of our clients. Although users could search, the farm was no longer indexing documents, which had been working correctly until now.

In the Search Administration page, the query and server status were reported as “Error”. The services-enabled-in-this-farm check page reported that no index was associated with the SSP. I re-assigned the index server through the shared services page, which removed some of the errors from the Search Administration page, but not the server status error, and there was still no indexing.

The event viewer was now reporting that it could not find the index file, so I restarted all the osearch services on each web front-end and then the index server, restarting them in reverse order. The server status messages then displayed the disk size but, when clicking on the content sources page, I received an access denied message even though I was signed in as the installation account.

I found Microsoft KB article 926959 relating to the error message reported in the event viewer for this access denied. It recommended changing permissions on the Tasks folder in Windows. This step fixed the problem and I could then do a full crawl of the content.

So what changed? I never traced the source of this problem, but I suspect it was either a group policy in AD that modified the permissions on the Task Scheduler folder, or a Windows patch.

Alan Marshall – Principal Architect

Leave a comment

Filed under Admin / Config, Fixes / Solutions