Optimize Performance of Team Foundation Server 2010


If you look around the internet you will find my suggestions on how to optimize the performance of Microsoft Team Foundation Server 2010. Several areas of TFS grow over time, such as the collection database, builds, attachments and workspaces. The key decision that needs to be made is data retention: every data lifecycle needs an end in sight.

To identify what is in your TFS Project Collection database, execute the following query:

CREATE TABLE #t (
    [name] NVARCHAR(128),
    [rows] CHAR(11),
    reserved VARCHAR(18),
    data VARCHAR(18),
    index_size VARCHAR(18),
    unused VARCHAR(18)
)
GO

INSERT #t
EXEC [sys].[sp_MSforeachtable] 'EXEC sp_spaceused ''?'''
GO

SELECT
    name AS TableName,
    [rows] AS [Rows],
    ROUND(CAST(REPLACE(reserved, ' KB', '') AS FLOAT) / 1024, 2) AS ReservedMB,
    ROUND(CAST(REPLACE(data, ' KB', '') AS FLOAT) / 1024, 2) AS DataMB,
    ROUND(CAST(REPLACE(index_size, ' KB', '') AS FLOAT) / 1024, 2) AS IndexMB,
    ROUND(CAST(REPLACE(unused, ' KB', '') AS FLOAT) / 1024, 2) AS UnusedMB
FROM #t
ORDER BY CAST(REPLACE(reserved, ' KB', '') AS FLOAT) DESC
GO

DROP TABLE #t
GO

Save this result so that you know where you were prior to cleaning up the database.
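If you would rather keep the baseline inside SQL Server than copy the grid output, a minimal sketch is to snapshot the temporary table before it is dropped (run this just before the DROP TABLE #t statement; the table name TfsTableSizeBaseline is only an example):

SELECT GETDATE() AS CapturedOn, * INTO TfsTableSizeBaseline FROM #t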

Workspaces

While you can use tf.exe to manage workspaces, I prefer to purge workspaces manually using the Workspace Sidekick in Team Foundation Sidekicks, as it provides an easy-to-use interface. I recommend using this tool to remove workspaces that have not been accessed since a defined date. This removes excess data from your TFS collection database. If you prefer the command line, see the tf.exe sketch below.
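A rough command-line sketch of the same clean-up, assuming tf.exe from Visual Studio/Team Explorer 2010 is on your path and substituting your own collection URL, workspace name and owner (list the workspaces first, review them, then delete the ones that are no longer needed):

tf workspaces /collection:TeamProjectCollectionUrl /owner:* /computer:* /format:detailed
tf workspace /delete /collection:TeamProjectCollectionUrl WorkspaceName;WorkspaceOwner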

Builds

It is also important to clean up old build data, especially continuous integration builds. The tfsbuild.exe application is designed to accomplish this: delete removes the build outputs and marks the builds as deleted, and destroy then permanently removes the build records from the collection database. Here is an example that targets CI build definitions with builds completed on or before 12/6/2011:

tfsbuild delete /collection:TeamProjectCollectionUrl TeamProject /deleteOptions:All /builddefinition:"\TeamProject\*CI" /noprompt /DateRange:~12/6/2011

tfsbuild destroy /collection:TeamProjectCollectionUrl TeamProject /builddefinition:"\TeamProject\*CI" /noprompt /DateRange:~12/6/2011

Attachments

To purge old attachments I recommend using the Test Attachment Cleaner; here is my recommended settings file:

<!-- View/delete all attachments on test runs older than 30 days that are not linked to active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun> 
  <LinkedBugs>    
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

Here is an example command:

tcmpt.exe /collection:TeamProjectCollectionUrl /teamProject:TeamProject /settingsFile:Settings.xml /mode:Delete
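If you want to see what would be removed before committing to the delete, my understanding is that the tool also supports a preview mode; run it first with /mode:Preview (verify the exact switch against the tool's help output) and review the results:

tcmpt.exe /collection:TeamProjectCollectionUrl /teamProject:TeamProject /settingsFile:Settings.xml /mode:Preview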

 

Database

Now that you have removed a large amount of data, rerun the table-size query from earlier:

CREATE TABLE #t (
    [name] NVARCHAR(128),
    [rows] CHAR(11),
    reserved VARCHAR(18),
    data VARCHAR(18),
    index_size VARCHAR(18),
    unused VARCHAR(18)
)
GO

INSERT #t
EXEC [sys].[sp_MSforeachtable] 'EXEC sp_spaceused ''?'''
GO

SELECT
    name AS TableName,
    [rows] AS [Rows],
    ROUND(CAST(REPLACE(reserved, ' KB', '') AS FLOAT) / 1024, 2) AS ReservedMB,
    ROUND(CAST(REPLACE(data, ' KB', '') AS FLOAT) / 1024, 2) AS DataMB,
    ROUND(CAST(REPLACE(index_size, ' KB', '') AS FLOAT) / 1024, 2) AS IndexMB,
    ROUND(CAST(REPLACE(unused, ' KB', '') AS FLOAT) / 1024, 2) AS UnusedMB
FROM #t
ORDER BY CAST(REPLACE(reserved, ' KB', '') AS FLOAT) DESC
GO

DROP TABLE #t
GO

Compare these results to the results from the initial query to understand the impact on the data size of each table. If there is a large difference in table sizes, you will want to perform two stages of optimization on the database itself: index maintenance followed by a shrink. First, I recommend rebuilding or reorganizing every fragmented index in the database. There are many ways to do this; here is the T-SQL script that I have used (run it in the context of your collection database, since it uses DB_ID() to scope the fragmentation query):

DECLARE @cmd NVARCHAR(1000)
DECLARE @Table VARCHAR(255)
DECLARE @SchemaName VARCHAR(255)
DECLARE @IndexName VARCHAR(255)
DECLARE @AvgFragmentationInPercent DECIMAL
DECLARE @fillfactor INT
DECLARE @FragmentationThresholdForReorganizeTableLowerLimit VARCHAR(10)
DECLARE @FragmentationThresholdForRebuildTableLowerLimit VARCHAR(10)
DECLARE @Message VARCHAR(1000)

SET NOCOUNT ON

-- You can specify your own values for the reorganize and rebuild thresholds. The defaults
-- of 10 and 30 mean an index is reorganized if its fragmentation level is greater than or
-- equal to 10 and less than 30; if the fragmentation level is 30 or more, the index is rebuilt.
SET @fillfactor = 90
SET @FragmentationThresholdForReorganizeTableLowerLimit = '10.0' -- Percent
SET @FragmentationThresholdForRebuildTableLowerLimit = '30.0' -- Percent

BEGIN TRY

-- Ensure the temporary table does not exist
IF (SELECT OBJECT_ID('tempdb..#FragmentedTableList')) IS NOT NULL
DROP TABLE #FragmentedTableList;

SET @Message = 'DATE : ' + CONVERT(VARCHAR, GETDATE()) + ' - Retrieving indexes with high fragmentation from ' + DB_NAME() + ' database.'
RAISERROR(@Message, 0, 1) WITH NOWAIT

SELECT OBJECT_NAME(IPS.OBJECT_ID) AS [TableName], avg_fragmentation_in_percent, SI.name [IndexName],
schema_name(ST.schema_id) AS [SchemaName], 0 AS IsProcessed INTO #FragmentedTableList
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, NULL) IPS
JOIN sys.tables ST WITH (nolock) ON IPS.OBJECT_ID = ST.OBJECT_ID
JOIN sys.indexes SI WITH (nolock) ON IPS.OBJECT_ID = SI.OBJECT_ID AND IPS.index_id = SI.index_id
WHERE ST.is_ms_shipped = 0 AND SI.name IS NOT NULL
AND avg_fragmentation_in_percent >= CONVERT(DECIMAL, @FragmentationThresholdForReorganizeTableLowerLimit)
ORDER BY avg_fragmentation_in_percent DESC

SET @Message = 'DATE : ' + CONVERT(VARCHAR, GETDATE()) + ' - Retrieved indexes with high fragmentation from ' + DB_NAME() + ' database.'

RAISERROR(@Message, 0, 1) WITH NOWAIT
RAISERROR('', 0, 1) WITH NOWAIT

WHILE EXISTS ( SELECT 1 FROM #FragmentedTableList WHERE IsProcessed = 0 )
BEGIN

  SELECT TOP 1 @Table = TableName, @AvgFragmentationInPercent = avg_fragmentation_in_percent,
  @SchemaName = SchemaName, @IndexName = IndexName
  FROM #FragmentedTableList
  WHERE IsProcessed = 0

  -- Reorganize the index
  IF((@AvgFragmentationInPercent >= @FragmentationThresholdForReorganizeTableLowerLimit) AND (@AvgFragmentationInPercent < @FragmentationThresholdForRebuildTableLowerLimit))
  BEGIN
    SET @Message = 'DATE : ' + CONVERT(VARCHAR, GETDATE()) + ' - Reorganizing Index for [' + @Table + '] which has avg_fragmentation_in_percent = ' + CONVERT(VARCHAR, @AvgFragmentationInPercent) + '.'
    RAISERROR(@Message, 0, 1) WITH NOWAIT
    SET @cmd = 'ALTER INDEX ' + @IndexName + ' ON [' + RTRIM(LTRIM(@SchemaName)) + '].[' + RTRIM(LTRIM(@Table)) + '] REORGANIZE'
    EXEC (@cmd)
    --PRINT @cmd
    SET @Message = 'DATE : ' + CONVERT(VARCHAR, GETDATE()) + ' - Reorganize Index completed successfully for [' + @Table + '].'
    RAISERROR(@Message, 0, 1) WITH NOWAIT
    RAISERROR('', 0, 1) WITH NOWAIT
  END
  -- Rebuild the index
  ELSE IF (@AvgFragmentationInPercent >= @FragmentationThresholdForRebuildTableLowerLimit)
  BEGIN
    SET @Message = 'DATE : ' + CONVERT(VARCHAR, GETDATE()) + ' - Rebuilding Index for [' + @Table + '] which has avg_fragmentation_in_percent = ' + CONVERT(VARCHAR, @AvgFragmentationInPercent) + '.'
    RAISERROR(@Message, 0, 1) WITH NOWAIT
    SET @cmd = 'ALTER INDEX ' + @IndexName + ' ON [' + RTRIM(LTRIM(@SchemaName)) + '].[' + RTRIM(LTRIM(@Table)) + '] REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3), @fillfactor) + ', STATISTICS_NORECOMPUTE = OFF)'
    EXEC (@cmd)
    --PRINT @cmd
    SET @Message = 'DATE : ' + CONVERT(VARCHAR, GETDATE()) + ' - Rebuild Index completed successfully for [' + @Table + '].'
    RAISERROR(@Message, 0, 1) WITH NOWAIT
    RAISERROR('', 0, 1) WITH NOWAIT
  END

  UPDATE #FragmentedTableList
  SET IsProcessed = 1
  WHERE TableName = @Table
  AND IndexName = @IndexName
END

DROP TABLE #FragmentedTableList

END TRY

BEGIN CATCH
  PRINT 'DATE : ' + CONVERT(VARCHAR, GETDATE()) + ' There is some run time exception.'
  PRINT 'ERROR CODE : ' + CONVERT(VARCHAR, ERROR_NUMBER())
  PRINT 'ERROR MESSAGE : ' + ERROR_MESSAGE()
END CATCH

Once this completes, I recommend reorganizing pages and releasing unused space; the Shrink Database wizard in SQL Server Management Studio will accomplish this, as will the T-SQL shown below.
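If you prefer to script the shrink instead of using the wizard, a minimal sketch using the standard DBCC command follows; Tfs_DefaultCollection is a placeholder for your collection database name, and 10 is the target percentage of free space to leave in the files:

DBCC SHRINKDATABASE (N'Tfs_DefaultCollection', 10)
GO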

The last step should be to rebuild the warehouse database and the Analysis Services database. This can be done from the Team Foundation Server Administration Console (Application Tier > Reporting), or from the command line as shown below.
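If you would rather script this step as well, my recollection is that the TFSConfig utility on the application tier provides a RebuildWarehouse command; treat the exact switch as an assumption and confirm it against TFSConfig's help on your server before running it:

REM Run from the TFS Tools directory on the application tier
TFSConfig RebuildWarehouse /all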