Getting Drive Info, Part 5, the SSIS WMI Data Reader Task

In the final installment of the Getting Drive Info series (Part 1, Part 2, Part 3, Part 4), SSIS will again be used to collect and save the drive information. This time the WMI Data Reader Task will be used to collect it.

WMI, Windows Management Instrumentation, is the Microsoft implementation of the industry-standard Web-Based Enterprise Management (WBEM) initiative, which uses the Common Information Model (CIM). WMI provides easy access to information about a computer's hardware and software via classes like Win32_LogicalDisk. The classes can be queried with a SQL-like language called WQL (SQL for WMI).

WMI Data Reader Task

The package uses the WMI Data Reader Task to gather the disk information. It requires a WMI connection and a WQL query to define what information to gather and from where.

Let’s start with the WMI connection configuration. Right click in the BIDS Connection Managers pane and click on New Connection… which will pop up the Add SSIS Connection Manager dialog.

Select WMI and click the Add… button. The WMI Connection Manager Editor will be displayed.

Configuring the WMI connection for the local computer is fairly simple. Set it up as shown above, with the name of your choice of course, and click OK. This configuration will only work if the account that the package will run under has rights to read data from the WMI provider. If there are security issues reading WMI data, other credentials will need to be provided or the account that runs the package will need to be given the rights to query the WMI provider.

In the case where the server being queried for WMI data is not the computer the package is running on, credentials will need to be supplied. If the account that is running the package has access on this other computer, then checking Use Windows Authentication should work. When the other computer is in a different domain or the account running the package does not have access, then a Windows user name and password will need to be supplied. This can be a domain account or a local Windows account on the other computer. Always supply the user name with a domain or the other computer name. In highly secure environments, remote WMI security access may need to be configured on the other computer. WMI security configuration will not be covered here.

The Namespace property can usually be set to \root\cimv2. There will be cases when the WMI class to be queried is under a different WMI namespace. These other namespaces can be browsed with WMI CIM Studio or Scriptomatic 2.0. I also recommend these tools for browsing the available classes and seeing their properties and values. A sample of the namespaces available is shown below.

Now that the connection is configured, work can begin on the WMI Data Reader Task.

Create a WMI Data Reader Task on the Control Flow designer pane.

Open the WMI Data Reader Editor. The SQL Server 2005 version is displayed. The SQL Server 2008 version has an additional General page where the Name and Description of the task can be set.

Let’s set up the WMI Options.

The WmiConnection has been set to the one configured above.

WqlQuerySourceType is set to Direct Input, since the query will be statically defined in the WqlQuerySource property. WqlQuerySourceType can also be set to File connection or Variable. Use File connection if the query is defined in a file. Variable would be used if the query needs to be dynamic. A dynamic query is usually required if the WMI Data Reader Task is placed in a looping container or the query is dependent on the results of a previous task in the package.

WqlQuerySource needs to be set to a valid WQL query. In order to get the drive information, there are two different WMI classes that could be queried. The first, Win32_LogicalDisk, only seems to handle locally attached storage. The query to get the drive information is:

SELECT DeviceId, Size, FreeSpace FROM Win32_LogicalDisk WHERE DriveType = 3

The other class to try is Win32_Volume. This class would probably need to be used to get the information for mount points. (Thanks Dave Levy!) Here is the query:

SELECT DriveLetter, Capacity, FreeSpace FROM Win32_Volume WHERE DriveType = 3

DriveType 3 limits the query to fixed drives. DriveType 2, removable drives, may need to be included. Add OR DriveType = 2 to the queries, as shown below, if all the drives expected are not being returned.
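For example, the Win32_LogicalDisk query with removable drives included would be:

SELECT DeviceId, Size, FreeSpace FROM Win32_LogicalDisk WHERE DriveType = 3 OR DriveType = 2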

OutputType should be set to Data table so that the results from multiple drives can be returned. The data will end up in a variable that we will define later. The other values for OutputType are Property name and value and Property value.

OverwriteDestination should be set to Overwrite destination. The other values are Keep Original and Append to destination. The SQL Server 2008 editor disables this property, which really makes more sense when writing to a variable.

DestinationType should be set to Variable. The other value is File connection. Use File connection when the data needs to be written to a file. Changing this property changes the options in the next property Destination.

Destination should be set to a variable of type Object. The variable DriveInfo can be created from here or beforehand from the variables browser. Remember to watch the variable scoping. It should be scoped at the package level.

That completes the configuration of the WMI Data Reader Task.

Data Flow Tasks

The contents of the DriveInfo variable need to be processed with a Data Flow Task. This will be accomplished with a Script Component used as a data source feeding an OLE DB destination that will write the drive information to a table.

Create a Data Flow Task on the Control Flow designer pane and connect the WMI Data Reader Task to it.

The Script Component

Switch over to the Data Flow designer pane and drag a Script Component object from the Data Flow Transformations section of the Toolbox. The Select Script Component Type dialog will appear. The Script Component can be a source of data, a transformer of data or a consumer of data. Choose Source, as this Script Component is going to provide drive information in a format that an OLE DB destination can consume, and click OK. Rename the Script Component Get DriveInfo. All of the images below came from BIDS 2005. BIDS 2008 is slightly different.


Double click on the Script Component to open the editor. In SQL Server 2008 the Script editor is the default. Click on Inputs and Outputs first to display that editor. Now let’s add some output columns. Rename Output 0 to DriveInfo and add the columns:

  • Drive, data type string [DT_STR]
  • DriveSize, data type float [DT_R4]
  • FreeSpace, data type float [DT_R4]

Here are the column property details for the Drive column.

It is important to define the output columns first so they show up in the OutputBuffer object while coding the script. The output buffer will be named DriveInfoBuffer in this case.

Switch the Script Transformation Editor to Script mode by clicking on Script in the left pane.

The ReadOnlyVariables property in the Custom Properties section needs to be set to User::DriveInfo so that the Data Table filled by the WMI Data Reader Task can be processed by the Script Component.

Now it’s time to add the code to get the drive information and send it to the output buffer. Click the Design Script… button and add the code below. The sample code is for the Win32_LogicalDisk query. If Win32_Volume is being used, change the column names to match that WMI query. Note that the User::DriveInfo variable has to be converted to a DataTable type. Here is the code in VB.Net for SSIS 2005.

Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Public Overrides Sub CreateNewOutputRows()
        Dim dr As DataRow
        Dim dt As DataTable

        dt = CType(Variables.DriveInfo, DataTable)
        For Each dr In dt.Rows
            DriveInfoBuffer.AddRow()
            DriveInfoBuffer.Drive = dr("DeviceId").ToString()
            DriveInfoBuffer.DriveSize = CType(dr("Size"), Single)
            DriveInfoBuffer.FreeSpace = CType(dr("FreeSpace"), Single)
        Next
    End Sub

End Class

Here is the code in C# for SSIS 2008.

Make sure to set the ScriptLanguage property to Microsoft Visual C# 2008 in BIDS 2008 before clicking the Edit Script… button and pasting in the code below.

using System;
using System.Data;
using System.IO;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    public override void PreExecute()
    {
        base.PreExecute();
    }

    public override void PostExecute()
    {
        base.PostExecute();
    }

    public override void CreateNewOutputRows()
    {
        DataTable di = (DataTable)Variables.DriveInfo;

        foreach (DataRow dr in di.Rows)
        {
            DriveInfoBuffer.AddRow();
            DriveInfoBuffer.Drive = dr["DeviceId"].ToString();
            DriveInfoBuffer.DriveSize = Convert.ToSingle(dr["Size"]);
            DriveInfoBuffer.FreeSpace = Convert.ToSingle(dr["FreeSpace"]);
        }
    }

}

Now we need to add the destination to store the drive information. Grab the OLE DB Destination from the Data Flow Destinations of the Toolbox and drag it onto the Data Flow designer and name it Put DriveInfo. Connect the output of the Get DriveInfo Script Component to Put DriveInfo.

 

In the OLE DB Destination Editor, set the OLE DB connection manager to the DBA_Tools connection created earlier and pick the DriveInfo table from the table or view drop down.

Here is the create table statement for the DriveInfo table used in the earlier parts of the series.

CREATE TABLE dbo.DriveInfo
( Drive VARCHAR(255)
, DriveSize FLOAT
, FreeSpace FLOAT)

Run the package and all the drive information for the computer should be stored in the DriveInfo table.
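As a quick check, a query along these lines (just a sketch against the dbo.DriveInfo table created above, with the sizes in bytes as returned by WMI) shows what was collected along with the percentage of free space on each drive:

SELECT Drive
     , DriveSize
     , FreeSpace
     , CAST(FreeSpace / NULLIF(DriveSize, 0) * 100 AS DECIMAL(5, 2)) AS PercentFree
  FROM dbo.DriveInfo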

That concludes the Drive Information series. This is a great pattern to follow to get any information via WMI and store it in a central database for server reporting. Have fun experimenting.

Troubleshooting Connectivity with UDL files

I had a new SSIS package that was ready to deploy to production. It was pulling data from Oracle, and this was the first package being placed into production that used Oracle as a source. Arrangements had been made to have the Oracle OLE DB drivers installed on the production SQL Server, as the performance was determined to be better than the OLE DB driver Microsoft provided at the time. On the first run of the package in production, the Oracle connection failed validation with a very useless error message: Connection failed validation. Why? The production server did not have any client tools to test connectivity, as the system administrator had merely installed the drivers. As this was a financial institution and the server was locked down, the SQL Server service account was set up with minimal rights.

I sat down with the production DBA to troubleshoot the issue. Fortunately we were able to log in interactively using the SQL Server service account. We created a UDL file to test whether a connection could be made to the production Oracle server. The first thing we checked was the Provider tab, which verified that the Oracle OLE DB driver was indeed installed. We then attempted to create a connection and an error was raised. The error showed that the SQL Server service account did not have rights to the directory where the Oracle driver was installed and could not read the tnsnames.ora file. We contacted the system administrator to grant the directory rights to the SQL Server service account, and the connection was able to be made on the next test. The SSIS package ran successfully as well.

Let’s take a closer look at UDL files.

UDL files are useful to:

  • troubleshoot connectivity issues.
  • prove that a computer can connect to a database server.
  • determine if the OLE DB driver you need is installed on the machine.
  • generate a connection string.

 

A UDL file can be created two different ways. If you have the “Hide extensions for known file types” option turned off in Explorer, the UDL file can be created directly from the Desktop. Right click on the Desktop and select New->Text Document. Rename the new file test.udl. Windows will pop up the warning below. Click Yes.

If you haven’t seen it before, here is the “Hide extensions for known file types” option.

The second way is to open Notepad and save an empty file to the Desktop named test.udl. The file icon should look like this on new versions of Windows.

Here is the older version of the icon that would appear on Windows XP/2003.

Double click the newly created icon and the Data Link Properties dialog will be displayed.

The Connection tab is ready to create a connection to a SQL Server by default. In order to create a connection using a different driver, click on the Provider tab and select the driver.

Now when the Connection tab is selected, notice how the UI has changed to provide the appropriate options for the new driver.

The Advanced tab allows additional connection properties to be set.

The All tab shows all of the properties that can be set for the chosen provider. There are properties on this tab that may not be in the documentation provided by the vendor.

Now that we have seen all this fancy UI magic, the question is, “What is a UDL file?” It is just a text file with a connection string. Windows launches the UI based on the file extension.

Let’s create a UDL using the SQL Server OLE DB driver with Windows authentication, pointing to the AdventureWorks database and with an application name of “Test UDL”. Here are the results.

; Everything after this line is an OLE DB initstring

Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=AdventureWorks;Data Source=.\SQL2005;Application Name=Test UDL

The UDL file has been around for a while, is available on any Windows box and is a quick, lightweight way to troubleshoot your connections.

Getting Drive Info, Part 4, the SSIS Script Component

In this installment of the Getting Drive Info series (Part 1, Part 2, Part 3), SSIS will be used to collect and save the drive information. SSIS provides multiple ways to accomplish this task. This installment will focus on using the Script Component to collect the drive information.

The Script component will be used as the source of a Data Flow where the DriveInfo class of the System.IO namespace will be used to collect the drive information. The DBA_Tools database and DriveInfo table from the earlier installments will also be needed.

Fire up BIDS and create a new project. Name the package something appropriate.

Drag a Data Flow task onto the Control Flow designer and name it DriveInfo.

Create a new Connection Manager that points to the DBA_Tools database on your SQL Server.

Bring the Data Flow designer into focus. Grab the Script Component from the Data Flow Transformations section of the Toolbox and drag it onto the Data Flow designer. The Select Script Component Type dialog will appear. The Script Component can be a source of data, a transformer of data or a consumer of data. Choose Source, since this Script Component is going to provide drive information, and click OK. Rename the Script Component Get DriveInfo.

Double click on the Script Component to open the editor. Now let’s add some output columns. Rename Output 0 to DriveInfo and add the columns:

  • Drive, data type string [DT_STR]
  • DriveSize, data type float [DT_R4]
  • FreeSpace, data type float [DT_R4]

Here are the column property details for the Drive column.

It is important to define the output columns first so they show up in the OutputBuffer object while coding the script. The output buffer will be named DriveInfoBuffer in this case.

Now it’s time to add the code to get the drive information and send it to the output buffer. Here is the code in VB.Net for SSIS 2005.

Imports System
Imports System.Data
Imports System.IO
Imports System.Math
Imports Microsoft.SqlServer.Dts.Pipeline.Wrapper
Imports Microsoft.SqlServer.Dts.Runtime.Wrapper

Public Class ScriptMain
    Inherits UserComponent

    Public Overrides Sub CreateNewOutputRows()

        For Each di As DriveInfo In DriveInfo.GetDrives()
            If di.IsReady And di.DriveType = DriveType.Fixed Then
                Me.DriveInfoBuffer.AddRow()
                Me.DriveInfoBuffer.Drive = di.Name
                Me.DriveInfoBuffer.DriveSize = di.TotalSize
                Me.DriveInfoBuffer.FreeSpace = di.TotalFreeSpace
            End If
        Next

    End Sub

End Class

Here is the code in C# for SSIS 2008.

Make sure to set the ScriptLanguage property to Microsoft Visual C# 2008 in BIDS 2008 before clicking the Edit Script… button and pasting in the code below.

using System;
using System.Data;
using System.IO;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

[Microsoft.SqlServer.Dts.Pipeline.SSISScriptComponentEntryPointAttribute]
public class ScriptMain : UserComponent
{
    public override void PreExecute()
    {
        base.PreExecute();
        /*
          Add your code here for preprocessing or remove if not needed
        */
    }

    public override void PostExecute()
    {
        base.PostExecute();
        /*
          Add your code here for postprocessing or remove if not needed
          You can set read/write variables here, for example:
          Variables.MyIntVar = 100
        */
    }

    public override void CreateNewOutputRows()
    {
        /*
          Add rows by calling the AddRow method on the member variable named "<Output Name>Buffer".
          For example, call MyOutputBuffer.AddRow() if your output was named "MyOutput".
        */
        foreach (DriveInfo di in DriveInfo.GetDrives())
        {
            if (di.IsReady && di.DriveType == DriveType.Fixed)
            {
                DriveInfoBuffer.AddRow();
                DriveInfoBuffer.Drive = di.Name;
                DriveInfoBuffer.DriveSize = di.TotalSize;
                DriveInfoBuffer.FreeSpace = di.TotalFreeSpace;
            }
        }
    }
}

Now we need to add the destination to store the drive information. Grab the OLE DB Destination from the Data Flow Destinations of the Toolbox and drag it onto the Data Flow designer and name it Put DriveInfo. Connect the output of the Get DriveInfo Script Component to Put DriveInfo.

In the OLE DB Destination Editor, set the OLE DB connection manager to the DBA_Tools connection created earlier and pick the DriveInfo table from the table or view drop down.

Here is the create table statement for the DriveInfo table used in the earlier parts of the series.

CREATE TABLE dbo.DriveInfo
( Drive VARCHAR(255)
, DriveSize FLOAT
, FreeSpace FLOAT)

Now you are ready to start collecting and saving the drive information on your SQL Server. I will leave it to you to explore the DriveInfo object to see what other bits of information may be useful to collect and store for the drives. The only limitation of this technique is that the package can only get the drive information for the server that the package is running on. The package would need to be deployed and run on every SQL Server where it is desired to collect the drive information. The last part of this series will address this limitation by implementing the collection of drive information using the WMI Task in SSIS.

Getting Drive Info Part 3, CLR

In the first two parts (Part 1, Part 2) of the Getting Drive Info series, techniques to gather drive info that will work on SQL Server 2000 were presented. Now it is time to look at the options that the newer versions of SQL Server can use. In this article the drive info will be retrieved via the CLR. Andy Novick (Blog) wrote the code to accomplish this in his article, CLR Function to return free space for all drives on a server, on MSSQLTips.com. Security and environment setup were not covered in that article. Rather than reinvent the wheel, the code from the referenced article will be used here, and I will discuss security and the setup I created to run it.

The first attempt was to build and deploy to the master database. This resulted in the following error:

Msg 6522, Level 16, State 2, Line 1
A .NET Framework error occurred during execution of user-defined routine or aggregate “drive_info”:
System.Security.SecurityException: Request for the permission of type ‘System.Security.Permissions.SecurityPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089’ failed.
System.Security.SecurityException:
at System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet)
at System.Security.CodeAccessPermission.Demand()
at System.IO.Directory.GetLogicalDrives()
at System.IO.DriveInfo.GetDrives()
at UserDefinedFunctions.drive_info()

After getting the above error and not really understanding the ramifications of an unsafe assembly, I decided it was time to do some reading. I found that the book A Developer's Guide to SQL Server 2005 had an excellent discussion of the CLR and the different assembly security types. Now I was ready to create an environment for this function.

Since this was a function for DBAs to use for system monitoring, I created a new database named DBA_Tools and set the owner to sa. In order to let an Unsafe assembly execute, the TRUSTWORTHY option needs to be enabled. Turn on Trustworthy with this command.

ALTER DATABASE DBA_Tools SET TRUSTWORTHY ON

Make sure that the CLR server option is enabled, otherwise you can’t execute the CLR function. This can be accomplished with this script.

EXECUTE sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE

EXECUTE sp_configure 'clr enabled', 1
RECONFIGURE WITH OVERRIDE

One thing that I did notice is that having the CLR server option enabled or disabled had no impact on deploying the function. This is a great feature if you typically don’t want the CLR enabled all of the time. The CLR can be enabled by the script that runs the drive_info() function and then be disabled immediately after, as sketched below.
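A sketch of that enable-run-disable pattern, assuming the drive_info() function has been deployed to the DBA_Tools database, might look like this:

EXECUTE sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE

EXECUTE sp_configure 'clr enabled', 1
RECONFIGURE WITH OVERRIDE

-- ... run the drive_info() collection script shown below ...

EXECUTE sp_configure 'clr enabled', 0
RECONFIGURE WITH OVERRIDE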

Moving on to the Visual Studio setup, make sure the connection in the CLR database project points to the DBA_Tools database. Set the Assembly name property on the Application tab of the project property page to something more appropriate.

Set the Assembly Name

The CLR function performed as advertised after setting everything up correctly. The function was built in both Visual Studio 2005 and 2008 and deployed to the corresponding versions of SQL Server. The setup worked on both SQL Server 2005 and 2008. Here is a sample output.

Results of the drive_info function

Utilizing the table, DriveInfo, that was used in the previous installments, the script to populate it would be:

INSERT INTO DriveInfo
SELECT letter   AS Drive
     , total_mb AS DriveSize
     , free_mb  AS FreeSpace 
  FROM drive_info()
 WHERE type = 'Fixed'

Although this solution works well enough, there is some risk with this approach. Based on how an Unsafe assembly is handled within the SQLOS, there is a risk of memory leaks when an unhandled exception occurs. If a solution can be created that uses alternative SQL Server features rather than an Unsafe CLR assembly, I would choose the alternative. Speaking of alternatives, that's where the next installments come in. They will cover using SSIS to get the drive info via the Script Component and WMI.

sp_help_partition

If you are using partitioned tables, this may come in handy. I’ll keep it short and just post the code. Let me know if there are errors or you have improvements.

IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[sp_help_partition]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[sp_help_partition]
GO

CREATE PROCEDURE dbo.sp_help_partition @OBJECT_NAME sysname = NULL
AS
-------------------------------------------------------------------------------
-- 
-- Proc: sp_help_partition
-- Author: Norman Kelm, Gerasus Software, (C) 2010
--
-- Like sp_help, but for partitioned table partition info.
--
-- Usage:
-- Return a list of tables that are partitioned
-- EXECUTE sp_help_partition
--
-- Return the partitioning information for a table
-- EXECUTE sp_help_partition 'Production.TransactionHistory'
--
-- Revision History:
-- 08/06/2010 - 1.0 - Original version
--
-------------------------------------------------------------------------------
DECLARE @schema sysname
      , @TABLE_NAME sysname
      
SELECT @schema = CASE WHEN PARSENAME(@OBJECT_NAME,2) IS NULL THEN 'dbo' ELSE PARSENAME(@OBJECT_NAME,2) END
     , @TABLE_NAME = PARSENAME(@OBJECT_NAME,1)

IF @OBJECT_NAME IS NULL
BEGIN
  -- List all partitioned tables when no parameters are supplied
  SELECT DISTINCT SCHEMA_NAME(OBJECTPROPERTY(object_id, 'SchemaId')) + '.' + OBJECT_NAME(si.object_id) AS [Name]
    FROM sys.partition_schemes AS ps
   INNER JOIN
         sys.indexes AS si
      ON ps.data_space_id = si.data_space_id
END
ELSE
BEGIN
  IF EXISTS(SELECT 1 
              FROM sys.partition_schemes AS ps
             INNER JOIN
                   sys.indexes AS si
                ON ps.data_space_id = si.data_space_id
               AND OBJECT_ID(@OBJECT_NAME) = si.[object_id])
  BEGIN
  
    -- Borrowed from sp_help
    SELECT [Name]  = o.name
         , [Owner] = USER_NAME(OBJECTPROPERTY(object_id, 'ownerid'))
         , [Schema] = SCHEMA_NAME(OBJECTPROPERTY(object_id, 'SchemaId'))
         , [Type]  = substring(v.name,5,31)
         , [Created_datetime] = o.create_date  
     FROM sys.all_objects o
        , master.dbo.spt_values v  
    WHERE o.object_id = OBJECT_ID(@OBJECT_NAME)
      AND o.type = substring(v.name,1,2) collate database_default
      AND v.type = 'O9T'  
 
   SELECT COUNT(*) AS NumberOfPartitions
     FROM sys.partitions  p
    INNER JOIN
          sys.indexes i
       ON p.object_id = i.object_id
      AND p.index_id = i.index_id
    WHERE OBJECT_ID(@OBJECT_NAME) = p.[object_id]  
      AND i.type in (0,1) --0 = heap, 1 = clustered, skip the nonclustered for the count
       
    SELECT [Scheme] = ps.name 
      FROM sys.partition_schemes AS ps
     INNER JOIN
           sys.indexes AS si
        ON ps.data_space_id = si.data_space_id
       AND OBJECT_ID(@OBJECT_NAME) = si.[object_id]  
       AND si.type in (0,1) --0 = heap, 1 = clustered, skip the nonclustered for the count
      
    SELECT [Function] = pf.name
         , [Type]     = pf.type_desc
         , [fanout]   = pf.fanout 
         , [boundary_value_on_right] = pf.boundary_value_on_right
         , [create_date]             = pf.create_date
         , [modify_date]             = pf.modify_date
      FROM sys.partition_functions AS pf
     INNER JOIN
           sys.partition_schemes AS ps
        ON ps.function_id = pf.function_id
     INNER JOIN
           sys.indexes AS si
        ON ps.data_space_id = si.data_space_id
       AND OBJECT_ID(@OBJECT_NAME) = si.[object_id]  
       AND si.type in (0,1) --0 = heap, 1 = clustered, skip the nonclustered for the count

    SELECT [Function Parameters] = pf.name
         , [parameter_id] = pp.parameter_id
         , [Type] = st.name
         , pp.max_length
         , pp.precision
         , pp.scale
         , pp.collation_name
      FROM sys.partition_parameters AS pp
     INNER JOIN
           sys.partition_functions AS pf
        ON pf.function_id = pp.function_id
     INNER JOIN
           sys.partition_schemes AS ps
        ON ps.function_id = pf.function_id
     INNER JOIN
           sys.indexes AS si
        ON ps.data_space_id = si.data_space_id
     INNER JOIN
           sys.types AS st
        ON pp.system_type_id = st.system_type_id
       AND OBJECT_ID(@OBJECT_NAME) = si.[object_id]  
       AND si.type in (0,1) --0 = heap, 1 = clustered, skip the nonclustered for the count

    SELECT [Function Range Values] = pf.name
         , prv.boundary_id
         , prv.value         
      FROM sys.partition_range_values AS prv
     INNER JOIN
           sys.partition_parameters AS pp
        ON prv.function_id = pp.function_id
       AND prv.parameter_id = pp.parameter_id
     INNER JOIN
           sys.partition_functions AS pf
        ON pf.function_id = pp.function_id
     INNER JOIN
           sys.partition_schemes AS ps
        ON ps.function_id = pf.function_id
     INNER JOIN
           sys.indexes AS si
        ON ps.data_space_id = si.data_space_id
     INNER JOIN
           sys.types AS st
        ON pp.system_type_id = st.system_type_id
       AND OBJECT_ID(@OBJECT_NAME) = si.[object_id]  
       AND si.type in (0,1) --0 = heap, 1 = clustered, skip the nonclustered for the count
     ORDER BY
           prv.boundary_id
           
     SELECT tc.CONSTRAINT_NAME
          , cc.CHECK_CLAUSE
       FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS tc
       JOIN INFORMATION_SCHEMA.CHECK_CONSTRAINTS AS cc
         ON tc.CONSTRAINT_NAME = cc.CONSTRAINT_NAME
        AND tc.CONSTRAINT_SCHEMA = cc.CONSTRAINT_SCHEMA
      WHERE tc.TABLE_SCHEMA = @schema
        AND tc.TABLE_NAME   = @TABLE_NAME
        AND tc.CONSTRAINT_TYPE = 'CHECK'
   END
   ELSE
   BEGIN
     DECLARE @error NVARCHAR(255)
     SELECT @error = @OBJECT_NAME + ' is not partitioned!!!'
     RAISERROR (@error, 16, 1)
   END
END
GO


Getting Drive Info Part 1, sp_OA

This blog series will cover several different methods to collect drive information from a Windows server that is running SQL Server. I’ve answered several requests in forums, newsgroups and Twitter in the past about how to get the drive size of all of the hard drives on the server that houses a SQL Server instance. These people found the undocumented system stored procedure xp_fixeddrives, which returns a result set of the drives on the server along with their free space, and they were looking for a way to get the drive size as well. This would be a useful addition to xp_fixeddrives for building a disk growth monitoring system. This blog series will show methods that are specific to SQL Server 2000 as well as, in future posts, methods that utilize the newer features available in versions 2005 and 2008. The techniques presented can be modified to get just about any information about the Windows server that houses the SQL Server instance.
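For reference, xp_fixeddrives takes no parameters and returns one row per fixed drive with the drive letter and the free space in megabytes:

EXECUTE xp_fixeddrives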

Retrieving Drive Size Using sp_OA

The first method discussed is old, to say the least, as I have been using it since SQL Server 7.0, and there are still plenty of SQL Server 2000 instances running that can make use of it. It utilizes the sp_OA suite of stored procedures. I have used this method for years to gather data for a daily automated SQL Server status report on SQL Server 7.0, 2000 and 2005. Although there are reports that the sp_OA stored procedures can have memory problems and are a security risk, I have yet to have an issue. They are also set to be deprecated at some point. These points need to be considered before using this solution. The sp_OA suite of stored procedures allows the use of COM objects from within Transact-SQL. Yes, in order to get the drive size you need to do a little bit of object oriented programming. Much like xp_cmdshell, this suite of stored procedures has security risks. Microsoft recognized this and disabled them by default in SQL Server 2005 and 2008. They need to be enabled by running the following script (this is not required on SQL Server 2000):

-- For SQL Server 2005 and 2008 only
EXECUTE sp_configure 'show advanced options', 1
RECONFIGURE WITH OVERRIDE

EXECUTE sp_configure 'Ole Automation Procedures', 1
RECONFIGURE WITH OVERRIDE

If you are really worried about the security aspects of having these stored procedures enabled continuously, the above script could be made part of the entire drive info script. This would enable the stored procedures for only as long as necessary to get the drive information. The Ole Automation Procedures configuration option could then be disabled at the end of the drive size script, as sketched below.
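A sketch of that wrap-up at the end of the drive size script simply reverses the settings (again, only needed on SQL Server 2005 and 2008):

EXECUTE sp_configure 'Ole Automation Procedures', 0
RECONFIGURE WITH OVERRIDE

EXECUTE sp_configure 'show advanced options', 0
RECONFIGURE WITH OVERRIDE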

In order to get the drive size a new instance of the Windows Scripting Host File System Object has to be created. The GetDrive method has to be called and then the property TotalSize is retrieved. Here is the script for the C: drive.

DECLARE @rs INTEGER
      , @fso INTEGER
      , @drv INTEGER
      , @drivesize VARCHAR(20)

-- Create a new Windows Scripting Host File System Object
EXEC @rs = sp_OACreate 'Scripting.FileSystemObject', @fso OUTPUT

-- Get the drive information for the C: drive
IF @rs = 0
  EXEC @rs = sp_OAMethod @fso, 'GetDrive("C:\")', @drv OUTPUT

-- Get the size of this drive
IF @rs = 0
  EXEC @rs = sp_OAGetProperty @drv,'TotalSize', @drivesize OUTPUT
IF @rs<> 0
  SET @drivesize = '0'

-- Clean up
EXEC sp_OADestroy @drv
EXEC sp_OADestroy @fso

SELECT CAST( @drivesize AS FLOAT ) / 1024.0 / 1024.0

The next step in the process is to include xp_fixeddrives to get a list of all the drives on a server and iterate through the result set of drives to get the drive size. A table is populated with all the fixed drive info. This data is iterated and the drive size for each drive is collected from the file system object. Here is the script.

DECLARE @rs INTEGER
      , @fso INTEGER
      , @drv INTEGER
      , @drivesize VARCHAR(20)
      , @drive VARCHAR(255)
      , @GetDrive VARCHAR(255)

DECLARE @DriveInfo TABLE(Drive VARCHAR(255), DriveSize FLOAT, FreeSpace FLOAT)

-- Get all the drives and free space in MB
INSERT INTO @DriveInfo(Drive, FreeSpace) EXECUTE xp_fixeddrives
-- Convert Free Space to GB if applicable to the environment
UPDATE @DriveInfo SET FreeSpace = FreeSpace / 1024.

SELECT TOP 1 @drive = Drive FROM @DriveInfo WHERE DriveSize IS NULL

WHILE @drive IS NOT NULL
BEGIN

  -- Create a new Windows Scripting Host File System Object
  EXEC @rs = sp_OACreate 'Scripting.FileSystemObject', @fso OUTPUT

  -- Get the drive information for the drive
  SELECT @GetDrive = 'GetDrive("'+ @drive + ':\")'
  IF @rs = 0
    EXEC @rs = sp_OAMethod @fso, @GetDrive, @drv OUTPUT

  -- Get the size of this drive
  IF @rs = 0
    EXEC @rs = sp_OAGetProperty @drv,'TotalSize', @drivesize OUTPUT
  IF @rs<> 0
    SET @drivesize = '0'

  UPDATE @DriveInfo SET DriveSize = CAST(@drivesize AS FLOAT) / 1024.0 / 1024.0 WHERE Drive = @drive
  -- Convert Drive Size to GB if applicable to the environment
  UPDATE @DriveInfo SET DriveSize = DriveSize / 1024. WHERE Drive = @drive

  SELECT @drive = NULL
  SELECT TOP 1 @drive = Drive FROM @DriveInfo WHERE DriveSize IS NULL

END

SELECT * FROM @DriveInfo

-- Clean up
EXEC sp_OADestroy @drv
EXEC sp_OADestroy @fso

The data in the @DriveInfo table can be stored permanently in a persisted table for size monitoring and historical reporting.
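As a sketch, assuming a permanent dbo.DriveInfo table like the one used in the other parts of this series, the last step of the script could simply copy the rows over (this has to run in the same batch that declares @DriveInfo):

INSERT INTO dbo.DriveInfo (Drive, DriveSize, FreeSpace)
SELECT Drive, DriveSize, FreeSpace
  FROM @DriveInfo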

This concludes the first part of the Drive Info series. The sp_OA stored procedures can be used to gather all kinds of information. There are many more properties to look at in the Drive object; go here <add link to Drive docs> for more info. Other COM objects could be instantiated via sp_OA and used to gather server information.

In the next installment of the drive info series, I will show another method supported by SQL Server 2000 and collect the same information using DTS.

Living with DTS on SQL Server 2005 and 2008

The DTS support in SQL Server 2005 and 2008 is excellent (almost). Typically the packages and jobs do not need any modification in order to run on SQL Server 2005 or 2008. Unfortunately, I have read far too many newsgroup posts that indicate a complete conversion to SSIS MUST take place during a SQL Server 2000 upgrade. It is not mandatory. I am not saying to avoid SSIS. The existing DTS packages can be reviewed for candidates to convert to SSIS during the upgrade. The old DTS packages could also be converted as time permits after the upgrade.

DTS Packages can be run and edited in SSMS as long as the Backwards Compatibility and DTS Designer components from the SQL Server 2005 Feature Pack have been installed. They are available for SQL Server 2005 SP2 here, SQL Server 2005 SP3 here and SQL Server 2008 SP1 here.

The two features that are missing from SSMS are the ability to create a new package from scratch and the detailed list of DTS packages, both of which are available in Enterprise Manager. The lack of these two features in SSMS requires a copy of Enterprise Manager to be available and the continued use of Windows XP, since SQL Server 2000 is not compatible with Vista and later versions of Windows.

In order to address these missing features, I have created a new utility and two custom reports for SSMS.

The utility is Create DTS Package. It will create an empty DTS package in a structured storage file or in SQL Server.

Create DTS Package

Create DTS Package creates a blank DTS package and saves it as a structured storage file or in a SQL Server.

Enter the name of the new package in the Package name text box.

If the package storage destination is a structured storage file, enter the full path in the File Name text box and click on the Save To File button.

If the package storage destination is a SQL Server, choose the authentication, enter the server name in the Server text box, provide the credentials and click on the Save To Sql Server button.

Create DTS Package is an HTA application.

Create DTS Package has eliminated the need for the Enterprise Manager installation in a SQL Server 2005/2008 environment that still needs to support DTS packages.

Create DTS Package will run on both 32(x86) and 64(x64) bit installations.

Create DTS Package could also be modified to add a standard group of objects for your organization to create a package template.

The source code for Create DTS Package is available for download here (right click the link and click Save Target As… / Save Link As…) or here for a listing of the source code (trying to save an HTA is causing security violations for some people).

The Package Summary custom report recreates the Local Packages view from Enterprise Manager in SSMS.

In order to support mixed DTS/SSIS environments, the report shows both types of packages.

Since this custom report does not require any object inputs, it can be run from anywhere in the SSMS Object Explorer.

The SQL Server 2005 version is available here.

The SQL Server 2008 version is available here.

Here is a sample output of the report.

Package Summary

Update: If you are running 32-bit Windows 7, there is quite a bit more to do to get the environment set up correctly. See Jason Brimhall’s excellent post SQL 2008 DTS

Update: In order to get DTS working on 32-bit Windows XP with SQL Server 2008 R2, the above steps will need to be followed as well.

Anti-Virus Adventures

Denny Cherry wrote a blog post recently discussing whether to run anti-virus software on SQL Server machines. He said you should, and I agree. It is just too risky not to run anti-virus with all of the notebook computers coming in and out of the office. The post did remind me of something that happened to a SQL Server cluster a few years back; it was a case of “if it can happen, it will happen.” The anti-virus vendor pushed out a bad virus signature file that flagged sqlservr.exe as a virus. They realized the mistake and pushed out a corrected file after the bad one had been out for only a short period. Of course our cluster had pulled the bad file. Of course sqlservr.exe was found on the passive node and quarantined. Of course the cluster failed over, or tried to. We were a bit perplexed why sqlservr.exe had disappeared, but figured out where it went and got things running without too much downtime.

SQL This, Not That – Episode 2

When creating a table with a column that will hold the name of a stored procedure that feeds an EXEC command, make sure the data type of this column is sysname. The data type of this column should not be VARCHAR(50) or some other arbitrary data type that seems like it should do the job. The same is true for columns that hold table and column names. Any column in a user table that holds metadata from a SQL Server system table should match the data type of the source system table column.
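Here is a minimal sketch of the idea, using a hypothetical scheduling table that feeds an EXEC call (sysname is equivalent to NVARCHAR(128), the type SQL Server uses for object names):

-- ProcedureName uses sysname, not an arbitrary VARCHAR(50)
CREATE TABLE dbo.ProcedureSchedule
( ProcedureName sysname
, RunOrder      INT )
GO

-- Sketch: assumes the table has been populated with existing procedure names
DECLARE @proc sysname
SELECT @proc = ProcedureName FROM dbo.ProcedureSchedule WHERE RunOrder = 1
EXECUTE @proc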

SQL This, Not That – Episode 1

I’ve been working on performance tuning a stored procedure that has many optional parameters and is made dynamic by using COALESCE in the WHERE clause, which always confuses the optimizer (along with other SARG-hiding constructs that cause indexes to be ignored).

The stored procedure is called with a date range defined in the parameters @StartDate and @EndDate which are optional.

<code snippet>
, @StartDate DATETIME = NULL
, @EndDate DATETIME = NULL
</code snippet>

The parameters are used in the WHERE clause with a COALESCE.

<code snippet>
WHERE StartDate >= COALESCE(@StartDate, StartDate)
AND EndDate <= COALESCE(@EndDate, EndDate)
</code snippet>

Using this technique causes the query optimizer to ignore the indexes that have been created on the StartDate and EndDate columns.

One method to fix this problem is to initialize @StartDate and @EndDate with minimum and maximum dates that make sense for the data being queried, as the parameter defaults.

<code snippet>
, @StartDate DATETIME = '01/01/2009'
, @EndDate DATETIME = '01/01/2010'
</code snippet>

Another way is to initialize the parameters in the procedure body for the case where the default dates need to be calculated.

<code snippet>
-- If the date parameters are not supplied,
-- make them 1 month in the past and today
-- to make the query optimizer happy
IF @StartDate IS NULL
SET @StartDate = DATEADD(MONTH, -1, GETDATE())

IF @EndDate IS NULL
SET @EndDate = GETDATE()
</code snippet>

The WHERE clause for either of these cases would change.

<code snippet>
WHERE StartDate >= @StartDate
AND EndDate <= @EndDate
</code snippet>
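Putting the pieces together, here is a sketch of the whole pattern; the procedure, table and column names are hypothetical, only the date handling comes from the snippets above.

<code snippet>
CREATE PROCEDURE dbo.GetBookings
  @StartDate DATETIME = NULL
, @EndDate   DATETIME = NULL
AS
-- Default the open-ended parameters to a range that makes sense for the data
IF @StartDate IS NULL
  SET @StartDate = DATEADD(MONTH, -1, GETDATE())
IF @EndDate IS NULL
  SET @EndDate = GETDATE()

-- SARGable predicates that let the optimizer use the indexes on StartDate and EndDate
SELECT BookingId, StartDate, EndDate
  FROM dbo.Booking
 WHERE StartDate >= @StartDate
   AND EndDate <= @EndDate
GO
</code snippet>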

Now the query optimizer has good SARGs and will do a much better job utilizing those indexes that are using all that space in the database.

Using this technique resulted in a 400% performance improvement for the stored procedure that was being tuned.