Wednesday, February 22, 2012

padding dateparts with zeros

So I take in a date '2012-02-01 12:30:00.000' and I need to turn it into a string, ohhh excuse me, varchar.

'20120201'

What to do? Padding with zeros.



declare @date datetime
set @date = getdate()
SELECT convert(varchar(4), DATEPART(yyyy, @date))
+ right(N'0' + convert(varchar(2), month(@date)), 2)
+ right(N'0' + datename(d, @date), 2)


declare @now datetime
set @now = getDate()
select datename(yyyy, @now) as year
, right(N'0' + convert(nvarchar(2), month(@now)), 2) as month
, right(N'0' + datename(d, @now), 2) as day
, right(N'0' + datename(hh, @now), 2) as hour
, right(N'0' + datename(n, @now), 2) as minute
, right(N'0' + datename(s, @now), 2) as second
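
If all you need is the yyyymmdd string, convert style 112 will also do the zero padding for you in a single call:

declare @date datetime
set @date = getdate()
select convert(varchar(8), @date, 112) -- yyyymmdd, e.g. '20120201'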

Thanks to the good folks at DZone.

http://snippets.dzone.com/posts/show/6379


Friday, February 10, 2012

Replacing the tab "\t" character in text files.

I had to replace "tab" characters in a text file today and I wasn't able to find an answer on the net very quickly. However, I was able to get lucky based on my experience with regular expressions and TextPad.

I am using TextPad version 4.4.1.

My data looked like this (tab-separated):

10.10.10.1 10.10.10.154

I needed it to look like this:

10.10.10.1 - 10.10.10.154

This was for input into a SQL discovery tool, but that is a different blog post.

Anyway, I loaded the original data into TextPad, went to the Replace feature, and input the regular expression seen below:

Find what: \t
Replace with: " - " (a space, a hyphen, and a space)
...with the "Regular expression" option checked.

and WHAM! I got my results.

It really wasn't much of an issue but was a pain to find. Good luck, maybe this can save someone time.

Wednesday, February 01, 2012

OPENDATASOURCE join syntax

As defined by BOL, OPENDATASOURCE provides ad hoc connection information as part of a four-part object name without using a linked server name.

By the way, I am using SQL Server 2008 R2 and BOL from SQL Server 2008 R2.

Great! This means you can set up an "on the fly" query to a remote server and database using ODBC, OLE DB, or other connections without taking the time to set up linked servers, etc.

Here is the example from BOL

SELECT *
FROM OPENDATASOURCE('SQLNCLI',
'Data Source=London\Payroll;Integrated Security=SSPI')
.AdventureWorks2008R2.HumanResources.Employee

I was connecting to earlier versions of SQL (both 2000 and 2005) for my distributed queries and used the SQLOLEDB provider instead.

So, here was my first OPENDATASOURCE query for this project.

select col1,col2,col3
from OPENDATASOURCE('SQLOLEDB','Data Source=server;User ID=user;Password=password')
.[dbname].schema.tablename
where condition1 = value1

This worked great. However, my query was not complete. I needed to join to a different table. So I tried a few iterations of syntax for joining OPENDATASOURCE and didn't get it right. Google to the rescue! But I didn't find it there either.

After more trial and error I got it, and since it was a pain in the neck to find, I decided to post it here.

OPENDATASOURCE join syntax:

select
a.col1,a.col2,a.col3,
b.col1,b.col2,b.col3
from OPENDATASOURCE('SQLOLEDB','Data Source=server;User ID=user;Password=password')
.[dbname].schema.tablename1 a
join OPENDATASOURCE('SQLOLEDB','Data Source=server;User ID=user;Password=password')
.[dbname].schema.tablename2 b
on a.keycol = b.keycol -- the join predicate goes here, not in the where clause
where a.col1 = value1
and b.col2 = value2

This can go on forever too.
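
The four-part OPENDATASOURCE name can sit anywhere a regular table name can, so you can also mix in local tables. A quick sketch (table and column names here are placeholders):

select a.col1, l.local_col
from OPENDATASOURCE('SQLOLEDB','Data Source=server;User ID=user;Password=password')
.[dbname].schema.tablename1 a
join dbo.localtable l
on l.keycol = a.keycol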

Happy coding!


Thursday, July 01, 2010

Accessing SQL Server with Powershell

The more I use Powershell the more I realize how useful it can be in certain scenarios. Here I will demonstrate how to access SQL Server with a Powershell script. This can be used for almost any purpose where integration with SQL Server is required.

First you'll need to create a simple SQL table named "test" with 2 columns, "column1" and "column2", both of type varchar(10). For simplicity I suggest a value of "1" in column1 and a value of "2" in column2.

create table test
(
column1 varchar(10),
column2 varchar(10)
)

Populate the table with 1 row.

insert into test
(column1,column2)
values
('1','2')

Let's start out with the whole script and break it down.



$ScriptName = $myInvocation.MyCommand.Name
$Server = "yoursqlservername"

#variable that will be a parameter
#whatever value you populated into column1 in test table.

$column1 = 1

function New-SQLconnection {
Param ([string]$server,
[string]$database = "yourdatabasename",
[string]$connectionName = $ScriptName)
if (test-path variable:\conn) {
$conn.close()
} else {
$conn = new-object ('System.Data.SqlClient.SqlConnection')
}
$connString = "Server=$server;Integrated Security=SSPI;Database=$database;Application Name=$connectionName"
$conn.ConnectionString = $connString
$conn.StatisticsEnabled = $true
$conn.Open()
$conn
}

function Query-SQL {
Param ($query, $conn, [int]$CommandTimeout = 30)
$sqlCmd = New-Object System.Data.SqlClient.SqlCommand
$sqlCmd.CommandTimeout = $CommandTimeout
$sqlCmd.CommandText = $query
$sqlCmd.Connection = $conn

#add parameters here
$sqlCmd.Parameters.AddWithValue("@column1",$column1) | Out-Null

$Rset = $sqlCmd.ExecuteReader()
if ($? -eq $false) {
Close-SQLquery $rset
[int]$rset = -1
}
if ($Rset.hasrows -eq $false) {
Close-SQLquery $rset
[string]$rset = ""
}
,$Rset ## The comma is used to create an outer array, which PS strips off automatically when returning the $Rset
}

function Close-SQLquery {
Param ($query)
$query.close()
$query = $null
}

function Remove-SQLconnection {
Param ($connection)
$connection.close()
$connection = $null
}

$myConn = New-SQLconnection $Server "yourdbnamehere"
if ($myConn.state -eq "closed") {"Failed to establish a connection";EXIT}
$query = "SELECT column2 from test where column1 = @column1"
$data = Query-SQL $query $myConn
while($data.Read())
{
Write-Host $data.GetValue(0)
}

Close-SQLquery $data

Remove-SQLconnection $myConn


Let's begin.

$ScriptName = $myInvocation.MyCommand.Name
$Server = "yoursqlservername"

#variable that will be a parameter
#whatever value you populated into column1 in test table.

$column1 = 1


Here I am declaring variables and populating them with values that are pretty straightforward. $column1 is going to be hardcoded for the purposes of this example. This is why the generic create table and insert statements above are recommended.

function New-SQLconnection {
Param ([string]$server,
[string]$database = "yourdatabasename",
[string]$connectionName = $ScriptName)
if (test-path variable:\conn) {
$conn.close()
} else {
$conn = new-object ('System.Data.SqlClient.SqlConnection')
}
$connString = "Server=$server;Integrated Security=SSPI;Database=$database;Application Name=$connectionName"
$conn.ConnectionString = $connString
$conn.StatisticsEnabled = $true
$conn.Open()
$conn
}


Here I create a function that allows me to create a SQL connection and open that connection.

function Query-SQL {
Param ($query, $conn, [int]$CommandTimeout = 30)
$sqlCmd = New-Object System.Data.SqlClient.SqlCommand
$sqlCmd.CommandTimeout = $CommandTimeout
$sqlCmd.CommandText = $query
$sqlCmd.Connection = $conn

#add parameters here
$sqlCmd.Parameters.AddWithValue("@column1",$column1) | Out-Null

$Rset = $sqlCmd.ExecuteReader()
if ($? -eq $false) {
Close-SQLquery $rset
[int]$rset = -1
}
if ($Rset.hasrows -eq $false) {
Close-SQLquery $rset
[string]$rset = ""
}
,$Rset ## The comma is used to create an outer array, which PS strips off automatically when returning the $Rset
}


Here I am creating a function that assigns the necessary values and executes an ExecuteReader command. Note the use of parameter creation and mapping. This can be tailored for ExecuteScalar and others. This also drops the result set into a variable ($Rset).
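
For example, an ExecuteScalar flavor of the same function (a sketch; the Scalar-SQL name and shape are my own, not part of the original script) could look like this:

function Scalar-SQL {
Param ($query, $conn, [int]$CommandTimeout = 30)
$sqlCmd = New-Object System.Data.SqlClient.SqlCommand
$sqlCmd.CommandTimeout = $CommandTimeout
$sqlCmd.CommandText = $query
$sqlCmd.Connection = $conn
#ExecuteScalar returns the first column of the first row and discards the rest
$sqlCmd.ExecuteScalar()
}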

function Close-SQLquery {
Param ($query)
$query.close()
$query = $null
}

function Remove-SQLconnection {
Param ($connection)
$connection.close()
$connection = $null
}


The close and remove functions are pretty self-explanatory.

$myConn = New-SQLconnection $Server "yourdbnamehere"
if ($myConn.state -eq "closed") {"Failed to establish a connection";EXIT}
$query = "SELECT column2 from test where column1 = @column1"
$data = Query-SQL $query $myConn
while($data.Read())
{
Write-Host $data.GetValue(0)
}

Close-SQLquery $data

Remove-SQLconnection $myConn


Here I put it all in motion. I call the functions, pass in the variables, and drop the results into a data reader variable ($data). To complete the test I write the result to the console. Finally, I close and remove.

Reading data from SQL Server with a Powershell script isn't so bad. Extending this for things such as data insertions and lookups for applications will be next in the series of Powershell scripts. Until then, happy coding!


Adding items to a SharePoint document library or list with Powershell

We have created a document library system to house digital copies of important documents for our client groups. Prior to the creation of the document library, the client groups would store documents in nested file system folders. Over time this repository grew to a few hundred thousand documents. With hundreds of thousands of documents and thousands of folders, it's easy to see that this was out of hand. There were many reports of folders accidentally dropped in other folders, and so on. Searching for individual documents was time- and resource-consuming. Right off the bat we were able to add documents to a newly created document library in SharePoint 2007 with no issues. But what about the existing documents on the file shares? How would they get into SharePoint? A person (or team of people) would require considerable resources and time to manually input a few hundred thousand documents into SharePoint. Enter Powershell.

For this solution I am importing pdf, doc, and even some gif and jpg files from a file share into a SharePoint 2007 document library. There are about 12 custom columns in the library also. I am using Powershell 1.0.

Overview
In the following script I will be grabbing the properties of a collection of documents, massaging some of the detailed data, storing them in local variables, and finally gaining access to the SharePoint library and inserting the documents into it.

Let's look at the script, then rip it apart step by step.


$gci = gci "\\server\share" -recurse -force | ? {$_.lastwritetime -ge "01.01.1900"-and $_.attributes -ne "directory" -and $_.extension -ne ".url"}
foreach ($file in $gci)
{
#begin

$acct = $file.fullname.replace("-","").substring(($file.directoryname.lastindexofany("\") ),9)
$date = $file.creationtime

$repl_filename = $file.name
$repl2 = $repl_filename.replace("&","and")

# there were more here

$repl12 = $repl2.replace("..",".")



$repl_filefullname = $file.directoryname + "\" + $repl12

$propbag=@{"ContentType"="Document"
"Name" = $repl12
"Property1" = $acct.replace("\","")
"Property2" = "Reviewed"
"Property3" = $date
"Property4" = "Legacy Import"}


$docliburl="http://full url to doc library"
$relweburl="/folder containing doc library"
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null
$site=new-object Microsoft.SharePoint.SPSite($docliburl)
$web=$site.openweb($relweburl)
$folder=$web.getfolder($docliburl)
$list=$web.lists[$folder.ContainingDocumentLibrary]

#process


$stream=[IO.File]::OpenRead($file.fullname)
$folder.files.Add($repl12,$stream,$propbag, $true) > $null
$stream.close()
}


WOW! You might be thinking that is ugly! Remember where beauty lies?

A word about variables in Powershell.
Powershell understands objects. Powershell works with objects. A variable is an object: maybe it is an object of type string or int32, maybe it is a custom object or an array. All variables in Powershell are prefixed with a $ symbol.

A word about shorthand in Powershell.
Powershell has lots of shorthand. The more I use Powershell, the more I learn about available shorthand. I will try to explain as many as I can here.

Let's get started!

$gci = gci "\\server\share" -recurse -force | ? {$_.lastwritetime -ge "01.01.1900"-and $_.attributes -ne "directory" -and $_.extension -ne ".url"}

$gci

Here I am declaring a variable named $gci; its type is whatever the expression that follows returns.

gci "\\server\share" -recurse -force | ? {$_.lastwritetime -ge "01.01.1900"-and $_.attributes -ne "directory" -and $_.extension -ne ".url"}

gci is shorthand for get-childitem. Here I am using get-childitem to retrieve files from a shared directory. In this example it is "\\server\share".

-recurse tells get-childitem to repeat the operation for each subdirectory; here this allows me to search all directories and subdirectories.

-force includes hidden and system items that would otherwise be skipped, so nothing gets missed.

| (pipe) in Powershell the | character passes the output of one command to the next. It allows me to chain different sections of a command together.

? this begins the filter or where clause. It is shorthand for where-object; the keyword "where" could also be used.

{ begins the script block.

$_ In Powershell there is a concept of a pipeline object: the current object being passed down the pipeline. Instead of using the object or variable name, you can use the $_ convention.

$_.lastwritetime Here I am using the current object property "lastwritetime".

A word about properties and methods.
Powershell uses and understands objects. As with traditional "OO" design, objects are made of 2 things: properties and methods. A true object has "details" and does "stuff". Powershell objects follow this principle. How can you see what methods and properties an object has? I'm glad you asked.

get-member.
Get-member is a necessary command for discovering more about any PowerShell object. Because you are at the command line, you cannot right-click an object and check its properties and methods. Instead, you can pipe any object to get-member. In this case:

gci | get-member

That's too much information for what we are working with right now. Luckily PowerShell is a self-describing language, so we can ask it a more specific question:

gci | get-member -Membertype property

This will only return properties and my ability to read or write them.

gci | get-member -Membertype method

This will return the methods that are accessible on the object queried with get-member.

Remember, Powershell understands objects. Different properties and methods will be returned for whatever type of object you query with get-member.

Back to $_.lastwritetime. Here I am using the lastwritetime property of the current pipeline object (a file object returned by get-childitem) in my filter or where clause.

-ge is shorthand for greater than or equal to. More examples of usable shorthand include -eq, meaning equal, -ne, meaning not equal, etc...

-and is the operator allowing me to add an additional condition to my filter or where clause.

$_.attributes -ne "directory" -and $_.extension -ne ".url"} With the rest here I am filtering out directories themselves and files with a .url extension (shortcuts).

foreach ($file in $gci) This uses a pretty standard construct found in almost any programming language: the foreach loop. Here I am declaring a new variable ($file) and dropping it into my foreach loop.

$acct = $file.fullname.replace("-","").substring(($file.directoryname.lastindexofany("\") ),9) Here I am creating a new variable and filling it with a string that has been massaged: I am replacing characters and performing string manipulation on the directoryname property of the $file object.

$date = $file.creationtime Here I am declaring a new variable and populating it with the creationtime property of the $file object.

$repl_filename = $file.name
$repl2 = $repl_filename.replace("&","and")
$repl12 = $repl2.replace("..",".")
Here I have some business rules and logic for what the finished filename will look like.

It turns out that SharePoint will not store documents named with certain characters, even though the OS is OK with some of these characters. In my experience, these characters include but may not be limited to /#,+()$~%.

Reminds me of Qbert :)
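
If you have a long list of these to strip, a single regex replace can stand in for a chain of .replace() calls (a sketch using the character list above; $cleanName is a hypothetical variable, not one from my script):

#remove characters SharePoint rejects in file names
$cleanName = $file.name -replace '[/#,+()$~%]', ''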

$propbag=@{ Here I am declaring a hashtable variable and next I declare its elements (name/value pairs). The names match the custom columns in the SharePoint list or library exactly. This is necessary for them to properly populate the list or library when we add the document and its properties in the next few lines of code.

"ContentType" = "Document"
"Name" = $repl12
"Property1" = $acct.replace("\","")
"Property2" = "Reviewed"
"Property3" = $date
"Property4" = "Legacy Import"
Here I am setting custom property names and populating them with values. You can see that I am both passing in variables for the values and "hardcoding" some of the values.

$docliburl="http://full url to doc library"
$relweburl="/folder containing doc library"
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint") > $null
$site=new-object Microsoft.SharePoint.SPSite($docliburl)
$web=$site.openweb($relweburl)
$folder=$web.getfolder($docliburl)
$list=$web.lists[$folder.ContainingDocumentLibrary]


I have a lot going on here. This is all necessary for connectivity and access to the objects needed to manipulate items in the SharePoint document library or list. Here I am declaring and populating variables for access to the correct list or library. I need access to the SharePoint methods and properties, so in this case I need to load the SharePoint dll directly. I use System.Reflection to accomplish this.

$stream=[IO.File]::OpenRead($file.fullname)
$folder.files.Add($repl12,$stream,$propbag, $true) > $null
$stream.close()


Now for the meat. Here I am opening a read stream on the actual file system file. Next I am adding the stream and the hashtable of properties ($propbag) to the SharePoint list. Then I close my stream.

Using this script I was able to import 200k+ documents from a disjointed, malformed Windows directory share into a SharePoint 2007 document library and fill out custom properties such as the date the file was created and when it was last accessed. I was also able to clean up filenames. The best part was that I was able to extrapolate a unique identifier out of the folder or file name to use as a search identifier in the document library.

That's a lot! I hope that you can use this code as a springboard for your own Powershell and/or SharePoint projects.

Happy coding!


Tuesday, February 02, 2010

vb script to copy and delete file system items

Took a few minutes this morning to help a buddy of mine script the archiving of files across his network. There are tons of variations of this, but here is what I came up with.

strFolder = "\\unc\path" 'or local path
dtstart = cdate(FormatDateTime(DateAdd("m",-1,Now),2) + " " + FormatDateTime(DateAdd("m",-1,Now),3))
copydir = "\\unc\path" 'or local path

Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFolder = objFSO.GetFolder(strFolder)

For Each objFile in objFolder.Files
dtmFileDate = objFile.DateLastModified

If dtmFileDate > dtStart Then
'Wscript.Echo objFile.Name
objFile.copy(copydir + "\" + objFile.name)
objFile.Delete True
End If
Next

wscript.echo "Done"

Let's quickly break this down a line at a time.

Line 1
strFolder = "\\unc\path" 'or local path

Declaring a local variable to hold the name of the folder we want to copy from.

Line 2
dtstart = cdate(FormatDateTime(DateAdd("m",-1,Now),2) + " " + FormatDateTime(DateAdd("m",-1,Now),3))

Declaring and setting a value for a date that we will use to compare to the dates of files. Look at the DateAdd with "m" and -1: this is adding minus 1 month to the current date. The second portion of the expression handles the time portion of the date. You can change the interval by changing the -1 to however many months back you want to go.
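
For example, to go three months back instead of one:

dtstart = cdate(FormatDateTime(DateAdd("m",-3,Now),2) + " " + FormatDateTime(DateAdd("m",-3,Now),3))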

Line 3
copydir = "\\unc\path" 'or local path

Declaring a local variable to hold the name of the folder we want to copy to.

Line 5
Set objFSO = CreateObject("Scripting.FileSystemObject")

Creating the FileSystemObject and storing it in a variable.

Line 6
Set objFolder = objFSO.GetFolder(strFolder)

Getting a Folder object for the source folder from the FileSystemObject.

Line 8
For Each objFile in objFolder.Files

Start our "For" loop.

Line 9
dtmFileDate = objFile.DateLastModified

Declare and set a variable each time through this loop that will correspond to the last modified date of the file now being processed.

Line 11
If dtmFileDate > dtStart Then

Begin our comparison. If the last modified date is newer than the start date (set in Line 2), then go to the next line of code in this loop.

Line 12
'Wscript.Echo objFile.Name

This line has been commented out. It will not run because of the single tick ' character. Uncomment this line by removing the ' and it will print the current file name to the screen. This is helpful in debugging.

Line 13
objFile.copy(copydir + "\" + objFile.name)

Do the work. Copy the file in the loop to the destination.

Line 14
objFile.Delete True

Delete the file from the original location.

Line 15
End If

This is the end of the if statement block.

Line 16
Next

Loop to the next file.

Line 18
wscript.echo "Done"

This will send a message to the display stating that the job is completed. This can be commented out but is also helpful in debugging.

This script can be referenced and scheduled in a Windows Scheduled Task or a SQL Server Agent job and run on a schedule in an automated fashion to help alleviate the burden of administration. There are also many error catching and handling features and steps that can be added to this script to help when the process runs into hiccups.

Error handling and logging is a separate arena and will be tackled in the next post. Until then, happy coding and good luck!


Monday, February 01, 2010

TSQL Having clause

In the spirit of all of the aggregates for reporting I have been doing lately, the next item in the TSQL language that made sense to discuss is the "having" clause. I had used it previously, but for a different purpose. Having is a good way of using the count() function to show duplicates in a dataset. Here is an example:

-- table var
declare @duplicate table
(
col1 int,
col2 varchar(10)
)

-- inserts
insert into @duplicate
(col1,col2)
values
(1,'one')

insert into @duplicate
(col1,col2)
values
(1,'one')

insert into @duplicate
(col1,col2)
values
(1,'one')

insert into @duplicate
(col1,col2)
values
(2,'two')

insert into @duplicate
(col1,col2)
values
(2,'two')

insert into @duplicate
(col1,col2)
values
(3,'three')


select col2
from @duplicate
group by col2
having COUNT(col2) > 1 -- could be count(*)

go

You should have gotten 2 rows back: one for the word "one" and one for the word "two". If you change the having clause in the query to be > 2, then you'll only get one row back. This will be the word "one" for (hopefully) obvious reasons based on the inserts.

However, the having clause can be put to a much more useful purpose when producing a query that contains aggregates. Let us say that you were asked to generate a report that returned the sums of a column. No big deal, right?

Select sum(column), group_by_column
from table
group by group_by_column

What if you were asked to allow an input variable to the query that allowed you to filter on the sum of the "sum(column)"?

Enter the having clause. Think of it as a where clause for aggregates. Here is a simplified example:

declare @t table
(
col1 int,
col2 int
)

insert into @t
(col1,col2)
values
(1,1)

insert into @t
(col1,col2)
values
(1,1)

insert into @t
(col1,col2)
values
(1,1)

insert into @t
(col1,col2)
values
(2,1)

insert into @t
(col1,col2)
values
(2,1)

-- throws aggregate error
select col1,SUM(col2)
from @t
where SUM(col2) > 2
group by col1


This code used an aggregate in the where clause and should have caused the following error:

Msg 147, Level 15, State 1, Line 36
An aggregate may not appear in the WHERE clause unless it is in a subquery contained in a HAVING clause or a select list, and the column being aggregated is an outer reference.

Now use the having clause instead

-- having!
select col1,SUM(col2)
from @t
where 1 = 1 -- used just to show where the "where" clause would go if needed.
group by col1
having SUM(col2) > 2

Coming full circle, you could declare a variable and apply it to your having clause, such as having sum(col2) > @variable.
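
Something like this, reusing the @t table from above:

declare @variable int
set @variable = 2

select col1,SUM(col2)
from @t
group by col1
having SUM(col2) > @variable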

Put the having clause to work for you when needing to identify duplicates or filter on aggregates.

Happy coding!


Thursday, January 28, 2010

TSQL try catch blocks

TSQL in SQL 2k8 has lots of functionality that reminds me of excellent tools that were available in the .net world but, until now, not in TSQL. Well, in some cases the functionality was already available in 2k5. One example is the "try catch" block.

BEGIN TRY

-- whatever sql statement you want to execute

END TRY
BEGIN CATCH

-- what to do when there is an error....

END CATCH


As with any type of application programming, error handling is often an overlooked "luxury" item in the eyes of everyone except the folks that have to debug when there is an error. As an application programmer, that would be you! (and me). My advice is to create a generic table to hold error info when it happens. Something like this:

CREATE TABLE [dbo].[error_log](
[error_number] [varchar](500) NULL,
[error_message] [varchar](500) NULL,
[error_severity] [varchar](500) NULL,
[error_state] [varchar](500) NULL,
[error_procedure] [varchar](500) NULL,
[inserted] [datetime] default getdate() NULL,
[inserted_by] [varchar](500) default user_name() NULL,
[defactodate] [datetime] NULL,
[machine_name] [varchar](255) NULL
) ON [PRIMARY]

GO

Of course I wrap almost all of my statements in stored procedures, so you'll need a proc to perform the insert. Something like this:

create proc [dbo].[pr_ins_error_log]
@error_number varchar(500),
@error_message varchar(500),
@error_severity varchar(500),
@error_state varchar(500),
@error_procedure varchar(500),
@inserted_by varchar(500)
as
insert into error_log
([error_number],[error_message],[error_severity],[error_state],[error_procedure],inserted_by)
values
(@error_number,@error_message,@error_severity,@error_state,@error_procedure,@inserted_by)

GO

There is an array of excellent functions that will return tons of information on the error that just occurred. A good example of this is error_number(). You can get creative with your own implementation, but based on the objects created above, an example of how to use this would look something like this:

begin try

-- some generic insert statement that might fail
insert into dbo.some_table (col1) values (1)

end try
begin catch

-- the error_xxx() functions cannot be passed straight into EXEC,
-- so capture them in variables first
declare @user varchar(500) = user_name()
declare @number varchar(500) = convert(varchar(500), error_number())
declare @message varchar(500) = error_message()
declare @severity varchar(500) = convert(varchar(500), error_severity())
declare @state varchar(500) = convert(varchar(500), error_state())
declare @procedure varchar(500) = error_procedure()

-- adjust the database name to wherever you created the proc
exec error_logging.dbo.pr_ins_error_log @number, @message, @severity, @state, @procedure, @user

end catch

Good luck with catching errors and actually capturing them, so that you have a set of "eyes" and there isn't as much guessing about what is really going wrong.
