Charteris Community Server


Ivor Bright's blog

  • Create a custom PowerShell cmdlet – End to End

    This article is focused on how to create a custom PowerShell cmdlet: the entire process, from deciding that you need a new cmdlet, through implementing it, to providing documentation for it.

    The steps that we will focus on are:

    • Why you might want to create a cmdlet
    • Creating the cmdlet
    • Registering the cmdlet with PowerShell
    • Additional helpful notes
    • Documenting the cmdlet

    Why you might want to create a cmdlet

    Some of the reasons why you might want to create a custom PowerShell cmdlet include:

    • No cmdlet currently exists that provides the functionality you require. While there is a wide range of existing cmdlets, they are finite, and it would not be unusual to need some functionality for which no cmdlet exists.
    • Cmdlets do exist to perform the functionality that you require, but they are not easy to use or require you to make multiple calls to various cmdlets in sequence. For example, to add a new user to a domain, you might want to create the user, add them to several default groups, set up their Exchange accounts and so on. A custom cmdlet could aggregate several existing cmdlets into one easy-to-use cmdlet such as New-Employee.
    • You want to provide maintenance support to a custom application.

    For me, the desire to provide functionality that support people can use to maintain an application is strong. Traditionally this was done by providing a maintenance web application, but there are a few issues with this approach:

    • Maintenance applications are relatively slow to build. You can spend as long on UI elements as on providing the functionality, so they are not necessarily a good "fit" when you need to provide maintenance functionality that will only be used by support people.
    • Maintenance applications are tedious to build. The "CRUD" screens can be very repetitive, and they are typically left until the end of the project as no one wants to do them.
    • Support staff frequently want to script this type of work rather than use UI screens. For example, a support person who wanted to create 100 employees would rather write a batch script to perform the operation than use a particular UI page 100 times (see the sketch below).
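
    As a quick sketch of that scripted approach, the snippet below bulk-creates employees from a CSV file. New-Employee is the hypothetical aggregate cmdlet imagined above, and the CSV column names are made up for illustration:

        # Bulk-create employees from a CSV file using a custom cmdlet.
        # New-Employee and the column names are hypothetical.
        Import-Csv .\new-starters.csv | ForEach-Object {
            New-Employee -FirstName $_.FirstName -LastName $_.LastName
        }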

    Creating the cmdlet

    Visual Studio is our friend in this regard, and the majority of this article will focus on creating a custom cmdlet. For this example, we are going to create some cmdlets which perform simple mathematics. We will use Visual Studio 2010 and, for the moment, .Net Framework 3.5 rather than .Net Framework 4 (we'll explain why later). As my VM is 32 bit, I will only be building a 32-bit cmdlet, but you need to be aware that you can build either 32 bit or 64 bit. If you build a 64-bit cmdlet, you need to launch the 64-bit version of PowerShell to use it, and similarly for 32 bit.
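
    If you are ever unsure which flavour of PowerShell you have open, a quick check from the prompt (standard PowerShell, nothing specific to this walkthrough) is:

        # In a 64-bit process a pointer is 8 bytes wide.
        if ([IntPtr]::Size -eq 8) { "64-bit PowerShell" } else { "32-bit PowerShell" }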

    Initialising the environment

    I’m going to assume you have Visual Studio 2010 installed, but there are some initialisation steps that must be performed before you can make a start.

    • Install the Microsoft Windows SDK, which can be downloaded from here.
    • Ensure PowerShell is set to at least RemoteSigned. By default, PowerShell runs with an execution policy of Restricted, which means that your scripts won't execute. Determine your current execution policy by executing Get-ExecutionPolicy in a PowerShell command window; to change it, use Set-ExecutionPolicy RemoteSigned (as shown below). There are a number of execution policy options, but RemoteSigned suits our requirements nicely.
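
    For reference, the two commands mentioned above look like this (changing the policy may require an elevated PowerShell window):

        # Check the current execution policy.
        Get-ExecutionPolicy

        # Allow local scripts (and signed remote scripts) to run.
        Set-ExecutionPolicy RemoteSigned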

    Cutting the code

    Now that we have our environment set up, we can start cutting some code.

    1. Create a new Class Library called CustomPowershellCmdlets
      1. In project properties, on the application tab – set its target framework to .Net Framework 3.5 (we’ll come back to this in a bit)
      2. Add a new class called New_Addition.
      3. Add a reference to System.Management.Automation by browsing to "C:\Program Files\Reference Assemblies\Microsoft\WindowsPowerShell\v1.0\System.Management.Automation.dll".
      4. Make this new class inherit from PSCmdlet.
      5. Add a "using System.Management.Automation" directive so that PSCmdlet is recognised by Visual Studio.
      6. Perform a quick compile to make sure everything is ok.
    2. Add an attribute to the class as follows: [Cmdlet(VerbsCommon.New, "Addition")]. Our example is a little contrived, but this means that the noun of the operation is "Addition" and the verb is "New". It's not an exact fit, and another good choice would have been "Get".
    3. Add 2 public properties to the class
      1. FirstParameter is of type int
      2. SecondParameter is of type int
    4. As our property names are a bit verbose, and as it is more usual to deal with X and Y, add an Alias attribute to each: alias FirstParameter as "X" and SecondParameter as "Y", e.g. "[Alias("X")]".
    5. Use the Parameter attribute to declare that both properties are mandatory and that the value can be retrieved from the pipeline by property name, e.g. "[Parameter(Mandatory = true, ValueFromPipelineByPropertyName = true)]".
    6. Implement ProcessRecord to perform the addition operation and write the result to the pipeline, e.g. "WriteObject(string.Format("{0} + {1} = {2}", this.FirstParameter, this.SecondParameter, this.FirstParameter + this.SecondParameter));"
    7. Your code should now look like the following:
        // -----------------------------------------------------------------------
        // <copyright file="New_Addition.cs" company="Charteris">
        // Charteris
        // </copyright>
        // -----------------------------------------------------------------------
        namespace CustomPowershellCmdlets
        {
            using System;
            using System.Collections.Generic;
            using System.Linq;
            using System.Management.Automation;
            using System.Text;

            /// <summary>
            /// Cmdlet which performs an addition
            /// </summary>
            [Cmdlet(VerbsCommon.New, "Addition")]
            public class New_Addition : PSCmdlet
            {
                /// <summary>
                /// Gets or sets the first parameter.
                /// </summary>
                /// <value>
                /// The first parameter.
                /// </value>
                [Alias("X")]
                [Parameter(Mandatory = true, ValueFromPipelineByPropertyName = true)]
                public int FirstParameter { get; set; }

                /// <summary>
                /// Gets or sets the second parameter.
                /// </summary>
                /// <value>
                /// The second parameter.
                /// </value>
                [Alias("Y")]
                [Parameter(Mandatory = true, ValueFromPipelineByPropertyName = true)]
                public int SecondParameter { get; set; }

                /// <summary>
                /// Processes the record.
                /// </summary>
                protected override void ProcessRecord()
                {
                    WriteObject(string.Format(
                        "{0} + {1} = {2}",
                        this.FirstParameter,
                        this.SecondParameter,
                        this.FirstParameter + this.SecondParameter));
                }
            }
        }
    8. While this example is quite trivial, you should catch any exceptions and throw a terminating error; this ensures that PowerShell stops processing immediately, which is probably the desired behaviour.
        /// <summary>
        /// Processes the record.
        /// </summary>
        protected override void ProcessRecord()
        {
            try
            {
                WriteObject(string.Format(
                    "{0} + {1} = {2}",
                    this.FirstParameter,
                    this.SecondParameter,
                    this.FirstParameter + this.SecondParameter));
            }
            catch (Exception ex)
            {
                ErrorRecord errorRecord = new ErrorRecord(ex, string.Empty, ErrorCategory.NotSpecified, this);
                ThrowTerminatingError(errorRecord);
            }
        }
    9. Implement similar classes for New_Subtraction, New_Multiplication, New_Division etc.

    Registering the cmdlet with PowerShell

    Now that we have our basic PowerShell cmdlets, we need to build a snapin so that they can be installed on our system.

    1. Add a class called CustomPowerShellCmdletsInstaller to our project.
    2. Add a reference to System.Configuration.Install.
    3. Make CustomPowerShellCmdletsInstaller inherit from PSSnapIn and add an implementation for the following properties: Description, Name and Vendor.
    4. Attribute CustomPowerShellCmdletsInstaller with “[RunInstaller(true)]”.
    5. Your code should now look like the following:
        // -----------------------------------------------------------------------
        // <copyright file="CustomPowerShellCmdletsInstaller.cs" company="Charteris">
        // Charteris.
        // </copyright>
        // -----------------------------------------------------------------------

        namespace CustomPowershellCmdlets
        {
            using System;
            using System.Collections.Generic;
            using System.ComponentModel;
            using System.Linq;
            using System.Management.Automation;
            using System.Text;

            /// <summary>
            /// Installer for Custom Powershell Cmdlets
            /// </summary>
            [RunInstaller(true)]
            public class CustomPowerShellCmdletsInstaller : PSSnapIn
            {
                /// <summary>
                /// Gets the description.
                /// </summary>
                public override string Description
                {
                    get
                    {
                        return "CustomPowerShellCmdlets";
                    }
                }

                /// <summary>
                /// Gets the name.
                /// </summary>
                public override string Name
                {
                    get
                    {
                        return "CustomPowerShellCmdlets";
                    }
                }

                /// <summary>
                /// Gets the vendor.
                /// </summary>
                public override string Vendor
                {
                    get
                    {
                        return "Charteris";
                    }
                }
            }
        }
    6. Another quick build to make sure everything works.
    7. Open a Visual Studio 2010 command prompt, browse to your project's bin folder and run InstallUtil on your new dll, e.g. "InstallUtil CustomPowerShellCmdlets.dll". If the first attempt goes wrong and you need to rebuild and reinstall, you should use "InstallUtil /u CustomPowerShellCmdlets.dll" to uninstall the old version first.
    8. Open a PowerShell command window and execute Get-PSSnapin -Registered. This returns a list of all registered but not yet activated snapins on your machine, which should include our new snapin. If you don't see your new snapin, the first place to look is the output from your InstallUtil command.
    9. To make our new snapin available to use, we need to add it with the following in the PowerShell command window: "Add-PSSnapIn CustomPowerShellCmdlets".
    10. We can now execute our new cmdlets in the same way that we would execute any standard PowerShell cmdlet, as the session sketch below shows.
      1. As our parameters are set to Mandatory, we can execute New-Addition and we will be prompted for our property values.
      2. We can also use our aliases and execute "New-Addition -X 5 -Y 7".
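
    Putting that together, a short session with the new cmdlet might look like the following sketch (the exact output format depends on the WriteObject call above):

        # Prompted for FirstParameter and SecondParameter because both are mandatory:
        New-Addition

        # Using the aliases directly:
        New-Addition -X 5 -Y 7

        # Binding values from the pipeline by property name:
        New-Object PSObject -Property @{ FirstParameter = 5; SecondParameter = 7 } | New-Addition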

    Additional Helpful Notes

    The Add-PSSnapIn cmdlet adds the snapin to your current PowerShell session, but if you close and reopen your PowerShell command window, you need to remember to re-add the snapin. This gets a bit tedious after a while, so it's useful to make the snapin available to all future PowerShell windows. This can be done by creating a PowerShell profile. In your PowerShell window, find your home directory by issuing the command $Home; it's probably something like C:\Documents and Settings\[user name]. Using that folder as your starting point, create a file called Profile.ps1 under C:\Documents and Settings\[user name]\My Documents\WindowsPowerShell (you may need to create part of the folder structure as well as the file itself). Make the contents of the file "Add-PSSnapIn CustomPowerShellCmdlets". The contents of Profile.ps1 are executed every time a new PowerShell command window is opened, so you can put your initialisation code here.
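
    Alternatively, the built-in $PROFILE variable already points at a per-user, per-host profile script (a slightly different file name from Profile.ps1, but loaded in the same way), so the file can be created without working out the path by hand. A minimal sketch:

        # Create the profile script if it does not exist yet.
        if (-not (Test-Path $PROFILE)) {
            New-Item -ItemType File -Path $PROFILE -Force | Out-Null
        }

        # Register our snapin in every new PowerShell session.
        Add-Content -Path $PROFILE -Value "Add-PSSnapIn CustomPowerShellCmdlets"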

    At the beginning of this tutorial, we set the Target Framework to .Net 3.5. The reason for doing this is that PowerShell expects us to use this version of the framework, and it was the easiest way to get up and running. However, it's always nice to use the latest framework, so edit the file $pshome\powershell.exe.config (C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe.config or similar) so that it contains the following:

        <?xml version="1.0"?>
        <configuration>
            <startup useLegacyV2RuntimeActivationPolicy="true">
                <supportedRuntime version="v4.0.30319"/>
                <supportedRuntime version="v2.0.50727"/>
            </startup>
        </configuration>

    See here for more information.
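
    After restarting PowerShell, you can confirm which CLR it is now running on; $PSVersionTable is built into PowerShell 2.0 and later:

        # Should report 4.0.30319 rather than 2.0.50727 once the config change takes effect.
        $PSVersionTable.CLRVersion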

    Remember that to switch to using the new version of the snapin, you'll need to uninstall the old version, rebuild, and re-install the new version.

    Documenting the Cmdlet

    One of the first cmdlets you will ever have used is Get-Help. If you try running Get-Help for your new cmdlets, you might be a bit disappointed with the results. It would be nice if we could provide help information at least as good as that of the built-in cmdlets, and it is possible to do so. If you want to do it by hand, you have to build an XML file in a specific format, but why build something by hand when someone has built an excellent utility for you?

    The Cmdlet Help Editor can be found here. I won't provide any details as it's fairly self-explanatory, but one hint is to deploy the help file to the same folder as your assembly. Calls to Get-Help for your cmdlets should now be a lot more user friendly.
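
    Once the help file is deployed (by convention PowerShell looks for a file named after the assembly, e.g. CustomPowerShellCmdlets.dll-Help.xml, in the same folder as the dll), the usual help switches start returning real content:

        # Full help, including parameter descriptions and any examples you provided.
        Get-Help New-Addition -Full

        # Just the examples section.
        Get-Help New-Addition -Examples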

    Downloading the code

    Hopefully you found this walkthrough useful. The key point is that, despite this example being trivial, you can create a cmdlet using anything that is available in the .Net Framework. The code used in this tutorial (targeting .Net Framework 3.5) can be found here.

    Posted Nov 01 2011, 12:06 PM by IvorB with 3 comment(s)
  • Professional ASP.Net MVC 2 and NerdDinner – Object Expected and map positioning issues

    The Professional ASP.Net MVC 2 book by Wrox seems like a good way to learn MVC 2, but I encountered a number of issues with the NerdDinner example in Chapter 1. As you go through the tutorial there are a number of minor typos which are easily fixed, but I hit a complete stumbling block when I reached the AJAX map integration functionality around page 127. Lots of other people seem to be having similar problems, though the details vary depending on which version of Visual Studio/MVC people are using. I am personally using Visual Studio 2010 Ultimate, and the issues I hit, in order, were:

    "Microsoft JScript runtime error: Object expected". It was tempting to go straight to CodePlex and get the latest code, but this seems to have moved on quite a bit from the book: the map is now displayed on the left rather than the right, Site.css is completely different, and so on. I wanted to work through the book, so I was keen to get the sample to work. Using my search engine of choice, the fix to the Object expected error was to add a script include for jQuery to Map.ascx. The top of Map.ascx now looks like:

        <script src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2"
            type="text/javascript"></script>
        <script src="/Scripts/Map.js" type="text/javascript"></script>
        <script src="/Scripts/jQuery-1.4.1.js" type="text/javascript"></script>

        <div id="theMap" style="width:520px">
        </div>

    Map appears in top left. Now that the JavaScript is happier, a CTRL + F5 renders a map, but it's not very pretty. The map should appear to the right of the current controls, but instead appears over them, at the top left of the form.

    A good source of help for map issues is the Wrox P2P Forum for the book which can be found here.

    This resulted in me adding the following to Site.css (found in the Content folder):

        #mapDiv
        {
            float: right;
            width: 522px;
        }

        #theMap
        {
            position: relative;
            width: 580px;
            border: solid 1px #bbd1ec;
        }

    and this produces the following UI (screenshot omitted).

    Moving the div called mapDiv in DinnerForm.ascx from the end of the fieldset to the beginning results in the following code:

        <% using (Html.BeginForm()) { %>

            <fieldset>

                <div id="mapDiv">
                    <%Html.RenderPartial("Map", Model.Dinner); %>
                </div>

                <p>
                    <%: Html.LabelFor(m => m.Dinner.Title) %>
                    <%: Html.TextBoxFor(m => m.Dinner.Title)%>
                    <%: Html.ValidationMessageFor(m => m.Dinner.Title, "*")%>
                </p>

    and this produces the following UI (screenshot omitted).

    Map does not reflect change in address. The code attempts to use the blur event on the address field to refresh the map, but this is not working. Digging through the code, it refers to fields like #Address in DinnerForm.ascx, but if you do a View Source on the rendered HTML page, the fields are actually called #Dinner_Address (because of the binding to the custom ViewModel). We need to adjust the end of DinnerForm.ascx to the following:

        <script type="text/javascript">

        $(document).ready(function () {
            $("#Dinner_Address").blur(function (evt) {
                $("#Dinner_Latitude").val("");
                $("#Dinner_Longitude").val("");

                var address = jQuery.trim($("#Dinner_Address").val());
                if (address.length < 1)
                    return;

                FindAddressOnMap(address);
            });
        });

        </script>

    We also need to adjust Map.js with this change:

        //If we've found exactly one place, that's our address.
        if (points.length === 1) {
            $("#Dinner_Latitude").val(points[0].Latitude);
            $("#Dinner_Longitude").val(points[0].Longitude);
        }

    I was also missing the Latitude and Longitude fields. I think this was caused by copying/pasting code sections from the downloaded code samples, so I added them back in and made them visible so that I could confirm the Latitude and Longitude values were being set correctly:

        <p>
            <%: Html.TextBoxFor(m => m.Dinner.Latitude)%>
        </p>
        <p>
            <%: Html.TextBoxFor(m => m.Dinner.Longitude)%>
        </p>

        <p>
            <input type="submit" value="Save" />
        </p>

    Testing using the postcode for Microsoft at Reading (RG6 1WG) results in a UI that looks correct to me (screenshot omitted).

    Posted Jan 13 2011, 03:24 PM by IvorB with 1 comment(s)
  • Use PowerShell to synchronise Active Directory users to a contact list – Using Contact Photos

    This is the fifth article in a series of posts looking at using PowerShell to synchronise Active Directory users with Exchange contacts. Previous articles looked at why we might do this, how we can access active directory, how we might access Exchange contacts and how we can use Checksums to detect changes in Active Directory.

    This article will focus on storing photos against our Exchange contacts. Before we begin with any implementation details, it's useful to consider why we might want to store contact photos:

    • Better experience when receiving phone calls. Most modern phones will display the photo for incoming calls. You might recognise the person's face but not their name, so the photo helps in the decision about accepting or rejecting an incoming call.
    • Better experience when making a phone call. As above, you might not know the name of the person you want to call but you might recognise their face. Storing photos helps the user choose the correct person to phone.
    • The photos are often stored in Active Directory anyway, in the thumbnailPhoto attribute (see here for details). Since we may already have the photo, it makes sense to reuse it.

    The implementation details are based upon the scripts in previous articles in this series, but they are reproduced here with the various tweaks applied.

    To get a specific user from active directory, we can use the following function:

        # Get User Details
        Function Get-ActiveDirectoryUser
        {
            param ([string]$firstName, [string]$lastName)

            $root = [ADSI]''
            $searcher = new-object System.DirectoryServices.DirectorySearcher($root)
            $searcher.filter = "(&(objectClass=user)(givenName=$firstName)(sn=$lastName))"

            $users = $searcher.Findall()

            if ($users.Count -eq 1)
            {
                $user = $users[0]
                $properties = $user.Properties
                $thumbnailPhoto = $user.Properties["thumbnailphoto"]

                if ($thumbnailPhoto -eq $null)
                {
                    Write-Host "User has no photo"
                }
                else
                {
                    Write-Host "User has photo"
                }

                return $user
            }
            else
            {
                Write-Host "failed to find match"
                return $null
            }
        }

    The functions to connect to Exchange EWS and to check whether the contact already exists are identical to the implementations used previously:

        # Connect to Exchange EWS
        function Connect-ExchangeEWS
        {
            $EwsApi = "C:\Program Files\Microsoft\Exchange\Web Services\1.0\" +
                "Microsoft.Exchange.WebServices.dll"
            [void][Reflection.Assembly]::LoadFile($EwsApi)

            # Avoid hardcoding email address
            $windowsIdentity = [System.Security.Principal.WindowsIdentity]::GetCurrent()
            $windowsQuery = "LDAP://<SID=" + $windowsIdentity.user.Value.ToString() + ">"
            $windowsUser = [ADSI]$windowsQuery

            # Use the discovered email address to discover the exchange url
            $exchangeService = new-object Microsoft.Exchange.WebServices.Data.ExchangeService(
                [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
            $exchangeService.AutodiscoverUrl($windowsUser.mail.ToString())

            return $exchangeService
        }


        # Check if an exchange contact exists.
        function Check-ExchangeContactExists
        {
            param ($exchangeService, [string]$firstName, [string]$lastName)

            # Used to return the contact Id if the contact exists.
            $Id = $null

            $viewer = New-Object Microsoft.Exchange.WebServices.Data.ItemView(300)
            $ContactsFolder=[Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchangeService,
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts)

            $searchFilterFirstName = New-Object `
                Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo(
                [Microsoft.Exchange.WebServices.Data.ContactSchema]::GivenName,$firstName)
            $searchFilterLastName = New-Object `
                Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo(
                [Microsoft.Exchange.WebServices.Data.ContactSchema]::Surname,$lastName)
            $searchFilters = New-Object `
                Microsoft.Exchange.WebServices.Data.SearchFilter+SearchFilterCollection(
                [Microsoft.Exchange.WebServices.Data.LogicalOperator]::And);
            $searchFilters.add($searchFilterFirstName)
            $searchFilters.add($searchFilterLastName)

            # Search for matching contacts.
            $searchResults = $exchangeService.FindItems(
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts,
                $searchFilters, $viewer)

            # If we have results - then the contact already exists
            foreach($searchResult in $searchResults)
            {
                # Remember the Id as this makes deleting easier
                $Id = $searchResult.Id
            }

            return $Id
        }

    Creating a contact is very similar to a previous implementation, but it has an additional thumbnailPhoto parameter and needs an additional call to SetContactPicture():

        # Create an Exchange contact
        function Create-ExchangeContact
        {
            param ($exchangeService, [string]$firstName, [string]$lastName,
                [string]$mobileNumber, $thumbnailPhoto)

            $enumPhoneMobileValue =
                [Microsoft.Exchange.WebServices.Data.PhoneNumberKey]::MobilePhone

            # Create the new contact and populate properties
            $newContact =
                New-Object Microsoft.Exchange.WebServices.Data.Contact($exchangeService)
            $newContact.GivenName = $firstName
            $newContact.Surname = $lastName
            $newContact.PhoneNumbers[$enumPhoneMobileValue] = $mobileNumber

            $newContact.SetContactPicture($thumbnailPhoto)

            [void]$newContact.Save()
        }

    Updating a contact is very similar to a previous implementation, but it has an additional thumbnailPhoto parameter and needs to manage updating the photo by removing the old one before adding the new one:

        # Updates an Exchange contact
        function Update-ExchangeContact
        {
            param ($exchangeService, $Id, [string]$mobileNumber, $thumbnailPhoto)

            $enumPhoneMobileValue =
                [Microsoft.Exchange.WebServices.Data.PhoneNumberKey]::MobilePhone
            $enumAlwaysOverWrite =
                [Microsoft.Exchange.WebServices.Data.ConflictResolutionMode]::AlwaysOverWrite
            # http://msdn.microsoft.com/en-us/library/gg274394(v=exchg.80).aspx
            $extendedPropertyMobileTelephoneNumber = 14876

            # load the current contact
            $contactToUpdate =
                [Microsoft.Exchange.WebServices.Data.Contact]::Bind($exchangeService,$Id)

            # Set the phone number
            $contactToUpdate.PhoneNumbers[$enumPhoneMobileValue] = $mobileNumber
            [Void]$contactToUpdate.SetExtendedProperty(
                (new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(
                    $extendedPropertyMobileTelephoneNumber,
                    [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::String) ),
                    $mobileNumber )

            # Need to remove the old photo if it has one, before setting the new one.
            if ($contactToUpdate.HasPicture)
            {
                $contactToUpdate.RemoveContactPicture()
                [void]$contactToUpdate.Update($enumAlwaysOverWrite)
            }

            $contactToUpdate.SetContactPicture($thumbnailPhoto)

            [void]$contactToUpdate.Update($enumAlwaysOverWrite)
        }

    The main bootstrap function is as follows:

        # Main bootstrap
        Function Run-Main
        {
            $firstName = "Donald"
            $lastName = "Duck"

            Clear-Host
            $user = Get-ActiveDirectoryUser $firstName $lastName
            $mobile = $user.Properties["mobile"]
            $thumbnailPhoto = $user.Properties["thumbnailphoto"][0]

            $exchangeService = Connect-ExchangeEWS
            $Id = Check-ExchangeContactExists $exchangeService $firstName $lastName

            if ($Id -eq $null)
            {
                Write-Host "Creating Exchange Contact"

                Create-ExchangeContact `
                    $exchangeService $firstName $lastName $mobile $thumbnailPhoto

                Write-Host "Exchange Contact created"
            }
            else
            {
                Write-Host "Updating Exchange Contact"

                Update-ExchangeContact $exchangeService $Id $mobile $thumbnailPhoto

                Write-Host "Exchange Contact updated"
            }

            Write-Host "Finished"
        }

    Adding the photo functionality doesn’t require much additional coding, but your end users will really appreciate the extra effort.

    This ends this series of articles on using PowerShell to synchronise Active Directory with Exchange contacts – hopefully you found it interesting and informative!

  • Use PowerShell to synchronise Active Directory users to a contact list – Using Checksums

    This is the fourth article in a series of posts looking at using PowerShell to synchronise Active Directory users with Exchange contacts. Previous articles looked at why we might do this, how we can access active directory and how we might access Exchange contacts. This article will focus on detecting changes to Active Directory users so that we can detect which Exchange contacts need to be updated.

    When looking at a method of detecting changes in Active Directory, I initially looked at the "When-Changed" Active Directory attribute. This appeared to satisfy my requirement; the attribute is described as:

    “The date when this object was last changed. This value is not replicated and exists in the global catalog”.

    However, I got a large number of false positives, and I believe the cause is related to another attribute called "Last-Logon":

    “(The last time the user logged on. This value is stored as a large integer that represents the number of 100 nanosecond intervals since January 1, 1601 (UTC). A value of zero means that the last logon time is unknown).”

    It seems that each time a user logs into the domain, it is considered a change to their account, which alters the "When-Changed" attribute; as a result, the attribute is of limited use for detecting "real" Active Directory changes. To be honest, we probably need something more granular anyway: we are only interested in a small subset of Active Directory fields, and we only want to consider updating a contact when one of those fields changes.

    A better approach seems to be to generate a checksum when the initial Exchange contact is created and store it with the contact (in an unused field, for example). Then, on subsequent passes, the logic can be something like the following:
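
    A minimal sketch of that per-user pass, reusing functions developed in this series; Get-StoredCheckSum is a hypothetical helper that reads the checksum previously stashed on the contact:

        # Compare a freshly generated checksum against the one stored on the
        # Exchange contact, and update the contact only when they differ.
        $newCheckSum = Generate-CheckSum($firstName, $lastName, $mobileNumber)
        $oldCheckSum = Get-StoredCheckSum $exchangeService $Id   # hypothetical helper

        if ($newCheckSum -ne $oldCheckSum)
        {
            Update-ExchangeContact $exchangeService $Id $mobileNumber
        }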

    The process of generating a checksum using PowerShell is relatively painless, as it is easy to generate both SHA1 and MD5 hashes. In this example I am going to generate an MD5 hash for the combination of first name, last name and mobile phone number. This is slightly trivial: for such a small subset of fields we could simply compare all the field values directly. For a real implementation, however, we are likely to be concerned with many more fields, and the hash/checksum seems an appropriate solution:

        # Generates a checksum for the passed parameters
        function Generate-CheckSum([string[]]$fields)
        {
            [string]$totalSource = ""
            [string]$checksum = ""

            # Join all the parameters, ready for hashing
            foreach($field in $fields)
            {
                $totalSource += $field
            }

            # Generate an MD5 hash
            $cryptoServiceProvider = [System.Security.Cryptography.MD5CryptoServiceProvider];
            $hashAlgorithm = new-object $cryptoServiceProvider
            $hashByteArray = $hashAlgorithm.ComputeHash([Char[]]$totalSource);

            # Convert the hash to something easy to read
            foreach ($byte in $hashByteArray)
            {
                $checkSum += "{0:X2}" -f $byte
            }

            return $checkSum
        }

        # Main bootstrap function
        function Run-Main
        {
            $firstName = "Donald"
            $lastName = "Duck"
            $mobileNumber = "12345"

            $reverseMobileNumber = "54321"
            $firstNameLowerCase = "donald"

            Clear-Host

            $checksum = Generate-CheckSum($firstName, $lastName, $mobileNumber)
            Write-Host "Checksum for" $firstName $lastName $mobileNumber : $checksum

            $checksum = Generate-CheckSum($firstName, $lastName, $reverseMobileNumber)
            Write-Host "Checksum for" $firstName $lastName $reverseMobileNumber : $checksum

            $checksum = Generate-CheckSum($firstNameLowerCase, $lastName, $mobileNumber)
            Write-Host "Checksum for" $firstNameLowerCase $lastName $mobileNumber : $checksum
        }

        Run-Main

    The output is three different checksums: one for the original values, one for the reversed mobile number and one for the lower-case first name.

    This demonstrates that the hash depends on the content provided to it and is case sensitive. As content is changed in Active Directory, the checksum generated will vary, and the change can be detected by stashing the previous checksum with the contact in Exchange. This gives a simple mechanism for detecting changes in Active Directory, so that only users with changed values need to be synchronised to contacts in Exchange.

    A future article in this series will look at storing photos with the contact in Exchange.

  • Use PowerShell to synchronise Active Directory users to a contact list – Accessing Exchange Contacts

    This is the third article in a series of posts looking at using PowerShell to synchronise Active Directory users with Exchange contacts. Previous articles looked at why we might want to do this, and how we can access Active Directory. This article will focus on using PowerShell to access and create/update Exchange contacts.

    I am going to focus on accessing Exchange using Exchange Web Services (EWS); documentation can be found here. My main goal is to be able to access contact lists for all users in the organisation rather than just my own, so EWS provides an ideal approach. You need the following on your client before we begin our PowerShell development:

    • The latest EWS Managed API from here
    • .Net 3.5 installed
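
    A quick sanity check that the managed API is installed where the scripts below expect to find it (the 1.0 path is an assumption; adjust it if you installed a different version):

        # The scripts below load the EWS Managed API from this location.
        Test-Path ("C:\Program Files\Microsoft\Exchange\Web Services\1.0\" +
            "Microsoft.Exchange.WebServices.dll")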

    The first step is to simply search for contacts for the active user. Some basic results can be found as follows:

        # Use EWS to list all exchange contacts
        function List-ExchangeContacts
        {
            $EwsApi = "C:\Program Files\Microsoft\Exchange\Web Services\1.0\" +
                "Microsoft.Exchange.WebServices.dll"
            [void][Reflection.Assembly]::LoadFile($EwsApi)

            # Avoid hardcoding email address
            $windowsIdentity = [System.Security.Principal.WindowsIdentity]::GetCurrent()
            $windowsQuery = "LDAP://<SID=" + $windowsIdentity.user.Value.ToString() + ">"
            $windowsUser = [ADSI]$windowsQuery

            # Use the discovered email address to discover the exchange url
            $exchangeService = new-object Microsoft.Exchange.WebServices.Data.ExchangeService(
                [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
            $exchangeService.AutodiscoverUrl($windowsUser.mail.ToString())

            # Create a view for browsing the results.
            $viewer = New-Object Microsoft.Exchange.WebServices.Data.ItemView(1000)

            # Access the contacts folder
            [void][Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchangeService,
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts)
            $exchangeContacts = $exchangeService.FindItems(
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts, $viewer)

            # Iterate over the contacts and write to output
            foreach($exchangeContact in $exchangeContacts)
            {
                Write-Host ($exchangeContact.GivenName + " " + $exchangeContact.Surname)
            }
        }

        Clear-Host
        List-ExchangeContacts

    As our main goal is related to phone numbers, we should display them as well. The phone numbers are contained in a dictionary; for this example we will look at the mobile number. The List-ExchangeContacts function becomes:

        # Use EWS to list all exchange contacts
        function List-ExchangeContacts
        {
            $enumPhoneMobileValue =
                [Microsoft.Exchange.WebServices.Data.PhoneNumberKey]::MobilePhone

            $EwsApi = "C:\Program Files\Microsoft\Exchange\Web Services\1.0\" +
                "Microsoft.Exchange.WebServices.dll"
            [void][Reflection.Assembly]::LoadFile($EwsApi)

            # Avoid hardcoding email address
            $windowsIdentity = [System.Security.Principal.WindowsIdentity]::GetCurrent()
            $windowsQuery = "LDAP://<SID=" + $windowsIdentity.user.Value.ToString() + ">"
            $windowsUser = [ADSI]$windowsQuery

            # Use the discovered email address to discover the exchange url
            $exchangeService = new-object Microsoft.Exchange.WebServices.Data.ExchangeService(
                [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
            $exchangeService.AutodiscoverUrl($windowsUser.mail.ToString())

            # Create a view for browsing the results.
            $viewer = New-Object Microsoft.Exchange.WebServices.Data.ItemView(1000)

            # Access the contacts folder
            [void][Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchangeService,
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts)
            $exchangeContacts = $exchangeService.FindItems(
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts, $viewer)

            # Iterate over the contacts and write to output
            foreach($exchangeContact in $exchangeContacts)
            {
                Write-Host ($exchangeContact.GivenName + " " + $exchangeContact.Surname)
                Write-Host (" - Mobile: " + $exchangeContact.PhoneNumbers[$enumPhoneMobileValue])
            }
        }

        Clear-Host
        List-ExchangeContacts

    Details about other properties in the contact object can be found here.

    Now that we can retrieve our Exchange contacts, it's time to create new ones:

        # Create an Exchange contact
        function Create-ExchangeContact
        {
            param ([string]$firstName, [string]$lastName, [string]$mobileNumber)

            $enumPhoneMobileValue =
                [Microsoft.Exchange.WebServices.Data.PhoneNumberKey]::MobilePhone

            $EwsApi = "C:\Program Files\Microsoft\Exchange\Web Services\1.0\" +
                "Microsoft.Exchange.WebServices.dll"
            [void][Reflection.Assembly]::LoadFile($EwsApi)

            # Avoid hardcoding email address
            $windowsIdentity = [System.Security.Principal.WindowsIdentity]::GetCurrent()
            $windowsQuery = "LDAP://<SID=" + $windowsIdentity.user.Value.ToString() + ">"
            $windowsUser = [ADSI]$windowsQuery

            # Use the discovered email address to discover the exchange url
            $exchangeService = new-object Microsoft.Exchange.WebServices.Data.ExchangeService(
                [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
            $exchangeService.AutodiscoverUrl($windowsUser.mail.ToString())

            # Create the new contact and populate properties
            $newContact =
                New-Object Microsoft.Exchange.WebServices.Data.Contact($exchangeService)
            $newContact.GivenName = $firstName
            $newContact.Surname = $lastName
            $newContact.PhoneNumbers[$enumPhoneMobileValue] = $mobileNumber

            [void]$newContact.Save()

            Write-Host ("Contact '" + $firstName + " " + $lastName + "' created")
        }

        Clear-Host
        Create-ExchangeContact "Donald" "Duck" "12345"
        Write-Host "Finished"

    However, if you run this multiple times, the same contact will be created multiple times. We need a check function to ensure we don't create duplicate contacts, and we'll do a bit of refactoring to add structure at the same time:

        # Connect to exchange EWS
        function Connect-ExchangeEWS
        {
            $EwsApi = "C:\Program Files\Microsoft\Exchange\Web Services\1.0\" +
                "Microsoft.Exchange.WebServices.dll"
            [void][Reflection.Assembly]::LoadFile($EwsApi)

            # Avoid hardcoding email address
            $windowsIdentity = [System.Security.Principal.WindowsIdentity]::GetCurrent()
            $windowsQuery = "LDAP://<SID=" + $windowsIdentity.user.Value.ToString() + ">"
            $windowsUser = [ADSI]$windowsQuery

            # Use the discovered email address to discover the exchange url
            $exchangeService = new-object Microsoft.Exchange.WebServices.Data.ExchangeService(
                [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
            $exchangeService.AutodiscoverUrl($windowsUser.mail.ToString())

            return $exchangeService
        }


        # Check if an exchange contact exists.
        function Check-ExchangeContactExists
        {
            param ($exchangeService, [string]$firstName, [string]$lastName)

            $viewer = New-Object Microsoft.Exchange.WebServices.Data.ItemView(300)
            $ContactsFolder=[Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchangeService,
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts)

            $searchFilterFirstName =
                New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo(
                [Microsoft.Exchange.WebServices.Data.ContactSchema]::GivenName,$firstName)
            $searchFilterLastName =
                New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo(
                [Microsoft.Exchange.WebServices.Data.ContactSchema]::Surname,$lastName)
            $searchFilters = New-Object `
                Microsoft.Exchange.WebServices.Data.SearchFilter+SearchFilterCollection(
                [Microsoft.Exchange.WebServices.Data.LogicalOperator]::And);
            $searchFilters.add($searchFilterFirstName)
            $searchFilters.add($searchFilterLastName)

            # Search for matching contacts.
            $searchResults = $exchangeService.FindItems(
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts,
                $searchFilters, $viewer)
            $match = $false

            # If we have results - then the contact already exists
            foreach($searchResult in $searchResults)
            {
                $match = $true
            }

            return $match
        }

        # Create an Exchange contact
        function Create-ExchangeContact
        {
            param ($exchangeService, [string]$firstName, [string]$lastName,
                [string]$mobileNumber)

            $enumPhoneMobileValue =
                [Microsoft.Exchange.WebServices.Data.PhoneNumberKey]::MobilePhone

            # Create the new contact and populate properties
            $newContact =
                New-Object Microsoft.Exchange.WebServices.Data.Contact($exchangeService)
            $newContact.GivenName = $firstName
            $newContact.Surname = $lastName
            $newContact.PhoneNumbers[$enumPhoneMobileValue] = $mobileNumber

            [void]$newContact.Save()
        }

        # The main boot strap
        function Run-Main
        {
            $firstName = "Donald"
            $lastName = "Duck"
            $mobileNumber = "12345"

            Clear-Host

            # Connect to exchange EWS
            $exchangeService = Connect-ExchangeEWS

            # Check if the contact already exists
            $result = Check-ExchangeContactExists $exchangeService $firstName $lastName

            if ($result -eq $true)
            {
                # Already exists, so don't create
                Write-Host ("Contact '" + $firstName + " " + $lastName + "' already exists")
            }
            else
            {
                # Does not exist, so create.
                Create-ExchangeContact $exchangeService $firstName $lastName $mobileNumber
                Write-Host ("Contact '" + $firstName + " " + $lastName + "' created")
            }

            Write-Host "Finished"
        }

        Run-Main

    Now that we can create contacts, the next step is to provide delete functionality. We make a minor tweak to Check-ExchangeContactExists() so that it returns the Id of the contact if it exists, and $null if it does not. This Id can then be used to delete the contact. The changed portions are below:

        # Check if an exchange contact exists.
        function Check-ExchangeContactExists
        {
            param ($exchangeService, [string]$firstName, [string]$lastName)

            # Used to return the contact Id if the contact exists.
            $Id = $null

            $viewer = New-Object Microsoft.Exchange.WebServices.Data.ItemView(300)
            $ContactsFolder=[Microsoft.Exchange.WebServices.Data.Folder]::Bind($exchangeService,
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts)

            $searchFilterFirstName = New-Object `
                Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo(
                [Microsoft.Exchange.WebServices.Data.ContactSchema]::GivenName,$firstName)
            $searchFilterLastName = New-Object `
                Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo(
                [Microsoft.Exchange.WebServices.Data.ContactSchema]::Surname,$lastName)
            $searchFilters = New-Object `
                Microsoft.Exchange.WebServices.Data.SearchFilter+SearchFilterCollection(
                [Microsoft.Exchange.WebServices.Data.LogicalOperator]::And);
            $searchFilters.add($searchFilterFirstName)
            $searchFilters.add($searchFilterLastName)

            # Search for matching contacts.
            $searchResults = $exchangeService.FindItems(
                [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Contacts,
                $searchFilters, $viewer)

            # If we have results - then the contact already exists
            foreach($searchResult in $searchResults)
            {
                # Remember the Id as this makes deleting easier
                $Id = $searchResult.Id
            }

            return $Id
        }

        # Delete an Exchange contact
        function Delete-ExchangeContact
        {
            param ($exchangeService, $Id)

            $enumDeleteMode =
                [Microsoft.Exchange.WebServices.Data.DeleteMode]::MoveToDeletedItems

            $contactToDelete = [Microsoft.Exchange.WebServices.Data.Contact]::Bind(
                $exchangeService,$Id)
            [void]$contactToDelete.Delete($enumDeleteMode)
        }

        # The main boot strap
        function Run-Main
        {
            $firstName = "Donald"
            $lastName = "Duck"

            Clear-Host

            # Connect to exchange EWS
            $exchangeService = Connect-ExchangeEWS

            # Check if the contact already exists
            $Id = Check-ExchangeContactExists $exchangeService $firstName $lastName

            if ($Id -eq $null)
            {
                # Does not exist, so no need to delete
                Write-Host ("Contact '" + $firstName + " " + $lastName + "' does not exist")
            }
            else
            {
                # Already exists, so delete the contact
                Delete-ExchangeContact $exchangeService $Id
                Write-Host ("Contact '" + $firstName + " " + $lastName + "' deleted")
            }

            Write-Host "Finished"
        }

    We may as well provide an update method to complete the CRUD operations. A future article will use a checksum to detect when a contact needs updating, but the implementation here will be as simple as possible:

        # Update an Exchange contact
        function Update-ExchangeContact
        {
            param ($exchangeService, $Id, [string]$mobileNumber)

            $enumPhoneMobileValue =
                [Microsoft.Exchange.WebServices.Data.PhoneNumberKey]::MobilePhone
            $enumAlwaysOverWrite =
                [Microsoft.Exchange.WebServices.Data.ConflictResolutionMode]::AlwaysOverWrite

            # http://msdn.microsoft.com/en-us/library/gg274394(v=exchg.80).aspx
            $extendedPropertyMobileTelephoneNumber = 14876

            # load the current contact
            $contactToUpdate =
                [Microsoft.Exchange.WebServices.Data.Contact]::Bind($exchangeService,$Id)

            # set the new values
            $contactToUpdate.PhoneNumbers[$enumPhoneMobileValue] = $mobileNumber
            [Void]$contactToUpdate.SetExtendedProperty(
                (new-object Microsoft.Exchange.WebServices.Data.ExtendedPropertyDefinition(
                    $extendedPropertyMobileTelephoneNumber,
                    [Microsoft.Exchange.WebServices.Data.MapiPropertyType]::String) ),
                    $mobileNumber )

            [void]$contactToUpdate.Update($enumAlwaysOverWrite)
        }

    Note: there is a "gotcha" here related to updating the mobile telephone number. As it is an extended property, you have to make a call to SetExtendedProperty. The list of extended properties and their corresponding IDs can be found here.

    All the operations so far are performed as the active user. If you want to adjust contacts for other users in your organisation, you need to "impersonate" them. While this might sound dangerous, it is perfectly safe because the ability to impersonate is controlled by permissions. You can find further information about Exchange Impersonation here.

    The ImpersonatedUserId needs to be set on the ExchangeService before making a call, so a new sample Connect method that supports impersonation looks like:

        # Connect to exchange EWS with impersonation
        function Connect-ExchangeEWS
        {
            param ([string]$emailAddressToImpersonate)

            $enumSmtpAddress =
                [Microsoft.Exchange.WebServices.Data.ConnectingIdType]::SmtpAddress

            $EwsApi = "C:\Program Files\Microsoft\Exchange\Web Services\1.0\" +
                "Microsoft.Exchange.WebServices.dll"
            [void][Reflection.Assembly]::LoadFile($EwsApi)

            # Avoid hardcoding email address
            $windowsIdentity = [System.Security.Principal.WindowsIdentity]::GetCurrent()
            $windowsQuery = "LDAP://<SID=" + $windowsIdentity.user.Value.ToString() + ">"
            $windowsUser = [ADSI]$windowsQuery

            # Use the discovered email address to discover the exchange url
            $exchangeService = new-object Microsoft.Exchange.WebServices.Data.ExchangeService(
                [Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2010_SP1)
            $exchangeService.AutodiscoverUrl($windowsUser.mail.ToString())

            # Impersonate the desired exchange user
            $exchangeService.ImpersonatedUserId =
                New-Object Microsoft.Exchange.WebServices.Data.ImpersonatedUserId(
                    $enumSmtpAddress,$emailAddressToImpersonate)

            return $exchangeService
        }

    Future articles in this series will look at using checksums to detect when contacts need to be updated, and at storing photos for contacts.

    Posted Jan 05 2011, 02:04 PM by IvorB
  • Use PowerShell to synchronise Active Directory users to a contact list – Accessing Active Directory

    The first step in performing the synchronisation is to get user information out of Active Directory. I am going to assume that there is an Active Directory group that contains all your active users, and I'm going to discuss two different methods of getting the information out of Active Directory:

    Quest Software

    Quest Software provides a wide range of products, services and solutions, but they also make a number of freeware PowerShell libraries available to download. The free downloads include "ActiveRoles Management Shell for Active Directory", which greatly simplifies the process of accessing Active Directory with PowerShell and can be downloaded from http://www.quest.com/powershell/activeroles-server.aspx

    A simple example of using Quest ADManagement to list users in an AD Group is:

        # Add Quest ActiveRoles ADManagement functions
        if ((Get-PSSnapin "Quest.ActiveRoles.ADManagement" -ErrorAction SilentlyContinue) `
            -eq $null)
        {
            Add-PSSnapin "Quest.ActiveRoles.ADManagement"
        }

        # Use Quest to get Group Members
        Function ListAllUsers
        {
            $AllStaff = Get-QADGroupMember "Domain Admins" `
                -IncludedProperties displayName, givenName, sn

            foreach($IndexUser in $AllStaff)
            {
                Write-Host $IndexUser.givenName $IndexUser.sn "(" $IndexUser.displayName ")"
            }
        }

        Clear-Host
        ListAllUsers

    The full syntax for Get-QADGroupMember can be found here.

    I tend to use functions whenever possible with PowerShell as it usually makes it easier to reuse the logic in other scripts.

    In this example – we are only concerned with givenName (forename), sn (surname) and displayName, but there is a wealth of information in Active Directory. Some other common properties which might be of interest include (see the sketch after this list):

    • mail
    • mobile
    • otherTelephone
    • telephoneNumber
    • facsimileTelephoneNumber
    • pager
    • homePhone
    • Title
    • Company
    • thumbnailPhoto
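
    For example, the Quest function above could be extended to pull back some of these extra properties (a sketch – which properties come back populated depends on your directory):

        # Retrieve some additional contact properties for each group member
        $AllStaff = Get-QADGroupMember "Domain Admins" `
            -IncludedProperties displayName, mail, mobile, telephoneNumber

        foreach($IndexUser in $AllStaff)
        {
            Write-Host $IndexUser.displayName $IndexUser.mail $IndexUser.mobile
        }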

    Raw PowerShell

    Quest ActiveRoles Management Shell for Active Directory makes it very easy to access Active Directory group member information, but sometimes you might be working in an environment where you cannot perform additional installs. In general – Administrators are very reluctant to perform “optional” installs on a Live environment and they usually get very nervous when you use the word “freeware”! In this situation – we need to fall back on the default functionality in PowerShell. The equivalent function to the one above can be rewritten as follows:

        # List Group Members
        Function ListAllUsers
        {
            [DirectoryServices.DirectorySearcher]$searcher = New-Object `
                    DirectoryServices.DirectorySearcher([ADSI]"")
            $searcher.SearchRoot = [adsi]"LDAP://CN=Domain Admins,CN=Users,DC=MyDomain,DC=com"
            $searcher.SearchScope = "base"

            # Configure the properties you are interested in
            [void]$searcher.PropertiesToLoad.Add("sn")
            [void]$searcher.PropertiesToLoad.Add("givenName")
            [void]$searcher.PropertiesToLoad.Add("displayName")

            $searcher.AttributeScopeQuery = "Member"
            [DirectoryServices.SearchResultCollection]$AllStaff = $searcher.FindAll()

            foreach($IndexUser in $AllStaff)
            {
                # Note: The properties are all lowercase
                $output = $IndexUser.Properties.givenname + $IndexUser.Properties.sn +
                    "(" + $IndexUser.Properties.displayname + ")"
                Write-Host ( $output )
            }
        }

        Clear-Host
        ListAllUsers

    If you want to list the groups that are in Active Directory, you can use:

        # Return a list of all the groups
        function ListAllGroups()
        {
            [DirectoryServices.DirectorySearcher]$searcher = New-Object `
                    DirectoryServices.DirectorySearcher([ADSI]"")
            $searcher.Filter = "(objectCategory=Group)"
            [DirectoryServices.SearchResultCollection]$AllGroups = $searcher.FindAll()

            foreach($IndexGroup in $AllGroups)
            {
                Write-Host $IndexGroup.Path
            }
        }

        Clear-Host
        ListAllGroups

    It should be clear that Quest.ActiveRoles.ADManagement makes it easier to get the information out of Active Directory, but the approach you take will probably be determined by whether you are able to install the library in your environment.

    Now that we can get the information out of Active Directory – the next step in performing the synchronisation is to create Exchange contacts with this information. This will be discussed in the next post in this series.

  • Use PowerShell to synchronise Active Directory users to a contact list – Introduction

    This article is the start of a short series about using PowerShell to synchronise the users in Active Directory with a person’s contact list – in particular so that they appear on their mobile phone. So, before we jump into any implementation detail, it is worth considering why you might want to synchronise active directory with a person’s phone contact list. Some reasons which spring to mind include:

    • Easier to make phone calls.
    • Improved functionality receiving phone calls.
    • Reduced load on office receptionists.
    • Reduced telephony costs.

    Easier to make phone calls

    To be completely honest, it really is not difficult to make phone calls without this functionality. With Windows Mobile, you usually have an option inside the phone functionality called “Company Directory” or similar. You typically type in some search criteria, press the search button and after a few seconds – a list of matching people from the Global Address List is displayed.

    While this sounds straightforward, there are a number of minor disadvantages:

    • A phone will typically make a 3G connection or similar to get the data, so you need to be in an area capable of establishing a decent data connection.
    • There is a time delay between deciding you want to make a phone call and actually being able to dial (choosing company directory, entering search criteria, waiting for a response). A number of people will often phone an office receptionist and ask to be transferred to the correct number rather than wait for the search results to dial.
    • A user typically cannot annotate the contents of the company directory. E.g. active directory might record users’ official work phone numbers, but certain people might like a limited number of people to know their private numbers. Using the company directory does not allow any customisations by the end user.
    • You typically have to enter some search criteria and perform a search of the company directory, but sometimes you just want to browse a list of the company users. A scan of the list can “jog” your memory about who you want to phone.

    There are also disadvantages to consider when synchronising Active Directory with users’ contact lists. If you have a large organisation, it can consume a reasonable amount of disk space. E.g. if you have 1024 users and you put all users into everyone’s contact list, and each contact takes 1K, you will consume:

    1024 * 1024 * 1K = 1GB on the server and

    1024 * 1K = 1MB on each user’s phone

    Another advantage of synchronising Active Directory users with contact lists comes if we use photos. We can synchronise those photos with the contact information, so users of the phone can see photos of the various people in the organisation. This can be particularly useful for new starters who may recognise a face but not know the various names, or in large organisations where it might be difficult to remember everyone's name.

    Improved functionality receiving phone calls

    How many phone numbers can you remember? I can probably remember 2 or 3 as I am now completely reliant on my mobile to store the various phone numbers. It is extremely useful to have the person’s name displayed when your mobile rings – at the very least – you can decide if you want to divert the caller to voicemail!

    Mobiles are currently not smart enough to scan the company directory to match the number – to be honest – the time delay in getting the information from the Company Directory would probably make this impossible.

    If we include contact photo information – we can see the photo of the person calling which can help in the decision about accepting/diverting a call.

    Storing users as phone contacts can lead to a significant usability improvement as it allows the display of names/photos for incoming company calls.

    Reduced load on office receptionists

    I briefly referred to users’ impatience with accessing the Company Directory above, and it should not be underestimated. After a few seconds of frustrated waiting, most people will simply cancel the search on Company Directory and phone the office receptionist and ask to be transferred to the appropriate person. Not only is it very tedious for the receptionist to perform this role – it is a distraction from their core role. If we can simplify the process of phoning our colleagues directly – we reduce the load on office receptionists/administrators and potentially save money.

    Reduced Telephony costs

    These days, most companies have corporate agreements with their mobile provider and these typically include various incentives to use the provider. It is not uncommon for a provider to offer free mobile-to-mobile calls for business customers, so we want to encourage our colleagues to make “free” direct calls to each other, rather than “paid” calls to receptionists who make another “paid” call to transfer them to the appropriate person.

    Technical Implementation

    Now that we have discussed the reasons why it might be a good idea to synchronise your users in Active Directory with people’s contact lists, we will explore how this might be achieved using PowerShell. Rather than provide a complete solution as a download, the focus of this series will be to provide some snippets which others can build upon to provide their own solution. “Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime”. Some future posts in this series will look at the following:

    • Accessing user details in Active Directory.
    • Storing contacts in Outlook Exchange.
    • Detecting changes in Active Directory.
    • Storing photos with contact details.


  • Sync Framework 2.0 Series – Part 3 – Performing Collaborative Synchronisation

    This is part 3 in the Sync Framework 2.0 series and deals with building an application that can perform collaborative synchronisation. By this – I mean we have an application which will directly update 1 instance of SQL Server, but this server is kept in sync with a number of other SQL Server peers. In order to keep things relatively simple – we will assume only SQL Servers.

    I decided to use the pubs database and in particular the table “Authors”, so you’ll need 2 different databases with this table - I called mine pubs and pubsCopy. I’m going to assume that the tables are in sync when we start, so we only need to worry about keeping them in sync.

    In general with SQL Server and Sync Framework, there are 2 approaches to keeping tables in sync:

    • SQL Server Change Tracking
    • Adding custom tables/columns/triggers to track changes.

    As SQL Server change tracking limits you to specific versions of SQL Server, and as our previous example used change tracking – this example is going to use custom tables/columns and triggers to track changes. The first thing we need to do is provision both databases, which basically means creating the tables/columns & triggers to track the database changes in both databases. I created a helper class to wrap this functionality and it looks like the following:

        /// <summary>
        /// Wraps some synchronisation functionality
        /// </summary>
        public class SynchronisationHelper
        {
            private static string scopeName = "authors";

            /// <summary>
            /// Provision a sql server
            /// </summary>
            /// <param name="connectionString"></param>
            /// <returns></returns>
            public static bool ProvisionSqlSyncProvider(string connectionString)
            {
                bool requiredProvision = false;

                SqlConnection connection = new SqlConnection(connectionString);

                // create a new scope description and add the
                // appropriate tables to this scope
                DbSyncScopeDescription scopeDesc = new
                    DbSyncScopeDescription(scopeName);

                // class to be used to provision the scope defined above
                SqlSyncScopeProvisioning scopeProvision = new SqlSyncScopeProvisioning();

                // Check if this scope already exists on the server and if not
                // go ahead and provision
                if (!scopeProvision.ScopeExists(scopeName, connection))
                {
                    requiredProvision = true;

                    // add the appropriate tables to this scope
                    DbSyncTableDescription tableDesc =
                        SqlSyncDescriptionBuilder.GetDescriptionForTable("authors", connection);
                    scopeDesc.Tables.Add(tableDesc);

                    // note that it is important to call this after the tables
                    // have been added to the scope
                    scopeProvision.PopulateFromScopeDescription(scopeDesc);

                    // indicate that the base table already exists and does not need to be created
                    scopeProvision.SetCreateTableDefault(DbSyncCreationOption.Skip);

                    // provision the server
                    scopeProvision.Apply(connection);
                }

                return requiredProvision;
            }
        }

    Our helper takes a connection string to the database where we have tables that we wish to provision. We need to provide an identifier for our synchronisation operation and we do this by creating a DBSyncScopeDescription and a SqlSyncScopeProvisioning. We check if this scope has already been configured in our database and if not – specify the tables that will be part of this synchronisation. When we apply the scope provision, a number of database operations are performed:

    • Creation of new tables
      • scope_info - This provides a unique identifier (scope_id) for each scope
      • scope_config - This stores configuration information for our new scope. This includes mapping information for stored procedures to create, delete and update records, as well as all the columns that are in the scope.
      • authors_tracking - This is used to track changes to the table in the scope, along with tombstone information for deleted records.
    • Creation of Stored Procedures
      • authors_delete – used by the framework to delete the record from the authors table.
      • authors_deletemetadata – used by the framework to delete a record from the authors_tracking table.
      • authors_insert – used by the framework to insert a record into the authors table.
      • authors_insertmetadata – used by the framework to update the authors_tracking table.
      • authors_selectchanges – used by the framework to return records from the authors table since a particular revision.
      • authors_selectrow – used by the framework to return a particular record from the authors table.
      • authors_update – used by the framework to update a record in the authors table.
      • authors_updatemetadata – used by the framework to update a record in the authors_tracking table.
    • Creation of triggers on authors table
      • insert trigger called authors_insert_trigger – this inserts a record into the authors_tracking table
      • delete trigger called authors_delete_trigger – this updates the record in authors_tracking as a tombstone.
      • update trigger called authors_update_trigger – this updates the record in authors_tracking to capture the local data change

    Creation of these entities through the application is probably fine for a non-production application, but in reality you will probably want to script the changes so they can be applied to a production environment.

    The next step is to actually perform the synchronisation between the 2 SQL Server instances. Again – I wrapped the functionality to perform the synchronisation into a helper class and the code looks like:

        /// <summary>
        /// Utility function that will create a SyncOrchestrator and
        /// synchronize the two passed in providers
        /// </summary>
        /// <param name="localProvider">Local store provider</param>
        /// <param name="remoteProvider">Remote store provider</param>
        /// <returns></returns>
        public static SyncOperationStatistics SynchronizeProviders(
            RelationalSyncProvider localProvider,
            RelationalSyncProvider remoteProvider)
        {
            SyncOrchestrator orchestrator = new SyncOrchestrator();
            orchestrator.LocalProvider = localProvider;
            orchestrator.RemoteProvider = remoteProvider;
            orchestrator.Direction = SyncDirectionOrder.UploadAndDownload;

            // Perform the synchronisation
            SyncOperationStatistics stats = orchestrator.Synchronize();
            return stats;
        }

    It is very straightforward to create the RelationalSyncProviders needed as parameters.

        /// <summary>
        /// Return a sql sync provider
        /// </summary>
        /// <param name="connectionString"></param>
        /// <returns></returns>
        public static SqlSyncProvider ConfigureSqlSyncProvider(string connectionString)
        {
            SqlSyncProvider provider = new SqlSyncProvider();
            provider.ScopeName = scopeName;
            provider.Connection = new SqlConnection(connectionString);

            return provider;
        }

    The next step is to build a simple UI that allows you to maintain the data on either the source or the target, perform a synchronisation and view the results of the synchronisation.

    You should observe the following behaviour:

    • Perform a sync to ensure the source is in sync with the target.
    • Press Refresh on both the source and the target and ensure that they visually look the same.
    • Alter an author on the source and press Save. When you press Refresh on the source – you should continue to see the new value.
    • Press Refresh on the target and you should continue to see the old value.
    • Press Sync which should synchronise the change from the source to the target.
    • Press Refresh on the target and you should see that the value has been updated with the value from the Source.

    The prototype application can be downloaded from here. Don’t forget - you will need 2 SQL Server databases – one called pubs and one called pubsCopy, both containing a version of the authors table.

  • Installing .Net 1.1 applications on Windows Server 2008 R2

    I recently needed to get an ASP.Net 1.1 application working on Windows Server 2008 R2. I naively assumed that this was just going to work but then the whole 32/64 bit problem emerged.

    .Net 1.1 is 32 bit only and Server 2008 R2 is 64 bit only! Well – WOW64 has been around quite a while so surely this would be a piece of cake. I proceeded to download the .Net Framework Version 1.1 Redistributable Package and when you attempt to install – you get your first hint about the potential problems ahead.

    At this stage – you have 2 choices:

    • Attempt to upgrade your application to a 64 bit version of the framework
    • Attempt to get .Net 1.1 working on Server 2008 R2

    My preference was to get .Net 1.1 working as I didn’t want to go through a test cycle for an application I didn’t really understand on a new version of the framework. However – rather than just choosing “Run Program” from the Program Compatibility Assistant dialog – I did some research and used the following process to get my application to work.

    1 - Install IIS Metabase Compatibility

    To install on Windows 2008 Server – click Start and then “Server Manager”.

    Under Web Server (IIS), click Add Role Services.

    Ensure that IIS Metabase Compatibility is installed.

    2 - Install .Net 1.1

    I installed .Net 1.1 in the following order: first the .Net Framework Version 1.1 Redistributable Package, followed by .Net Framework 1.1 Service Pack 1.

    I still got a compatibility warning, but chose “Run Program” to continue.

    Installing Service Pack 1 is likely to require a reboot.

    3 – Enable ASP.Net 1.1 ISAPI Extension

    Following the steps in step 2 made ASP.NET v1.1.4322 available on my ISAPI and CGI Restrictions dialog, but it was disabled by default. Open IIS Manager, click on your server name and choose ISAPI and CGI Restrictions from the IIS section. Enable ASP.Net v1.1.4322 as a valid ISAPI extension.

    4 – Adjust machine.config

    We need ASP.NET 1.1 to ignore IIS configuration sections, so open machine.config for Framework 1.1 (%windir%\Microsoft.NET\Framework\v1.1.4322\config\machine.config) and add the following towards the end of configSections.

        <section name="system.webServer" type="System.Configuration.IgnoreSectionHandler,
            System, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />

    5 – Adjust the Application pool

    I now needed to tell my application to use the application pool ASP.NET 1.1. In IIS Manager, choose the site that you are working with and choose “Advanced Settings”. Adjust the application pool to ASP.NET 1.1, which will use .Net Framework 1.1.

    6 – Fix applicationHost.config bug

    The IIS runtime detected that I was running on a 64 bit operating system, so it attempted to load the .net framework configuration from Microsoft.Net\Framework64, but this doesn’t exist for .Net Framework 1.1. The solution is to copy the 32 bit version into the appropriate 64 bit folder (a scripted version follows the list below).

    • Create \Windows\Microsoft.net\Framework64\v1.1.4322\config
    • Copy machine.config from \Windows\Microsoft.net\Framework\v1.1.4322\Config\
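
    The folder creation and copy can be scripted; a minimal PowerShell sketch:

        # Create the Framework64 config folder for .Net 1.1 and copy the 32 bit machine.config into it
        New-Item -ItemType Directory -Path "$env:windir\Microsoft.NET\Framework64\v1.1.4322\CONFIG"
        Copy-Item "$env:windir\Microsoft.NET\Framework\v1.1.4322\CONFIG\machine.config" `
            "$env:windir\Microsoft.NET\Framework64\v1.1.4322\CONFIG"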

    7 - Check any application registry settings

    My application relied upon a number of custom application registry settings. In the 32 bit era – this wasn’t a problem as registry settings were in the “same” expected place, e.g. most people put registry settings somewhere under HKEY_LOCAL_MACHINE \ SOFTWARE. However – on a 64 bit machine – the 64 bit software settings are stored in this expected place, while 32 bit settings are stored under HKEY_LOCAL_MACHINE \ SOFTWARE \ Wow6432Node.

    This is probably easiest to explain by giving an example. My application needed to access HKEY_LOCAL_MACHINE \ SOFTWARE \ Microsoft \ ConfigurationManagement \ hashKey. In order to make this setting available to a 32 bit application running on a 64 bit server, you need to store the key as HKEY_LOCAL_MACHINE \ SOFTWARE \ Wow6432Node \ Microsoft \ ConfigurationManagement \ hashKey.

    I initially exported the keys from an existing Server 2003 machine and imported them onto Server 2008 which put them into HKEY_LOCAL_MACHINE \ SOFTWARE \ Microsoft \ ConfigurationManagement \ hashKey. This is where Windows expects you to store your 64 bit settings, so you need to move them to the corresponding location under Wow6432Node.

    This caused a lot of pain and confusion trying to understand why my application couldn’t read registry settings which were obviously there. The solution is to put your 32 bit registry settings where your 32 bit application will attempt to read them – under Wow6432Node.
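
    A PowerShell sketch of putting such a setting in the right place (this treats hashKey as a value under the ConfigurationManagement key, which is an assumption about the layout, and the value itself is a placeholder):

        # Create the key where the 32 bit application will look for its settings
        New-Item -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\ConfigurationManagement" -Force

        # Store the setting (placeholder value - use your real one)
        New-ItemProperty -Path "HKLM:\SOFTWARE\Wow6432Node\Microsoft\ConfigurationManagement" `
            -Name "hashKey" -Value "your-value-here" -PropertyType String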

    Finally, after following all of the above – my 32 bit .net 1.1 application was working correctly on Server 2008 R2.


    Posted Jul 15 2010, 05:17 PM by IvorB with 26 comment(s)
  • Sync Framework 2.0 Series – Part 2 – Using Local Database Caching for an occasionally connected application.

    This is part 2 in the Sync Framework 2.0 series and deals with building an occasionally connected application using Local Database Caching. Part 1 in the series dealt with installing the Sync Framework and using the File Sync Provider.

    My original goal with this post was to see if I could use the Sync Framework to build a C# version of SQL Replication to keep two SQL Server databases in sync. However – I found a limited amount of information available in the public domain, so I have decided to take an initial step into SQL Synchronisation by building an “occasionally connected application” using Local Database Caching.

    Users of the application can work in local mode where all their changes are persisted to a local database, and at intervals they can sync their changes back to the main corporate database. The best initial source of information I found for this type of application was on MSDN – an excellent series of articles called “Occasionally Connected Applications” that discuss various aspects of this topic. The manner in which you approach this type of application depends upon what technologies you are going to use to build it – for my example – I’m going to use SQL Server 2008 R2, Visual Studio 2010 and of course – Sync Framework 2.0.

    When it comes to tracking database changes, there are 2 general approaches that can be taken.

    • Add columns and triggers to the tables being monitored.
    • Use SQL Server Change Tracking

    In general – I find Database Administrators quite reluctant to add columns to tables for this kind of work. Generally – the application is already in production when someone decides that they would like to add a disconnected approach, and it is often considered too risky to add these additional tracking columns and triggers. Other people don’t like mixing the data we need to store for an entity with metadata we use to embellish an entity. For this reason – I’m going to focus on using SQL Server Change Tracking; further details on change tracking can be found on MSDN.

    The first step is to choose the data that your application needs to work with. I wanted an example that was as simple as possible so I chose the pubs database. pubs isn’t installed by default on SQL Server 2008, so I had to download the database from Microsoft. After attaching it to SQL Server 2008 – I came across my first stumbling block – the Local Database Caching wizard (more on this later) didn’t detect it as a database on which I could use SQL Server Change Tracking, as “Use SQL Server change tracking” was disabled.

    I struggled to find useful information on the internet about this using my favourite search engine (perhaps I was using poor search criteria?), but in the end I made a lucky guess – perhaps it didn’t like my database as it wasn’t really a SQL Server 2008 database? It was really a SQL Server 2000 database attached to SQL Server 2008. I quickly created a new database and used the Data Import Wizard to copy the “authors” table from pubs to my new database. While this was a step forwards as “Use SQL Server change tracking” was now enabled, my next stumbling block was that it wouldn’t let me add a new Cached Table – the Add button was disabled!

    More lost time using my favourite search engine and another lucky guess solved my problem – perhaps it was to do with the way my authors table was created. The Data Import Wizard created my table as well as copying the contents, but it decided not to create a primary key. Sync Framework likes primary keys, so a quick “Add Primary Key” operation later – and the Local Database Caching wizard was a lot happier.

    To recap – when you are choosing the data source which you wish to use for an occasionally connected application and you want to use SQL Server Change Tracking – make sure it’s SQL Server 2008 and you have a primary key on any table that you want to synchronise.

    Now we have the data that we want to synchronise – let’s start building our application. I’m going to use a simple WinForms Desktop application. Tweak the default form so we have a tab control with 2 tabs – one to display the contents on the server and one to display the contents on our client – and some buttons to perform useful operations.

    Right click the project and choose “Add New Item” and on the Data section – choose “Local Database Caching”. This will bring up the “Configure Data Synchronisation” wizard where you specify the data that you wish to use. Specify the Server connection for the source of the data; the Client connection should default to a new Compact database. Choose the table which you wish to cache by clicking the Add button.

    Next step in the wizard is to choose the database model, and I went with the Dataset approach. This is good for getting a prototype up and running, but if you are building something for a production environment – you might want to consider using an Entity Data Model.

    You end up with a typed dataset for the local version of the authors table. Visual Studio will have added references to Microsoft.Synchronization, Microsoft.Synchronization.Data, Microsoft.Synchronization.Data.Server and Microsoft.Synchronization.Data.SqlServerCe, but it managed to completely screw up the references on my machine. I’m working on a 64 bit machine and it created a strange hybrid of 32 and 64 bit references. I manually removed and re-added all 64 bit references on my machine and then changed the target platform to be “Any CPU” (it had defaulted to x86).

    The next step is to populate the client DataGridView and this requires minimal coding. I have 2 form level variables as follows:

        #region Variables

        private AuthorsDataSet authorsClientDataSet = new AuthorsDataSet();
        private AuthorsDataSetTableAdapters.AuthorsTableAdapter authorsClientAdapter =
            new AuthorsDataSetTableAdapters.AuthorsTableAdapter();

        #endregion

    In the event handler for the click event for the populate button on the client tab – I have the following code:

        /// <summary>
        /// Populate the client data grid view control
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        private void buttonClientAuthorsPopulate_Click(object sender, EventArgs e)
        {
            try
            {
                authorsClientAdapter.Fill(authorsClientDataSet.Authors);

                this.dataGridViewClientAuthors.DataSource = authorsClientDataSet.Authors;
            }
            catch (Exception Ex)
            {
                MessageBox.Show("Failed to populate the client - perhaps the client compact " +
                    "database hasn't been created yet!" + Environment.NewLine + Ex.Message);
            }
        }

    To make our prototype easy to test – it’s useful to be able to change and save the data using the application (rather than testing by changing the data directly in the database), so I implemented the save functionality for the client as follows:

        /// <summary>
        /// Save any changes on the client
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        private void buttonClientSave_Click(object sender, EventArgs e)
        {
            try
            {
                this.Validate();
                this.dataGridViewClientAuthors.EndEdit();
                this.authorsClientAdapter.Update(this.authorsClientDataSet);

                PopulateClient();
                MessageBox.Show("Saved");
            }
            catch(Exception ex)
            {
                MessageBox.Show("Error " + ex.Message);
            }
        }

    In order to make the server tab work, I decided to manually add a typed dataset for the server version of the authors table. There were a number of ways I could have made the server tab work, but using a typed dataset meant that the code looked very similar to the client version.

    You should now have working versions of the server and client tabs which allow you to view, change and save values. The next step is to implement the synchronisation functionality – we want the user to be able to perform synchronisations between their local database and the central server database. The initial implementation for performing the synchronisation is very straightforward:

        /// <summary>
        /// Perform a database change sync between our client and the server.
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        private void buttonSync_Click(object sender, EventArgs e)
        {
            AuthorCacheSyncAgent syncAgent = new AuthorCacheSyncAgent();
            Microsoft.Synchronization.Data.SyncStatistics syncStats = syncAgent.Synchronize();

            PopulateServer();
            PopulateClient();

            StringBuilder message = new StringBuilder();
            message.Append("Changes downloaded: ");
            message.Append(syncStats.TotalChangesDownloaded.ToString());
            message.AppendLine();
            message.Append("Changes uploaded: ");
            message.Append(syncStats.TotalChangesUploaded.ToString());

            MessageBox.Show(message.ToString());
        }

    When I tried to run my application for the first time – it failed with the following error: “The specified change tracking operation is not supported. To carry out this operation on the table, disable the change tracking on the table, and enable the change tracking.” More frantic searching followed, but at least I found other people having the same problem. The easiest way to get rid of this exception is to set the “Copy to Output Directory” property for your local compact database to “Do not copy”.

    You should now have a working “occasionally connected application” which can synchronise changes from the server to your client. However – what if you want to be able to synchronise both ways? This is actually easy to achieve – you just need to implement the OnInitialized() method in the cache file that the wizard generated.

        /// <summary>
        /// Code generated Sync Agent which we can extend
        /// </summary>
        public partial class AuthorCacheSyncAgent
        {
            /// <summary>
            /// Initialise the sync agent to perform a bidirectional sync.
            /// </summary>
            partial void OnInitialized()
            {
                Authors.SyncDirection = SyncDirection.Bidirectional;
            }
        }

    This allows you to perform a 2 way sync. However – what happens if you change a record on both the server and the client and perform a sync? The behaviour you should find is that the server always wins and the client change is thrown away. What happens if this isn’t the behaviour you require? You can add custom conflict handling by extending the client and server sync providers that the wizard generated. Delve into the generated code to find the names of the client and server providers, and create a partial class for the client provider as follows:

        /// <summary>
        /// Extends the code generated client sync provider for authors.
        /// </summary>
        public partial class AuthorCacheClientSyncProvider
        {
            /// <summary>
            /// Allows the main form to configure the event handler for the
            /// ApplyChangeFailed event
            /// </summary>
            public void AddHandlers()
            {
                this.ApplyChangeFailed +=
                    new System.EventHandler<ApplyChangeFailedEventArgs>
                    (AuthorCacheClientSyncProvider_ApplyChangeFailed);
            }

            /// <summary>
            /// Allow the client to win a conflict by choosing to continue.
            /// </summary>
            /// <param name="sender"></param>
            /// <param name="e"></param>
            private void AuthorCacheClientSyncProvider_ApplyChangeFailed(object sender,
                Microsoft.Synchronization.Data.ApplyChangeFailedEventArgs e)
            {
                if (e.Conflict.ConflictType ==
                    Microsoft.Synchronization.Data.ConflictType.ClientUpdateServerUpdate)
                {
                    e.Action = Microsoft.Synchronization.Data.ApplyAction.Continue;
                }
            }
        }

    Similarly – create a partial class for the server provider as follows:

        /// <summary>
        /// Extend the server sync provider for authors to allow a client to win a conflict.
        /// </summary>
        public partial class AuthorCacheServerSyncProvider
        {
            /// <summary>
            /// Hooks up the ApplyChangeFailed event handler.
            /// </summary>
            partial void OnInitialized()
            {
                this.ApplyChangeFailed +=
                    new System.EventHandler<ApplyChangeFailedEventArgs>
                    (AuthorCacheServerSyncProvider_ApplyChangeFailed);
            }

            /// <summary>
            /// When a conflict occurs, force write the client change.
            /// </summary>
            /// <param name="sender"></param>
            /// <param name="e"></param>
            private void AuthorCacheServerSyncProvider_ApplyChangeFailed(object sender,
                Microsoft.Synchronization.Data.ApplyChangeFailedEventArgs e)
            {
                if (e.Conflict.ConflictType ==
                    Microsoft.Synchronization.Data.ConflictType.ClientUpdateServerUpdate)
                {
                    string message = "A client/server conflict was detected at the server.";
                    MessageBox.Show(message);
                    e.Action = Microsoft.Synchronization.Data.ApplyAction.RetryWithForceWrite;
                }
            }
        }

    Finally, you need to tweak the implementation that performs the synchronisation as follows:

        /// <summary>
        /// Perform a database change sync between our client and the server.
        /// </summary>
        /// <param name="sender"></param>
        /// <param name="e"></param>
        private void buttonSync_Click(object sender, EventArgs e)
        {
            AuthorCacheSyncAgent syncAgent = new AuthorCacheSyncAgent();
            AuthorCacheClientSyncProvider clientSyncProvider =
                (AuthorCacheClientSyncProvider)syncAgent.LocalProvider;
            clientSyncProvider.AddHandlers();

            Microsoft.Synchronization.Data.SyncStatistics syncStats = syncAgent.Synchronize();

            PopulateServer();
            PopulateClient();

            StringBuilder message = new StringBuilder();
            message.Append("Changes downloaded: ");
            message.Append(syncStats.TotalChangesDownloaded.ToString());
            message.AppendLine();
            message.Append("Changes uploaded: ");
            message.Append(syncStats.TotalChangesUploaded.ToString());

            MessageBox.Show(message.ToString());
        }

    The latest implementation now means that the client always wins and the server change is thrown away. In reality you might want to prompt the user, or log the details and allow manual intervention, but at least we can control how we manage the conflict.

    This prototype application can be downloaded from here. You will need to create a SQL Server 2008 database with an Authors table containing the standard authors information. I used a SQL ConnectionString for a user called sqlsync but you can edit these in the app.config.

    Hopefully you will agree with me that using the Local Database Cache means that we don’t have to write much code in order to produce a working occasionally connected application. However – the main problems are the limited examples in the public domain and some puzzling times when things don’t work quite as expected!

  • Sync Framework 2.0 Series – Part 1 – Using the File Sync Provider

    This is the beginning of what will hopefully become a series of posts about Microsoft Sync Framework 2.0.

    Microsoft Sync Framework is “A comprehensive synchronization platform that enables collaboration and offline access for applications, services, and devices with support for any data type, any data store, any transfer protocol, and any network topology”. I wanted to use the sync framework recently but struggled to find really good content using my favourite search engine. I decided to work through some of the samples in the SDK and explore some of the issues around them, and hopefully my experiences will help jump-start others who wish to use the platform.

    The best place to start reading is on MSDN at the Microsoft Sync Framework Developer Center.

    One of the first things you will want to do is download the Sync Framework 2.0 SDK and this can be found at: Sync Framework 2.0 SDK.

    I’m using Visual Studio 2010 to develop a Windows Forms application, so the first thing I needed to do was add a reference to Microsoft.Synchronization and Microsoft.Synchronization.Files. However – a gotcha (at least on my machine) is that the available references on the .Net tab are for Sync Framework 1.0. It’s worth checking the Path column on the Add References dialog to make sure you are adding a reference to Sync Framework 2.0. In the end – I did a Browse to the SDK folder and added my reference that way. Some constructors etc. have changed subtly from version 1.0, so you need to use the right sample with the right version of the SDK to make things a bit easier.

    Another gotcha that I came across is that my Visual Studio 2010 defaulted my Windows Forms Application to a platform target of x86 despite the fact that I was using a 64 bit machine. This became an issue as I had installed the 64 bit version of the Sync Framework SDK, so I started getting run time exceptions like the following: "Retrieving the COM class factory for component with CLSID {C201C012-C929-4D72-B9C5-341D48630630} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).". After a lot of frustration – I found by changing the platform target to “Any CPU”, my run time COM error went away.

    I think the issue is created by the managed implementation actually being a wrapper around the real unmanaged implementation, so when I installed the 64 bit version of the SDK – I got the 64 bit registry entries. When Visual Studio built the 32 bit version of the application – it couldn’t find the 32 bit registry entries and gave me a run time error.

    I wanted to build an example that looked and worked a bit like SyncToy. If you are unfamiliar with SyncToy – it’s “a free application that synchronizes files and folders between locations. Typical uses include sharing files, such as photos, with other computers and creating backup copies of files and folders.” and it is actually built with Sync Framework 2.0.

    So some might ask the question – why build something when a solution already exists? My answer is that it allows me to extend the solution beyond SyncToy. e.g. it would be relatively easy to extend my solution to use a Windows service and FileSystemWatchers to detect changes to the file system and automatically initiate a synchronisation. SyncToy simply initiates a synchronisation on demand.

    My example prompts the user for a source and target folder, and the user can press a button to synchronise the files between the 2 folders. To get a simple example working, you need to create a FileSyncProvider for the source folder, a FileSyncProvider for your target folder and a SyncOrchestrator to perform the synchronisation operation. I’ve put my code into a class, and my button click event handler calls it.

        /// <summary>
        /// Perform a simple synchronisation
        /// </summary>
        public class SimpleSyncHelper
        {
            /// <summary>
            /// Perform the synchronisation
            /// </summary>
            /// <param name="sourceFolder"></param>
            /// <param name="targetFolder"></param>
            public void PerformSynchronisation(string sourceFolder,
                string targetFolder)
            {
                FileSyncProvider sourceProvider = new FileSyncProvider(sourceFolder);
                FileSyncProvider targetProvider = new FileSyncProvider(targetFolder);

                SyncOrchestrator agent = new SyncOrchestrator();
                agent.LocalProvider = sourceProvider;
                agent.RemoteProvider = targetProvider;

                // Perform 2 way sync
                agent.Direction = SyncDirectionOrder.UploadAndDownload;
                agent.Synchronize();
            }
        }

    This works really well, but when I looked at the file synchronisation example from the SDK, it was coded differently and the piece that attracted my attention was: “Explicitly detect changes on both replicas upfront, to avoid two change detection passes for the two-way sync”. It turns out that the implementation of SyncOrchestrator.Synchronize() actually splits the operation into 2 parts – 1 to perform the upload and 1 to perform the download. When it performs the upload part – it detects changes on both the source and the target, and when it performs the download part – it detects changes on both the source and target again. This effectively means that 4 “detect change” operations are performed and explains why the SDK sample was keen to avoid the two change detection passes.

    To prove to yourself that the code performs as above, hook up a DetectingChanges event handler for both the sourceProvider and the targetProvider, add a MessageBox.Show() to the handler, and you should observe 4 message boxes.

    To get around the two change detection passes, you should perform an explicit detection on both the source and the target.

        /// <summary>
        /// Perform a DetectChange operation on a FileSyncProvider.
        /// </summary>
        /// <param name="replicaRootPath"></param>
        /// <param name="filter"></param>
        /// <param name="options"></param>
        private void DetectChangesOnFileSystemReplica(string replicaRootPath,
            FileSyncScopeFilter filter, FileSyncOptions options)
        {
            using (FileSyncProvider provider = new FileSyncProvider(replicaRootPath,
                filter, options))
            {
                provider.DetectChanges();
            }
        }

    The Framework captures the change information which can then be used later in the synchronisation process. You can then change your code so that instead of performing one synchronisation with a SyncDirectionOrder of UploadAndDownload – you perform 2 synchronisations with a SyncDirectionOrder of Upload – once between the source and the target and once between the target and the source. Make sure you use FileSyncOptions.ExplicitDetectChanges to prevent unnecessary change detection passes.

    As well as the DetectingChanges event, there are a number of other event handlers that you can hook up on a FileSyncProvider to get useful information:

    • ApplyingChange – occurs when a change is about to be applied.
    • AppliedChange – occurs when a change has been applied
    • SkippedChange – occurs when a change is skipped.

    The return value from a SyncOrchestrator.Synchronize() operation is of type SyncOperationStatistics and it contains useful information about the changes applied as part of the synchronisation. This could be used to display a report for the user after the synchronisation operation. It contains properties like:

    • UploadChangesApplied – total number of changes that were successfully applied during the upload session.
    • DownloadChangesApplied – total number of changes that were successfully applied during the download session.
    • SyncStartTime – when the synchronisation started.
    • SyncEndTime – when the synchronisation ended.

    So far – we have seen how we can listen for events occurring as part of the synchronisation, and get details for a report at the end of the synchronisation, but what if we want to display a progress bar to indicate to the user how long is left in the synchronisation process? The Sync Framework doesn’t provide this kind of functionality by default as it would incur a reasonable overhead which would be wasted on those who do not require it. However – the FileSyncProvider does provide a preview mode! When this is set to true – it performs all the work of calculating what changes need to be made without actually applying the changes. This allows us to perform the operation initially in preview mode to calculate what changes need to be applied (which we can access by looking at the SyncOperationStatistics), and then capture the events from the real operation to display accurate progress information to the user.

    I’ve put all of the above information into a sample application.

    The source code for this sample application can be downloaded from here.

    Hopefully future posts in this series will demonstrate the use of other providers in Microsoft Sync Framework 2.0.

  • Windows PowerShell Series - Part 8 – Working with Variables

    This is part 8 in a series of posts related to Windows PowerShell, and deals with working with Variables.

    Those from a programming background will be very used to dealing with variables – they are something we take for granted when doing development. Without variables – it would be very difficult to write any useful scripts as we wouldn’t have anywhere to store working values.

    Variables in Windows PowerShell are prefixed with the dollar sign ($). Variable names are not case sensitive but you should be consistent to make your life easier. By default – any variables you create will exist only in the current session and will be lost when you end your session. There are 4 types of variable:

    • Automatic – these are created and maintained by Windows PowerShell and users cannot change their values. e.g. $PSHome stores the path to the Windows PowerShell installation directory.
    • Preference – these are created by Windows PowerShell with default values which the user can change. e.g. $MaximumVariableCount determines how many variables are permitted in a given session.
    • Environment – these store information about the operating system environment. To list all environment variables – use Get-ChildItem Env:
      or to get details about a particular environment variable – use Get-ChildItem Env:ComputerName
    • User created – these are variables which are created and maintained by the user. When someone refers to a variable – they usually mean this type of variable. e.g. $Today = (Get-Date).date

    Windows PowerShell variables are loosely typed, which means they are not limited to a particular type of object. The type of a variable is determined by the value it contains.

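    For example (a small sketch you can paste into a PowerShell console):

        $myVariable = 42
        $myVariable.GetType().Name     # Int32 - currently holds an integer
        $myVariable = "forty two"
        $myVariable.GetType().Name     # String - the same variable now holds a string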

    In general – most programming languages (like C#) are strongly typed. There can be a number of advantages that using strong types can bring:

    • Limit the range of values that a variable can contain. e.g. limiting a variable like $Age to integer values. Compilers and other tools can check that we are putting sensible values into appropriate variables.
    • Improved clarity. e.g. if a “+” operation is performed on 2 integer variables – then we always perform an addition. If a “+” operation is performed on 2 string variables – then we always perform a string concatenation. With loose/weak typing – the operation depends on the contents of the variables so isn’t always clear.
    • Improved performance – optimisations can often be made when the type is known in advance rather than being determined “on the fly”.

    Windows PowerShell allows us to use the type attribute and cast syntax to constrain a variable to a particular type. Some examples include:

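    A couple of sketches of the cast syntax in action:

        [int]$age = 32             # $age is now constrained to integers
        $age = "33"                # fine - the string can be converted to an integer
        # $age = "unknown"         # would fail - cannot be converted to System.Int32

        [datetime]$meeting = "01 January 2011"   # the string is converted to a DateTime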

    There are a number of Cmdlets which are designed for manipulating variables. These include:

    • Clear-Variable – Deletes the value for a variable
    • Get-Variable – Gets the variables in the current console
    • New-Variable – Creates a new variable
    • Remove-Variable – Deletes a variable and its value
    • Set-Variable – Changes the value of a variable.

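    A quick sketch of these cmdlets in action:

        New-Variable -Name counter -Value 1     # create a new variable
        Set-Variable -Name counter -Value 2     # change its value
        Get-Variable -Name counter              # inspect it
        Clear-Variable -Name counter            # the value becomes $null but the variable remains
        Remove-Variable -Name counter           # the variable is deleted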

    Windows PowerShell allows us to create constant and read-only variables. While constants and read-only variables are functionally very similar – one difference is that you can delete a read-only variable while you cannot delete a constant. To create constants and read-only variables, use the -Option parameter of New-Variable or Set-Variable.

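    For example (note the -Force required to remove a read-only variable):

        Set-Variable -Name maxRetries -Value 5 -Option ReadOnly
        Remove-Variable -Name maxRetries -Force    # allowed - read-only variables can be removed

        Set-Variable -Name pi -Value 3.14159 -Option Constant
        # Remove-Variable -Name pi -Force          # would fail - constants cannot be removed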

    See the built-in help topic about_Variables (Get-Help about_Variables) for more useful information about PowerShell variables.

    Next step – Working with Logic

    Previous step – What the heck is a Cmdlet?

    Posted Jun 29 2010, 02:49 PM by IvorB
  • Windows PowerShell Series - Part 7 – What the heck is a Cmdlet?

    This is part 7 in a series of posts related to Windows PowerShell, and deals with cmdlets.

    Cmdlets (pronounced command-lets) are the smallest units of functionality in Windows PowerShell. They are named in a very specific verb-noun format that should make it obvious what operation the cmdlet performs.

    e.g. Get-Date returns the current date and time while Set-Date allows you to set the current date and time.

    A useful cmdlet when starting out with Windows PowerShell is Get-Command which returns a complete list of available cmdlets.


    Get-Command also gives us a chance to demonstrate pipelining, where the output from the Get-Command cmdlet is piped (or redirected) to another cmdlet such as Format-Wide.
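
    For example, to list every available cmdlet in a compact three column layout:

        # Pipe the cmdlet list into Format-Wide for a compact display
        Get-Command | Format-Wide -Column 3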

    Get-Help is also useful in discovering information about available cmdlets, and its output includes some summary information about each cmdlet.


    To get detailed information about a particular cmdlet, use Get-Help “cmdlet name”.


    There are 2 other variations of Get-Help which provide different levels of information for a cmdlet, as the examples after this list show:

    • Get-Help “cmdlet” –detailed
    • Get-Help “cmdlet” –full
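
    For example, using Get-Date:

        Get-Help Get-Date              # summary information
        Get-Help Get-Date -detailed    # adds parameter descriptions and examples
        Get-Help Get-Date -full        # everything, including technical notes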

    You can pass parameters to a cmdlet by using the dash (-) notation. e.g. if you want to get the current date, but not include the time aspect, you can use the following:

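    For example, Get-Date has a DisplayHint parameter which can restrict the output to just the date:

        # Display only the date portion of the current date/time
        Get-Date -DisplayHint Date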

    Some standard cmdlets which I recommend looking at include:

    • Get-Command – Get information about cmdlets
    • Get-EventLog – Gets events from an event log
    • Write-EventLog – Writes an event to the event log
    • Get-Host – Gets information about the host application
    • Get-Location – Gets information about your current working location.
    • Get-Variable – Gets a particular variable
    • Test-Connection – Sends a ping
    • Read-Host – Reads information from the host window
    • Write-Host – Writes information to the host window.
    • Start-Service – starts a particular service

    Use Get-Help *-* to discover other useful cmdlets.

    Next step – Working with Variables

    Previous step – Getting to grips with Scripts

    Posted Jun 25 2010, 04:14 PM by IvorB
  • Windows PowerShell Series - Part 6 – Getting to grips with Scripts

    This is part 6 in a series of posts related to Windows PowerShell, and deals with getting to grips with scripts.

    So far – we have been typing commands into the PowerShell console (or the ISE) but what happens if you want to create some functionality which you can reload in a future session? The answer is to create your PowerShell commands as a PowerShell script.

    You can either go hard-core and use something like Notepad (or your text editor of choice) to create the script, or you can use the ISE to create and save your script. When saving the script – give it a “.ps1” file extension so PowerShell knows that it’s a script file and not a text file. The extension comes from PowerShell 1.0 and they didn’t feel the need to change the extension for PowerShell 2.0. As you would expect – PowerShell 2.0 can run PowerShell 1.0 scripts, but the reverse is not guaranteed.

    Let’s start with our first script – the typical Hello World script!

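    Something like the following will do (the exact prompt and greeting text are up to you):

    Clear-Host
    $name = Read-Host "Please enter your name"
    Write-Host "Hello $name"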

    Save this as HelloWorld.ps1 (or similar). To make your life easier for the moment – make sure there is no space in the file name or anywhere in the path that contains the file name. We’ll worry about handling spaces in a later post.

    The logic for this script is extremely simple:

    • Clear the output
    • Prompt the user for their name and store it in a variable
    • Say hello and repeat the user's name by accessing the variable.

    You have a number of choices to run this script:

    • Run it from inside the ISE by pressing F5
    • Run it from inside the ISE or PowerShell Console by typing the script name (including path) into the command prompt
    • Run it from a standard console prompt by using the following: powershell [FilePath]\HelloWorld.ps1 where [FilePath] is your relevant directory structure. In my case – my script is stored on my desktop so I’m executing powershell c:\Users\IvorB\Desktop\HelloWorld.ps1

    There you go – you should now have a working Hello World script!

    As mentioned in Part 4 of this blog series – security measures are in place to prevent scripts from executing without your permission. You have 2 choices at this stage – to digitally sign your script, or to adjust your execution policy to allow execution of unsigned local scripts. The easiest (and laziest) way to get this to work is to change the execution policy, which you can do with the following command:

    Set-ExecutionPolicy RemoteSigned

    Next step – What the heck is a Cmdlet?

    Previous step – The Integrated Scripting Environment (ISE)

    Technorati Tags:
    Posted May 11 2010, 03:13 PM by IvorB
    Filed under:
  • Windows PowerShell Series - Part 5 – The Integrated Scripting Environment (ISE)

    This is part 5 in a series of posts related to Windows PowerShell, and deals with using the Integrated Scripting Environment (ISE).

    So far we have been using the PowerShell Command Shell to execute PowerShell statements, but you also have the option of using the Integrated Scripting Environment. This is a more graphically rich interface for using PowerShell.


    You can launch it in similar ways to the PowerShell Command Shell:

    • All Programs > Accessories > Windows PowerShell > Windows PowerShell ISE
    • From a command line prompt (e.g. cmd), type powershell_ise (note the underscore)
    • Start menu > Search > type powershell_ise (note the underscore)
    • [Win] + R (for Run command) > type powershell_ise (note the underscore)

    The ISE interface is made up of the following:

    • Script/Editor pane – this is where you will spend most of your time creating scripts. Tabs across the top of the pane allow you to work on multiple scripts at the same time.
    • Output pane – this is where the output from executing your script is displayed.
    • Command pane – you can enter ad-hoc commands in this pane. It effectively behaves as a standard PowerShell Command Shell.

    This type of interface is a bit more familiar to those who are used to Visual Studio, although perhaps a little basic in direct comparison:

    • You have a text area where you can enter your script
    • You have toolbar buttons to control script execution
    • You have basic debugging facilities.
    • Useful context sensitive help – highlight a command in the script window and press F1
    • Execute part of the script without running the whole thing (highlight the text you want to run and choose “Run Selection”)
    • You can customise the environment by changing $psISE.Options, including adding new menu options (see the example below).
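    For example, from the Command pane (the font size, menu text and keyboard shortcut are all just illustrative values):

    $psISE.Options.FontSize = 14
    $psISE.CurrentPowerShellTab.AddOnsMenu.Submenus.Add(
        "Say Hello", { Write-Host "Hello from a custom menu item" }, "Ctrl+Alt+H")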

    The first time I attempted to debug a script in the ISE – I couldn’t figure out how to set a breakpoint. Coming from a Visual Studio background – I am used to creating a breakpoint whenever and wherever I like. However – the ISE refused to create my breakpoint. I eventually figured out that you simply need to save your script first and you then get access to the debugging facilities. You also need to remember that breakpoints are only remembered for your session – once you close the ISE and fire it up again, you’ve lost all your previous breakpoints.
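    As an aside, you can also create breakpoints from the Command pane using the standard Set-PSBreakpoint cmdlet (the script path below simply reuses the Hello World script from Part 6):

    Set-PSBreakpoint -Script C:\Users\IvorB\Desktop\HelloWorld.ps1 -Line 2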

    At this stage, it’s also worthwhile mentioning that Quest provides a free graphical user interface and script editor for Windows PowerShell called PowerGUI. This is definitely worth a look if you are going to be doing a lot with PowerShell.

    Next step – Getting to grips with scripts

    Previous step – Customise via Commands

    Posted May 10 2010, 04:52 PM by IvorB