Tuesday, July 29, 2008

MultiMon utopia

In my previous blog I wrote about the tools and utilities I am using and mentioned that I want to replace some of them...

One of the tools I wanted to replace was the utility I use for multi-monitor management.

Being part developer, part scripter, I really love multiple monitors. However, using multiple monitors in Windows is not as easy as it could be :(

Ideal scenario

In an ideal scenario I would have 3 monitors - a main one for my VS\script editor & Outlook, one for tons of small windows (as mentioned: Live Messenger, Last.FM, Skype...) and one for internet\documentation. I want this setup because I definitely want my primary working tool (the editor) fullscreen, want to see all the small windows at once, and want (again fullscreen) access to articles and documentation. So from a hardware perspective I find 3 monitors best.

I definitely need an easy way to move windows between screens - which is quite painful in Windows, because you must do a full drag&drop, and that is not possible when the window is maximized. This is not supported natively by Windows (shame, shame...), so I need some external utility for it.

Talking about replacements for the default Windows functionality, I want the following:

  • Easy option to move window to another monitor (HW)
  • Easy option to move window to another screen (virtual desktop)
  • Separate taskbars for each monitor (and virtual desktop)
  • Ability to minimize windows to screen instead of taskbar. Ideally having one virtual desktop to store all minimized windows.
  • Centralized notification system (for windows that are on other desktops)
  • Virtual monitors

Current setup

Currently I have 2x22" (1680x1050) monitors and my laptop (1920x1200) connected through Synergy, and I can imagine getting a third monitor ;) On one monitor I run Visual Studio\PowerGUI Script Editor together with Outlook, and on the second I have all the other utilities like Messenger, the Last.FM player, Skype etc...

For moving windows between monitors I used the trial version of UltraMon; however, once it expired, I was not sure it was what I was looking for - not many features and quite expensive.

I have no virtual desktops - I tried to use them a few years ago, however it just didn't work for me (but my situation has changed since, of course).

I have my tablet and laptop connected to my primary desktop through Synergy. Synergy is an application that allows you to control multiple computers with one mouse and keyboard (a virtual KVM without the V ;)). It is easy to configure (when the mouse reaches the left edge of the primary monitor, jump to computer XXX) and it really works.

Solutions?

Ok, let's go step by step. I will start with a basic introduction to the 3rd party utilities that are most often used for multi-monitor support:

  • MultiMon - a small application that is available for free. It allows you to easily move windows between monitors using easy-to-use title bar buttons. It also creates a separate taskbar for each monitor. A small feature (important for developers) is the extended clipboard, where you can see all (text) entries you saved to the clipboard and easily restore code you copied some time ago. The main disadvantages are that it doesn't really work with Vista (at least for me) and that there is a paid version - where all the bugs are fixed :( So if you are using XP and are looking for a free solution, I can highly recommend it. The price for the PRO version is $28.
  • UltraMon - recognized as the best multi-monitor solution, however I was not really satisfied. In fact the only features it adds are a multi-monitor taskbar and title bar buttons to move a window to another monitor. Otherwise it works fine - but I don't think $39.95 is a really good price.
  • Actual Window Manager - this is not a very famous piece of software, however I find it the best. It is the most expensive ($49.95), however it has great functionality and tons of small features and tweaks. Because MultiMon and UltraMon are quite famous and AWM is not, I will write about it in detail below.

Actual Window Manager

So, I mentioned that I really like AWM (Actual Window Manager), so I want to share some details with you. This information is based on the 5.2 beta 1 version (so the final 5.2 could have some features added).

At first I was searching only for a 2nd taskbar + move-to-monitor functionality, however AWM turned out to be more of a multi-monitor Swiss army knife.

AWM provides more than 40 tools for manipulating windows (according to their website). Some features (second taskbar, move to monitor, virtual desktops, process priorities...) are very interesting for me; some (transparency settings, stay on top...) are interesting, but not for me; and some (rollup, minimize to screen, pin to desktop...) could be very interesting for me, however they would need to be slightly modified to do exactly what I want.

There is a trial version available; I highly recommend trying it and exploring a little bit. Visually AWM is not very nice, however from a functionality perspective it is perfect.

There are also many nice tweaks and tunes that are not obvious immediately, for example modifications to cmd.exe that allow you to dynamically resize it, or special right-click behaviors. Once you start digging into the configuration, you will be amazed how much can be achieved with AWM.

Consider the simple minimizing configuration. AWM offers 3 minimization modes:

  • minimize to taskbar (the default Windows behavior)
  • minimize to systray (I prefer this; I used TaskSwitchXP to achieve it before)
  • minimize to screen (creates a small icon representing the application on the screen. Especially with DWM this could be a great enhancement; sadly the current version creates only a static icon)

That's quite nice, however you can specify further settings. For example, you can have a window minimized automatically 1 minute after it was deactivated (or immediately). Or you can specify that left-clicking the Close title button (X) only minimizes the window while right-clicking it closes the window (and of course you can enable this only for particular applications).

The power of AWM shows when you combine different tools (features) together. For example, transparency was never really interesting for me. In AWM you can configure not only transparency, but also Ghost mode (which means that any click on the application goes to the application BELOW the ghosted one). Using this you can easily create transparent (status, information etc.) windows that ignore clicks, and you can keep them in front using AWM's Always-on-top functionality. Or you can assign a special CPU priority to an application while it is minimized or inactive. Or you can specify default sizes\monitors etc. for each application...

It takes some time to get started with AWM (because you have so many options), however it is definitely worth every penny. 100% recommended (I hope I won't change my opinion soon ;)).

Another really nice thing: I used the trial version, and once it expired I decided to uninstall it and try UltraMon. After the uninstallation I was asked what I didn't like and why I uninstalled the application. I decided to be honest and wrote what I was missing (a taskbar per monitor and DWM-based minimize to screen). Soon I received an answer - not the usual "thank you for your feedback" blabla, but a real answer: the taskbar is going to be implemented in version 5.2, and the second request was registered in the wishlist.

I played a little bit with Virtual desktops, however I am missing some features there:

  • An easy way to switch desktops (a Linux\Microsoft PowerToys-like systray)
  • Multi-monitor aware virtual desktops - I want to use ONLY the secondary screen for virtual desktops; AWM instead switches both screens to the new virtual desktop. Also, based on DWM, I would like live previews of all virtual desktops (so when not otherwise needed, my second screen would show live previews of all virtual desktops).

So let's have a look at the features I wanted, now counting in AWM:

  • Easy option to move window to another monitor (HW)
  • Easy option to move window to another screen (virtual desktop)
  • Separate taskbars for each monitor (and virtual desktop)
  • Ability to minimize windows to screen instead of taskbar. Ideally having one virtual desktop to store all minimized windows. (however not DWM based)
  • Centralized notification system (for windows that are on other desktops)
  • Virtual monitors

    UPDATE: Version 5.2 of Actual Window Manager also supports a taskbar on each monitor, so now AWM is definitely my recommendation. 

    Centralized notification system

    This is one feature I really miss in Windows. You know the pop-ups from Live Messenger or Outlook whenever you receive a new mail\instant message?

    I always wanted a centralized system for this that would be supported by all Windows applications. Something like Growl for Mac OS.

    A simple interface where you specify some default settings (timeouts, stickiness...) and subscribe to the events you would like to receive (for example Outlook\New mail, Outlook\New RSS, Live Messenger\New message, Total Commander\Copy finished etc.).

    There are of course some alternatives, like Snarl. The problem is that any such solution that is not VERY WELL known will have very limited support (which is also the case with Snarl).

    A notification system is quite important once you start playing with things like virtual desktops - you can't just place your instant messenger on a (hidden) virtual desktop if you want to stay aware of what is happening :(

    Currently there is no real solution available... Microsoft, are you listening? ;)

    Virtual monitors

    A virtual monitor is not a virtual desktop. A virtual monitor should allow you to turn any device (e.g. an old laptop) into an additional monitor. The most famous application for this is called MaxiVista. It can also replace Synergy and adds more features, however it is not free ($39.95). Supported features are:

    • Extended screen a.k.a. software monitor
    • Remote control (Synergy)
    • Clipboard synchronization

    Don't get confused by the name - MaxiVista is not designed for Vista (and therefore doesn't take advantage of WDDM; it uses only XPDM) :(

    What do I need a virtual monitor for? I really like Synergy for my laptop (why use it as a monitor when I can use its 4GB RAM and dual-core processor for other tasks meanwhile), however I also have a pretty old tablet (800MHz, 256MB RAM...).

    I really prefer to read books\documents and make drawings on the tablet, however it is too old. So what I would really like is a tablet that acts as a software monitor, while I could still use the TabletPC controls (pen).

    As a workaround I could connect through remote desktop, however having the ability to drag&drop a document to the tablet would be really great.

     

    Martin

Wednesday, July 23, 2008

    PowerShell problems with transcript

    I really like the Start\Stop-Transcript functionality in PowerShell...

    However, I have already encountered a few problems:

     

    1.) Transcript doesn't save output from exe files

    2.) Log file is not creating new lines (one long line instead)

    3.) If script fails, Stop-Transcript is not automatically executed

     

    Transcript doesn't save output from exe files

    Yes, that's right. If you run the following script:

    Start-Transcript c:\temp\testscript.log
    "Starting"
    IpConfig
    "Finished"
    Stop-Transcript

    You see different output in the log file than on the screen. The workaround is pretty simple, but functional:

    Start-Transcript c:\temp\testscript.log
    "Starting"
    IpConfig | Out-Host
    "Finished"
    Stop-Transcript

    You pipe the output from the executable to a cmdlet, and it is then automatically saved to your transcript log file.

    Log file is not creating new lines

    I have experienced this a few times and I am not sure why it happens. If you run into this problem, you can use my workaround:

    Instead of

    Write-Host -ForegroundColor $Color -Object "Test"

    Use

    "Test" | Write-Host -ForegroundColor $Color

    Stop-Transcript is not executed

    It is not really a problem, however you shouldn't forget about it. The easiest way is to call Stop-Transcript from your trap statement:

    Trap { Stop-Transcript; Continue }
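To see the trap in action, here is a minimal sketch of a script that closes the transcript even when a terminating error occurs (the division by zero and the log path are just illustrative):

```powershell
Start-Transcript c:\temp\testscript.log

Trap {
    # Runs on any terminating error: close the transcript, then stop the script
    Stop-Transcript
    Break
}

"Starting"
1/0          # deliberate error - the trap fires and the transcript is closed
"Finished"   # never reached
```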

     

    Have you experienced any other problems with transcript?

    Tuesday, July 15, 2008

    Filter in PowerShell

    I was reading yesterday about filters in PowerShell - I noticed one in a script and hadn't played with them before, so I was really curious...

    While reading the help about filters (Get-Help about_Filter -Detailed), I was getting more and more excited.

    I understood it as meaning you could specify complex, multi-condition filters, something like

    Filter MyCustomFilter {
    $_.ProcessName -match '^a.*'
    $_.CPU -gt 20
    }


    Get-Process | Where {MyCustomFilter}



    That would be really cool, because then you could easily provide filtering in some XML definition.



     



    Obviously I didn't get it right. After studying PowerShell in Action for a while (the best book about PowerShell in my opinion, highly recommended), I realized that Bruce uses filters to alter data:



    PS (1) > filter double {$_*2}
    PS (2) > 1..5 | double
    2
    4
    6
    8
    10



    Consider the following filter:



    Filter StartWithA {$_.ProcessName -match '^a.*'}



    It should filter all processes and display only the ones whose names start with "a".

    If we try to use it as Get-Process | StartWithA, we get an array of booleans instead:



    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> gps | StartWithA
    True
    True
    True
    False
    False
    False
    False
    False
    False
    False
    False
    False
    ...



    If we try to use Get-Process | Where {StartWithA}, we don't get anything back.

    If we change the filter to



    Filter StartWithA {$_ | Where {$_.ProcessName -match '^a.*'}}



    then we finally get what we wanted:




    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> Get-Process | StartWithA | Select Name

    Name
    ----
    AcroRd32
    alg
    audiodg



    It appears to be a bug in the PowerShell documentation - anyway, it would be great if we could have a filter as an array of conditions in v2 ;)



    Thanks to Mow for the {$_ | Where {$_...}} example.
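As a side note on the XML-defined filtering idea above: one way to get close to it (my own sketch, not from the original post) is to build a ScriptBlock from a condition string and hand it to Where-Object:

```powershell
# The condition string could come straight from an XML definition
$condition = [ScriptBlock]::Create('$_.ProcessName -match "^a.*"')

Get-Process | Where-Object $condition | Select-Object Name
```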

    If (object exists) {...} in PowerShell

    In VB.NET I always used If Not Object Is Nothing just to be sure that I am working with a real object.

    In PowerShell, this can be done very easily:

    If ($Object) {...}

    or

    If (!$Object) {...}

    Easy, powerful and useful, I love this :)
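One caveat worth adding (my note, not part of the original tip): the test converts $Object to Boolean, so it really checks truthiness rather than pure existence - 0, an empty string, or $null are all treated as false:

```powershell
$a = $null
$b = 0
$c = "text"

If ($a) { "a is set" } Else { "a is empty" }   # a is empty
If ($b) { "b is set" } Else { "b is empty" }   # b is empty - even though 0 is a real value!
If ($c) { "c is set" } Else { "c is empty" }   # c is set
```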

    Martin

    Assigning Citrix Load Evaluator from PowerShell

    Well, there is still a lot to learn about MFCOM behavior :(

    I have already run into this issue a few times, and I wanted to confirm that Citrix XenApp 4.5 has the same issue.

    There are two different ways to assign a Load Evaluator using MFCOM.

    Server based

    The first makes more sense to me:

    1.) Load server object

    2.) Assign load evaluator to server

    The problem with this approach is that sometimes (= quite often) something goes wrong in MFCOM. If you ask MFCOM about the assigned load evaluator, you get the correct answer:

    PS C:\Temp> MFCOM:Assign-LoadEvaluator -Server ctxs-ctp -LoadEvaluator Default
    True

    In this case True means that the assigned load evaluator is the SAME as the requested one (-LoadEvaluator).

    When I try to double-check manually, everything looks fine:

    PS C:\Temp> $CtxServer = $CitrixFarm.GetServer([MF.Object]::WinSrvObject, "ctxs-ctp")
    PS C:\Temp> $LE = $CtxServer.AttachedLE
    PS C:\Temp> $LE.LoadData($True)
    PS C:\Temp> $LE.LEName
    MFDefaultLE
    PS C:\Temp>

    If you however look in the CMC, you see the old load evaluator:

    [screenshot]

    Hmmmmm, that is really strange. To make it even more confusing, let's select Load Evaluators in the CMC:

    [screenshot] Now change the view to the "By Evaluator" report - here we go, our server has 2 Load Evaluators assigned :(

    [screenshot]

     Load Evaluator based

    You can also use the second method:

    1.) Get Load Evaluator

    2.) Assign server to it

    I don't like this method; it doesn't feel right to me - I want to assign a Load Evaluator to a server, not the other way around.

    The problem is that this is the method that works :(

    $RequestedLE = New-Object -ComObject MetaFrameCOM.MetaFrameLoadEvaluator
    $RequestedLE.LEName = $LoadEvaluator
    $RequestedLE.LoadData($True)
    $RequestedLE.AttachToServerByName($True, $Server)
    $RequestedLE.SaveData()

    Martin

    Monday, July 14, 2008

    Translating books to PowerShell

    As mentioned many times, PowerShell is great for one liners.

    Currently a few blogs are translating books to PowerShell:

    Matrix

    No Exit

    Hamlet

    MacBeth

    If you follow all the posts over time, you can see that the scripts are getting smaller and smaller... Well, what about rewriting the 1216 pages of The Lord of the Rings? :)

    While ($OneRing) {Continue-Journey -Companion (("Frodo", "Sam", "Pippin", "Merry") + $TemporaryHeroes) -Destination "Mount Doom" -Enemy $Sauron -EvilArmy (1..1000000000000)}

    Martin

    Working with Citrix from PowerShell - custom enumerations

    As I continue to work on my PS workflow, the time has come to start building the Citrix components...

    I had tons of VBScripts written before, so this part shouldn't be that hard... One of the functions I wanted to have is Get-AppsFromFolder, which dumps the published applications from a specific folder.

    I love this, because then you can have really, really fast enumeration if you want to retrieve published applications based on some filter (for example, show me all applications in "applications/primary"). In huge enterprise environments, parsing all published icons and filtering the output can take ages.

     

    In my VBScript I had line
    Set rootAppFolder = theFarm.GetRootFolder(MetaFrameAppFolder)

    I tried $AppFolder = $CitrixFarm.GetRootFolder(MetaFrameAppFolder). Of course it didn't work - I realized that MetaFrameAppFolder is a constant whose numeric value should be used as the argument here...

    Well, I don't really like $AppFolder = $CitrixFarm.GetRootFolder(12), so I started to write constants. Well, I wrote 2-3 of them and then tried Google :)

    I found Citrix MFCOM Enums from Brandon Shell and it saved me tons of time - however, because I want to implement user-friendly functions and don't want to spend hours and hours implementing special argument checks, I decided I would rather create custom enumerations.

    Then for example if my script requires color as parameter, I can just use simple function:

    Function Test-Color([MF.Color]$Color)

    and don't need any other validation code - the only accepted values from now on are MF.Color enums...

     

    For example I use following to add all supported color values:

    New-Enum MF.Color Unknown 16 256 64K 16M

    If you try to use Test-Color with an unsupported value, for example Red, you get the following error:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> Test-Color -Color Red
    Test-Color : Cannot convert value "Red" to type "MF.Color" due to invalid enumeration values. Specify one of the follow
    ing enumeration values and try again. The possible enumeration values are "Unknown, 16, 256, 64K, 16M".
    At line:1 char:18
    + Test-Color -Color  <<<< Red

    As you can see, as a "side effect" you get all the possible enumeration values... You can also use [enum]::GetValues([MF.Color]) to list all possible values.

    Right now I can't publish all the enumerations I have created, but you can easily translate them from Brandon's excellent blog...
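New-Enum itself is not shown above; here is a minimal sketch of how such a function could look (my own hypothetical implementation using Add-Type from PowerShell v2 - note that members such as 16 or 64K are not valid C# identifiers, so a real implementation needs extra mapping for those):

```powershell
Function New-Enum {
    param([string]$FullName)
    # Split e.g. "MF.FolderType" into namespace "MF" and type name "FolderType"
    $idx  = $FullName.LastIndexOf('.')
    $ns   = $FullName.Substring(0, $idx)
    $name = $FullName.Substring($idx + 1)
    # All remaining arguments become enumeration members
    $members = $args -join ', '
    Add-Type "namespace $ns { public enum $name { $members } }"
}

New-Enum MF.FolderType Applications Servers
[enum]::GetValues([MF.FolderType])   # Applications, Servers
```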

    Friday, July 11, 2008

    Speed up PowerShell startup times

    For a long time I have been using the script Speed-Startup.ps1. It is part of my automated PowerShell installation. I try to keep all my PS-related stuff (scripts, profiles) under Subversion - easy to use, free, easy to set up, and it has already saved me a few times.

    $NgenLocation = @(dir -Path (Join-Path $env:windir "Microsoft.NET\Framework") -Filter "ngen.exe" -Recurse |
        Sort-Object -Descending LastWriteTime)[0].FullName

    If (Test-Path -PathType Leaf -Path $NgenLocation) {
        $CurrentAccount = New-Object System.Security.Principal.WindowsPrincipal([System.Security.Principal.WindowsIdentity]::GetCurrent())
        $Administrator  = [System.Security.Principal.WindowsBuiltInRole]::Administrator

        If ($CurrentAccount.IsInRole($Administrator)) {
            # Pre-compile (ngen) every assembly loaded in the current AppDomain
            [AppDomain]::CurrentDomain.GetAssemblies() | ForEach { . $NgenLocation $_.Location }
        } Else {
            Write-Host -ForegroundColor Red "You must be local administrator."
        }
    } Else {
        Write-Host -ForegroundColor Red "Ngen.exe was not found"
    }




     



    You should always run it, because sometimes assemblies are not ngen'ed and then PowerShell loads sloooooooowly... I just noticed that Jeffrey posted a reminder on how to speed up startup.



    I am however curious whether there is any way to pre-load the .NET environment if you use PowerShell for your logon scripts in a Terminal Services\Citrix environment (that is, an environment where multiple users are logged on to the same server).



     



    Martin

    Returning objects from PowerShell functions

    This can be very confusing and I am sure I will need to describe it to some people in the future, so I would rather write a small post about it now and just send them the link later ;)

    The difference between subroutines and functions is that functions return some data...

    So for example, a Get-Date that RETURNS the date is a function. A Get-Date that ONLY displays the date on screen would be a subroutine.

    If you are used to programming, usually you do something within the function body and then you return some object.

    Let's have a look at the example below. You just run the function and it should return the string "Test":

    Function Test {
    Write-Host 'I want to return object of [String] type with value "Test"'
    Return "Test"
    }

    Makes sense, right? If we run it, we see the following:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> Test
    I want to return object of [String] type with value "Test"
    Test

    Looks fine so far. Now let's try to assign the output of that function to a variable:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output = Test
    I want to return object of [String] type with value "Test"
    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output
    Test

    Still completely normal. But I want to make that function reaaaally really short, so I will remove the Write-Host; it's not really needed:

    Function Test {
    'I want to return object of [String] type with value "Test"'
    Return "Test"
    }

    Still makes perfect sense, right? But it won't work as expected:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output = Test
    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output
    I want to return object of [String] type with value "Test"
    Test

    As you can see, both lines are returned... The reason for this is simple:

    If you assign the output of a function to a variable, it is NOT just the object specified after the Return statement that gets returned, but the whole output of the function.

    The only exception (as far as I know) is Write-Host, which is ignored. This behavior can be VERY confusing, because your function can work perfectly, but then suddenly (once you improve it or add more code) it returns an object array instead of Xml.XmlElement etc.

    As a workaround, I also pass the output variable name as a parameter:

    Function TestOutput ([string]$OutputVariable) {
        Write-Host $OutputVariable
        # Set the variable one scope up (in the caller)
        Set-Variable -Value 0 -Name $OutputVariable -Scope 1
    }

    $MyVar = 1
    TestOutput "MyVar"   # $MyVar is now 0

    As you can see, the output is assigned to the variable provided as an argument. I don't really like this, however I still think it can make your functions more robust. This is especially the case when processing XML - some methods automatically emit the result of the operation, and then your output is immediately corrupted.
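For the XML case specifically, another approach (my sketch, not from the original workaround) is to suppress the unwanted method output explicitly, so only the intended object is emitted:

```powershell
Function Add-Item {
    $xml  = [xml]"<root></root>"
    $node = $xml.CreateElement("item")
    # AppendChild returns the appended node; without [void] that node
    # would become part of this function's output as well
    [void]$xml.DocumentElement.AppendChild($node)
    # alternative: $xml.DocumentElement.AppendChild($node) | Out-Null
    Return $xml
}

$result = Add-Item
$result.GetType().Name   # XmlDocument - nothing else leaked into the output
```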

     

    How do you solve this, any ideas???

    PowerShell naming conventions

    Well, as you probably all know, PowerShell uses a verb-noun naming convention.

    I really love it. In fact I have been using it for a few years already (Install-Server.cmd, Get-Users.vbs etc.).

    Obviously I used it only for naming scripts (well, I also used it for functions, but that's a different story). The problem with PowerShell is that many names I really like are already taken :( For example New-Object (that's one I really need now ;)).

    So I decided to use a kind of namespace. My current project is called Solution4 Maintenance (S4M for short), so I am using
    S4M:New-Object or S4M:Move-Object. It is obvious and I can use the names that fit me best.

    Another advantage for me is that I can easily use
    Get-ChildItem Function:\S4M:* to see all the functions that are available.
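For illustration (hypothetical function, not from my real project), defining and listing such a namespaced function looks like this - the colon is simply part of the function name:

```powershell
Function S4M:Get-Version { "1.0" }

S4M:Get-Version                                      # -> 1.0
Get-ChildItem Function:\S4M:* | Select-Object Name   # lists all S4M: functions
```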

    Which naming convention do you use for your scripts? Really curious about it :)

    Friday, July 4, 2008

    PowerShelling day 2

    So I finally started on my first script, after spending a lot of time studying and designing (mostly designing; I am learning on the fly now ;))

    Today I learned a lot and also refreshed my memory - my whole day was a combination of "Whoa, that's great" and "Grrrrr, doesn't work" ;) I love learning new stuff, and with PowerShell there is a lot to learn :)

    So which problems\solutions did I run into? As this is my blog, I also like to use it as a reminder ;)

    PowerGUI

    First of all, I started to use the great (free) product PowerGUI. It consists of two parts - the first is a visualization layer for PowerShell (this was promised for the new version of MMC; I'm not sure whether that is still the case). This is quite cool, however it is not what I am looking for right now.

    The second part is the really great PowerGUI Script Editor - an IDE for PowerShell. I started to use it today and I am already very satisfied; it is really great :)

    My bad...

    I also ran into a few problems because I am used to VB.NET programming - I got stuck on one function that simply didn't work correctly. It was supposed to accept 3 parameters, however the second and third were always ignored. Skilled powershellers probably already know where the problem was - yes, SomeFunction(Param1, Param2, Param3) passes one array, not 3 standalone arguments :) Ouch, learning something new always hurts, especially if you KNOW about the problem and just forgot (and then you look at the code and everything seems perfectly normal ;))
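The pitfall looks like this (hypothetical function name):

```powershell
Function Show-Args ([string]$a, [string]$b, [string]$c) {
    "a=$a b=$b c=$c"
}

Show-Args("one", "two", "three")   # WRONG: ONE array is bound to $a; $b and $c stay empty
Show-Args "one" "two" "three"      # correct: three separate arguments
```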

    Implementing debug log

    For my script I wanted to implement a debug log. The idea is pretty simple: because the script will run as a scheduled task, I want to be able to see what happened in case something goes wrong (this is generally the problem with scheduled tasks - if they get stuck somewhere, without thorough logging there is no way to find out where or why). When I wrote batch scripts, I used MTee for this purpose: you create a scheduled task that automatically generates a log file of the script's console output, which is really helpful. In PowerShell, I remembered there was a cmdlet called Start-Transcript. I used it a few years ago for presentations (hmmmmm, maybe that was the reason it was even created ;)). So at the beginning of my script I added Start-Transcript -Force -Path $DebugLog. After playing a little bit I realized there were two problems.

    PowerShell has command echoing disabled by default when you run a script (as a reminder, in batch files it is enabled by default and is configured with the @Echo on\off command). This makes complete sense, however it was not ideal for my debug scenario. Leo Tohill (thanks again) pointed me to Set-PSDebug. The parameter -trace 1 enables something very similar to command echoing; it is not exactly what I want (the format is quite hard to read with a complex script), however it helped me a lot :)

    The second problem was mentioned by Shay Levy - the transcript captures ONLY PowerShell output, so if you run for example ipconfig, you won't see its output in the debug log file. As a workaround I tried Tee-Object around the whole script (S4Maintenance.ps1 | Tee-Object ...). To my surprise the situation was the opposite - no PowerShell information, only the output from external binaries :D So after a while I came up with quite a simple solution that works fine so far. I use Start-Transcript, and whenever I need to call an external binary, I pipe it through the function Fix-Transcript:

    Function Fix-Transcript {
        Process {
            Write-Host $_
        }
    }

    For example with ipconfig this means ipconfig | Fix-Transcript. This way the output from ipconfig is also stored in the transcript file.

    Scopes

    I decided I want to use scopes as well. In batch files I implemented a lot of scoping - not only global\local (SetLocal), but also using the Set <VarPrefix> behavior of the Set command in cmd. For details, have a look at this blog post. By specifying a prefix (for example Private.), I was able to automatically destroy all such variables at the end (For /f "usebackq tokens=1,* delims==" %%i IN (`Set Private.`) Do Set %%i=).

    In PowerShell it is much easier. For my script I would really like to allow fallbacks, which is very hard in batch files: you specify some default values and you can override (NOT OVERWRITE) them in sub-scripts or functions. I ran into one problem with PowerShell though.

    Consider an example where I want a (default\global) variable called $JobStorage. This variable must be provided as a command line argument. Have a look at the following line:

    Param (
        [string]$Global:JobStorage = $(throw "You must specify the folder where your jobs are stored as a parameter."), #Specify folder where jobs are stored

    Looks correct, right? If you try .\S4Maintenance.ps1 C:\Jobs, it works fine. But I really hate position-based parameters - I prefer name-based whenever possible. PowerShell automatically supports named parameters, so instead of .\S4Maintenance.ps1 C:\Jobs I can use .\S4Maintenance.ps1 -JobStorage C:\Jobs.

    The problem is that if you specify a scope, you can't use the named parameter. This makes sense and I don't expect it could be considered a bug. As a workaround, I assign the value to a global variable afterwards:

    Param ([string]$JobStorage = $(throw "You must specify the folder where your jobs are stored as a parameter."))

    $Global:JobStorage = $JobStorage

    This works as expected.

    Background processing

    I was also thinking about background processing and ran into a really nice post here. Highly recommended; I will probably use it once my basic code is finished. BTW, background processing is supported in PowerShell v2.

    Array index evaluated to null

    However, of course I got stuck on something :( I use a hashtable for some of my entries, but one returns a quite strange error that even Google is not able to help with:

    Index operation failed; the array index evaluated to null.
    At C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance\S4Maintenance.ps1:108 cha
    r:19
    +                         $Containers[$ <<<< ($Container.Name)] = $Container

    The code I am trying to execute is nothing special as far as I can tell:

    $ContainersToLoad = @{}

    ForEach ($Target in $($Container.failed.targetcontainer)) {
        $ContainersToLoad[$($Target.Name)] = $Target.Name
    }
    ForEach ($Target in $($Container.finished.targetcontainer)) {
        $ContainersToLoad[$($Target.Name)] = $Target.Name
    }
    Write-Host $ContainerToLoad.Count
    If ($ContainerToLoad.Count -gt 0) {
        ForEach ($Target in $ContainersToLoad) { $Target }
    }

     

    UPDATE: Of course, immediately after I posted this I realized where the problem was (within the next minute ;)) - one small typo: instead of $ContainersToLoad I used $ContainerToLoad ;) I will probably have a look at the strict option of Set-PSDebug tomorrow ;)
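For reference, this is what the strict option does with exactly this kind of typo - reading a variable that was never assigned becomes an error instead of silently returning $null (error text paraphrased from memory):

```powershell
Set-PSDebug -Strict

$ContainersToLoad = @{ "a" = "a" }

# Typo: ContainerToLoad (missing 's') was never assigned
Write-Host $ContainerToLoad.Count
# -> error: the variable '$ContainerToLoad' cannot be retrieved because it has not been set
```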