Wednesday, December 24, 2008

End of year

Hello everyone,

The end of the year is here – so merry Christmas to all of you :)

For next year I want to promise you something: I will write at least one post per week. What can you expect?

I think that Windows 7 will be a very important OS release. As I mentioned a few times in my posts, Windows Vista can be seen as this era's Windows 2000 – new features, changes in Microsoft's approach and so on, but still many annoyances and missing features. Windows 7 will be more like Windows XP – take an already existing major release (Windows 2000, in XP's case), add new features, and change some settings.

I really, really like Windows Vista – there are many features that I really love and that are NOT obvious to normal users (Mandatory Integrity Control or I/O priorities, for example), and I think that Windows 7 will make use of all these features.

As I am one of the official MVP moderators for Windows 7, I want to share my thoughts with you, and I hope I'll be able to help make Windows 7 the next-gen Windows XP – an OS that we will happily use for years :)

You can also expect a lot of posts about a project I have been working on called S4Matic. S4Matic is a workflow framework based on PowerShell that allows you to easily automate processes. Consider a scenario where a new employee arrives and all you need to do is specify his name – his AD account is generated automatically, he is added to all required groups, his mailbox is created, and a few emails are sent automatically (for example, to request a new badge or a new mobile phone for him). This is quite a simple scenario – consider more complex ones, for example when you have hundreds of Citrix servers and you want to reboot them during the weekend without any downtime (I have already implemented this). Or even better – you want to rebuild all servers during the weekend without affecting any users. Sounds cool? You will see what S4Matic can do :)
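To give a flavor of the workflow idea (this is NOT actual S4Matic code – the step names and structure below are purely illustrative), such a process can be sketched in PowerShell as an ordered list of script blocks sharing one context object:

```powershell
# Toy sketch of a workflow: each step is a scriptblock that receives
# a shared context; the "framework" simply runs the steps in order.
$NewEmployeeWorkflow = @(
    { param($ctx) "Created AD account for $($ctx.Name)" },
    { param($ctx) "Added $($ctx.Name) to groups: $($ctx.Groups -join ', ')" },
    { param($ctx) "Created mailbox for $($ctx.Name)" },
    { param($ctx) "Sent badge request email for $($ctx.Name)" }
)

# All you specify is the employee's name (plus any extra data the steps need)
$Context = @{ Name = "John Doe"; Groups = "Staff", "VPN Users" }
$NewEmployeeWorkflow | ForEach-Object { & $_ $Context }
```

In the real scenario each scriptblock would of course call the corresponding AD\Exchange cmdlets instead of returning a status string.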

Of course I want to continue with my PowerShell blogging activities. I don't like to post cool functions (there are many better bloggers for that); I prefer to write about general principles and how things work (or even better, what doesn't work and why).

This blog is not only about technical stuff, so you can also expect some productivity posts – for example, how versioning can improve your daily work or how to organize your drives.

And hopefully I will post some small programs and utilities :) And many, many tips & tricks :)

Wednesday, December 17, 2008

What's The Difference Between Size And Size On Disk In Folder Properties

Many people don’t understand the difference between these two values:



Today I ran into a small blog post that describes these two values. However, there is still something I am missing in Windows. We should be able to see 3 values:


Size

Size on disk

Virtual size on disk

Why is that? Because especially with Windows Vista\2008\7, symlinks and hardlinks play a much more important role.

For example, take the WinSxS folder in the Windows directory. If you check its size, you will probably be surprised that it is really big – in my case it is 10GB. Surprisingly, this folder is NOT that big, because most files (or many of them) are in fact only hardlinks to existing files.

Confusing? Yes.

Do you know the real size of that folder? No.

The same applies to my "loopback" symlink. I tried to use a utility (Gladinet) that allows you to share some folders between computers. However, I didn't want to share only a folder, I wanted to share the whole C: drive (which is, as far as I know, not supported by that utility). So I used a simple trick and created a C:\Links\Loopbacks\C folder that is a symlink to C:\. Worked great :) But as you can see below, that folder looks really big (even though it DOESN'T occupy 75GB on my disk):
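For reference, a loopback symlink like this can be created from an elevated prompt on Vista and later (mklink is a cmd.exe built-in, so from PowerShell it is invoked through cmd /c; the paths below are just the ones from my setup):

```powershell
# Create the parent folder, then a directory symlink pointing back to C:\
# (requires Vista or later and administrator rights)
New-Item -ItemType Directory -Path C:\Links\Loopbacks -Force
cmd /c mklink /D C:\Links\Loopbacks\C C:\
```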


I think this can be pretty confusing\annoying in some situations – in my case, if I calculated size per folder, I would get at least 150GB (on my 110GB HDD) :)

Thursday, December 11, 2008

Google Zeitgeist

What I really like about the internet is the fact that you can sometimes get access to real-life data. For example, what is the best method for identifying the most popular songs of the past year?

For some it may be the charts – but charts always depend on radio listeners or radio genre. Another approach could be the number of records sold, however that doesn't say anything about the number of listeners – for example, if a song is really great but the album is really bad, people won't buy the album and will download the MP3 instead. Not to mention the fact that most people don't buy CDs anymore (IMHO) and are buying\downloading MP3s only (and I think that the number of downloads is much higher than the number of purchases).

So for me the best method to identify the hits of the past year is to review the numbers from Last.FM (which has millions of listeners and is not focused on a single genre).

Want to know more? Have a look at Last.FM Best of 2008.

Not interested in music? What would be the best method to see changes worldwide, or in your country?

A great idea would be to base it on search queries – especially in the case of a big player such as Google (which is used by different people all around the world). I blogged about Google Trends already, and when I tried to play with some trends of the year 2008, I discovered a very interesting press page called Google Zeitgeist that has already done this job. Very interesting, with details about each country :)

Wednesday, December 10, 2008

LastPass - Free password manager

First of all – I was very busy with a few projects, and that is why I didn't post anything on this blog for a long time. The situation is better now, and from the beginning of next year I would like to blog every week again, so stay tuned :)

Maybe you remember that I wrote about the tools and utilities I am using and what I am missing. One of them was a password manager, and I finally found one I really like.

My requirements:

  • Online synchronization, because I use multiple computers
  • Support for multiple accounts on one site (typically Live ID or Gmail accounts)
  • Support for one account across multiple sites (again Live ID\Open ID\Gmail accounts)
  • Ability to access accounts\passwords online

Two months ago I found a tool called LastPass. The major features they are promoting are online synchronization and the fact that it is available for free.

From a security perspective, the central site DOESN'T store your password – it stores your data encrypted with your master password.

There are many cool features I really like about LastPass... Because it has online synchronization, I finally started using autogenerated passwords (before, I was scared of what I would do in case of data loss)... You can even share sites with your friends.

So in case you are interested, head directly to the LastPass homepage.

Tuesday, November 18, 2008

ZoneAlarm Pro for free

You can get ZoneAlarm Pro for free today – however, be careful: it is NOT a full security suite as many sites are reporting, it is "only" a firewall & antispyware suite (which I have no use for – both are already included in Vista by default)...

Anyway, if you find this information useful, grab a copy while it is available ;)

Saturday, August 2, 2008

Returning values from Functions in PowerShell... Tricky :(

First let me explain something: I LOVE POWERSHELL...

No, I really do. After 3 years of working every day with batches, VBScript & .NET, I can see huge, huge potential in PowerShell, and the more I use it, the more I think it is a genius shell.

However I am always trying to be skeptical about everything I do - and about every product I love.

I don't want to end up like some people who think their favorite product is PERFECT – and when a new version arrives, they suddenly realize that something that was PERFECT is now even more PERFECT ;)


I worked with Monad\PowerShell a lot 3 years ago, then I stopped and returned to batches (because of my work). It was quite fun, and it turned out that with good imagination you can still achieve a lot with batches (script blocks, complex scopes, private variables, subroutines, functions etc. ;)) – however, it was always just about building workarounds. A month ago I returned to PowerShell and now I work with it on a daily basis – and of course I have already encountered some things I don't really like.

Working with function output

If you are familiar with programming, the following code should be easy to understand:

Function bar {
$MyVariable = "Foo"
Return $MyVariable
}

Function bar returns $MyVariable, so using $X = bar should in effect be equivalent to $X = $MyVariable.

Not in PowerShell. $X will get the output of the WHOLE function. That means that the following function is equivalent to the one above:

Function bar {
$MyVariable = "Foo"
$MyVariable
}

I really, really don't like this behavior. It makes creating complex scripts much more complicated. Consider an example where you run into some problems in the bar function. The most primitive debugging method is always to use Echo (Write-Host) and just look at the value itself.

To debug it, I would change the function as follows:

Function bar {
$MyVariable = "Foo"
Write-Host "DEBUG: $MyVariable"
Return $MyVariable
}

The result would be as expected – if I use $X = bar, $X will be Foo. However, what if I try the following:

Function bar {
$MyVariable = "Foo"
"DEBUG: $MyVariable"
Return $MyVariable
}

Looks fine? But it doesn't work as expected – $X in this case is an array containing both "DEBUG: Foo" and "Foo", and you will never see the "DEBUG: Foo" output in the console. I really think this is against the object-oriented principle of PowerShell, and it reminds me of the 'For /f' behavior in batches.
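You can see what really happened by inspecting $X (a quick check using the same function):

```powershell
Function bar {
    $MyVariable = "Foo"
    "DEBUG: $MyVariable"   # goes into the output stream, not to the console
    Return $MyVariable
}

$X = bar
$X.Count        # 2 - both strings ended up in $X
$X[0]           # DEBUG: Foo
$X[1]           # Foo
```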

You probably won't run into this problem often – however, you will run into it once you start creating really complex scripts with XML handling, adding new elements etc. Usually (in .NET), if you add something to an ArrayList, for example, the new element's index is returned.

Look at the following function. The variable $MyVariable is created, two elements are added ('a' and 'b'), and then it is returned:

Function bar {
[System.Collections.ArrayList]$MyVariable = @()
$MyVariable.Add("a")
$MyVariable.Add("b")
Return $MyVariable
}

Normally you would expect to get an array that contains 'a' and 'b'. But because of the PowerShell way of returning output, you will instead get an array with 4 (!) entries: 0 1 a b.

0 and 1 in this case are the indexes of the added array elements.

To make this function work as expected, you need to redirect that output to null:

Function bar {
[System.Collections.ArrayList]$MyVariable = @()
$MyVariable.Add("a") | Out-Null
$MyVariable.Add("b") | Out-Null
Return $MyVariable
}
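Out-Null is not the only way to suppress the leak – casting the call to [void] (or assigning it to $null) works too and avoids starting a pipeline. A variant of the same function:

```powershell
Function bar {
    [System.Collections.ArrayList]$MyVariable = @()
    # The [void] cast discards the index that Add() returns,
    # so nothing leaks into the function's output stream
    [void]$MyVariable.Add("a")
    [void]$MyVariable.Add("b")
    Return $MyVariable
}
```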

Usually I try to write all my functions in the following format:

Function <Name> {
<initial checks>
<function body>
<final checks before return>
Return <value>
}

With PowerShell this is usually not possible, because even if you check that $MyVariable has the correct type and correct values etc., it is quite hard to know whether something "leaked" into the output earlier.

For me, the ideal solution would be for PowerShell to support BOTH methods of returning:

Function bar {
$MyVariable = "Foo"
Return $MyVariable
}

would return only the object $MyVariable, while

Function bar {
$MyVariable = "Foo"
$MyVariable
}

would return the whole output of the function.

Friday, August 1, 2008

How to manage your small development projects?

Some of you are maybe hobby developers, as I am... I develop partially for my company, and I also have a few free public projects, like Elevator.

There are 3 tools that I always must have when I work with my projects:

  • Subversion - used for versioning, very easy to setup and use, available for free, no need for database, integrated to shell.
  • Visual Studio - well, of course :)
  • PowerGUI - it contains very nice PowerShell script editor

If you however have some professional projects (for me that means projects with a fixed deadline ;)) and you are not working for a professional development company, it can be pretty tough to keep the project on track and not spend days implementing small features and tweaks. It is very important to organize your time and keep track of everything you have to do.

At the beginning I was a "wild" developer – that means no versioning system, all notes in txt files, and backups and rollbacks implemented by copy&paste (or the popular ".old", ".bak", ".backup", ".001" etc.). Of course, this was a long time ago.

Later on I started working with Subversion and storing my notes in Visio. Still, that was not an ideal solution.

A few days ago I discovered the Unfuddle webpage. Unfuddle is a hosting service – it provides you with a basic bug tracking system (that meets all my needs), Subversion hosting, a note-taking system...

And best of all – they also have a free edition. In the free edition you are limited to 1 project and 2 people, but it is still a great way to discover Unfuddle.

Tuesday, July 29, 2008

MultiMon utopia

On my previous blog I wrote about the tools and utilities I am using, and I also mentioned that I want to replace some of them...

One of the tools I mentioned I want to replace was a utility for multi-monitor management.

Being partially a developer\scripter, I really love multiple monitors. However, in Windows, using multiple monitors is not as easy as it could be :(

Ideal scenario

In an ideal scenario I would have 3 monitors – one main monitor for VS\script editor & Outlook, one for tons of small windows (Live Messenger, Last.FM, Skype...), and one for internet\documentation. I would like this setup because I definitely want to have my primary working tool (the editor) fullscreen, see all the small windows at once, and have (again fullscreen) access to articles and documentation. So from a hardware perspective, I find 3 monitors best.

I definitely need an option to easily move windows between screens – which is quite painful in Windows, because you must do a full drag&drop, and this is not supported when the window is maximized. Obviously this is not supported by Windows natively (shame, shame...), so I need an external utility for it.

Talking about a replacement for the default Windows functionality, I want the following:

  • Easy option to move window to another monitor (HW)
  • Easy option to move window to another screen (virtual desktop)
  • Separate taskbars for each monitor (and virtual desktop)
  • Ability to minimize windows to screen instead of taskbar. Ideally having one virtual desktop to store all minimized windows.
  • Centralized notification system (for windows that are on other desktops)
  • Virtual monitors

Current setup

Currently I have 2x22" (1680x1050) monitors plus my laptop (1920x1200) connected through Synergy, and I can imagine getting a third monitor ;) On one monitor I run Visual Studio\PowerGUI Script Editor together with Outlook, and on the second I have all the other utilities like Messenger, the Last.FM player, Skype etc...

For moving windows between monitors I used the trial version of UltraMon; however, once it expired, I was not sure whether it was what I was looking for – not many features and quite expensive.

I have no virtual desktops – I tried to use them a few years ago, but they just didn't work for me (my situation has changed since, of course).

I have my tablet and laptop connected to my primary desktop through Synergy. Synergy is an application that allows you to control multiple computers with one mouse and keyboard (a virtual KVM without the V ;)). It is easy to configure (when the mouse reaches the left edge of the primary monitor, jump to computer XXX) and it really works.


OK, let's go step by step. I will start with a basic introduction to the 3rd party utilities that are most often used for multi-monitor support:

  • MultiMon - a small application that is available for free. It allows you to easily move windows between monitors using easy-to-use title bar buttons. It also creates a separate taskbar for each monitor. A small feature (important for developers) is the extended clipboard, where you see all (text) entries you saved to the clipboard and can easily restore code you copied some time ago. The main disadvantages are that it doesn't really work with Vista (at least for me) and that there is a paid version – where all the bugs are fixed :( So if you are using XP and are looking for a free solution, I can highly recommend it. The price for the PRO version is $28.
  • UltraMon - recognized as the best multi-monitor solution; however, I was not really satisfied. In fact, the only features it adds are a multi-monitor taskbar and title bar buttons to move a window to another monitor. Otherwise it works fine – but I don't think $39.95 is a really good price.
  • Actual Window Manager - this is not a very famous piece of software; however, I find it the best. It is the most expensive ($49.95), but it has great functionality and tons of small features and tweaks. Because MultiMon and UltraMon are quite famous and AWM is not, I will write about it in detail below.

Actual Window Manager

So, I mentioned that I really like AWM (Actual Window Manager), and I want to share some details with you. This information is based on the 5.2 beta 1 version (so the final 5.2 could have some features added).

At first I was searching only for a second taskbar + move-to-monitor functionality; however, AWM turned out to be more like a multi-monitor Swiss army knife.

AWM provides more than 40 tools for manipulating windows (according to their website). Some features (second taskbar, move to monitor, virtual desktops, process priorities...) are very interesting for me; some (transparency settings, stay on top...) are interesting, but not for me; and some (roll up, minimize to screen, pin to desktop...) could be very interesting for me, but they would need to be slightly modified to do exactly what I want.

There is a trial version available; I highly recommend trying it and exploring a little bit. Visually AWM is not very nice, but from a functionality perspective it is perfect.

There are also many nice tweaks that are not obvious immediately, for example modifications to cmd.exe that allow you to resize it dynamically, or special right-click behaviors. Once you start digging into the configuration, you will be amazed how much can be achieved with AWM.

Consider the simple configuration options for minimizing. AWM supports 3 minimization modes:

  • minimize to taskbar (the default Windows behavior)
  • minimize to systray (I prefer this; I used TaskSwitchXP to achieve it)
  • minimize to screen (creates a small icon representing the application on the screen; especially with DWM this could be a great enhancement, but sadly the current version creates only a static icon)

That's quite nice; however, you can specify other settings too. For example, you can specify that a window should be minimized automatically 1 minute after it is deactivated (or immediately). Or you can specify that if you left-click the Close title button (X), the window will only minimize, and if you right-click it, the window will close (and of course you can enable this only for particular applications).

The power of AWM comes when you combine different tools (features) together. For example, transparency was never really interesting for me. In AWM you can configure not only transparency, but also a Ghost mode (which means that anytime you click on the application, the click goes to the application BELOW the ghosted one). Using this, you can easily create transparent (status, information etc.) windows that ignore clicks, and you can keep them in front (using AWM's Always on top functionality). Or you can specify a special CPU priority for when an application is minimized or inactive. Or you can specify a default size\monitor etc. for each application...

It takes some time to get started with AWM (because you have so many options), but it is definitely worth every penny. 100% recommended (I hope I won't change my opinion soon ;)).

Another really nice thing: I used the trial version, and once it expired, I decided to uninstall it and try UltraMon. After uninstallation I was asked what I didn't like and why I uninstalled the application. I decided to be honest and wrote what I was missing (a per-monitor taskbar and DWM-based minimize to screen). Soon I received an answer – not the usual "thank you for your feedback blabla", but a real answer: the taskbar is going to be implemented in version 5.2, and the second request was registered in the wishlist.

I played a little bit with virtual desktops; however, I am missing some features there:

  • An easy way to switch desktops (a Linux\Microsoft PowerToys-like systray icon)
  • Multi-monitor aware virtual desktops - I want to use ONLY the secondary screen for virtual desktops; AWM instead switches both screens to the new virtual desktop. Also, based on DWM, I would like to have a live preview of all virtual desktops (so when not needed otherwise, my second screen would show live previews of all virtual desktops).

So let's have a look at the features I wanted, counting in AWM:

  • Easy option to move window to another monitor (HW)
  • Easy option to move window to another screen (virtual desktop)
  • Separate taskbars for each monitor (and virtual desktop)
  • Ability to minimize windows to screen instead of taskbar. Ideally having one virtual desktop to store all minimized windows. (however not DWM based)
  • Centralized notification system (for windows that are on other desktops)
  • Virtual monitors

UPDATE: Version 5.2 of Actual Window Manager also supports a taskbar on each monitor, so now AWM is definitely my recommendation.

    Centralized notification system

This is one feature I really miss in Windows. You know the pop-ups from Live Messenger or Outlook whenever you receive a new mail\instant message?

I always wanted a centralized system for this that would be supported by all Windows applications. Something like Growl for Mac OS.

A simple interface where you specify some default settings (timeouts, stickiness...) and subscribe to the events you would like to receive (for example Outlook\New mail, Outlook\New RSS, Live Messenger\New message, Total Commander\Copy finished etc.).

There are of course some alternatives, like Snarl. The problem is that any such solution that is not VERY WELL known will have very limited support (which is also the case with Snarl).

A notification system is quite important once you start playing with things like virtual desktops – you can't just move your instant messenger to a (hidden) virtual desktop if you want to be aware of what is happening :(

    Currently there is no real solution available... Microsoft, are you listening? ;)

    Virtual monitors

A virtual monitor is not a virtual desktop. A virtual monitor should allow you to turn any device (an old laptop, for example) into an additional monitor. The most famous application is called MaxiVista. It can also replace Synergy and add more features; however, it is not free ($39.95). Supported features are:

    • Extended screen a.k.a. software monitor
    • Remote control (Synergy)
    • Clipboard synchronization

Don't get confused by the name – MaxiVista is not designed for Vista (and therefore doesn't take advantage of DWM; it uses only XPDM) :(

What do I need a virtual monitor for? I really like Synergy for my laptop (why use it as a monitor when I can use its 4GB RAM and dual-core processor for some tasks meanwhile), however I also have a pretty old tablet (800MHz, 256MB RAM...).

I really prefer to read books\documents and make drawings on the tablet, but it is too old to work on directly. So what I would really like to have is a tablet that acts as a software monitor, while I could still use the TabletPC controls (pen).


As a workaround I could connect through remote desktop, but having the ability to drag&drop a document to the tablet would be really great.



Wednesday, July 23, 2008

    PowerShell problems with transcript

I really like the Start\Stop-Transcript functionality in PowerShell...

However, I have already encountered a few problems:


    1.) Transcript doesn't save output from exe files

    2.) Log file is not creating new lines (one long line instead)

    3.) If script fails, Stop-Transcript is not automatically executed


    Transcript doesn't save output from exe files

Yes, that's right. If you run the following script:

Start-Transcript c:\temp\testscript.log
IpConfig
Stop-Transcript

you will see different output in the log file than on the screen – the IpConfig output is missing. The workaround is pretty simple, but functional:

Start-Transcript c:\temp\testscript.log
IpConfig | Out-Host
Stop-Transcript

You pipe the output from the executable to a cmdlet, and it is automatically saved to your transcript log file.

    Log file is not creating new lines

I have experienced this a few times and I am not sure why it happened. If you run into this problem, you can use my workaround:

Instead of

Write-Host -ForegroundColor $Color -Object "Test"

use

"Test" | Write-Host -ForegroundColor $Color

    Stop-Transcript is not executed

It is not really a problem, but you shouldn't forget about it. The easiest way is to include Stop-Transcript in your trap statement:

Trap {Stop-Transcript; Continue}
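Put together, a transcript-safe script skeleton could look like this (a sketch; the log path is just an example):

```powershell
Start-Transcript c:\temp\testscript.log

Trap {
    # Runs on a terminating error: close the transcript first,
    # then resume (use Break instead if the script should stop)
    Stop-Transcript
    Continue
}

# ... script body ...
IpConfig | Out-Host

Stop-Transcript
```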


    Have you experienced any other problems with transcript?

    Tuesday, July 15, 2008

    Filter in PowerShell

Yesterday I was reading about filters in PowerShell – I noticed one in a script and hadn't played with them before, so I was really curious...

While reading the help about filters (Get-Help about_Filter -Detailed), I was getting more and more excited.

I understood it as meaning you could specify complex filters, something like:

Filter MyCustomFilter {
$_.ProcessName -match '^a.*'
$_.CPU -gt 20
}

Get-Process | Where {MyCustomFilter}

That would be really cool, because then you could easily provide filtering in some XML definition.


Obviously I didn't get it right. After studying PowerShell in Action for a while (the best book about PowerShell for me, highly recommended), I realized that Bruce uses filters to alter data:

PS (1) > filter double {$_*2}
PS (2) > 1..5 | double
2
4
6
8
10

    Consider following filter:

    Filter StartWithA {$_.ProcessName -match '^a.*'}

It should be able to filter all processes and display only the ones whose names start with 'a'.

If we try to use it as Get-Process | StartWithA, we get an array of booleans:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> gps | StartWithA

If we try to use Get-Process | Where {StartWithA}, we don't get anything in return.

If we change this filter to

    Filter StartWithA {$_ | Where {$_.ProcessName -match '^a.*'}}

    then we finally get what we wanted:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> Get-Process | StartWithA | Select Name


It appears to be a bug in the PowerShell documentation – still, it would be great if we could use a filter as an array of conditions in v2 ;)

    Thanks to Mow for {$_ | Where {$_...}} example

If (object exists) {...} in PowerShell

In VB.NET I always used If Not Object Is Nothing just to be sure that I was working with a real object.

    In PowerShell, this can be done very easily:

    If ($Object) {...}


And the negative check:

If (!$Object) {...}

    Easy, powerful and useful, I love this :)
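One caveat: this relies on PowerShell's truthiness rules, so it is not a pure null check – some perfectly real objects also evaluate to $false:

```powershell
$Object = $null
If (!$Object) { "null is treated as 'does not exist'" }

# These objects DO exist, but they still evaluate to $false:
$Zero  = 0
$Empty = ""
If (!$Zero)  { "0 is falsy" }
If (!$Empty) { "empty string is falsy" }

# An explicit comparison avoids the ambiguity:
If ($Zero -ne $null) { "0 exists - it is just falsy" }
```

So for values where 0 or "" is legitimate data, an explicit $null comparison is safer.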


    Assigning Citrix Load Evaluator from PowerShell

Well, there is still a lot of stuff to learn about MFCOM behavior :(

I have already run into this issue a few times, and I wanted to confirm that Citrix XenApp 4.5 has the same issue.

There are two different ways to assign a Load Evaluator using MFCOM.

    Server based

The first makes more sense to me:

    1.) Load server object

    2.) Assign load evaluator to server

The problem with this approach is that sometimes (= quite often) something goes wrong in MFCOM. If you ask MFCOM about the assigned load evaluator, you get the correct answer:

PS C:\Temp> MFCOM:Assign-LoadEvaluator -Server ctxs-ctp -LoadEvaluator Default
True

In this case, True means that the assigned load evaluator is the SAME as the requested one (-LoadEvaluator).

When I try to double-check manually, everything looks fine:

PS C:\Temp> $CtxServer = $CitrixFarm.GetServer([MF.Object]::WinSrvObject, "ctxs-ctp")
PS C:\Temp> $LE = $CtxServer.AttachedLE
PS C:\Temp> $LE.LoadData($True)
PS C:\Temp> $LE.LEName
Default
PS C:\Temp>

If you however look in the CMC, you see the old load evaluator:


Hmmmmm, that is really strange. To make it even more confusing, let's select Load Evaluators in the CMC:

Now change the view to the "By Evaluator" report – here we go, our server has 2 Load Evaluators assigned :(


     Load Evaluator based

You can also use the second method:

    1.) Get Load Evaluator

    2.) Assign server to it

I don't like this method; it doesn't feel right to me – I want to assign a Load Evaluator to a server, not the other way around.

The problem is that this is the method that works :(

    $RequestedLE = New-Object -ComObject MetaFrameCOM.MetaFrameLoadEvaluator
    $RequestedLE.LEName = $LoadEvaluator
    $RequestedLE.AttachToServerByName($True, $Server)


    Monday, July 14, 2008

    Translating books to PowerShell

As mentioned many times, PowerShell is great for one-liners.

Currently a few blogs are translating books into PowerShell:


    No Exit



If you follow all the posts over time, you can see that the scripts are getting smaller and smaller... Well, what about rewriting the 1216 pages of The Lord of the Rings? :)

    While ($OneRing) {Continue-Journey -Companion (("Frodo", "Sam", "Pippin", "Merry") + $TemporaryHeroes) -Destination "Mount Doom" -Enemy $Sauron -EvilArmy (1..1000000000000)}


    Working with Citrix from PowerShell - custom enumerations

As I continue to work on my PS workflow, the time has come to start building the Citrix components...

I had tons of VBScripts I wrote before, so this part shouldn't be that hard... One of the functions I wanted to have is Get-AppsFromFolder, which can dump the published applications from a specific folder.

I love this, because then you can have really, really fast enumerations if you want to retrieve published applications based on some filter (for example, show me all applications in "applications/primary"). In huge enterprise environments, parsing all published icons and filtering the output can take ages.


In my VBScript I had the line:
    Set rootAppFolder = theFarm.GetRootFolder(MetaFrameAppFolder)

I tried $AppFolder = $CitrixFarm.GetRootFolder(MetaFrameAppFolder). Of course it didn't work – I realized that MetaFrameAppFolder is a named constant, and PowerShell expects its numeric value (an index) here...

Well, I don't really like $AppFolder = $CitrixFarm.GetRootFolder(12), so I started writing constants. Well, I wrote 2-3 of them and then tried Google :)

I found Citrix MFCOM Enums from Brandon Shell, and it saved me tons of time – however, because I want to implement user-friendly functions and don't want to spend hours implementing special checks for whether arguments are correct, I decided to create custom enumerations instead.

Then, for example, if my script requires a color as a parameter, I can just use a simple function:

    Function Test-Color([MF.Color]$Color)

and I don't need to care about any other validation code – the only accepted values from now on are MF.Color enums...


For example, I use the following to add all supported color values:

    New-Enum MF.Color Unknown 16 256 64K 16M
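New-Enum is not a built-in cmdlet. A common way to build such a helper at the time was dynamic enum generation via Reflection.Emit; a rough sketch (for Windows PowerShell on the .NET Framework – the exact implementation I use may differ) could look like this:

```powershell
Function New-Enum ([string]$Name) {
    # Host the enum in a throwaway in-memory assembly
    $AsmName = New-Object System.Reflection.AssemblyName "DynamicEnums"
    $Asm     = [AppDomain]::CurrentDomain.DefineDynamicAssembly($AsmName, "Run")
    $Module  = $Asm.DefineDynamicModule("DynamicEnums")
    $Enum    = $Module.DefineEnum($Name, "Public", [int])

    # All remaining arguments become enum members with sequential values
    $i = 0
    ForEach ($Label in $Args) {
        [void]$Enum.DefineLiteral([string]$Label, $i)
        $i++
    }
    [void]$Enum.CreateType()
}
```

After calling New-Enum MF.Color Unknown 16 256 64K 16M, the type [MF.Color] exists for the rest of the session and can be used as a parameter type.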

If you try to use Test-Color with an unsupported value, for example Red, you get the following error:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> Test-Color -Color Red
    Test-Color : Cannot convert value "Red" to type "MF.Color" due to invalid enumeration values. Specify one of the follow
    ing enumeration values and try again. The possible enumeration values are "Unknown, 16, 256, 64K, 16M".
    At line:1 char:18
    + Test-Color -Color  <<<< Red

As you can see, as a "side effect" you get all the possible enumeration values... You can also use [enum]::GetValues([MF.Color]) to get all possible values.

Right now I can't publish all the enumerations I have created, but you can easily translate them from Brandon's excellent blog...

    Friday, July 11, 2008

    Speed up PowerShell startup times

    For long time I am using script Speed-Startup.ps1. It is part of my automated PowerShell installation. I try to keep all my PS-related stuff (scripts, profiles) subversioned - easy to use, free, easy to setup and it already saved me few times.

$NgenLocation = @($(dir -Path (join-path ${env:\windir} "Microsoft.NET\Framework") -Filter "ngen.exe" -recurse) | sort -descending lastwritetime)[0].fullName

If ($(Test-Path -PathType Leaf -Path $NgenLocation)) {
    $CurrentAccount = new-object System.Security.Principal.WindowsPrincipal([System.Security.Principal.WindowsIdentity]::GetCurrent())
    $Administrator = [System.Security.Principal.WindowsBuiltInRole]::Administrator

    If ($CurrentAccount.IsInRole($Administrator)) {
        [appdomain]::currentdomain.getassemblies() | ForEach {. $NgenLocation $_.Location}
    } Else {
        Write-Host -ForegroundColor Red "You must be local administrator."
    }
} Else {
    Write-Host -ForegroundColor Red "Ngen.exe was not found"
}

    You should always run it, because sometimes assemblies are not ngened and PowerShell load is slooooooooow... I just noticed that Jeffrey posted a reminder on how to speed up startup.

    I am however curious if there is any way to pre-load .NET environment if you use PowerShell for your logon scripts in Terminal Services\Citrix environment (that means environment where multiple users are logged on to same server).



    Returning objects from PowerShell functions

    This can be very confusing and I am sure that I will need to describe it to some people in the future, so I would rather write a small post about it now and just send them the link later ;)

    The difference between subroutines and functions is that functions return some data...

    So for example, a Get-Date that RETURNS the date is a function. A Get-Date that ONLY displays the date on screen is a subroutine.

    If you are used to programming, usually you do something within the function body and then you return some object.

    Let's have a look at the example below. You just run the function and it should return the "Test" string:

    Function Test {
        Write-Host 'I want to return object of [String] type with value "Test"'
        Return "Test"
    }

    Makes sense, right? If we run it, we can see the following:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> Test
    I want to return object of [String] type with value "Test"
    Test

    Looks fine so far. Now let's try to assign the output of that function to some variable:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output = Test
    I want to return object of [String] type with value "Test"
    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output
    Test

    Still completely normal. But I want to make that function really, really short, so I will remove Write-Host, it's not really needed:

    Function Test {
        'I want to return object of [String] type with value "Test"'
        Return "Test"
    }

    Still makes perfect sense, right? But it won't work:

    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output = Test
    PS C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance> $Output
    I want to return object of [String] type with value "Test"
    Test

    As you can see, both lines are returned... The reason for this is simple:

    If you assign the output of a function to a variable, it is NOT just the object specified after the Return command that is returned, but the whole output of the function.

    The only exception (as far as I know) is Write-Host, which is ignored. This behavior can be VERY confusing, because your function can work perfectly, but then suddenly (once you improve it or add more code), it will return System.Object[] instead of Xml.XmlElement etc.

    As a workaround, I am passing the output variable name as a parameter:

    Function TestOutput ([string]$OutputVariable) {
        Write-Host $OutputVariable
        Set-Variable -Value 0 -Name $OutputVariable -Scope 1
    }

    $MyVar = 1
    TestOutput "MyVar"

    As you can see, the output is assigned to the variable provided as an argument. I don't really like this approach, however I still think it can make your functions more robust. This is especially the case when processing XML - some methods automatically send their result to the output stream and your function's return value is immediately corrupted.


    How do you solve this, any ideas???
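    One alternative I know of (a sketch, not necessarily the best answer) is to suppress the unwanted pipeline output explicitly, so that only the intended return value leaves the function:

    ```powershell
    # ArrayList.Add returns the new index - without suppression, those
    # indexes would pollute the function's output stream.
    Function Test {
        $List = New-Object System.Collections.ArrayList
        $null = $List.Add(5)       # assign unwanted output to $null
        [void]$List.Add(6)         # or cast it to void
        $List.Add(7) | Out-Null    # or pipe it to Out-Null
        Return "Test"
    }

    $Output = Test    # $Output now contains only "Test"
    ```
    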

    PowerShell naming conventions

    Well, as you probably all know, PowerShell uses the verb-noun naming convention.

    I really love it. In fact I have been using it for a few years already (Install-Server.cmd, Get-Users.vbs etc).

    Obviously I used it only for naming scripts (well, I also used it for functions, but that's a different story). The problem with PowerShell is that many names that I really like are already taken :( For example New-Object (that's the one I really need now ;)).

    So I decided to use some kind of namespaces. My current project is called Solution4 Maintenance (S4M for short), so I am using
    S4M:New-Object or S4M:Move-Object. It is obvious and I can use the names that fit me best.

    Another advantage for me is that I can easily use
    Get-ChildItem Function:\S4M:* to see all functions that are available.

    Which naming convention do you use for your scripts? Really curious about it :)

    Friday, July 4, 2008

    PowerShelling day 2

    So I finally started with my first script, after spending a lot of time studying and designing (mostly designing, I am learning on the fly now ;))

    Today I learned a lot and also refreshed my memory - my whole day was a combination of "Whoa, that's great" and "Grrrrr, doesn't work" ;) I love to learn new stuff and with PowerShell there is a lot to learn :)


    So which problems\solutions did I run into? As this is my blog, I also like to use it as a reminder ;)


    First of all, I started to use the great (free) product PowerGUI. It consists of two parts - the first is a visualization of PowerShell (something similar was promised for the new version of MMC, not sure whether that is still the case). This is quite cool, however it is not what I am looking for right now.

    The second part is the really great PowerGUI Script Editor - an IDE for PowerShell. I started to use it today and I am already very satisfied, it is really great :)

    My bad...

    I also ran into a few problems because I am used to VB.NET programming - I got stuck at one function that simply didn't work correctly. It was supposed to accept 3 parameters, however the second and third were always ignored. Skilled powershellers probably already know where the problem was - yes, SomeFunction(Param1, Param2, Param3) passes a single array and not 3 standalone arguments :) Ouch, learning something new always hurts, especially if you KNOW about the problem and just forgot (and then you look at the code and everything seems perfectly normal ;))
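    A minimal illustration of that gotcha (with a hypothetical Add-Numbers function, not from my project):

    ```powershell
    Function Add-Numbers ($First, $Second) {
        $First + $Second
    }

    Add-Numbers(1, 2)    # WRONG: passes the single array @(1, 2) as $First, $Second stays empty
    Add-Numbers 1 2      # correct: two standalone arguments, returns 3
    ```
    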

    Implementing debug log

    For my script I wanted to implement a debug log. The idea is pretty simple - because the script will run as a scheduled task, I want to be able to see what happened in case something goes wrong (this is generally a problem with scheduled tasks: if they get stuck somewhere, without complex logging there is no way to find out where or why they got stuck). When I wrote scripts as batches, I used MTee for that purpose. You can then create a scheduled task that automatically generates a log file with the current console output of that script, which is really helpful. In PowerShell, I remembered there was a cmdlet called Start-Transcript. I used it a few years ago for presentations (hmmmmm, maybe that was the reason why it was even created ;)). So at the beginning of my script I added Start-Transcript -Force -Path $DebugLog. After playing with it a little bit I realized two problems.

    PowerShell has command echoing disabled by default when you run a script (as a reminder, in batches it is enabled by default and is configured by the @Echo on\off command). This makes complete sense, however it was not perfect for my debug scenario. Leo Tohill (thanks again) pointed me to Set-PSDebug. The parameter -Trace 1 enables something very similar to command echoing. It is not exactly what I want (the format is quite hard to read in case you have a complex script), however it helped me a lot :)

    The second problem was mentioned by Shay Levy - the transcript captures ONLY PowerShell output, so for example if you use ipconfig, you won't be able to see its output in the debug log file. As a workaround I tried Tee-Object around the whole script (S4Maintenance.ps1 | Tee-Object ...). To my surprise the situation was the opposite - no PowerShell information, only output from external binaries :D So after a while I came up with a quite simple solution that works fine so far. I use Start-Transcript and whenever I need to call an external binary, I use the function Fix-Transcript:

    Filter Fix-Transcript {
        Write-Host $_
    }

    For example with ipconfig it means ipconfig | Fix-Transcript. This way the output from ipconfig is also stored in the transcript file.


    I decided I want to use scopes as well. In batches I implemented a lot of scopes - not only global\local (SetLocal), but also using the Set <VarPrefix> behavior of the Set command in cmd. For details, have a look at this blog post. By specifying a prefix (for example Private.), I was able to automatically destroy all such variables at the end (For /f "usebackq tokens=1,* delims==" %%i IN (`Set Private.`) Do Set %%i=).

    In PowerShell it is much easier. For my script I would really like to allow fallbacks, which is very hard in batches - you specify some default values and you can override (NOT OVERWRITE) them in sub-scripts or functions. I ran into one problem with PowerShell.

    Consider an example where I want to have a (default\global) variable called $JobStorage. This variable must be provided as a command line argument. Have a look at the following line:

    Param (
        [string]$Global:JobStorage = $(throw "You must specify folder where your jobs are stored as parameter.") #Specify folder where jobs are stored
    )

    Looks correct, right? If you try to use .\S4Maintenance.ps1 C:\Jobs, it works fine. But I really hate position-based parameters - I prefer to use named ones whenever possible. PowerShell automatically supports named parameters, so instead of .\S4Maintenance.ps1 C:\Jobs I can use .\S4Maintenance.ps1 -JobStorage C:\Jobs.

    The problem is that if you specify a scope, you can't use the named parameter. This makes sense and I don't expect that it could be considered a bug. As a workaround, I assign the value to a global variable afterwards:

    Param (
        [string]$JobStorage = $(throw "You must specify folder where your jobs are stored as parameter.")
    )

    $Global:JobStorage = $JobStorage

    This works as expected.

    Background processing

    I was also thinking about background processing and ran into a really nice post here. Highly recommended, I will probably use it once my basic code is finished. BTW background processing is natively supported in PowerShell v2.

    Array index evaluated to null

    However, of course I got stuck on something :( I use a hashtable for some of my entries, but one returns quite a strange error that even Google is not able to help with:

    Index operation failed; the array index evaluated to null.
    At C:\Links\HDDs\DevHDD\WorkingProjects\S4Maintenance\S4Maintenance.ps1:108 cha
    +                         $Containers[$ <<<< ($Container.Name)] = $Container

    The code that I am trying to execute is nothing special as far as I can tell:

    $ContainersToLoad = @{}

    ForEach ($Target in $($Container.failed.targetcontainer)) {
        $ContainersToLoad[$($Target.Name)] = $Target.Name
    }
    ForEach ($Target in $($Container.finished.targetcontainer)) {
        $ContainersToLoad[$($Target.Name)] = $Target.Name
    }
    Write-Host $ContainerToLoad.Count
    If ($ContainerToLoad.Count -gt 0) {
        ForEach ($Target in $ContainersToLoad) {$Target}
    }


    UPDATE: Of course, immediately after I posted it I realized where the problem was (within the next minute ;)) - one small typo, instead of $ContainersToLoad I used $ContainerToLoad ;) I will probably have a look at the Strict option in Set-PSDebug tomorrow ;)
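    For the record, a quick sketch of what the Strict option should do with exactly this kind of typo (based on my understanding of Set-PSDebug, so treat it as an assumption until I actually test it):

    ```powershell
    Set-PSDebug -Strict

    $ContainersToLoad = @{}
    $ContainersToLoad.Count    # works, the variable has been assigned
    $ContainerToLoad.Count     # typo - Strict mode throws an error about a variable
                               # that has not been set, instead of silently returning $null
    ```
    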

    Sunday, June 22, 2008

    Virtual KVM switch (use one mouse and keyboard for multiple computers)

    Well, probably all of you who worked on a helpdesk as techies also had a KVM (keyboard\video\mouse) switch on your desk...

    By using a KVM, you can easily control multiple PCs with one mouse\keyboard\screen...

    Next week I am starting to work on a very interesting project for my new company - a maintenance framework that should allow small\medium\enterprise companies to easily build maintenance for their services (be it SQL, print servers or Citrix servers). I am really looking forward to it - I think it could be a huge success, and the ideas from the brainstorming my colleague Dennis and I had on this topic were really exciting :)

    Well, and here comes my problem - I came to the conclusion that I would need 3 monitors for it ;)

    Why 3? Because on the 1st I always have Visual Studio open (fullscreen, of course), on the 2nd I would like to have some documentation (usually Internet or MSDN) and on the 3rd (the one I am missing) I would like to have small tools like the Last.FM player, Live Messenger, Skype etc...

    Instead of buying a new monitor I decided I would finally try Synergy. Synergy is a small utility that allows you to use one mouse\keyboard to control multiple computers and also share the clipboard between them. I ran into Synergy some time ago. I really liked the concept, however I didn't really have a use for it at that time. I spoke with Lukas about it (btw, one day I will make a post about the people I mention in my posts ;)), because I remembered he had used it before. He mentioned that the concept was great, however there were some smaller technical difficulties back then.

    So I installed it on both my desktop (with 2 monitors) and my laptop (well, 1 monitor ;)). Installation was very easy - nothing special to configure. After installation I configured it - that step is very easy too: all you need to do is add the potential clients to the server (based on client name or aliases) and then specify their locations (laptop on the left, desktop on the right). And that's it - you just run it on both computers and it works like a charm :)

    A really nice (and important) feature is that Synergy is available for multiple OSs (OSX, Windows, Linux...).

    The only features I am missing are using the clipboard to transfer files (only text is shared) and drag&drop (which would be very interesting, however I can imagine it must be hard to code).

    Thursday, June 19, 2008

    First post

    Well, I decided I need to move my blog again...

    I started blogging at BlogDrive - however in September 2005 I moved my blog, because BlogDrive did not support RSS\Atom :(

    Now my blog at MSMVPS has not been available for 3 days and I have no idea when it will work again, so it is time to move on and select a stable blog provider - and Blogger is the best one, I think :)

    So if you have never seen my blog before, here is a little information about me:
    My name is Martin Zugec and I work for a company called Login Consultants as a senior consultant. I am an MVP for Server Setup\Deployment (2005, 2006, 2007) and right now I am in the middle of the renomination process...

    I specialize in scripting (batches, PowerShell, AutoIt, you name it), deployment and SBC (Terminal Services, Citrix, application virtualization...)