
Wednesday, March 6, 2013

Not my type

In programming languages, variables can be of a specific data type, such as integers or strings of characters.  If a language requires a variable to be declared as containing only a specific class of data, it is called strongly typed.  If the variables can contain any kind of data, the language is weakly typed.  If the language is experimental, it is prototyped.  If it requires punch cards, it is teletyped.


As mentioned previously, everything in PowerShell is an object.  But simple scalar variables still have a data type.  PowerShell makes a best guess at what the type should be and does conversions on the fly so you usually don't have to worry about the data type of a variable or performing explicit conversions.
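For example (a minimal sketch; the values are illustrative):

```powershell
PS C:\> $n = "4"                     # a string
PS C:\> $result = 1.5 * $n + 0xA    # "4" is converted to a number: 1.5 * 4 + 10
PS C:\> $result
16
PS C:\> "The answer is " + $result  # $result is converted back to a string
The answer is 16
```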


Here PowerShell converts the string to an integer, multiplies it by the floating point number, then adds the hexadecimal number.  When I display the answer it converts the integer to a string so it can be concatenated with the rest of the text.

Most of the time PowerShell correctly determines the data type and formats the output accordingly.  But what if you need to force a variable to be a specific type?  Precede either the variable or expression with the .NET value type enclosed in square brackets.  In this example we divide two numbers and cast the result as an integer and string respectively.
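A sketch of what that looks like (7 / 2 is just an arbitrary pair of numbers):

```powershell
PS C:\> 7 / 2
3.5
PS C:\> [System.Int32](7 / 2)    # cast to integer; .NET rounds the midpoint 3.5 to the even number 4
4
PS C:\> [System.String](7 / 2)   # cast to string
3.5
```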


But that looks like it involves a bunch of typing.  Wouldn't it be nice if there was a shortcut to access those system types?  Yes, it would, and that's why the nice people who made PowerShell included some type accelerators.  Instead of using [System.Int32] I can use [int], and instead of [System.String] I can use [string].
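For example, casting the result of a division with the accelerators (illustrative numbers):

```powershell
PS C:\> [int](7 / 2)      # same as [System.Int32]
4
PS C:\> [string](7 / 2)   # same as [System.String]
3.5
```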


Are there many of these type accelerators available?  Yes there are.  Oisin Grehan's blog post not only shows all of the accelerators available but also a handy way to list them all.  One that does not show up in the list is the [DateTime] accelerator for [System.DateTime].  Notice how flexible this value type is.
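A few illustrative inputs that [DateTime] will happily parse (exactly which formats are accepted depends on your culture settings):

```powershell
PS C:\> [DateTime]"March 6, 2013"
PS C:\> [DateTime]"3/6/2013 2:30 PM"
PS C:\> [DateTime]"2013-03-06T14:30:00"
```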


Now go back and check Oisin Grehan's blog post again.  There's all kinds of useful stuff that is just as flexible as the [DateTime] accelerator.  For example, there is the [ipaddress] data type that accepts a string and converts it to an IP address.
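A quick sketch (the addresses here are illustrative):

```powershell
PS C:\> ([ipaddress]"192.168.1.10").AddressFamily   # an IPv4 address
InterNetwork
PS C:\> ([ipaddress]"2001:db8::1").AddressFamily    # an IPv6 address
InterNetworkV6
```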


This is pretty sweet.  Notice that the type accelerator processes both IPv4 and IPv6 addresses.  If I need to prompt my user for an IP address I don't need to parse and validate it myself.  I can just assign it to a variable cast with the [ipaddress] type, wrap it with some error handling, and let .NET and PowerShell do the heavy lifting.

One other handy accelerator is the array type.  Why is it handy?  Because it gives us access to all of the methods of the .NET array class.  To access any of a .NET class's static methods, specify the class in square brackets followed by two colons and then the method.
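For example (a minimal sketch):

```powershell
PS C:\> $a = 3,1,2
PS C:\> [array]::Sort($a)      # static Sort method of System.Array; sorts in place
PS C:\> $a
1
2
3
PS C:\> [array]::Reverse($a)   # also operates in place
PS C:\> $a
3
2
1
```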


Note that when calling these methods they operate in place on the array you pass.  If you need to keep the array in its original state, copy it first and then call the method on the copy.
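Something like this keeps the original intact (a sketch):

```powershell
PS C:\> $original = 3,1,2
PS C:\> $copy = $original.Clone()
PS C:\> [array]::Sort($copy)   # only $copy is sorted
PS C:\> $original
3
1
2
PS C:\> $copy
1
2
3
```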


Type accelerators are one of PowerShell's many useful and time-saving capabilities.  If you never introduce yourself you might not know that PowerShell is just your type.




Monday, July 2, 2012

The shortest distance

Everybody likes shortcuts.  If you can save yourself some time and effort, why not take the opportunity?

PowerShell offers several ways to save yourself some typing and shorten your command lines and scripts.  Plus you almost never blow yourself up.

One way is to string together all of the slicing and dicing you might need to do.  If a method on an object returns an array and I know I need a specific element in that array, I can specify that element right after the method invocation.  And I can call a method right on that array element.  Allow me to demonstrate.

In a previous example I had to parse the DistinguishedName for a user.  I model my test domain based on a former employer who managed users and computers by location.  Delegations were done so support staff at each site could manage their objects.  My OU structure looks like this:

In order to find the site name for an arbitrary user account I can assign a variable at each step to parse the DistinguishedName.  In this example I also show the value of the variables to confirm each step:
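A sketch of those steps, assuming a DistinguishedName of the form CN=Al Alpha,OU=Users,OU=Atlanta,OU=Locations,DC=toddp,DC=local (it is the OU depth that makes element [2] the site):

```powershell
PS C:\> $user = Get-ADUser AlAlpha
PS C:\> $dn = $user.DistinguishedName
PS C:\> $dn
CN=Al Alpha,OU=Users,OU=Atlanta,OU=Locations,DC=toddp,DC=local
PS C:\> $parts = $dn.Split(',')
PS C:\> $ou = $parts[2]
PS C:\> $ou
OU=Atlanta
PS C:\> $site = $ou.Substring(3)
PS C:\> $site
Atlanta
```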

But I don't need to use all those variables.  This can be processed in a single line.  Below I build up each step.  The last line shows how I combine all of these steps into a single command:
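Something like this, assuming the site OU is the third comma-separated element of the DistinguishedName:

```powershell
PS C:\> $user = Get-ADUser AlAlpha
PS C:\> $user.DistinguishedName
CN=Al Alpha,OU=Users,OU=Atlanta,OU=Locations,DC=toddp,DC=local
PS C:\> $user.DistinguishedName.Split(',')[2]
OU=Atlanta
PS C:\> $user.DistinguishedName.Split(',')[2].Substring(3)
Atlanta
```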

Another handy shortcut is to enclose a command in parentheses and do the same slicing and dicing.  In the above example I don't even need to assign the Get-ADUser output to a variable.  I can use the command
 (Get-ADUser AlAlpha).DistinguishedName.split(',')[2].substring(3)

Letting PowerShell process things in parentheses has many uses and works for all cmdlets.  You can nest some really crazy crap and it will still work.  In the following example I know I have a user in Atlanta with the last name Alpha and I'm not sure what the account name is but I have to change his first name to Allen.  The first command get-aduser -searchbase "OU=Atlanta,OU=Locations,DC=toddp,DC=local" -filter * | where {$_.surname -eq 'Alpha'} shows how I can do this search.  In the second command I enclose that first command in parentheses and use that as the identity for my Set-ADUser command.  I call Get-ADUser one last time to show the change was made in the GivenName attribute.
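Put together, the sequence looks something like this (user and OU names are from my test setup and illustrative):

```powershell
# 1. the search by itself
Get-ADUser -SearchBase "OU=Atlanta,OU=Locations,DC=toddp,DC=local" -Filter * |
    where {$_.surname -eq 'Alpha'}

# 2. the same search, in parentheses, used as the identity for Set-ADUser
Set-ADUser (Get-ADUser -SearchBase "OU=Atlanta,OU=Locations,DC=toddp,DC=local" -Filter * |
    where {$_.surname -eq 'Alpha'}) -GivenName 'Allen'

# 3. confirm the change was made to the GivenName attribute
Get-ADUser AlAlpha
```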

Another PowerShell shortcut is to use aliases for the cmdlets. 
Here we see all of the aliases that start with the letter "g".  Two aliases I use frequently are gm for Get-Member and gwmi for Get-WmiObject.


In this example the Get-ADUser output is piped through the Get-Member cmdlet using the alias GM.

You can create your own aliases using the Set-Alias cmdlet.
After using Import-Module (or its alias, ipmo) to import the ActiveDirectory cmdlets I use Set-Alias to create the alias gadu for the Get-ADUser command.  Note that an alias can only be for a cmdlet, not a cmdlet and its parameters.  You can work around this limitation by using a function as shown in the last example for Set-Alias.
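A sketch of that session (gadu is the alias from the text; the gadusers function name is just an illustration):

```powershell
PS C:\> ipmo ActiveDirectory          # ipmo is the alias for Import-Module
PS C:\> Set-Alias gadu Get-ADUser
PS C:\> gadu AlAlpha                  # same as Get-ADUser AlAlpha

# an alias cannot carry parameters, but a function can:
PS C:\> function gadusers { Get-ADUser -Filter * }
```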


There are two other aliases that you may encounter.

There is a default alias for ForEach-Object, the percent sign (%), and an alias for Where-Object, the question mark (?).  These will come in handy.
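For example:

```powershell
PS C:\> Get-Process | % { $_.ProcessName } | ? { $_ -like 'c*' }
```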

How cool is this?  I pipe Get-Process through the ForEach-Object shortcut to pull out the ProcessName and use the Where-Object shortcut to show just the processes starting with the letter "c".  That's good enough for me.

In PowerShell 3 there is an implied ForEach-Object for arrays.  Instead of piping the array through the ForEach-Object cmdlet or its shortcut you just enclose the command in parentheses and access the property you need.
These are the processes running on my Windows 8 demo VM.  But if I need to grab just the process name I can use the syntax (Get-Process).ProcessName.
Pretty sweet.  There are some other handy shortcuts coming in PowerShell 3.

So we have all these nice shortcuts.  Shouldn't we use them all the time?  The rule of thumb from our friend Don Jones is that if you are writing a quick and dirty command for yourself, the shortcuts can save you time and effort.  But scripts with shortcuts look more cryptic and are harder to read.  So if you are creating a more involved script that will be used by someone else, be considerate and make it more accessible by avoiding shortcuts.

Knowing these shortcuts and aliases (or at least how to find what is being aliased) helps you be more efficient.  And if you come across a script written by someone else who used them you won't be caught off guard.

Monday, April 30, 2012

Pipeline Conga Line

Unix, DOS, and PowerShell all have the ability to combine commands so the output from one command becomes the input for the next command.  This is called a pipeline because the commands are connected like sections of a pipe.  All three platforms use the vertical bar as the connector which is why that character is sometimes referred to as the pipe.

Unix and DOS also use the less-than sign < to redirect input from a file to a command, the greater-than sign > to redirect output from a command to a text file, and two greater-than signs >> to append output to a file.  All of these operators are useful and you can sometimes do complex things, but because the Unix and DOS pipelines are text based there are limitations on what can be accomplished.

PowerShell has powerful cmdlets for getting data to and from files so it only needs the vertical bar pipe.  Because PowerShell is object oriented and the cmdlets are designed to process objects passed from the pipeline some insanely powerful capabilities are available.  Often you can complete a task with a single command line.

So let's stop talking and start dancing.

Get-GPO -All | where {$_.displayname -match 'XenApp5'} | Set-GPPermissions -TargetName 'XAAdmins' -TargetType Group -PermissionLevel GpoEditDeleteModifySecurity

The administrators of our XenApp farm requested access to edit the group policies that manage the farm.  All of the GPOs include 'XenApp5' in the name.  The output from Get-GPO is piped into the Where-Object command to filter out any GPOs that don't have 'XenApp5' in the name.  The resulting objects are piped into the Set-GPPermissions cmdlet to assign the XAAdmins group the permission to edit the GPOs.  And I complete a complex task in a single line.

Most of your pipeline commands will follow this general pattern.  You will run a command or get some input, filter the pipeline to keep only the stuff you want to work on, then process those objects further.  The filters Where-Object and Select-Object are quite powerful but depending on your source you can speed up your scripts by filtering objects at the source.  Don Jones calls this Filter Left, Format Right.

That article also notes that the pipeline processor assumes that every pipeline ends with Out-Default to dump the results to the screen unless another Out-* command is at the end of the pipe.  PowerShell uses the Extended Type System to convert the objects to text and, unless directed otherwise, to make a best guess at how to format the results.  The Out-* commands produce no objects for any cmdlets after them in the pipeline to process.  Therefore the Out-* command is always the last command, even if it is the implied Out-Default.

Where else can we send output?
If you are not using the implied Out-Default you will probably use Out-File, although Out-GridView has some handy capabilities.

What options do we have for formatting the data before we pipe it out somewhere?
Depending on the objects PowerShell will automatically Format-List or Format-Table before doing the implied Out-Default.  We have some handy ConvertTo cmdlets to format the data as HTML, CSV, or XML.  There are also the Export-CSV and Export-CliXML cmdlets that combine the ConvertTo and Out-File operations into a single step.

So our pipeline conga line dance steps are:
  1. Get some data objects from a file or cmdlet
  2. Filter the data by limiting the number of objects and/or number of attributes
  3. Process the data by sorting or using the data in another cmdlet
  4. Repeat steps 2 and 3 as needed
  5. Format the data if needed
  6. Output the formatted processed data to the screen or file
Something like:

get-wmiobject win32_service | where {$_.StartMode -eq 'Auto'} | select-object name,displayName,PathName,StartName,Status,State | sort-object StartName,displayName

Use Get-WMIObject to find all of the services that are set to start automatically, select the properties of interest, and sort them so services running under the same account are grouped together.

Get-WMIObject does allow us to filter so we only get the WMI objects we want.  Doing so will make the command run faster because we return less data.  I can also format the output into a table so I get more information on the page.

get-wmiobject win32_service -filter "StartMode='Auto'" | select-object name,displayName,PathName,StartName,Status,State | sort-object StartName,displayName | format-table



If I replace Format-Table with Out-Gridview I get the output in an interactive window that lets me further sort and filter the data.


Or I can replace the Out-GridView with ConvertTo-Html | Out-File and create an .html web page that I can deliver to my PHB so he can open the report in his web browser.
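Something like this (services.html is an illustrative file name; ConvertTo-Html does the conversion):

```powershell
get-wmiobject win32_service -filter "StartMode='Auto'" |
    select-object name,displayName,PathName,StartName,Status,State |
    sort-object StartName,displayName |
    ConvertTo-Html | out-file services.html
```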









So don't be a PowerShell wallflower.  Get out there and dance.


Commenters: Have you done any crazy cmdlet stacking on the pipeline?  How many cmdlets have you been able to use in a single pipeline?

Wednesday, March 21, 2012

FOR command: The DOS Sonic Screwdriver

When I was younger I enjoyed the British TV show "Doctor Who".  The imports we got in the States were the seasons where Tom Baker starred as the eponymous, Dalek-fighting Time Lord.  The shows were entertaining but had the deus ex machina of the sonic screwdriver that the Doctor would use for everything from picking locks to picking up women. ("Yes, that is a sonic screwdriver in my pocket and I am happy to see you.")  I used to dream of having a single tool that could do all kinds of useful things.

Then I found the FOR command in Windows XP and my dreams were made reality -- in a clunky, geeky, command-line sort of way.  (Adult life never really turns out like you imagined it when you were a kid.)  What does the FOR command do?

Runs a specified command for each file in a set of files.

FOR %variable IN (set) DO command [command-parameters]
  %variable  Specifies a single letter replaceable parameter.
  (set)      Specifies a set of one or more files.  Wildcards may be used.
  command    Specifies the command to carry out for each file.
  command-parameters
             Specifies parameters or switches for the specified command.

So (set) is one or more filenames (which may include wildcards) on which we can perform a command.

Well that's fine for working on a bunch of files but what if I need to work on a bunch of directories?

FOR /D %variable IN (set) DO command [command-parameters]
    If set contains wildcards, then specifies to match against directory
    names instead of file names.

What if I need to recursively look through all of these directories?

FOR /R [[drive:]path] %variable IN (set) DO command [command-parameters]
    Walks the directory tree rooted at [drive:]path, executing the FOR
    statement in each directory of the tree.  If no directory
    specification is specified after /R then the current directory is
    assumed.  If set is just a single period (.) character then it
    will just enumerate the directory tree.

But wait, I remember using a FOR command in BASIC to loop through a bunch of integers.

FOR /L %variable IN (start,step,end) DO command [command-parameters]
    The set is a sequence of numbers from start to end, by step amount.
    So (1,1,5) would generate the sequence 1 2 3 4 5 and (5,-1,1) would
    generate the sequence (5 4 3 2 1)

That's all good stuff, if fairly pedestrian.  But the next option is the one I use all the time and starts to crack open the sonic screwdriver capability of the FOR command.

FOR /F ["options"] %variable IN (file-set) DO command [command-parameters]
FOR /F ["options"] %variable IN ("string") DO command [command-parameters]
FOR /F ["options"] %variable IN ('command') DO command [command-parameters]

    file-set is one or more file names.  Each file is opened, read
    and processed before going on to the next file in the file-set.

These seem like different functions but they get grouped together because they provide similar input and use the same ["options"] (discussed below).  But look at what is available now.  I can provide a set of one or more files, each of which will be opened, parsed line by line, and acted upon.  I can provide a string that will get parsed and acted upon.  But most importantly I can provide a command and have the output of that command parsed and acted upon.  The command can be a native DOS function or some other command line utility that produces output.  This is huge! 

Before we get to some nifty screwdriving let's take a look at the options.

        eol=c           - specifies an end of line comment character
                          (just one)
        skip=n          - specifies the number of lines to skip at the
                          beginning of the file.

These are easy to grasp.  Use one character to mark the end of the line, which allows adding comments to the input file (although you can use it for other purposes).  Skip past the first one or more lines of the input to avoid processing header information.  It would be handy if there were a way to mark the end of the input file, but it is up to us to handle that in our code.

        delims=xxx      - specifies a delimiter set.  This replaces the
                          default delimiter set of space and tab.

Use one or more characters to parse the input.  This lets me process a comma separated .csv file but with a little creativity you can do much more.

        tokens=x,y,m-n  - specifies which tokens from each line are to
                          be passed to the for body for each iteration.

So from the input I get one or more lines of text that are going to be parsed by breaking them into tokens based on the delimiters specified.  For example, parsing the string "A B C D" will produce 4 tokens, one for each letter.  In the FOR command I specify a variable (%X in the following examples) that by default will be assigned the first token (A).  But if I want the variable to be assigned the third token I would specify "tokens=3" and my variable %X will have the value C.

I can also generate multiple variables that follow in alphabetical sequence and assign them values based on the tokens I select.  So if I specify "tokens=2,4" the variable %X will have the value B and the variable %Y will have the value D.  Or I can specify a range so "tokens=2-4" will make %X=B, %Y=C, and %Z=D.

I can also use an * to generate one final variable whose value will be the rest of the unparsed line of text.  So "tokens=1,2*" will make %X=A, %Y=B, and %Z=C D.  Using "tokens=1*" will make %X=A and %Y=B C D.  "Tokens=*" will prevent parsing completely and make %X=A B C D.
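For example, parsing the string "A B C D" at a command prompt:

```bat
C:\> FOR /F "tokens=2,4" %X IN ("A B C D") DO @ECHO %X %Y
B D

C:\> FOR /F "tokens=1*" %X IN ("A B C D") DO @ECHO %X then %Y
A then B C D
```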


This is another example with the same tokens= values described above.  Notice in the last two examples there is no token to assign to the variables at the end, so the ECHO command just displays the variable name as if it were a string.

An important point about the FOR command is that because it can return a sequence of variables it only allows single-character variable names.  I tend to start with %A so I can get as many tokens as I might need.  If I need to nest FOR commands in a pipeline I usually start later in the alphabet for the FOR commands later in the chain to avoid collisions.

Now scroll back up a bit and notice that the IN part of the FOR /F command can be a command, a string, or a file set.  And notice that the string to be parsed will be in double quotes.  But what if I have a file set with filespecs that contain spaces?  Well, those filespecs will have to be enclosed in double quotes, otherwise they will be seen as separate filespecs.  But if the filespecs are enclosed in quotes, won't they be confused for strings?  Hmmmm... why, yes they would.  So how do we get around this problem?  I'm glad you asked.

        usebackq        - specifies that the new semantics are in force

This option will use the backquote to specify an executable command, a single quote to specify a string, which leaves double quotes available for enclosing file names that contain spaces.  Those guys at Microsoft think of everything, don't they?

So now we have a bunch of arrows in our quiver.  Let's go shoot some stuff.

Who is your computer talking to?  The netstat command will show all ports your computer has open:
I have my browser open to google.com  so the IPv4 addresses are google's servers.  Let's parse that output and see how the traffic gets from my computer to google.

FOR /F "skip=4 tokens=1-4" %A in ('netstat -n') do IF %D==ESTABLISHED 
   FOR /F "delims=:" %X in ('echo %C') do tracert %X

This all goes on a single command line but word wraps in the box above.  In the first FOR command I am skipping the first 4 lines so I can ignore the column headings.  I get 4 tokens broken up by white space.  I am only interested in established connections so my IF checks %D, the fourth token, that shows the connection state.  If netstat tells me the connection is ESTABLISHED then I will parse the third token (variable %C) which has the IP address and port.  I use a second FOR command to echo the IP:port value and have FOR split the string at the colon to get just the IP address. I use that as the parameter for the tracert command which shows each hop on the way from my PC to google.

Well, that is interesting but not terribly useful.  Here is something I have actually used in my job.  I needed a way to get the list of users from a domain global group.  The NET GROUP command was selected because it is available on every platform.  But the output from the command lists the users in three columns, which isn't useful if you need to do something else with the information like drop it into a spreadsheet or pipe the account names into another command.  So I gave the requestors this:

@ECHO OFF
IF %1.==. GOTO :Done


FOR /F "skip=8 tokens=1-3*" %%I IN ('net group %1 /domain') DO CALL :DumpEm %%I %%J %%K
GOTO :Done

:DumpEm
IF %1.==The. GOTO :Done
ECHO %1
IF NOT %2.==. ECHO %2
IF NOT %3.==. ECHO %3

:Done

I made the font small on the FOR command so it would fit on a single line to avoid confusion.  In this FOR command I'm skipping the header lines and grabbing the three columns of names.  (I will gloss over how I handle the input to the batch file for now.  That will be the subject of a future post.)

One thing to note is that because this is run from a batch file and not the command line, the variables in the FOR command have to use two percent signs (%%).  In the IF statements the first thing I do is see if we are at the end of the output from the NET GROUP command.  For the other IF statements I only ECHO output if there is data.  (Sometimes you will see examples like IF NOT "%1"=="" but really all the IF statement does is compare two strings.  If my variable has no value, "%1" resolves to "" and %1. resolves to just a period.  Either way I have verified the empty string, and since I am more efficient if I type less, I use the technique shown in my batch file.)


This shows the results of a normal NET GROUP command to compare it to the results of the ShowUsers.cmd file.  So I achieved the goal of skipping the header and footer and getting all the names in a single column.  Mission accomplished.

One final example that shows the directory parsing capabilities of the FOR command.  On our NTFS file servers I was asked to dump the Access Control Lists (ACLs for you cool guys, "who has access to what" if you are in upper management).  I just needed a simple list for the top level folders so I came up with this.

:: Show the ACLs for the top level folders on each file server data drive
::

SETLOCAL
SET AdmFS=\\FS0111\E$ \\FS0113\E$ \\FS0115\E$
Set FN=FileServerACLs.txt

IF EXIST %FN% DEL %FN%

FOR %%A in (%AdmFS%) DO FOR /D %%B in (%%A\*.*) DO CACLS "%%B" >> %FN%

I can't show you the output but I will describe what happens.  The variable %AdmFS% lists the administrative shares on three file servers.  The variable %FN% has the name of the output file which gets cleared by the DEL command each time the script runs.

Now I chain together two FOR commands; the first one parses %AdmFS% to call the second FOR command once for each file server.  The second FOR command lists the top level folders.  Because the folder names may contain spaces, the variable from the second FOR command is enclosed in quotes so CACLS will correctly process its value.  The results of CACLS are redirected using >> so they always append to the output file.

The output from CACLS isn't pretty and fortunately I wasn't asked the follow-up of having to list all the users in all of the groups that were output.  If I were working this project today I would use icacls.exe instead because the output is cleaner and the utility has more features.  And I would use PowerShell instead of DOS because it provides more capabilities for parsing results and creating friendly reports.

Didn't I say I would talk about PowerShell in the first blog post?  Isn't it about time I started?

Wednesday, March 14, 2012

Environment Under the Hood

In the last post I talked about several ways environment variables get created.  But a bunch of environment variables are available just by running the operating system.  Open a command prompt and enter the command SET to see all of the available variables.
Handy tip: If you follow SET with a letter you will see all of the variables that start with that letter.  In the example above SET L would show just %LOCALAPPDATA% and %LOGONSERVER%.
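On a typical machine that looks something like this (the path and server name here are illustrative; yours will differ):

```bat
C:\> SET L
LOCALAPPDATA=C:\Users\Todd\AppData\Local
LOGONSERVER=\\DC01
```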

So where do these already available environment variables come from?  Well it looks a little something like this:
OK, not quite. 

There are 3 kinds of environment variables automatically available.  There are variables whose values are calculated at logon such as %USERNAME% and %COMPUTERNAME%.  These variables are static throughout the current session.

There are also variables that are static across sessions such as %OS% and %PATH%.  You can find these variables by going to Computer Properties > Advanced > Environment Variables.
Note there are two sections.  User variables are just available to the current user while System variables are available to all users on the computer.

Sometimes it is fun and informative to look under the hood.  How does the operating system remember these variables and their values between sessions?  If you said, "the registry", you win the prize.
HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment and HKCU\Environment are where to check.  If you need to, you can hack the registry to create new system variables.
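You can peek at both keys from a command prompt with REG QUERY (the /v OS switch just narrows the first query to a single value):

```bat
C:\> REG QUERY "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment" /v OS
C:\> REG QUERY HKCU\Environment
```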

And note that even though these variables are static you can still change the values in your batch file.  It is not generally recommended but you can do it.  Most commonly a batch file will append a folder to the PATH with the command:

PATH %PATH%;C:\Some\New\Folder
So that's two types of variables.  But I said there were three.  Was I lying?  After all, I did mislead you earlier about where variables come from as a cheap excuse to show a Monty Python clip.  But I'm not lying, there really are three.

The third type of automatically available variables aren't displayed with the SET command.  These variables are not static during your session but are calculated as needed.  They are described at the end of HELP SET (or SET /? if you prefer).

%CD% - expands to the current directory string
%DATE% - expands to current date using same format as DATE command.
%TIME% - expands to current time using same format as TIME command.
%RANDOM% - expands to a random decimal number between 0 and 32767.
%ERRORLEVEL% - expands to the current ERRORLEVEL value
%CMDEXTVERSION% - expands to the current Command Processor Extensions
    version number.
%CMDCMDLINE% - expands to the original command line that invoked the
    Command Processor.

These variables can be mangled just like any other variable.
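For example, the substring syntax documented in HELP SET works on them too (the %DATE% format shown assumes US regional settings):

```bat
C:\> ECHO %DATE%
Wed 03/14/2012
C:\> ECHO %DATE:~4%
03/14/2012
```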

%CMDEXTVERSION% is used to determine what commands are available.  The value is 1 if the operating system is Windows NT and is 2 if the operating system is Windows 2000 or later.  If you check this variable and the value is 1 you are working with a limited command set and you might be wearing clothes that are out of style.

On Windows 7 and 2008 there is another variable named %HIGHESTNUMANODENUMBER% which is the highest NUMA node number on the machine.  This is handy information in a multi-processor environment with NUMA support and multi-threaded applications.  But for batch file processing leveraging this variable is probably overkill.

Commenters: Can you think of a reasonable use for the %HighestNumaNodeNumber% variable?  Or is it like the pizza shop that offers pineapple as a topping, knowing full well that nobody ever orders it?