One-Car Household - AKA I'm Selling My Car

Now that I work from home full time, I really don't feel like we need two cars. We both used to commute to other parts of town for work, but my fiancé now works ~10 minutes from the house while I work from home. When I do need the car, I can always drop him off at work and run whatever errands need running. It just doesn't make sense to keep such a fun machine sitting in the garage when someone else could be out there enjoying it!

I am going to miss the beast. I've had this Audi S5 since July 8, 2013, when it had 36,417 miles on the odometer. Since then I've treated it like the special machine it is. We've been through a few sets of rubber together, several oil changes, and plenty of very expensive factory-recommended maintenance (be glad you didn't have to foot the bill for the 50k service!) - all records are available for your peace of mind. If you're interested, drop me a line. I'll be taking it for appraisal before the end of the month.

2010 Audi S5 Prestige Coupe in Meteor Grey
4.2L Naturally Aspirated V8
6 Speed Manual Transmission
19" Alloy Wheels
255/35ZR-19 Michelin Pilot Super Sport XL Tires
Black Leather Interior with Black Headliner
~65,000 Miles on Odometer
Original Floor Mats and WeatherTech custom floor liners, Sat Nav, DVD, MMI, iPod and USB Kits, Reverse camera, Park Assist, Heated seats, Spare key....

 

Customizing the Temporary File Location for Configuration Manager Offline Servicing of Operating System Images

While working on a presentation for Microsoft's Minority Student Day here in the Columbus office, I came across an annoyance with Configuration Manager 1610 in my lab. The ConfigMgr server in the lab is virtual, hosted on Server 2016 in Hyper-V. The VM's OS disk is 128GB and lives on a high-speed PCIe SSD, while an additional 2TB mechanical disk is assigned to the VM for ConfigMgr content, images, packages, etc. This normally works great, but when updating an operating system image using Scheduled Updates, Configuration Manager attempts to do all of the WIM operations in a temporary folder on the disk you installed it on - for many, that will be C:. In my case that's 20+ extra gigabytes I don't want on the SSD: a whole bunch of additional write operations and a large amount of data, a workload better suited to the rotational disk.

A quick web search turned up this handy article that someone from Microsoft published several years ago. Rather annoyingly, though, the information is a little hard to follow. Right off the bat, if you try connecting to the WMI namespace as suggested in their article, you're going to have a bad time. You'll scratch your head, wonder if this is even worth pursuing, and begrudgingly flip back to the search results to look for a better-written article. Unfortunately, you're not going to find one.

To help shed some light on the subject, I thought I'd document my experience with the whole process. First off, let's have a look at the steps our well-meaning poster suggested in the original article.

  1. Launch WBEMTest
  2. Connect to the WMI Namespace for Configuration Manager
  3. Query WMI for Offline Servicing Manager settings
  4. Drill down into the properties to change the StagingDrive property
  5. Save the changes

To get started, launch WBEMTest: right-click your Start button, select Run, type wbemtest into the box, then click OK. A happy little window will show up on your desktop.

Now we need to connect to the proper WMI namespace. Fortunately, you can enumerate the namespaces with PowerShell if you're lost - this is how I spotted the error in the original post:

gwmi -namespace "root" -class "__Namespace" | Select Name
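
If an sms namespace shows up in that list, you can drill one level deeper to find the site-specific namespace. This is just a quick sketch and assumes you're running it directly on the site server (add -ComputerName if you're querying a remote box):

gwmi -namespace "root\sms" -class "__Namespace" | Select Name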

If you followed the original article's advice to  "Connect to the Configuration Manager namespace on the site server. For example, if your site code is “CCP”, connect to namespace 'rootsmssite_CCP'." you won't get very far.

I think the original poster left off several characters - namely the '\' separators between root, sms, and site! So the proper advice here is: if your site code is "HHQ" (mine), connect to the namespace "root\sms\site_HHQ".

Once you're successfully connected, click the Query button and paste in the following, substituting your own site code for HHQ:

SELECT * FROM SMS_SCI_Component WHERE SiteCode='HHQ' AND ItemName LIKE 'SMS_OFFLINE_SERVICING_MANAGER%'

This will hopefully return a Query Result! Double-click the result to open the Object Editor.

Look for and double-click Props in the properties list. This will open the Property Editor.

Click the button titled View Embedded. There will be four objects in an embedded array to inspect in order to find the StagingDrive property. 

Once you've located it, change Value1 in that list to whatever drive letter you want the temporary working folder to be placed on. I've chosen E:.

You've come all this way - now for the most important part: save your changes! If I took screenshots of every click, this post might tie for a Guinness World Record for Longest Blog Post in Vertical Pixels, so follow loosely - basically click Save and Close for each window. My exact cadence was: click Save Property, then click Save Object, then click Close. Next, click Close on the Query Result. Click Save Property, then Save Object, and finally Close. Whew! That was a lot of clicking. You may now exit WBEMTest.
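
If you'd rather skip all that clicking, the same change can be scripted. Here's a rough PowerShell sketch of what the WBEMTest steps above boil down to - it assumes site code HHQ, a single query result, and E: as the new staging drive, so adjust for your environment and test it in a lab before touching a production site server:

# Sketch only - substitute your own site code and drive letter
$ns = "root\sms\site_HHQ"
$component = Get-WmiObject -Namespace $ns -Query "SELECT * FROM SMS_SCI_Component WHERE SiteCode='HHQ' AND ItemName LIKE 'SMS_OFFLINE_SERVICING_MANAGER%'"

# Re-read the full instance by its path so no properties are missing from the query result
$component = [wmi]"$($component.__PATH)"

# Props is the embedded array you inspected with View Embedded; update Value1 on the StagingDrive entry
$props = $component.Props
foreach ($prop in $props)
{
    if ($prop.PropertyName -eq "StagingDrive") { $prop.Value1 = "E:" }
}
$component.Props = $props

# Write the change back - the scripted equivalent of all those Save clicks
$component.Put()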

I'm not sure whether you need to restart the site server for this change to take effect, but the original article mentioned it would be picked up on the next Offline Servicing run. I bounced my VM for good measure. I truly hope this post was helpful and saved you a bit of headache. With any luck, you were able to move your temporary working folder to the drive of your choice!

 

Home Lab Upgrade

Working on Azure is fun. It’s more than fun – it’s freaking awesome. Anyone can build pretty much anything they want without needing hardware on hand to back it up. All you need is money (or a sponsored Azure account from your workplace *hint hint*).

While that’s pretty damn amazing, I miss having a home lab. I still have one, don’t get me wrong, but it’s mostly neglected these days. It’s less of a lab and more of a home control appliance. It powers our home automation, turning lights, heat, etcetera on and off. It monitors the water level in the basement sump, reports effluent volume, and charts high- and low-activity dates. It stores our local backups and files before they’re shipped off to an Azure storage account for off-site backup. It’s a real production system that we legitimately depend on.

I reminisce about building my lab the way any nerd reminisces about building a computer. There’s something about picking out parts that perfectly complement each other, squeeze out amazing performance, or exactly match the requirements. It affirms my nerd-cred. It’s a physical representation of my membership card. Haha.

I recently purchased a new gigabit switch with PoE to power and connect our security and monitoring systems. Gigabit is typically fast enough for nearly everything we need, certainly for surveillance and media purposes, but not so much for some of the other things we do with our networked storage. I’ve noticed that with multiple machines connecting to our file server, we can easily exceed the available bandwidth of a single link. I toyed around with LACP and port channels (which work great, but still limit each client to 1Gb/s), but wouldn’t it be fun to have something in an entirely different class? I think it’s time to make the jump to 10 Gigabit Ethernet.

I managed to snag a Ubiquiti Networks US-16-XG SFP+ 10G aggregation switch fairly cheap. I have some existing UniFi gear that I've had pretty decent luck with, so I wanted something that could integrate with that. Luckily, they had just released this beauty (which is also a bit of a downside, because it's buggy - more on that in a bit). The DL380p we use as a file / virtualization / automation server didn't have a 10GbE interface, so I had to procure one of those as well. I ended up with an HP CN1100E for ~$100. It's a dual-port 10G PCIe converged network adapter that does Ethernet, iSCSI, and Fibre Channel (FC) connectivity over 10GbE. I won't need anything but Ethernet for our deployment, but the price was right!

The last major piece of the puzzle was cabling. I've not forayed into the world of SFP+ before, so I had to learn about optics, DACs, transceivers, and a whole mess of things I was previously ignorant of. We already have Cat6 strung around the house, so I had hoped to use that to connect my desktop to the new 10G switch. Boy, was I wrong. SFP+ RJ45 transceivers are something like $300 each - and that's on the low end of what I was able to find! It takes two per connection, which is more than I'm willing to spend. This particular connection requires a bit more thought. Perhaps a single 100' run of fiber would be more economical. The jury is still out on this one.

Connecting the server and switches was relatively simple. I hopped over to the Ubiquiti community forums and checked the list of DACs and optics on the US-16-XG's compatibility list before settling on these iPolex passive DACs I picked up from Amazon. At ~$24 each, they're fairly cheap. I used four of them in the rack - two from the US-16-XG to the DL380p, and two to the gigabit PoE switch. Since both the server connection and the inter-switch connection are aggregated, this configuration essentially allows ~20 clients (theoretically; real-world numbers are obviously lower) to access the server at their full gigabit line rate. I'm planning for the desktop to connect directly to the XG as well, which will allow full 10Gb access to the server once I get the whole fiber thing sorted out.

Now, about the bugginess of the switch. The US-16-XG was initially a bit finicky about the DACs - others have reported the same issue on the community forums. For whatever reason, the DACs would work in some ports but not others, so I shuffled things between ports until everything linked up. Here's a snippet from the working switch configuration:

(UBNT) >show fiber-ports optics-info all

                         Link Link                                 Nominal
                       Length Length                                   Bit
                         50um 62.5um                                  Rate
Port     Vendor Name      [m] [m]  Serial Number    Part Number     [Mbps] Rev  Compliance
-------- ---------------- --- ---- ---------------- ---------------- ----- ---- ----------------
0/1      OEM              0   0    CSS31GB1516      SFP-H10GB-CU3M   10300 03   DAC
0/2      OEM              0   0    CSS31GB1523      SFP-H10GB-CU3M   10300 03   DAC
0/5      OEM              0   0    CSS31GC0602      SFP-H10GB-CU3M   10300 03   DAC
0/6      OEM              0   0    CSS31GC0601      SFP-H10GB-CU3M   10300 03   DAC

Overall, I'm happy with this configuration. Now that the server's network connection is no longer the bottleneck, I can probably live with gigabit to my desktop - but I still dream of 10G! I'll update the post with whatever I choose to do when I get it figured out. Until next time!

Calculate "Billable Size" and Cost of Azure Blobs

Today my customer expressed a desire to use PowerShell to gather data about the size of their storage accounts. Not just the raw size, though - they wanted to understand the billable size of the containers, blobs, and so on, and be able to calculate those costs.

A quick search on the web turned up a script (https://gallery.technet.microsoft.com/scriptcenter/Get-Billable-Size-of-32175802/view/Discussions) that had been developed back in 2014. It still relied on pre-Resource Manager cmdlets and would not work in my test tenant. I'm no PowerShell guru by any means, but I was able to modify it to use the latest cmdlets and output accurate information about the containers within my subscription. I've uploaded the modified PowerShell script to GitHub (https://github.com/jimmielightner/GetBillableSize). If you find a bug, want to improve the script, or just want to say hi - drop me a line or submit comments!

Code is provided AS-IS with no warranty. If you're dumb enough to run it without looking at it, I can't be held liable if it breaks anything! :-)

<#
.SYNOPSIS
    Calculates cost of all blobs in a container or storage account.
.DESCRIPTION
    Enumerates all blobs in either one container or one storage account and sums
    up all costs associated.  This includes all block and page blobs, all metadata
    on either blobs or containers.  It also includes both committed and uncommitted
    blocks in the case that a blob is partially uploaded.
 
    The details of the calculations can be found in this post:
    http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/09/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity.aspx
 
    Note: This script requires an Azure Storage Account and an authenticated AzureRM
    session to run.  Sign in before running, for example:
    Login-AzureRmAccount
.EXAMPLE
    .\CalculateBlobCost.ps1 -StorageAccountName "mystorageaccountname"
    .\CalculateBlobCost.ps1 -StorageAccountName "mystorageaccountname" -ContainerName "mycontainername"
#>
 
param(
    # The name of the storage account to enumerate.
    [Parameter(Mandatory = $true)]
    [string]$StorageAccountName,

    # The name of the storage container to enumerate.
    [Parameter(Mandatory = $false)]
    [ValidateNotNullOrEmpty()]
    [string]$ContainerName
)
 
# The script has been tested on PowerShell 3.0
Set-StrictMode -Version 3

# The following modifies the Write-Verbose behavior to turn the messages on globally for this session
$VerbosePreference = "Continue"

# Check if the AzureRM and Azure.Storage PowerShell modules are available
if ((Get-Module -ListAvailable AzureRM.Storage) -eq $null -or (Get-Module -ListAvailable Azure.Storage) -eq $null)
{
    throw "AzureRM PowerShell modules not found! Please install them by running: Install-Module AzureRM"
}

<#
.SYNOPSIS
   Gets the size (in bytes) of a blob.
.DESCRIPTION
   Given a blob name, sum up all bytes consumed including the blob itself and any metadata,
   all committed blocks and uncommitted blocks.

   Formula reference for calculating size of blob:
       http://blogs.msdn.com/b/windowsazurestorage/archive/2010/07/09/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity.aspx
.INPUTS
   $Blob - The blob to calculate the size of.
.OUTPUTS
   $blobSizeInBytes - The calculated size of the blob.
#>
function Get-BlobBytes
{
    param (
        [Parameter(Mandatory=$true)]
        $Blob)
 
    # Base + blob name
    $blobSizeInBytes = 124 + $Blob.Name.Length * 2
 
    # Get size of metadata
    $metadataEnumerator = $Blob.ICloudBlob.Metadata.GetEnumerator()
    while ($metadataEnumerator.MoveNext())
    {
        $blobSizeInBytes += 3 + $metadataEnumerator.Current.Key.Length + $metadataEnumerator.Current.Value.Length
    }
 
    if ($Blob.BlobType -eq [Microsoft.WindowsAzure.Storage.Blob.BlobType]::BlockBlob)
    {
        $blobSizeInBytes += 8
        $Blob.ICloudBlob.DownloadBlockList() | 
            ForEach-Object { $blobSizeInBytes += $_.Length + $_.Name.Length }
    }
    else
    {
        $Blob.ICloudBlob.GetPageRanges() | 
            ForEach-Object { $blobSizeInBytes += 12 + $_.EndOffset - $_.StartOffset }
    }

    return $blobSizeInBytes
}
 
<#
.SYNOPSIS
   Gets the size (in bytes) of a blob container.
.DESCRIPTION
   Given a container name, sum up all bytes consumed including the container itself and any metadata,
   all blobs in the container together with metadata, all committed blocks and uncommitted blocks.
.INPUTS
   $Container - The container to calculate the size of. 
.OUTPUTS
   $containerSizeInBytes - The calculated size of the container.
#>
function Get-ContainerBytes
{
    param (
        [Parameter(Mandatory=$true)]
        [Microsoft.WindowsAzure.Storage.Blob.CloudBlobContainer]$Container)
 
    # Base + name of container
    $containerSizeInBytes = 48 + $Container.Name.Length * 2
 
    # Get size of metadata
    $metadataEnumerator = $Container.Metadata.GetEnumerator()
    while ($metadataEnumerator.MoveNext())
    {
        $containerSizeInBytes += 3 + $metadataEnumerator.Current.Key.Length + 
                                     $metadataEnumerator.Current.Value.Length
    }

    # Get size for Shared Access Policies
    $containerSizeInBytes += $Container.GetPermissions().SharedAccessPolicies.Count * 512
 
    # Calculate size of all blobs.
    $blobCount = 0
    Get-AzureStorageBlob -Context $storageContext -Container $Container.Name | 
        ForEach-Object { 
            $containerSizeInBytes += Get-BlobBytes $_ 
            $blobCount++
            }
 
    return @{ "containerSize" = $containerSizeInBytes; "blobCount" = $blobCount }
}

# Look up the storage account anywhere in the current subscription (no resource group needed).
$storageAccount = Get-AzureRMStorageAccount -ErrorAction SilentlyContinue |
                      Where-Object { $_.StorageAccountName -eq $StorageAccountName }
if ($storageAccount -eq $null)
{
    throw "The storage account specified does not exist in this subscription."
}
 
# Instantiate a storage context for the storage account.
$storagePrimaryKey = (Get-AzureRMStorageAccountKey -Name $StorageAccount.StorageAccountName -ResourceGroupName $StorageAccount.ResourceGroupName).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName $StorageAccount.StorageAccountName -StorageAccountKey $storagePrimaryKey

# Get a list of containers to process.
$containers = New-Object System.Collections.ArrayList
if ($ContainerName.Length -ne 0)
{
    Get-AzureStorageContainer -Context $storageContext `
                      -Name $ContainerName -ErrorAction SilentlyContinue | 
                          ForEach-Object { $containers.Add($_) } | Out-Null
}
else
{
    Get-AzureStorageContainer -Context $storageContext | ForEach-Object { $containers.Add($_) } | Out-Null
}

# Calculate size.
$sizeInBytes = 0
if ($containers.Count -gt 0)
{
    $containers | ForEach-Object { 
                      $result = Get-ContainerBytes $_.CloudBlobContainer                   
                      $sizeInBytes += $result.containerSize
                      Write-Verbose ("Container '{0}' with {1} blobs has a size of {2:F2}MB." -f `
                          $_.CloudBlobContainer.Name, $result.blobCount, ($result.containerSize / 1MB))
                      }
    Write-Output ("Total size calculated for {0} containers is {1:F2}GB." -f $containers.Count, ($sizeInBytes / 1GB))

    # Launch default browser to azure calculator for data management.
    Start-Process -FilePath "http://www.windowsazure.com/en-us/pricing/calculator/?scenario=data-management"
}
else
{
    Write-Warning "No containers found to process in storage account '$StorageAccountName'."
}
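
For reference, a typical run looks something like the following (the account name is just a placeholder); sign in to AzureRM first so the script can find the account and its keys:

Login-AzureRmAccount
.\CalculateBlobCost.ps1 -StorageAccountName "mystorageaccountname"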

 

Another web blog? Ugh.

Yeah, yeah. I know. Since deleting my Facebook profile, I have a renewed need for an outlet to share my nerdy thoughts. Why not make use of that domain I've had registered for the last ten years? Anyway, you can expect to see random ramblings, posts about technology, and lots of pictures of cats! What the hell is the internet for, after all? :-)

Goodbye, Facebook!

I'm done. That's it. I'm calling it quits. I've deleted my Facebook account. Permanently.

According to my exported Facebook data, my initial registration date was way back on Sunday, October 24, 2004 at 12:17pm EDT. I was a member until Friday, November 11, 2016 at 9:35am EST. That is 4400 days, 21 hours, and 18 minutes. Quite a bit of time if you think about it. I'm afraid to even imagine how much of my life was wasted putzing around on that site.

It's interesting how cathartic dropping my membership has been. I've successfully shrugged off the leeches, the stalkers, and the people I just couldn't bring myself to unfriend. I don't have to see the political bullshit, the vague-booking, or any of the other drama. Now I don't have to make excuses or worry about hurting anyone's feelings: I don't have an account - and it's marvelous! Why didn't I do this sooner?

I do sometimes find myself trying to type facebook into the address bar of my browser if I let my brain go on autopilot. Funny how old habits die so damned hard. I won't lie, it's harder to keep in touch by text message or email (or even hand-written notes) but now it's more meaningful when I do. I like it.

I've had to find other things to fill my free time. It used to be so easy to "check in" on people and what they were doing. Without that crutch, it has been easier to focus on work and other more important things. I'm finally learning to code in C# (which is fun and terrifying all at once). I actually sat down and played a video game, too. I feel much more productive than I ever have - and I actually have REAL free time to enjoy things.

So here's to enjoying life, enjoying friends (people I actually interact with, not just online), and that thing called living.