Customizing the Temporary File Location for Configuration Manager Operating System Image Offline Servicing

While working on a presentation for Microsoft's Minority Student Day here in the Columbus office, I came across an annoyance with Configuration Manager 1610 in my lab. The ConfigMan server in the lab is virtual, hosted on Server 2016 in Hyper-V. The VM's OS disk is 128GB and sits on a high-speed PCIe SSD (Solid State Drive), but an additional 2TB mechanical disk is assigned to the VM for ConfigMan content, images, packages, etc. This normally works great, but when updating an Operating System Image using Scheduled Updates, Configuration Manager will attempt to do all of the WIM operations in a temporary folder on the disk you installed it on - for many, that will be C:. In my case that's 20+ extra gigabytes I don't want on the SSD: a whole bunch of additional write operations and a large amount of data, a workload better suited to the rotational disk.

A quick web search turned up this handy article that someone from Microsoft published several years ago. Rather annoyingly, though, the information is a little hard to follow. Right off the bat, if you try connecting to the WMI namespace as suggested in their article, you're going to have a bad time. You will scratch your head, wonder if this is worth even pursuing, and begrudgingly flip back to the search results to look for a better written article. Unfortunately, you're not going to find one.

To help shed some light on the subject, I thought I'd document my experience with this whole process. So first off, let's have a look at the steps our well-meaning poster suggested in the original article.

  1. Launch WBEMTest
  2. Connect to the WMI Namespace for Configuration Manager
  3. Query WMI for Offline Servicing Manager settings
  4. Drill down into the properties to change the StagingDrive property
  5. Save the changes

To get started, launch WBEMTest. Right-click your Start button, select Run, type wbemtest into the box, then click OK. A happy little window will show up on your desktop.

Now we need to connect to the proper WMI namespace. Fortunately, you can enumerate the available namespaces with PowerShell if you're lost - this is how I tracked down the error in the original post:

gwmi -namespace "root" -class "__Namespace" | Select Name

If you followed the original article's advice to "Connect to the Configuration Manager namespace on the site server. For example, if your site code is 'CCP', connect to namespace 'rootsmssite_CCP'," you won't get very far.

I think the original poster left off several characters, namely the '\' separators between root, sms, and site! So the proper advice here is: if your site code is "HHQ" (mine is), connect to the namespace "root\sms\site_HHQ".

Once you're successfully connected, click the Query button and paste in the following, substituting your own site code for HHQ:

SELECT * FROM SMS_SCI_Component WHERE SiteCode='HHQ' AND ItemName LIKE 'SMS_OFFLINE_SERVICING_MANAGER%'
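Incidentally, the same WQL works from PowerShell, which makes for a handy sanity check that exactly one component comes back before you start clicking around:

gwmi -namespace "root\sms\site_HHQ" -query "SELECT * FROM SMS_SCI_Component WHERE SiteCode='HHQ' AND ItemName LIKE 'SMS_OFFLINE_SERVICING_MANAGER%'"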

Back in WBEMTest, the query should return a Query Result. Double click it. This will open the Object Editor.

Look for and double click Props in the properties list. This will open the Property Editor.

Click the button titled View Embedded. The Props value is an array of four embedded objects; inspect each one until you find the StagingDrive property.

Once you've located it, change Value1 in that list to whatever drive letter you want the temporary working folder to be placed on. I've chosen E:.

You've come all this way - now for the most important part: save your changes! If I took screenshots of every click, this post might tie for a Guinness World Record for Longest Blog Post in Vertical Pixels. So follow loosely - basically click Save and Close for each window. My exact cadence was like so: click Save Property, then click Save Object, then click Close. Next, click Close on the Query Result. Click Save Property, then click Save Object, and finally click Close. Whew! That was a lot of clicking. You may now exit WBEMTest.
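If you'd rather skip the clicking entirely, the same change can be scripted. Here's a rough PowerShell sketch of the process above - it assumes the standard SMS Provider layout (a Props array of embedded objects, each with PropertyName and Value1 members), so treat it as a starting point and substitute your own site code and drive letter:

$siteCode = "HHQ"   # your site code
$newDrive = "E:"    # drive to host the temporary working folder

# Grab the Offline Servicing Manager component for this site (should be a single result, as above)
$component = gwmi -namespace "root\sms\site_$siteCode" -query "SELECT * FROM SMS_SCI_Component WHERE SiteCode='$siteCode' AND ItemName LIKE 'SMS_OFFLINE_SERVICING_MANAGER%'"
$component.Get()    # re-fetch the full instance so the Props array is populated

# Find the embedded StagingDrive property and point Value1 at the new drive
$props = $component.Props
foreach ($prop in $props) {
    if ($prop.PropertyName -eq "StagingDrive") { $prop.Value1 = $newDrive }
}
$component.Props = $props
$component.Put()    # write the change back to the SMS Provider

I haven't tested that against every ConfigMan build, so if you go this route, verify the result in WBEMTest afterward.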

I'm not sure if you need to restart the Site Server to pick up this change, but the original article mentioned it would be used on the next Offline Servicing run. I bounced my VM for good measure. I truly hope this post was helpful and saved you a bit of headache. With any luck, you were able to switch your temporary working folder to your drive of choice!

 

Home Lab Upgrade

Working on Azure is fun. It’s more than fun – it’s freaking awesome. Anyone can make pretty much anything they want without having to have hardware on-hand to back it up. All you need is money (or a sponsored Azure account from your workplace *hint hint*).

While that’s pretty damn amazing, I miss having a home lab. I still have one, don’t get me wrong, but it’s mostly neglected these days. It’s less of a lab and more of a home control appliance. It powers our home automation, turning lights, heat, etcetera on and off. It monitors the water level in the basement sump, reports effluent volume, charts high and low activity dates. It stores our local backups and files before they’re shipped off to an Azure storage account for off-site backup. It’s a real production system that we legitimately depend on.

I reminisce about building my lab the way any nerd reminisces about building a computer. There's something about picking out parts that perfectly complement each other, deliver amazing performance, or exactly match the requirements. It affirms my nerd-cred. It's a physical representation of my membership card. Haha.

I recently purchased a new gigabit switch with PoE to power and connect our security and monitoring systems. Gigabit is typically fast enough for nearly everything we need, certainly for surveillance and media purposes, but not so much for some of the other things we do with our networked storage. I've noticed that with multiple machines connecting to our file server, we can easily exceed the available bandwidth of a single link. I toyed around with LACP and port channels (which work great, but still limit each client to 1Gb/s), but wouldn't it be fun to have something in an entirely different class? I think it's time to make the jump to 10 gig Ethernet.
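For the curious, the Windows side of that port-channel experiment boils down to a single cmdlet. This is a minimal sketch assuming the server runs Windows Server with the built-in LBFO teaming; the adapter names here are hypothetical, so check Get-NetAdapter for yours:

# Create an LACP team from two physical adapters (hypothetical adapter names)
New-NetLbfoTeam -Name "FileServerTeam" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic

The switch side needs a matching LACP aggregation group on those ports, and even then each individual client still tops out at a single gigabit - hence the 10G itch.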

I managed to snag a Ubiquiti Networks US-16-XG SFP+ 10 Gig Aggregation switch for fairly cheap. I have some existing UniFi gear that I've had pretty decent luck with, so I wanted something that could integrate with it. Luckily, Ubiquiti had just released this beauty (though that's also a downside, because it's a bit buggy - more on that in a bit). The DL380P we use as a file / virtualization / automation server didn't have a 10GbE interface, so I had to procure one of those as well. I ended up with an HP CN1100E for ~$100. It's a dual-port 10G PCIe converged network adapter that does Ethernet, iSCSI, and Fibre Channel over Ethernet (FCoE) connectivity. I won't need anything but Ethernet for our deployment, but the price was right!

The last major piece of the puzzle was cabling. I've not forayed into the world of SFP+ before, so I had to learn about optics, DACs, transceivers, and a whole mess of things I was previously ignorant of. We already have Cat6 strung around the house, so I had hoped to use that to connect my desktop to this new 10G switch. Boy, was I wrong. The 10GBASE-T SFP+ transceivers with RJ45 connectors are something like $300 each - and that's on the low end of what I could find! It takes two per connection, and that's more than I'm willing to spend. This particular connection requires a bit more thought. Perhaps a single 100' run of fiber would be more economical. The jury is still out on this one.

Connecting the server and switches was relatively simple. I hopped over to the Ubiquiti community forums and checked the US-16-XG's compatibility list of DACs and optics before settling on these iPolex passive DACs I picked up from Amazon. At ~$24 each, they're fairly cheap. I used four of them in the rack - two from the US-16-XG to the DL380P, and two to the gigabit PoE switch. This configuration essentially allows ~20 clients (theoretically - real-world is obviously less) to access the server at their full gigabit line rate. Both the server connection and the inter-switch connection are aggregated. I'm planning for the desktop to connect directly to the XG as well, which will allow full 10Gb access to the server once I get the whole fiber thing sorted out.

Now, about the bugginess of the switch. The US-16-XG was originally a bit finicky about the DACs, and others have reported the same issue on the community forums. For whatever reason, the DACs would work in some ports but not others; shuffling them between ports eventually got every link up. Here's a snippet from the working switch configuration:

(UBNT) >show fiber-ports optics-info all

                         Link Link                                 Nominal
                       Length Length                                   Bit
                         50um 62.5um                                  Rate
Port     Vendor Name      [m] [m]  Serial Number    Part Number     [Mbps] Rev  Compliance
-------- ---------------- --- ---- ---------------- ---------------- ----- ---- ----------------
0/1      OEM              0   0    CSS31GB1516      SFP-H10GB-CU3M   10300 03   DAC
0/2      OEM              0   0    CSS31GB1523      SFP-H10GB-CU3M   10300 03   DAC
0/5      OEM              0   0    CSS31GC0602      SFP-H10GB-CU3M   10300 03   DAC
0/6      OEM              0   0    CSS31GC0601      SFP-H10GB-CU3M   10300 03   DAC

Overall, I'm happy with this configuration. Now that the server's network connection is no longer the bottleneck, I can probably live with gigabit to my desktop - but I still dream of 10G! I'll update the post with whatever I choose to do when I get it figured out. Until next time!