Terraform and Extensions for DSC and AD Join

I’m putting these here so I don’t forget how to properly format these resources.  Future me will be pleased about this at some point.

resource "azurerm_virtual_machine_extension" "dsc" {
  count                = var.compute_instance_count
  name                 = "TestDSC"
  virtual_machine_id   = element(azurerm_virtual_machine.compute.*.id, count.index)
  publisher            = "Microsoft.Powershell"
  type                 = "DSC"
  type_handler_version = "2.80"

  settings = <<SETTINGS
        {
            "WmfVersion": "latest",
            "Privacy": {
                "DataCollection": ""
            },
            "Properties": {
                "RegistrationKey": {
                  "UserName": "PLACEHOLDER_DONOTUSE",
                  "Password": "PrivateSettingsRef:registrationKeyPrivate"
                },
                "RegistrationUrl": "${var.dsc_endpoint}",
                "NodeConfigurationName": "${var.dsc_config}",
                "ConfigurationMode": "${var.dsc_mode}",
                "ConfigurationModeFrequencyMins": 15,
                "RefreshFrequencyMins": 30,
                "RebootNodeIfNeeded": false,
                "ActionAfterReboot": "continueConfiguration",
                "AllowModuleOverwrite": false
            }
        }
SETTINGS

  protected_settings = <<PROTECTED_SETTINGS
    {
      "Items": {
        "registrationKeyPrivate" : "${var.dsc_key}"
      }
    }
PROTECTED_SETTINGS
}

resource "azurerm_virtual_machine_extension" "joindomain" {
  count                = var.compute_instance_count
  name                 = "joindomain"
  virtual_machine_id   = element(azurerm_virtual_machine.compute.*.id, count.index)
  publisher            = "Microsoft.Compute"
  type                 = "JsonADDomainExtension"
  type_handler_version = "1.3"

  settings = <<SETTINGS
      {
        "Name": "EXAMPLE.COM",
        "User": "EXAMPLE.COM\\azureuser",
        "Restart": "true",
        "Options": "3"
      }
SETTINGS

  protected_settings = <<PROTECTED_SETTINGS
    {
      "Password": "F@ncyP@ssw0rd"
    }
PROTECTED_SETTINGS
}
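For completeness, here’s a rough sketch of the variables these resources reference. The types, defaults and descriptions are my assumptions rather than anything required, so adjust to taste:

variable "compute_instance_count" {
  type    = number
  default = 2
}

variable "dsc_endpoint" {
  description = "Registration URL for the Azure Automation DSC pull service"
  type        = string
}

variable "dsc_key" {
  description = "Registration key for the DSC pull service (keep this out of source control)"
  type        = string
}

variable "dsc_config" {
  description = "Node configuration name, e.g. MyConfiguration.webserver"
  type        = string
}

variable "dsc_mode" {
  description = "ApplyOnly, ApplyAndMonitor or ApplyAndAutoCorrect"
  type        = string
  default     = "ApplyAndMonitor"
}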

Service Fabric, Containers and Open Networking Mode

In case you haven’t noticed, deploying applications in containers is the way of the future for a lot of workloads.  Containers can potentially solve a lot of problems that have plagued developers and operations teams for decades, but the extra layer of abstraction can also bring new challenges.

I often deploy Windows containers to Service Fabric, not only because it’s a nifty orchestrator, but also because it provides a greater array of options for modernizing Windows workloads, since you can run Service Fabric on-premises as well as in Azure to support hybrid networking and other business requirements.

You can quickly create a Service Fabric cluster in Azure with the portal, and Visual Studio’s project wizard can get you started deploying existing containers to a Service Fabric cluster pretty quickly. But as with anything in the technology space, what comes out of the box might not do exactly what you need.

In the case of a recent project, I wanted to be able to deploy more instances of a container than I had nodes in my cluster.  By default, Service Fabric will deploy one instance of an application to each node until you’ve placed that application on all nodes.  However, depending on what your container does, you might want to double or triple up.  This is accomplished with two things: open networking and partitions.

You can get the majority of the way there with this documentation about container networking modes on Service Fabric – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-networking-modes.  You’ll need to make some changes to your Service Fabric deployment template, including parts that issue each node in your VM Scale Set additional IP addresses on your subnet.  Each container deployed will get one of these IP addresses. Then you will need to make some changes to your application and service manifest files, including setting the networking mode to “Open” and adjusting how you handle port bindings.
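As a rough sketch of the manifest side (based on the linked doc; the package, code package and endpoint names here are placeholders from my example, not required values), the ServiceManifestImport in the application manifest ends up looking something like this:

<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="MyContainerTypePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Map the container port to the endpoint declared in the service manifest -->
      <PortBinding ContainerPort="80" EndpointRef="MyContainerTypeEndpoint" />
      <!-- Open networking: each container instance gets its own IP address from the subnet -->
      <NetworkConfig NetworkType="Open" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>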

Because your application is really a container, it’s deployed as a stateless service.  Most of the Service Fabric documentation talks about partitions in relation to stateful services, and it’s a bit unclear how to apply that to stateless ones.

Within your application manifest, you’ll need to edit your service instance to use either the named or ranged partition type, instead of "SingletonPartition", which is the default.  I prefer using the ranged version as it’s much easier to adjust the partition count, but I admittedly don’t have a good understanding of how the low and high keys apply to the containers when they aren’t actually using those ranges to distribute data.

Named Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <NamedPartition>
            <Partition Name="one" />
            <Partition Name="two" />
            <Partition Name="three" />
            <Partition Name="four" />
        </NamedPartition>
    </StatelessService>
</Service>

Ranged Example:

<Service Name="MyContainerApp" ServicePackageActivationMode="ExclusiveProcess">
    <StatelessService ServiceTypeName="MyContainerType" InstanceCount="[InstanceCount]">
        <UniformInt64Partition PartitionCount="4" LowKey="1" HighKey="10" />
    </StatelessService>
</Service>

Once you’ve made all these changes, Service Fabric will deploy containers equal to the number of instances multiplied by the partition count, up to the available number of IP addresses.  So two instances of four partitions will be eight containers and eight IP addresses.  Keep in mind that if a deployment exceeds the number of IP addresses you have available, you will have errors.  Based on my testing so far, I don’t recommend trying to max out your available IP addresses; there seems to be a need for a little wiggle room for scaling operations.

Microsoft OpenHack on Containers comes to San Francisco – May 15-17

Who?

OpenHack brings together groups of diverse developers to learn how to implement a given scenario on Azure through three days of immersive, structured, hands-on, challenge-based hacking. This scenario is focused on implementing container solutions and moving them to the cloud.

What!

Join us for three days of fun-filled, hands-on hacking where you will team up with community peers and learn how to containerize Linux and Windows-based workloads and move them to the cloud. During OpenHack you will:

  • Choose your desired tooling and technology based on Kubernetes or Azure Service Fabric.
  • Hack on challenges structured to leave you with the skills and expertise needed to deploy containers and clusters in the workplace.
  • Network with fellow community members and other professional developers from startups to large enterprises, as well as Microsoft developers.
  • Get answers to your technology and workplace project questions from Microsoft and community experts.

Bonus

In addition to the challenge-based learning paths, a limited number of 1-hour envisioning slots will be made available on a first come, first served basis to work side-by-side with Microsoft experts on your own workplace projects.

OpenHack is FREE for registered attendees!

Food, refreshments, prizes and fun will be provided. If travelling, attendees are responsible for their own travel expenses and evening meals.

What you need:

To be successful and maximize value from the event, participants should have a basic understanding of the following concepts and technologies. You are not required to be an expert or authority, but a familiarity with each will be advantageous:

  • Docker containers
  • Cloud hosted services
  • REST Services
  • DevOps
  • IP Networking & Routing

Click here to register!

OpenHacks are invite only and space is limited. You may be put on a waitlist. When your registration is confirmed, we will follow up with additional details.

New Surface, New Start

This month, I was blessed with an email from work stating “You Qualify for a Hardware Refresh!”… Oh, music to my ears. 🙂   While it takes time to set up a new machine, I’m often appreciative of the chance to install just the software I’m using right now and leave some of the clutter of the previous year or so behind. This time around, I ordered the current Surface Pro i7 with 16GB of RAM and a terabyte of disk space. I’ll probably never use a terabyte of disk space since I keep most everything in the cloud except for synced mail, OneDrive documents and GitHub repos. But I’m not going to say no to it, especially since it came along with the 16GB of RAM.

Since my days at Microsoft, I’ve been a dedicated Surface user, starting with the original Surface Pro.  I’ve used both the Surface 3 and 4 for a while and regularly use a Surface Book.  Personally, I find the Surface Book a tad too heavy, so the Surface Pro is my go-to device for commuting and overnight trips.  Actually, it’s my go-to pretty much all the time.  Sometimes the kickstand isn’t ideal for the times I actually need to put it on my lap, but there are far too many other “pros” to make that a deal breaker.

This Surface doesn’t come with any included accessories, like the pen.  I’m not a big pen user, but there are times when I really need it, and I’m hoping to get better about using the pen more over time.  Thus, my accessories included a pen, a Type Cover, an extra charger and, since I really like having a mouse, one of the new Surface Arc mice without any buttons.  It’s very sleek looking.

After the customary rounds of software updates on top of the corporate-provided image (version 1709) that came with it, I joined it to Azure Active Directory with my work credentials and then added in the credentials for my other accounts, like Hotmail and Gmail.  As someone who spent a lot of time managing traditionally domain-joined devices, I find the Azure Active Directory joining process a much nicer end-user experience.  Once all the company policies synced down, I was prompted to set up Windows Hello, which I realize I really miss when it’s not on.

I’m not really all that hardcore about my setup. I tend to stick with a lot of the out of box settings and just tweak as I go along. My Windows 10 setup preferences stem from being an early adopter of Windows 8, so I prefer the full screen start menu. I like the look and I’ve gotten used to touching my screen to start applications. I also quickly take a pass through uninstalling all the default applications and games I won’t use. I like to keep my “desktop” pretty clean, so I pin some key applications in the task bar and everything else is pinned to my start menu. I often just type what I need into the Cortana search box and go from there.

Next up on the task list are all the business applications I need for my daily work. We have Office 365, and Office was preinstalled for me, so it was just a matter of adding my credentials there. I had to manually install Skype for Business and Microsoft Teams. When I opened up OneNote to connect my notebooks, I realized I only use about a handful and left a whole bunch disconnected until they become needed again. The last of the quick stuff was configuring OneDrive for Business and my personal files and grabbing our applications for Expenses out of the Windows Store.

While I’m in the Windows Store, I also grab Twitter, NetGen Reader, Slack, Microsoft Solitaire (my guilty pleasure) and Ubuntu (aka WSL, aka Bash on Windows). That last one requires enabling the Windows Subsystem for Linux feature and rebooting. Oh look, a few more Windows updates!

As I’m often using more than one identity throughout my day, those identities tend to gravitate to particular browsers… so I need more than one browser to choose from. I’ve got Edge and IE by default and add Chrome to the mix. I’ve heard good things about Firefox recently, but haven’t had a need to install a fourth browser option.

For my final heavy lifters, I install Docker for Windows, Visual Studio Code (with plug-ins for Docker and Azure), the most current version of the Azure CLI Tools, Azure Storage Explorer, Azure PowerShell and GitHub Desktop. Visual Studio Code prompts for the Git command line tools to be installed, but I happen to like the Desktop version too. Don’t judge. I’ll need to install Visual Studio 2017 as well, even though VS Code is my usual go-to for that sort of thing. Visual Studio takes a long time to install, so I’ll save that to kick off at a time when I don’t need to use my machine for a while.

And for some of the more not-quite-designed-for-Windows applications, like kubectl and BOSH for Cloud Foundry, I’ve found it much easier to create a C:\bin directory and put all those applications there. I add that directory to my PATH environment variable and then usually don’t have issues running things from the command line after that.
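For example, from a command prompt, something like this does the trick (a quick sketch; note that setx silently truncates values longer than 1024 characters, so the Environment Variables dialog is the safer route if your PATH is already long):

mkdir C:\bin
copy kubectl.exe C:\bin\
setx PATH "%PATH%;C:\bin"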

I’m sure I’ll find some missing things as I go along as one does, but this Surface is ready to take the front seat in my work bag. All it needs now is some stickers.


Are you CCNA bound? Here’s a chance to win some training!

It’s been a while since I’ve had a need to focus specifically on Cisco exams, but if you are seriously in the business of computer networking, you may be looking at Cisco certification in your future.  One of my connections “via the interwebs” (@flackboxtv) is running a contest to win access to some of his Cisco training materials.  I’m not always a big fan of boot-camp-style studying because nothing beats really understanding how the big picture works to pass an exam, but if you need that bump in your studying to get you over that hump, or something to get you started in the New Year, this could be just the thing.

This is what the winner gets:

  • Payment for your Cisco CCNA exam
  • Access to the CCNA course online
  • Weekly coaching calls
  • Full access to the AlphaPrep test engine CCNA exam bank
  • 400 pages of configuration lab exercises with setup instructions to run on your laptop for free
  • An additional 150 pages of bonus troubleshooting labs
  • Private Facebook study group

If this is something that interests you, the chance to enter ends on 1/13/18 – https://www.flackbox.com/giveaways/cisco-ccna-exam


Shared Drives with Docker for Windows

I’ve mentioned in a previous post that Docker recommends that you avoid volume mounts from the Windows host, but sometimes you just have to have it. You’ll want to set up the Shared Drives feature  in your Docker for Windows settings to get that going.  You’ll only need this feature if you need to share files from your Windows host to the Linux containers.  If you are working with Windows containers, it shouldn’t be necessary, as per the Docker documentation.

Simply select the checkbox for the drive letter you want to share and you will be prompted for credentials. After that, the drive letter should remain checked and you’ll be able to mount volumes under your user’s home directory.
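Once the drive sticks, a quick sanity check looks something like this (the path is a placeholder for whatever lives under your user profile):

docker run --rm -v C:\Users\yourname\project:/data alpine ls /data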

In my case, the only account on my machine with Administrator privileges was my “domain” account… aka DOMAIN\username, which was prefilled in the credentials box. Upon entering my password, Docker for Windows thought a bit, reported that it was updating the settings, cleared the checkbox and then declared itself to be finished, leaving my C drive unshared. Grrr.

It occurred to me that maybe Docker for Windows didn’t like the DOMAIN\username format, so I tried my UPN format instead – username@domain.com. That immediately failed as an invalid account; however, when I checked my account settings on my PC, the account is clearly listed as DOMAIN\username. I say this is a “domain account”, but the machine is not domain joined, so it’s authenticating via Azure AD. I did some hunting around and there are some related issues dating back to 2016, which don’t seem to have a clear resolution – https://github.com/docker/for-win/issues/132 and https://github.com/docker/for-win/issues/303

In addition to my work account, I also have my personal MSA (Microsoft Account) as an alternate account on this machine. It didn’t have Administrator rights, but I figured it was worth a shot. I entered my MSA email address and password and lo and behold… it worked! The C drive checkbox stayed and Docker was able to mount some local volumes. Due to the non-administrative nature of that account, I did find I had to add some additional file-sharing permissions on a subfolder needed during a Docker build, but otherwise I was good to go.

The end result is that if you are having problems turning on the Docker for Windows Shared Drives feature, you may need to use or create an alternative local account.

Azure Containers, SSH Keys and Windows

When working with containers on Azure there are a couple things to keep in mind around key management. I’ll use Azure Container Service (AKS) for the context here, but in the end, keys are keys.

You have two options when creating a cluster on AKS:

az aks create --resource-group YourRG --name YourCluster --generate-ssh-keys

az aks create --resource-group YourRG --name YourCluster --ssh-key-value \PATH\TO\PUBLIC\KEY

With --generate-ssh-keys, Azure will automatically create the necessary keys for you, named id_rsa and id_rsa.pub, in the $HOME\.ssh folder of the machine that created the cluster. If there are already keys with those names there, it will re-use them.

Once your cluster is created, you’ll use

az aks get-credentials --resource-group YourRG --name YourCluster

to download an access token to set the current context for your session, manage the cluster and deploy containers.
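For example, a quick check that the context is wired up, assuming kubectl is already installed and on your PATH:

kubectl config current-context
kubectl get nodes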

If you happen to work from more than one machine, or expect other people to also access this cluster or make other clusters using the same keys, you need to share these auto-created keys appropriately. I work from two different machines, wasn’t paying attention and ended up with two different “default” sets of keys. I awkwardly discovered this when creating a cluster with my home machine, traveling with my laptop and then finding myself unable to access the cluster while out of town. Joys.

Using “--generate-ssh-keys” shall henceforth be known as “the lazy way” of key management.

To do this better, create your keys manually, put them in a secure location accessible by those who matter and then make your clusters using “--ssh-key-value” instead.  (Let’s call this the “thoughtful way.”) You will also need to provide the path to the key when requesting the access token. For example:

az aks get-credentials --resource-group YourRG --name YourCluster --ssh-key-value \PATH\TO\PRIVATE\KEY

As I’m a Windows user, I use PuTTYgen for my key creation. I will refrain from reinventing the wheel here, as there are already some pretty comprehensive posts, either in Microsoft Docs or this one by Pascal Naber.
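Putting that together, the thoughtful way ends up looking roughly like this (the file names are placeholders, and ssh-keygen here could just as easily be PuTTYgen exporting an OpenSSH-format public key):

ssh-keygen -t rsa -b 2048 -f aks_key
az aks create --resource-group YourRG --name YourCluster --ssh-key-value aks_key.pub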

A Note about AKS vs ACS: As of this writing, you have two different ways of creating container clusters in Azure. ACS allows you to create clusters orchestrated with Kubernetes, Docker Swarm or DC/OS. Due to the nature of the way these are created, you have full access to the master node VM. If you’ll be using PuTTY to connect to the master node of your ACS cluster directly, you’ll need to use a PuTTY-specific PPK file for your private key and specify it in your PuTTY session settings. If you create a Kubernetes cluster using AKS (as I did in my examples above) you won’t have SSH access to the master node.

A Note About Service Principals: In addition to automatically generating keys, AKS/ACS will automatically generate the service principals it needs. However, it won’t generate a new SP for each cluster. If you have a suitable SP already in your subscription, it will re-use that one. So just keep that in mind for your production clusters; you may want to provide different service principals for various clusters, etc. You can read more about setting up Azure AD SPs for AKS if you so desire.

Working with Containers while working on Windows

With all the rage with containers these days, you may be wondering how to get started and make sure you can be successful if Windows is your preferred client device. One of the cool things about working with containers from a Windows machine is that you can work with both Linux and Windows containers. This post will focus on working with Linux containers, but you’ll need all these tools for working with Windows containers too.

For building containers and working with images locally, you’ll need Docker for Windows. Just go with the default installer options and you should be ready to go in short order. You will need a machine that supports virtualization and has those features turned on. When you work with Windows containers, they will run on your OS. When you work with Linux containers, they will run in a Hyper-V VM, which you can see if you open Hyper-V Manager on your machine.

It’s worth noting that if you are going to be working with persistent or shared volumes on your containers, they work a little bit differently on your Windows machine. Docker recommends that you use the --mount flag with volumes, and when using them for Linux containers, it’s better to share from the Linux MobyVM and avoid using the Windows host directly.  However, if you need to use the host directly, you can, by sharing the required drive via the Shared Drives feature under the Docker for Windows Settings.
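For example, the recommended pattern uses a named volume that lives inside the MobyVM instead of a bind mount from the Windows host:

docker volume create mydata
docker run --rm --mount type=volume,source=mydata,target=/data alpine sh -c "echo hello > /data/hello.txt"
docker run --rm --mount type=volume,source=mydata,target=/data alpine cat /data/hello.txt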

For deploying containers to Azure, you will want the latest version of the Azure CLI 2.0.  You DO NOT want anything less than version 2.0.21, trust me. You will use the Azure CLI to do things like create and manage container services (either ACS or AKS), push images to Azure Container Registry, deploy containers to Azure Container Instances and get the credentials to connect to those resources.
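As a rough sketch of that workflow (the registry, resource group and image names are placeholders, and a private registry will also need credentials supplied to the container group):

az acr login --name myregistry
docker tag myapp:latest myregistry.azurecr.io/myapp:v1
docker push myregistry.azurecr.io/myapp:v1
az container create --resource-group YourRG --name myapp --image myregistry.azurecr.io/myapp:v1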

Once you are connected to those resources (particularly if they are going to be used for Linux containers), you’ll be using the same tools as anyone working from a Linux client, such as Kubectl for deploying containers to a Kubernetes cluster.

For sanity checking purposes, I also make sure I have Windows Subsystem for Linux (aka “Bash on Windows”) installed and the latest version of Azure CLI 2.0 installed in that environment too. I usually can do everything I need from CMD, but sometimes a strange error has me double checking my work in “Linux-land”. 🙂 Speaking of WSL, if you really want to trick out your WSL setup, read this.

Once you have Linux container hosts deployed in Azure, you may want to connect to one directly using SSH – perhaps your Kubernetes master agent. I use PuTTY for this, because I like being able to save my connection settings in the application to use again when I’m working on a project over several days. You will need to convert your SSH keys to a PPK file type with PuTTYgen before using them to connect to a Linux container host.  (More to come on key management later, I promise.)

So to sum up… To get started with containers on a Windows machine, you need:

  • Docker for Windows
  • Azure CLI 2.0
  • PuTTY and PuTTYgen

Happy Containerizing… and if you run into some “beyond the basics” challenges, let me know in the comments.

Keeping Up with the Releases

There are a lot of great things to say about the faster release cycles we see with software these days. Bugs are fixed and features become available to us sooner, and security issues are resolved more quickly too. In a lot of cases, our operating systems and software packages are smart enough to check themselves and let us know updates are available or automatically install themselves.

I work between two different machines regularly and depending on my schedule sometimes favor one machine over the other for several weeks at a time. For better or for worse (mostly for the better), Windows 10 takes care of itself for me, as does Visual Studio Code and Docker for Windows. This means I often find myself sitting down at the “other” machine and once again waiting for those updates to install. While sometimes I admit to rolling my eyes in frustration every time I get an update alert, I do appreciate that I don’t have to think about those updates otherwise.

But for software that doesn’t automatically update, I will sometimes find myself wondering why demo notes I’ve drafted on one machine suddenly aren’t working when I try them on the other machine, or worse, blaming documentation for being incorrect when the commands don’t work as instructed.

When it comes to documentation freshness vs software freshness… Let’s not go there today. I generally start with docs.microsoft.com when I’m looking for information about Azure and other Microsoft products. While nothing is immune to errors or going out of date, more often than not my problems exist between my keyboard and monitor – in the form of some piece of software needing an update.

The top two things on my machines that I have to manually update regularly are:

  • Azure CLI 2.0 – Instructions for Installing or Updating Azure CLI 2.0
    • Type “az --version” at your command line to see what version you are running.  As of this writing (10/17/17) the current version is 2.0.19.
    • If you aren’t a regular Azure CLI user and just want to try it out via the Azure Portal, check out the Cloud Shell.
  • Azure PowerShell – Instructions for Installing or Updating Azure PowerShell 4.4.1
    • I recommend the command line installer for this one (see the sketch after this list), but if you want to do something other than that (like install within a Docker container) you can find those instructions here.
    • You can check your version of Azure PowerShell by typing “Get-Module AzureRM -list | Select-Object Name,Version,Path” at the PowerShell command line; if you don’t get any response back, you don’t have the AzureRM modules installed at all.
    • Also, don’t confuse the Azure PowerShell modules with the PowerShell that comes on your Windows machine itself.  That’s at version 5.1 right now if you have Windows 10 with your updates turned on. You can check that by typing “$PSVersionTable” at your PowerShell command line. If you want instructions for running the beta version 6, you can find all that information here with the general installation instructions.
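If you went the PowerShell Gallery route (one of the command line install options), the check-and-update cycle from a PowerShell prompt looks roughly like this:

Get-Module AzureRM -ListAvailable | Select-Object Name,Version
Update-Module -Name AzureRM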


Windows containers, dockerfiles and npm

As part of my adventure with the IoTPlantWatering project, I ran into the issue of not being able to automatically launch “npm start” from within a Windows container using this command in my Dockerfile, which would work just fine if this were a Linux container.

CMD [ "npm", "start"]

If I built the container without this command, connected to it interactively and typed “npm start”, it worked fine. What gives? For Windows, you need to use:

CMD [ "npm.cmd", "start"]

Here are a couple of links that give you a little more context as to why, but if nothing else, just remember – npm.CMD!
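As best I can tell, the reason is that the exec form of CMD doesn’t go through a shell, and on Windows npm is a batch wrapper (npm.cmd) rather than an .exe, so it has to be named explicitly. A minimal Dockerfile sketch to illustrate (the base image is hypothetical and assumes Node.js is already installed in it):

# Hypothetical base image that already has Node.js installed
FROM mycompany/windows-node:latest
WORKDIR C:\\app
COPY package.json .
# Shell form runs through cmd.exe, so plain "npm" resolves to npm.cmd automatically
RUN npm install
COPY . .
# Exec form bypasses the shell, so the .cmd wrapper has to be named explicitly
CMD ["npm.cmd", "start"]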