Mac Admin

Submit Jamf inventory update on OS changes

Over the years, Apple have released a number of new and helpful profile payloads, such as PPPC, System Extensions and the like. Each of these has a minimum OS version required for it to work. Mac Admins soon realised that if these settings were deployed to an OS that didn’t support them, they often wouldn’t take effect once the device was updated, until they were redeployed.

What was needed was something to flag and trigger the installs as soon as the device was updated. In this post, I’ll share a short script I used to achieve this with Jamf Pro.

The problem

It seemed this was an issue with an easy answer: scope the deployment of OS-specific profiles to smart groups of devices that meet that OS. For example, to deploy a PrivacyPreferencesPolicyControl payload to only macOS 10.14 or newer devices, I would set a target smart group that selected those devices. Rinse and repeat for any other OS-specific payloads.

Simple, right? Well, not quite.

Until very recently, Jamf Pro only used OS version data collected by its jamf agent for version comparisons in smart groups. This meant the device would need to be updated or upgraded, then submit an inventory whenever the next trigger was hit, before it received the profiles. These profiles could manage a variety of items, including some that suppress prompts and shape the user experience, and others that are required for security tools to operate. You’d really want these down ASAP!

Most Jamf Pro customers have their Jamf inventory submissions (or “recons”) set to once per day, or once per week. So you could be waiting anywhere from almost 24 hours to almost a week for these profiles to be deployed (depending on your configuration).

This isn’t great for many enterprises…but there are a few possible options to get around this:

Option 1: Submit an Inventory on the recurring check-in

So one solution would be to have all of your devices submit an inventory update on the recurring policy check-in – by default, this is every 15 minutes. This is considered very bad practice for many reasons, including:

  • The inventory submissions are not hugely intensive (depending on what Extension Attributes you have written), but they will take up some extra resources as they run. Having this happen every 15 minutes would make for a poor experience for your users
  • The inventory submissions do collect a fair amount of data (again, this can vary depending on what Extension Attributes you have written), and so having these run so frequently will grow and bloat the database, causing performance and stability issues.

TL;DR – Don’t do this

Option 2: Have an Inventory submission at every start up

This is not a bad option, as devices can’t install an OS update without a reboot*. However, it is possible that this might be missed if the device doesn’t have network/Internet access shortly after boot, so there is no guarantee it’ll complete when you need it. Additionally, this can submit lots of unnecessary inventory updates – how many times does a device restart without installing an update (and therefore without needing to let Jamf know of a new OS version)?

This is a pretty good idea if you’re aware and happy to work with the limitations above.

Rich Trouton has a great blog on configuring this to work around most of the above issues which can be found here. Their blog was also what inspired me to get off my backside and complete this write up!

Option 3: Crack out the scripting tools!

So I ended up knocking up a Launch Daemon and script to solve the problem for us.

The Launch Daemon simply triggers the script at every boot. The script itself will:

  1. Create and update a logfile at /Library/Logs/Management/updateJamfOnOSChange-1.0.log
  2. Check a local file (/Library/Preferences/org.macadmins.macoscheck.plist) for the “LastOSChecked” key:
    • If this is present and matches the current booted OS, exit the script nicely
    • If this is not present, or differs in any way from the current booted OS, continue.
  3. Use the Jamf binary’s checkJSSConnection option to try to contact the device’s Jamf Pro server for up to (roughly) 5 minutes
    • If this fails to get a positive connection within 5 minutes, exit the script with an error (exit 2)
    • If this connects successfully, continue
  4. Submit a Jamf recon / inventory update and collect the exit code
    • If this fails, exit the script with an error (exit 3)
    • If this is successful, update the local file’s “LastOSChecked” key with the current booted OS
  5. Exit the script
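To make that flow concrete, here’s a rough sketch of such a script. The log path, plist path and “LastOSChecked” key come from the list above; the retry counts, timings and overall structure are my own assumptions rather than the exact script from my repo:

```shell
#!/bin/bash
# Sketch of the boot-time OS-change check described above.
# Paths and the "LastOSChecked" key are from the post; timings are assumed.

logFile="/Library/Logs/Management/updateJamfOnOSChange-1.0.log"
prefFile="/Library/Preferences/org.macadmins.macoscheck.plist"
jamf="/usr/local/bin/jamf"

log() {
    echo "$(date): $1" >> "${logFile}"
}

# Retry a command up to $1 times, sleeping $2 seconds between attempts
retry() {
    attempts="$1"; delay="$2"; shift 2
    i=0
    while [ "${i}" -lt "${attempts}" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "${delay}"
    done
    return 1
}

main() {
    currentOS="$(/usr/bin/sw_vers -productVersion)"
    lastOS="$(/usr/bin/defaults read "${prefFile}" LastOSChecked 2>/dev/null)"

    # Step 2: exit nicely if the booted OS matches the last recorded one
    if [ "${lastOS}" = "${currentOS}" ]; then
        log "OS unchanged (${currentOS}); nothing to do"
        exit 0
    fi

    # Step 3: roughly 5 minutes of retries (30 x 10 seconds)
    retry 30 10 "${jamf}" checkJSSConnection -retry 0 || exit 2

    # Step 4: submit the recon, and only record the OS on success
    if "${jamf}" recon; then
        /usr/bin/defaults write "${prefFile}" LastOSChecked "${currentOS}"
        log "Recon submitted; recorded OS ${currentOS}"
    else
        exit 3
    fi
}

# On a managed Mac, the Launch Daemon would effectively just run: main
```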

This process:

  • Allows the script to exit very early in the vast majority of cases (a normal device boot not following an update)
  • Waits roughly up to 5 minutes for a valid connection to the Jamf Pro server (allowing some time for the device to get a connection, but not forever)
  • Checks whether the recon is marked as successful or not
  • Ensures that if any of these fail (the Jamf Pro server is not contactable, or the recon fails), the local file is not updated, meaning the script will be triggered again on next boot.
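For reference, a Launch Daemon to trigger such a script at every boot would look something like the below. The label and script path here are illustrative, not necessarily those used in my repo:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.macadmins.updatejamfonoschange</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/Library/Management/updateJamfOnOSChange.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

RunAtLoad fires the script at every boot, and the script’s own early-exit check keeps the common (no OS change) case cheap.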

This has proved helpful with our Mac estate for the last 6-9 months and so I thought it’d be worth sharing!

Where can I find this?

I have uploaded a copy of the script to my GitHub page here. I have also uploaded an example of the Launch Daemon I’ve used here.

You may have noticed the layout of my repo is a little unusual. That’s an attempt to make it a little easier to build a deployment package from! Some assembly required.

What about Declarative Management?

So with the release of macOS Ventura, macOS devices enrolled in an MDM can push information updates to the MDM solution without being requested. One of these items is the current OS version. Jamf Pro added support for this in version 10.42. More on this can be found in Jamf’s documentation here.

Now this would cover the use case I’ve detailed above; however, we continue to use our solution because:

  • We have had this in place before the release of macOS Ventura and Jamf Pro 10.42
  • Our solution will submit an entire inventory, not just the few items that Declarative Management can currently report
  • Our solution is separate from the MDM protocol and so acts as a second redundant tool in the event that the MDM Declarative Management option doesn’t work as needed.


In this post, I’ve shared a solution for running Jamf inventory submissions as quickly as possible after an OS change, but only when required.

A reminder from my last post: A special note to say that I will be speaking at 2023’s MacADUK conference, being held in May in the lovely Brighton, UK. No I’m afraid I won’t be resuming my Adobe sessions but something a bit different. Grab your tickets here! I’ll do my reasonable best to buy anyone a drink there who mentions they got to this point in this post.  

As always, if you have any questions, queries or comments, let me know below (or @daz_wallace in Mac Admins Slack) and I’ll try to respond to and delve into as many as I can.

* – Ignoring the new Rapid Security Response!

Mac Admin

Changes to Docker Desktop for Mac

Today I discovered a “fun” change to Docker Desktop on macOS that affects the deployment method I’ve used in the past. The result was a minor update (from 4.14.1 to 4.15.0) that broke the installation of the application. In this post, I’ll take a quick look at how I’ve deployed the software in the past, what changed, and possible ways to work around it.

What was…

Prior to the release of Docker Desktop v2.15.0, I’d used a variation of the common postinstall scripts in order to perform a number of ‘first launch’ steps required by Docker Desktop. Examples of the typical Docker Desktop post installation scripts can be found below:

These postinstall scripts removed the need for Docker Desktop to request local administration rights, by generally performing the below steps:

  1. Symlink any command line tools from within the Docker Desktop application, to /usr/local/bin/
  2. Install a helper tool (com.docker.vmnetd) – partly for opening up system level ports without needing local administrative rights
  3. Install and launch a LaunchDaemon to trigger the helper tool
  4. Often configure a hosts file entry for “kubernetes.docker.internal” pointing to “”
  5. Trigger a silent launch-and-quit of Docker Desktop with admin rights to carry out other miscellaneous tasks that would require them. This is delivered with the command /Applications/ --unattended --install-privileged-components
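As an illustration of step 1 above, the symlinking can be done with a small helper like this; the directory arguments are placeholders rather than Docker’s exact bundle layout:

```shell
#!/bin/sh
# Sketch of step 1: symlink bundled command line tools into a bin directory.
# link_tools SRC_DIR DEST_DIR - both paths here are illustrative.
link_tools() {
    src="$1"
    dest="$2"
    mkdir -p "${dest}"
    for tool in "${src}"/*; do
        [ -e "${tool}" ] || continue
        ln -sf "${tool}" "${dest}/$(basename "${tool}")"
    done
}

# e.g. link_tools "<path to app bundle CLI tools>" "/usr/local/bin"
```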

What happened…

Overnight last night (well, overnight Thursday for those in the UK), the developers at Docker released version 4.15.0 of Docker Desktop. After packaging up and pushing the new package out to test devices, an interesting issue was spotted.

Devices that tried to run the update would get stuck on ‘running postinstall scripts’ from the installer package. If the user tried to open Docker Desktop during this period, the application would either open and not respond to any input, or refuse to open at all.

I grabbed a test device and re-ran the installation package manually, watching the install log (/var/log/install.log) as it ran through, looking for clues. As any good IT Admin should, I had the postinstall script echo out as it hit each set of tasks, and this helped me pinpoint where things were getting stuck. The installer was deploying its payload fine (the Docker application was updated on disk) and managed to get partway through the postinstall script before stalling. The last log output was the script starting the /Applications/ --unattended --install-privileged-components command, but it appeared to go no further.

I killed the Installer process, and manually ran this command in Terminal using sudo from the local administrator account. I was greeted by the following output:

Briefly, and annoyingly too quick to catch a screenshot of, the GUI showed a pop-up message warning that running Docker Desktop as root is not safe and has been stopped (paraphrasing from memory). Shortly after disappearing, another larger error window appeared and disappeared – again too quick to grab a screenshot of or to get any helpful information.

After this point, I launched the Docker Desktop application anyway, and was shown a newer, slightly different admin prompt than I’d seen in the past:

All of this is no good for people deploying Docker Desktop to users without local admin rights, and so felt like a regression in features and functionality – especially not great for a product that now requires payment for larger companies.

Time to dig into the documentation…

What changed?

After some digging, I came across this section of the documentation:

Versions prior to Docker Desktop 4.15, require root access to be granted on the first run. The first time that Docker Desktop is launched the user receives an admin prompt to grant permissions for a privileged helper service com.docker.vmnetd to be installed. For subsequent runs, no root privileges are required. This approach allowed, following the principle of least privilege, root access to be used only for the operations for which it is absolutely necessary, while still being able to use Docker Desktop as an unprivileged user. All privileged operations are run using the privileged helper process com.docker.vmnetd.

For security reasons, starting with the version 4.15, Docker Desktop for Mac does not require the user to run a permanent privileged process. Whenever elevated privileges are needed for a configuration, Docker Desktop prompts the user with information on the task it needs to perform. Most configurations are applied once, subsequent runs do not prompt for privileged access anymore. The only time Docker Desktop may start the privileged process is for binding privileged ports that are not allowed by default on the host OS.

So this does at least confirm an intended change from the Docker developers, one that, to my non-developer mind, makes sense. Annoyingly though, this sort of setup won’t work for organisations whose users are not local administrators.

The Docker Desktop documentation has a fairly good section on the expected permissions required to use the tool (link), so I thought ‘what better place to start my digging’. And what I found turned out to be pretty helpful:

In version 4.11 and later of Docker Desktop for Mac, privileged configurations are applied during the installation with the --user flag on the install command. In this case, the user is not prompted to grant root privileges on the first run of Docker Desktop. Specifically, the --user flag:

  • Uninstalls the previous com.docker.vmnetd if present
  • Sets up symlinks for the user
  • Ensures that localhost is resolved to

This approach has the limitation that Docker Desktop can only be run by one user account per machine, namely the one specified in the --user flag.

This looked promising, so on my test Mac I ran the following commands as the local admin account:

currentUser="$(/usr/sbin/scutil <<< "show State:/Users/ConsoleUser" | /usr/bin/awk '/Name :/ && ! /loginwindow/ { print $3 }')"
sudo "/Applications/" --user "${currentUser}"
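If you fold those two lines into a postinstall script, it may also be worth guarding against nobody being logged in when the installer runs. This is a hypothetical hardening of my own, not something Docker’s documentation calls for:

```shell
#!/bin/sh
# Extract the console user from scutil-style output, ignoring loginwindow.
parse_console_user() {
    /usr/bin/awk '/Name :/ && ! /loginwindow/ { print $3 }'
}

# On macOS, the real call would be something like:
#   currentUser="$(/usr/sbin/scutil <<< "show State:/Users/ConsoleUser" | parse_console_user)"
#   [ -n "${currentUser}" ] || exit 0   # nobody logged in; skip the --user setup
```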

With some basic testing, this looked to have done the trick: symlinks were configured, and a new LaunchDaemon was created. Most importantly, Docker Desktop 4.15.0 no longer prompted for admin rights on launch in my basic tests!

A possible fix?

The last thing I tried was to edit the postinstall script my AutoPkg recipes create, removing the --install-privileged-components line and swapping in the two lines mentioned above.

After running the recipes and installing the outputted packages on a few test devices, this looks to have done the trick.

Now those of you who actually read the snippets I quoted from the Docker Documentation will notice a possible small issue:

This approach has the limitation that Docker Desktop can only be run by one user account per machine, namely the one specified in the --user flag.

For my needs, this isn’t an issue. However if you utilise Docker Desktop on multi-user devices, I’m afraid you may need to get more creative. The command looks to do a number of things, including the creation of a symlink from /var/run/docker.sock to /Users/<user>/.docker/run/docker.sock as well as a Launch Daemon to do the same thing, with the user home path explicitly specified.
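For illustration, the symlink part of that work looks roughly like the below. The target path follows the description above; the function (and its optional second argument, added so it can be exercised outside /var/run) is my sketch, not Docker’s implementation:

```shell
#!/bin/sh
# Sketch: point a docker.sock link at the per-user socket described above.
# The second argument exists only for illustration/testing; the real link
# lives at /var/run/docker.sock.
link_docker_sock() {
    userHome="$1"
    linkPath="${2:-/var/run/docker.sock}"
    ln -sf "${userHome}/.docker/run/docker.sock" "${linkPath}"
}
```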

I did try manually creating both the symlink and the LaunchDaemon, even (as far as I know) replicating the outputted items exactly, but Docker Desktop still asked for admin rights on launch – for some reason it didn’t like my work.

Perhaps you could have some sort of root-level process running that watches for a user login and re-runs the ./install --user command, but it’s not something I’ve tested, nor have a need to.


In this post, I have taken some time to collect my thoughts and review my testing of a change with how Docker Desktop’s first-run elements works on macOS. I’m hoping to get back to blogging a bit more regularly now I’m settled in my new role, but it’ll all depend on motivation and time!

A special note to say that I will be speaking at 2023’s MacADUK conference, being held in May in the lovely Brighton, UK. No I’m afraid I won’t be resuming my Adobe sessions but something a bit different. Grab your tickets here! I’ll do my reasonable best to buy anyone a drink there who mentions they got to this point in this post. 😉

As always, if you have any questions, queries or comments, let me know below (or @daz_wallace in Mac Admins Slack) and I’ll try to respond to and delve into as many as I can.

Mac Admin

Recommended workflow to deploy Adobe software with Jamf Pro

Hi, it’s been a little while hasn’t it? I’ve had this post on my list for some time now, but never seemed to get round to it due to a combination of work life, personal life and (to be honest!) motivation.

Over the years there have been a number of questions around how to deploy Adobe software using Jamf Pro, and the issues arising from the process. This post covers the method I have used for the last few years to maximise success and minimise problems. It’s by no means the only method, but comes from what I’ve learnt from my own and others’ testing.

I have split this post into the following sections:

  1. Obtaining your Adobe Packages
  2. Uploading to Jamf
  3. Deployment
  4. Appendix 1: Only packaging one Adobe title per installation package
  5. Appendix 2: Issues with Jamf Pro zipping Adobe packages prior to upload
  6. Appendix 3: Moving downloaded Adobe packages prior to upload
  7. Appendix 4: Troubleshooting Adobe package deployment
  8. Outro

Obtaining your Adobe Packages

The first step is to generate and download your Adobe packages. Adobe has this detailed in their own documentation (here) and I have covered this in detail in various talks over the years. As a result, I won’t go into great detail, but will go through the high-level process:

  1. Log into the Adobe Admin Console with an Adobe account that has the ability to create Adobe Application packages
  2. Navigate to the Packaging section, and create a new package
  3. Configure the required options you need for your Adobe package
  4. Select only one single Adobe title per package (more on this in Appendix 1, below)
  5. Once built, download a copy of the matching disk image from the console.

As mentioned in a post of mine (waaay back in 2019), this disk image no longer contains the installer package, but rather an Adobe GUI application called Adobe Package Downloader. More details on this can be found in the Adobe documentation here.

  1. Launch the Adobe Package Downloader application, and follow the onscreen instructions to download your zipped Adobe application installer/uninstaller
  2. Once the download is completed, quit Adobe Package Downloader and find the downloaded zip file (this will be in your ~/Downloads folder by default). Unzip the file to show the directory using the built-in macOS Archive Utility (there are reports of issues with using third party tools such as Keka, The Unarchiver etc).
  3. The extracted folder will contain two ‘installer’ packages:
    • [Package name]_Install.pkg
      • This is your installer package for the title you have built
    • [Package name]_Uninstall.pkg
      • This is the matching uninstaller package. Most people disregard this.

Uploading to Jamf

We now have our downloaded Adobe application installer package ready to go. The next step is to get this uploaded to Jamf.

If you are using on-premise distribution points you can use the Jamf Admin application to add these to your Distribution Point/s. If using a Jamf Cloud Distribution Point, you can use either Jamf Admin, or the Jamf Pro web interface (documentation link).

Please Note: The packages produced by Adobe are ‘bundle’-style Apple installer packages, instead of the more current ‘flat’-style. This means they are actually a directory of files, instead of a single file. As a result they can’t easily be deployed through Jamf Cloud Distribution Points or HTTPS-based on-premise distribution points.

Both Jamf Admin and the web interface should automatically attempt to zip up the package during the upload process to get around this limitation. Once successfully uploaded, they’ll also show in the Jamf tools and policy screens as zip files. Don’t worry, treat them the same as any other package in Jamf Pro and the system should deploy and unzip them before installation time.

If you’re seeing issues uploading your Adobe package to Jamf Pro, see appendix 2 below for a suggestion.

Also, if you are moving your installation packages between devices before uploading them to Jamf, see appendix 3 below for things to be aware of.


Deployment

And now for the easiest bit: deployment.

For fresh installations of each application, I’d suggest using a standard policy, as you would with any other non-Mac App Store software title. For titles that will be automatically installed rather than on-demand via Self Service, I’d suggest using smart group logic to have Jamf automatically retry the installation on an error, as these packages can sometimes fail to install due to (at least in part) the complexity of the Adobe packages.

For updates/upgrades, you’ll need to try to ensure the user doesn’t have the relevant Adobe title running during the installation, to minimise the risk of corruption. This can be achieved using a number of different techniques, including:

  • Installing using the Logout Jamf policy trigger
  • Utilising Jamf’s Patch Management solution (documentation link).
  • Utilising other solutions that can check if the application is running, such as Munki or dataJAR’s Auto-Update solution (based on Munki)

Appendix 1: Only packaging one Adobe title per installation package

I would strongly recommend that you only package a single core Adobe application per installation package.

For example, if you need to deploy Photoshop, InDesign, and Illustrator I would not recommend building a single package with all three installed. Instead, create three packages: one for Photoshop, one for InDesign, and one for Illustrator.

Why? Well there are three reasons:

1. Less impact of failures

If you deploy an Adobe package with multiple application installations within it, and the package fails on any of those installs, it will remove any and all software it installed (or tried to) as part of its failed-install clean-up.

For argument’s sake, let’s say each application install has a 1% failure rate, but each time it fails it removes even the successfully installed elements, as above. If you build an Adobe “mega”-package with 15 Adobe titles present, the chance that a failed install will occur isn’t 1%; it’s 1% “rolled” 15 times, so roughly 14%.

With this example, you’d have a ~14% chance that a failure will occur in one of the package installs resulting in no Adobe software being deployed.
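That roughly-14% figure can be sanity-checked with a quick one-liner, assuming each install fails independently at a 1% rate:

```shell
# Chance of at least one failure across 15 installs at 99% success each
awk 'BEGIN { printf "%.0f%%\n", (1 - 0.99 ^ 15) * 100 }'
# prints 14%
```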

If you instead split this into 15 single-title packages, the odds of at least one failure are the same; however, only the titles whose installs failed would be missing!

This would also allow you the flexibility to build out smart group logic to have these packages attempt a reinstall if needed, resolving an issue if the failures were due to a random event, rather than a permanent failure.

2. Better ability to swap out-dated installers

Let’s say Adobe release a new update to Photoshop that includes a high risk security update you need to roll out. With individual packages, you can generate a smaller single-title replacement package, swap this into the relevant policies and push out the patch.

With the Adobe “mega”-package, you’d need to regenerate, re-download and re-upload the full, larger package, then deploy this larger package out to devices. Even if you decided to generate a single Photoshop-only package to get around this, you’d still need to rebuild your mega-package too to avoid pushing out a vulnerable version when using the mega-package.

3. More flexibility, with less duplication

With these individual packages, you gain a huge amount of flexibility, with the added benefit of less duplication in your Adobe packages.

You can supply each title you wish in Self Service to allow users to install only the items they need instead of the full suite (or sub-sections). In situations where you need to automatically deploy parts of the suite you can mix and match different installers for different areas (common for lab setups), reusing the same packages across the board – treating your Adobe deployments like a buffet instead of a one size fits all.

Appendix 2: Issues with Jamf Pro zipping Adobe packages prior to upload

As mentioned above, Jamf Pro (and some other management platforms) require the Adobe installers to be wrapped in another flat-file format for downloading and deployment. Jamf Pro should do this automatically but I have seen and heard of issues with this, especially with larger installers such as the “mega”-package of multiple Adobe titles.

One method to resolve this is to pre-zip the package in Finder, before uploading to Jamf.

Simply right-/control-click on the [Package name]_Install.pkg installer package, and click “Compress [filename]”. Once complete, upload the zip file into Jamf as you would have done with the unzipped installer package.

Appendix 3: Moving downloaded Adobe packages prior to upload

Some organisations will download the Adobe installers on one device, then move them to a second device for processing and/or uploading to Jamf. This can be common when another device may have a faster network connection for the download, but does not have access to Jamf (or some other management platform).

Adobe admins often use a number of different methods to transport these installers between devices, including:

  • AirDrop
  • OneDrive
  • DropBox
  • Google Drive
  • A file share
  • A USB storage device

Some of the methods listed above can add a ‘quarantine attribute’ to the installer or zip files during this move, resulting in an installer that may open and upload fine, but errors out during installation.

If you use the methods above (particularly, but not limited to, AirDrop) and are seeing unexpected installation failures with certain packages, be sure to clear the quarantine extended attribute from the installer and/or zip file prior to uploading to Jamf. This can be done with the below command:

sudo xattr -r -d /path/to/installer.pkg

(Credit: Rich Trouton’s amazing blog)

Appendix 4: Troubleshooting Adobe package deployment

To help with troubleshooting Adobe installation issues, Adobe provide a tool called the Log Collector Tool. This can be found, along with some documentation, on Adobe’s site here.

The tool will collect all the various Adobe installation-related logs, and turn them into a compressed .zxp file on your desktop. You can rename the extension on this output to .zip, decompress the contents and go through this to see if you can identify the issue.


Outro

It’s been almost a year since my last post, so I felt overdue to get something written down and shared! I hope this is found to be helpful to some.

As always, if you have any questions, queries or comments, let me know below (or @daz_wallace in Mac Admins Slack) and I’ll try to respond to and delve into as many as I can.

Cooking, Personal

Daz’s Christmas Ham

For the last few years, I’ve been lucky enough to have my family request I cook a Christmas Ham (or three). It’s something I actually enjoy, and I’m not all that bad at it. So every year, I spend a few hours cooking up the ham, as well as spamming my social media pages, work Slack channels and the #London MacAdmins’ Slack channel with pictures as I go.

The recipe I use is a mix-and-match of a few recipes I’ve found on the Internet as well as some tweaks I’ve made over the last few years. It’s by no means perfect, but I like it…and the family haven’t complained to my face about it 😉

The recipe has lived on a random sheet of A4 paper in a pad somewhere for the last few years, until this year I finally decided to digitise it! Partly in case I lose the paper, and partly because, as each year has gone on, I keep spilling more things on it.

Daz’s Christmas Ham


  • 2-2.5KG Unsmoked boneless Gammon joint
  • 2 Carrots
  • 2 Celery Sticks
  • 1 Onion
  • 2 Cinnamon Sticks
  • 2 Bay Leaves
  • 2-3 Litres of Coke (Not sugar free)
  • 150ml Maple Syrup
  • Red Wine Vinegar
  • Wholegrain mustard
  • Whole Cloves
  • Whole Black Peppercorns


1) Cut the 2 Carrots and 2 Celery sticks into roughly 1-inch thick slices then peel and quarter an Onion. Take 2 Cinnamon Sticks, 1/2 (half) a tablespoon of whole black peppercorns, 2 bay leaves and a pinch or so of whole Cloves. Add all to a cooking pot.

2) Remove the packaging from the Unsmoked Gammon. If you have butchers twine (Thanks for the suggestion James H!) remove all the packaging including the wrap, then tie together. If you don’t have the twine and the ham is wrapped in a plastic sleeve, leave this on. Add to the pot.

3) Add in 2-3 litres of Coke (not sugar free) to the pot, enough to cover the ham. You may need to top up a little with boiling water depending on how big the pot is.

Ham and bits on the hob

4) Bring the pot to the boil with the lid on, then reduce to a simmer. This might take 20-30 minutes depending on the amount of liquid. It will make your kitchen smell rather Christmassy.

5) Simmer for 2.5 hours, checking periodically to make sure the pot isn’t spilling over or running low on liquid. If needed, top up the liquid with more boiling water to keep the ham covered as best as possible. The ham will swell a little as well as possibly float, which can make this a little interesting!

6) Once the 2.5 hours are up, drain the pot and set your oven to 170°C. This will let the oven heat up whilst we prep the ham.

7) Remove any wrappings and cut the skin off the ham, leaving behind an even layer of fat. Score the fat in a diagonal fashion to create a cross-hatch pattern. Push a whole clove into the fat at each intersection of the scoring.

8) Mix the maple syrup, 2 tablespoons of wholegrain mustard, and 2 tablespoons of red wine vinegar in a jug to make the glaze mixture.

9) Move the ham to a roasting pan and pour over roughly half of the glaze mixture.

Ham, scored and poured

10) Roast the ham in the oven for 15 minutes

11) Pour over the rest of the glaze mixture and put back in the oven for 15 minutes

12) Baste with any of the glaze that ran off and put back in the oven for a last 15 minutes

13) Remove from the oven and leave to rest for at least 30 minutes.

14) Have fun cleaning up all the now-sticky kitchen bits you’ve used 🙂

Have a great Christmas and a brilliant New Year!

Mac Admin

Jamf Pro, Intune* and the Jamf Cloud Connector

For this post, I thought I’d share a mixture of things in and around Jamf Pro, the Intune* integration, and the new Jamf Cloud Connector. These are a few smaller things I’ve seen in testing that I’d not seen written up anywhere, so I thought they’d be worth sharing.

First up, What is the Jamf Cloud Connector?

The Jamf Cloud Connector is a new feature added to Jamf Pro a few months back. This is limited to Jamf Cloud customers only (no on-premise or 3rd party hosted options at this time I’m afraid) and, amongst other things, allows you to connect multiple Jamf Pro instances to a single Azure AD tenant for the Jamf Pro / Intune* device compliance solution. This is the only method to link multiple Jamf Pro instances to a single Azure AD tenant!

Jamf documentation on this can be found here. The initial setup process (the Jamf Pro to Intune* connection) is also a little easier than the previous manual method. It’s also worth noting at this point that the previous manual method is still an option (and the only option if you are not hosted with Jamf Cloud).

With the background out the way, I’ll share some of the odd bits and pieces I’ve seen in testing so far.

Passing over the Consent URL

The first thing I noticed with the Jamf Cloud Connector method is that it’s no longer possible to ‘just’ pass over the Consent URL to your Global Administrator (or equivalent) as before.

What do I mean by that? Well, with the previous manual setup, you could configure the Jamf Pro side, then grab the Consent URL for your Azure Global Administrator (GA) to approve. They could approve this, complete the setup their side and all worked well. This method was handy if you were in a situation where your Jamf administrator wasn’t an Azure GA, and your Azure GA wasn’t a Jamf administrator.

With this new Jamf Cloud Connector method, there are fewer manual tasks; however, there are a number of browser redirects back and forth, meaning you can’t just send the Consent URL over to be approved (or at least I haven’t found a simple way). As a result, you’d need to either:

  1. Give your Azure GA admin access to the Jamf instance/s to set everything up
  2. Apply to get (temporary) Azure GA rights for your Jamf administrator, and set everything up
  3. Work side by side with your Azure GA to enter their credentials at the Azure screen/s as required (not always ideal in a COVID world!)

Testing between two Jamf Pro Instances

The next thing I came across was in testing enrolments between multiple instances. I was using the same physical device when testing the Intune* connection on both Jamf instances. I enrolled and deployed the test device on Jamf Pro instance “A”, and registered this with Intune* – no problems.

I then wiped the test device, and enrolled and deployed it on Jamf Pro instance “B”. All worked fine until it came to the Intune* registration. The Company Portal launched as normal, but showed the “This device is already registered” message. I selected the “done” button and Company Portal closed. Shortly afterwards, Self Service / Jamf AAD popped up a message saying that device registration had failed because the user closed Company Portal too soon.

This same behaviour was repeated when I re-ran registration, wiped and re-deployed the device, and also after removing the device from Azure AD (and waiting the 30-60 minutes for things to settle down).

In the end, the resolution was to remove the old device record from the first Jamf Pro server (Jamf Pro instance “A” above). After a few minutes, the registration worked fine.

I can only imagine that the first Jamf Pro server (instance “A”) was still sending data into Intune* and this was holding some sort of partial record in Azure I couldn’t get access to.

I tweeted about this shortly after seeing it, but realised how difficult it would be to explain in a Tweet!

Migrating from the Manual Intune Connector to the Jamf Cloud Connector

The last little snippet to share is about migrating from the previous manual Intune* connection to the new Cloud Connector. This is something you’d need to do if you had a requirement to link two or more Jamf Cloud instances to the same Azure AD tenant.

After some calls with Jamf Support, it’s as simple as hitting the edit option under “Settings” > “Global Management” > “Conditional Access”, selecting the “Cloud Connector” option, and following the rest of the steps to set this up (don’t forget the documentation here). Once the setting is saved and the connection confirmed (this should happen within 5 minutes), devices should be fine and stay compliant.

All of the above went off without a hitch; however, one thing we did see that wasn’t mentioned is that users will see a popup from JamfAAD (the solution that handles the local aspects of the Intune* registration and data submission). Each user with a device already registered with Intune* will see a popup along the lines of “” wants to use “” to sign in, with the options to Continue or Cancel. Users should click “Continue”, the message will disappear, and all will be happy!

Something to be aware of and perhaps pre-warn your users about if you’ll be looking to migrate your setup.


Something a little different from the last few posts I’ve done, but I hope there’s enough helpful information there to help someone out, and save you some faff.

As always, if you have any questions, queries or comments, let me know below (or @daz_wallace in Mac Admins Slack) and I’ll try to respond to and delve into as many as I can.

* I’ve used the term Intune here as it’s familiar to most people who’d be working through the above. Be aware that Microsoft has rebranded this recently to “Microsoft Endpoint Manager” with a new admin URL. Consult your Microsoft documentation for more details.

Mac Admin

What’s new with Adobe 2021 in Education – Content

Hi all,

If I time this right, this post should be going live about halfway through my talk at the PSU Campfire Sessions on Adobe 2021 and Shared Device Licenses in Education. 

This talk is a continued update and expansion of my previous talks, Adobe CC2019 in Education – PSU 2019 and What’s new with Adobe 2020 in Education.

This post contains links to the URLs mentioned, as well as (eventually) copies of the video and slides.



Link Dump

As promised, here’s a list of URLs from the presentation, as well as some further reading suggestions:

MacAdmins Conference

Thanks again to everyone who could attend!

Mac Admin

MacAdmins at PSU 2021: Campfire Sessions – What’s new with Adobe 2021 in Education

For the last 2 years I have had the privilege of speaking at MacAdmins at PSU about Adobe and its use in education. You can find more on that here and here.

Once again, I’m very lucky to have another slot to talk about, you guessed it, Adobe in Education!

MacAdmins Campfire Sessions?

As we’re still not fully out of the woods yet with the current worldwide pandemic (boo!), the folks at PSU have elected to repeat the success of their Campfire Sessions from last year. Instead of an in-person conference delivered over a few days, they have switched to an online conference with two sessions per week, delivered remotely over 7 weeks.

These are completely free and run 1PM to 3PM ET (6PM to 8PM BST – local time for me – ideal if you’re stuck at home after work)! Once you’re signed up and registered for each session, you’ll be emailed joining links the day before each session. You can also join everyone on Slack in the #psumac channel.

As mentioned, sign up is free and can be done here.

What’s new with Adobe 2021 in Education

As I continue my relentless hammering of the Adobe drum with blog posts and smaller content, I’ll be delivering a slightly longer and expanded version of the talk from last year. Don’t worry, there’s plenty of updates and changes to cover even from last year!

We’re now almost into the 4th year of Adobe’s new Shared Device Licensing, and to top it off, we’ve had a whole new architecture to deal with. What’s changed? What hasn’t? Can you still deploy Adobe Fireworks on Big Sur? Can you _really_ manage the Desktop App? How many bottles of [beverage] will you need to get you through it? All this and (maybe) more!

My talk will be on Thursday 10th June at 1:30PM ET (US) / 6:30PM BST (UK) time, and you can register for it here.

Resources and Recording

The session is being recorded and I’ll add links to that and resources as soon as they’re available.

One more thing…

I’m very happy to announce that we also have two of my friends and colleagues from dataJAR also presenting this year:

I hope to see you there! 😊

Mac Admin

Adobe Remote Update Manager

So I started the testing for this post a few years ago, and after a recently written post about patching Adobe titles for dataJAR I thought it’d be a good chance to revisit and actually get something out! Today’s post is about the Adobe Remote Update Manager, another command line tool that can check for and install Adobe updates. Read on to learn more!


Full Disclosure: I’ve played around a fair bit with the Remote Update Manager tool, but I don’t have a need or the opportunity to use it in anger or full-on in the real world. Your mileage may vary and please test before rolling anything into your own environment.

The Adobe Remote Update Manager (or RUM) has been around for a good few years now, and offers a command line tool to run Adobe title updates. It runs on both macOS and Windows, and can work with a local Adobe Auto Update server.

There are some limitations in what it can do:

  • RUM will only work with updates to installed Adobe Apps. It can’t install upgrades (e.g. Photoshop 2020 to Photoshop 2021) and it cannot install applications that are not present already on the system
  • It can’t patch some native Adobe items such as Adobe Flash (now end of life), Gaming SDK and Adobe Air application updates
  • It can’t patch Adobe Acrobat or Reader unless their own updaters are version 1.0.14 or newer (see “Applying updates for Acrobat and Reader” here)
  • It can’t patch the Creative Cloud Desktop App (CCDA) or RUM itself
  • You can’t use the download action with the binary for Adobe Acrobat or Reader (see more below)
  • Unlike the ALD, the RUM command needs to be run with local admin privileges

How do I get RUM?

So as this is a binary, you’ll need to make sure it’s installed on all the devices you wish to use it with. There are three ways you can get RUM onto your Macs:

Option 1: Direct download

You can download the binary from the Adobe Admin console under “Packages” -> “Tools” -> “Remote Update Manager”. Once downloaded, this should be installed into /usr/local/bin/RemoteUpdateManager

Option 2: Include in your Application Packages

The second option is to add RUM to your Application and / or licensing packages. This is probably the easiest method and will supply an Adobe-made (and therefore Adobe supported!) RUM install using a macOS installer package. This option is found under the packaging options when building the package

Option 3: Use an AutoPKG recipe

The folks over at Moof IT have an AutoPKG recipe in their repo here, that can download and package RUM into an installer package.
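Whichever route you choose, it may be worth having your deployment scripts check the binary is actually in place before calling it. A minimal sketch, using the install path documented above:

```shell
#!/bin/sh
# Check RUM is installed where the Adobe Admin Console download places it.
RUM="/usr/local/bin/RemoteUpdateManager"

if [ -x "$RUM" ]; then
    echo "RUM found at $RUM"
else
    echo "RUM not installed - deploy it before running update policies" >&2
fi
```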

How do I use RUM?

So a typical RUM command would look something like this:

/usr/local/bin/RemoteUpdateManager --proxyUserName=XXX --proxyPassword=XXX --channelIds=XXX --productVersions=XXX --action=XXX

This can be broken down as follows:

  • --proxyUserName=XXX --proxyPassword=XXX
    • Optional. You can pre-supply proxy details for the updates, but this will require service account details and isn’t a great idea. I’d really suggest consulting Adobe’s networking requirements KB and opening the required connections
  • --channelIds=XXX
    • Optional. Replace XXX with the channel ID/s of the specific products you wish to update. You can specify multiple channel IDs with a comma (but no space!) between the values. If you do not specify any channel IDs, all updates are considered.
    • You can find a list of channel IDs in Adobe’s Admin Guide here. These will only go up to Adobe CC 2015. If you have titles newer than this, use the --productVersions option instead.
    • If any specified updates have linked recommended updates (such as Camera RAW), these will automatically be considered and run too.
  • --productVersions=XXX
    • Optional. Replace XXX with the SAP code and version of the specific product/s you wish to update. You can specify multiple product versions with a comma (but no space!) between the values. If you do not specify any product versions, all updates are considered.
    • You can find a list of SAP codes and versions at the link here. Note that the current major versions of titles are at the top, with the previous versions under the “Previous Versions” disclosure triangle
    • Again, if any specified updates have linked recommended updates (such as Camera RAW), these will automatically be considered and run too.
  • --action=XXX
    • Optional. Replace XXX with the action you wish to perform. If no action is provided, the tool will automatically try and check, download and install any updates that should be considered.
    • Options are:
      • --action=list – This will check and display a list of updates that are available. The output is a little tricky, but there might be room to turn that into a Jamf Pro Extension Attribute if you’re feeling bold.
      • --action=download – This will check and download the specified updates that are available. It will not carry out any installations
      • --action=install – This will check, download and install the specified updates that are available. If an update has already been downloaded, RUM will use the local copy.
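To tie these options together, here’s a small sketch that builds the comma-separated (no spaces!) --productVersions value from a list of SAP code / version pairs. The PHSP / PPRO pairs shown are illustrative only — check Adobe’s SAP code list for the values that apply to your titles:

```shell
#!/bin/sh
# Build a --productVersions argument from SAP code#version pairs.
# RUM expects the values comma-separated with no spaces between them.
build_product_args() {
    old_IFS="$IFS"
    IFS=','
    # "$*" joins all arguments using the first character of IFS (a comma)
    printf -- '--productVersions=%s' "$*"
    IFS="$old_IFS"
}

# Example (hypothetical pairs):
# build_product_args "PHSP#22.0" "PPRO#15.0"
# sudo /usr/local/bin/RemoteUpdateManager "$(build_product_args PHSP#22.0)" --action=install
```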

Usage Examples

For these examples, I’ve used a macOS 10.15.7 VM, with Adobe Photoshop 2021 (v22.2.0 – latest at the time of writing), Photoshop 2020 (v21.2.0 – not current), and Premiere Pro 2020 (v14.3 – not current).

Running RUM without any options and with all Apps closed

Command: sudo /usr/local/bin/RemoteUpdateManager


RemoteUpdateManager version is :
Starting the RemoteUpdateManager...
No new updates are available for Acrobat/Reader
Following Updates are applicable on the system :
Following Updates are to be downloaded :
*** Downloading (PPRO/ ...
*** Successfully downloaded (PPRO/ ...
*** Downloading (PHSP/ ...
*** Successfully downloaded (PHSP/ ...
All Updates downloaded successfully ...
*** Installing (PPRO/ ...
*** Successfully installed (PPRO/ ...
*** Installing (PHSP/ ...
*** Successfully installed (PHSP/ ...
All Updates installed successfully ...
Following Updates were successfully installed :
RemoteUpdateManager exiting with Return Code (0)

Notes: This command was run without any of the Adobe titles open or running (besides CCDA).

Running RUM without any options and with Photoshop 2020 open

Command: sudo /usr/local/bin/RemoteUpdateManager


RemoteUpdateManager version is :
Starting the RemoteUpdateManager...

No new updates are available for Acrobat/Reader
Following Updates are applicable on the system :
Following Updates are to be downloaded :
*** Downloading (PPRO/ ...
*** Successfully downloaded (PPRO/ ...
*** Downloading (PHSP/ ...
*** Successfully downloaded (PHSP/ ...
All Updates downloaded successfully ...
*** Installing (PPRO/ ...
*** Successfully installed (PPRO/ ...
*** Installing (PHSP/ ...
*** Failed to install (PHSP/ ...
Some Updates failed to install ...
Following Updates failed to Install :
Following Updates were successfully installed :
RemoteUpdateManager exiting with Return Code (2)

Notes: This command was run with Adobe Photoshop 2020 open and running. You can see the update for Premiere Pro went through whilst the Photoshop update failed. You’ll also note the different exit / return code. More on this below.

Running RUM with the list action and with Photoshop 2020 open

Command: sudo /usr/local/bin/RemoteUpdateManager --action=list


RemoteUpdateManager version is :
Starting the RemoteUpdateManager...

No new updates are available for Acrobat/Reader
Following Updates are applicable on the system :
RemoteUpdateManager exiting with Return Code (0)

Notes: This command will just list the updates available. As mentioned above, without any arguments, it’ll show all available updates for all locally installed titles.

Jamf Pro Policy?

How does the output look when run through a Jamf Policy? Turns out, pretty nice!

Command: /usr/local/bin/RemoteUpdateManager --action=list (via a Jamf Pro Policy ‘Files and Processes’ task)



As with all things, you might hit some issues and not be sure where to look. I’ve chucked a few suggestions below to help you out with troubleshooting.

Running updates with Adobe Apps open

Strangely, Adobe call this out explicitly in their KB article (here) without much explanation:

Adobe Applications for which updates are to be installed should not be running when Remote Update Manager is invoked.

I read this as “here be dragons, we’re just not sure what kind” and advise that you do what you can to make sure you’re running RUM with the Adobe apps not running on your users’ devices.

In my testing, this seems to fail to update any running Adobe titles, but I haven’t seen it cause major issues. Maybe I was lucky? Maybe I didn’t test all areas that doing this may have broken?

Exit / Return codes

When running RUM, it’ll always return an exit code within its output text. This will be one of three possible options, which I’ve detailed below:

RemoteUpdateManager exiting with Return Code (0)

Everything worked fine: either all updates / actions completed successfully, or no updates were found

RemoteUpdateManager exiting with Return Code (1)

Something didn’t work. It could be that the application was open, the network wasn’t available, or an installer error occurred

RemoteUpdateManager exiting with Return Code (2)

Some tasks worked, but some didn’t. It could be that there were two updates to apply and one failed.
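If you’re wrapping RUM in a script (for a Jamf Pro policy, say), you can branch on these codes. A minimal sketch — the wrapper function and its messages are my own, not part of RUM:

```shell
#!/bin/sh
# Map RUM's exit codes (as described above) to human-readable results.
interpret_rum_exit() {
    case "$1" in
        0) echo "Success: all updates applied, or none available" ;;
        1) echo "Failure: app open, network issue, or installer error" ;;
        2) echo "Partial: some updates installed, some failed" ;;
        *) echo "Unexpected exit code: $1" ;;
    esac
}

# Usage:
# sudo /usr/local/bin/RemoteUpdateManager
# interpret_rum_exit "$?"
```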

Log Files!

Adobe seem to have very strange rules on where logs are stored, and RUM is no exception.

If a user is logged in, the logs will be stored at /Users/[logged in user]/Library/Logs/RemoteUpdateManager.log

This is the same whether the user is an admin or not, whether they’re the user that ran the command, or even if it was pushed out via a Jamf Pro policy.

If no users are logged in, the logs will go to /var/root/Library/Logs/RemoteUpdateManager.log

Again, it doesn’t matter how the command was run, only who is logged in (or not).
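Since the log location depends only on who’s logged in, a script can work out which file to check. A sketch using the usual console-user lookup (the stat flags are macOS-specific; it falls back to root’s log if no one is at the console):

```shell
#!/bin/sh
# Pick the RUM log path based on the current console user (macOS).
console_user="$(stat -f%Su /dev/console 2>/dev/null)"

if [ -n "$console_user" ] && [ "$console_user" != "root" ]; then
    rum_log="/Users/${console_user}/Library/Logs/RemoteUpdateManager.log"
else
    rum_log="/var/root/Library/Logs/RemoteUpdateManager.log"
fi

echo "RUM log should be at: $rum_log"
```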

What about updating RUM?

RUM, being an item of software, will have updates from time to time. As mentioned above, RUM cannot update itself, so I guess that maybe we need some sort of RUM Remote Update Manager (a RUM RUM? A Rum Double?).

Credit to accidentalmacadmin on Slack

Ok, yes this was a silly excuse to get that (I feel funny) meme into this post. On a serious note, this will depend on how you’re deploying RUM initially, with the simplest method being to keep including it in packages you create. Alternatively, you can use the AutoPKG recipe mentioned above, or manually deploy updated versions as needed.


So that was significantly longer than I thought it’d be, which is possibly why I didn’t write things up until now. RUM can be a pretty powerful and handy tool, but it does have limitations and some oddities. If you’re looking for more information on RUM, you can find it here – Adobe | Use Adobe Remote Update Manager. I hope this post has at least given you an idea of what you’re getting into!

As always, if you have any questions, queries or comments, let me know below (or @daz_wallace in #adobe on Mac Admins Slack) and I’ll try to respond to and delve into as many as I can.

Mac Admin

Adobe installers, Munki and Error 82

Over a week ago, reports started coming in that newly created Adobe deployment packages were failing to install via Munki, with the error message of “Adobe Setup error: 82: Unknown error“. After some digging and help in the MacAdmins Slack community we’ve found a possible solution. Read on to find out more.

What happened?

The first inklings of an issue were reported in the #adobe channel in Mac Admins Slack on March 16th. The day before, Adobe had released updates to a number of titles including:

  • Illustrator 2021 (25.2.1)
  • Animate 2021 (21.0.4)
  • Photoshop 2021 (22.3)
  • Photoshop 2020 (21.2.6)
  • Premiere Rush 2021 (1.5.54)

If you tried to deploy these titles using Munki, as well as a selection of other 2021 titles, they would fail the install shortly after starting. You’d then find the following entry in the Munki error.log, install.log and ManagedSoftwareUpdate.log (found at /Library/Managed Installs/Logs/)

ERROR: Adobe Setup error: 82: Unknown error

This was the same on different versions of macOS, and whether installed with a user logged in or at the login window. The exact same packages would install fine when attempted via the GUI or via another deployment tool.

What was the fix?

After some detailed discussions in Slack, it was found that Munki detected these packages as Adobe titles and “installed” them in a different way. This different method was being tripped up by changes Adobe have made to the recent installer packages and so was failing.

The fix was to force Munki to treat these packages as “normal” installer packages (although anyone who’s dug into them knows they’re far from normal)! To implement the fix, any affected installer packages should have their pkginfo modified to either:

  1. Remove the installer_type key (which will be set to “AdobeCCPInstaller“); or
  2. Set the installer_type key to a blank (or rather empty) value

Either of these options will force Munki to treat the package as a normal installer package, and in testing the installs then completed fine.
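If you have a lot of affected pkginfo files, the edit can be scripted. A rough sketch that strips the installer_type key, and the value line that follows it, from a pkginfo’s XML — this assumes the <key> and its <string> value sit on adjacent lines, which is how munki normally writes them, so check a sample file from your own repo first:

```shell
#!/bin/sh
# Remove the installer_type key (and the value line immediately after it)
# from a pkginfo file, writing the result to stdout for review.
strip_installer_type() {
    # N pulls the next line into the pattern space; d deletes both lines
    sed -e '/<key>installer_type<\/key>/{N;d;}' "$1"
}

# Usage: inspect the output before replacing the original file:
# strip_installer_type Photoshop2021-22.3.0.plist > /tmp/fixed.plist
```

On macOS you could do the same in place with /usr/libexec/PlistBuddy -c "Delete :installer_type" path/to/pkginfo — either way, as noted above, test before rolling out.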

How do I implement the change?

Note: This change resolved this issue for us and in testing. Please test in your environment before rolling out.

This will depend greatly on how you specifically work on your Munki repo, but a few suggestions:

I manually edit the text files in my Munki repo

You should find and edit the pkginfo file/s for the affected Adobe packages in your repo to remove the installer_type key as mentioned above

I use Munki Admin to work on my Munki repo

Bring up the details pop-up window on each affected Adobe installer, and on the “Basic Info” tab, clear out / remove the value in the “Installer Type” box. Save and repeat as needed.

I use the dataJAR AutoPKG recipes to import my Adobe application installers

First of all, thanks! We find those things useful and are glad others do too.

Secondly, as per this commit, we’ve now added this key to both our 2020 and 2021 Munki parent recipes. If you pull down the changes (and update your local trust info/s) any new imports should take advantage of this.

I’m afraid that for anything already in your Munki repo, you’ll need to change the value manually.

Testing Notes

I wouldn’t be involved with an Adobe issue without some structured testing!

For this testing, I used a macOS 10.15.7 VM in VMware Fusion Pro. I ran a number of titles through a single test each via Munki. Each title was configured to require a logout, and the VM was restored to a snapshot before any Adobe installation for each test.

Additionally, I tested Photoshop 2021, 2020 and CC2019 installs and uninstalls, both with and without the change above, to briefly cover both the installation of older, unaffected packages and the uninstallation of packages.

Title              | Version   | Task           | Key value         | Result
-------------------|-----------|----------------|-------------------|--------
XD 2020            | 33.1.12.4 | Installation   | AdobeCCPInstaller | Success
XD 2020            | 33.1.12.4 | Installation   | [Blank]           | Success
Media Encoder 2021 | 15.0      | Installation   | AdobeCCPInstaller | Failed
Media Encoder 2021 | 15.0      | Installation   | [Blank]           | Success
Media Encoder 2020 | 14.9      | Installation   | AdobeCCPInstaller | Success
Media Encoder 2020 | 14.9      | Installation   | [Blank]           | Success
Photoshop 2021     | 22.3.0    | Installation   | AdobeCCPInstaller | Failed
Photoshop 2021     | 22.3.0    | Installation   | [Blank]           | Success
Photoshop 2020     | 21.2.6    | Installation   | AdobeCCPInstaller | Failed
Photoshop 2020     | 21.2.6    | Installation   | [Blank]           | Success
Photoshop CC 2019  | 20.0.10   | Installation   | AdobeCCPInstaller | Success
Photoshop CC 2019  | 20.0.10   | Installation   | [Blank]           | Success
Photoshop 2021     | 22.3.0    | Uninstallation | AdobeCCPInstaller | Success
Photoshop 2021     | 22.3.0    | Uninstallation | [Blank]           | Success
Photoshop 2020     | 21.2.6    | Uninstallation | AdobeCCPInstaller | Success
Photoshop 2020     | 21.2.6    | Uninstallation | [Blank]           | Success
Photoshop CC 2019  | 20.0.10   | Uninstallation | AdobeCCPInstaller | Success
Photoshop CC 2019  | 20.0.10   | Uninstallation | [Blank]           | Success

I fully admit this wasn’t an all-encompassing test (unlike the last lot of testing I did), but I feel it covered enough for our needs.

Background information

After some detailed discussions on Slack, I found out some more background about how Munki handles these installs when the installer_type key is set to AdobeCCPInstaller.

This was more of an issue in years past (and is still very much the case on Macs with spinning-rust hard drives): Adobe installers could take a long time to install their payloads. I have regularly seen as long as 30+ minutes per application package! As Adobe packages don’t utilise the expected payload behaviour, and instead use a post-install script to move data around and perform actions, the feedback to both the user and macOS isn’t great. The user will see a message along the lines of “running package scripts” for much of the time the installation is processing. In normal usage, Munki would use this same output to display progress to the end user.

Munki showing a message along the lines of “running package scripts” for ~30 minutes per Adobe package could often lead to users thinking the deployment had become stuck, when the issue is more with how the vendor is using the package. It was discovered that Munki could trigger the same deployment tool inside the Adobe package that the post-install script does, and that this would give much, much better output – and this is what the value of AdobeCCPInstaller for the installer_type key does.

At some point between then and now, this method continued to work, but the output from the tool was reduced, so the method is arguably not very useful anymore. This has now culminated in Adobe making changes to how their deployment tool works that break the deployment method used.

I’ve created a discussion on the Munki-Discuss mailing list to discuss this issue. If you’re seeing the same behaviour, please feel free to contribute here.


This post is out later than I wanted to but real life got in the way! I hope if you’ve encountered this issue, the above information will help you get up and running again.

As always, if you have any questions, queries or comments, let me know below (or @daz_wallace in #adobe on Mac Admins Slack) and I’ll try to respond to and delve into as many as I can.

Mac Admin

Grabbing Application Icons from Adobe installer packages

When adding applications to Jamf’s Self Service or as an optional install for Munki’s Managed Software Centre, it makes sense to use the proper software icons for a nice and complete user experience. Did you know you can easily grab the icons from Adobe Admin Console packages, without having to install the package? Read on to find out more.

Previously, when I’ve had to grab the icons for Adobe Apps, I’ve had to install them before manually copying the icons from the application bundles themselves. This process is time-consuming and fiddly. After checking a few things today, I realised Adobe ship the icons in an easy-to-grab location inside the Admin Console generated packages!

Where can I find the icons?

These icons can be found in the following location:

./[package name]/Build/[package name]_Install.pkg/Contents/Resources/HD/[SAP Code]/

Here you’ll find two icon files, one at a ‘normal’ resolution (appIcon.png) and one at a retina resolution (appIcon2x.png).

For example in my Bridge package, these can be found at the below location:


Tip: As the Adobe Admin console packages are bundle style, you can right-/control-click on the package and use “Show Package Contents” to access the path via Finder.

Simply generate the packages as you need, pull out a copy of the icon file you need, then you can upload both into your deployment solution of choice.
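As a sketch, pulling both icon sizes out of a package could look like the below. The package path and SAP code directory name are illustrative (based on my Bridge example) and will differ for your packages — browse inside your own package to find the right directory:

```shell
#!/bin/sh
# Copy both icon resolutions out of an Admin Console package to the Desktop.
# Adjust these two variables for your package / title (illustrative values):
PKG="$HOME/Downloads/Bridge/Build/Bridge_Install.pkg"
SAP_DIR="KBRG"   # the SAP code directory inside Resources/HD

ICON_PATH="$PKG/Contents/Resources/HD/$SAP_DIR"

for icon in appIcon.png appIcon2x.png; do
    if [ -f "$ICON_PATH/$icon" ]; then
        cp "$ICON_PATH/$icon" "$HOME/Desktop/"
        echo "Copied $icon"
    else
        echo "Not found: $ICON_PATH/$icon" >&2
    fi
done
```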

How do I know what the SAP Code is?

The simplest method is to try and figure it out from the application title, and the options presented when browsing the directory. For example:

  • Adobe Bridge 2021 (v11.0.1) has a SAP Code of KBRG
  • Adobe Dimension 2020 (v3.4.1) has a SAP Code of ESHR

Ah ok so maybe it’s not always so simple! Adobe has a KB that details the SAP codes if you do need to refer to it. This can be found here.


Hopefully this little tip will help speed up (or beautify) your Adobe deployments.

As always, if you have any questions, queries or comments, let me know below (or @daz_wallace in #adobe on Mac Admins Slack) and I’ll try to respond to and delve into as many as I can.