
Test-MailFlow cmdlet failing with *FAILURE*

I recently started looking into using the Test-Mailflow cmdlet to develop an email flow monitoring script in LogicMonitor. I had never tried using it in my current environment before, and when I executed the cmdlet it just timed out with this output:

[PS] C:\Windows\system32>Test-Mailflow -Identity mailbox01
RunspaceId : 808205bb-e671-4a65-94ca-1828bf0f7ab8
TestMailflowResult : *FAILURE*
MessageLatencyTime : 00:00:00
IsRemoteTest : False
Identity :
IsValid : True

I tried adding the -Verbose and -Debug switches and did not get anything useful. I checked to make sure all system mailboxes (Get-Mailbox -Arbitration) were in place and verified via the transport logs that the test messages were going out. I dug a little more into how the cmdlet actually works and found that it sends an email requesting a delivery receipt, which led me to look into our DSN handling. I eventually found that we had our ‘DSNConversionMode’ set to ‘DoNotConvert’ in our transport configuration:

[PS] C:\Windows\system32>Get-TransportConfig | fl DSNConversionMode
DSNConversionMode : DoNotConvert

After changing it back to the default (UseExchangeDSNs) the cmdlet started working. During testing I was sending email from my mailbox to a system mailbox with the ‘Request a Delivery Receipt’ option checked. Exchange expects the delivery receipt DSN email in the default format, and when that format is modified Exchange cannot process it.
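For reference, reverting the setting is a one-liner in the Exchange Management Shell (this mirrors the change I made; re-run the earlier Get-TransportConfig check afterwards to confirm):

[PS] C:\Windows\system32>Set-TransportConfig -DSNConversionMode UseExchangeDSNs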

Delivery receipt with DSNConversionMode set to DoNotConvert:

Delivery receipt with DSNConversionMode set to UseExchangeDSNs:

Citrix Receiver Per-User Install Cleanup

For years Citrix has shipped the Receiver installer with per-user installation functionality: if the installer is launched in the context of a regular user, it installs/registers the components into the local user’s profile rather than simply failing with a permission error. This creates a huge headache when trying to mass deploy Receiver (now Citrix Workspace) to the environment. You wind up with machines that have both a per-machine and a per-user installation, and when that happens the user with the per-user installation cannot launch applications. Even worse, the machine/profile usually winds up in a state where the per-user installation cannot be removed. Even if you do get it removed, the uninstaller and Citrix’s own cleanup utilities do an awful job of cleaning up the registered classes/components from the per-user installation; their tools only clean up a fraction of what is actually there. My last two work environments (including my current one) have been plagued with these installations. I spent time a while back figuring out how to clean them up manually, but it is a major headache to do so. I logged a case (and an enhancement request) with Citrix about two years ago stating that their Receiver Clean-up Utility does not properly clean up these installations. They later came back saying they are no longer supporting the utility. It seems that since then they’ve updated the utility to clean up installs up to version 4.3.

Let’s take a look at what Citrix is missing in their utility… To do this I take a clean profile, install Citrix Receiver 4.3.100 (not elevated, i.e. a per-user install), and uninstall it using the Receiver Clean-up Utility (running as an administrator/elevated) while the regular user is still logged in with their profile loaded. Here is a high-level list of what was left behind in the registry:

  • HKCU\Software\Classes\* – File Associations and COM object registrations
  • HKCU\Software\Classes\AppID\* – AppID registrations
  • HKCU\Software\Classes\Applications\* – More app registrations
  • HKCU\Software\Classes\CLSID\* – MANY COM class object GUIDs
  • HKCU\Software\Classes\WOW6432Node\CLSID\* – MANY COM class object GUIDs (32-bit)
  • HKCU\Software\Classes\Interface\* – MANY interface name to interface ID mappings
  • HKCU\Software\Classes\WOW6432Node\Interface\* – MANY interface name to interface ID mappings (32-bit)
  • HKCU\Software\Classes\MIME\Database\Content Type\* – x-ica MIME types
  • HKCU\Software\Classes\PROTOCOLS\Filter\* – Protocol filter handlers
  • HKCU\Software\Classes\Record\*
  • HKCU\Software\Classes\TypeLib\*
  • HKCU\SOFTWARE\MozillaPlugins\* – Firefox plugin registrations
  • HKCU\Software\Microsoft\Installer\Products – MSI installer product codes

In addition to this there are other issues:

  • When it is run against a machine it doesn’t properly load other existing (unloaded) profiles, which causes it not to fully process those profiles
  • It doesn’t always kill processes correctly, leaving files/directories behind

I decided to build a wrapper script around the Citrix Receiver Clean-up Utility to fill in the gaps. To do so I had to create a full list of everything I had to target. I decided to extract the MSIs from the installer I was testing with and dissect them. I used SuperOrca to pull the ‘Registry’ table from each MSI. I imported those into Excel and used filtering/VLOOKUPs to extract what I needed.

After chopping up the data I am left with the following groups of items to target:

  • A static list of unique registry keys under the user’s profile (this is a good start, but I noticed that some of the IDs in CLSID and Interface are different between versions)
  • A list of values to search for under ‘HKCU\Software\Classes\CLSID’ in order to determine if the root key should be deleted (the WOW6432Node path also needs to be targeted)
  • A list of values to search for under ‘HKCU\Software\Classes\Interface’ in order to determine if the root key should be deleted (the WOW6432Node path also needs to be targeted)
  • A list of values to search for under ‘HKCU\Software\Microsoft\Installer\Products’ in order to determine if the product key should be deleted. The clean-up utility cleaned up most of this, but one entry was left over for me

Now that I have the targets it’s time to write the script. In addition to targeting the various registry locations I want to:

  • Identify and load all profile registry hives. This allows me to run my clean-up process against all profiles, and it also allows the clean-up utility to process them without the user having to be logged in
  • Kill processes that reside in certain paths using wildcards
  • Execute the Citrix Receiver Clean-up Utility silently
  • Clean up a static list of registry keys in all user profiles
  • Search for a list of value strings under the ‘CLSID’ keys and delete the parent key if a match is found (see the sketch after this list)
  • Search for a list of value strings under the ‘Interface’ keys and delete the parent key if a match is found
  • Search for a list of ‘ProductName’ value strings under the ‘Installer\Products’ keys and delete the product key if a match is found
  • Unload all registry hives that were manually loaded in the first step
  • I also wanted the script to work with older versions of PowerShell. I did my best to make it compatible with PS V2
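To give an idea of the approach, here is a trimmed-down PowerShell sketch of the core per-profile logic. The hive mount point, profile path, and search string are placeholders; the actual script works from the full lists described above and also drives the clean-up utility itself:

# A minimal sketch of the per-profile clean-up approach. The hive name, profile path,
# and search string below are placeholders - the real script uses the lists pulled from the MSIs.

# 1. Load another user's registry hive so it can be processed while they are logged off
$hiveName  = 'TempUserHive'                         # arbitrary mount point under HKEY_USERS
$ntuserDat = 'C:\Users\SomeUser\NTUSER.DAT'         # placeholder profile path
reg.exe load "HKU\$hiveName" $ntuserDat | Out-Null

# 2. Delete any per-user CLSID key that references the Receiver components
$searchString = 'Citrix ICA Client'                 # placeholder - real script uses a list
$clsidPath    = "Registry::HKEY_USERS\$hiveName\Software\Classes\CLSID"

if (Test-Path $clsidPath) {
    foreach ($guidKey in Get-ChildItem $clsidPath) {
        # Dump every value in the key and its subkeys to text and look for the search string
        $keyDump = (Get-ItemProperty $guidKey.PSPath -ErrorAction SilentlyContinue | Out-String) +
                   (Get-ChildItem $guidKey.PSPath -Recurse -ErrorAction SilentlyContinue |
                       ForEach-Object { Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue } |
                       Out-String)
        if ($keyDump -like "*$searchString*") {
            Remove-Item -Path $guidKey.PSPath -Recurse -Force
        }
    }
}

# 3. Unload the hive when finished so the profile is not left locked
[GC]::Collect()
reg.exe unload "HKU\$hiveName" | Out-Null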

I have tested the script with multiple versions and so far it is working well. It does require that you download the Receiver Clean-up Utility and place the executable in the same directory as the script. Feel free to submit any issues here or in the GitHub repository.

GitHub link: https://github.com/markdepalma/CleanPerUserReceiverInstall

Applying group policy preferences based on Citrix delivery group or machine catalog membership

We’ve slowly been transitioning our Citrix XenApp environment from static VMs to Machine Creation Services (MCS) based VMs. My goal was to have two master (fat) images and two machine catalogs. Because of policy and application segregation requirements, those two catalogs translated into more than two delivery groups, and with these delivery groups came the requirement to apply different group policies to different machines. One option would be to move the corresponding AD object into a different OU and apply policy that way. That would work (AD objects are not automatically moved or re-created after machine creation), but it still requires some administrative overhead. It was clear that dynamically adjusting certain policies based on delivery group membership would be ideal.

After a little digging I found where the VDA writes both the delivery group and machine catalog memberships in the registry. Below is an example of how we applied a user GPP item dynamically based on the delivery group of the machine.

Registry Key Path: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\VirtualDesktopAgent\State
Delivery Group Value Name: DesktopGroupName
Machine Catalog Value Name: DesktopCatalogName
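As a quick sanity check (for example, before building the GPP item-level targeting registry match), you can confirm what a given VDA is reporting with PowerShell:

# Read the delivery group and machine catalog the VDA has recorded
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\VirtualDesktopAgent\State' |
    Select-Object DesktopGroupName, DesktopCatalogName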

Deploying a RODC for Palo Alto PAN-OS Credential Phishing Prevention

I was recently tasked with setting up the AD side of PAN-OS Credential Phishing Prevention. For some technical reason that I haven’t been able to find, it requires a read-only domain controller. (I attempted putting the credential agent on a regular DC just to see if it would work and it seemed to read credentials without issue. If anyone has information about the RODC requirement I’d love to hear it.) We don’t currently have or use any read-only domain controllers, so I had to deploy one for each domain we needed to protect. This brought a few questions to mind…

  • How would I decide/maintain what users have their passwords replicated to the RODC?
  • How do these passwords get replicated to the RODC? By design passwords are only replicated to an RODC after an initial authentication attempt when they are configured for password replication.
  • Since the sole reason this domain controller is being deployed is for PAN-OS I don’t want it to handle logons and I want to make it very lightweight. How do I prevent user logons/authentication from occurring on this DC?
  • How are usernames identified? Will it handle all formats (samAccountName, explicit UPN, implicit UPN, and email address)?

How would I decide/maintain what users have their passwords replicated to the RODC?

This one is pretty easy for me. I don’t see any reason to exclude any accounts from credential detection, so I will use ‘Domain Users’. I usually stay away from using default groups, but this is one of the few cases where it makes sense to do so.
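For reference, the allowed list on the RODC can be maintained (and reviewed) with the ActiveDirectory module; RODC01 below is a placeholder name:

# Allow password replication of 'Domain Users' to the RODC (RODC01 is a placeholder)
Add-ADDomainControllerPasswordReplicationPolicy -Identity RODC01 -AllowedList 'Domain Users'

# Review the resulting allowed list
Get-ADDomainControllerPasswordReplicationPolicy -Identity RODC01 -Allowed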

How do these passwords get replicated to the RODC?

I turned the logging level up to verbose (HKEY_LOCAL_MACHINE\SOFTWARE\Palo Alto Networks\User-ID Credential Agent\Log | DebugLevel=5) on the credential agent after full configuration and saw that the agent enumerates all the objects within the ‘msDS-Reveal-OnDemandGroup’ attribute of the RODC computer object (plus any DNs manually specified in the user-id agent, seen in the screenshot below) and executes ‘repadmin’ against each object to force replication. As password changes are detected it re-replicates passwords using the same method.
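The replication the agent triggers is the same thing you can do by hand with repadmin; for a single account it looks roughly like this (the server names and DN below are placeholders):

# Pre-populate one user's password on the RODC from a writable DC (all names are placeholders)
repadmin /rodcpwdrepl RODC01 DC01 "CN=Jane Doe,OU=Staff,DC=example,DC=com"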

How do I prevent user logons/authentication?

Clients discover domain controllers using DC Locator. I decided to prevent the domain controller from registering all SRV records except for the two necessary for replication (LdapIpAddress + DsaCname). To do this I set a local policy under ‘Computer Settings → Administrative Templates → System → NetLogon → DC Locator DNS records’ called ‘DC Locator DNS Records not registered by the DCs’. The value I set for this policy was:

Ldap LdapAtSite Pdc Gc GcAtSite GcIpAddress DcByGuid Kdc KdcAtSite Dc DcAtSite Rfc1510Kdc Rfc1510KdcAtSite GenericGc GenericGcAtSite Rfc1510UdpKdc Rfc1510Kpwd Rfc1510UdpKpwd
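My understanding is that this policy corresponds to the ‘DnsAvoidRegisterRecords’ multi-string value under the Netlogon parameters key, so if you would rather script it than use the local GPO, an equivalent would be something like the sketch below (an assumption on my part; verify the result in DNS afterwards):

# Assumption: writing the mnemonic list to DnsAvoidRegisterRecords is equivalent to the local policy above
$mnemonics = -split 'Ldap LdapAtSite Pdc Gc GcAtSite GcIpAddress DcByGuid Kdc KdcAtSite Dc DcAtSite Rfc1510Kdc Rfc1510KdcAtSite GenericGc GenericGcAtSite Rfc1510UdpKdc Rfc1510Kpwd Rfc1510UdpKpwd'
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters' `
    -Name DnsAvoidRegisterRecords -Value $mnemonics -PropertyType MultiString -Force | Out-Null
Restart-Service Netlogon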

How are usernames identified?

After experimentation it is clear that, when using the domain credential filter method, PAN-OS gets the user from the IP<->user mapping and only looks for that user’s password in web site submissions. No matter what username I put in a form, the submission triggered a detection as long as the password matched my password; another user’s credentials submitted under my session did not trigger a detection. I was happy with this because I do not have to worry about certain username formats not being detected.

After all of these questions/concerns were addressed came the actual implementation. You are required to install both the ‘User-ID Agent’ and the ‘User-ID Credential Agent’ on the RODC. According to the documentation this instance of the user-id agent should not be used for IP<->user relationship gathering and should only be pulling credentials. The credential agent creates the ‘bloom filter’ and sends it over to the user-id agent, and PAN-OS connects to the user-id agent to receive the newest version of the bloom filter. One issue I ran into was around permissioning and service accounts. Normally you would assign a domain account with limited permissions to the user-id agent, but the thing to consider here is that the credential agent and user-id agent communicate using named pipes. According to the documentation on named pipes, if no ACL is specified when creating a named pipe the default ACL is:

  • LocalSystem – Full Control
  • Administrators – Full Control
  • Creator Owner – Full Control
  • Everyone – Read
  • Anonymous – Read

The issue here is that the credential agent only runs under LocalSystem, and assigning a non-administrator account to the user-id agent service prevents the user-id agent from communicating with the credential agent’s named pipe. Leaving the user-id agent service running under LocalSystem worked, but created another problem: when running under LocalSystem it was, for some reason, unable to enumerate the ‘msDS-Reveal-OnDemandGroup’ attribute for the RODC (seen in the UaDebug.log file), meaning it couldn’t determine what user accounts were allowed to sync to this RODC. I found that if I manually specified a group DN in the user-id agent it would work under LocalSystem. The only other option would be switching to a ‘DOMAIN\Administrators’ service account (since this is a domain controller), which I did not want to do. Since I’m only using ‘Domain Users’ this was easy enough to configure.

UPDATE: There seems to be a discrepancy between how the User-ID agent worked previously, the current documentation, and how it works now. In the past the User-ID agent configuration utility would adjust the ‘Log on as’ value for the ‘User-ID Agent’ service to the account you specified in the agent setup ‘Authentication’ tab. It seems now the service continues to run as LocalSystem, but uses the account specified in the configuration to actually probe the DCs and AD. I was able to leave it running as LocalSystem, specify an account with the proper rights in the ‘Authentication’ tab, and leave the group DN blank under the ‘Credentials’ tab in the user-id agent configuration utility. I verified the agent was using the account via logon events in the security event log on the RODC.

After configuring this you can monitor both log files to verify proper operation and then later verify PAN-OS is properly receiving the bloom filters. Be sure to restart the user-id agent after making any changes.

Credential agent log (UaCredDebug.log) sending bloom filter:

02/08/19 12:43:46:593 [ Info  667]: Sent BF to UaService. 21edc031f4891d2c42c133acded980ba

User-ID Agent log (UaDebug.log) receiving bloom filter from credential agent:

02/08/19 12:43:46:593[ Info 2896]: Received BF Push. Different from current one.
02/08/19 12:43:46:593[ Info 2897]: 0829f71740aab1ab98b33eae21dee122->21edc031f4891d2c42c133acded980ba

Azure AD Connect mail-enabled public folder synchronization issues – The cause of the error is not clear

We recently went through some Exchange Online Protection (EOP) cleanup, and part of that involved turning on Directory Based Edge Blocking. We had already gone through the exercise of syncing all objects (especially Exchange-related ones); the only objects that weren’t being synced were mail-enabled public folders. After turning on Directory Based Edge Blocking we realized there were a few public folders that needed to receive mail from the Internet. After syncing mail-enabled public folders (a newer feature in AD Connect) we received synchronization errors for four objects. The only thing these objects had in common was that they referenced a mail-enabled public folder, either by having that object as a group member or by having it as a forwarding object on a mailbox.

The errors we were receiving were:

  • The cause of the error is not clear. This operation will be retried during the next synchronization. If the issue persists, contact Technical Support.
  • IdentityDataValidationFailed

The workaround is to create a mail contact object that has the same targetAddress as the mail-enabled public folder object and use that contact in place of the public folder object in something like a group membership. The catch is that, by design, a mail contact’s targetAddress is also part of its proxyAddresses attribute, and the mail-enabled public folder object of course already has that email address in its proxyAddresses attribute; this duplicate is not allowed. The way around this is to modify the mail contact object so that the targetAddress is not part of proxyAddresses. To create this special mail contact you do the following (a scripted equivalent is sketched after the list):

  • Create a mail contact in Exchange with a fake external address
  • Disable e-mail address policy for the object
  • Use ADSIEdit to:
    • Change the targetAddress to the email address of the mail-enabled public folder
    • Remove the fake external address you specified earlier from proxyAddresses
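Here is a rough PowerShell equivalent of those steps. The contact name, fake address, and public folder address are placeholders; the Set-ADObject call performs the same edits you would otherwise make in ADSIEdit:

# Create the contact with a throwaway external address and disable the address policy (Exchange shell)
New-MailContact -Name 'PF-Sales-Contact' -ExternalEmailAddress 'fake-sales@example.invalid'
Set-MailContact -Identity 'PF-Sales-Contact' -EmailAddressPolicyEnabled $false

# Point targetAddress at the mail-enabled public folder's SMTP address and remove the fake
# address from proxyAddresses so it no longer duplicates anything (ActiveDirectory module)
Get-ADObject -Filter { name -eq 'PF-Sales-Contact' } |
    Set-ADObject -Replace @{ targetAddress = 'SMTP:sales@example.com' } `
                 -Remove @{ proxyAddresses = 'SMTP:fake-sales@example.invalid' }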

After the object has been created you can now use it in lieu of the mail-enabled public folder in group memberships and other attributes.

Deploying Home Assistant Hass.io to ESXi 6.5

I recently did a complete overhaul of my home ESXi lab environment. With the new capacity/reliability came the desire to move as much onto it as possible. One of those items was my Home Assistant Hass.io instance, which was running on a Raspberry Pi 3 (and originally a Raspberry Pi 1 B). Running it on the Pi has always come with painfully slow reboot and update times, and with VM real estate available I see no reason to rely on mini computers to run various workloads around my home; I can re-purpose these boards for other projects. ESXi also brings the ability to do machine-level snapshots, which are more complete and easier to revert to than the snapshot mechanism within Hass.io.

The main issue I ran into was with the VMDK. The way the shipped VMDK was created, it was split into multiple files and I couldn’t consolidate or delete snapshots; the VMDK was also getting locked, preventing vMotions. To get around this I cloned the VMDK in the ESXi shell using vmkfstools. I also had to use the proper storage controller, network adapter, and firmware settings. I’ve listed all the steps below:

  • Create a new VM with the following parameters:
    • Guest OS – CentOS 7 (64-bit) (The VM will adjust this automatically later)
    • 1 vCPU
    • 2GB RAM
    • 1x E1000 NIC (a VMXNET 3 adapter will not be usable by the guest)
    • Remove any other devices like CD-ROM, hard disk, SCSI controller, etc.
  • Download the latest stable HassOS VMDK from https://github.com/home-assistant/hassos/releases
  • Copy this VMDK up to the VM directory in ESXi/vSphere
  • Open an SSH session to the ESXi host and change directory to the location of the VMDK you just copied (ex. cd /vmfs/volumes/datastore1/HASSOSVM)
  • Clone the VMDK using the following command: vmkfstools -i "hassos_ova-2.8.vmdk" "hassos_ova-2.8_new.vmdk" (this creates a thick copy of the disk and avoids the locking/snapshot issues with the virtual disk)
  • Delete the original VMDK from the datastore as it is no longer needed
  • Edit the VM and add an existing hard drive selecting the VMDK you just cloned
  • Change the controller for the disk from “New SCSI controller” to “IDE 0”
  • Remove the newly added SCSI controller as it is not needed for the IDE virtual disk
  • Go to “VM Options” and change the Firmware from “BIOS” to “EFI”

After all of this has been completed you just need to power on the VM. Assuming DHCP is configured properly on the network Hass.io is using, it will pick up an IP and start configuring itself. With Hass.io running on an older Xeon-powered host I have never seen the VM go over 50-60 percent CPU utilization, and even then I’ve only seen those spikes when running an update. Updates of HassOS and Hass.io take a minute or two, whereas they would sometimes take 10-15 minutes on a Pi.

AirWatch API – The argument cannot be null

I was recently creating a PowerShell script that uses the AirWatch REST API to perform mass updates to enrollment users. When testing the process using an API tool (Insomnia) I received the following error when issuing a POST to "/api/V1/system/users/{id}/update":

<?xml version="1.0" encoding="utf-8"?>
<AirWatchFaultContract
  xmlns:xsd="http://www.w3.org/2001/XMLSchema"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://www.air-watch.com/">
  <ErrorCode>1018</ErrorCode>
  <Message>The argument cannot be null</Message>
  <ActivityId>99311627-d7fd-4fa3-bede-78553fe0ac88</ActivityId>
</AirWatchFaultContract>

I was using an XML body to pass a single parameter as per the documentation, and the user id was correct. I was unable to find any information on this error, and the only thing left to try was using a JSON body instead of XML for the POST. Once I switched to a JSON body the call was successful. I tested other POST calls using an XML body and they did not produce this error.
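For anyone hitting this from PowerShell, the working call looked roughly like the sketch below. The host, credentials, tenant code, and the body field are placeholders (this illustrates the JSON body and required headers, not the documented schema for this endpoint):

# All values below are placeholders - substitute your environment's host, API key, and account
$apiHost  = 'https://airwatch.example.com'
$userId   = 1234
$authPair = 'apiadmin:P@ssw0rd'                     # basic auth user:password (placeholder)
$headers  = @{
    'Authorization'  = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes($authPair))
    'aw-tenant-code' = 'YOUR-API-KEY'
    'Accept'         = 'application/json'
}
# Hypothetical single-parameter body - use whatever attribute you are actually updating
$body = @{ Email = 'user@example.com' } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "$apiHost/api/V1/system/users/$userId/update" `
    -Headers $headers -ContentType 'application/json' -Body $body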