
Hacking a Verizon RC8021V IP camera – Part 2

Obtaining Root

In the last post I went over obtaining terminal access to the camera via the UART connection points on the main board. Once I had terminal access I was still presented with a login that I needed to get past. With no other way past it at the time, I started doing more research on this camera and the other related models I mentioned previously. I was hoping to find firmware that could lead me to a static root password. I was unable to find any export of the Verizon firmware, but I did find others. With some Google searches I stumbled across unpublished links for source-code builds on the Sitecom web site for both the WL-404 and LN-406 cameras. The two archives I used were under: https://www.sitecom.com/documents/. Let's take a look at
GPL _ LN-406_WL-404 _ fw1_0_11.zip. If we dig into the ZIP we see a lot of source code, but more importantly we find an /etc/passwd file. The contents of this file were:

root:9szj4G6pgOGeA:0:0:root:/root:/bin/sh

The string we are concerned with is '9szj4G6pgOGeA'. Taking a closer look at the string we see it is an old descrypt crypt(3) Unix password hash. The password itself has a maximum length of eight characters, combined with a two-character salt ('9s'). If you Google this hash you find that it is actually common among many of the camera models I mentioned in part 1, but I was unable to find any instance of this hash actually being cracked. Looking at the firmware it seemed that this was based on a core source code distribution that originally came from Sercomm (the actual manufacturer of the camera). I figured there was a high probability of this being the actual root password on the camera I was trying to crack. At this point I decided that cracking the root password might be the easiest way to get the web console admin password and gain access to the camera.
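For anyone wanting to try the same thing, a descrypt hash drops straight into hashcat as mode 1500. A brute-force mask attack over one to eight printable characters would look roughly like the sketch below (hash.txt is just a placeholder file containing the hash):

# Brute-force a descrypt hash (hashcat mode 1500) over 1-8 printable ASCII characters
# --increment walks the mask length from 1 up to the full 8-character mask
hashcat -m 1500 -a 3 --increment --increment-min 1 hash.txt '?a?a?a?a?a?a?a?a'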

Cracking a crypt(3) Unix Password

I had never attempted to crack this algorithm before and, knowing the password wasn't overly simple (others had already tried and failed to crack it), I knew I'd need external resources to accomplish this. I started off by calculating the approximate number of password combinations I'd have to go through. I knew nothing about the password, so I'd at least have to assume it could contain all printable characters with a length between one and eight characters. The math for this would be: 93^1 + 93^2 + 93^3 + 93^4 + 93^5 + 93^6 + 93^7 + 93^8 = 5,656,642,206,396,600 combinations.

I ran a quick John the Ripper benchmark on a decent Azure VM (no GPU) and the descrypt performance on that box was only around 16,385,000 c/s. At that rate it would take ~11 years to crack. I found the most powerful video card I had lying around, which was an NVIDIA GTX 750 Ti (forgive me, I am not a gamer). The Hashcat benchmark for this card cracking descrypt was around 28,989,000 c/s. This was significantly faster, but would still take ~6 years. I started researching video cards and found that one of the more popular cards, the NVIDIA GTX 1080 Ti, had a descrypt crack rate of around 1,227,000,000 c/s. At this rate it would take ~53 days. I had a colleague who actually had this card at home and I figured I'd make a deal with him to run this crack for me during idle time if I couldn't find anything else. I started researching heavy GPU cloud systems and the most powerful (GPU-wise) I found was the Amazon p3.16xlarge configuration, which used EIGHT NVIDIA V100 Tensor Core GPUs. These GPUs are commonly used in the machine learning space and are a bit more powerful than the top gaming GPUs. An instance with this configuration has a descrypt crack rate of around 15,571,600,000 c/s and could crack the password in ~4 days, but would cost $2000 for the privilege. As a last resort when looking at computing rentals I turned to Vast.ai. Playing with different GPU/server rental combinations I found the password could be cracked for somewhere around $250, but I couldn't get it any lower than that.

Doing further research I came across an interesting site called Crack.sh, which was founded by David Hulton (also a founder of ToorCon). He created a purpose-built system that uses a series of FPGAs and specializes in cracking DES. They rent it out for much less than any other cracking service comes close to and even offer some services for free. Because this system was specifically created for cracking DES it can actually crack a descrypt password in ~3 days, which is even faster than the Amazon configuration I spec'd out. I decided to eat the cost in the name of the project and figured the cracked password would be useful to others as well. Once my crack job actually started (after sitting in the queue for a while) I had the password in around three days. The email read:

Crack.sh has found a password that works against your hash. The password match is included below both in ASCII and HEX:
HASH: 9szj4G6pgOGeA
Password (ASCII): "h@11oCAM"
Password (HEX): 684031316f43414d
This run took 313726 seconds. Thank you for using crack.sh, this concludes your job.

The password was a bit comical. I took the first instance of idle time I had, remotely connected to my lab environment, and attempted getting into the camera's shell using the password. To my dismay I found that the password DID NOT WORK on this camera. I felt somewhat defeated at this point, but I had known when going this route that there was some chance that the password in the Verizon firmware was not the same as in the other cameras using the same platform. Cracking what was LIKELY the root password was sort of an easy way out.

Flashing New Firmware

Since I had multiple versions of usable firmware (with a default web console password I actually knew) I decided to look into somehow flashing alternate firmware. I forgot to mention this in part 1, but during discovery I found that holding the reset button and powering the camera on put the camera into a download mode. I was able to see this through the UART connection. When in this mode the output showed this:

The bootloader version 4.09
Flash Size = 4M(8K x 8,64K x 63)
MAC address 00:0e:8f:7a:34:83
MAC address 00:0e:8f:7a:34:83
  MAC address is 00:0e:8f:7a:34:83
DAVICOM driver ready
geting PID
got PID in flash
got it
Download mode.

The reason I didn't explore this further was because at the time I didn't see a way to upload firmware. When in download mode the camera did not have an IP address and the UART connection didn't seem to accept any input. What I didn't realize at the time was that it was able to accept a firmware download over the network in this mode, just not via TCP/IP. Further research led me to an old Sercomm tool that was used to flash a number of Sercomm-based products. I had seen references to this tool earlier on, but knowing it was a network-based utility I didn't see how it could be used for this device. The filename/download for this tool is: Upgrade_207_XP.zip. The tool required a 32-bit OS and I didn't have any instances running, so I decided to quickly spin up a Windows 98 VM. To my surprise the application immediately discovered the camera on the network.

I was curious what protocol this was using since it was not using IP, so I used the span port I set up earlier during the initial reconnaissance on the camera to sniff this traffic.

I found that the upgrade utility sent a layer 2 broadcast and the camera responded. The Ethernet protocol type shown in the packet capture was 0x8888. A Google search of this revealed a little information about the protocol and its commands. I attempted to flash the NQ-9006 version firmware (NQ-9006.1009.bin) to this camera and it took without any issue.
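If you want to isolate this traffic yourself, filtering on that EtherType is enough. Assuming a standard Wireshark display filter, it looks like this:

eth.type == 0x8888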

After the firmware update I was FINALLY able to get into the camera's admin console, making the camera completely usable. Though I technically accomplished my original goal, I was not happy about spending time on a password crack that turned out to be a dead end, and I still did not know the default admin console password (or root password) that the Verizon firmware was using. Also, my buddies would need to open and manually flash every camera before attempting to sell them. True to form I could not let it be, and I will cover this in part 3!

Hacking a Verizon RC8021V IP camera – Part 1

A few months ago a tenant next to my friends' warehouse threw away something like 100 IP cameras that were still sealed in the box. One of my other friends took the bulk of them with hopes of making a profit, and the friend who initially found them saved me a few because he thought I'd like playing with them (little did he know what this would turn into). When I tried setting one up I found that it was a Verizon-branded version of the Sercomm RC8021 camera. The web console had a username/password combination that did not match up with anything I found online. Because of this the camera was essentially useless, as you could not configure any settings or join it to a wireless network. I thought cracking this camera would make for an interesting exercise/challenge and it turned out to be exactly that…

Initial Reconnaissance

I started out by researching the camera model and any default passwords associated with it. I came across references to the default credentials for an RC8021 being 'administrator' with a blank password. That didn't work, along with anything else I could think of (admin/blank, admin/admin, administrator/admin, etc.). Resetting the camera didn't help either. Verizon's own document for this camera was still available online, but had nothing useful. Some Googling also revealed some documented APIs for the camera, but all of the APIs I hoped to use required authentication. A port scan did not yield anything useful. I did find that the camera had a port open for UPnP, but I wasn't able to use any of the known UPnP exploits to gain access to the camera or its file system.

Obtaining Physical Access

After all my initial attempts to get into this camera were unsuccessful, I decided to crack the camera open and start poking around. Below are both sides of the camera board. The four connection points within the red square below are indicative of a UART connection. The other clue to this was the square pad to the far right, which would likely be ground. I used my multimeter to test these points and verified this.


Now that I was confident I had located a UART connection I broke out the soldering iron and soldered a four-pin header here…

After soldering the header I just needed to figure out which pin was RX and which was TX. I already knew what ground was, so figuring out RX/TX was simple. VCC was not necessary since the camera had its own power supply. I hooked up three of the pins to a USB->UART converter, connected Putty to the COM port assigned to the converter on my computer, and set the Putty serial session with a common config:

Speed (baud): 115200
Data bits: 8
Stop bits: 1
Parity: Even
Flow control: None
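If you would rather script the serial session than use Putty, the same settings map directly onto the .NET SerialPort class. Below is a minimal sketch; COM3 is just a placeholder for whatever port your USB->UART converter gets assigned:

# Open the UART with the same 115200 / 8 data bits / even parity / 1 stop bit settings (COM3 is a placeholder)
$port = New-Object System.IO.Ports.SerialPort -ArgumentList 'COM3', 115200, ([System.IO.Ports.Parity]::Even), 8, ([System.IO.Ports.StopBits]::One)
$port.Handshake = [System.IO.Ports.Handshake]::None   # no flow control
$port.Open()

# Dump whatever the camera prints (boot output, login prompt) for ~60 seconds, then close
$end = (Get-Date).AddSeconds(60)
while ((Get-Date) -lt $end) {
    if ($port.BytesToRead -gt 0) { Write-Host -NoNewline $port.ReadExisting() }
    Start-Sleep -Milliseconds 100
}
$port.Close()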

Once I hooked the camera up to power I was able to watch the entire boot process and eventually received a logon prompt. My initial hope was that the shell was not password protected or at least had a blank root password, but I was not that lucky. This was the boot-up output:

 The bootloader version 4.09
Flash Size = 4M(8K x 8,64K x 63)
MAC address 00:0e:8f:7a:34:83
unprotected area checksum = 00000000
Uncompressing Linux………………………………………… done, booting the kernel.
Linux version 2.4.19-pl1029 (peggy@ISBU-Compiler-B1) (gcc version 3.3.4) #294 Thu Sep 29 14:27:29 CST 2011
CPU: Faraday FA526id(wb) revision 1
ICache:16KB enabled, DCache:16KB enabled, BTB support, IDLE support
Machine: Prolific ARM9v4 - PL1029
Prolific arm arch version 1.0.11
On node 0 totalpages: 8192
zone(0): 8192 pages.
zone(1): 0 pages.
zone(2): 0 pages.
Kernel saved command line [arch]: root=/dev/mtdblock3 rootfstype=squashfs
Kernel command line [options]: root=/dev/mtdblock3 rootfstype=squashfs
plser console driver v2.0.0
Calibrating delay loop… 147.56 BogoMIPS
Memory: 32MB = 32MB total
Memory: 30764KB available (1263K code, 284K data, 64K init)
Dentry cache hash table entries: 4096 (order: 3, 32768 bytes)
Inode cache hash table entries: 2048 (order: 2, 16384 bytes)
Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
Buffer-cache hash table entries: 1024 (order: 0, 4096 bytes)
Page-cache hash table entries: 8192 (order: 3, 32768 bytes)
POSIX conformance testing by UNIFIX
PCI: Probing PCI hardware on host bus 0.
Linux NET4.0 for Linux 2.4
Based upon Swansea University Computer Society NET3.039
Initializing RT netlink socket
tts/%d0 at MEM 0x1b000400 (irq = 2) is a PLSER
Prolific addr driver v0.0.4
Starting kswapd
devfs: v1.12a (20020514) Richard Gooch (rgooch@atnf.csiro.au)
devfs: boot_options: 0x1
squashfs: version 3.0 (2006/03/15) Phillip Lougher
i2c-core.o: i2c core module version 2.8.1 (20031005)
i2c-dev.o: i2c /dev entries driver module version 2.8.1 (20031005)
Prolific i2c algorithm module v1.2
Initialize Prolific I2C adapter module v1.2.1
found i2c adapter at 0xd9440000 irq 17. Data tranfer clock is 100000Hz
i2c-proc.o version 2.8.1 (20031005)
pty: 256 Unix98 ptys configured
Software Watchdog Timer: 0.05, timer margin: 60 sec
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
PPP generic driver version 2.4.2
PPP Deflate Compression module registered
PPP BSD Compression module registered
Linux video capture interface: v1.00
Prolific Audio AC97 driver version 2.2.2 for 63 and 29 Audio Module 2006/03/21
ac97_codec: AC97 Audio codec, id: 0x414c:0x4770 (Realtek ALC203/203LF)
PL-1029 NOR flash driver for MTD, version 0.8.0 sc 1
NOR flash type: ppi-amd 8x8 64x63
NOR flash id = 0xa8
nor interrupt 30 registered
Creating 9 MTD partitions on "plnormtd":
0x00000000-0x00008000 : "bootloader"
0x0000c000-0x0000e000 : "MAC"
0x000e0000-0x00400000 : "SQUASHFS"
0x0000e000-0x00010000 : "LOGO Image"
0x00020000-0x00400000 : "Kernel+FS"
0x00000000-0x00400000 : "ALL"
0x0000a000-0x0000c000 : "HTTPS CA"
0x00010000-0x00020000 : "CONFIG"
0x00008000-0x0000a000 : "Reserve"
usb.c: registered new driver hub
usb-ohci-pci.c: usb-00:05.0, PCI device 180d:2300
usb-ohci.c: USB OHCI at membase 0xd8400000, IRQ 9
usb.c: new USB bus registered, assigned bus number 1
usb.c: ### @@@ usb_set_address
usb.c: >>> usb_get_device_descriptor
usb.c: >>> usb_get_configuration
hub.c: USB hub found
hub.c: 4 ports detected
Prolific Real-Time Clock Driver version 1.0.0 (2003-04-02)
PL UART driver version 1.0.1-1 (2006-04-21)
Initializing Cryptographic API
NET4: Linux TCP/IP 1.0 for NET4.0
IP Protocols: ICMP, UDP, TCP, IGMP
IP: routing cache hash table of 512 buckets, 4Kbytes
TCP: Hash tables configured (established 2048 bind 4096)
ip_tables: (C) 2000-2002 Netfilter core team
NET4: Unix domain sockets 1.0/SMP for Linux NET4.0.
802.1Q VLAN Support v1.8 Ben Greear greearb@candelatech.com
All bugs added by David S. Miller davem@redhat.com
Fast Floating Point Emulator V0.9 (c) Peter Teichmann.
RAMDISK: squashfs filesystem found at block 0
RAMDISK: Loading 2221 blocks [1 disk] into ram disk… done.
VFS: Mounted root (squashfs filesystem) readonly.
Mounted devfs on /dev
Freeing init memory: 64K
pl serial only support even parity
init started: BusyBox v1.01 (2011.09.29-06:29+0000) multi-call binary
init started: BusyBox v1.01 (2011.09.29-06:29+0000) multi-call binary
Starting pid 11, console /dev/tts/0: '/etc/init.d/rcS'
Starting the boot scripts…
Watchdog is at home.
Using /lib/modules/2.4.19-pl1029/watchdog.o
Using /lib/modules/2.4.19-pl1029/gpio_drv.o
insmod: io_expander.o: no module by that name found
insmod: leds.o: no module by that name found
insmod: lcd.o: no module by that name found
insmod: switchs.o: no module by that name found
Using /lib/modules/2.4.19-pl1029/reset.o
insmod: rtc_drv.o: no module by that name found
insmod: iris_drv.o: no module by that name found
insmod: analog_out.o: no module by that name found
insmod: echo_cancel.o: no module by that name found
/etc/init.d/rcS: 69: /usr/local/bin/pt_chk: not found
/sbin/ifconfig lo
RT61: Vendor = 0x1814, Product = 0x0301
System Info:
fw_ver: V1.0.03
fw_create_date: [2011-09-29 14:31:19 +0800 ]
mac_addr: 00:0e:8f:7a:34:83
def_name: VZ7A3483
release_date: Sep 29,2011
model_name: RC8021V
Assigned Info:
domain=0x10
language=0x00000000
serial=
pin=
rc4=
pid=
vid=
en_name=
desp=
time=
boot_mode=0x0
power line freq=60
lens_type=1
aes interrupt 12 registered
Using /lib/modules/crypt.o
cp: /usr/local/bin/stunnel.pem: No such file or directory
cp: /usr/local/bin/CAcerts.pem: No such file or directory
OV7725 detected!
Using /lib/modules/2.4.19-pl1029/ov_7725.o
plmedia version 1.2.11.1
Hello Grabber!
Hello Encoder!
Hello PLMD!
Using /lib/modules/plmedia.o
Using /lib/modules/2.4.19-pl1029/dmac.o
DMAC Driver (Ver: 1.0.0.1, Release-Date: 25-Sep-2009)
/etc/init.d/rcS: 102: /usr/local/bin/io_init: not found
/etc/init.d/rcS: 116: /usr/local/bin/dnswitch_action: not found
Davicom DM91xx net driver loaded, version 1.37 (JUN 16, 2004): VLAN support 4…
RT61: Vendor = 0x1814, Product = 0x0301
Ethernet link is ready now
Jan 1 00:00:09 dhcpcd[136]: DHCP successfully worked with the server (192.168.0.43)
/etc/init.d/rcS: 139: /usr/local/bin/jabberlog: not found
pl serial only support even parity
RC8021V7A3483 login: pc : [<0000a8a4>] lr : [<0000a868>] Not tainted
sp : bffffb28 ip : ffffffff fp : 00036e90
r10: 00024eb8 r9 : 00033ce0 r8 : 00033ca8
r7 : 00000000 r6 : 00021db8 r5 : 00000000 r4 : 0000000a
r3 : 00000000 r2 : 0000000a r1 : 000171ca r0 : 00000000
Flags: nZCv IRQs on FIQs on Mode USER_32 Segment user
Control: 397D Table: 01E8C000 DAC: 00000015
RC8021V7A3483 login:

With this output we're able to see a few interesting things. We get a good picture of the underlying bootloader/OS/filesystem, we get more hardware details, and all of it gives us more to Google against. I tried logging into the camera with root and various passwords, but was unsuccessful. Once I started Googling against this information I found that this camera platform was actually fairly common and that a number of other cameras were pretty much identical to this one. Some models were the Sercomm RC8021, ADT Pulse RC8021W, Linksys WVC54GCA, Sitecom WL-404, Sitecom LN-406, NorthQ NQ-9006, and some others. The actual manufacturer of this platform was Sercomm with their RC8021 camera, but unfortunately their web site had absolutely nothing to offer.

Stay tuned for Part 2…

Modifying Windows Server Failover Cluster (WSFC) node subnets the quick and dirty way

We were in the middle of expanding a server subnet and ran into complications. If you tried to make subnet mask changes to a NIC without bringing down the entire cluster, you ran into all kinds of issues which could take a while to fix. We wound up evicting nodes, removing/recreating SQL AG listeners (for SQL AlwaysOn nodes), rebooting nodes, etc. Basically, WSFC pulls subnet configuration directly from the NICs used in the cluster and you have to bring cluster services down, make changes, and bring them back up in a specific order to avoid issues. I decided to find a faster way to accomplish this. It involves modifying the CLUSDB cluster configuration registry hive (here is a good page on this database). When cluster services are actually running this registry hive is mounted as 'HKLM\Cluster', but we will be taking down the cluster and manually mounting the hives for offline modification.

  1. Stop other cluster-related services (like SQL Server) on all nodes
  2. Shut down the entire cluster. You can do this from the 'Failover Cluster Manager', PowerShell, or the old 'cluster.exe' utility
  3. Verify the ‘Cluster Service‘ service is stopped on all nodes
  4. Update the subnet mask(s) on the appropriate NICs on all nodes
  5. Update the CLUSDB registry hives on ALL nodes. Make sure you are updating the correct registry values with the correct data. There are both regular subnet mask values AND subnet CIDR values to update (see the sketch after this list)
    • BACKUP THE ‘C:\Windows\Cluster\CLUSDB’ files on all nodes BEFORE making any changes. You may want to backup/snapshot the server as well
    • Load the ‘C:\Windows\Cluster\CLUSDB’ registry hive (REG LOAD HKLM\CLUSDB “C:\Windows\Cluster\CLUSDB”)
    • Edit HKLM\CLUSDB\NetworkInterfaces\OBJECTGUID\AddressPrefixList\0000  values
    • Edit HKLM\CLUSDB\Networks\OBJECTGUID  values
    • Edit HKLM\CLUSDB\Networks\OBJECTGUID\PrefixList\0000  values
    • Edit HKLM\CLUSDB\Resources\OBJECTGUID\Parameters values (for IP address resources)
    • UNLOAD HIVES ON ALL NODES (REG UNLOAD HKLM\CLUSDB)
  6. Start the cluster again
  7. Verify all networks and IP address cluster resources are displaying the right subnet values
  8. Start any other cluster-related services you may have stopped in step #1
  9. Verify functionality of the cluster and all other services
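Here is a rough PowerShell sketch of the hive load/edit/unload cycle from step #5, run on each node while the Cluster Service is stopped. It only enumerates the keys so you can see which subnet mask/prefix values need changing; the actual edits are up to you:

# Load the offline cluster database hive
reg load HKLM\CLUSDB "C:\Windows\Cluster\CLUSDB" | Out-Null

# Dump the values under the keys called out above so the mask/prefix values are easy to spot
foreach ($key in Get-ChildItem 'HKLM:\CLUSDB\Networks', 'HKLM:\CLUSDB\NetworkInterfaces', 'HKLM:\CLUSDB\Resources' -Recurse) {
    $key.Name
    Get-ItemProperty -Path $key.PSPath | Format-List *
}

# ...edit the appropriate values with Set-ItemProperty (or regedit), then unload the hive
[gc]::Collect()   # release handles so the unload doesn't fail
reg unload HKLM\CLUSDB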

This method could also be used to move the cluster to an entirely new subnet, but it might be easier just to create new IP resources (which will create new cluster networks) at that point.

Here are the first two examples of the registry changes above:

Apktool – build error – brut.common.BrutException: could not exec (exit code = 1)

I have an ongoing Android app ‘modification’ project that I periodically need to perform updates on. I recently went to disassemble, re-modify, and reassemble the new version of the APK and was presented with this error:

AndroidManifest.xml:1: error: No resource identifier found for attribute 'compileSdkVersion' in package 'android'
AndroidManifest.xml:1: error: No resource identifier found for attribute 'compileSdkVersionCodename' in package 'android'
W:
Exception in thread "main" brut.androlib.AndrolibException: brut.androlib.AndrolibException: brut.common.BrutException: could not exec (exit code = 1): […APK path information…]
at brut.androlib.Androlib.buildResourcesFull(Androlib.java:492)
at brut.androlib.Androlib.buildResources(Androlib.java:426)
at brut.androlib.Androlib.build(Androlib.java:305)
at brut.androlib.Androlib.build(Androlib.java:270)
at brut.apktool.Main.cmdBuild(Main.java:227)
at brut.apktool.Main.main(Main.java:75)

I updated apktool/JDK and made sure everything else with my setup was correct. After a little digging I found that I just needed to run a cleanup to resolve the issue. After running ‘apktool empty-framework-dir‘ I was able to successfully reassemble the APK.
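For reference, the fix boiled down to these two commands (the source directory and output names are placeholders for your own project):

# Clear apktool's cached framework resources, then rebuild the APK
apktool empty-framework-dir
apktool b app_src -o app-modified.apk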

Test-MailFlow cmdlet failing with *FAILURE*

I recently started looking into using the Test-Mailflow cmdlet to develop an email flow monitoring script in LogicMonitor. I had never tried using it in my current environment before and when I tried executing the cmdlet it just timed out with this output:

[PS] C:\Windows\system32>Test-Mailflow -Identity mailbox01
RunspaceId : 808205bb-e671-4a65-94ca-1828bf0f7ab8
TestMailflowResult : *FAILURE*
MessageLatencyTime : 00:00:00
IsRemoteTest : False
Identity :
IsValid : True

I tried adding the -Verbose and -Debug switches and did not get anything useful. I checked to make sure all system mailboxes (Get-Mailbox -Arbitration) were in place and verified via the transport logs that the test messages were going out. I dug a little more into how the cmdlet actually works and found that it sends an email requesting a delivery receipt, which led me to look into that. I eventually found that we had 'DSNConversionMode' set to 'DoNotConvert' in our transport configuration:

[PS] C:\Windows\system32>Get-TransportConfig | fl DSNConversionMode
DSNConversionMode : DoNotConvert

After changing it back to the default (UseExchangeDSNs) the cmdlet started working. During testing I was sending email from my mailbox to a system mailbox with the 'Request a Delivery Receipt' option checked. Exchange expects the default format in the delivery receipt DSN email, and when it is converted Exchange cannot process it.
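If you run into the same thing, reverting the setting is a one-liner from the Exchange Management Shell (assuming you are comfortable changing DSN conversion behavior org-wide):

# Revert DSN conversion to the default so delivery receipts come back in the format Test-Mailflow expects
Set-TransportConfig -DSNConversionMode UseExchangeDSNs

# Confirm the change
Get-TransportConfig | Format-List DSNConversionMode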

Delivery receipt with DSNConversionMode set to DoNotConvert:

Delivery receipt with DSNConversionMode set to UseExchangeDSNs:

Citrix Receiver Per-User Install Cleanup

For years Citrix has created the Receiver installer with per-user installation functionality: if the installer is launched in the context of a regular user it will install/register the components to the local user's profile rather than just failing with a permission error. This creates a huge headache when trying to mass deploy Receiver (now Citrix Workspace) to the environment. You wind up with machines that have both a per-machine and a per-user installation. When this happens the user that had the per-user installation cannot launch applications. Even worse, the machine/profile usually winds up in a state where the per-user installation cannot be removed. Even if you get it removed, the uninstaller and Citrix's own cleanup utilities do an awful job at cleaning up the registered classes/components from the per-user installation. Their tools only clean up a fraction of what is actually there. My last two work environments (including my current one) have been plagued with these installations. I spent time a while back figuring out how to clean it up manually, but it is a major headache to do so. I tried logging a case (and an enhancement request) with Citrix about two years ago stating their Receiver Clean-up Utility does not properly clean up these installations. They later came back saying they are no longer supporting the utility. It seems that since then they've updated the utility to clean installs up to version 4.3.

Let's take a look at what Citrix is missing in their utility… To do this I take a clean profile, install Citrix Receiver 4.3.100 (not elevated/per-user install), and uninstall it using the Receiver Clean-up Utility (running as an administrator/elevated) while the regular user is still logged in and has their profile loaded. Here is a high-level list of what was left behind in the registry:

  • HKCU\Software\Classes\* – File Associations and COM object registrations
  • HKCU\Software\Classes\AppID\* – AppID registrations
  • HKCU\Software\Classes\Applications\* – More app registrations
  • HKCU\Software\Classes\CLSID\* – MANY COM class object GUIDs
  • HKCU\Software\Classes\WOW6432Node\CLSID\* – MANY COM class object GUIDs (32-bit)
  • HKCU\Software\Classes\Interface\* – MANY interface name to interface ID mappings
  • HKCU\Software\Classes\WOW6432Node\Interface\* – MANY interface name to interface ID mappings (32-bit)
  • HKCU\Software\Classes\MIME\Database\Content Type\* – x-ica MIME types
  • HKCU\Software\Classes\PROTOCOLS\Filter\* – Protocol filter handlers
  • HKCU\Software\Classes\Record\*
  • HKCU\Software\Classes\TypeLib\*
  • HKCU\SOFTWARE\MozillaPlugins\* – Firefox plugin registrations
  • HKCU\Software\Microsoft\Installer\Products – MSI installer product codes

In addition to this there are other issues:

  • When it is run against a machine it doesn't properly load other existing (unloaded) profiles. This causes it not to fully process other user profiles on the machine
  • It doesn't always kill processes correctly, leaving files/directories behind

I decided to build a wrapper script around the Citrix Receiver Clean-up Utility to fill in the gaps. To do so I had to create a full list of everything I had to target. I decided to extract the MSIs from the installer I was testing with and dissect them. I used SuperOrca to pull the ‘Registry’ table from each MSI. I imported those into Excel and used filtering/VLOOKUPs to extract what I needed.

After chopping up the data I am left with the following groups of items to target:

  • A static list of unique registry keys under the user’s profile (this is a good start, but I noticed that some of the IDs in CLSID and Interface are different between versions)
  • A list of values to search for under 'HKCU\Software\Classes\CLSID' in order to determine if the root key should be deleted. Also need to target the WOW6432Node path
  • A list of values to search for under 'HKCU\Software\Classes\Interface' in order to determine if the root key should be deleted. Also need to target the WOW6432Node path
  • A list of values to search for under 'HKCU\Software\Microsoft\Installer\Products' in order to determine if the product key should be deleted. The clean-up utility cleaned up most of this, but one was left over for me

Now that I have the targets it’s time to write the script. In addition to targeting the various registry locations I want to:

  • Identify and load all profile registry hives (see the sketch after this list). This will allow me to run my clean-up process against all profiles, but it will also allow the clean-up utility to process them without the user having to be logged in
  • Kill processes that reside in certain paths using wildcards
  • Execute the Citrix Receiver Clean-up Utility silently
  • Clean up a static list of registry keys in all user profiles
  • Search for a list of value strings under the 'CLSID' keys and delete the parent key if a match is found
  • Search for a list of value strings under the 'Interface' keys and delete the parent key if a match is found
  • Search for a list of 'ProductName' value strings under the 'Installer\Products' keys and delete the product key if a match is found
  • Unload all registry hives that were manually loaded in the first step
  • I also wanted the script to work with older versions of PowerShell. I did my best to make it compatible with PS V2
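As an example of the first bullet, this is roughly how the profile hives can be found and mounted before the clean-up runs. This is a simplified sketch of the approach, not the published script itself:

# Enumerate local profiles and load any NTUSER.DAT hive that isn't already mounted under HKEY_USERS
$profileList = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList'
$loaded = @()

foreach ($profile in Get-ChildItem $profileList) {
    $sid  = $profile.PSChildName
    $path = (Get-ItemProperty $profile.PSPath).ProfileImagePath
    if ($path -and -not (Test-Path "Registry::HKEY_USERS\$sid") -and (Test-Path "$path\NTUSER.DAT")) {
        reg load "HKU\$sid" "$path\NTUSER.DAT" | Out-Null
        $loaded += $sid
    }
}

# ...run the Receiver Clean-up Utility and the per-profile registry clean-up here...

# Unload only the hives loaded above
[gc]::Collect()
foreach ($sid in $loaded) { reg unload "HKU\$sid" | Out-Null }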

I have tested the script with multiple versions and so far it is working well. It does require that you download the Receiver Clean-up Utility and place the executable in the same directory as the script. Feel free to submit any issues here or in the GitHub repository.

GitHub link: https://github.com/markdepalma/CleanPerUserReceiverInstall

Applying group policy preferences based on Citrix delivery group or machine catalog membership

We've slowly been transitioning our Citrix XenApp environment from static VMs to Machine Creation Services (MCS) based VMs. My goal was to have two master (fat) images and two machine catalogs. Because of policy and application segregation requirements those two catalogs translated into more than two delivery groups. With these delivery groups came the requirement to apply different group policies to different machines. One option would be to move the corresponding AD object into a different OU and apply policy that way. While that would work (the AD objects are not automatically moved/re-created after machine creation), it still requires some administrative overhead. It was clear that dynamically adjusting certain policies based on delivery group membership would be ideal.

After a little digging I found where both the delivery group and machine catalog memberships were written to in the registry by the VDA. Below is an example of how we applied a user GPP item dynamically based on the delivery group of the machine.

Registry Key Path: HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\VirtualDesktopAgent\State
Delivery Group Value Name: DesktopGroupName
Machine Catalog Value Name: DesktopCatalogName
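A quick way to confirm what a given VDA is reporting (handy when building or testing the GPP item-level targeting) is to read those values back with PowerShell:

# Show which delivery group and machine catalog this machine reports
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Citrix\VirtualDesktopAgent\State' |
    Select-Object DesktopGroupName, DesktopCatalogName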

Deploying a RODC for Palo Alto PAN-OS Credential Phishing Prevention

I was recently tasked with setting up the AD side of PAN-OS Credential Phishing Prevention. For some technical reason that I haven't been able to find, it requires a read-only domain controller (I attempted putting the credential agent on a regular DC just to see if it would work and it seemed to read credentials without issue. If anyone has information about the RODC requirement I'd love to hear it.) We don't currently have or use any read-only domain controllers, so I had to deploy one for each domain we needed to protect. This brought a few questions to mind…

  • How would I decide/maintain what users have their passwords replicated to the RODC?
  • How do these passwords get replicated to the RODC? By design passwords are only replicated to an RODC after an initial authentication attempt when they are configured for password replication.
  • Since the sole reason this domain controller is being deployed is for PAN-OS I don’t want it to handle logons and I want to make it very lightweight. How do I prevent user logons/authentication from occurring on this DC?
  • How are usernames identified? Will it handle all formats (samAccountName, explicit UPN, implicit UPN, and email address)?

How would I decide/maintain what users have their passwords replicated to the RODC?

This one is pretty easy for me. I don’t see any reason to exclude any accounts from credential detection, so I will use ‘Domain Users’. I usually stay away from using default groups, but this is one of the few cases where it makes sense to do so.

How do these passwords get replicated to the RODC?

After full configuration I turned the credential agent's logging level up to verbose (HKEY_LOCAL_MACHINE\SOFTWARE\Palo Alto Networks\User-ID Credential Agent\Log | DebugLevel=5) and saw that the agent enumerates all the objects within the 'msDS-Reveal-OnDemandGroup' attribute of the RODC computer object (plus any DNs manually specified in the user-id agent, seen in the screenshot below) and executes 'repadmin' against each object to force replication. As password changes are detected it re-replicates passwords using the same method.
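For reference, the same kind of forced replication can be done by hand with repadmin's RODC password replication switch (a sketch; the server names and DN are placeholders):

# Pre-populate one account's password onto the RODC from a writable DC
repadmin /rodcpwdrepl RODC01 DC01 "CN=Jane Doe,OU=Users,DC=example,DC=com"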

How do I prevent user logons/authentication?

Clients discover domain controllers using DC Locator. I decided to prevent the domain controller from registering all of the DC Locator DNS records except the two necessary for replication (LdapIpAddress + DsaCname). To do this I set a local policy under 'Computer Settings → Administrative Templates → System → NetLogon → DC Locator DNS records' called 'DC Locator DNS Records not registered by the DCs'. The value I set for this policy was:

Ldap LdapAtSite Pdc Gc GcAtSite GcIpAddress DcByGuid Kdc KdcAtSite Dc DcAtSite Rfc1510Kdc Rfc1510KdcAtSite GenericGc GenericGcAtSite Rfc1510UdpKdc Rfc1510Kpwd Rfc1510UdpKpwd
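After Netlogon re-registers its records, a quick sanity check is to make sure the RODC no longer shows up in the generic locator SRV records while the domain A record (LdapIpAddress) still resolves. A sketch, with example.com standing in for the real domain:

# The RODC should be absent from the generic DC SRV records after the change
Resolve-DnsName -Type SRV '_ldap._tcp.dc._msdcs.example.com' | Select-Object NameTarget, Port

# The domain A record (LdapIpAddress) should still include the RODC's IP
Resolve-DnsName -Type A 'example.com'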

How are usernames identified?

After experimentation it is clear that when using the domain credential filter method PAN-OS is getting the user from the IP<->user relationship and only looks for that user’s password in web site submissions. No matter what username I put in a form the submission triggered a detection as long as the password matched my password. Another user’s credentials under my session did not trigger a detection. I was happy with this because I do not have to worry about certain username formats not being detected.

After all of these questions/concerns were addressed came the actual implementation. You are required to install both the 'User-ID Agent' and the 'User-ID Credential Agent' on the RODC. According to the documentation this instance of the user-id agent should not be used for IP<->user relationship gathering and should only be pulling credentials. The credential agent creates the 'bloom filter' and sends it over to the user-id agent. PAN-OS connects to the user-id agent and receives the newest version of the bloom filter. One issue I ran into was around permissioning and service accounts. Normally you would assign a domain account with limited permissions to the user-id agent, but the thing to consider here is that the credential agent and the user-id agent communicate using named pipes. According to the documentation on named pipes, if no ACL is specified when creating a named pipe the default ACL is:

  • LocalSystem – Full Control
  • Administrators – Full Control
  • Creator Owner – Full Control
  • Everyone – Read
  • Anonymous – Read

The issue here is that the credential agent only runs under LocalSystem, and assigning a non-administrator account to the user-id agent service prevents the user-id agent from communicating with the credential agent's named pipe. Leaving the user-id agent service running under LocalSystem worked, but created another problem. When running under LocalSystem it was for some reason unable to enumerate the 'msDS-Reveal-OnDemandGroup' attribute (seen in the UaDebug.log file) for the RODC, meaning it couldn't determine what user accounts were allowed to sync to this RODC. I found that if I manually specified a group DN in the user-id agent it would work under LocalSystem. The only other option would be switching to a 'DOMAIN\Administrators' service account (since this is a domain controller), which I did not want to do. Since I'm only using 'Domain Users' this was easy enough to configure.

UPDATE: There seems to be a discrepancy between how the User-ID agent worked previously, the current documentation, and how it works now. In the past the User-ID agent configuration utility would adjust the ‘Log on as’ value for the ‘User-ID Agent’ service to the account you specified in the agent setup ‘Authentication’ tab. It seems now the service continues to run as LocalSystem, but uses the account specified in the configuration to actually probe the DCs and AD. I was able to leave it running as LocalSystem, specify an account with the proper rights in the ‘Authentication’ tab, and leave the group DN blank under the ‘Credentials’ tab in the user-id agent configuration utility. I verified the agent was using the account via logon events in the security event log on the RODC.

After configuring this you can monitor both log files to verify proper operation and then later verify PAN-OS is properly receiving the bloom filters. Be sure to restart the user-id agent after making any changes.

Credential agent log (UaCredDebug.log) sending bloom filter:

02/08/19 12:43:46:593 [ Info  667]: Sent BF to UaService. 21edc031f4891d2c42c133acded980ba

User-ID Agent log (UaDebug.log) receiving bloom filter from credential agent:

02/08/19 12:43:46:593[ Info 2896]: Received BF Push. Different from current one.
02/08/19 12:43:46:593[ Info 2897]: 0829f71740aab1ab98b33eae21dee122->21edc031f4891d2c42c133acded980ba

Azure AD Connect mail-enabled public folder synchronization issues – The cause of the error is not clear

We recently went through some Exchange Online Protection (EOP) cleanup and part of that involved turning on Directory Based Edge Blocking. We had already gone through the exercise of syncing all objects (especially ones that are part of Exchange); the only ones that weren't being synced were mail-enabled public folders. After turning on Directory Based Edge Blocking we realized there were a few public folders that needed to receive mail from the Internet. After syncing mail-enabled public folders (this is a newer feature in AD Connect) we received synchronization errors for four objects. The only thing these objects had in common was that they referenced a mail-enabled public folder, either by having that object as a group member or by having it as a forwarding object on a mailbox.

The errors we were receiving were:

  • The cause of the error is not clear. This operation will be retried during the next synchronization. If the issue persists, contact Technical Support.
  • IdentityDataValidationFailed

The workaround is to create a mail contact object that has the same targetAddress as the mail-enabled public folder object and use that object in place of the public folder object in something like a group membership. The issue with this is that by design a mail contact's targetAddress is also part of its proxyAddresses attribute, and the mail-enabled public folder object of course already has the email address as part of its proxyAddresses attribute. This duplicate is not allowed. The way around this is to modify the mail contact object so that the targetAddress is not part of proxyAddresses. To create this special mail contact you do the following (a scripted sketch follows the list):

  • Create a mail contact in Exchange with a fake external address
  • Disable e-mail address policy for the object
  • Use ADSIEdit to:
    • Change the targetAddress to the email address of the mail-enabled public folder
    • Remove the fake external address you specified earlier from proxyAddresses

After the object has been created you can now use it in lieu of the mail-enabled public folder in group memberships and other attributes.

Deploying Home Assistant Hass.io to ESXi 6.5

I recently did a complete overhaul of my home ESXi lab environment. With the new capacity/reliability came the desire to move as much onto it as possible. One of these items was my Home Assistant Hass.io instance, which was running on a Raspberry Pi 3 (and originally a Raspberry Pi 1 B). Running it on the Pi has always come with painfully slow reboot and update times. With VM real estate available I see no reason to rely on mini computers to run various workloads around my home, and I can re-purpose these boards for other projects. ESXi will also bring the ability to do machine-level snapshots, which will be more complete and easier to revert to than the snapshot mechanism within Hass.io.

The main issue I ran into was with the VMDK. The way the VMDK was created, it was split into multiple files and I couldn't consolidate or delete snapshots. The VMDK was also getting locked, preventing vMotions. To get around this I cloned the VMDK in the ESXi shell using vmkfstools. I also had to use the proper storage controller, network adapter, and firmware settings. I've listed all steps below:

  • Create a new VM with the following parameters:
    • Guest OS – CentOS 7 (64-bit) (The VM will adjust this automatically later)
    • 1 vCPU
    • 2GB RAM
    • 1x E1000 NIC (NIC will not be usable as VMXNET 3)
    • Remove any other devices like CD-ROM, hard disk, SCSI controller, etc.
  • Download the latest stable HassOS VMDK from https://github.com/home-assistant/hassos/releases
  • Copy this VMDK up to the VM directory in ESXi/vSphere
  • Open an SSH session to the ESXi host and change directory to the location of the VMDK you just copied (ex. cd /vmfs/volumes/datastore1/HASSOSVM)
  • Clone the VMDK using the following command: vmkfstools -i “hassos_ova-2.8.vmdk” “hassos_ova-2.8_new.vmdk” (This creates a thick copy of the disk and avoids locking/snapshot issues with the virtual disk)
  • Delete the original VMDK from the datastore as it is no longer needed
  • Edit the VM and add an existing hard drive selecting the VMDK you just cloned
  • Change the controller for the disk from “New SCSI controller” to “IDE 0”
  • Remove the newly added SCSI controller as it is not needed for the IDE virtual disk
  • Go to “VM Options” and change the Firmware from “BIOS” to “EFI”

After all this has been completed you just need to power on the VM. Assuming DHCP is configured properly on the network Hass.io is using, it will pick up an IP and start configuring. With Hass.io running on an older Xeon-powered host I have never seen the VM get over 50-60 percent CPU utilization, and even then I've only seen those spikes when running an update. Updates of HassOS and Hass.io take a minute or two, where they would sometimes take 10-15 minutes when running on a Pi.