One issue we ran into during our Intune/Outlook pilot for Android/iOS devices was the inability to click the RSA SecurID token links used to import tokens. We will eventually be moving away from RSA, but in the meantime this was a challenge. I was able to come up with a workaround that allowed an import from Intune/Outlook into RSA SecurID while using MAM policies on an iOS device.
In the MAM policy (Application Protection policy) that targets Outlook/Edge, create a ‘Data Transfer‘ exemption for ‘com.rsa.securid‘
Email the RSA SecurID token to the user using the format: com.rsa.securid://ctf?ctfData=xxxxxxxxxxxxxxxxxxxxxx
Copy this link (be sure not to copy any spaces or extra characters) into Edge and hit ‘go‘
After hitting ‘go‘ Edge should prompt you to open up the token in RSA SecurID.
The other day one of our NetScaler appliances was unable to boot up after a power down. It was getting stuck during the FreeBSD bootup phase (before the NetScaler software actually loads) with the error:
Fatal trap 9: general protection fault while in kernel mode
The only information I could find on this specific issue was here: https://support.citrix.com/article/CTX238252, but it was not relevant to us. I could not find anything else online about receiving this error on a NetScaler appliance, and restoring previous snapshots of the appliance didn’t resolve the issue. After some digging I found that this VM was set to the highest VM compatibility level. At some point someone had set the compatibility level of the VM to be upgraded to version 15, but this didn’t take effect until the VM was actually powered down (it had been rebooted many times since without issue).
To remediate this issue I did the following:
Removed the VM from inventory
Manually edited the ‘virtualHW.version‘ line in the vmx file to read virtualHW.version = “4” (see the sketch after this list). I chose a lower version so that I could use the GUI to upgrade the version later. The file can be downloaded/edited using WinSCP or something similar
Added VM back to inventory
Upgraded VM compatibility to version 7 in vCenter to let the system actually run through the VMX and check settings
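For reference, the edit itself is a one-line change in the .vmx file. A minimal sketch, assuming the compatibility upgrade had set the hardware version to 15:

Before: virtualHW.version = "15"
After:  virtualHW.version = "4"

Downgrading to an old version here is just a stepping stone; the subsequent GUI upgrade forces vCenter to run back through the VMX and validate the settings.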
After doing all of the above I was able to successfully boot up the NetScaler appliance. The main takeaway here is that the ‘fatal trap’ error was directly related to the VM compatibility setting in ESXi in this particular case.
My company recently started using new Biamp TesiraFORTE devices for its newer conference rooms. I have little experience with VoIP (besides my fun with Google Voice and GVSIP) or with these types of devices, but I was asked to assist in diagnosing a strange issue where audio going from the Biamp device to the Avaya gateway would randomly cut out for 1-3 minutes. Audio going from the gateway to the device would continue to work during this drop. I started by looking at a Wireshark capture from a span port of the Biamp device. This trace initially looked fine to me. I was able to view the RTP traffic and use the RTP player (Telephony -> VoIP Calls), and during the reported drop window there was no loss of audio in this capture.
The next step was to get a trace of the other side involved. To do this we created a span of the switch interface that the gateway was sitting on and ran a packet capture of that, using a capture filter to reduce the size of the capture since many other devices (mainly phones) were communicating with this gateway. We just used ‘host <IP address of Biamp device>’ as the capture filter. I colorized the Biamp -> gateway traffic in this capture to make it easier to read. I also had to decode the traffic as RTP since the gateway capture didn’t contain the initial SIP handshake.
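As an aside on the capture filter: if you are capturing from a box with dumpcap or tcpdump rather than exporting from the switch, the same filter applies. A minimal sketch, assuming the Biamp device is at 10.1.1.50 (a placeholder address) and the span is presented on eth0:

dumpcap -i eth0 -f "host 10.1.1.50" -w gateway-span.pcapng

Because a capture filter is applied before packets are written to disk, this keeps the file small even on a busy gateway port.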
After decoding the raw UDP traffic as RTP and colorizing the Biamp -> gateway packets we are left with a nice back-and-forth to look at.
Because we didn’t capture the entire SIP handshake (the SIP gateway actually being dialed is another gateway in a different datacenter, which hands the call off to a local gateway in the same building as the Biamp device) we need to reconstruct the RTP streams to be able to graph them and play them back.
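The decode can be done in the GUI (right-click a UDP packet -> Decode As… -> RTP) or scripted with tshark. A sketch, assuming the stream runs over UDP port 16384 (a placeholder; use the port pair from your own capture):

tshark -r gateway-span.pcapng -d udp.port==16384,rtp -Y rtp -T fields -e frame.time -e rtp.seq

The -d option forces the raw UDP traffic to be dissected as RTP even though the SIP handshake that would normally identify the stream isn’t in the capture.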
Now that we have a graph and audio to work with we can home in on the time of the audio loss, which was about 12:26 PM. When we do this we can clearly see a loss of sound and traffic at that time.
With this we can confirm that traffic isn’t actually reaching the gateway, so the gateway itself is not the problem. This means the traffic is being lost somewhere between the first and last switch in the path. The next step was to create a span of the trunk leaving the first switch (the switch the Biamp device was sitting on). We saw the same loss of outbound traffic there as well. When the Biamp devices were installed they had been hard-set to 100Mb/full duplex because they supposedly weren’t negotiating correctly and were falling back to 10Mb/half duplex. I decided we should address that first, because hard-setting speed/duplex like this can hide interface drops. While working through it we realized the negotiation issue was actually a cosmetic issue in the configuration GUI, and we upgraded the firmware on the device to resolve it.
At this point I was a little baffled and had to retrace my steps since this wasn’t making any sense. I took the last received RTP packet before an instance of audio loss in the gateway-side capture and noted its RTP sequence number. I used this sequence number to locate the corresponding outgoing packet in the Biamp device capture. I then marked this packet and the very next outgoing packet and started comparing them. This is when it got interesting: the destination MAC address changed between the two packets and remained that way for the duration of the audio loss. What made it even more interesting is that the only thing that changed in the MAC address was the second byte, which went from EC to 00. This MAC address was the address of the VLAN SVI (gateway). At this point I added MAC address columns to my Wireshark view.
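If you would rather script this comparison than click through packets, the same fields can be dumped per packet. A sketch along the lines of the earlier tshark command (the port is again a placeholder):

tshark -r biamp-span.pcapng -d udp.port==16384,rtp -Y rtp -T fields -e rtp.seq -e eth.src -e eth.dst

Scanning the output for the point where eth.dst flips lands you on the same packet pair without adding GUI columns.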
To figure out what could have caused this, let’s keep this first marked packet highlighted and remove our filter (I was filtering down to SIP + RTP traffic in most of these screenshots) to see all traffic on the Biamp device span. The ONLY traffic visible to the Biamp device between the last ‘good’ RTP packet and the first ‘bad’ RTP packet is a series of ARP request broadcasts from the switch. These were normal ARP requests looking for who had a series of IP addresses. It seems that when the Biamp device sees a number of these ARP request broadcasts it relearns the switch’s MAC address incorrectly, or the broadcasts trigger a software bug that effectively poisons the Biamp device’s ARP cache by flipping that second byte from EC to 00.
Even though this traffic is normal and shouldn’t cause any issues, we traced the ARP requests back to a network scan happening on this subnet at the time. None of the scanned IP addresses were valid, and the scan is what triggered the switch to start searching for them via ARP requests. Below is one of the ARP request packets. Nothing in this packet references the ‘bad’ switch MAC address with 00 in the second byte instead of EC.
The next thing to look at is what actually causes the audio to be restored after a few minutes. Below is the same trace later on, when the audio loss ends. The first marked (black) packet is the last ‘bad’ RTP packet and the last marked (black) packet is the first ‘good’ RTP packet when audio is restored. With no filter applied we can see that the ONLY thing happening between these two packets is ANOTHER ARP request broadcast. This ARP request seems to cause the Biamp device to relearn the switch’s MAC address properly, which restores RTP traffic delivery. The first ‘good’ RTP packet is the one with sequence number 53054; we can use the capture on the gateway side to find this packet.
With this new knowledge we were able to reproduce the issue on demand by scanning 20-30 non-existent IPs on the subnet, which triggered the ARP requests. We could then restore audio by running the same scan again. Sometimes it took a few extra scans to trigger the behavior.
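The reproduction itself requires nothing special. A sketch using nmap, assuming 10.1.1.200-230 are unused addresses on the subnet (placeholders):

nmap -sn 10.1.1.200-230

A ping scan (-sn) of non-existent hosts is enough to make the switch send ARP request broadcasts for each address, which is all the trigger appears to require.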
We handed all of this information off to Biamp and they are still investigating the issue. When they come back with more information and/or a resolution I will update this post. An important lesson here is to pay attention to the details. I technically had all the data I needed to identify the issue in the original capture on the first day I was involved, but I wasn’t forced to look more closely until I had ruled out many other things.
As part of piloting O365 I was tasked with implementing hybrid modern authentication in our Exchange org in order to leverage functionality like the Outlook mobile application and MFA within the Windows version of Outlook for on-prem mailboxes. One caveat of enabling hybrid modern authentication in Exchange is that once it is flipped on, any compatible client (ex. Outlook 2016) will begin using modern authentication (ADAL) exclusively by default. This switch can potentially be disruptive and we did not want to run into issues with the general user base, so we needed to disable modern authentication in Outlook on the client side while being able to selectively enable it for certain users. This is easily handled with an ‘EnableADAL‘ registry setting via GPO/Group Policy Preferences (GPP)/AD group (see the sketch below). The issue is that when you use an AD group with a group policy, any member addition/removal needs to be coupled with a logoff/logon (or a reboot if it involves a computer object in an AD group) to generate a new Kerberos token. I wanted to be able to quickly enable/disable ADAL for a user without requiring them to log off and back on.
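For Outlook 2016 the value lives under the Office 16.0 identity key. A sketch of the setting pushed by the GPP registry item (0 disables ADAL, 1 enables it):

reg add "HKCU\Software\Microsoft\Office\16.0\Common\Identity" /v EnableADAL /t REG_DWORD /d 0 /f

The GPP item itself is just this same path/value, with item-level targeting (described below) controlling who receives it.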
To get around this requirement I used GPP item-level targeting with an LDAP query that checks group membership rather than the standard group-membership check. The LDAP query is evaluated dynamically and isn’t tied to the group list in the user’s Kerberos token.
To set this up, do the following:
Create your GPP setting
Enable ‘Item-level targeting‘ on the setting
Create a new ‘LDAP Query‘ item
Create your filter using the distinguished name of your AD group and the ‘%LogonUser%‘ variable
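The resulting filter looks something like this (the group DN is a placeholder for your own pilot group):

(&(objectCategory=user)(sAMAccountName=%LogonUser%)(memberOf=CN=ADAL-Pilot,OU=Groups,DC=corp,DC=example,DC=com))

Because the query is evaluated each time policy processes, adding or removing a user from the group takes effect on the next policy refresh without a new Kerberos token.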
This method could also be used for traditional GPO settings, but you’d have to use GPP to directly target the GPO registry value(s) (ex. HKCU\Software\Policies\Microsoft\Windows\Control Panel\Desktop – ScreenSaveActive=0/1). It could also be used for computer-based settings, but the LDAP query would have to be adjusted to target a ‘computer‘ objectCategory and the name of the computer (%ComputerName%). I wouldn’t use this method for everything, but it can be very helpful for those one-off situations where you want a setting to take effect immediately without requiring a logoff/logon or reboot.
A few years ago I became interested in electronics projects. I initially started tinkering with an original Raspberry Pi that I purchased when they were first introduced and later moved to microcontrollers like the Arduino and Particle Photon (previously known as the Core). My first projects were a set of Raspberry Pi thermostats and a Raspberry Pi deadbolt lock, both of which could be remotely controlled.
After creating this blog I wanted a fun way to monitor visitors while at work. I wound up throwing together a Particle Photon I had lying around with a 7-segment display and some PHP to create this desk counter. It uses a PHP page that returns the visitor count from a MySQL query. The Photon retrieves the page and displays the count on the LED display. Anything with blinky lights makes for a fun desk toy!
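The server side is only a few lines of PHP. A minimal sketch, with hypothetical database/table names (the real page just needs to return the count as plain text):

<?php
// counter.php: return the total visitor count as plain text
$db = new mysqli('localhost', 'counter', 'secret', 'blog');
$result = $db->query('SELECT COUNT(*) FROM visits');
echo $result->fetch_row()[0];

The Photon performs an HTTP GET against this page on an interval and writes the returned number to the 7-segment display.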
I recently started piloting patch management and needed the ability to exclude a number of device groups from a scope. We were already using Endpoint Manager for 3rd party patching via a simple task/scope targeting all workstations. The problem was that the task was overwriting the reboot settings of the agent configuration for devices in the patch pilot. I needed to create a new scope for all workstations that excluded all devices in the pilot (which were spread across multiple device groups in Endpoint Manager).
To accomplish this I did the following:
Created a dummy scheduled task and used my pilot device groups as targets in the task (you can also use a query, a scope, or individual devices here)
Created a query that excluded machines that were part of this task
…and created a scope using that query
This new scope is what I used as the target for my legacy 3rd party task. This is an easy way to exclude a scope/device group/query from another scope/query, and it can be very handy for more complex targeting.
We recently made changes to our on-prem Exchange org. Not long after we realized that any email flowing through Exchange Online to on-prem was not getting processed by our journaling configuration (per-database journaling). After digging and opening a case with Microsoft we found that Exchange Online was injecting this header:
This header tells Exchange that journaling has already been processed, so on-prem Exchange will not process any journaling for that message. O365 apparently started injecting this header in the summer of 2018. The reason we did not run into the issue earlier is that until we were in hybrid mode (and had run the hybrid wizard) the Exchange header firewall was stripping this header as it arrived on-prem. They did release an article on this exact issue back in July 2016, but we didn’t come across it until Microsoft found the issue. The current fix is to duplicate all journal rules/settings in Exchange Online. According to Microsoft they are planning to add a warning for this condition to the hybrid wizard.
I recently started working on an O365 pilot/implementation and had issues getting into the Teams Admin Console. Even after making sure a license was applied to my admin account I was still receiving this error:
Sorry, we can't sign you in.
The domain you are trying to sign in to doesn't have any users that have a Microsoft Teams or Skype for Business Online license assigned to them. Learn more
In part two I covered how I was able to make use of the cameras by first attempting to crack a known root password hash (which wound up not working on this firmware) and then re-flashing the camera with alternative firmware. Even though I accomplished my primary goal of making the Verizon-branded camera usable, I still felt I could take this further by actually cracking the default admin credentials and avoiding the need to re-flash these cameras. I started by taking a closer look at the camera and its components. I had already identified and utilized the UART points, but hadn’t yet looked very closely at the other items. One of the easiest things you can do is take the FCC ID and look it up in the FCC database. I did this and although it didn’t provide anything especially helpful it was an interesting read. After this I decided to single out each component and research it. The picture below shows the front side of the main board. I’m particularly interested in the two chips in the top right corner of the board, but we’ll get there.
I started with the small 8-pin chip in the middle of the board. I had originally hoped this was a standard EEPROM chip, as they are fairly easy to read, but taking a closer look that didn’t seem to be the case. The text read “ADE SLAVH“. I was unable to find anything on this, so it was clearly not a standard EEPROM.
I then moved to another 8-pin chip on the back of the board (next to the wireless interface module). After doing a little research on this chip it seemed to be a Renesas ISL 1208 RTC (real-time clock).
The next chip was one to the right on the front side of the board. It was a 16-pin chip, but I had trouble finding anything about this one. It was clear this wasn’t one that would be useful at this point, so I moved on.
The next chip I focused on was one of the two larger ones in the upper right of the front side of the board. This chip was labeled ESMT M12L128168A. It also had a matching partner on the direct opposite side of the board. When looking this up it wound up being a 16MB RAM chip. These two 16MB chips align with the 32MB of RAM in the boot output I observed via UART back in part one.
The last chip I moved to was the one reading MX29LV320EBTI-70G, sitting right next to the RAM chip above. A quick search revealed that this is a Macronix 4MB 48-pin NOR parallel flash chip. This also aligned with the boot output from part one, where the bootloader referenced a 4MB flash. This was the component to focus on.
Extracting a surface-mount device component
Now that I knew which chip was the flash chip it was time to attempt extracting it. I had never attempted the extraction of such a small component before, so I did some research into methods. I found that there are special low-melting-point alloys meant for just this type of task. I chose the SRA Fast Chip Kit SMD removal alloy. It was relatively cheap and available on eBay. Another popular choice for this task is Chip Quik, which looks to be identical but slightly more expensive.
To use these products you cover all the pins on the chip with the included flux (or a regular rosin flux). You then melt a small amount of the alloy over all of the pins. Once everything is covered you keep the alloy molten by running the iron over it while carefully extracting the chip with tweezers.
Once the chip has been pulled you can clean up the board with the iron (if you are planning to re-solder a chip there). The chip itself was a little more tedious to clean up. I carefully ran the iron’s flat tip over the pins while slowly dragging the chip across cardboard, and the alloy came off in small streaks with each pass. The pins need to be free of any bridging for the chip to be read properly.
Reading a NOR parallel flash chip
Reading a parallel flash chip can be a bit difficult: there are a lot of pins to communicate with and it all has to be done in a specific order. Communicating with something like a standard 8-pin EEPROM is much easier. I found someone using an Arduino to read a TSOP-48 NOR flash chip like this one, but I decided to take an easier route since I had already put a good amount of time into this project. I looked into different flash readers/writers and came across the FlashcatUSB xPort with the TSOP-48 adapter. It was relatively cheap and supported this chip along with many, many others, so I’m sure I’ll be able to make use of it in the future. Setup was easy and I was able to successfully pull an image from the chip.
Now that I had a binary image of the chip I could start digging for what I was after… the default username/password for the admin console. In part one we learned the partition layout from the bootup output:
The “CONFIG” partition is usually what holds the persistent device configuration in plain text on these types of devices. I loaded the image into hexedit and started working my way down from 0x00010000… and there it was! Some developer at Verizon has a sense of humor. I finally had what I really wanted from the beginning: the default admin console username/password. I tested it immediately and it worked. I also verified that once in the console I could change the username/password along with any other settings, and they persisted across a reboot. The admin console was almost identical to the default Sercomm web console, with all the same configuration options present.
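If you’d rather not scroll through a hex editor, strings with offsets gets you to the same place. A sketch, with flash.bin as the dumped image:

strings -t x flash.bin | less

The -t x flag prefixes each string with its hex offset, so anything at or past 0x00010000 falls within the CONFIG region on this layout.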
I also extracted the Squashfs filesystem for examination and verified that the root password was definitely not the manufacturer default I had cracked in part two; it is actually generated during the boot process. I may break this down in a part four post, but for now I accomplished exactly what I set out to do: I obtained the default admin credentials for a camera that Verizon was trying to keep you locked out of. The cameras are now fully functional as-is, no re-flashing required.
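Pulling the filesystem out of the dump is straightforward with the usual tools. A sketch (output file names will vary):

binwalk -e flash.bin
unsquashfs -d rootfs squashfs-root.img

binwalk -e carves the embedded filesystems out of the image, and unsquashfs unpacks the extracted Squashfs image into a directory. With the root filesystem unpacked, the boot scripts (e.g. under /etc/init.d) are the natural place to look for the password-generation logic.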
In the last post I went over obtaining terminal access to the camera via the UART connection points on the main board. Once I had terminal access I was still presented with a login prompt that I needed to get past. With no other way to get past it at the time, I started doing more research on this and the other related camera models I mentioned previously, hoping to find firmware that could lead me to a static root password. I was unable to find any export of the Verizon firmware, but I did find others. With some Google searches I stumbled across unpublished links for source-code builds on the Sitecom web site for both the WL-404 and LN-406 cameras. The two archives I used were under: https://www.sitecom.com/documents/. Let’s take a look at GPL _ LN-406_WL-404 _ fw1_0_11.zip. If we dig into the ZIP we see a lot of source code, but more importantly we see an /etc/passwd file. The contents of this file were:
The string we are concerned with is ‘9szj4G6pgOGeA‘. Taking a closer look, it is an old descrypt crypt(3) Unix password hash. The password itself has a maximum length of eight characters, combined with a two-character salt (‘9s‘). If you Google this hash you find that it is actually common among many of the camera models I mentioned in part 1, but I was unable to find any instance of it actually being cracked. Looking at the firmware, it seemed to be based on a core source-code distribution that originally came from Sercomm (the actual manufacturer of the camera). I figured there was a high probability of this being the actual root password on the camera I was trying to crack. At this point I decided that cracking the root password might be the easiest way to get the web console admin password and gain access to the camera.
Cracking a crypt(3) Unix Password
I had never attempted to crack this algorithm before, and knowing the password wasn’t overly simple (others had already attempted to crack it without success) I knew I’d need external resources to accomplish this. I started off by calculating the approximate number of password combinations I’d have to go through. I knew nothing about the password, so I’d at least have to assume it could contain all printable characters with a length between one and eight characters. The math for this is: 93^1 + 93^2 + 93^3 + 93^4 + 93^5 + 93^6 + 93^7 + 93^8 = 5,656,642,206,396,600 combinations.

I ran a quick John the Ripper benchmark on a decent Azure VM (no GPU) and the descrypt performance on that box was only around 16,385,000 c/s. At that rate it would take ~11 years to crack. I found the most powerful video card I had lying around, which was an NVIDIA GTX 750 Ti (forgive me, I am not a gamer). The Hashcat benchmark for this card cracking descrypt was around 28,989,000 c/s. This was significantly faster, but would still take ~6 years. I started researching video cards and found that one of the more popular cards, the NVIDIA GTX 1080 Ti, had a descrypt crack rate of around 1,227,000,000 c/s. At this rate it would take ~53 days. A colleague actually had this card at home, and I figured I’d make a deal with him to run the crack during idle time if I couldn’t find anything else. I then researched GPU-heavy cloud systems and the most powerful (GPU-wise) I found was the Amazon p3.16xlarge configuration, which uses EIGHT NVIDIA V100 Tensor Core GPUs. These GPUs are commonly used in the machine learning space and are a bit more powerful than the top gaming GPUs. An instance with this configuration has a descrypt crack rate of around 15,571,600,000 c/s and could crack the password in ~4 days, but would cost $2000 for the privilege. As a last resort I turned to Vast.ai; playing with different GPU/server rental combinations I found the password could be cracked for somewhere around $250, but I couldn’t get it any lower than that.

Doing further research I came across an interesting site called Crack.sh, founded by David Hulton (also a founder of ToorCon). He created a purpose-built system that uses a series of FPGAs and specializes in cracking DES. They rent it out for much less than any other cracking service comes close to, and even offer some services for free. Because this system was specifically created for cracking DES it can crack a descrypt password in ~3 days, which is even faster than the Amazon configuration I spec’d out. I decided to eat the cost in the name of the project and figured the cracked password would be useful to others as well. Once my crack job actually started (after sitting in the queue for a while) I had the password in around three days. The email read:
Crack.sh has found a password that works against your hash. The password match is included below both in ASCII and HEX:
Password (ASCII): "h@11oCAM"
Password (HEX): 684031316f43414d
This run took 313726 seconds. Thank you for using crack.sh, this concludes your job.
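For reference, the keyspace math above is easy to sanity-check. A quick sketch, using the benchmark rates quoted earlier:

<?php
// Time-to-crack estimates for a 93-character set, lengths 1-8 (descrypt)
$keyspace = 0;
for ($n = 1; $n <= 8; $n++) { $keyspace += 93 ** $n; } // 5,656,642,206,396,600
$rates = ['Azure CPU VM' => 16385000, 'GTX 750 Ti' => 28989000,
          'GTX 1080 Ti' => 1227000000, 'p3.16xlarge' => 15571600000];
foreach ($rates as $name => $rate) {
    printf("%s: %.1f days\n", $name, $keyspace / $rate / 86400);
}

The 313,726-second run works out to ~3.6 days, right in line with the ~3-day estimate for the FPGA system.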
The password was a bit comical. At the first instance of idle time I had, I remotely connected to my lab environment and attempted to get into the camera’s shell using the password. To my dismay, the password DID NOT WORK on this camera. I felt somewhat defeated at this point, but I had known going this route that there was some chance the password in the Verizon firmware was not the same as in the other cameras using the same platform. Cracking what was LIKELY the root password had been sort of an easy way out.
Flashing New Firmware
Since I had multiple versions of usable firmware (with a default web console password I actually knew) I decided to look into flashing alternate firmware. I forgot to mention this in part 1, but during discovery I found that holding the reset button while powering the camera on put the camera into a download mode. I was able to see this through the UART connection. When in this mode the output showed this:
The bootloader version 4.09
Flash Size = 4M(8K x 8,64K x 63)
MAC address 00:0e:8f:7a:34:83
MAC address 00:0e:8f:7a:34:83
MAC address is 00:0e:8f:7a:34:83
DAVICOM driver ready
got PID in flash
The reason I didn’t explore this further at the time was that I didn’t see how there was a way to upload firmware. When in download mode the camera did not have an IP address, and the UART connection didn’t seem to accept any input. What I didn’t realize at the time was that the camera could accept a firmware download over the network in this mode, just not via TCP/IP. Further research led me to an old Sercomm tool that was used to flash a number of Sercomm-based products. I had seen references to this tool earlier on, but knowing it was a network-based utility I didn’t see how it could be used for this device. The filename/download for this tool is: Upgrade_207_XP.zip. The tool required a 32-bit OS and I didn’t have any instances running, so I quickly spun up a Windows 98 VM. To my surprise the application immediately discovered the camera on the network.
I was curious what protocol this was using since it was not IP-based, so I used the span port I set up earlier (when performing the initial reconnaissance on the camera) to sniff this traffic.
After the firmware update I was FINALLY able to get into the camera’s admin console, making the camera completely usable. Though I had technically accomplished my original goal, I was not happy about spending time on a password crack that turned out to be a dead end, and I still did not know the default admin console password (or root password) that the Verizon firmware was using. Also, my buddies would need to open and manually flash every camera before attempting to sell them. True to form, I could not let it be, and I will cover this in part 3!