Adventures in OPNsense

Our house received an upgrade to full fibre optic last year, and with the increase in available home bandwidth came the opportunity to retire our old NetComm router/modem/WiFi unit, which had been, if we’re generous, “adequate”.

Or, in fact, inadequate in a number of ways – its WiFi range was abysmal, something I only discovered when I migrated to a dedicated WiFi unit and found that a single device can cover our entire property.

Also, it “supported” IPv6, in the sense that it could, technically, have an IPv6 address for a brief period of time, before just quietly giving up on routing IPv6 traffic until you either reboot or get exasperated and switch off IPv6 support.

So in the spirit of questionable decisions, and egged on by friends who are far more skilled and experienced in this sort of thing, I purchased a cheap all-in-one fanless PC with four 2.5Gb ethernet ports.

In this house, we enumerate from zero

For the curious, it runs at around 55 degrees in the Australian summer, 65 if I make the CPU think hard. Certainly an improvement over the poor old Mac mini I have running as an MPD server in our kitchen, which routinely idles at 80 degrees.

Because I can’t ever be content to do things the easy way, I put Proxmox on it first, with the intention of running OPNsense as a VM, alongside a PiHole instance.

My initial setup experience was tainted by an inability to get OPNsense to register a connection via the WAN port, which resulted in around 45 minutes of frustration before I realised that my ISP requires me to manually “kick” connections between router changes. As a result of this faffing about, I now at least have a document detailing exactly what each of the physical ethernet ports is called in Proxmox and what their MAC addresses are – probably something I should have sorted out before doing anything else anyway.

Once the router had a connection, everything ran great. For thirty minutes. After which, the router simply couldn’t talk to the gateway any more. No Internet connection.

OPNsense’s dashboard indicated that EVERYTHING IS FINE. Which, you know, not true.

This is a lie.

A reboot results in the same behaviour – everything is fine for around half an hour, then no WAN connectivity. Renewing the WAN connection fixes things… indefinitely. Until another reboot.

Thus began The Troubleshooting.

Various settings and configs were checked: IPv6 disabled (just in case), hardware VLAN filtering disabled, and so on.

Same behaviour.

I installed a completely fresh OPNsense VM – exact. same. behaviour.

And the whole time – once I renew the WAN connection, it fixes itself for as long as it remains up.
The command I ended up running in the shell:

configctl interface reconfigure wan

Magically fixed it… but only if it had already broken. That is, I couldn’t just run a renew at the end of the boot sequence and call it a day – I had to wait until the WAN connection failed before I could renew it – and that just didn’t fly (not that appending some magic words to a startup script would have made me happy).

I ended up digging through the DHCP client leases file in /var/db (in my case dhclient.leases.vtnet1) and noticing some strange overlaps in the renewal/rebind/expiry times.

These lease files look something like this:
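(Mine obviously had my ISP’s details in it – this is a sketch with made-up addresses and times, but the shape is right for FreeBSD’s ISC dhclient:)

    lease {
      interface "vtnet1";
      fixed-address 192.0.2.10;
      option subnet-mask 255.255.255.0;
      option routers 192.0.2.1;
      option dhcp-lease-time 3600;
      renew 4 2023/1/12 03:14:15;    # weekday yyyy/m/d hh:mm:ss
      rebind 4 2023/1/12 03:40:11;
      expire 4 2023/1/12 03:47:36;
    }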

Turns out, this is a log, rather than a config – FreeBSD is writing the dates and times there as a record, not an instruction. But this was the clue that helped figure it out – when the router first booted, the lease file was written to, but the times for renewal and expiry passed without the router even trying to renew the connection.

Once I reran the renewal command manually, it worked fine, but crucially, it was writing dates and times that were wildly different to the initial time on bootup.

Anyway, as with all things, it was DNS.

A haiku

Well, sort of.

When I set up the initial Proxmox install, I somehow (?) managed to set its DNS server to be 127.0.0.1. Surely that can’t have been the default?

Anyway, that meant that while it could do all the work of creating VMs just fine, it couldn’t, among other things, talk to NTP servers to figure out the time.

But it did know it was in the UTC+8 timezone. And it did assume that the hardware clock was in UTC time. Which it wasn’t. It was set to local time. So my VM host believed it was 8 hours ahead of when it actually was.
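(For the record, both culprits are easy to check from a shell on a Debian-based Proxmox host – assuming systemd’s timedatectl is available:)

    # is the host trying to resolve DNS via itself?
    # (mine, somehow, said nameserver 127.0.0.1)
    cat /etc/resolv.conf

    # does the host assume the hardware clock is UTC?
    # check the "RTC in local TZ" line, and if the clock really
    # is set to local time, tell systemd so:
    timedatectl
    timedatectl set-local-rtc 1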

When my router booted up, it was given this time and it then obtained a lease and made a note to renew that lease in 15 minutes. 30 minutes, tops.

But it then used an actual DNS server to do an NTP lookup for the correct time – at which point it said to itself, “hoo boy, I am 8 hours ahead of where I should be! Lemme just fix that right up.” The lease, though, had its renewal and expiry recorded against the old, wrong clock – so as far as the router was concerned, renewal was now more than 8 hours away. Meanwhile, the actual lease quietly expired – hence failing to renew and not having any WAN connectivity.

To add to the confusion with all this – FreeBSD, in all its wisdom, writes dates and times to the lease file in Coordinated Universal Time – not local time – without any indication that this is what it is doing. So when the router first obtained that lease and wrote the wrong time to the log, it looked to me like the correct time, but when I renewed the lease manually, it had corrected its own time and was now logging what appeared to me to be times 8 hours in the past.
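(The difference is plain at a shell, for anyone else squinting at a lease file:)

    date       # the local time you’re thinking in
    date -u    # UTC – what gets written to the lease file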

Obviously, I think everything should always just be in UTC and that there are no problems or issues that would or could be caused by adopting a world free of timezones, but, please, indicate that somewhere, hey?

Anyway, OPNsense seems fine and good. I’ve managed to do some port forwarding and set up queues to minimise bufferbloat, so all is right with the world.

Now if I can just wrangle IPv6 DNS…

SSH & HTTPS on the same port: Surprisingly easy

If you’re stuck behind a school or university firewall, you’ll often find that it’s unreasonably restrictive (as a user, anyway – honestly, most of the administrators probably think it’s a bit over the top too, given it really doesn’t stop much untoward behaviour for the inconvenience caused).

As long as all you want is web traffic to sites that haven’t been blacklisted and don’t have restricted keywords in the URL (sigh), you’ll be fine. But if, for example, you need SSH access to a *nix server offsite, you’re stuck using various web-based SSH console solutions.

As always, there are a variety of ways around it: some more complex than others. But a good place to start is the fact that most corporate firewalls are not only unreasonably restrictive – they’re also lazy.

Port 443 is used for secure web traffic, and the firewall can’t really do much to inspect the back-and-forth through that port (you know, by design), so in many cases, they just let traffic through without even bothering to check that it’s actually HTTPS.

I mean, really. If someone is trying to get access through port 22, they can probably figure out how to achieve the same end through 443 (this post, case in point).

Enter the demultiplexers – software tools that simply listen on 443 and direct SSH traffic to sshd and HTTPS traffic to httpd (the two kinds of traffic are trivially distinguishable – an SSH client announces itself with an “SSH-2.0-…” identification string, while a TLS client opens with a ClientHello).
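(A minimal sketch of the idea using sslh, one such demultiplexer – flag names are from recent sslh versions, and it assumes sshd is on its usual port 22 while your web server has been moved to listen on localhost port 4443 so sslh can own 443:)

    # listen on 443, probe the first bytes of each incoming
    # connection, and hand it to the appropriate daemon
    sslh -f --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:4443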

Continue Reading…

Stay Classy, Cert Companies

Let’s Encrypt has been a welcome addition to the security landscape – if only because it’s nice to do business with someone who actually gives a damn.

The trouble with HTTPS has always been more of a “business model” thing than a technical thing – anyone can set up strong encryption on their server and send/receive encrypted traffic to their users, but the initial connection also needs to confirm to the user that the site is who it says it is, and therein lies the rub.

The solution for the past two decades or so has been to have big corporations (called certificate authorities) that are trusted by browsers (the software, not the people) issue the certificates and keys needed for encryption. When a browser connects, it can check whether one of the certificate authorities vouches for your website. If one does, the browser knows to trust that it is, indeed, connecting to the correct site.

This is a crucial step, as otherwise another site, posing as, say, jonathan.ihle.in, might manage to trick a browser into connecting to it. The connection itself would be perfectly encrypted, but the encryption would be for nought – the user would be sending all their private data to the wrong party.
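(You can watch the vouching happen yourself – a quick sketch with openssl, reusing the example domain above:)

    # print the certificate chain the server presents; the
    # "Verify return code" at the end says whether your system
    # trusts the CA that signed it
    openssl s_client -connect jonathan.ihle.in:443 -servername jonathan.ihle.in </dev/null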

The problem with this arrangement is twofold – it forces site operators to decide whether or not a site is worth spending money to encrypt, and it puts the issuing of certificates and keys in the hands of for-profit organisations with varying standards for verifying site identity. The end result: many sites remain without encryption.

Let’s Encrypt was created to resolve this specific issue. Continue Reading…

The backdoor in the iPhone *is* Apple

So recently, the FBI obtained a court order to have an iPhone compromised for an investigation.

The issue is thus:

  1. The iPhone is locked with a 4 digit passcode and the FBI doesn’t know what that passcode is.
  2. The iPhone’s data is encrypted – so they can’t just yank out the flash memory and attempt to read the contents. The passcode is required, via the operating system on the iPhone, to decrypt that data.
  3. Because 4 digit codes aren’t really very secure (only 10,000 possible combinations; see the quick check after this list), iOS will gradually force longer and longer delays between failed attempts to unlock the phone. (Edit: As Kieran points out below, codes in recent versions of iOS allow up to 6 digits, or 1,000,000 combinations)
  4. As an added layer of security, a user can set their iPhone to wipe its data after 10 consecutive failed attempts.
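(The combination counts are just powers of ten – a quick shell sanity check:)

    # an n-digit numeric passcode allows 10^n combinations
    echo $((10**4))    # 10000
    echo $((10**6))    # 1000000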


The FBI wants the data on that phone. But the process of brute forcing an unlock might wipe that data out and even if not, will still take a long time with the lockout delays and manual passcode entry. So a US federal magistrate has ordered Apple to do whatever is necessary to work around these safeguards so the FBI can access the data quickly and safely.

Apple is refusing.

They’re refusing on the grounds that (among other things) this will create a “backdoor” that will compromise all iPhones ever.

This is not true. iPhones are already all compromised. The backdoor is Apple. Continue Reading…

ECU Assignment Stapler

A working copy of the stapler can be found here.

Why does this thing exist?

When you submit assignments, you should submit them in PDF, not Word format.

There are a few reasons why, but the main two are as follows:

  1. You can be more sure that your tutor or lecturer will see the same thing you submitted
    Word documents can display differently depending on the version or device used.
    This is far less of an issue with PDF.
  2. You can’t accidentally munge your keyboard and change the final document.
    Probably not normally a consideration, but after 9 hours staring at papers on the significance of First Name Consonant Frequency in Childhood Misbehaviour*, you can easily make silly mistakes and be completely unaware that you just moved a crucial paragraph and are now submitting the antithesis of your intended argument.
(*Not a paper, but totally should be)
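(If your word processor makes exporting awkward, here’s a sketch of one way to do the conversion from a shell – assuming LibreOffice is installed:)

    # convert a Word document to PDF without opening a GUI
    libreoffice --headless --convert-to pdf assignment.docx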

Continue Reading…

Milktape – brief excitement, abiding disappointment

Milktape - don't bother.

Pointlessness.

About 15 years ago (whoa!), I was researching which MP3 player to sink my limited funds into.

I did not want an iPod, a position that I continue to hold to this day – the veneer is nice, but the premium you pay for an inferior experience (particularly on the library management end) wasn’t worth it for me.

One of the options way out of my range was a little number called the “Rome MP3” – it managed to straddle both ends of the technological spectrum by simultaneously being a solid state MP3 player and a playable cassette. I’ll let that sink in.


Check that puppy out.

Continue Reading…