Official RPi Touch Display – GPIO damaged by improper wiring

Some of my students have access to hardware for their projects and experiments, including various Raspberry Pis, alternative operating systems, and accessories.

Unfortunately, given the way the Pi interacts with HATs and other similar devices via the GPIO pins, there is always the possibility that 5V will go where it shouldn’t and damage will be done.

In the case of the official Raspberry Pi Touchscreen Display, the display can be wired up either to receive power from or provide power to the Pi via jumper cables through the GPIO pins, or to power the Pi in the more “traditional” way via an included USB-A port.

When it comes to hobbyist hardware (and software!), there is a tendency to err on the side of giving the user as many options as possible.

When it comes to custom wiring, I think Murphy’s Law should take precedence over hobbyist convenience. In other words, don’t even give us the option to power it via a method that will release magic smoke if done wrong.

There is some value in allowing users to power the Pi using the pins – and indeed this appears to be encouraged, as the enclosure that ships with the display only provides access to the Pi’s USB port.

At any rate, multiple options are available and, inevitably, one of my students has configured one of the options that puts power where power should not go. As a result, neither the Pi nor the display is giving me any joy now when wired correctly.

It would seem that the Pi is beyond redemption – there is no display via HDMI and the SD card reader is unable to read cards.

The display is a happier story – the pins can no longer be used to push power to or from it, but it seems perfectly happy to power on and pass power through via the USB port.

So just a quick note to anyone in a similar position – try your “dead” display with another (known working) Pi, using the USB ports to provide power to both, and you might find the display has some life in it yet.

Command Line History Search in Ubuntu Desktop

Just a quick note: I usually use server-only Linux installs, but I’ve been trying out deb-based desktops lately.

Ubuntu Desktop doesn’t seem to honour the .inputrc file in the home directory – I usually use this to allow command line history searching.
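The bindings in question look something like this (the standard readline prefix-search functions, bound to the up and down arrow keys):

    # ~/.inputrc – search history for entries matching the text typed so far
    "\e[A": history-search-backward
    "\e[B": history-search-forward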

With my server installs, that’ll let you use the up and down arrows to go back and forth through your history as usual, but if you start to type a command, it’ll only cycle through the commands in your history that match what you’ve typed so far.

I find this behaviour to be really intuitive, to the point that it’s frustrating to use terminals without it.

It took some experimenting to find the solution, but in Ubuntu Desktop the right file to edit is .bashrc, and the lines are a little different – explicitly binding the functionality of the keys.
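Something along these lines, using bash’s built-in bind command to apply the same readline functions as above:

    # ~/.bashrc – explicitly bind the arrow keys to prefix history search
    bind '"\e[A": history-search-backward'
    bind '"\e[B": history-search-forward'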

Ah – such a relief to have this working again in all my terminals!

SSH & HTTPS on the same port: Surprisingly easy

If you’re stuck behind a school or university firewall, you’ll often find that it’s unreasonably restrictive (as a user, anyway – though most of the admins probably think it’s a bit over the top too, given it really doesn’t stop much untoward behaviour for the inconvenience caused).

As long as all you want is web traffic to sites that haven’t been blacklisted and don’t have restricted keywords in the URL (sigh), you’ll be fine. But if, for example, you need SSH access to a *nix server offsite, you’re stuck using various web-based SSH console solutions.

As always, there are a variety of ways around it, some more complex than others. But a good place to start is the fact that most corporate firewalls are not only unreasonably restrictive – they’re also lazy.

Port 443 is used for secure web traffic, and the firewall can’t really do much to inspect the back-and-forth through that port (you know, by design), so in many cases, they just let traffic through without even bothering to check that it’s actually HTTPS.

I mean, really. If someone is trying to get access through port 22, they can probably figure out how to achieve the same end through 443 (this post, case-in-point).

Enter the demultiplexers – software tools to simply listen on 443 and direct SSH traffic to sshd and HTTPS traffic to httpd (the two kinds of traffic are trivially and flawlessly distinguishable).
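One such tool is sslh. A minimal sketch, assuming sslh is installed and your real web server has been moved off 443 to an internal port – the addresses and ports here are placeholders, and older sslh versions use --ssl where newer ones use --tls:

    # Listen on 443; hand SSH connections to sshd, TLS connections to httpd
    sslh --user sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:8443

From behind the firewall, the client side is then just (hostname is a placeholder):

    ssh -p 443 user@yourserver.example.com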

XenServer 6.2 -> 6.5

In preparation for the upgrade from XS6.2 to 6.5 at my day job, I’m removing the XenTools drivers/software from our VMs (apparently old versions of the tools can cause booting issues for Windows VMs at the very least).

Something I hadn’t realised: removing the XenTools suite will cause Windows to lose its network drivers and revert to network defaults (which I guess should have been obvious? Not sure).

Network defaults mean DHCP for the IP address, so our VM server just got assigned a client IP and, for all intents and purposes, couldn’t be found by the client software on our actual client machines.

Not a big deal or a hard fix in the end (VM reboot -> manually assign IPs again), but a reminder to self that this whole upgrade process will be painful in both expected and unexpected ways.
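For what it’s worth, re-assigning a static IP on a Windows VM can also be done from an elevated command prompt rather than the GUI – a quick sketch, where the interface name and all addresses are placeholders for your own:

    rem Assign a static IP, netmask and gateway to the VM's NIC
    netsh interface ip set address "Ethernet" static 192.168.0.50 255.255.255.0 192.168.0.1
    rem Point DNS at the local DNS server
    netsh interface ip set dns "Ethernet" static 192.168.0.10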

Update:

The actual upgrade went well. Larger orgs usually have a physical server pool and upgrade using something called “rolling pool upgrades”.

We don’t have this – we’re poor. As a result, we have one main server with heaps of RAM and a second backup server with enough resources to run essential VMs.

In the event of disaster, we can recover a VM image to the backup and pick up where we left off until a replacement main server is arranged.

So no rolling upgrades for us – instead, I just prepped the VM images (as above) and then upgraded the backup server first to verify everything would run okay. (In case you’re wondering, that involves rebooting the physical server, running the installer from the ISO – on either a USB stick or a CD, of all things – and selecting “upgrade” rather than “install and destroy all my stuff forever”.)

Once we were sure the backup could run our images on 6.5, we then upgraded the main server.

The whole process was significantly faster than the preparation, and there were no issues. Very impressed.

6.5 doesn’t do a heck of a lot for us overtly (although I’m quietly upgrading our Ubuntu VMs to 14.04), but there’s a slew of improvements under the hood. I’m not sure whether it’s a coincidence or not, but our SVN/Dev/Local DB server is no longer having weird reboot issues.

Hooray for progress!