Using Synergy with your Raspberry Pi

If you have a lot of computers running at the same time and you don’t want to have to wrangle multiple keyboards and mice to control them all, you need Synergy.

Raspberry Pis are so small and cheap that, if you’re like me, you will probably end up with a few on your desk at the same time. Using Synergy (from Symless) you can enable your main keyboard and mouse to control all of them.

Synergy is easy to use – you simply mouse out of your main screen and over to the monitor of the other machine to control it. Whichever screen has the mouse focus also gets the keyboard input. It’s almost magical.

[Screenshot: Synergy's screen configuration page – define your screen arrangement using Synergy]

With Synergy, you have one computer acting as the server; this is the machine that has the keyboard and mouse we will use to control the other computers. Other computers connect to the server as Synergy clients.

Both the client and the server must be running compatible versions of Synergy. Unfortunately, current Raspbian builds only contain older versions of Synergy and these won’t work if your Synergy server is running a later version. If your server computer is a Mac or Windows machine this will probably be the case.  If you are in this situation, you will need to build a more recent version of Synergy on your Pis from source.
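
To check what version the Raspbian repositories would give you (the package is simply called synergy), and what any already-installed client reports, you can run:

apt-cache policy synergy
synergyc --version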

Note: I recommend you buy the Pro version of Synergy, not only because it adds SSL support but because the US$29 will help support the ongoing development of the product.

Doing the build

I’m building on a Raspberry Pi Zero W running a newly flashed SD card of Raspbian Stretch with Desktop.

To start, update the pre-installed packages then install the prerequisite packages for Synergy.

sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install cmake make g++ xorg-dev libqt4-dev libcurl4-openssl-dev libavahi-compat-libdnssd-dev libssl-dev libx11-dev

While that’s happening, download the source of Synergy from https://symless.com/synergy/downloads as a tar.gz file and copy it to your Pi.
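
If you downloaded the archive on another machine, scp is an easy way to get it onto the Pi (a sketch – substitute your Pi’s address and the filename of the release you downloaded):

scp synergy-v1.8.8-stable-25a8cb2-Source.tar.gz pi@raspberrypi.local:~

Then unpack the archive and change into the source directory: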

tar xzf synergy-v1.8.8-stable-25a8cb2-Source.tar.gz
cd synergy-v1.8.8-stable-25a8cb2-Source

Now that you’re in the source directory, run the configure step:

QT_SELECT=4 ./hm.sh conf -g1

Note: If you see an error “Error: Could not get revision, git error: 128” you can safely ignore it.

Then:

./hm.sh build

If you get an error “Error: make -w failed with error: 512” you can ignore that too.

Now you can build the GUI app:

cd src/gui
qmake
make

Go back to the source root and copy all the built binaries to /usr/local/bin

cd ../../
sudo cp bin/syn* /usr/local/bin

Then run the Synergy GUI:

/usr/local/bin/synergy &

In the GUI you can configure the server you will connect to, and enter your license key, if you have one.

Making sure Synergy is always running

You want Synergy to start on boot, so you never have to reconnect the keyboard or mouse. Follow these instructions to do that (adapted from this page):

sudo raspi-config
(Select "Boot Options", then "Desktop Autologin")

Create the Synergy client autostart file (do this as user pi, NOT root!):

mkdir -p ~/.config/autostart
nano ~/.config/autostart/synergy.desktop

Next, write a little bash script that starts synergyc with the necessary options, and save it as /home/pi/.startsynergy.sh (the path the autostart entry below will point at). This is based on one I found here – you should modify the command line to match your setup (eg remove the --enable-crypto flag if you won’t use SSL, and change the server address and client name to suit).

#!/bin/bash

# Kill any synergyc instance that is already running, then reconnect to the Synergy server
killall synergyc
sleep 1
/usr/local/bin/synergyc -f --no-tray --debug INFO --name rpizero --enable-crypto 192.168.0.127:24800
exit 0
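
Make the script executable, or the autostart entry below won’t be able to run it:

chmod +x /home/pi/.startsynergy.sh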

Now edit the synergy.desktop autostart file and tell it to run your script:

[Desktop Entry]
Name=Synergy Client
Exec=/home/pi/.startsynergy.sh &
Type=Application
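
Before rebooting, you can sanity-check the script by running it from a terminal inside the Pi’s desktop session and watching the synergyc log output for a successful connection to your server:

/home/pi/.startsynergy.sh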

Reboot your Pi and check it auto-connects, then enjoy the freedom of a desk free of multiple keyboards and mice!

Creating an Alexa device on a Raspberry Pi

I’ve been a fan of the Raspberry Pi for a few years now. This diminutive device, along with various other incredible tiny pieces of hardware, has helped drive a resurgent interest in physical computing and helped create the Internet of Things.

For those of you who don’t know what these things are, they’re credit card sized computers, complete with a CPU, GPU, RAM, Ethernet port, USB ports, WiFi, and Bluetooth. They have onboard sound and video support via HDMI.

Sounds expensive, right? Nope. Less than sixty Australian dollars will buy you one.

[Photo: a Raspberry Pi, approximately actual size]

Anyway, the other thing that I’m very interested in right now is how voice-based AI systems (eg Siri, Cortana) are being used to build the human-computer interfaces of the future. So, I was excited to see Amazon Web Services release their new Alexa Voice Service SDK to developers, and doubly so to see them provide support for running it on the Pi.

The AVS SDK is designed to allow device manufacturers to build their own Alexa device, but Raspberry Pi owners (or anyone with a computer running Linux or MacOS) can also get in on the act. For free. All it’ll cost you is the time setting it up, and in the case of a Raspberry Pi, the $55 for the Pi 3 plus another $30 or so for a few other bits and pieces.

That’s so cheap, you can start thinking about what it might be like to give a spoken-word interface to all the things you interact with.

Alexa everywhere

Sounds great, right? And it is, but technology is complicated and like someone once said “there ain’t no such thing as a free lunch”. Kudos to AWS for making such complete-looking documentation to get AVS on a Pi, but even though I followed it closely, there were still some places I stumbled.

I’m going to describe those here now so maybe you won’t have the same frustrations I did.

Prepare your device

First of all,  make sure you have your sound hardware sorted out before you begin. The Pi 3 has no audio inputs on board, so you have to buy a USB audio “card” (actually a USB dongle) to create an input for your microphone. There are units that are known to be compatible with the Pi – I suggest you buy one of those. I used a different type, but it worked, so I guess I just got lucky.
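
A quick way to confirm the Pi can see the dongle and its capture device (device names and card numbers will vary with your hardware) is:

lsusb
arecord -l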

Once you have your audio card set up, check that you can record and listen back to audio. You can use arecord and aplay to do that:

$ arecord -d 10 -f S16_LE -r 16000 test.wav
$ aplay test.wav

If you hear what you’ve recorded, you’re on your way to success. If you don’t, or you hear static, you may need to follow this guide to set up your Pi to use your hardware.
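
If the wrong devices are being used, a minimal ~/.asoundrc along these lines usually sorts it out – this is a sketch that assumes the on-board audio is card 0 and the USB dongle is card 1 (check with aplay -l and arecord -l):

pcm.!default {
  type asym
  playback.pcm {
    type plug
    slave.pcm "hw:0,0"
  }
  capture.pcm {
    type plug
    slave.pcm "hw:1,0"
  }
}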

Another tip before you begin – the guide suggests you might like to build your Alexa on the Lite version of Raspbian, which has no GUI, ie it does not include a point and click “windowed” desktop environment. If you choose to do this (it does make sense if this will be a dedicated Alexa device) you will need to be aware that various instructions in the guide are written as if you can open a web browser on the device, which you won’t be able to do. Instead, you will need to do the steps that involve a web browser from another machine on your network, ie treating the Pi like a web server. To do this you will probably need to set up an SSH tunnel that port forwards between the machines. For reference, the command to do that (from the machine with the web browser) is:

ssh -L 3000:192.168.0.81:3000 -p 22 -l pi -N 192.168.0.81

Where 192.168.0.81 is the IP of your Pi, and “pi” is the username you log on with. With the tunnel running, you browse to http://localhost:3000 on your local machine and the traffic is forwarded to port 3000 on the Pi.

Rubber, meet road

Ok, if you’ve gotten to this point, you’re ready to start going through the quick-start guide and installing the AVS SDK and sample app software on your Pi.

Go ahead, I’ll wait.

waiting...

This bit is going to take an hour or so, because the instructions do a lot of building of binaries from source instead of relying on the in-built package manager in Raspbian (apt). As it turned out, I ended up with reason to question the wisdom or necessity of at least some of that, but more on that later.

Once all the software has been compiled you’re at the exciting part where you fire up the AuthServer to get a token to bring Alexa to life. If you followed the guide like I did, though, you will be seeing errors about Flask right about now.

The problem is that they missed a step in the Raspberry Pi guide – you need to install the Python package manager (pip), and use it to install Flask because the AuthServer uses it. They did list these instructions in the generic Linux guide, so you can follow the steps there, or tl;dr:

sudo apt-get install python-pip
pip install flask requests

Ok, so now you can run the authorise process and you have your token. If you’re like me, you’ll be very keen to run the sample app and ask Alexa some probing questions.

If you’re like me though, you will also get to the end of the guide, run the app and find Alexa remains stubbornly silent.

Hmm.

Finding the debug switch

In my case, everything *looked* like it was working fine – the app woke up when I said “Alexa”, said it was listening, then thinking…but there was no audio response. Of course, after all that time sitting watching the compiler this is the last thing you want to see.

So, what was going wrong? I needed to run the app in debug mode so I could see what errors were being thrown. The Pi guide doesn’t mention how to do this, but the Linux guide does. Here’s how:

TZ=UTC ./SampleApp ../../Integration/AlexaClientSDKConfig.json $LOCAL_BUILD/models DEBUG9

Yes, at the end of the command to launch the sample app, just add “DEBUG9” to drop into verbose logging mode.

Once I did this, I found my issue. The sample app was outputting this message:

Missing decoder: MPEG-1 Layer 3 (MP3)

Somehow, all that compiled-from-source software had failed to find the resources it needed when trying to build an mp3 decoding plugin for GStreamer. Perhaps a Raspbian build with a GUI would have had a pre-installed mp3 decoder, but the Lite version definitely didn’t. I futzed about for a bit trying to entice the mp3 decoder plugin for GStreamer to build, but in the end decided to simply install the pre-built binaries using apt:

sudo apt-get install gstreamer0.10-plugins-bad

Once I did this, bingo! Alexa spoke. Phew!

I hope this article helps you successfully build your own Alexa device. Let me know in the comments.

What do you plan to do with Alexa on your Pi?

Using Node-RED to fix stuff

Node-RED is a truly awesome tool that allows you to very quickly build an app that can talk to IoT hardware (eg devices like a Raspberry Pi), your local machine, and online services. In a matter of minutes you can hook all these things together and get them doing something useful.

At my place we have two Internet connections. One of them is hooked up via a router that is the best part of a decade old. It works pretty reliably, when it works, but every 36–48 hours it locks up and has to be rebooted.

I’ve put up with the inconvenience of this for years, but tonight I decided I’d finally had enough and it was time to solve the problem.

If you want to give this a try at home, the first step is to install Node.js. You can do that from here. Then, install Node-RED and some other npm modules we’ll want (these instructions work for MacOSX, you may need to vary them for your system):

sudo npm install -g node-red node-red-node-twilio pm2

Then, once everything is installed, run node-red using pm2:

pm2 start /usr/local/bin/node-red -- -v
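
If you want Node-RED to come back by itself after a reboot, pm2 can also save the current process list and register a boot-time service; pm2 startup prints a platform-specific command for you to run:

pm2 save
pm2 startup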

If all has gone well, you should now be able to open a web browser, point it at http://localhost:1880, and see the Node-RED editor.

If that’s what you see, you’re ready to build!

Building an app in Node-RED involves creating a “flow”. A flow is simply a set of nodes (those things in the list on the left), “wired” together and configured to do what you need.

A node can be an input node, an output node, or a function node through which messages flow both in and out. Messages originate at input nodes, travel through zero or more function nodes, and are emitted at an output node. Each node has the opportunity to modify the message payload before passing it to the next node. Function nodes can have more than one output, which allows you to create branching logic.
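
To make that concrete, here is a throwaway example (not part of my flow) of what a function node with two outputs might contain – you return an array, each element goes to the corresponding output, and null means nothing is sent:

// Route the message: output 1 for payloads containing "OK", output 2 for anything else
if (/OK/.test(msg.payload)) {
    return [msg, null];
}
return [null, { payload: 'not OK: ' + msg.payload }];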

Nodes are configured by double-clicking them, which opens a panel that allows you to set parameters for that node. “Wires” connect the output from one node to the input of the next. This is all done by pointing and clicking and dragging. Couldn’t get much simpler!

So, what I need is a flow that will attempt to access the Internet via the flaky router. If it succeeds, all is well and I don’t need to do anything more. If it fails, I want to call the web UI on the router and tell it to reset the router. Then I want an SMS notification to be sent to my phone, letting me know of the outage and router reset.

Here’s how to do this with Node-RED:

My flow has three input nodes – two are used to trigger test scenarios, so I won’t describe them here. The one that matters is the “Check every 30 seconds” input node, which I have configured to inject a message into my flow every 30 seconds. This message flows to the http request node, which is configured (when triggered by a message arriving) to do a GET on www.microsoft.com (initially I used Google, but they don’t like being used this way). The data returned from that request gets loaded into the message object’s payload property and passed to the next node.

The “No ping?” node is a JavaScript function that looks at the message payload data from the http request and checks to see if it looks like it comes from the pinged site.

If the data doesn’t contain the string “Microsoft”, and the device isn’t currently being rebooted, the function emits a message out of its second output that flows into and triggers the “Reset Modem” HTTP request node.

The HTTP request simply emulates the web call that my router’s web UI makes when I click the “Reset” button on it. Here’s the code inside the “No ping?” function node:

// State in the flow context records whether a reboot is already in progress
var rebooting = flow.get('rebooting') || false;
// if null, no message will be passed to the output
var msg2 = null; // output 2: triggers the "Reset Modem" HTTP request node
var msg3 = null; // output 3: sends the SMS notification

if (msg.payload.match(/Microsoft/)) {
    if (!rebooting) {
        // Router is up and nothing is in progress
        msg.payload = 'OK';
    }
    else {
        // Router has come back after a reboot
        node.warn("Reboot complete");
        var currentTime = new Date().toLocaleTimeString();
        msg.payload = 'Reboot complete at ' + currentTime;
        flow.set('rebooting', false);
        var smsMessage = 'Router down from ';
        smsMessage += flow.get('rebootStart');
        smsMessage += ' to ' + currentTime;
        msg3 = { payload: smsMessage };
    }
}
else {
    if (rebooting === false) {
        // Check failed and no reboot is in progress, so request one
        node.warn("Rebooting");
        var currentTime = new Date().toLocaleTimeString();
        msg2 = { payload: 'factory=E0' };
        flow.set('rebooting', true);
        flow.set('rebootStart', currentTime);
        msg.payload = 'Requesting Reboot at ' + currentTime;
    }
    else {
        msg.payload = 'Reboot in progress';
    }
}

return [msg, msg2, msg3];

If the data does contain the expected string, either the router is still working fine, in which case it emits a message out to the console to say “OK”, or it indicates that the router is working again after a reboot.

I’m storing state in the flow context so that I don’t trigger additional reboots when a reboot is already in progress. I also use the stored state to determine when a reboot is complete, and when it is I use a third output to send an SMS telling me the start and end time of the outage.

Now, whenever my router goes down, it’ll automatically get reset, and once it’s back up I’ll get an SMS to let me know what happened, and how long the outage lasted.

Much better!

Have a play with Node-RED and let me know what you think in the comments.

PS: Node-RED has other nodes for getting data and working with it in a myriad of ways, including support for different kinds of protocols, storage engines, cloud services, and home automation gear. All of it is Open Source and free to use. Check it out.

What a time to be alive!

When I was at high school in the 80s, computers were about the most boring things I could imagine. They couldn’t do anything cool, unless your idea of cool was maths, and to program them was like talking very slowly to a barely literate person with an IQ of 50.

In the 90s, things changed.

By the 90s, computers had become capable of doing things for you that you couldn’t do better by hand. In the 90s, they started connecting to one another and becoming part of the Internet we all take for granted today. In the 90s, computers started waiting for us, instead of the other way around.

Let’s do stuff!

I got my first computer in the 90s, and immediately started a business doing digital imaging using Photoshop. Kodak at this point was still sleeping peacefully, figuring all this digital stuff was a fad that would be over soon.

The world wide web was hitting the news in the 90s. It took a while for people to figure out what it was, but when they did, the web took off exponentially. Even the dot.com meltdown in 2001 couldn’t really slow it down.

I started building for the web in the mid 90s and have been doing it ever since. In that time I have seen many incredible advances, and some monumental follies*, from vector-animation to 3D, streaming audio and video, WebSockets and WebGL through to initiatives like WebAssembly. The web just keeps getting stronger and more capable. Importantly, it has also stayed open, despite the best attempts by some companies to subvert it.

But even 10 years ago, few would have foreseen how different computing was going to be today.

Clouds appear

In the last 10 years, computing has gone from something we do at a desk or in server rooms to something we do everywhere, all the time. We are all carrying around computers in our pockets. We are seeing tiny cheap computers being built into every corner of our environments – from the smart TV to the wearable activity tracker and the smart watch, to the lightbulbs in your house and the locks on your doors. And it’s all connected via the web.

This is why, for me, the two most exciting technology trends today are Cloud computing and the Internet of Things.

Cloud computing got a lot of hype in the early years and some of it was just silly. The cloud’s infrastructure isn’t that different to what preceded it – it’s still run on servers in data centres, just like we did things in the past. What is different is how commoditised computing resources are changing the nature of computing itself. Servers are no longer purpose-built boxes in a DC that are configured to do one thing. Now they are simply a source of computing resources that can be abstracted away by higher level services. This means they can deliver the outcome we want without all the configuration and systems admin we used to have to do to get that outcome.

So while virtual machines that scale are nice, they are far from the most important thing that cloud computing has unlocked. By removing the need for me to manage my own servers, cloud computing has freed me to focus on the value I want my application to provide. Almost inevitably, this has led to the concept of serverless architectures, where my application is only the code I need and nothing more. The cloud replaces the server stack I would otherwise spend my time maintaining.

New ways to think of software

This kind of thinking is opening up new ways to build applications. An example of this is AWS Step Functions, where an entire application can be pulled together via a visual workflow. Likewise, tools like AWS Simple Workflow Service offer ways to orchestrate your code in a serverless environment, and then to build out and connect it to systems hosted elsewhere and even to processes that exist in the non-virtual world. Tools like these are facilitating an increased connectedness, which in turn opens up new ideas as to what a software application is, and what it could be.

And then, humming at the edges of all that new cloud-enabled capability are the huge numbers of IoT devices that are popping up daily in our lives.

Devices everywhere

Before we had smartphones, who would have thought everyone carrying around a GPS receiver would be useful? Now we can’t live without them. This is just one familiar example of the IoT world that is heading our way, as we measure, monitor and report on more and more metrics we encounter in our everyday lives. Heart rate, steps taken, how much electricity we’re consuming, room temperature, environmental noise, pollution levels, security camera footage…it’s all being picked up and turned into knowledge we can use to improve our lives.

In industry, condition monitoring is a huge growth area, again driven in large part by low cost computer hardware. You can now put a $100 vibration monitor on a truck and collect that data. The data can allow you to predict when it will need servicing, which can save your company the cost of unscheduled downtime. The economics of this are becoming a no-brainer as computing hardware gets cheaper and smaller and wireless networking becomes increasingly ubiquitous.

One interesting result of the rise of IoT is how the cutting edge of computing has come full circle. In a world where servers are now being commoditised and abstracted away, there is renewed interest in physical computing. People are building their own devices, and plugging them into the cloud. They are getting reacquainted with low-level knowledge, like how serial communications work. They are learning how to gather data from sensors over GPIO pins on a circuit board. It’s an interesting development and one that bodes well for humanity, I think. It gets us back in touch with the magic of what, as a species, we’ve achieved over the last century.

 

* You can put the proprietary Microsoft Network and Rupert Murdoch’s purchase of a dying MySpace in that column.