
Haiku Reflections: Web Clients and Web Resources

This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. For an overview, see my earlier post.

My understanding of the overarching goal for the Connected Devices group within Mozilla is to have a tangible impact on the evolution of the Internet of Things so that it maintains the primacy of the user: their right to own their own data and experience, and to choose between products and organizations. We want Mozilla to be a guiding light, an example others can follow when developing technology in this new space that respects user privacy, implements good security and promotes open, common standards. In that context, the plan is to develop an IoT platform alongside a few carefully selected consumer products that will exercise and validate that platform and start building exposure and experience for Mozilla in this space. Over the last few months, the vision for this platform has aligned with the emerging Web of Things, which builds on patterns for attaching “Things” to the web.

From one perspective, the web is just a network of interconnected content nodes. It follows that the scope for standardizing the evolution of the Internet of Things is to define a sensible architecture and build frameworks for incorporating these new devices and their capabilities, so as to maintain interoperability, promote discoverability and so on. This maps well onto connected sensors, smart appliances and other physical objects whose attributes we want to query and set over the network. Give these things URLs and a RESTful interface and you get all the rich semantics of the web, addressability, tooling, the developer talent pool - the list goes on and on, and it’s all for “free”. In one stroke you remove the need for a lot of wheel re-invention and proprietariness, and nudge this whole movement in the direction of the interoperable, standardized web. It’s a no-brainer.
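To make this concrete, here is a minimal sketch (not from the project itself) of a “thing” exposed as a web resource, using node.js with the Express package; the /things/lamp path and the on property are invented for illustration.

    // Hypothetical sketch: a lamp exposed as a web resource.
    // Assumes node.js with Express installed (npm install express); paths and names are made up.
    const express = require('express');
    const app = express();
    app.use(express.json());

    // In-memory state standing in for the real device.
    const lamp = { properties: { on: false } };

    // Read the thing's current state: GET /things/lamp/properties/on
    app.get('/things/lamp/properties/on', (req, res) => {
      res.json({ on: lamp.properties.on });
    });

    // Set the thing's state: PUT /things/lamp/properties/on  {"on": true}
    app.put('/things/lamp/properties/on', (req, res) => {
      lamp.properties.on = Boolean(req.body.on);
      res.json({ on: lamp.properties.on });
    });

    app.listen(8080);

Because it is just HTTP, everything the web already has - caching, access control, curl, a browser tab - works against it unchanged.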

In this context, however, the communication device envisaged by Project Haiku is somewhat orthogonal. You can model it by giving URLs to the people/devices and to the private communication channel they share, and that is conceptually powerful: it brings along all the normal web best practices for RESTful API design, access control, caching, offline strategies and so on. But the surface area of the resulting “API” is tiny and of limited value. The Haiku device would be more web client than web resource, and it doesn’t fit neatly into this story.

Reflections on Project Haiku: Accounts and Ownership

This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. By focusing on the user problem and not the business model, we quickly determined that we wanted as little data from our users as we could get away with. For context and an overview of the project, please see my earlier post.

When I was a kid, my brothers and I had wired walkie-talkies. Intercoms, really. The units were joined by about 100’ of copper wire. With the wire trailed dangerously under doors and up the stairs, one of us could be downstairs and we could communicate between kitchen and bedroom. Later, in order to talk with a friend in the apartment block opposite us, we got a string pulled taut between our two balconies. With tin cans on each end of the string, you could just about hear what the other was saying.

Direct, one-to-one communication

RF-based wireless communication had existed for a long time already, but I bring these specific communication examples up because the connection we made was exclusive and private.

We didn’t need to agree on a frequency and hope no-one else was listening in. The devices didn’t just enable the connection, they were the connection. We didn’t sign up for a service, didn’t pay any subscription, and when we tired of it and it was given away, no contracts needed to be amended; the new owners simply picked up each end and started their own direct and private conversation. In Project Haiku, when we thought about IoT and connecting people, this was the analogy we adopted.

Reflections on Project Haiku: WebRTC

This is part of a series of posts I’m writing to put down my thoughts on the recently retired Mozilla Connected Devices Haiku project. We landed on a WebRTC-based implementation of a 1:1 communication device. For an overview of the project as a whole, see my earlier post.

WebRTC triangle diagram

This was one of those insights that seems obvious with hindsight. If you want to allow two people to communicate privately and securely, using non-proprietary protocols, and have no need or interest in storing or mediating this communication - you want WebRTC.
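As a rough illustration of why it fits so well, here’s a minimal browser-side sketch of opening a 1:1 data channel. The signaling step - getting the offer, answer and ICE candidates to the other peer - is deliberately left as a placeholder (sendToPeer below is hypothetical), because that rendezvous is the only part a service needs to provide; the conversation itself flows peer-to-peer and encrypted.

    // Minimal WebRTC data-channel sketch (standard browser APIs).
    // sendToPeer() and the STUN server URL are placeholders, not part of the project.
    const pc = new RTCPeerConnection({ iceServers: [{ urls: 'stun:stun.example.org' }] });

    // The channel that carries the private 1:1 messages.
    const channel = pc.createDataChannel('haiku');
    channel.onopen = () => channel.send('thinking of you');
    channel.onmessage = (event) => console.log('peer says:', event.data);

    // Hand connectivity candidates to whatever signaling transport is in use.
    pc.onicecandidate = (event) => {
      if (event.candidate) sendToPeer({ candidate: event.candidate });
    };

    // Create and send the offer; the peer answers via the same signaling transport.
    pc.createOffer()
      .then((offer) => pc.setLocalDescription(offer))
      .then(() => sendToPeer({ sdp: pc.localDescription }));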

Reflections on Project Haiku

I’ve written before on this blog about my current project with Mozilla’s Connected Devices group: Project Haiku. Last week, after close to 9 months of exploration, prototyping and refinement, this project was put on hold indefinitely.

So I wanted to take this opportunity - a brief lull before I get caught up in my next project - to reflect on the work and many ideas that Project Haiku produced. There are several angles to look at it from, so I’ll break it down into separate blog posts. In this post I’ll provide a background of the what, when and why as a simple chronological story of the project from start to finish.

Emoji + Voice Prototype

Project Haiku Update

At Mozilla, I’m still working with a team on Project Haiku. Over the summer we had closed in on a wearable device used for setting and seeing a friend’s status. It took a while for that to crystallize, though, and as we started the process of building an initial Bluetooth-connected wearable prototype, our team was handed an ultimatum: go faster or stop.

We combined efforts and ideas with another Mozilla team that had arrived at some very similar positions on how connected devices should meet human needs. As I write we are concluding a user study in which 10 pairs of grandparents and school-age grandchildren have been using a simple, dedicated communication device.

The finished unit

48 Hours of Hacking in Chattanooga

I spent this past weekend in Chattanooga, Tennessee, in a whirlwind of planning, prototyping and generally collaborating on a pitch for the 48 Hour Launch event. I was invited to attend as one of several mentors from Mozilla, to help develop product and company ideas from the local community into something clear and compelling in just two days. For more info on the event, go read the wrap-up on Mozilla’s blog. I’m just going to detail some of my personal highlights.

About seven teams were at the kick-off Friday night, each giving an introduction to their concept and what they wanted to achieve over the weekend. After drifting around a bit and listening in on the conversations that emerged afterwards, I gravitated towards the “Inclusive Makerspace” project. Cristol Kapp is a librarian at a local elementary school, and one of the first in the region to set up a functioning makerspace in her library for the kids. But there’s a problem: some of the students have conditions and disabilities which prevent them from getting involved in the makerspace activities. The need for a steady hand and the fine motor skills to manipulate tools are just two of the barriers that effectively exclude some of these kids from what should be fun, collaborative activities in the space. Cristol clearly felt this deeply, and was accompanied by a colleague - a special education teacher - who was also committed to fixing this. That stood out for me: a clear need expressed again and again at the school, and no doubt echoed at home, and people with the opportunity and drive to find, test, improve and promote a solution. (On the Sunday, this was reinforced again when the school principal visited the hackathon to support Cristol, listen to her plans and give feedback.)

I think I’ll keep this short and devote a separate post to the Inclusive I/O project itself (a renaming and branding that emerged from the weekend) and confine myself to the event here. Friday evening was spent narrowing down both the problem and the set of solutions into something properly joined up and actionable. With a million ideas buzzing around all the participants’ heads, we needed to focus on telling a story with well-defined characters, a clearly defined problem and a solution that demonstrably addresses that problem. Of course, reality is never so simple, but for the purposes of this pitch - and to get this project into gear and actually moving down the road - we had to temporarily remove variables. We wound up Friday evening with a plan - sketched out on the back of a cupcake box (which I didn’t have the presence of mind to photograph) - and a consensus to make it so first thing in the morning.

I was pretty blown away by the level of energy, the collective goodwill and the breadth of expertise that descended on the venue over the weekend. Although each team was ultimately competing for prizes, there was no hesitation in sharing tips or resources, getting each other unstuck or even devoting large chunks of time to contribute skills where they were needed. Over the Saturday and Sunday we divided and conquered - with Tamara and me hacking up a prototype, with the help of some great talent from the community. Meanwhile Cristol was moving efficiently through business planning, with cost and market estimates, branding and strategy, all the while tightening up the story we had started that first evening. By Sunday she had a great slide deck and a clear, concise telling of that story, practiced again and again.

The Inclusive I/O team

It worked. Inclusive I/O was well received by the panel and awarded 2nd place. This is huge - not only for the cash and other resources it grants, but for the validation of the idea and its originator, and of the problem Cristol saw and its real need for a solution. Thanks to everyone who helped out along the way, including those whose names I didn’t list, forgot or never learnt. I hope to stay involved in this project in some capacity; watch this space.

Making a Research Prototype

Haiku UR#2 prototypes w. lanyards

The last round of user research for my project with Mozilla’s Connected Devices team threw up a ton of useful ideas and insights. We shuffled them around on a gazillion post-its and eventually narrowed in on a theme of communication - specifically simple, non-intrusive, non-interrupting ambient messages. We saw a recurring need for ways to say “I’m still here”, “I’m OK”, “I’m thinking of you”. I was reminded of the Goodnight Lamp - one of the first really nice IoT products I remember seeing.

We wanted to validate our thoughts, and dig a little deeper into this area, so we came up with another study, this time using a simple functional prototype. Not so much a product prototype, more a prop and a way to move away from the abstract and focus in on actual reactions when interacting with a thing. In this post I’ll go into some detail on what we built, how we went about it and what we learnt.

IoT Useless Box

A couple of weeks ago I wanted to look into Amazon’s IoT service and conceived a slightly less dry “hello world” project I could build as a vehicle for this research. You have probably seen the “Useless Box” concept before - it’s simply a box with a switch on it. When you flip the switch, a flap opens in the box and some kind of finger comes out and switches it back. I wanted to build that, but IoT-enable it: use the MQTT broker to listen for the state change in the switch and notify the servo listener, which kicks into action and puts the world back to rights.
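For flavour, here’s roughly what the glue looks like as a hedged sketch using the generic mqtt npm package; the broker URL, topic name and toggleServo() are stand-ins for the real AWS IoT endpoint and servo wiring.

    // Sketch of the useless-box glue: relay switch flips into servo action over MQTT.
    // Assumes the 'mqtt' npm package; broker URL and topic are illustrative only.
    const mqtt = require('mqtt');
    const client = mqtt.connect('mqtt://broker.example.com');

    const SWITCH_TOPIC = 'uselessbox/switch';

    client.on('connect', () => client.subscribe(SWITCH_TOPIC));

    client.on('message', (topic, payload) => {
      if (topic === SWITCH_TOPIC && payload.toString() === 'on') {
        toggleServo();
      }
    });

    // Placeholder for the servo listener: open the flap, flip the switch back, retreat.
    function toggleServo() {
      console.log('flipping the switch back off...');
      client.publish(SWITCH_TOPIC, 'off');
    }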

SmartHome User Research

For the SmartHome project, we’ve taken a step back to better understand the problem space and come at solutions based on research and evidence. As a team we spent last week interviewing people in their homes, with questions on a theme of freedom and independence. We chose early teens (12-15yrs) and post-retirement folks as demographics that might have useful insights into this topic.

As a software engineer, this has been an interesting process. There’s a temptation to jump into any new project with both feet and start hacking on code, starting with simple successes and working iteratively to add features and meet requirements. And we’ve done some of that, in order to get familiar with what we expect will be our prototyping platforms - the Raspberry Pi and the ESP8266. But the more we looked at the problem, the clearer it became that we weren’t yet sure what technical questions the project would ask, let alone how to solve them. In the meantime, our team was trying to figure out a better vision for the smart home, one that would align with Mozilla’s values and potential solutions in a broad and confusing product space. None of us were in our comfort zone, so we decided to put roles aside, roll up our sleeves and muck in. We’ve posted craigslist ads; we’ve had Skype calls, house visits and coffee shop rendezvous; we’ve interviewed, transcribed and now begun to process input from 15 different people.

It turns out that a curious mind, a knack for spotting patterns and an eye for the motivations and circumstances that produce an outcome are the stock-in-trade of any software engineer - and they are skills that work just as well in user research and exploratory product definition as they do in software development. And perhaps more important than that, before we are engineers, we are people. We have families, jobs, aspirations, frustrations and concerns. We were young and hope to grow old. Talking to people this last week has been a great reminder that it is people that solve problems, not code.

Smart Mirror: Starting Up

With no keyboard or pointer inputs, the Smart Mirror has to be able to restart and boot up entirely automatically, so this was high on my priority list. Once installed, I can’t startx or click on any icons; it needs to bring up all the backend services and the dashboard and leave itself in a working state without any user intervention. That led me down a merry path and was (for me) the trickiest part of this project.

Here are the moving parts:

  • The kernel and OS itself, with networking and other key systems
  • The display and window manager - the subsystems that allow me to put my dashboard up on the screen
  • The mosquitto message broker
  • The gpio listeners
  • The web server
  • The browser, which should load up my dashboard URL

I went through lots of helpful posts and projects on creating Linux kiosks to figure out potential approaches. While the mirror isn’t really a kiosk - a kiosk usually has keyboard/pointer/touch user input - it’s a reasonable match-up. After a few false starts trying Firefox/Iceweasel and Chromium kiosk options, I settled on the approach outlined in this Dashing-Pi page. This eschews the LXDE desktop environment entirely and uses nodm and the matchbox window manager to boot into the browser with the minimum of unnecessary fluff in between.

Orchestrating startup is a bit fiddly even so. First, nodm is configured to start up as the ‘pi’ user. The rest of the graphics/display-related startup is then in a script copied to /home/pi/.xsession, which starts the matchbox window manager and the Uzbl browser to load the dashboard. For the backend pieces, Raspbian uses the init.d system, so we install scripts in /etc/init.d/ to start up mosquitto, pm2 (which manages the node.js server(s)) and the scripts that relay GPIO events as MQTT messages for the rest of the system.
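As an example of that last piece, here’s a hedged sketch of what one of the GPIO-to-MQTT relay scripts might look like, using the onoff and mqtt npm packages; the pin number and topic name are illustrative rather than the project’s actual values.

    // Sketch: relay a GPIO input (e.g. a motion sensor) as MQTT messages.
    // Assumes the 'onoff' and 'mqtt' npm packages; pin 17 and the topic are made up.
    const Gpio = require('onoff').Gpio;
    const mqtt = require('mqtt');

    const client = mqtt.connect('mqtt://localhost');   // the local mosquitto broker
    const sensor = new Gpio(17, 'in', 'both');          // watch rising and falling edges

    sensor.watch((err, value) => {
      if (err) return console.error(err);
      client.publish('mirror/motion', value ? 'active' : 'idle');
    });

    // Clean up the pin and the broker connection on shutdown (pm2 will restart us).
    process.on('SIGINT', () => {
      sensor.unexport();
      client.end();
      process.exit();
    });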

That done, I can plug the thing in and in just a minute or so it brings up the dashboard on screen and responds to sensor events. The Uzbl browser is a wrapper around WebKit. It accepts commands via a socket, so once it’s up I can ssh to the rPi and remotely refresh the page, navigate to other URLs and so on. That has proved valuable during development, as I have none of the traditional inputs (e.g. Ctrl+R on the keyboard) to accomplish this otherwise.
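The remote refresh boils down to writing one of Uzbl’s newline-terminated commands (reload, uri <address>, and so on) to the FIFO or socket it exposes. A minimal sketch - the FIFO path below is a stand-in, since the real one includes the instance’s id:

    // Sketch: drive the on-screen Uzbl browser remotely (run over ssh on the Pi).
    // The FIFO path is a placeholder; the real path includes the Uzbl instance id.
    const fs = require('fs');

    const UZBL_FIFO = '/tmp/uzbl_fifo_dashboard';

    function uzbl(command) {
      fs.appendFileSync(UZBL_FIFO, command + '\n');
    }

    uzbl('reload');                                  // refresh the dashboard
    // uzbl('uri http://localhost:3000/dashboard');  // or navigate somewhere else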