With the intention of ending the week well, I decided to go through Mozer’s paper. Our seniors and professor considered this a model paper, given the researchers’ extensive use of reinforcement learning.
The resulting wiki entry was longer than any so far, and I managed to extract several great ideas. Here’s a (longer) excerpt:
The researcher then states an obvious but often forgotten truth — technology will be adopted only if the perceived return outweighs the effort required to understand it (something I like to call the adoption hill — the base is how much comfort the technology gives and the height is the difficulty of understanding it; so we’re looking for a short, wide mountain).
Central to understanding the paper is the author’s guiding tenet of UI design: the only way to improve an ordinary, familiar environment is to eliminate the interface altogether (the invisible interface hypothesis). Consequently, the house the researcher designed for himself has no UI other than basic switches (not even speech input, which as a JARVIS fanboy I think is a tad backward) — the house simply learns a user’s patterns. The researchers decided to focus on air temperature regulation (we can mimic this using Nest), water temperature regulation, and lighting (we can imitate this using Hue).
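To make “the house simply learns a user’s patterns” concrete, here is a toy of my own (not the paper’s actual controller): a tabular one-step Q-learner that discovers which light setting a simulated occupant prefers at each time of day, where a manual override counts as negative reward. The states, actions, and preferences are all hypothetical.

```python
import random

random.seed(0)

STATES = ["morning", "evening"]                      # hypothetical time-of-day states
ACTIONS = ["off", "dim", "bright"]                   # hypothetical light settings
PREFERRED = {"morning": "bright", "evening": "dim"}  # simulated occupant's taste

# One Q-value per (state, action) pair, all starting at zero.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.5, 0.1

def reward(state, action):
    # The occupant overrides any wrong setting: an override is discomfort.
    return 0.0 if action == PREFERRED[state] else -1.0

for _ in range(500):
    s = random.choice(STATES)
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    # One-step (bandit-style) update — no successor state in this toy.
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

learned = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(learned)  # the learner settles on the occupant's preferred settings
```

The real system is far richer (neural networks, predicted occupancy, delayed reward), but the core loop — act, observe the occupant’s reaction, adjust — is the same shape.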
The researchers also recognized the two conflicting constraints that home automation seeks to fulfil: the user’s convenience and low energy consumption. They developed a novel method for balancing these opposing constraints, which seems applicable to most of home automation. They expressed the cost of each action in two parts — a discomfort cost incurred when the inhabitant’s preferences are not met (for instance, if the user changes a setting manually after the machine has had a shot at it), and an energy cost based on the monetary price of electricity. The final formula sums these two costs, and the paper gives an accompanying mathematical formulation (which, I will readily admit, I did not fully understand).
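The gist of that two-term cost can be sketched in a few lines. This is my own simplification, not the paper’s formula — the prices and penalty are made-up constants — but it shows how summing the two terms lets a slightly more expensive policy win by avoiding overrides.

```python
ENERGY_PRICE = 0.12        # hypothetical $ per kWh
DISCOMFORT_PENALTY = 0.05  # hypothetical $ charged per manual override

def total_cost(kwh_used: float, overrides: int) -> float:
    """Sum of energy cost and discomfort cost for one control episode."""
    energy_cost = kwh_used * ENERGY_PRICE
    discomfort_cost = overrides * DISCOMFORT_PENALTY
    return energy_cost + discomfort_cost

# A stingy-but-annoying policy vs. a balanced one:
stingy = total_cost(kwh_used=2.0, overrides=4)    # ~0.24 energy + ~0.20 discomfort
balanced = total_cost(kwh_used=3.0, overrides=0)  # ~0.36 energy, no overrides
print(stingy, balanced)
```

The clever part in the paper is pricing discomfort in the same currency as electricity, so a single optimizer can trade one off against the other.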
Additional points of interest include using noise levels to assist the ML — a creaky floorboard cued the machine that the user was entering the bathroom, as did the radio the author liked to play while showering.
Reading this paper has certainly made the path ahead clearer, and set new ideas bouncing around my head. The success of reinforcement learning applied to lighting hints that researchers should apply it to other aspects of home automation as well. To that end, we plan to start adding devices to HomeOS (not too many, but enough for some interesting data collection). So next week (this weekend?) we will add multisensors and door sensors to our ‘data funnel’ (HomeOS -> Helios).
Other ideas and housekeeping:
- Smart trash can: waste segregation is a potential application.
- Naturalistic voice commands using Kinect but limited vocabulary (as seen in the Magnetic Poetry paper).
- Kinect should record noise for use in ML.