# Author: Jiyun Lee

Yesterday, I had lunch with a friend and had an opportunity to share our summer experiences. The typical response I get when I say I am working on “Internet of Things” and “Smart Homes” is “That is so cool! I would never be able to do CS.”

This friend, keen on knowing more details, asked what exactly a Smart Home is. With vague details, I mentioned that it is a house that can learn behaviors of people and automate certain repeated patterns and can communicate with people living in it.

She raised a question, “So would it turn lights off automatically after averaging out the time I go to bed?” I acknowledged that is one possibility. Then, she quickly asked, “What if one day I go to bed at a different time?”

Yes. This is the nature of humans: more prone to picking out faults than appreciating what works. Even if a smart home learns patterns and does things correctly 99% of the time, people will take it for granted. However, they will pick up on the 1% that didn’t meet their expectations.

Machine learning is, in a sense, an “averaging” process. An “average” is bound to have a standard deviation, or margin of error. Will people be generous enough to accept these margins of error?

Let’s say a student’s bedtime is, on average, 1:16 AM. One day he decides to go to bed at 1:15 AM. Will he be patient enough to wait 60 seconds until the lights turn off automatically? And he almost always has coffee right after his morning shower, so the smart home brews coffee while he is showering, after detecting motion and water usage in the bathroom. What if he decides not to drink coffee that day because he has a stomachache? Will he turn a blind eye to the coffee that was automatically brewed, or complain about wasted coffee beans and electricity?
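Her objection is easy to reproduce. Here is a minimal sketch of such an averaging automation (the function names and the five-o’clock cutoff are my own invention for illustration, not any real smart home product):

```python
from datetime import time

def minutes(t: time) -> int:
    """Convert a clock time to minutes past midnight, treating the small hours
    (before 5 AM) as belonging to the previous night."""
    m = t.hour * 60 + t.minute
    return m + 24 * 60 if t.hour < 5 else m

def average_bedtime(history: list[time]) -> int:
    """Average the observed bedtimes, in minutes past midnight."""
    return round(sum(minutes(t) for t in history) / len(history))

def lights_off_delay(now: time, history: list[time]) -> int:
    """Minutes the resident sits in a lit room before the lights go off."""
    return max(0, average_bedtime(history) - minutes(now))

# History averaging out to 1:16 AM; tonight he goes to bed at 1:15 AM.
history = [time(1, 10), time(1, 16), time(1, 22)]
print(lights_off_delay(time(1, 15), history))  # -> 1 minute of waiting
```

Go to bed a minute early and you wait; go to bed late and the lights have already turned off on an empty room. The average is right almost every night, and still manages to be wrong in a way people notice.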

Recalling the papers on the Nest thermostat, people were more unhappy about how Nest handled exceptional cases than about it learning completely wrong patterns. If Nest works as it should, people gradually forget the need to control the temperature manually, forget that Nest is doing all the work for them, and then get mad when Nest does one thing wrong.

So maybe we don’t want automation. No one’s life follows a fixed schedule to the second, 24/7, so there will always be that 1% that makes people unhappy. Maybe all we want is a house that listens to what we want — as opposed to making decisions on its own — and obediently follows directions.

Although being tracked doesn’t feel comfortable, today is one of those days when I wish I lived in a smart home that tracked my patterns. My roommate and I are currently living in a big house with 6 rooms (4 of which are empty), and in one month we apparently used 800 kWh of electricity and had to pay \$160. Given that we did not use our air conditioner, we were very surprised. Since we could hear an air conditioner running almost all day long from the other side of the house (our 6 rooms are one half of the house), we wondered whether their air conditioner was connected to ours. We are looking into whether that is true. In the meantime, I thought I would do a little research on how much electricity home appliances use, to get a sense of the most likely culprits that drained our energy.

Below are the average household usages per year that I found online, in kWh:

• Circulating fan = 4 kWh
• Coffee maker = 9 kWh
• Frying pan = 8 kWh
• Microwave oven = 16 kWh
• Self-clean oven = 61 kWh
• Toaster = 3 kWh
• Blender = less than 1 kWh
• Refrigerator (14-17 cu. ft.) = 170 kWh
• Washing machine = 9 kWh
• Dryer = 75 kWh
• Lighting (4-5 rooms) = 50 kWh
• Outdoor spotlight, all night = 45 kWh
• Vacuum cleaner = 4 kWh
• Hair dryer = 2 kWh

Now I have a general sense that ovens, fridges, and dryers use a lot of energy. With power meters installed, a smart home would be able to tell me where most of my energy use is coming from. It would be even better if it analyzed my living pattern and told me when I use the most electricity.
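The ranking a metered smart home could hand me is easy to sketch from the figures above (a toy example: real power meters would report measured numbers for my house, not published averages):

```python
# Annual kWh figures from the list above.
annual_kwh = {
    "Refrigerator (14-17 cu. ft.)": 170,
    "Dryer": 75,
    "Self-clean oven": 61,
    "Lighting (4-5 rooms)": 50,
    "Outdoor spotlight, all night": 45,
    "Microwave oven": 16,
    "Coffee maker": 9,
    "Washing machine": 9,
    "Frying pan": 8,
    "Circulating fan": 4,
    "Vacuum cleaner": 4,
    "Toaster": 3,
    "Hair dryer": 2,
    "Blender": 1,
}

def top_consumers(usage: dict[str, int], n: int = 3) -> list[str]:
    """Return the n appliances that use the most energy, biggest first."""
    return [name for name, _ in sorted(usage.items(), key=lambda kv: -kv[1])[:n]]

print(top_consumers(annual_kwh))
# ['Refrigerator (14-17 cu. ft.)', 'Dryer', 'Self-clean oven']
```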

This is exactly what Kamin Whitehouse envisioned — that smart homes would be beneficial for the environment — and I can do nothing but agree. Right now, my roommate and I know that we used a lot of electricity (although we are both usually out working), but we have no idea what to do to fix it.

Today we devoted time to finishing up a crash course for new UPOD recruits. The crash course covers topics from JSON parsing, APIs, and making HTTP requests to writing data models in SQLAlchemy. We believe this will make the transition much more efficient for new members.
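The crash course material itself isn't reproduced here, but to give a flavor of the JSON-parsing exercises, here is a minimal sketch (the payload shape and field names are invented for illustration, not Helios's real API):

```python
import json

# Hypothetical sensor payload, like what a smart-home hub's API might return.
payload = '{"device": "door-sensor-1", "events": [{"state": "open", "ts": 1407110400}]}'

def parse_events(raw: str) -> list[dict]:
    """Parse a JSON payload into a flat list of event records."""
    data = json.loads(raw)
    return [{"device": data["device"], **event} for event in data["events"]]

print(parse_events(payload))
# [{'device': 'door-sensor-1', 'state': 'open', 'ts': 1407110400}]
```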

We also finished writing the door sensor models and views, which are up on the Helios-dev website. We documented each step on the Helios wiki, so new members who have gone through the apprenticeship process can leave their mark by adding a new device to Helios.
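The actual SQLAlchemy model isn't shown in this post; as a rough idea of the shape such a door sensor model might take, here is the equivalent table sketched with the standard-library sqlite3 module (the table and column names are my guesses, not Helios's schema):

```python
import sqlite3

# In-memory database standing in for the real Helios store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE door_events (
        id          INTEGER PRIMARY KEY,
        device_id   TEXT NOT NULL,
        state       TEXT NOT NULL,   -- 'open' or 'closed'
        recorded_at TEXT NOT NULL    -- ISO 8601 timestamp
    )
""")
conn.execute(
    "INSERT INTO door_events (device_id, state, recorded_at) VALUES (?, ?, ?)",
    ("lab-door", "open", "2014-08-04T10:00:00"),
)
rows = conn.execute("SELECT device_id, state FROM door_events").fetchall()
print(rows)  # [('lab-door', 'open')]
```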

Tomorrow we will tackle writing a SmartApp so that SmartThings devices send data to Helios directly. Professor Kraska introduced us to a Master’s student, Giselle, who has a complete SmartThings kit installed at her house and has written numerous SmartApps, so we plan to show her our system and get feedback.

Overall, very productive day!

My roommate installed a home security system called Canary. Once you install an app on your phone, your phone serves as a presence sensor and knows whether you are home or not. It timestamps when you entered and left the house.

When neither my roommate nor I is in the house, the device is in “Armed Mode,” which allows it to set off a siren alarm. When one of us is home, it is in “Disarmed Mode.”
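That arming rule is simple enough to write down (a toy reconstruction based only on the behavior I've observed, not Canary's actual logic):

```python
def canary_mode(roommate_home: bool, me_home: bool) -> str:
    """Armed when nobody is home; disarmed as soon as anyone is present."""
    return "Disarmed" if (roommate_home or me_home) else "Armed"

print(canary_mode(False, False))  # Armed
print(canary_mode(True, False))   # Disarmed
```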

It is currently sitting on a table in front of the front door, and I can watch a live feed from my phone. It uses an HD camera for vision and sound. Whenever motion is detected, it records 10 minutes or so of footage (both image and sound), which can be accessed from the timeline on my phone. Yes, it is creepy.

The Canary device also senses temperature, humidity, and air quality.

My phone is busily buzzing today because I am getting notifications from the SmartThings door sensor whenever the lab room door is opened or closed, and from Canary whenever motion is detected in the house.

I find it interesting that there is a huge market for home security camera systems. About a week ago, when I was in Boston, there were so many posters for Nest’s camera in subway stations. What drives people to buy cameras to watch their homes, the most intimate and private of all places? Is privacy an inevitable price to pay for security?

Anyhow, I am being tracked everywhere: in the lab room by two Kinects, multisensors, and a door sensor, and at home by Canary.

Finally we have good news to share today!

Let’s start off with a bad one: we have decided to put the blue Surface Pro 3 in the drawer because it seems to be faulty and malfunctions in many cases. Even after uninstalling the old Kinect SDK and installing a new one, it still did not recognize the Kinect and freaked out when the USB cable was plugged in.

For the Surface Pro 2 and the HP tablet, we figured out a way to install the Z-Wave driver for Windows. Extracting the files before running the application was the key.

Then magic happened, and everything else also worked. Given the issue raised on the Microsoft website about the Kinect needing to be the only device connected to a USB port, we had thought the Kinect kept dying when plugged into a USB hub with the Z-Wave stick because the Kinect is resource-hungry. However, it turns out they can coexist! It must have been a problem with the blue Surface itself.

We have a package ready for Professor Littman to take home and install. The box includes a Surface Pro 2 that can successfully run HomeOS and send both Kinect and multisensor data to Helios. We also wrote up documentation for self-deployment, so that when we send the package to Blase, he can set things up himself as well.

Currently, a multisensor and a Kinect connected to the HP tablet are logging data in the lab room.

Tomorrow we will work on preparing the package for Blase, debugging Helios, and writing the door sensor model.

We are halfway done. I am a bit careful about using an exclamation point — we have gotten some things to work, but we still have a long way to go.

Good to be back! I have decided to become a terse writer, mostly because I am typing on a Surface Pro right now. Here are the updates:

In the morning, Tushar and I watched Kamin Whitehouse’s talk on MIT Technology Review and were reminded of the importance of one’s environment: he highlighted the fact that Americans spend 60% of their time in their homes, and thus innovation in the home is important.

So we rearranged the furniture and organized the wires. We had been avoiding the lab room because it felt crowded. Now it is so fresh that I think I could wake up earlier in the morning to work there (or maybe not).

I read a paper written by a few researchers at the University of Michigan. The name Newman drew me in, since I have seen it multiple times in this field. The paper was like many others: a study of how people interact with a smart home system the authors came up with.

I was quickly skimming the paper, since I am quite well aware of what most HCI papers talk about, but I slowed down a bit when I found an interesting approach to testing how well their system, OSCAR, works. OSCAR is a form of end-user programming that allows users to flexibly control their networked devices at home. The paper does not mention this explicitly, but from a photo I could tell that OSCAR in physical form is a tablet that people can interact with.

I also read a paper by Microsoft on HomeOS. Just from reading it, I have to give them credit for being pioneers in the field. We know there are some difficulties in using their system and devices, but their innovative idea of a PC abstraction for the home is very promising for smart homes and user satisfaction.

Microsoft advocates a centralized, PC-like operating system for the home over a decentralized network of devices because 1) all devices will be connected, and this connection enables smart home functionality; 2) it is easy to add devices, since it is not a closed system; 3) users already know how to use computers; and 4) it is easier for developers to implement functionality without worrying about individual devices.