Mobile Sensing for User Activity and Context

GOAL

A team led by Prof. Shie Mannor (Technion) plans to harness machine learning methodology to understand the context of a human user, discover the user’s purpose, and eventually proactively make recommendations and take actions for the user. The raw data we want to use consists of continuous recordings from sensors including accelerometers, ambient light, GPS, Wi-Fi, GSM tower location, e-compass, camera, and microphone. From the sensory data, we intend to infer where a person is physically (indoor and outdoor localization) and what the person is physically doing (walking, running, sitting, etc.). Our research group has experience in data collection on Android, localization, and recognizing the physical action a person is performing.
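
To make the activity-recognition building block concrete, the following is a minimal sketch in Python of the kind of pipeline involved: hand-crafted features over windows of accelerometer samples fed to an off-the-shelf classifier. The features and the choice of a random forest are illustrative assumptions, not the project’s actual design.

    # Sketch: classifying physical activity (walking / running / sitting)
    # from windowed accelerometer data. Features and classifier are
    # illustrative, not the project's actual pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(window):
        """Summarize one (n_samples, 3) window of raw x/y/z accelerometer samples."""
        magnitude = np.linalg.norm(window, axis=1)  # per-sample acceleration magnitude
        return np.array([
            magnitude.mean(),                       # posture / gravity offset
            magnitude.std(),                        # overall motion intensity
            np.abs(np.diff(magnitude)).mean(),      # jerkiness (walking vs. running)
        ])

    def train_activity_model(windows, labels):
        """windows: list of (n_samples, 3) arrays; labels: e.g. 'walking', 'sitting'."""
        X = np.stack([window_features(w) for w in windows])
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X, labels)
        return model

    # Usage:
    #   model = train_activity_model(train_windows, train_labels)
    #   model.predict(window_features(new_window).reshape(1, -1))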

But all of these are only building blocks in the process of understanding what a person is really doing. Our first task is to understand the user’s context. By context we mean an answer to the question “What is the user doing?” of the kind the user would give (“I am in the office”), rather than a list of primitive actions (“sitting in room 456 of the Electrical Engineering building”). We think of context as something that can be expressed in natural language, and we plan to use lightweight feedback from the user. User context is, of course, subjective, so we have to account for user heterogeneity.
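
One simple way to ground this idea is a per-user table that maps inferred primitives to the user’s own phrasing, updated from occasional lightweight prompts. The sketch below (in Python, with entirely hypothetical names and structure) illustrates the idea; keying the table per user also reflects the heterogeneity concern.

    # Sketch of per-user context labeling with lightweight feedback.
    # A "context" is a free-text label the user once supplied ("I am in
    # the office"); primitives are (place, activity) pairs inferred
    # upstream. All names here are illustrative assumptions.
    from collections import Counter, defaultdict

    class ContextLabeler:
        def __init__(self):
            # (user, place, activity) -> Counter of user-given labels
            self.votes = defaultdict(Counter)

        def feedback(self, user, place, activity, label):
            """Record a lightweight answer to 'What are you doing now?'."""
            self.votes[(user, place, activity)][label] += 1

        def describe(self, user, place, activity):
            """Return the user's own most frequent label, or fall back to primitives."""
            counts = self.votes[(user, place, activity)]
            if counts:
                return counts.most_common(1)[0][0]
            return f"{activity} at {place}"  # primitive fallback, e.g. "sitting at EE-456"

    labeler = ContextLabeler()
    labeler.feedback("alice", "EE-456", "sitting", "I am in the office")
    print(labeler.describe("alice", "EE-456", "sitting"))  # -> "I am in the office"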

Understanding the context is the first stage. The second stage is understanding the user’s goals: why is the user doing what he is doing? The aim in this stage is to understand the user’s world, including action-reaction connections and causality. The main challenge here is how to represent the user’s behavior and how to deduce it from the data, possibly with the help of lightweight feedback. A large part of understanding purpose concerns predicting future user context from the current context, behavior patterns, and outside factors.
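
As a toy illustration of context prediction, the sketch below fits a first-order Markov model over context labels. The real problem would also condition on behavior patterns, time of day, and exogenous factors; the class and its interface are hypothetical.

    # Sketch: predicting the next context from the current one with a
    # first-order Markov model over context labels. Illustrative only.
    from collections import Counter, defaultdict

    class ContextPredictor:
        def __init__(self):
            self.transitions = defaultdict(Counter)  # context -> Counter of next contexts

        def observe(self, trace):
            """trace: chronological list of context labels for one user."""
            for current, nxt in zip(trace, trace[1:]):
                self.transitions[current][nxt] += 1

        def predict(self, current):
            """Most likely next context given the current one, or None if unseen."""
            counts = self.transitions[current]
            return counts.most_common(1)[0][0] if counts else None

    predictor = ContextPredictor()
    predictor.observe(["at home", "commuting", "in the office", "in a meeting"])
    print(predictor.predict("in the office"))  # -> "in a meeting"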

The final and most challenging stage of the research is to close the feedback loop and figure out how to proactively assist the user. For example, if we deduce (implicitly or explicitly) that the user is interested in getting more physical exercise, and we predict that he is going from his office to a meeting on a different floor, we can suggest taking the stairs rather than the elevator. As another example, if we know that the user is fond of jazz and is traveling to another city, we can actively discover whether there are jazz shows at the destination, find out their prices and locations, and even make reservations.
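
At its simplest, closing the loop could look like rules that fire when an inferred interest meets a predicted context, as in the hypothetical sketch below; learned policies would eventually replace such hand-written rules, and every name here is made up for illustration.

    # Sketch: rule-based proactive suggestions. Inputs (inferred interests,
    # predicted next context) would come from the earlier stages.
    def suggest(interests, predicted_context):
        """Map interests and a predicted context to candidate suggestions."""
        suggestions = []
        if "exercise" in interests and predicted_context == "meeting on another floor":
            suggestions.append("Take the stairs instead of the elevator.")
        if "jazz" in interests and predicted_context.startswith("traveling to "):
            city = predicted_context[len("traveling to "):]
            suggestions.append("Find jazz shows in " + city + ", with prices, locations, and booking.")
        return suggestions

    print(suggest({"exercise", "jazz"}, "meeting on another floor"))
    # -> ['Take the stairs instead of the elevator.']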

The research involves developing the algorithms for each of the three stages as well as a significant data collection effort. From an algorithmic perspective, we will rely heavily on machine learning, focusing on techniques such as lifelong learning (learning from very long traces representing many months), multi-view learning (learning from several people while taking their differences into account), sensor fusion (fusing information from multiple sources), and knowledge representation (representing the user’s context as well as exogenous factors in a way that is amenable to computation).
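
To illustrate one of these ingredients, sensor fusion, the sketch below combines two noisy position estimates (say, from GPS and Wi-Fi) by inverse-variance weighting, which is the scalar form of a Kalman update. The coordinates and variances are made-up numbers in a local frame, not measured data.

    # Sketch: fusing two independent position estimates by
    # inverse-variance weighting. Numbers are illustrative.
    import numpy as np

    def fuse(est_a, var_a, est_b, var_b):
        """Inverse-variance weighted fusion of two independent estimates."""
        w_a, w_b = 1.0 / var_a, 1.0 / var_b
        fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
        fused_var = 1.0 / (w_a + w_b)
        return fused, fused_var

    gps = np.array([12.0, 7.5])   # outdoor fix in meters, variance ~ (10 m)^2
    wifi = np.array([10.0, 8.0])  # indoor estimate, variance ~ (5 m)^2
    pos, var = fuse(gps, 100.0, wifi, 25.0)
    print(pos, var)  # fused estimate lies closer to the lower-variance Wi-Fi fix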

The expected output by the end of the first year includes:

  1. A working “data collection” platform on Android that records all the sensory input from a user and allows for light supervision.
  2. Recorded data traces from users.
  3. An algorithmic framework for understanding context from data focusing on using natural language to describe the context.
  4. A proposal for a methodology to identify the user’s purpose from the user’s behavior and to proactively interact with the user.

The expected output in subsequent years will include:

  1. A prototype application that records sensory inputs, transfers the information to a cloud-based application for analysis, and can provide feedback to the user.
  2. Recorded traces from hundreds of users, collected with successive generations of the sampling platform. This data set will be released to the research community.
  3. Algorithms for understanding context and for proactive intervention. The algorithms will be deployed in the cloud-based application.
  4. Case studies and analysis of proactive intervention in the user’s daily routine. This will include new learning algorithms and an analysis of their success with actual users.

STATUS
TBD
PEOPLE
Prof. Shie Mannor, Technion EE
Prof. Scott Kirkpatrick, HUJI CSE
Dr. Koby Crammer, Technion EE
Dr. Amnon Dekel, Shenkar and HUJI CSE
PUBLICATIONS
Shie Mannor
Scott Kirkpatrick
  1. Dekel, A., Kirkpatrick, S., Weller, S., Cadan, J., Bar, H., Kessler, B. (2014), “What Am I Doing Now? Pythia: A Mobile Service for Spatial Behavior Analysis”. Mobility 2014, Paris, France.

Koby Crammer

Amnon Dekel