Wearables: Projects in this area can be software or hardware. On the software side, our work centers on expanding the range of activities that wearable devices can recognize, from things like steps walked and calories burned to daily health activities such as brushing teeth, washing hands, etc. This is primarily done using Android and Pebble apps to collect data, then analyzing that data with machine learning. One area for projects would be using machine learning to recognize other such activities (not necessarily health-related). Some projects past students have worked on include recognizing pacing, putting on a seatbelt, and smoking.
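To give a flavor of what this kind of recognition involves, here is a minimal sketch: windowed sensor readings are summarized into simple features (mean and standard deviation of accelerometer magnitude) and labeled with a toy nearest-centroid classifier. The data values, feature choices, and classifier are all illustrative assumptions, not the pipeline our lab actually uses.

```python
import math
from collections import defaultdict

def window_features(samples):
    """Summarize one window of accelerometer magnitudes as (mean, std)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return (mean, math.sqrt(var))

class NearestCentroid:
    """Toy classifier: label a window by its closest class centroid."""
    def fit(self, windows, labels):
        feats = defaultdict(list)
        for w, y in zip(windows, labels):
            feats[y].append(window_features(w))
        self.centroids = {
            y: tuple(sum(c) / len(c) for c in zip(*fs))
            for y, fs in feats.items()
        }
        return self

    def predict(self, window):
        f = window_features(window)
        return min(self.centroids,
                   key=lambda y: sum((a - b) ** 2
                                     for a, b in zip(f, self.centroids[y])))

# Hypothetical training windows: steady readings for a gentle repetitive
# activity, high-variance readings for a vigorous one.
train = [[9.8, 9.9, 9.7, 9.8], [9.6, 9.8, 9.7, 9.9],
         [5.0, 14.0, 6.0, 13.0], [4.0, 15.0, 5.0, 14.0]]
labels = ["brushing", "brushing", "walking", "walking"]
clf = NearestCentroid().fit(train, labels)
print(clf.predict([9.7, 9.8, 9.9, 9.8]))  # prints "brushing"
```

Real projects in the lab use richer features and stronger models, but the shape of the problem (segment, featurize, classify) is the same.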
Alternatively, one area that remains to be explored is the interventions that can be built on top of this recognition. For example, assuming you could detect when someone was not taking their medication, what kind of application would you build? Would you provide reminders to the patient, to their family members, and/or to their physician? That would be more of an HCI project.
In terms of hardware, we have three projects that have all been worked on extensively by undergrads in past semesters. While I'm calling them hardware projects, note that all of them have both hardware and software components, so you can choose to focus on one side if you'd like. One is HaptiGo/HaptiMoto, a navigational vest that we've used to navigate soldiers and motorcyclists (Video). It connects to Google Maps and provides directions to users through vibrations at three locations on the user's back. There are a few opportunities for projects on this one. The Android code is ancient and could do with updating: making the UI prettier and more intuitive, and getting it running on modern Android phones. One study we've been wanting to do, which was held up by the IRB but should be cleared now, is testing the vest on users suffering from cognitive decline to see whether the vest could feasibly help them find their way home if they get lost.
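As a rough illustration of the software side of the vest, the core mapping is from a navigation instruction to a pattern of motor pulses. The motor layout, instruction names, and arrival cue below are hypothetical stand-ins, not the actual HaptiGo protocol.

```python
# Hypothetical layout: three motors across the back, indexed left to right.
MOTORS = {"left": 0, "center": 1, "right": 2}

def motors_for_instruction(instruction):
    """Map a simplified navigation instruction to the motor indices to pulse."""
    table = {
        "turn-left": ["left"],
        "turn-right": ["right"],
        "continue": ["center"],
        "arrived": ["left", "center", "right"],  # all-motor pulse as an arrival cue
    }
    if instruction not in table:
        raise ValueError(f"unknown instruction: {instruction}")
    return [MOTORS[name] for name in table[instruction]]

print(motors_for_instruction("turn-left"))  # prints [0]
```

In the real system this mapping sits between the Google Maps directions feed and the Bluetooth link to the vest's motor controller; the fun design questions are in the vibration patterns themselves (intensity, rhythm, anticipatory cues before a turn).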
The second hardware project is CANE, an offshoot of the vest focusing on navigation and obstacle avoidance for the blind. This is a belt that connects to Google Maps and navigates users to their destination using vibration motors.
The last hardware project is PhysiotherAPPy, a system designed to improve physical therapy compliance. It uses a Microsoft Kinect to detect arm exercises and then gives users feedback through a wearable armband for tactile feedback (via vibration motors) and a Windows application for visual feedback. It also has an iOS app that allows communication between the patient and the physical therapist, where both sides can track which exercises have been done. There is a ton of stuff that can be worked on here. First off, we have most of a journal paper written, but a little C# coding still needs to be done on the backend of the Windows application. Once that's done, we need to conduct user studies to test the efficacy of the entire system. Beyond that, you could design and build a better armband, work on recognizing more exercises, improve the iOS app, build an Android version of the app, and/or improve the Windows application.
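To make the exercise-detection piece concrete, here is a small sketch of the kind of geometry involved: given three tracked joints (the Kinect reports 3D positions for shoulder, elbow, and wrist), compute the elbow angle and decide whether to give corrective haptic feedback. The target angle, tolerance, and feedback labels are assumptions for illustration, not the system's actual logic.

```python
import math

def angle_deg(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def rep_feedback(shoulder, elbow, wrist, target=90.0, tolerance=15.0):
    """Decide which feedback channel fires for one pose sample."""
    theta = angle_deg(shoulder, elbow, wrist)
    if abs(theta - target) <= tolerance:
        return "ok"        # visual check mark, no vibration
    return "vibrate"       # armband buzz prompts a correction

# Straight arm (~180 degrees) vs. a right-angle bend:
print(rep_feedback((0, 1, 0), (0, 0, 0), (0, -1, 0)))  # prints "vibrate"
print(rep_feedback((0, 1, 0), (0, 0, 0), (1, 0, 0)))   # prints "ok"
```

Recognizing a full exercise means tracking sequences of such angles over time (e.g. a rep is a bend past the target followed by a return), which is where the more interesting project work lies.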
KidGab: KidGab is a social network for young girls. This is great if you're interested in any kind of web development (both front end and back end), as it has a great API to work with, but there are also opportunities to do machine learning. You could work on improving the user experience by adding functionality, or you could analyze any of the vast datasets the network has accumulated using machine learning to see what you can discern.
Mechanix/Persketchtivity: Mechanix is a system we built for teaching undergraduate physics/statics students how to draw trusses and the forces acting on them. We're in the process of converting it to a web application, so this would be a good opportunity to work on front-end and/or back-end development. Persketchtivity is a web application that teaches undergraduate engineering students how to draw in perspective, so it is also a good project for front-end and/or back-end development, as well as sketch recognition.
Eye-tracking: This research is focused on developing new interactions with eye-tracking. Vijay, who is currently working on this in our lab, has been developing a system that uses an eye-tracker and a foot-wearable to replace the traditional mouse for those who are unable to use their hands. More recently he's also been exploring it as an alternative to manually entering your PIN at an ATM. This is a really great project to get on if you're interested in eye-tracking, developing wearables, and coding in C++ and/or C#.
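The basic interaction loop here can be sketched simply: the eye-tracker reports a gaze point, the system maps it to a screen position (with smoothing to damp jitter), and the foot-wearable supplies the "click." The screen size, coordinate convention, and smoothing constant below are illustrative assumptions, not details of Vijay's actual system.

```python
# Assumed convention: the tracker reports gaze as normalized (0..1) coordinates.
SCREEN_W, SCREEN_H = 1920, 1080

def gaze_to_pixel(gx, gy):
    """Map normalized gaze coordinates to screen pixels, clamped to the screen."""
    x = min(max(gx, 0.0), 1.0) * (SCREEN_W - 1)
    y = min(max(gy, 0.0), 1.0) * (SCREEN_H - 1)
    return round(x), round(y)

def smooth(prev, new, alpha=0.3):
    """Exponential smoothing to damp eye-tracker jitter between frames."""
    return tuple(round((1 - alpha) * p + alpha * n) for p, n in zip(prev, new))

cursor = gaze_to_pixel(0.5, 0.5)
cursor = smooth(cursor, gaze_to_pixel(0.52, 0.5))
print(cursor)
```

Pairing gaze with a separate selection channel (here, a foot tap instead of dwell time) sidesteps the classic "Midas touch" problem of gaze-only interfaces, where everything you look at gets activated.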
Neuropsychological sketch recognition: This work focuses on making tablet-based versions of traditionally paper-based neuropsychological tests (such as the Rey-Osterrieth complex figure test). Clinicians use these tests to evaluate people's cognitive function, and we've found that by having users take the tests on a tablet rather than on paper, you can discern much more about a person's condition from the specifics of how they drew. The work so far has focused on recognizing a few tests, but we have a large book of many more, so a good project would be to take one or two of those tests and see if you can recognize them using sketch recognition.
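Part of what makes the tablet version richer than paper is that it captures *how* a figure was drawn, not just the final image. The sketch below computes two such features, pen speed within a stroke and pen-up pauses between strokes, from a hypothetical stroke format of (x, y, t) samples; the actual feature set and data format used in this work will differ.

```python
import math

def stroke_length(stroke):
    """Total path length of one stroke of (x, y, t) samples."""
    return sum(math.dist(p1[:2], p2[:2]) for p1, p2 in zip(stroke, stroke[1:]))

def stroke_speed(stroke):
    """Average pen speed over one stroke (distance units per second)."""
    duration = stroke[-1][2] - stroke[0][2]
    return stroke_length(stroke) / duration if duration > 0 else 0.0

def pause_times(strokes):
    """Pen-up pauses between consecutive strokes, in seconds."""
    return [b[0][2] - a[-1][2] for a, b in zip(strokes, strokes[1:])]

s1 = [(0, 0, 0.0), (3, 4, 1.0)]   # a stroke of length 5 drawn in 1 s
s2 = [(3, 4, 2.5), (6, 8, 3.0)]
print(stroke_speed(s1))           # prints 5.0
print(pause_times([s1, s2]))      # prints [1.5]
```

Features like these (plus stroke order, hesitations, and pressure where available) are exactly the signal that a paper test throws away, which is why the tablet versions can reveal more about a patient's condition.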