Many sensors used in robotic perception produce 3D point clouds. Stitching these point clouds together into a consistent 3D representation of the world is computationally expensive, and often cannot be done in real time. Our algorithm exploits the scan geometry of one such sensor (the Velodyne LiDAR) to detect planar structures in the environment, and uses these structures to efficiently reconstruct the robot's path.
cereal takes arbitrary data types and reversibly turns them into different representations, such as compact binary encodings, XML, or JSON. cereal was designed to be fast, light-weight, and easy to extend - it has no external dependencies and can be easily bundled with other code or used standalone.
The Neuromorphic Robotics Toolkit, developed alongside Dr. Laurent Itti, is a C++ framework for writing distributed, modular applications. The main focus of the project is fine-grained modularity for computer vision applications, which requires a very high-performance messaging system.
The main features of the toolkit are:
The task of the Neovision2 project is to develop a state-of-the-art object recognition system based on current neuroscience research, and to demonstrate that such a biologically plausible system can outperform systems that are not biologically inspired.
My main contributions to the project were:
The CT2WS project was initiated by DARPA to develop a soldier-portable visual threat detection platform. A further goal of the project was to implement such a system using biologically plausible models of primate visual attention.
I acted as the primary developer in iLab to implement, tune, and interface a model of visual attention developed by Dr. Itti and Dr. Baldi (see here for more details). Our lab worked in close conjunction with Hughes Research Labs to produce a working system that fused the automatic detections of our attention model with the detected neural signatures of a human operator.
The Beobot 2.0 is a rolling mini cluster used in the iLab for visual navigation research. Its mechanical base is an electric wheelchair, which holds a custom CPU rack, a water cooling system, and a host of cameras and other sensors.
The main work I did on this project was as follows:
I advised Sagar Pandya on his semester-long directed research project to implement a fast, SSE-optimized edge detector in C. The code has an easy-to-use plain C interface, compiles to a .mex file for interfacing with MATLAB, and has an interface to NRT.
My first job as a graduate student at USC was TAing CS445 - Introduction to Robotics. At the time, the class was based around a 2 MHz microcontroller robotics board called the Handy Board.
After my first semester of teaching, it became obvious that the Handy Board was just no longer cutting the mustard when it came to teaching modern robotics. After much research, I determined that the best solution was to design my own custom board to use for the class. The result was a great success, and has since allowed us to upgrade the CS445 lab to teach topics such as image processing and probabilistic robotics.
Here are some quick specs on the board:
Nemo::setMotor(0, 100); to set motor 0 to 100% power.
float distance = Nemo::getSonar(6); to get the distance from a sonar connected to pin 6.
Image<PixRGB<byte> > img = Nemo::grabImage(); to grab an image from an attached USB webcam.
USC hosts two primarily undergraduate robotics teams which compete in yearly competitions:
I advise both teams on all aspects of their designs, including software, electrical, and mechanical.
Randterm is a simple Python serial terminal inspired by the awesome (but Windows-only) Realterm. It was built mainly to help me debug serial protocols for microcontrollers, and so it includes the ability to send and display data in ASCII, decimal, hex, or binary.
SimpleSLAM simulates a robot cruising around a 2D environment, occasionally receiving range and bearing measurements to landmarks. The task of the robot is to simultaneously build a map of the environment, and to localize itself within that map. It was written to teach (myself and others) about the fundamentals of simultaneous localization and mapping.
I had a ton of fun writing this site because I used it as an excuse to learn as much as possible about modern web programming. Here's how the site works: