- Finish KinectDAR visually impaired assistance device.
- Test and work with reading IMU Data.
- Work on presentation for grant proposal.
- Research IMU based localization and motion tracking.
- Obtain preliminary parts – ESC + Motors
- Experiment with connections to Arduino, attempt to control motor speed with Arduino
- Derive general control algorithm i.e. mapping thrust/speeds to motor values
- May require use of propeller force output calculation.
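A minimal sketch of what the thrust-to-motor mapping might look like. This assumes a standard "X" frame motor mixer; the exact signs depend on the frame layout and motor spin directions, and all names here are illustrative, not a final design:

```python
def mix(thrust, roll, pitch, yaw):
    """Map normalized thrust and roll/pitch/yaw torque commands to
    four motor outputs for an assumed X-configuration quad (motor
    order: front-left, front-right, rear-right, rear-left).
    Outputs are clamped to the [0.0, 1.0] throttle range."""
    m = [
        thrust + roll + pitch - yaw,  # front-left  (assumed CW)
        thrust - roll + pitch + yaw,  # front-right (assumed CCW)
        thrust - roll - pitch - yaw,  # rear-right  (assumed CW)
        thrust + roll - pitch + yaw,  # rear-left   (assumed CCW)
    ]
    return [min(1.0, max(0.0, v)) for v in m]
```

With zero torque commands all four motors receive the same throttle, which is the hover case.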
- Work on serial Raspberry Pi to Arduino connection
- Create basic serial client algorithm. Python on the RPi and Sketch for Arduino
- Design serial command protocol for RPi to Arduino communication i.e. framework for sending commands from RPi to Arduino.
- Research and develop Arduino to flight board communication protocol so that the Arduino can essentially emulate the output of an RC receiver, which normally would come from user commands.
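RC receivers typically output servo-style pulses in the 1000–2000 µs range, so emulating one largely means converting normalized commands into pulse widths. A small sketch of that conversion (the range endpoints are the usual convention but should be verified against the actual flight board):

```python
def stick_to_us(value, lo=1000, hi=2000):
    """Convert a normalized stick command in [-1.0, 1.0] to a
    servo-style pulse width in microseconds (1500 us = center).
    Out-of-range inputs are clamped."""
    value = max(-1.0, min(1.0, value))
    return int(round((lo + hi) / 2 + value * (hi - lo) / 2))
```

On the Arduino the equivalent value would be emitted as PWM/PPM pulses toward the flight controller.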
- Work with Ground Control program to coordinate Arduino command -> ESC output -> flight controller response.
- Play with PID control
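For reference while experimenting, a textbook PID loop looks like the following; the gains and output limits here are placeholders to be tuned against the real vehicle:

```python
class PID:
    """Minimal PID controller sketch; gains are placeholders."""

    def __init__(self, kp, ki, kd, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        """One control step: returns the clamped actuator command."""
        error = setpoint - measured
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, out))
```

A real loop would also need integral windup protection once the motors saturate.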
- Actively begin work on Autonomous Navigation for Flying Robots and Machine Learning courses on EDX
- Begin creating main data flow control structure on Raspberry Pi
- Higher level decisions made on RPi or transmitted to RPi from user.
- Sensor readings are read and preliminary calculations are done on the Arduino then sent to RPi for heavier computations.
- Preliminary Phase 0: Boot Sequence.
- Overview: This phase marks the completion of the boot-up sequence, which will entail the connection of all ESCs/motors, sensors, and processors. The sequence will most likely initiate on the RPi and “trickle” down to the lower-level components. The process will check that all components have been successfully connected and start all necessary programs for flight.
- Phase 1: Basic Hover
- Overview: Phase 1 will accomplish the first quad rotor design stage of a basic hovering flight. In theory the quad rotor will accelerate off the ground and hold a specified vertical altitude for an allotted amount of time before touching back down.
- As chief integrated systems architect, I will design and integrate our prototype flight control board for in-flight stabilization and vehicle attitude control.
- Start to develop algorithms to translate quad copter position in 3D space.
- Experiment with yaw, pitch, and roll value outputs to flight controller to see which combinations of input factors lead to what output factors.
- Begin to develop a function that takes an (x, y, z) coordinate relative to the robot's current position and translates the robot to the new position. Note that the sum of these individual coordinate translations gives the position on a global scale.
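The "sum of relative translations" idea can be sketched in a couple of lines; this assumes a fixed heading for simplicity (with yaw changes, each delta would first need to be rotated into the global frame):

```python
def apply_translation(global_pos, delta):
    """Accumulate a relative (x, y, z) move into the global position
    estimate: the global position is the running sum of all
    relative translations (fixed-heading assumption)."""
    return tuple(g + d for g, d in zip(global_pos, delta))
```

Chaining calls over a list of commanded moves yields the global position estimate.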
- Design algorithm to work within bounds of a specified maximum velocity.
- Tune PID loop on flight controller accordingly.
- Work with sensor feedback control.
- Two distance sensors will be on the robot. One will be mounted facing down for localized altitude monitoring, and the other will face forward for collision avoidance.
- Work with IMU for movement feedback monitoring.
- Compass for global directional vector monitoring.
- Monitor 3-axis acceleration for position estimate feedback
- Monitor speed of yaw, pitch, roll to tune output function to a desirable control level
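One common way to get a usable attitude estimate from the IMU readings above is a complementary filter: the gyro integration is smooth but drifts, while the accelerometer's gravity-based angle is noisy but drift-free. A single-axis sketch (the 0.98 blend factor is an assumed starting point to tune):

```python
import math

def complementary_pitch(prev_pitch, gyro_rate, ax, az, dt, alpha=0.98):
    """Blend integrated gyro rate with the accelerometer's
    gravity-derived pitch estimate. Angles in radians; ax/az are
    accelerometer components along the body x and z axes."""
    accel_pitch = math.atan2(ax, az)          # angle from gravity vector
    gyro_pitch = prev_pitch + gyro_rate * dt  # integrate angular rate
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Running this each sensor tick keeps the pitch estimate anchored to gravity while staying responsive to fast rotations.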
- Phase 2: Directed Movement
- Overview: This phase is a continuation of the hover phase and it will add the element of interpreting commands to move in a specified <x, y> directional vector for a specified amount of time.
- Example: Takeoff -> Hover -> Move left -> Stop -> Hover -> Land
- At this point we will hopefully have a functional Arduino to Quad communication protocol and command translation agent so that movement parameters can be customized to user specified values.
- Phase 3: Autonomous Wander
- Overview: The autonomous wander will essentially act as our benchmark level of autonomy for this project. The idea is that, without user input, the robot will fly around aimlessly while avoiding obstacles.
- The process will go as follows: the quad will proceed forward until it detects a path obstruction, then it will adjust its yaw until the obstruction is removed.
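The decision rule above is simple enough to sketch directly; the 100 cm clearance threshold is an assumed placeholder, and the real loop would feed the forward distance sensor into this each tick:

```python
def wander(readings, clearance_cm=100):
    """Map a stream of forward-distance readings (cm) to motion
    commands: 'forward' while the path is clear, 'yaw' while an
    obstruction is inside the clearance threshold."""
    return ["forward" if d > clearance_cm else "yaw" for d in readings]
```

A production version would likely add hysteresis so the quad does not oscillate between the two states at the threshold boundary.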
- Hopefully I will have already experimented with RPi Camera vision by now…
- Start development on RPi Camera board integration into overall system architecture.
- Research image compression for integrated devices.
- Create interface across wireless XBee communication system to send images to host Android device.
- Research computer vision techniques for RPi: Matlab, OpenCV (C++ or Python)
- Figure out efficient debugging and testing technique for computer vision development
- Create simple blob detection for specified color threshold for RPi system.
- Will be designed to be robust, i.e. the algorithm will ignore static background noise.
- Continuous blob detection and size ranking.
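The size-ranking half of this step can be prototyped without a camera: label the connected regions of a thresholded binary mask and sort them by area. A stdlib-only sketch (a real build would first produce the mask from a camera frame, e.g. with an OpenCV color threshold):

```python
from collections import deque

def blobs_by_size(mask):
    """Label 4-connected regions of True cells in a 2D boolean mask
    and return their pixel counts, largest first."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:          # BFS flood fill of one blob
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sorted(sizes, reverse=True)
```

Discarding blobs below a minimum size is one cheap way to realize the "ignore background static" goal.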
- Work with shape detection.
- Combinations of lines and vertices will contribute towards overall ranking algorithm
- Ex: Triangles, square, circle, etc.
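As a placeholder for the line/vertex ranking idea, shapes can be crudely classified by vertex count; with OpenCV the count would come from approximating a detected contour as a polygon. The name table here is illustrative:

```python
def classify_shape(num_vertices):
    """Crude vertex-count classifier for the shape-ranking step.
    Contours with many vertices are treated as circles."""
    names = {3: "triangle", 4: "square", 5: "pentagon", 6: "hexagon"}
    return names.get(num_vertices, "circle" if num_vertices > 6 else "unknown")
```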
- Phase 4: Autonomous Objective Seeking
- Overview: Autonomous objective seeking will be a demonstration and benchmark for the quad rotor’s ability, given an unknown objective layout, to seek out, find, and then navigate toward specified objectives or landmarks.
- An example of a landmark could be very simple, like a red square or a blue triangle, but, if possible, landmarks could also grow in complexity to include pictures of key objects fixed in an environment, like an American flag.
- This stage will also try to begin working with the notion that the quad rotor will start to build a spatial map of found objects, i.e. it will store the approximated (x, y, z) coordinates of each object.
- Phase 5: Objective Mission w/ Pre-made Maps
- Overview: Using a pre-defined map of an environment, the quad copter will be given some task to perform within the environment, and the quad rotor will attempt to execute the mission to the best of its abilities, accounting for unforeseen complications (e.g. a newly introduced obstacle).
- We propose perhaps storing the map as a bitmap with nodes of barrier and open space, or even specially marked nodes, and these nodes can be dynamically updated as new information is received.
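A minimal sketch of that map representation, assuming three node states (the state names and grid API are placeholders):

```python
FREE, BARRIER, MARKED = 0, 1, 2  # assumed node states

class GridMap:
    """Bitmap-style environment map: a 2D grid of nodes that can be
    dynamically updated as new sensor information arrives."""

    def __init__(self, width, height):
        self.cells = [[FREE] * width for _ in range(height)]

    def update(self, x, y, state):
        """Overwrite one node with newly observed information."""
        self.cells[y][x] = state

    def is_open(self, x, y):
        return self.cells[y][x] == FREE
```

Path planning for the mission would then run over `is_open` queries against the current grid.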
- We plan to employ a host of sensor input to refine our localization estimates within the environment. Possibly a separate IMU will perform motion tracking to act as a sensor estimate and combined with the quad’s knowledge of its own kinematics we can use a Kalman filter to track motion. Using distance sensors and possibly a computer vision landmark system the quad could make guesses as to where it is within its known map.
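To make the Kalman filter idea concrete, here is one predict/update cycle in a single dimension, fusing a commanded motion with a position measurement. The noise variances are assumed placeholders, and the real filter would be multi-dimensional:

```python
def kalman_step(x, p, u, z, q=0.01, r=0.25):
    """One 1-D Kalman filter cycle. x: position estimate, p: its
    variance, u: predicted displacement from the motion model,
    z: position measurement, q/r: assumed process and measurement
    noise variances."""
    # Predict: apply the motion model and grow the uncertainty.
    x = x + u
    p = p + q
    # Update: blend in the measurement, weighted by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p
```

When the measurement agrees with the prediction, the estimate is unchanged but its variance shrinks, which is exactly the refinement of the localization estimate described above.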
- Phase 6: Objective Mission In Unknown Environment
- Overview: Instead of coming equipped with the knowledge of the geographic layout of an environment, the quad must passively build the map of its environment and relay this information back to the host Android. The quad rotor will still carry out its mission, but we must design the algorithm to act with uncertainty.
- Essentially the quad rotor will explore its environment searching for its goal, and begin to build a representation of its environment through a process known as SLAM (Simultaneous Localization and Mapping).
- Begin process of consolidation, i.e. find a stopping point in development so documentation can be started.
- Final Phase: Documentation and Presentation Preparation
- Compile findings into research paper style format.
- Heavily document final product with video, photo, and description.
- Design an example mission to showcase abilities of quad rotor to crowd.