Every student in ECEN 4623/5623 completes a project, either individually or on a small development team, using VxWorks or Linux. Projects must apply the real-time and embedded systems theory covered in class and usually involve communications, device control, or telemetry acquisition. Groups must complete a detailed design using typical industry design methodologies, including UML, SA/SD, SDL, and hardware EDA tools where applicable. These methods are not taught in the course; if you are not familiar with HW/FW/SW system design methods, please see the reference texts held on reserve in the library. Students can propose their own project or complete one of the standard course projects provided by the instructor. All projects must be proposed with MINIMUM, TARGET, and OPTIMAL objectives for the system clearly stated (see Grading criteria below). Here are a few examples of projects:
Jonathan Bruneau, Richard Devore, and Abhishek Ramesh Keshav first took on this project in the Fall of 2007 and built a laser vision system for the Crust Crawler 6-DOF arm to locate a target, compute range to grapple it, and pick up and place the target autonomously. With their design, the target can be placed anywhere within the reachable workspace of the end effector.
Crust Crawler Laser Vision System in Action
Students will write a PCI driver for the Cirrus audio encoder/decoder and transmit packet-switched audio over 10 Mbps Ethernet between two x86 PCI targets. The Cirrus encoder is capable of 8-bit mono, 8-bit stereo, 16-bit stereo, and 20-bit stereo encode/decode. As a minimum, projects should implement 8-bit mono full-duplex VoIP; as target and optimal objectives, projects should implement a stereo format and provide transport QoS. A number of protocols, such as RTP, are typically layered over UDP/IP to provide QoS for VoIP. Basic QoS over UDP/IP can be obtained with careful tuning of the DMAs and buffers to eliminate excessive latency and the "clicking" noises caused by playback drop-outs.
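To make the RTP option concrete, here is a minimal sketch of packing the fixed 12-byte RTP header (RFC 3550 layout) that would precede each UDP audio payload. The payload-type value (0 = PCMU, an 8-bit mono format) is illustrative; this is a field-packing sketch, not a full RTP stack, and the sequence/timestamp management a real sender needs is omitted.

```python
# Sketch: pack a 12-byte RTP header per RFC 3550. Payload type 0 (PCMU,
# 8-bit mu-law mono) matches the minimum-objective audio format; values
# here are illustrative assumptions, not a complete RTP implementation.
import struct

def rtp_pack_header(payload_type, seq, timestamp, ssrc):
    """Pack V=2, no padding/extension/CSRC, marker=0, big-endian fields."""
    return struct.pack("!BBHII",
                       0x80,                 # V=2, P=0, X=0, CC=0
                       payload_type & 0x7F,  # M=0 plus 7-bit payload type
                       seq & 0xFFFF,         # sequence number (drop detection)
                       timestamp & 0xFFFFFFFF,  # sample clock (jitter buffer)
                       ssrc & 0xFFFFFFFF)    # source identifier

hdr = rtp_pack_header(0, 0x1234, 0xAABBCCDD, 0x01020304)
assert len(hdr) == 12 and hdr[0] == 0x80
```

The sequence number lets the receiver detect lost or reordered datagrams, and the timestamp drives the playback jitter buffer, which is exactly the buffer tuning the QoS objective asks for.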
Project 2 Resources
Students use the Bt878 (Hauppauge WinTV PCI) card and, optionally, the Cirrus 4281 audio CODEC card to encode audio and video for transport (ideally compressed) over a TCP/IP or UDP/IP network and for playback. Together, the two cards can provide a video-conferencing capability when two targets are set up this way with duplex transport of the audio and video. The Hauppauge WinTV card has an example VxWorks device driver bottom-half. For a more ambitious project, students can consider developing a bottom-half driver for a new CODEC such as:
Video Compression and Image Processing Resources
Video Conf Video Clip
Many of the newer video CODECs include hardware-based MPEG-1 or MPEG-2 video stream encoding. Alternatively, the Bt878 can be used with a software compression algorithm such as "change only", difference imaging, run-length encoding, or many other typical data compression schemes.
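As a sketch of the simplest software option mentioned above, here is a byte-wise run-length encoder/decoder. In practice the frame data would first be differenced against the previous frame ("change only") so that long runs of zeros appear; the (count, value) pair format below is one common choice, not a prescribed project format.

```python
# Sketch: byte-wise run-length encoding as (count, value) pairs.
# Assumes pre-differenced frame data with long constant runs; the
# pair format is an illustrative choice, not a required one.

def rle_encode(data):
    """Encode bytes as (count, value) pairs; runs are capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data):
    """Expand (count, value) pairs back into the original byte stream."""
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)
```

Note that RLE expands incompressible data (each lone byte costs two), which is why it pays off only after frame differencing has created runs.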
Using the Bt878, it is not possible to do audio playback, so the Cirrus 4281 must be used in addition; newer CODECs provide full encode and decode. Also, the encoded NTSC video data is normally displayed on a PC using the example Python digital video display application; with a newer CODEC providing full encode and decode, the implementation would not require the PC for video display or the two separate PCI devices.
Project 3a Resources
Students will integrate an NTSC camera, a Bt878 frame grabber, and a Pentium VxWorks target platform for computer vision on a tethered vehicle that performs line following and/or collision avoidance. As a minimum, the vehicle should be able to follow a high-contrast tape line on the floor with tethered power, a serial data/control link, an NTSC video link, and a fixed computing and video processing platform. It is useful to stream the video back to a lab PC for debugging (so you can see what the vehicle is seeing); however, given 10 Mbps Ethernet and the limitations of the Python viewer running on Windows, frame rates of 4 fps are the typical maximum obtainable unless change-only and/or run-length-encoding compression is used. With compression, 10 fps is typically obtainable for the streaming link; the card and cameras are capable of 30 fps for embedded frame processing.
RC Car Line Follower Video Clip - MPEG
Mobile Line Follower START and END Video Clips - MPEG
Another Mobile Line-Follower Video Clip - AVI
And one More Line-Tracer Video Clip - AVI
Computer Vision Resource Web Links:
As a goal, students can put together a completely autonomous platform using the PC/104+ equipment pictured below and battery power; the PC/104+ form factor is 3.5 inches square, and power consumption is about 4 watts.
Acroname Robotics has a number of robotic sensors and even a generic mobile platform which may also be useful for this project.
Project 3b Resources
Students have built a number of interesting variants on the basic computer vision platforms, including: 1) a camera embedded in a robotic arm for visual feedback, 2) a scanner that copies a drawing (e.g., the Etch-n-Sketch), and 3) numerous plotter and robotic arm vision projects.
Etch-n-Sketch Video Clip
Students build a 5-DOF robot from a kit (OWI-7), wire the relay control (and position/limit sensor interface) to an x86 microprocessor-controlled relay board, and then write software to implement a basic "pick and place" capability. The microcontroller system from the Embedded Systems Fundamentals course may be used, or alternatively a commercial ADC/relay interface, typically commanded and sampled via an RS-232 serial interface. The motor interface can be controlled with 10 relays used for forward/reverse control of each joint motor by switching polarity through the hand-controller PCB interface. The basic motor circuit has GND, +3V, and +6V rails, which are switched to create positive/negative potentials across each joint motor. As a minimum, the robotic arm must have limit switches to prevent the arm from being over-driven on the shoulder and elbow joints and must dead-reckon to pick up a target object and place it at a different target location. Here are some pictures of one of the robots in action.
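The polarity-switched relay scheme above can be sketched as a bitmask builder: each joint gets a forward and a reverse relay, and closing both at once would short the supply rails, so the software only ever sets one of the pair. The relay numbering and joint names below are assumptions for illustration, not the actual lab relay-board wiring.

```python
# Sketch: compose a 10-bit relay mask for polarity-switched joint motors.
# Relays 2j (forward) and 2j+1 (reverse) per joint are an assumed
# numbering; the real relay-board wiring may differ.

JOINTS = ["base", "shoulder", "elbow", "wrist", "gripper"]

def relay_mask(commands):
    """commands: dict joint -> +1 (forward), -1 (reverse), 0 (off).
    Returns a 10-bit mask; at most one relay of each pair is ever set,
    so a joint can never see both polarities at once."""
    mask = 0
    for j, name in enumerate(JOINTS):
        direction = commands.get(name, 0)
        if direction > 0:
            mask |= 1 << (2 * j)       # forward relay for joint j
        elif direction < 0:
            mask |= 1 << (2 * j + 1)   # reverse relay for joint j
    return mask
```

The resulting mask would then be framed into whatever command bytes the RS-232 relay-board protocol expects; encoding the mutual exclusion in software this way is cheap insurance against a command that energizes both relays of a pair.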
Project 4 Resources
Students write code for a robust serial link using either forward-error-correction codes (e.g., Reed-Solomon) or byte-stream EDAC codes (e.g., Hamming), along with a packet protocol for retransmission. The link can be implemented with a laser cross-link or an IrDA link; either allows simple insertion of errors on the physical link and therefore demonstration of the EDAC and retransmission protocol.
For the laser cross-link, students build a low-power serial laser communications device and write software for a simple packet transport over a point-to-point link protocol. This is a standard project that was first completed by a group in Spring 1999, and here is a picture of their LASER terminal setup. It is easy to introduce link errors into the LASER link (e.g., simply by blocking the cross-link momentarily), providing a nice means to test the error detection and correction protocol developed for the link. Error detection and correction on the link, as well as a basic packet format and transport protocol, are required elements of this project. Automatic retransmission and automatic error correction are nice goals for the implementation.
In the IrDA variation of this project, errors at the link layer can be injected using a TV remote to test the link-layer error detection and correction. This was first done in the Fall of 2002, as depicted below; note that the IrDA transceivers point up in the picture, toward the wall, and the link is reflected back to the Rx port from the Tx port on each side. The input for both the LASER and IrDA terminals is simple RS-232 serial; rates as high as 38,400 baud can be achieved with both terminals.
Project 5 Resources
Students write code to acquire NTSC frames through a PCI bus frame grabber and process frames for compressed network transport and viewing on a workstation. This project was first completed by students in the Spring 2000 course. As an interesting variation or enhancement, a PIC can be used to provide NTSC camera tilt/pan control for applications such as peak-up target acquisition (i.e., scanning for a light source or video target).
Peak-Up Video Clip
A variation on this project is stereo vision, using two cameras separated on a known baseline to estimate distance to a target as well as to track it with tilt/pan control. This project was first successfully completed in Summer 2003 and is pictured below.
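The range estimate behind the stereo variant reduces to one formula: with focal length f (in pixels), baseline B, and disparity d (the pixel offset of the target between the two cameras' images), range Z = f·B/d. The numbers below are illustrative, not the lab cameras' calibration.

```python
# Sketch: range from stereo disparity, Z = f * B / d.
# focal_px and the example values are assumed, not measured calibration.

def stereo_range(focal_px, baseline_m, disparity_px):
    """Range in meters to a target seen by both cameras."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: target at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# e.g. f = 500 px, 20 cm baseline, 25 px disparity -> 4 m range
assert stereo_range(500.0, 0.20, 25.0) == 4.0
```

Since Z varies as 1/d, a one-pixel disparity error hurts far more at long range than close in, which is one reason the close-in tilt/pan tracking works better than absolute ranging.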
Project 6 Resources
Very Large GPS Rover Video Clip - MPEG
GPS Rover Video Clip - MPEG
Students develop an autonomous rover vehicle using NTSC camera-based navigation. As a minimum, the vehicle should be able to return to a starting point using GPS positioning, typically with a low-cost serial GPS receiver such as the Motorola OnCore series. A digital compass can be added for heading control (to avoid having to move in order to derive a heading from GPS). As a goal, the NTSC camera can be added so that close-in navigation can be completed with computer vision (GPS is typically accurate to only about 1 meter at best). This project must include either GPS, a mobile camera, or both, i.e., processing of continuous stream data for real-time navigation. Collision avoidance can be implemented with ultrasonic rangers from Acroname Robotics.
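The return-to-start computation can be sketched as a range and bearing from the current GPS fix back to the recorded start point. A flat-earth (equirectangular) approximation is plenty accurate over rover-scale distances; the compass heading is then servoed toward the computed bearing. This is a geometry sketch, not the receiver's serial protocol handling.

```python
# Sketch: range and compass bearing from fix 1 back to fix 2 using a
# flat-earth approximation (fine over tens of meters).
import math

EARTH_R = 6371000.0  # mean Earth radius, meters

def range_bearing(lat1, lon1, lat2, lon2):
    """Degrees in; returns (distance_m, compass bearing in degrees)."""
    dlat = math.radians(lat2 - lat1)
    # shrink longitude difference by cos(latitude) before scaling
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    north = dlat * EARTH_R
    east = dlon * EARTH_R
    dist = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360.0  # 0 = north
    return dist, bearing
```

With ~1 m GPS accuracy, the rover can only servo on this bearing down to a few meters from the start point, which is exactly where the camera-based close-in navigation takes over.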
Project 7 Resources
Students develop a position-encoding interface for the 5-DOF OWI arm and software for active position control of all 5 degrees of freedom. For position feedback, it is possible to integrate 10K ohm multi-turn potentiometers with an ADC to encode position based upon the potentiometer output. It is also possible to use a position encoder based upon a radial switch that counts "clicks", using the parallel port in ECP/EPP mode for TTL-logic-level GPIO from the switches.
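The potentiometer feedback path boils down to a linear map from ADC counts to joint angle, calibrated by sampling the ADC at two known mechanical stops. The ADC resolution and calibration counts below are assumed example values, not measurements from the OWI arm.

```python
# Sketch: linear ADC-count-to-angle conversion from two calibration
# points taken at known joint stops. Counts and angles are assumptions.

def adc_to_angle(raw, cal_low, cal_high, angle_low, angle_high):
    """Map ADC counts in [cal_low, cal_high] linearly onto degrees."""
    span = cal_high - cal_low
    return angle_low + (raw - cal_low) * (angle_high - angle_low) / span

# e.g. 10-bit ADC: 100 counts at the -45 deg stop, 900 at the +90 deg stop
assert adc_to_angle(500, 100, 900, -45.0, 90.0) == 22.5
```

The two-point calibration also absorbs the potentiometer's mounting offset; the closed-loop controller then only needs this angle, the commanded angle, and a deadband to decide which relay polarity to energize.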
Project 8 Resources
50% - Completion of all proposal, analysis, design, and all sections of the final report
30% - Completion and demonstration of a working system that meets minimum goals
10% - Completion and demonstration of a working system that meets target goals
10% - Completion and demonstration of a working system that meets optimal goals

E.g., a project that completes all steps, delivers a complete final report, and demos the minimum goals will receive at least 80% of the total points for the project. Partial completion of any portion will be graded and awarded partial credit according to the judgement of the instructor. Significant deviations between the completed project and the original proposal's minimum, target, and optimal objectives are not allowed; any changes from the original proposal objectives intended after the 5th week of the course must be approved by the instructor. No changes are allowed after the 10th week.