The grand prize at the Intel India Embedded Challenge 2011 went to the ‘Solar Powered Crop Harvesting Robot’.
T.J. David and Kudikala Shravan Kumar from the International Institute of Information Technology, Hyderabad, came up with the prize-winning entry that fetched them Rs. 10 lakh in prize money. The nationwide competition is an initiative by Intel in India to recognize and reward outstanding ideas in embedded technology, based on the Intel Atom processor.
The winners also have the opportunity to participate in ‘The Next Big Idea’ run by the Department of Science and Technology (DST), Government of India, and the Indian Institute of Management, in conjunction with Intel.
The initiative hopes to catalyze entrepreneurial growth across the country by acting as an incubator for outstanding ideas. The program assesses business plans and provides practical support to develop them into sustainable business models.
“We want to cultivate technical talent in India and promote our knowledge economy, so we can cater to emerging markets. Our association with Intel India on the Intel Embedded Challenge awards helps to create a pipeline of ideas in the field of embedded technology that will further advance social development,” said H. K. Mittal, Head, National Science and Technology Entrepreneurship Development Board, DST.
“Intel believes technological innovation is the driver of future economic and social success. By providing India’s engineering students and technical professionals with exposure to the highest level of proficiency, Intel seeks to nurture an ecosystem in which innovation can thrive. This platform offers students and professionals an environment where their talent can be nurtured,” said Praveen Vishakantaiah, President, Intel India.
This year’s Intel India Embedded Challenge had two main categories – ‘Embedded Geeks’ and ‘Embedded Solutions for a Social Cause’. Entries were invited under five themes: Biomedical and Healthcare; Education; Smart Solutions; Industrial and Consumer Electronics; and Rural IT, e-Governance and Citizen Services.
In addition to the grand prize, each of the following theme winners received a Rs. 50,000 prize for the best innovation in their category.
Given here are abstracts of the eight winning projects in the different themes:
Solar Powered Crop Harvesting Robot
T J David & Kudikala Shravan Kumar, International Institute of Information Technology, Hyderabad
This invention is a highly energy-efficient, lightweight, remote-operated, solar-powered crop harvesting robot, designed to overcome the drawbacks of conventional harvesting machines. The machine requires only 500 W of electric energy, stored in batteries that can be charged by solar PV panels or from the domestic grid supply.
The machine can also be run directly from a 500 W PV panel. Harvesting one acre costs about Rs. 10 when the batteries are charged from grid power (at Rs. 5 per kWh), whereas a conventional harvester consumes diesel worth about Rs. 800 for the same area.
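The quoted Rs. 10 per acre figure implies roughly 2 kWh of battery energy per acre; a quick back-of-envelope check (the roughly four hours of cutting per acre is an assumption, not a figure from the team):

```python
# Back-of-envelope check of the quoted harvesting cost.
# Assumption: ~4 hours of cutting per acre at the stated 500 W draw.
power_kw = 0.5           # stated battery draw of the harvester
hours_per_acre = 4.0     # assumed cutting time for one acre
tariff_rs_per_kwh = 5.0  # stated grid tariff

energy_kwh = power_kw * hours_per_acre    # energy used per acre
cost_rs = energy_kwh * tariff_rs_per_kwh  # grid-charging cost per acre
diesel_cost_rs = 800.0                    # stated conventional fuel cost

print(energy_kwh, cost_rs, diesel_cost_rs / cost_rs)
```

Under these assumptions the energy cost per acre comes out at Rs. 10, a small fraction of the diesel bill of a conventional harvester.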
The machine weighs about 125 kg. It can be operated both remotely and electrically, and needs an embedded control system to run. The total cost of the reaping system is Rs. 50,000, making it the only machine of its kind in this segment at this price. By changing the cutter bar, it can be adapted to cut various crops apart from paddy and wheat, and it can provide both fodder and foodgrains at the same time. It saves 80% of the harvesting cost, directly benefiting farmers.

The machine comprises a lightweight metal chassis carrying a low-friction cutter bar with a dual-edge cutting blade that needs no pressure plate, minimising energy losses. The self-balancing cutter bar remains suspended above the ground without the manual balancing that conventional systems require. The machine can manoeuvre in all kinds of farm terrain, wet or dry. Its two-wheel drive mechanism can revolve 360 degrees in either direction, giving it a very tight turning capability. It is a mass-impact innovation that can benefit farmers all over the globe.
Smart Arm
Amit Ranjan & Deepender Singla, Thapar University, Patiala
Smart Arm is a smart solution for Indian amputees who cannot afford a costly arm imported from abroad. Most artificial arms available in the market cost between Rs. 12 and 14 lakh, so the effort was to develop a cost-effective artificial arm and add extra features to the existing design, so that the prosthetic arms of tomorrow are smarter than the real ones.
Smart Arm is a user-friendly arm that can understand and follow the user’s commands through simple and robust hardware. What sets it apart from existing solutions in the market is that it blends simple day-to-day functionality with complex functions like transferring files via Bluetooth and using built-in Windows applications such as Calculator and Notepad, without complicating its operation. With the Intel Atom processor, the arm is effectively a mini-computer, so a plethora of revolutionary applications can be introduced.

It is difficult to restore an amputee’s writing abilities, but Smart Arm can store whatever the amputee dictates to it, which can then be printed or transferred to a mobile phone, offering a promising alternative to writing. Smart Arm can mimic the activities of the natural arm, so most activities, like lifting a tray, are simplified. Smart Arm is a fast learner: it can pick up any new action in minutes, and the user can train it to his or her needs.

Amputees can control the arm entirely through speech. One can listen to music, type in Notepad, use Calculator, send messages, and fully control the arm, all through voice commands. In the three months that the Intel India Embedded Challenge provided, the team developed a prototype of the arm, which will be refined further in the days to come. They plan to use a camera and image processing to add artificial intelligence to the arm, and intend to work more on body signals in future.
VORWIS (Virtual Object in Real World – Interacting and Sharing)
Pragyanandesh Narayan Tripathi & Ganesh Pitchiah, Indian Institute of Technology, Kanpur
The objective is to build an application through which a user can create and modify three-dimensional virtual objects in the real world via gesture recognition. The viewpoint is a Kinect sensor attached to an LCD screen that shows video of the real environment, and two users with the same kind of device can share their virtual objects.

The virtual objects are embedded in the real world and cannot be distinguished as virtual on the screen, because the 3D object does not move with the screen when the viewpoint changes; rather, the appropriate faces become visible from the new viewpoint. For example, if a user is looking at the front face of a virtual cube positioned 2 m ahead of him, rotating his viewpoint to his left lets him see the right face of the cube.

This application will provide extraordinary teaching assistance in understanding 3D structures and graphs. It will have widespread use in 3D CAD design and medical imaging, and will enhance the gaming experience. Artists will be able to create 3D artworks in a real-time environment through our gesture recognition and hand-tracking algorithm. It can also be used in virtual reality applications: with the assistance of VRD displays or 3D eyewear, the application can be used very effectively to interact with the virtual environment, changing the human-computer interface experience to a large extent. The user will be able to view his digital information in the 3D space around him.
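The world-anchored rendering in the cube example boils down to expressing a world-fixed object in the camera's frame, so that turning the viewpoint changes which face is in front. A minimal sketch, assuming a simplified yaw-only camera model in the ground plane (the function name and conventions are illustrative, not the team's implementation):

```python
import math

def world_to_camera(point, cam_pos, cam_yaw):
    """Express a world-fixed ground-plane point (x, z) in the camera frame.

    cam_yaw is the camera's heading in radians (yaw-only for illustration).
    Because the object stays fixed in world coordinates, rotating the
    camera moves the object across the camera's field of view.
    """
    dx = point[0] - cam_pos[0]
    dz = point[1] - cam_pos[1]
    # Rotate the offset by the inverse of the camera yaw.
    x_cam = math.cos(-cam_yaw) * dx - math.sin(-cam_yaw) * dz
    z_cam = math.sin(-cam_yaw) * dx + math.cos(-cam_yaw) * dz
    return (x_cam, z_cam)

# A virtual cube centred 2 m straight ahead of a user at the origin:
cube = (0.0, 2.0)
print(world_to_camera(cube, (0.0, 0.0), 0.0))               # dead ahead
print(world_to_camera(cube, (0.0, 0.0), math.radians(90)))  # after turning left
```

After a 90-degree left turn the cube sits off to the camera's right, which is why the user now sees its right face rather than its front.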
A Novel Approach to 4-Dimensional Interfacing
Gaurav Jain, SASTRA University
The project moves beyond the paradigm of conventional two-dimensional interfacing techniques (touchscreens) and charts an innovative path to 4-dimensional interfacing, targeting new gadgets as well as existing ones and helping to reduce e-waste. The interfacing space uses all four dimensions (three spatial, plus time), and the prototype is built using IR sensory units.

Used with existing gadgets, the interfacing space can replace most HIDs (human interface devices) with a single better device. The technique is best suited to four-dimensional GUIs and to three-dimensional games, where 2D interfaces hit their limits.

4-Dimensional Interfacing = 3-Dimensional Interfacing Space + 1-Dimensional Sensory Array for Time
In the proof-of-concept prototype, the very first array of the space determines the X-Y coordinate of the interaction (the location of a finger or object), and the arrays below it determine the depth of the interaction, which controls another parameter.
The time-based functions are defined by the interaction speed and a time-scaling mode. The diode arrays for depth are connected to a comparator, whose output goes to the controller/processor unit.

On the software side, the prototype mostly uses OpenCV, C and Windows programming. To make the approach more useful, the chosen OS is Windows, the most commonly used OS in India. In future, the device could work like an electronic slate, a writing pad for the visually disabled; it is highly resizable and useful for gesture-recognition-based industrial applications and many more.
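The stacked-array idea above can be sketched in a few lines: the top IR grid fixes where the finger is, and the count of deeper grids interrupted at the same spot gives the depth. Grid sizes, names and the simple "count blocked layers" rule are assumptions for illustration, not the actual prototype's firmware:

```python
# Illustrative model of the stacked IR sensor arrays (all details assumed).

def locate_interaction(layers):
    """Each layer is a 2D grid of booleans (True = IR beam interrupted).

    The topmost layer fixes the X-Y coordinate of the finger or object;
    the number of layers interrupted at that spot gives the depth, which
    can drive a third control parameter. Returns (x, y, depth) or None.
    """
    top = layers[0]
    for y, row in enumerate(top):
        for x, hit in enumerate(row):
            if hit:
                # Count how many stacked layers are blocked at this cell.
                depth = sum(1 for layer in layers if layer[y][x])
                return x, y, depth
    return None  # no interaction detected

def make_layer(hits, width=4, height=3):
    """Build one 3x4 boolean grid with beams interrupted at 'hits'."""
    return [[(x, y) in hits for x in range(width)] for y in range(height)]

# A finger at grid cell (2, 1), pushed two layers deep into the space:
frames = [make_layer({(2, 1)}), make_layer({(2, 1)}), make_layer(set())]
print(locate_interaction(frames))
```

The returned depth value is what the description says "controls another parameter", e.g. zoom level or pressure in a 3D game.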
Portable Electronic Nose for Tea Industry
Amritasu Das & Hena Roy, Centre for Development of Advanced Computing, Kolkata
The main goal of this project is to deliver a Portable Electronic Nose for the Tea Industry (PENTI), to monitor tea aroma, generate a proper identification report on tea quality based on that aroma, and plot a profile during the tea fermentation process from which the optimum fermentation time can be read.
Identification of tea quality depends on parameters such as flavor, aroma, color, and strength. Conventionally, these parameters are quantified on a scale of 1 to 10, based on human senses, by professional experts called “tea tasters”.
The need for a portable electronic system for this dedicated application, replacing bulky desktop PC-based systems, has led to the use of embedded technology in handheld instrumentation for the tea industry. The purpose of this project is to present a PENTI based on the Intel Atom processor as an embedded substitute for a PC-based e-nose system.

The system’s primary role is tea aroma identification, based on SVD and correlation algorithms embedded on the target platform. It has a human-interactive interface specific to tea quality detection by aroma, and consists of an array of gas sensors that respond to tea volatiles, a data acquisition module, a graphic LCD with a proper human-machine interface (HMI)/touchscreen, and an SD card interface for data logging.

In the present study, Electronic Nose-based aroma and flavor categorization of black tea has been attempted, with promising results. The Electronic Nose has the potential to eliminate the problems associated with human panel tasting and, once standardized for quality characterization of tea, could serve as a very useful instrument for fast, reliable, continuous, real-time monitoring of the aroma of finished tea.
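The correlation step of such an identification pipeline can be sketched simply: compare the gas-sensor array's response vector against stored reference signatures and pick the best match. The signature values and grade labels below are invented for illustration, and the SVD-based preprocessing the abstract mentions is omitted:

```python
# Minimal sketch of correlation-based aroma matching (all data hypothetical).

def correlation(a, b):
    """Pearson correlation coefficient between two sensor response vectors."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

def classify(sample, references):
    """Return the reference label whose signature correlates best."""
    return max(references, key=lambda label: correlation(sample, references[label]))

references = {                      # hypothetical 4-sensor aroma signatures
    "well fermented":  [0.9, 0.4, 0.7, 0.2],
    "under fermented": [0.3, 0.8, 0.2, 0.6],
}
print(classify([0.85, 0.45, 0.65, 0.25], references))
```

Plotting the winning correlation over time during fermentation would give the profile plot from which the optimum fermentation time is read.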
Remote Operation of Electrical Gadgets Using Simple Mobile Phone
Debapratim Sarkar & Saikat Das Adhikari, Techno India College of Technology, Kolkata
An embedded monitoring and control system has been designed and developed that can perform home and industrial automation tasks very efficiently from any remote part of the earth. The device can automate electrical switching at any domestic or industrial site, through telephone, SMS, Internet or IR remote.
The system consists of 5 main hardware modules:
- The Intel Atom Embedded Development Kit (EMX-PNV), the heart of the system
- A GSM module with a GSM modem to receive and transmit calls and SMS
- A display and sensor module comprising the display, a temperature sensor, and a remote-control receiver
- A camera interface for remote surveillance
- A power driver module consisting of four 220 V AC power drivers with step control capability
The user can operate the device in phone mode, SMS mode and Web access mode. In any mode, the user can control the electrical loads connected to the system. The user is assisted remotely by voice response and SMS.
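The SMS mode of such a system amounts to parsing short text commands and switching the corresponding power driver. The command words, load numbering and reply strings below are hypothetical, not the team's actual protocol:

```python
# Hypothetical sketch of SMS-mode command handling (protocol assumed).

LOADS = {1: False, 2: False, 3: False, 4: False}  # four 220 V AC power drivers

def handle_sms(text):
    """Parse an incoming SMS like 'ON 2' or 'OFF 4' and switch that load.

    Returns the reply text that would be SMSed back to the user.
    """
    parts = text.strip().upper().split()
    if len(parts) != 2 or parts[0] not in ("ON", "OFF") or not parts[1].isdigit():
        return "ERROR: send 'ON <1-4>' or 'OFF <1-4>'"
    load = int(parts[1])
    if load not in LOADS:
        return "ERROR: no such load"
    LOADS[load] = (parts[0] == "ON")  # drive the relay for this load
    return "OK: load %d switched %s" % (load, parts[0])

print(handle_sms("on 2"))
print(handle_sms("OFF 9"))
```

Phone mode and Web access mode would feed the same switching logic from a DTMF decoder or an HTTP request handler instead of the GSM modem's SMS buffer.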
3D Mapper & Navigation System for Autonomous Ground Robots
Akash Mohan Singhal & Aditya Shanker Raghuwanshi, Birla Institute of Technology & Science, Pilani
3D Mapping has a broad range of applications; for instance, urban mapping, coastal mapping, vegetation mapping, mine mapping, and geological survey. These allow different organizations to efficiently manage resources, plan infrastructure development, survey areas for deploying manpower, and the like. The military requires robots for surveillance, reconnaissance, low-intensity conflict, landmine detection, etc. Robots equipped with the autonomous navigation system can also be used in industry and mining to carry out work in hazardous conditions.
To achieve the tasks mentioned, we propose to develop a robot with functionalities that are flexible according to different application requirements. The navigation system will map the features in the surroundings, a three dimensional model of the environment will be generated from sensor inputs, and the robot will navigate using a motion control algorithm. The system consists of a robot equipped with laser rangefinder, stereo vision camera, inertial measurement unit, Global Positioning System, and Intel processors. At the start, the robot is given a target location or a set of waypoints.
At any point during its motion, the robot is localized using inputs from the GPS, IMU and wheel encoders. Range data from the laser rangefinder is fused with vision data from the camera to generate an accurate, colored 3D plot of the environment, which is used for generating 3D textured surfaces and autonomous feature identification. The waypoints define the primary reference trajectory for the robot’s motion; based on the map of the environment, the robot follows this trajectory while avoiding obstacles.

The navigation system also features a manual override. Multi-core processors are used to process the large volumes of sensor data from the different sensors in real time. A visualization application has been developed that displays the complete 3D map of the environment, along with other data the user needs, in real time.
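At its simplest, fusing two position estimates means weighting each by its confidence. A real system of this kind would typically run a Kalman filter over GPS, IMU and encoder data; the one-dimensional, variance-weighted sketch below (with invented variances) shows only the core idea:

```python
# Minimal illustration of sensor fusion for localization (values assumed).

def fuse(odom, gps, odom_var, gps_var):
    """Variance-weighted average of two 1D position estimates.

    The estimate with the smaller variance (higher confidence)
    pulls the fused result more strongly toward itself.
    """
    w_odom = 1.0 / odom_var
    w_gps = 1.0 / gps_var
    return (w_odom * odom + w_gps * gps) / (w_odom + w_gps)

# Wheel odometry says 10.0 m travelled but drifts (high variance);
# the GPS fix says 10.8 m and is trusted more here.
x = fuse(10.0, 10.8, odom_var=4.0, gps_var=1.0)
print(round(x, 2))
```

The fused estimate lands much closer to the GPS figure, exactly because the odometry was assigned four times the variance.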
Embedded Eyes for the Blind (EEB)
Anurag Awasthi & Avani Nandini, Indian Institute of Technology, Kanpur
There are millions of people who do not have the gift of sight. In the U.S. alone, there are over 15 million visually impaired people. Among the several difficulties that a blind person has to face, one is the inability to walk around without any support to prevent collisions. There are very few solutions existing today for this purpose, and, besides being expensive, they are not easily available in India.
Even in this technological age, there is a scarcity of work in this area. With the emergence of computer vision and machine learning, it is possible to match images robustly, irrespective of small distortions and, to some extent, changes in scale and quality. There has also been excellent research in robotics on localization and mapping. However, this work does not consider the distortions caused by the diverse movements of a human walk or the heterogeneous nature of the obstacles encountered.
We propose a novel system to support indoor navigation for the blind: a device that uses two cameras as sensors to provide obstacle avoidance and localization while moving in an indoor environment. The stereoscopic vision from the cameras is used to develop geometric and texture-based cues, which are then used to detect obstacles while moving. For navigation, the device is pre-calibrated with several landmarks in the indoor location, each landmark giving the system the information needed to guide the person with directions. The person is prompted with an auditory response in real time. Finally, the device identifies significant indoor objects, such as a chair or a sofa, when queried; this also requires pre-calibration.
Our prototype has been tested for indoor navigation in simple households, and the query time has proved fast enough for real-time use.
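The geometric cue behind stereo obstacle detection is that a calibrated camera pair recovers depth from pixel disparity as depth = focal length × baseline / disparity. A minimal sketch, with the focal length, baseline and alert threshold all assumed rather than taken from the device:

```python
# Stereo depth cue for obstacle warnings (calibration values assumed).

FOCAL_PX = 700.0   # assumed focal length, in pixels
BASELINE_M = 0.06  # assumed distance between the two cameras, in metres

def depth_from_disparity(disparity_px):
    """Depth in metres of a feature matched across both camera images."""
    if disparity_px <= 0:
        return float("inf")  # no match, or point effectively at infinity
    return FOCAL_PX * BASELINE_M / disparity_px

def obstacle_warning(disparity_px, threshold_m=1.5):
    """True when the matched object is closer than the alert distance,
    which would trigger the device's auditory prompt."""
    return depth_from_disparity(disparity_px) < threshold_m

print(depth_from_disparity(42.0))  # depth in metres for a 42 px disparity
print(obstacle_warning(42.0))
```

Nearby obstacles produce large disparities and hence small depths, which is why a simple threshold on the recovered depth is enough to decide when to warn the user.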
About the Contest:
The Intel India Embedded Challenge 2011 contest is part of the Intel Higher Education Program and this initiative, run in association with Intel’s Embedded Communications Group, seeks to support the exploration of innovative embedded technology applications, based on the Intel Atom processor, at universities and engineering institutes.
This year, the entries – from individuals or teams of two – covered a range of applications: from socially relevant areas like education, medicine, rural development and environment, to commercially exciting areas like gaming, social networking, infotainment and robotics.
Following more than 1,600 applications from across the country, Intel shortlisted 119 teams based on their innovative ideas in embedded technology. An expert panel then selected the 31 finalists for the prototype phase of the contest. In this final phase, each of the finalists worked on Intel Atom Processor kits to prototype their idea. The prototypes were then showcased to a jury consisting of senior academic researchers, government representatives and industry leaders, who selected the winners.