Projects:2015s1-17 Analysis of Electrical and Software Design in the Effectiveness of Robotics STEM Outreach Programs
Note: Due to the participation of Department of Education and Child Development (DECD) schools in this project, the names of any robotics kits cannot be mentioned; as such all kits will be referred to by a letter, e.g. Kit A.
The aim of this project was to analyse a number of different robotics kits to determine which is the most effective as a learning tool for students, and to assess which kit features influenced each kit's effectiveness.
To test each robotics kit, a series of workshops was run in schools with eighth-grade students, in which the students attempted to complete ten tasks with a robotics kit in 60 minutes. We determined that we required approximately 80 students per kit to achieve a 90% confidence level. The tasks were designed by the project team after researching lesson plans and the robotics-kit tasks currently available. The ten tasks aimed to test the basic functions of the kits using key features such as motor movement, sound output, and obstacle and line detection, with the tasks becoming progressively more difficult as the students progressed. The time taken for the students to complete each task was recorded, as well as the total number of tasks the students could complete in the 60 minutes. Once the students had finished the workshop, they were asked to complete a survey we designed to gather their opinions on the robotics kit.
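The per-kit sample size quoted above can be sanity-checked with the standard formula for estimating a proportion. The margin of error and worst-case proportion below are illustrative assumptions, not values taken from the project report:

```python
import math

def sample_size(z: float, p: float, margin: float) -> int:
    """Minimum sample size to estimate a proportion p to within
    the given margin of error, at the confidence level implied by z."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# z = 1.645 corresponds to 90% confidence; p = 0.5 is the worst case;
# a 9% margin of error is an assumed value for illustration only.
n = sample_size(1.645, 0.5, 0.09)
print(n)  # 84, broadly in line with the ~80 students per kit quoted above
```

Under these assumptions a margin of error of roughly ±9% at 90% confidence yields a sample in the low eighties, consistent with the figure used in the project.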
- Dr Braden Phillips
- Dr Hong Gunn Chew
- James Cadzow
- Benjamin Hayton
In 2005-2006 and 2011-2012, Australia experienced engineering skill shortages that were described as ‘persistent and widespread across engineering specialisations’. This, combined with a decline in student interest in Science, Technology, Engineering and Mathematics (STEM) fields, has left Australia needing ‘long-term growth in the number of secondary students studying mathematics and science’. STEM outreach programs are a way to get school students interested in STEM fields and, potentially, in future STEM careers. The robotics industry is seen by some economic forecasters as the next big boom industry; as such, STEM outreach programs have used the availability of robotics kits to run programs aimed at getting more children interested and involved in STEM fields. With robotics having ‘been shown to be a superb tool for hands-on learning’, outreach programs such as Robogals are using robotics kits to help students become more interested and involved in STEM fields in a hands-on manner. With backing from Engineers Australia and the Government's Chief Scientist, and with international interest in the field, a large number of robotics kits have become available for use in STEM teaching. Despite this, no research has been made publicly available to aid teachers and STEM outreach programs in choosing the most effective robotics kit for their needs. This project attempted to fill that void in the STEM education industry.
Before we could explore the technical aspects of this project, we first defined what effectiveness as a learning tool would mean for this project. We established that a robotics kit would be considered an effective learning tool if it met the following criteria:
• The kit and software challenged the students to think and problem-solve when attempting tasks, without being too easy or too difficult
• The kit would be affordable relative to its level of performance. For example, given a cheap kit and an expensive kit, if students were found to perform tasks with the cheaper kit at a similar, if slightly lower, level to the expensive kit, then the cheaper kit would be considered more effective, as more students are likely to have access to it. Conversely, if the more expensive kit clearly outperformed the cheaper kit, then it is the more effective despite its higher cost.
• Finally, the students had to find the kit engaging and interesting; most importantly, the students had to have fun with the robotics kit and want to use it.
The kits we selected for our project had to be similar enough that we could develop standardised experiments and tasks to put them through. However, the kits also had to have a diverse range of features so that we could draw conclusions on whether these features impacted overall kit effectiveness. To select the three kits, we compiled an initial list of 45 different robotics kits by using terms such as ‘robotics kits’, ‘educational robotics kits’ and ‘programmable robotics kits’ in internet search engines. To further refine the list, we defined the following criteria that the kits had to meet to be considered suitable for our project:
• Be Autonomous – i.e. not remote controlled
• Require Programming – the kit needed to require the students to create a program with a form of programming language to complete the different tasks. A number of kits in the initial list used options such as barcode scanning, to program the kit to perform different actions, without the user needing to actually create and develop the program.
• Not Require a Smart Device – a number of the kits required a smart device, such as a tablet or smartphone, to use. These kits required the smart device for app-based programming or remote control, or used the device as the kit's computer.
• Meet Kit Intangibles – this included having to be a land-based kit with wheels or tracks, excluding kits such as mechanical arms, walker kits, drones and water devices. The kits also had to suit the appropriate skill range, i.e. they could not be too basic or too complex, such as being intended for children aged 5-10 years or requiring previous programming experience.
We were fortunate that a local STEM outreach program, Robogals, was generous enough to provide us free access to the ten robotics kits it uses for its programs (Kit N). These kits have a recommended retail price of around $500 and met all of the above criteria. To select the remaining two kits, we also had to factor in our project budget constraints: we were provided a budget of $1000 and, as such, applied a final criterion of a maximum purchase price of $150 per kit. Applying these criteria, we filtered the list down to a final 12 kits that seemed suitable for our project. Over half of the initial kits were deemed unsuitable because they were too expensive for our budget, a further 19% required the use of a smart device, 9% were remote controlled, and the remaining 19% were excluded due to various kit intangibles.

The final three selected kits were Kit N, available to us through Robogals, Kit E and Kit F. Kit E has a recommended retail price of $65 and Kit F of $150. These kits were selected because they had various similarities that enabled us to create standardised testing methods, yet enough unique features and differences that we would be able to draw conclusions on the impact of these features on overall kit effectiveness. Kit N is built from construction bricks that make up the body of the robot, which, coupled with detachable sensors, makes the robot highly customisable. The body of Kit E is made of hard plastic that is compatible with popular construction bricks and has its sensors built into the body. Kit F is contained in a hard plastic shell designed to mimic animal shapes and cannot be physically customised; it has its sensors built into the body and must be connected to a computer via USB cable at all times.
Ten tasks were produced, with the aim that an average student would complete 2-3 tasks and an advanced student 3-5 tasks. To keep students motivated and on task, task sheets were handed out to each group of students as they completed the previous task, and students were not told how many tasks had been developed.
Tasks were developed by referencing the teaching resources for each kit to gain a better understanding of what tasks were achievable. Potential tasks were analysed against kit elements to ensure each task provided data on a separate kit element. Tasks were then formed by applying lesson-plan techniques: defining objectives, considering differentiation so that acceptable solutions were potentially achievable by all students, and formalising each task's relevance to the project.
The students were asked to complete a survey we developed after their 60-minute workshop with the robotics kit. The survey comprised two types of questions: open questions, where the participant responds using their own words, and closed questions, where the participant chooses from a list of predetermined responses. Our survey had a majority of closed questions, as responses to closed questions provide data that can be statistically analysed, which helped us determine any trends in student responses between the different kits. The closed questions were aimed at determining the students' prior experience with robotics kits and gathering their opinions on their experience with the kit they used. The open questions were included to capture responses we had not anticipated; these were aimed at determining what aspects of the kits the students found particularly hard or easy and whether the kit they used had any issues or ideas for improvement.
The ratio of open to closed questions was chosen to make the survey quick and simple for the students to complete. Upon request by the University of Adelaide's Faculty of Engineering, Computer and Mathematical Sciences (ECMS), a further question was included asking for the students' gender, so that we could assess whether there were any differences in task performance and interest levels between male and female students.
To assess each of the robotics kits, we ran 12 workshops with eighth-grade students, in which students were given 60 minutes to complete as many tasks as possible. Before using the kits, the students were given a brief 15-minute tutorial on how to use them; after the 60 minutes, the students were asked to complete a survey on their experiences with the kits.
From the workshops, we found that of the 244 students in total, only 30 progressed as far as attempting Task 4, and none completed it within the allotted time. For Task 1, Kit N and Kit E had almost identical completion rates of around 98%, with Kit F slightly lower at 91%, seven students being unable to complete the task. After the first task, however, the completion rates for all three kits decreased dramatically, with Kit N having the highest Task 2 completion rate at 45%; Kit E and Kit F were significantly lower at 2% and 5% respectively. Due to the low completion rates for Task 2, few students attempted Task 3, particularly among students using Kit E or Kit F. Of the 45% of students that completed Task 2 with Kit N, 72% went on to complete Task 3, for an overall Task 3 completion rate of 32% for Kit N. Because so few students overall completed Tasks 2 and 3, there were not enough data points for the average completion times of these tasks to be meaningful.

Task 1 required the students to program the robotics kit to move in a triangular shape, and the guidelines for this task were very lenient. The project-wide average time for Task 1 was 23:41; students using Kit N completed Task 1 in 17:24 on average. This was 7 minutes quicker than the next best kit, Kit F, at 24:27, and 11 minutes quicker than Kit E at 28:41, which was in turn 4 minutes slower than Kit F.
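The overall Task 3 figure quoted above is simply the product of the per-task rates, and the time gaps follow from the mm:ss averages. A small sketch of that arithmetic, using only the numbers reported above:

```python
# Chained completion rates for Kit N: 45% of students completed Task 2,
# and 72% of those went on to complete Task 3.
task2_rate = 0.45
task3_given_task2 = 0.72
overall_task3 = task2_rate * task3_given_task2
print(f"{overall_task3:.0%}")  # 32%

def to_seconds(mmss: str) -> int:
    """Convert a 'mm:ss' time string to total seconds."""
    minutes, seconds = mmss.split(":")
    return int(minutes) * 60 + int(seconds)

# Average Task 1 completion times reported above.
times = {"Kit N": "17:24", "Kit F": "24:27", "Kit E": "28:41"}
gap = to_seconds(times["Kit E"]) - to_seconds(times["Kit N"])
print(gap // 60, "min")  # 11 min, matching the Kit N vs Kit E gap
```

This also makes explicit that the 32% overall rate is conditional chaining (0.45 × 0.72 ≈ 0.324), not an independently measured figure.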
To gather the students’ opinions on how difficult the programming software was to use and to create functional programs with, the students were asked to rate the programming difficulty of the kit on five options, ranging from Very Difficult through Okay to Very Easy. The results show that the majority of students found programming all of the kits Okay. The responses for Kit N are relatively evenly distributed around Okay, in a normal bell-curve shape, with neither the difficult nor the easy side heavily weighted. The responses for Kit F are similar but slightly weighted towards the easy side, with 8% of students finding the programming Very Easy and no students finding it Very Difficult. With 5% of students finding the programming of Kit E Very Difficult, 35% finding it Difficult and no students finding it Very Easy, the responses for Kit E are weighted towards the difficult side. Overall, the Okay responses for all three kits sat between 43% and 49%. This is a positive result, as it suggests that overall the students did not find programming any of the kits too difficult, which could lead to student frustration and in turn poor learning effectiveness, or too easy, which could lead to student boredom and again poor learning effectiveness.
One of the survey questions targeted the students’ opinion on how well the kit's hardware performed and how difficult it was to control. The students were again given five options, ranging from Very Difficult through Okay to Very Easy. The results show that, overall, the students found getting all three kits to perform how they wanted somewhat challenging, with the responses weighted towards the difficult side. The results suggest that Kit N was the easiest to control, with 23% responding Easy, 8.5% higher than Kit E, which received the next most Easy responses. Kit E was found to be the most difficult to control of the three kits, with 12% of students responding Very Difficult and 36% responding Difficult. The responses for Kit F also lean slightly towards the difficult side, with 34% responding Difficult, although only 2.7% responded Very Difficult and 12% responded Easy; the majority of students, at 50%, found controlling Kit F to be Okay. Kit N also had a high number of Okay responses at 48%, with the remaining responses similarly distributed, only 4.5% responding Very Difficult and 1% Very Easy.
One of the survey questions asked the students if they would want to use the kit again, with five options ranging from Definitely Not to Definitely. The overall response was very positive, with over 85% of respondents across the project answering from Maybe to Definitely. All three kits follow this trend closely, with the responses for each kit heavily weighted towards wanting to use it again. There do not appear to be any obvious differences between the kits' responses, although, interestingly, Kit F received both the most Definitely and the most Definitely Not responses. These are positive results that suggest the majority of students found the kits interesting and engaging.
Comparing the kits based solely on task programming time, Kit N is the most effective, with the shortest completion time for Task 1 and the highest number of students completing Task 2. Extending this analysis to include student enjoyment further supports Kit N's effectiveness, with none of its survey responses exhibiting the skew towards the difficult side seen with Kits E and F.
Analysing the three kits against all effectiveness measures, however, shows Kit E to be the most effective: while it has the longest programming times for Task 1, its much lower cost far outweighs the difference in completion time. This allows its use in a wider variety of schools, increasing its impact and thereby its effectiveness.
Department of Employment, Labour Market Research and Analysis Branch. (2014, Aug.). “Labour Market Research - Engineering Professions” [On-line]. Available: https://docs.employment.gov.au/system/files/doc/other/engineeringprofessionalswa.pdf [14/4/2015]
M.J. Mataric, N. Koenig, D. Feil-Seifer. (2007). “Materials for Enabling Hands-On Robotics and STEM Education” [On-line]. Available: http://robotics.usc.edu/publications/media/uploads/pubs/536.pdf [8/5/2015]
K. Condon. (2012, Aug.). “New study shows depth of engineering skills shortage”, Engineers Australia Media Release [On-line]. Available: https://www.engineersaustralia.org.au/sites/default/files/shado/News%20and%20Media/Media%20Statements/2012MediaStatements/new_study_shows_depth_of_engineering_skills_shortage.pdf [14/4/2015]
P. Turner. (2009). “Why Use Robots in Education...”, Educational Technology Solutions [On-line]. Available: http://www.tribotix.com/EducationInfo/WhyRobotics.htm [9/4/2015]
Robogals School Visits, Robogals website. Available: http://adelaide.robogals.org.au/activities/school-visits [14/10/2015]
Australian Government Chief Scientist. (2014). Science, Technology, Engineering and Mathematics: Australia's Future. pp. 8-10, 20-22. [14/4/2015]
N. Thayer-Hart, J. Dykema, N.C. Schaeffer, J. Stevenson. (2010, Dec.). “Survey Fundamentals”, Office of Quality Improvement, University of Wisconsin-Madison, USA.
A. Fink. (2010). “Survey Research Methods”, Education Research Methodology: Quantitative Methods and Research. University of California, USA. pp. 152-160