This paper reports on the use of an electronic voting system (EVS) in a first-year computing science subject. Previous investigations suggest that students' use of an EVS would be positively associated with their learning outcomes. However, no research has established this relationship empirically. This study sought to establish whether there was an association between students' use of an EVS over one semester and their performance in the subject's assessment tasks. The results from two stages of analysis are broadly consistent in showing a positive association between EVS usage and learning outcomes for students who are, relative to their class, more correct in their EVS responses. Potential explanations for this finding are discussed as well as modifications and future directions of this program of research.
There has been considerable recent effort to improve introductory physics courses, especially after 1985 when Halloun and Hestenes [1] published a careful study using massive pre- and postcourse testing of students in both calculus and non-calculus-based introductory physics courses at Arizona State University. Their conclusions were: (1) "… the student’s initial qualitative, common-sense beliefs about motion and … (its) … causes have a large effect on performance in physics, but conventional instruction induces only a small change in those beliefs."
In the popular TV programme Who Wants to Be a Millionaire?, a contestant can ask the audience to help: audience members pick up their handsets and vote anonymously, and the results show up as a bar chart of the different answers. Essentially the same technology was used in a statistics course, and that use is central to this article.
Three years ago, the Department of Aeronautics and Astronautics at MIT expanded its repertoire of active learning strategies and assessment tools with the introduction of muddiest-point-in-the-lecture cards, electronic response systems, concept tests, peer coaching, course web pages, and web-based course evaluations. This paper focuses on the change process of integrating these active learning strategies into a traditional lecture-based multidisciplinary course, called Unified Engineering. The description of the evolution of active learning in Unified Engineering is intended to underscore the motivation and incentives required for bringing about the change, and the support needed for sustaining and disseminating active learning approaches among the instructors.
By Steven R. Hall, Ian Waitz, Doris R. Brodeur, Diane H. Soderholm, and Reem Nasr
Assessment to support rather than just to measure learning has now been accepted as part of the government’s teaching and learning strategies (QCA, 2006), and a range of teacher support materials is now being produced (e.g. AAIA, 2006). This would be progress if it actually meant that teachers and schools were being enabled to leave behind a decade memorable only for accountability-driven, externally-set performance targets in which ‘assessment is synonymous with testing’ (Hall et al., 2004) and ‘SATs success is the main driver for the models of pupildom available’ (Hall et al., 2004), rather than for its contribution to teaching and learning.
The DfES set a target that 85% of pupils at the end of Key Stage 2 in 2006 should achieve level 4 or above.
Standard Assessment Tasks (SATs) was the original title for the national end-of-key-stage tests, which originated from the 1988 Education Reform Act instituting the ‘National Curriculum and its assessment’ – the acronym seems to have stuck!
The University of Strathclyde in Glasgow has an undergraduate population of approximately 14,500. The Department of Mechanical Engineering at Strathclyde is one of the largest in the UK, with some 500 undergraduate and 80 postgraduate students.
Founded in 1943, ASCD (formerly the Association for Supervision and Curriculum Development) is an educational leadership organization dedicated to advancing best practices and policies for the success of each learner.
For many teachers across the country, annual evaluations are mysterious. Every state and every district uses a different tool to determine teacher effectiveness and rates its teachers on different scales. Sometimes teachers are unsure as to why they are rated the way they are. I've seen many a tear in the teachers' lounge during evaluation season. There is incredible anxiety inherent in this process, especially for new teachers who are still trying to get their "teaching legs" under them. They have the knowledge, they are gaining the skills, but putting it all together is no easy feat. (...)
An overview of the experience of the opening two years of an institution-wide project in introducing electronic voting equipment for lectures is presented. The equipment was used in eight different departments and with a wide range of group sizes (up to 300). An important aspect of this is the organizational one of addressing the whole institution, rather than a narrower disciplinary base. The mobility of the equipment, the generality of the educational analysis, and the technical support provided all contributed to this. Evaluations of each use identified (formatively) the weakest spots and the most common benefits, and also (summatively) showed that learners almost always saw this as providing a net benefit to them. Various empirical indications support the theoretical view that learning benefits depend upon putting the pedagogy (not the technology) at the focus of attention in each use. Perceived benefits tended to increase as lecturers became more experienced in exploiting the approach. The most promising pedagogical approaches appear to be Interactive Engagement (launching peer discussions) and Contingent Teaching – designing sessions not as fixed scripts but using diagnostic questions to zero in on the points that the particular audience most needs on this occasion.
Students returning to school this fall may have a new item on their list of supplies--a gadget that's making classrooms more interactive. (A CNET article by Alorie Gilbert, Staff Writer, CNET News.)
“Get your people together and talk about this. The stakes are high; make adjustments and set a better course. As the ancient Chinese proverb puts it, ‘If you don’t change your direction, you’ll end up exactly where you are headed.’”
Recently, much criticism has been levied at PowerPoint's default structure of a topic-phrase headline supported by a bullet list of subtopics. This web page advocates an assertion-evidence structure, in which a sentence headline states the main assertion of the slide. That headline assertion is then supported not by a bullet list, but by visual evidence: photos, drawings, diagrams, graphs, films, or equations.