Proceedings of the 20th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (ASCILITE)
Adelaide, Australia, 7–10 December 2003
Editors
Geoffrey Crisp, Di Thiele, Ingrid Scholten, Sandra Barker, Judi Baron
Citations of works should have the following format:
Author, A. & Writer, B. (2003). Paper title: What it's called. In G. Crisp, D. Thiele, I. Scholten, S. Barker and J. Baron (Eds), Interact, Integrate, Impact: Proceedings of the 20th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education. Adelaide, 7–10 December 2003.
ISBN CDROM 0-9751702-1-X WEB 0-9751702-2-8
Published by ASCILITE www.ascilite.org.au
Phillips and Lowe
ISSUES ASSOCIATED WITH THE EQUIVALENCE OF
TRADITIONAL AND ONLINE ASSESSMENT
Rob Phillips and Kate Lowe
Teaching and Learning Centre, Murdoch University, AUSTRALIA
r.phillips@murdoch.edu.au, k.lowe@murdoch.edu.au
Abstract
With the increasing use of online technologies, Murdoch University has developed a flexible unit model that aims to retire separate versions of unit materials for external and internal students, providing instead an equivalent learning experience through a single set of materials for all students, to be accessed according to their circumstances.
Issues of equivalence of assessment raised by this initiative are examined in the context of current theory, past and emerging practice and the role that university policy and attitudes to supervised assessment play in adoption of alternative assessment methods made available through the use of online technology.
Keywords: assessment, online
Introduction
“From our students’ point of view, assessment always defines the actual curriculum...that is where content resides for them, not in lists of topics or objectives”. (Ramsden, 1992: 187)
There is a substantial body of literature about deep and surface approaches to learning (Biggs, 1999; Gibbs, 1992; Ramsden, 1988, 1992), and evidence supports the view that deep learning is more appropriate than surface learning at university. These authors posit that:
• transmission approaches lead to surface learning;
• depth of learning is determined by the nature of the learning activities; and
• surface and deep approaches are reactions to the teaching environment.
Students choose either a surface or a deep approach to learning depending on the circumstances, and one of the determinants of the approach taken is the nature of the academic task (Ramsden, 1992: 48-49). While many university units of study may set out to develop understanding and critical thinking in students, Ramsden contends that “it is in our assessment practices and the amount of content we cover that we demonstrate to undergraduate students what competence in a subject really means” (Ramsden, 1992: 72).
Assessment is, therefore, an important factor in the success of university learning. A study in the 1990s (Warren Piper, Nulty, & O’Grady, 1996) investigated examination practices and procedures in Australian universities, and questioned the effectiveness of examinations: 63% of 435 academics surveyed “agreed with the statement that most students concentrate more on passing the examinations than on understanding the subject”. However, examinations remain the dominant form of assessment in many disciplines and institutions.
Rowntree (1987) characterised traditional examinations as follows:
“The traditional three hour examination tests the student’s ability to write at abnormal speed, under unusual stress, on someone else’s topic without reference to his customary sources of information, and with a premium on question spotting, lucky memorisation, and often on readiness to attempt a cockshy at problems that would confound the subject’s experts” (p. 135).
There is a substantial literature on alternative approaches to assessment (see, for example, Brown & Knight (1994), James, McInnis, & Devlin (2002), Nightingale et al. (1996) and Ramsden (1992)).
However, this research goes largely unnoticed by the majority of academics and university policy makers, who base their decisions on their own experiences as students (Alexander et al., 2002).
Educational technology has enabled alternative approaches to assessment to be developed, and these have provided an impetus to reconsider traditional views of education at university, including assessment. The importance of assessment in the effectiveness of educational technology was identified by Alexander & McKenzie (1998) in a meta-analysis of development projects. Innovative projects, carefully designed to improve student learning outcomes, were often unsuccessful because traditional assessment procedures tested a different set of learning outcomes. In successful projects, “the assessment of student learning is modified where necessary to reflect any changes made to the content and process of learning as a result of the project” (Alexander & McKenzie, 1998: x). Laurillard (2002: 205) reinforces this view:
“The only real test of any learning material is its use under normal course conditions. This means it must be integrated with other methods, the teacher must build on the work done and follow it through. And most important, the work students do with ICT media must be assessed.”
She goes on to state that “this may require new standards to be set”, which leads to a consideration of the institutional policies in place. In many institutions, assessment practices and policies have not responded to changes brought about by Information and Communications Technology (ICT). It may be that these policies militate against the effectiveness of ICT in education. This paper will describe assessment practices at several Australian universities, particularly Murdoch University, and discuss issues relevant to online learning.
According to an Australian Government study (Bell, Bush, Nicholson, O’Brien, & Tran, 2002), 54% of units of study have content available on the web, and 207 award courses are offered totally online. This underscores the need to review issues of online assessment.
Relatively little work seems to have been published about the issues related to assessment in an online environment. Morgan and O’Reilly (1999) have written about assessment in open and distance learning, and have included online learning in this context. Otherwise, there is a growing literature about
Computer-aided Assessment (Brown, Race, & Bull, 1999; Bull et al., 2002), but this work focuses on using computers to automate assessments such as multiple choice tests. While clear efficiencies can be gained from Computer-aided Assessment, its educational effectiveness must be carefully considered. A recent study of assessment in Australian Universities reported:
“If lower-order learning is an unintended educational consequence of online assessment, then any perceived or real gains made in efficiency, staff workload reduction and/or cost savings are at a questionable price.” (James et al., 2002: 23)
This paper takes a broader view of online assessment, considering a range of assessment which can be facilitated through ICT, rather than what can be marked by computers. A major impetus for this work has been Murdoch University’s move away from internal and external modes of study to a model of flexible access to tertiary education, where students choose the types of study and access to materials which most suit their needs. Currently, a range of assessment approaches is used at Murdoch, but these may be different for internal and external students. The move to a flexible access model has led to a need for equivalence of assessment across types of study.
In the arguments that follow, it is useful to draw a distinction between two types of assessment:
• Formative assessment - providing students with feedback on their progress
• Summative assessment - making judgements about an individual student’s achievement of subject objectives

Formative assessment is increasingly seen as important in supporting the learning process. Bransford, Brown, & Cocking (1999) report a synthesis of results from developmental psychology, cognitive psychology and neuroscience, showing that learning changes the physical structure of the brain. They conclude that practice increases learning. In their view, formative assessment is especially important:
• to provide regular feedback;
• to provide opportunities for revision; and
• to improve the quality of thinking and understanding.
This paper will analyse how a comprehensive range of assessment types can be implemented online with a view to understanding issues of online assessment, and therefore informing the policy debate.
Flexible Learning Initiative
From the time that Murdoch University first offered online units of study in 1998, it has struggled with various practical and policy issues. Murdoch’s history of both internal, campus-based education and external, print-based distance education led, in some schools, to different versions of a unit of study being taught by separate teachers, with different resources and materials.
When online education was first adopted, it was understandable that the university viewed it as a third
mode of delivery. This decision soon caused organisational problems, because it was assumed that students studying in online mode would be logically distinct from internal and external students. However, this distinction did not match the needs of students. Many external students wanted online materials, and some online students wanted to attend some classes on campus. Of the approximately 9000 students who access online materials at Murdoch, 90% use the materials as a complement to face-to-face classes.
Working parties of the Academic Council addressed these issues over several years, without resolution, until a proposal was made to radically rethink how the university handled “flexibility” (Thiele, 2002). In essence, instead of thinking of a unit as having various delivery versions, the University decided to think of a unit as a coherent package of resources that can be accessed in various ways (see Figure 1).
In this model, the unit materials consist of:
• a single print-based study guide and reader,
• a web page, in most cases provided through WebCT, and
• face-to-face classes.
This package is available for all students, whether they are on-campus or off-campus, to access as they wish. The print materials form the common base for students’ interface with the unit, to which they may add online and/or face-to-face on-campus sessions depending on their ability to access them. Thus occasional on-campus attendance by otherwise off-campus students can be accommodated, and complementary online materials can be made available. While face-to-face teaching continues to be an important part of the on-campus learning experience, enabling technology (discussion lists, web-casting/data-streaming lectures) is becoming increasingly accessible to off-campus students, and a viable substitute for on-campus students negotiating timetable clashes.
The new flexible approach also has implications for assignments and assessment. In many cases, Murdoch currently uses different assessment patterns for different modes, but under the new model equity demands equivalent assessment. This raises a number of issues about online assessment:
• Which types of assessment can be offered online?
• What are the issues associated with online examinations?
• What is meant by, and required for, supervision and invigilation of assessment?
Policy Issues
Murdoch University
At Murdoch University, the assessment policy requires each unit to base its assessment on more than one piece of work, and to include at least two distinct methods of assessment. No one component or assessment method should account for more than 70% of the total assessment, nor class participation more than 15%, nor peer assessment more than 15%, nor group assessment more than 30%. At least 30% of the total assessment in each unit must be based on supervised assessment tasks, such as: written tests or examinations; seminar/tutorial presentations including responses to questions; practical tasks including laboratory work or performance, technical or field work; oral assessment if recorded and/or two members of academic staff are present; and class participation.
Murdoch University has processes enabling students to take invigilated examinations anywhere in the world. This, and approved third-party supervised project placements, are the only mechanisms by which external students can undertake supervised assessment. Teachers of internal students, on the other hand, have a richer range of assessment tools available, which may be more appropriate to a given teaching situation than examinations. In moving to a flexible learning model, it is important not to diminish the pedagogical effectiveness of the assessment used, and, thus, it is appropriate to investigate how a range of assessment types can be implemented online.
First, however, it is informative to summarise the policy position at other Australasian universities.

Policy at Other Australasian Universities
An approach was made to other Australasian universities through the listserv of the Australasian Council of Open, Distance and E-learning (ACODE)1, representing directors of flexible learning and their equivalents. Of the eight universities from which replies were received, policy on online assessment was scarce or absent. The University of Wollongong is currently addressing policy on this issue; at Deakin it is being discussed in the Teaching and Learning Fellowship program; and at Central Queensland University, while it has been raised with the Education Committee and Academic Board, concrete evidence of policy development is not yet apparent. Adelaide University has a comprehensive site providing information about online assessment, and the University of Canberra includes issues in online assessment in its professional development program and in some units of the Graduate Certificate of Higher Education, but neither has a policy about it.
1 http://ncode.mq.edu.au/
If attention is being paid to online assessment at an institutional level, it is typically in relation to authentication and plagiarism. For example, the University of Sydney requires a statutory declaration to accompany assignments. Deakin requires a student statement and accepts the online signature as verification of student identity. At the University of South Australia, the ‘Turnitin’ tool is being trialled to identify cases of plagiarism.
Based on responses received, Murdoch’s policy designating clear percentages for various kinds of assessment appears to be rare. Policy in other institutions addresses issues such as aligned curricula, communication of expectations, the use of more than one item for assessment, and the avoidance of over-emphasis on examinations. Development of criteria for assessment is, in the main, the responsibility of faculties, schools and teaching teams. While online learning is in various stages of adoption, it is clear that there is discussion in a number of universities about the value and appropriate construction of computer-marked assessments, and about how online activities can be designed, used and assessed effectively.
Types of Assessment
Several years ago, Nightingale et al. (1996) studied the range of assessment activities used in Australian universities. A list of possible activities was compiled from this work2 as part of an educational development project about outcomes-based subject design (Phillips, 2001; Phillips, Pospisil, Bell, & Patterson, 1999). In Table 1, the abovementioned list was categorised into a number of styles of assessment. For example, all classroom-related activities, such as presentations and discussions, have been combined into one style of assessment. The broad assessment styles are similar to, but distinct from, those developed by Morgan and O’Reilly (1999) and discussed by Kerka, Wonacott, Grossman, & Wagner (2000). In Table 1, each assessment type is described briefly, with comments about its suitability as an online activity.
Discussion
It can be seen from Table 1 that many types of assessment can be replicated online. However, several issues arise from the discussion in Table 1. A number of the issues can be resolved at an operational level:
• Technical issues relating to the availability of technology are becoming less problematic as ICT becomes ubiquitous at universities;
• Staff and student skills in submitting and marking online work can be addressed by staff development;
• An assignment submission/exchange process can be put in place;
• Funding can be sourced for the development of relatively expensive computer simulations, where appropriate.

However, two major issues remain, which need to be addressed at a policy level:
• Assessing practical and oral work online;
• Validating the identity of the examinee and invigilation of online examinations.
Practical and Oral Work
Some learning outcomes, such as oral and practical capabilities, can only be appropriately assessed face-to-face.
If manual skills of any sort (patient manipulation, using laboratory equipment, etc.) are important learning outcomes in a unit of study, it is inappropriate to teach these without the opportunity for
modelling and practice. It is, therefore, appropriate to assess these face-to-face. The issue is not the appropriateness of the online assessment; it is the appropriateness of online teaching in this context.
2 The list of assessment types and accompanying descriptions are available at http://wwwtlc2.murdoch.edu.au/outcomes/src/bg/as/bgas1.html
Table 1. List of types of assessment and analogous online activities
In the same way, if oral presentation skills or interviewing skills are an important graduate attribute, then these ideally need to be modelled and practised through classroom contact. They should also be assessed face-to-face. However, it may not be necessary for the class to meet face-to-face every week, and blended learning environments, with a mixture of face-to-face and online classes, are increasingly common.

On the other hand, the intended learning outcomes may focus on critical thinking, analysis and structuring of arguments. Traditionally, these skills may have been learnt through tutorial presentations. However, they can be learnt just as well, perhaps better, by participation in structured online discussion activities. A further, and increasingly feasible, possibility is the use of the audioconferencing capabilities of programs such as Microsoft NetMeeting for oral work. Use of the telephone network is also an option.
The challenge for course designers when they adopt a more flexible approach is to reconsider their teaching and assessment strategies and reconfigure them to best satisfy the learning outcomes, and make best use of accessible media technologies, budget, available teaching time and supervision requirements.

Authentication and Online Assessment
A major issue with online assessment is ensuring that the work submitted is that of the student. This has always been an issue for external students, and it has been resolved, historically, by insisting that students sit an invigilated exam. However, student feedback (Thiele, 2002) indicates that they do not feel this is appropriate in many cases, where educationally sound unit design does not match well with the requirement for a final examination.
Various technological approaches have been suggested to verify student identity. A recent study by Fröhlich (2000), cited in Williams (2000), reviewed these technologies, which included fingerprint
recognition, smart cards, hand geometry, retinal scans, iris recognition, facial recognition, voice recognition and remote invigilation. Clearly, most of these technologies are not in wide use in the community. However, remote invigilation is becoming more feasible. Small ‘webcams’ are becoming widely available and are relatively easy to install. It is now feasible to use desktop videoconferencing software and webcams to supervise a number of students at a distance.
However, when considering the need for invigilation, it is appropriate to consider the risks (Morgan & O’Reilly, 1999: 79-80). Should a student wish to submit assessment fraudulently carried out by a third person, there are ways and means by which this can be accomplished, however many precautions are taken. This is the case for traditional assessment as well as online assessment. Where assessment is dependent on a single examination, a third party could be approached to sit the exam on behalf of the student. However, if a range of continuous assessment is used, then the third party would need to be available for a number of assessment tasks, and it is less likely that the third party would be willing to do this. O’Reilly and Morgan (1999) report that continuous assessment has been used in distance education to reduce the risk of cheating by students. Kerka et al. (2000), in a review of issues surrounding authentication, also point out that using multiple sources of assessment data reduces the likelihood of cheating.
The collaborative and communicative aspects of the web can be used to develop online communities3. In these communities, teachers can become familiar with the writing styles and abilities of individual students, making cheating and plagiarism easier to discover (Gray, 1997, cited in Kerka et al., 2000). Kerka et al. (2000) claim that changes to pedagogy reduce the risks of cheating, through focusing on deep learning approaches calling for analysis and application (essays, case studies, etc.), while de-emphasising objective tests and other surface learning approaches. At the same time, the traditional notion of the closed-book examination can be challenged. Open-book exams can require deeper understanding on the part of students and are more authentic (Nelson, 1998). Exam question design can, therefore, significantly reduce the likelihood of cheating by requiring examinees to respond critically or creatively, rather than reproductively.
Williams (2000) has experimented with online (open-book) ‘take-home’ examinations, and claims that there are significant benefits to be gained by both students and academics, and that evidence suggests that “the concern about cheating is exaggerated” (p. 274). The design of authentic assessment (and learning environments), which the learner perceives as useful and desirable (Kerka et al., 2000), increases the likelihood of students doing their own work.
On the other hand, online, closed-book exams expose further possibilities for cheating. While it is
possible for exams to be available online only for a specified time period, students could seek out answers to questions during the exam, using search engines, for example.
Online technology is particularly suited to formative assessment (Kerka et al., 2000). In some disciplines, online quizzes, whether scored or not, can test student understanding, provide helpful feedback and
minimise teacher involvement (Peat & Franklin, 2002). In other disciplines, online discussions can enable students to build their understandings, while reducing the time that teachers spend answering questions. In both cases, however, the learning activities need to be well designed.
Conclusion
This paper has reported an analysis of the types of assessment which can be carried out online. It concludes that the only situations where analogous online assessments cannot be conceived are those requiring oral or practical work. In these cases, it is the nature of the intended learning, rather than the assessment tasks, which makes an online approach impractical.
3 See, for example, a special issue of the Australian Journal of Educational Technology, 19(2), http://www.ascilite.org.au/ajet/ajet19/ajet19.html
Given this exception, and in the context of Murdoch University’s move towards flexible access to learning materials, it is feasible to develop assessment that is equivalent, whether conducted online or face-to-face. A further driver for equivalent assessment is that it is inequitable to make examinations compulsory for external students when they are not compulsory for internal students.
This research has identified a tension between designing assessment which is likely to lead to desired learning outcomes, and ensuring that the student who undertakes assessment is the student who is
enrolled. This tension leads to a reconsideration of fundamental assumptions about the nature of teaching, learning, assessment and cheating.
The traditional ‘norm’ has been for at least one component of assessment to be supervised individual work on closed-book examinations, which, in many cases, elicit surface learning. In this circumstance, it has been deemed essential to verify that the assessment has been undertaken by the enrolled student, without input from other sources, whether personal or bibliographic, by a process of invigilation. In the online context, such assumptions are called into question. Because it is difficult to invigilate assessment online, and because of the communicative and collaborative nature of the online environment, it is appropriate to question the validity of supervised, individual, closed-book assessment.
The analysis here concludes that a well-designed flexible unit should:
• include a range of assessment tasks, both formative and summative, over the time period of the unit;
• assess deep learning;
• utilise open-book instead of closed-book examinations; and
• be relevant to students in preparing for the workplace.
The research indicates that these approaches are likely to lead to successful university learning outcomes, and they will also reduce incentives for students to cheat, thereby reducing the need for supervised
assessment. The movement to flexible access, therefore, provides an opportunity to reassess and redesign both units of study and their associated assessment. At the same time, associated policy, including requirements for invigilated assessment, needs to be reviewed.
Each institution will need to make its own judgement about the balance between the risks of cheating and the pedagogical benefits afforded by a review of assessment design and policy. Hopefully this analysis will assist them.
References
Alexander, S., Kandlbinder, P., Howson, E., Lawrence, L., Francois, A., & Housego, S. (2002).
Simassessment: Enhancing Academics’ Understanding of Assessment through Computer Simulation. In A. Williamson & C. Gunn & A. Young & T. Clear (Eds.), 19th Annual Conference of the
Australasian Society for Computers in Learning in Tertiary Education (pp. 47-55). Auckland, New Zealand. [Online] Available at http://www.unitec.ac.nz/ascilite/proceedings/papers/196.pdf.
Alexander, S., & McKenzie, J. (1998). An Evaluation of Information Technology Projects for University Learning. Canberra, Australia: Committee for University Teaching and Staff Development and the Department of Employment, Education, Training and Youth Affairs. [Online] Available at http://www.autc.gov.au/in/in_pu_cu_ex.htm.
Barrett, C., & Luca, J. (2002). Open online assessment: keeping the tutors honest! In A. Williamson & C. Gunn & A. Young & T. Clear (Eds.), 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 783-786). Auckland, New Zealand. [Online] Available at http://www.unitec.ac.nz/ascilite/proceedings/papers/104.pdf.
Bell, M., Bush, D., Nicholson, P., O’Brien, D., & Tran, T. (2002). Universities Online: A survey of online education and services in Australia (0 642 77256 8). Canberra: Commonwealth of Australia. [Online] Available at http://www.dest.gov.au/highered/occpaper/02a/02_a.pdf.
Biggs, J. B. (1999). Teaching for quality learning at university. Philadelphia, PA: Society for Research into Higher Education & Open University Press.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (1999). How People Learn: Brain, Mind, Experience, and School: National Academy Press. [Online] Available at http://www.nap.edu/html/howpeople1/.
Brown, S., & Knight, P. (1994). Assessing Learners in Higher Education. London: Kogan Page.
Brown, S., Race, P., & Bull, J. (1999). Computer-assisted Assessment in Higher Education. London: Kogan Page.
Bull, J., Conole, G., Davis, H. C., White, S., Danson, M., & Sclater, N. (2002). Rethinking Assessment Through Learning Technologies. In A. Williamson & C. Gunn & A. Young & T. Clear (Eds.), 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 75-86). Auckland, New Zealand. [Online] Available at http://www.unitec.ac.nz/ascilite/proceedings/papers/056.pdf.
Collis, B. (1996). Tele-learning in a Digital World: The Future of Distance Learning: International Thomson Computer Press.
Cooper, M., Donnelly, A., & Ferreira, J. (2002). Remote Controlled Experiments for Teaching over the Internet: A Comparison of Approaches Developed in the PEARL Project. In A. Williamson & C. Gunn & A. Young & T. Clear (Eds.), 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 119-128). Auckland, New Zealand. [Online] Available at http://www.unitec.ac.nz/ascilite/proceedings/papers/112.pdf.
Fyfe, G., Fyfe, S., & Phillips, R. (1995). Sarcomotion: IMM used across the learning spectrum. Paper presented at the Australian Society for Computers in Learning in Tertiary Education, Melbourne, Australia.
Gibbs, G. (1992). The Nature of Quality in Learning. In G. Gibbs (Ed.), Improving the Quality of Student Learning (pp. 1-11). Bristol: Technical and Educational Services Ltd.
Gray, S. (1997). Maintaining Academic Integrity in Web-Based Instruction. Educational Media Journal, 35(3), 186-188.
Harasim, L., Hiltz, S. R., Teles, L., & Turoff, M. (1995). Learning Networks- a Field Guide to Teaching and Learning Online. Cambridge Massachusetts: The MIT Press.
Higgison, C. (2000). Online Tutoring e-book [Web site]. Heriot-Watt University and Robert Gordon University. Retrieved 17 April 2002, from the World Wide Web: http://otis.scotcit.ac.uk/onlinebook/
James, R., McInnis, C., & Devlin, M. (2002). Assessing Learning in Australian Universities. Canberra: Australian Universities Teaching Committee. [Online] Available at
http://www.cshe.unimelb.edu.au/assessinglearning/docs/AssessingLearning.pdf.
Kerka, S., Wonacott, M., Grossman, G., & Wagner, J. (2000). Assessing Learners in Higher Education. ERIC. Retrieved 29 July, 2003, from the World Wide Web: http://ericacve.org/docs/pfile03.htm#principles
Kinder, J., Fardon, M., & Yasmeen, S. (1999). Offline or Online? A Simulation Exercise in a First Year International Politics Unit. In J. Winn (Ed.), Australasian Society for Computers in Learning in Tertiary Education Conference. Brisbane, Australia: Teaching and Learning Support Services, Queensland University of Technology. [Online] Available at
http://www.ascilite.org.au/conferences/brisbane99/papers/kinderfardon.pdf.
Laurillard, D. M. (2002). Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies (2nd ed.). London: Routledge.
Linser, R., Naidu, S., & Ip, A. (1999). Pedagogical Foundations of Web-Based Simulations in Political Science. In J. Winn (Ed.), Australasian Society for Computers in Learning in Tertiary Education Conference. Brisbane, Australia: Teaching and Learning Support Services, Queensland University of Technology. [Online] Available at http://www.ascilite.org.au/conferences/brisbane99/papers/linsernaidum.pdf.
Luca, J., & McLoughlin, C. (2002). A Question of Balance: Using Self and Peer Assessment Effectively in Teamwork. In A. Williamson & C. Gunn & A. Young & T. Clear (Eds.), 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 833-837). Auckland, New Zealand. [Online] Available at http://www.unitec.ac.nz/ascilite/proceedings/papers/072.pdf.
Luca, J., & Phillips, R. A. (1999). Designing, Implementing and Evaluating Project-based Learning on the Web. In J. Winn (Ed.), Supplement to Proceedings: Australasian Society for Computers in Learning in Tertiary Education Conference (pp. 63-65). Brisbane, Australia: Teaching and Learning Support Services, Queensland University of Technology.
Morgan, C., & O’Reilly, M. (1999). Assessing Open and Distance Learners. London: Kogan Page.
Nelson, G. E. (1998). On-line Evaluation: Multiple Choice, Discussion Questions, Essay, and Authentic Projects. Paper presented at the Teaching in the Community Colleges Online Conference, ‘Online Instruction: Trends and Issues II’, Kapiolani Community College. [Online] Available at http://leahi.kcc.hawaii.edu/org/tcon98/paper/nelson.html.
Nightingale, P., Te Wiata, I., Toohey, S., Ryan, C., Hughes, C., & Martin, D. (1996). Assessing Learning in Universities. Sydney: University of New South Wales Press.
Northover, M. (2002). Online discussion boards - friend or foe? In A. Williamson & C. Gunn & A. Young & T. Clear (Eds.), 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 477-484). Auckland, New Zealand. [Online] Available at http://www.unitec.ac.nz/ascilite/proceedings/papers/193.pdf.
O’Reilly, M., & Morgan, C. (1999). Online assessment: creating communities and opportunities. In S. Brown & P. Race & J. Bull (Eds.), Computer-assisted Assessment in Higher Education. London: Kogan Page.
Oliver, R., Omari, A., & Herrington, J. (1998). Developing Converged Learning Environments for on and off-Campus Students using the WWW, Flexibility: the next wave. Proceedings of the 1998 ASCILITE Conference. Wollongong. [Online] Available at
http://www.ascilite.org.au/conferences/wollongong98/asc98-pdf/oliver0051.pdf.
Paloff, R., & Pratt, K. (1999). Promoting collaborative learning. In Building learning communities in cyberspace. San Francisco: Jossey-Bass.
Peat, M., & Franklin, S. (2002). Use of online and offline formative and summative assessment opportunities: have they had any impact on student learning? In A. Williamson & C. Gunn & A. Young & T. Clear (Eds.), 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 505-514). Auckland, New Zealand. [Online] Available at http://www.unitec.ac.nz/ascilite/proceedings/papers/019.pdf.
Phillips, R. A. (2000). Facilitating online discussion for interactive multimedia project management. Heriot-Watt University and Robert Gordon University. Retrieved 25 May 2001, from the World Wide Web: http://otis.scotcit.ac.uk/eworkshop.htm
Phillips, R. A. (2001). A Case Study of the Development and Project Management of a Web/CD Hybrid Application. Journal of Interactive Learning Research, 12(2/3), 225-243.
Phillips, R. A. (2002). Innovative use of Microsoft Word and QTVR for Teaching Radiology and Diagnostic Imaging. In D. McConnell (Ed.), Proceedings of the 9th International Conference of the Association for Learning Technology (pp. 71-81). Sunderland, U.K.: Association for Learning Technology. [Online] Available at www.alt-c2002.org.uk.
Phillips, R. A., Jenkins, N., Fyfe, G. M., & Fyfe, S. (1997). The User Interface Design of Learner-centred Interactive Multimedia Programs. Paper presented at the Ed-Media 97 Conference, Calgary, Canada.
Phillips, R. A., Lafitte, F., & Richardson, J. L. (2001). The use of QTVR for teaching Radiology and Diagnostic Imaging, AUC Academic and Developers Conference. Townsville, Australia: Apple University Development Fund. [Online] Available at
http://auc.uow.edu.au/conf/conf01/downloads/AUC2001_Phillips.pdf.
Phillips, R. A., & Luca, J. (2000). Issues Involved in Developing a Project-based Online Unit which Enhances Teamwork and Collaboration. Australian Journal of Educational Technology, 16(2), 147-160. [Online] Available at http://www.ascilite.org.au/ajet/ajet16/phillips.html.
Phillips, R. A., Pospisil, R., Bell, J., & Patterson, A. (1999). Meeting Needs: A Staff Development Resource for Redesigning Sociology Courses According to an Outcomes-Based Model. In J. Winn (Ed.), Australasian Society for Computers in Learning in Tertiary Education Conference (pp. 265-275). Brisbane, Australia: Teaching and Learning Support Services, Queensland University of Technology. [Online] Available at
http://www.ascilite.org.au/conferences/brisbane99/papers/phillipspospisil.pdf.
Phillips, R. A., Pospisil, R., & Richardson, J. L. (2001). The Use of a QTVR Image Database for Teaching Veterinary Radiology and Diagnostic Ultrasound to Distance Education Students. Australian Journal of Educational Technology, 17, 96-114.
Ramsden, P. (1988). Studying Learning: Improving Teaching. In P. Ramsden (Ed.), Improving Learning: New Perspectives (pp. 13-31). London: Kogan Page.
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
Rowntree, D. (1987). Assessing Students: How shall we know them? (2nd ed.). London: Kogan Page.
Salmon, G. (2000). E-moderating: the key to teaching and learning online. London: Kogan Page.
Thiele, B. (2002). ‘DEGREES OF FLEXIBILITY’ Report of the Academic Council Working Party on External Studies and Flexible Delivery, March 2002 [Online]. Murdoch University. Retrieved 12 July 2002, from the World Wide Web:
http://www2.murdoch.edu.au/admin/cttees/ac/2002/march/ESFDC%20Final%20Report.doc
Warren Piper, D., Nulty, D. D., & O’Grady, G. (1996). Examination Practices and Procedures in Australian Universities: Summary Report. Canberra: Australian Government Publishing Service.
Weaver, D. A., Delbridge, L. M. D., Harris, P. J., Petrovic, T., & Kemm, R. E. (2000). Blood Pressure: Reflex Control. Sydney, Australia: ADInstruments.
Williams, J. (2000). Flexible Assessment for Flexible Delivery: On-Line Examinations that Beat the Cheats, Proceedings of the First Moving Online Conference. Gold Coast: Norsearch Ltd. [Online] Available at http://www.scu.edu.au/schools/sawd/moconf/mocpapers/moc33.pdf.

Notes
1 http://ncode.mq.edu.au/
2 The list of assessment types and accompanying descriptions are available at http://wwwtlc2.murdoch.edu.au/outcomes/src/bg/as/bgas1.html
3 See, for example, a special issue of the Australian Journal for Educational Technology, 19(2), http://www.ascilite.org.au/ajet/ajet19/ajet19.html.
Copyright © 2003 Rob Phillips and Kate Lowe.
The author(s) assign to ASCILITE and educational non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The author(s) also grant a non-exclusive licence to ASCILITE to publish this document in full on the World Wide Web (prime sites and mirrors) and in printed form within the ASCILITE 2003 conference proceedings. Any other usage is prohibited without the express permission of the author(s).