EDUC633: Week 4 Assessment Video Blog

Revolutions

Distance and online learning have revolutionized education by expanding access to and availability of knowledge sharing. Salmi (2000) stated that there is a need to formulate a clear vision of how the higher education system can most effectively contribute to the development of a knowledge-based economy. Distance and online learning have provided an open path for educational institutions to pursue that goal.

Challenges

Assessment of student learning is a challenge in distance and online learning because physical student-teacher interaction is reduced during the teaching and learning experience. In a research study of assessment methods and the online environment, Kearns (2012) found that challenges also arise from the workload and time-management issues online learning students face. As an online learning student, I can attest to that.

Current online and mobile assessment methods being used

Current methods include authentic assessment, team and peer assessment, online discussion, and formative assessment, to name a few. Formative assessment is effective because it provides ongoing feedback on learning. Authentic assessment focuses on the learner’s personal experiences related to the learning. Kelly (2014) suggests incorporating mobile technology into authentic learning; mobile technology such as video recording allows students to be creative when relating their individual experiences to their learning. Team and peer assessment effectively fosters collaboration, in this class for example, through online discussion.

Reflection

I believe that online learners are best assessed by doing. Field experiences and presentations of skills achieved are strong assessment tools for online learning students. I consider that assessments serve a dual purpose: helping students receive feedback on their academic performance and demonstrating whether students have achieved the learning objectives presented in the curriculum. Teachers can use formative and summative assessments as a guide to analyze students’ progression and to modify teaching techniques to meet their students’ learning needs.

References

Kearns, L. R. (2012). Student assessment in online learning: Challenges and effective practices. Journal of Online Learning and Teaching, 8(3), 198. Retrieved from http://search.proquest.com/docview/1500356017?accountid=12085

Kelly, R. (2014). Alternative assessment methods for the online classroom. Faculty Focus. Retrieved from http://www.facultyfocus.com/articles/educational-assessment/alternative-assessment-methods-online-classroom/

Salmi, J. (2000). Tertiary education in the twenty-first century: Challenges and opportunities. The World Bank. Retrieved from http://www-wds.worldbank.org/external/default/WDSContentServer/WDSP/IB/2005/06/03/000090341_20050603091517/Rendered/PDF/324370Tertiary0Education0LCSHD062.pdf


EDUC633: Week 5 Book Chapter 3 Review Blog

Summary

The purpose of chapter 3 of the text e-Learning and the Science of Instruction by Clark and Mayer (2011) was to define the various elements attached to evidence-based practice in the instructional design of eLearning courses. The chapter outlines how people learn and the important role the environment plays in accommodating various learning levels; addresses why evidence-based instruction is important to consider when designing eLearning curriculum; and explains how instructional methods address the various ways people learn, with proven methods being implemented to improve learning and thereby achieve instructional effectiveness. The chapter highlights best practices in eLearning course design through three approaches to research on instructional effectiveness: What works? When does it work? How does it work? When determining which approach is appropriate, it is important to keep in mind that the type of research method will vary depending on the learning outcome being predicted.

To make sound experimental comparisons, the text explains that a good consumer of experimental research is very selective, applying criteria for research selection. It is important to be sure that the research selected includes experimental control, random assignment, and appropriate measures. These criteria are important for reducing outcome variations and ensuring that all subjects in the experiment receive equal treatment. Chapter three also offered criteria for ruling out reasons for a result of no effect in experimental comparisons. Though an ineffective treatment would be the most obvious reason for an instructional treatment not to affect learning, the authors suggest that good research selection includes ruling out other reasons a particular instructional method may have appeared to be ineffective.

Interpreting research statistics is relevant to a researcher’s ability to understand the results of any study being applied to an instructional design. Statistics allow an instructional design to be shown to be effective when applied to the eLearning environment during teaching and learning. Specifically, this chapter highlights probability and effect size as common statistical measures with which all consumers of research should be familiar. A probability less than .05 and an effect size greater than .5 are common benchmarks when interpreting research statistics. I understand them to mean: A) the difference between groups is statistically significant, or real; and B) how strong the effect is when comparing one group to the other.
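To make the effect-size idea concrete, here is a minimal sketch of my own (not from the chapter) of Cohen's d, one common effect-size measure, using made-up group means and standard deviations; a separate significance test, such as a t-test on the raw scores, would supply the p-value.

import math

# Hypothetical summary data for two groups (values are illustrative only)
m_treat, sd_treat, n_treat = 82.0, 10.0, 30
m_ctrl, sd_ctrl, n_ctrl = 76.0, 11.0, 30

# Pooled standard deviation, then Cohen's d = (M1 - M2) / SD_pooled
sd_pooled = math.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                      / (n_treat + n_ctrl - 2))
cohens_d = (m_treat - m_ctrl) / sd_pooled
print(f"Cohen's d: {cohens_d:.2f}")  # values above .5 suggest a medium-or-larger effect

With these illustrative numbers the result is roughly d = 0.57, which clears the .5 benchmark the chapter mentions, while the p-value from the accompanying significance test would indicate whether the difference between groups is statistically significant.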

Reflection

In reading chapter three, I learned that no particular type of research method is necessarily the right one. In fact, many research methods can be applied. What is important is that the research method matches the research question being investigated.

Reading this chapter didn’t change my views on educational technology and distance education. In fact, it broadened my knowledge of evidence-based practices for instructional design. Evidence-based research solidifies the connection between theory and practice. It allows the researcher to avoid reinventing the wheel by following techniques that have been proven to be effective. Chapter three suggests using research as a guide to eLearning curriculum planning. In addition, I found that the authors’ discussion of what we don’t know about evidence-based practice enhanced the quality of the evidence-based practices presented in the chapter. The skill set offered by Clark and Mayer is relevant for today’s eLearning instructional designers and curriculum planners because it highlights the connection between how people learn and the factors that drive decision making during the instructional design process.

This is particularly relevant to our ISD project, since objectives and assessments are currently being aligned to add cognitive complexity while students achieve a new skill (digital portfolio creation). Depth alignment refers to the match between the cognitive complexity of the knowledge or skill prescribed by the standards and the cognitive complexity required by the assessment item or task (Webb, 1999). Assessment is an important part of the curriculum because it establishes the need for the instructional design and later determines whether to continue implementing the curriculum or discontinue the delivered materials.

By providing the tools for developing relevant online curriculum, authors Clark and Mayer have given me a significant wealth of tactics for accurately identifying, as a consumer of research, the research relevant to an instructional design. I found that the authors’ use of technology enhanced the quality of the teaching and learning strategies presented in the text. These skills will serve me in the future as I embark on the research and writing required to complete my doctoral degree.

References

Clark, R. C., & Mayer, R. E. (2011). E-learning and the science of instruction: Proven guidelines for consumers and designers of multimedia learning (3rd ed.). San Francisco, CA: Wiley & Sons. ISBN: 9780470874301.

Webb, N. L. (1999). Alignment of Science and Mathematics Standards and Assessments in Four States. Washington, DC: Council of Chief State School Officers. Retrieved from http://files.eric.ed.gov/fulltext/ED458288.pdf