I am a pragmatic education researcher interested in research that is timely and relevant to the students I teach, the institution at which I am employed, teacher education programs, and liberal arts colleges. Specifically, I am interested in curriculum development and program evaluation research. Determining the effectiveness of a course, curriculum, or program requires evaluation strategies that are not only reliable and valid but also logistically feasible. I research the extent to which evaluation protocols meet these criteria by exploring stakeholders’ experiences and end-user feedback.
Teacher preparation is in the midst of a significant transition to a single national accrediting agency, the Council for the Accreditation of Educator Preparation (CAEP). In 2013, the CAEP Commission on Standards and Performance Reporting acknowledged its reliance on “wisdom of practice” when reporting to the CAEP Board of Directors because “research, to date, does not tell us what specific experiences or sequence of experiences are most likely to result in more effective beginning teachers” (Council for the Accreditation of Educator Preparation, 2013, p. 16). This gap in the literature is an area of practical importance to all teacher education programs, accrediting agencies, and partnering school systems. Determining which specific experiences are most likely to produce effective beginning teachers depends on reliable and valid evaluation protocols. Evaluation practices in teacher preparation have become increasingly complex as evidence of preservice teachers’ and completers’ impact on K-12 students is now required. Developing evaluation protocols that are reliable, valid, and logistically feasible for both large and small teacher preparation programs remains a challenge, and our accrediting agency acknowledges the dearth of research addressing this need. This is my primary research agenda.
My dissertation research was a meta-evaluation of the fieldwork evaluation protocol used by the Randolph-Macon College Education Department. I interviewed course instructors, preservice teachers, and cooperating teachers from elementary, middle, and high schools to determine the extent to which the current fieldwork evaluation protocol was reliable, valid, and logistically feasible. The results yielded numerous recommendations that the department will consider for implementation in 2016. Formally assessing evaluation protocols in this way helps the department move beyond “tradition, intuition, and common sense” (Pepper & Hare, 1999, p. 353) to a more systematic, research-based evaluation that identifies strengths and weaknesses of the program in light of explicit standards set forth by accrediting agencies. Participant-oriented evaluation models like this one generally rely on qualitative methods, are classified as constructivist, and have been used in the literature to evaluate higher education programs (Ross, 2010). In this project, the participant-oriented evaluation model was responsive in that its major focus was “attending and responding to the participant and stakeholder need for information” (Ross, 2010, p. 483). As the Randolph-Macon Education Department prepares for a CAEP accreditation visit in May 2015, highlighting this research demonstrates our commitment to a culture of continuous improvement through formal program research.
My research addressed a gap in the literature that is of practical importance not only to the Education Department at Randolph-Macon College, but also to other teacher education programs, accrediting agencies, partnering P-12 school systems, and the state and federal departments of education. Since completing my dissertation, I have been asked to replicate my study at Virginia Commonwealth University, which suggests the relevance of the research questions and design to other institutions.
Analyzing preservice teachers’ reflective writing is a common evaluation methodology in teacher education (Coffey, 2010; Hanline, 2010). Though common, using reflective writing to evaluate learning is inherently challenging. I have developed several reflective writing protocols and am interested in researching the impact of those protocols on preservice teachers’ learning. I am currently preparing submissions to present these protocols at teacher conferences.
In addition to studying curriculum development and program evaluation, I am also interested in researching (1) special education teacher recruitment, (2) teacher retention, and (3) American schools in international locations.
Coffey, H. (2010). They taught me: The benefits of early community-based field experiences in teacher education. Teaching and Teacher Education, 26(2), 335-342.
Council for the Accreditation of Educator Preparation. (2013). CAEP accreditation standards and evidence: Aspirations for educator preparation. Retrieved from http://caepnet.files.wordpress.com/2013/02/commrpt.pdf
Hanline, M. F. (2010). Preservice teachers’ perceptions of field experiences in inclusive preschool settings: Implications for personnel preparation. Teacher Education and Special Education, (4), 335-351.
Pepper, K., & Hare, D. (1999). Development of an evaluation model to establish research-based knowledge about teacher education. Studies in Educational Evaluation, 25, 353-377.
Ross, M. E. (2010). Designing and using program evaluation as a tool for reform. Journal of Research on Leadership Education, 5(12), 481-506.
Contact: Amber Peacock
P. O. Box 5005
Ashland, VA 23005-5505