Noelle Carlozzi, Ph.D.

Associate Professor
Medical School, Physical Medicine & Rehabilitation

Biography

Dr. Carlozzi is the director of the Center for Clinical Outcomes Development and Application (CODA), which provides expert consultation on measurement selection and application to evaluate clinical questions and interventions across the Medical School. Her research includes two multi-site NIH studies, both designed to develop and validate new outcome measures of health-related quality of life: one for individuals with Huntington disease and one for caregivers of individuals with traumatic brain injury (TBI).

  • Ph.D., Clinical Psychology, Oklahoma State University
  • M.S., Clinical Psychology, Oklahoma State University
  • B.A., Vassar College

Featured Member Profile


What are you thinking about?
The Center for Clinical Outcomes Development and Application (CODA) has expertise in measurement, including development, selection, application, and consultation. My primary focus is on measurement development, specifically computer adaptive test development for evaluating different aspects of health-related quality of life. I have R01s focused on Huntington disease and on caregivers of individuals with traumatic brain injury. I also have funded measurement development work in neonatal brachial plexus palsy, and measurement validation/application in spinal cord injury, traumatic brain injury, stroke, and mild cognitive impairment.

Why is this interesting to you?
I have always been interested in measurement, hence my training in neuropsychology. Measurement, while only one piece of the intervention puzzle, is an important one. It is what we use to evaluate the effectiveness of treatments and interventions.

What are the practical implications for health care?
Better outcome measures (i.e., more sensitive assessment measures) maximize the efficiency of clinical research: fewer participants are needed to determine whether an intervention is effective. Furthermore, computer adaptive testing takes substantially less time than traditional tests (without sacrificing sensitivity), thus decreasing the assessment burden.
