
AdaptEd

AI-Powered Learning Evaluation

AdaptEd reimagines the Feynman Technique for inclusive learning. Using a multimodal AI that understands text, voice, and video, it provides personalized feedback to help every learner, regardless of speech or vision ability, explain complex ideas clearly.

01

Background


The Feynman Technique simplifies complex ideas by explaining them in basic terms. However, it relies heavily on both verbal and non-verbal communication, which restricts people with speech or visual impairments. Vision impairments limit access to nonverbal cues such as facial expressions and written feedback, while speech impairments make it harder to articulate ideas through spoken explanation.

02

Details

This project aims to solve these challenges using a multimodal LLM that can process voice, video, and text input to generate clear, accessible feedback. By analyzing gestures, expressions, and tone (when available), the model can provide more nuanced responses, helping users refine their understanding. 


03

Implementation

 The tool will be available as a web or mobile interface, integrating features like text-to-speech, adaptive visual settings, and structured feedback tailored to individual needs. By leveraging AI to make the Feynman Technique more inclusive, this project aims to remove barriers to deep learning and make knowledge more accessible for everyone.
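One way the adaptive settings described above could be represented is as a simple per-user accessibility profile that shapes how feedback is delivered. The field names below are illustrative assumptions, not AdaptEd's actual schema:

```python
from dataclasses import dataclass

# Hypothetical per-user accessibility profile; real settings may differ.
@dataclass
class AccessibilityProfile:
    text_to_speech: bool = False         # read feedback aloud
    high_contrast: bool = False          # adaptive visual setting
    font_scale: float = 1.0              # enlarge feedback text
    feedback_format: str = "structured"  # "structured" or "conversational"

def render_feedback(feedback: str, profile: AccessibilityProfile) -> dict:
    """Package feedback text with the output modes this user needs."""
    return {
        "text": feedback,
        "speak_aloud": profile.text_to_speech,
        "style": {
            "high_contrast": profile.high_contrast,
            "font_scale": profile.font_scale,
        },
        "format": profile.feedback_format,
    }

out = render_feedback(
    "Try defining 'entropy' without the word 'disorder'.",
    AccessibilityProfile(text_to_speech=True, font_scale=1.5),
)
```

Keeping the profile separate from the feedback itself means the same model output can be rendered differently for each learner without re-querying the model.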

Acknowledgements

  • University of Washington Department of Electrical and Computer Engineering for providing research support, facilities, funding, and guidance.

Department of Electrical and Computer Engineering

University of Washington
185 E Stevens Way NE
Seattle, WA 98195

Connect With Us

  • LinkedIn

© 2025 by ARC Lab. All Rights Reserved.
