Decoding Speech from Brain Waves - A Breakthrough in Brain-Computer Interfaces. Researchers from Meta have shown how to turn brain waves into speech using noninvasive methods like EEG and MEG.
ViT Attention, Infinite LLMs, and Age-Warping Friends with AI. This week's advancements in AI: focus, flow, and fun.
Can Large Language Models Self-Correct Their Own Reasoning? Probably Not. A new paper takes a critical look at the promise and limits of self-correction
Enabling Language Models to Implicitly Learn Self-Improvement. Rather than manually distilling improvement criteria into prompts, models can leverage the implicit information in preference data.
LLMs can be extended to infinite sequence lengths without fine-tuning. Models trained with a finite attention window can generalize to infinite sequence lengths without any fine-tuning.
Tool-Integrated Reasoning: A New Approach for Math-Savvy LLMs. ToRA combines rationale-based and program-based reasoning to solve math problems that were previously too difficult for LLMs.
Researchers discover explicit registers eliminate vision transformer attention spikes. When visualizing the inner workings of vision transformers (ViTs), researchers noticed strange spikes of attention on random background patches. Here's how they fixed them.