Thought Propagation: Teaching LLMs to Solve Complex Reasoning Tasks with Analogies
Humans solve complex and novel problems using analogies. Can LLMs do the same?
There are many ways to design roundabouts. Can AI choose the best one?
Probabilistic generative modeling for procedural roundabout generation for developing countries
Researchers: Low-Resource Languages Can Easily Jailbreak LLMs
The results suggest that safety mechanisms do not generalize across languages.
Decoding Speech from Brain Waves - A Breakthrough in Brain-Computer Interfaces
Researchers at Meta have shown how to decode speech from brain activity recorded with noninvasive methods such as EEG and MEG.
Can Large Language Models Self-Correct Their Own Reasoning? Probably Not.
A new paper takes a critical look at the promise and limits of self-correction.
Enabling Language Models to Implicitly Learn Self-Improvement
Rather than manually distilling improvement criteria into prompts, the implicit information in preference data can be leveraged.
LLMs Can Be Extended to Infinite Sequence Lengths Without Fine-Tuning
Models trained with a finite attention window can be run on unbounded sequence lengths, with no fine-tuning required.