Latest
Get ready to lose to Transformers on Lichess
They can hit 2895 Elo… without memorizing patterns
Long Context Compression with Activation Beacon
Differential Transformers
LLMs will lie forever
Hallucinations are never going away. How can we reduce them?
What's (actually) up with o1
The new o1 UX is bad. The model is weird. I have questions.
AI can (kinda) generate novel ideas
LLMs have some brainstorming limitations.
AI agents can collude using hidden messages!
LLMs can hide their real messages in unsuspicious chit-chat
🔥 Top ML papers of the week
The top ML papers of the week (Aug 23 – Aug 30)
Training on code improves LLM performance on non-coding tasks
Adding code to your training data makes your LLM better at non-coding tasks too
LLMs can speak in JPEG
By studying "secret" messages (JPEGs), LLMs can eventually learn to write them.
Different now
The week the internet actually changed forever
The GPT store is stupid and dead
There's no moat. All prompts can be extracted, so all prompts are public.