
AI Slop is Invading the Chess World

Large Language Models encode, spoiler alert, language, not knowledge. The architecture of an LLM-enabled chess tutor has to have at least two components:

  • a chess analyser
  • an LLM that translates what the analyser extracted into English

There is nothing inherently bad in this design, with each component playing to its strengths and together creating a useful tool that everybody loves.
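
A minimal sketch of that two-component split (all names are hypothetical; the analyser is stubbed where a real engine such as Stockfish would sit, and the narrator is a placeholder for the LLM call):

```python
from dataclasses import dataclass

@dataclass
class MoveFact:
    """Structured output of the chess analyser: one fact per move."""
    move: str     # move played, in SAN
    best: str     # engine's preferred move
    cp_loss: int  # centipawns lost versus the best move

def analyse(game_moves):
    """Component 1: chess analyser (stub).
    A real implementation would run an engine over each position,
    e.g. Stockfish via python-chess; here the numbers are faked."""
    return [MoveFact(move=m, best="Nf3", cp_loss=120) for m in game_moves]

def narrate(fact: MoveFact) -> str:
    """Component 2: language layer (stub for an LLM call).
    The LLM never judges the chess; it only verbalises the
    structured facts the analyser already extracted."""
    if fact.cp_loss >= 100:
        return f"{fact.move} was a mistake; {fact.best} kept the balance."
    return f"{fact.move} is fine."

facts = analyse(["g4"])
print(narrate(facts[0]))  # g4 was a mistake; Nf3 kept the balance.
```

The point of the split is that the language model is never the source of chess truth; it only puts the analyser's numbers into words.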

The problem is that no one has made a good chess analyser :D That part has nothing to do with LLMs, and LLMs cannot power the machine learning architecture such a module requires. As always, it's not AI slop, it's sloppy humans using AI wrong.

That being said, just as LLMs taught us a lot about how humans function and what language does for them, these AI assistants will show how chess coaches function and what they really do. Because I am pretty sure there are a lot of chess coaches out there who spout the same kind of authoritative nonsense, trying to make a buck off people who don't understand chess; after all, it's not the chess that matters, but the coaching skill.

@TotalNoob69 said in #2:

> Large Language Models encode, spoiler alert, language, not knowledge.

Indeed.

> The architecture of an LLM enabled chess tutor has to have at least two components:
> - a chess analyser
> - an LLM that translates what the analyser extracted into English
> There is nothing inherently bad in this design, with each component playing to their strength and creating a useful tool that everybody loves.

True.

> The problem is that no one made a good chess analyser :D

:D

> It has nothing to do with AI and LLMs cannot power the necessary machine learning architecture required for such a module. As always, it's not AI slop, it's sloppy humans using AI wrong.

Ask not what AI can do for you, ask what you can do for AI.

> That being said, just like LLMs taught us a lot about how humans function and what language does for them

What have LLMs taught us about how humans function? I'm interested to know.

> these AI assistants will show how chess coaches function and what they really do.

Wait, but how? What do you mean the AI assistants will show 'how chess coaches function and what they really do'?

Chess coaching was invented by humans; how can AI show 'how chess coaches function' and 'what they really do'?

This sentence sounds as though AI invented chess coaching (or created humans?? scary twist lol).

> Because I am pretty sure there are a lot of chess coaches out there that spout the same kind of authoritative nonsense trying to make a buck off people who don't understand chess, as it's not the chess that matters, but the coaching skill.

Good point.

@RuyLopez1000 said in #3:

> Ask not what AI can do for you, ask what you can do for AI.

The choice is yours: Obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war.

> > That being said, just like LLMs taught us a lot about how humans function and what language does for them

> What have LLMs taught us about how humans function? I'm interested to know.

When the transformer model arrived, people didn't know what to expect. A lot of the AI exuberance now comes from the "magical" way in which computers suddenly got smart, fueling a lot of unrealistic expectations. No one predicted (pun not intended) that by trying to guess the next word in a sentence you would get something very akin to a meaningful conversation with something that understands you.

I've written a lot about this, so I am not going to repeat myself that much. I believe the greatest value of LLMs is that they taught us how much of intelligence we offloaded to language. A lot of the "cool smart" behavior we associate with intelligent people is actually just language: recycled talking points, understanding of memes and in-group speak, making rare connections to concepts close to the current subject only in the particular bubble of the interlocutor. Even software, which is of course built in computer languages, is now ripe for LLM consumption and reframing. And just like most of the smart coding practices are nothing but plumbing, most of our conversation is social plumbing.

In short, LLMs are amazing in showing us what intelligence is NOT and which parts of social interaction we considered smart are in fact just ... human slop.

> > these AI assistants will show how chess coaches function and what they really do.

> Wait, but how? What do you mean the AI assistants will show 'how chess coaches function and what they really do'?
> Chess coaching was invented by humans; how can AI show 'how chess coaches function' and 'what they really do'?
> This sentence sounds as though AI invented chess coaching (or created humans?? scary twist lol).

Just as before: when the market is saturated with chess talking bots, the true educators will rise above it all.

In my opinion, the greatest problem with these AI-powered tools for learning chess (aside from what was already mentioned in this article) is the false/misleading advertising: they claim their website/tool can help you improve, but overall it is just a complete waste of time. If you are a beginner it might be useful, but any player rated 1800+ will quickly notice the advice is repetitive and overall very bland. A couple of examples from recent trials I did with some free-to-use AI-powered tools: a clear mouse slip happens in the game, and the AI fails to notice it, claiming it was "poor judgment of piece coordination", i.e. BS; the correct term is a blunder. These websites claim to provide insightful analysis of your games, yet they fail to gather information over larger samples (we are better off using lichess tools for this kind of thing). Also, the chess analysers used are not top quality: they fail to see that some blunders are deep miscalculations by the player, and the suggested moves aren't human ideas; instead they look much more like Stockfish telling you that you are dumb.
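
The "blunder vs. word salad" complaint is easy to ground: a sane analyser classifies moves by centipawn loss before any language layer gets involved. A minimal sketch; the thresholds follow a common convention rather than an official standard, and the mouse-slip check is a purely hypothetical heuristic:

```python
def classify(cp_loss: int) -> str:
    """Label a move by centipawns lost versus the engine's best move.
    Thresholds roughly follow the convention used by most analysis
    sites: >=300 blunder, >=100 mistake, >=50 inaccuracy."""
    if cp_loss >= 300:
        return "blunder"
    if cp_loss >= 100:
        return "mistake"
    if cp_loss >= 50:
        return "inaccuracy"
    return "ok"

def looks_like_mouse_slip(played: str, intended: str) -> bool:
    """Hypothetical heuristic: the same piece landing one square away
    from the intended destination suggests a slip, not 'poor judgment
    of piece coordination'. Moves are coordinate strings like 'e2e4'."""
    if played[:2] != intended[:2]:       # different origin square
        return False
    file_gap = abs(ord(played[2]) - ord(intended[2]))
    rank_gap = abs(int(played[3]) - int(intended[3]))
    return max(file_gap, rank_gap) == 1  # landed one square off

print(classify(350))                          # blunder
print(looks_like_mouse_slip("e2e3", "e2e4"))  # True
```

Nothing here needs an LLM; the language layer should only phrase labels the analyser has already computed.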

Lucky Chessagine wasn't mentioned here, phewww haha. Yeah, AI in chess is a work in progress, but I try to tell people the truth. And of course, the people making false claims, like that Reddit ad, are a perfect example of not knowing what an LLM is and how useful it can be if used properly. AI is useful, though, but only to devs who actually know both AI and chess well. Great blog btw, love it.

The problem with AI? If it doesn't know something, it just makes up information. It's unreliable, and sometimes downright dangerous. And it seems like most of the blogs on lichess are just ChatGPT now.

@TotalNoob69 said in #4:

> A lot of the "cool smart" behavior we associate with intelligent people is actually just language: recycled talking points, understanding of memes and in-group speak, making rare connections to concepts close to the current subject only in the particular bubble of the interlocutor.

Wow! Never thought about it, you nailed it!

Very happy someone is speaking up about this. It seems all AI is able to do is a thinly veiled scam.

The machine makes no slop; it's a lazy human who publishes its output as something other than what it actually is, thus creating slop. AI doesn't understand what slop is, therefore it cannot generate it. Stock photos, for example, are not generated by machines, yet they are slop at its sloppiest: a human with vague requirements asking an artist to generate an image they wouldn't understand the use for.

What people don't understand about the current implementation of AI is that it is not thinking, but dreaming. Just as dreams seem to make sense but are in fact past perceptions and thoughts recycled to optimize neural storage, the output of LLMs and diffusion models preserves structure and fills in the gaps with the information you provide. And just like dreams, this output can be very powerful or complete garbage.

Imagine someone telling you about their dreams and you labeling it as slop because it doesn't conform to the best movie making practices and storytelling techniques. It's not their fault you thought their dreams should entertain you like Hollywood blockbusters.

It's Rain Man all over again: capable of superhuman feats and yet incapable of grasping their value. What we have is artificial autism, not intelligence. Once you understand that, you can use AI and not feel offended by its very existence. Stockfish is like that with chess: it beats everybody and is not aware of it. And so is AlphaZero. The distinction between human and machine comes with understanding, and with the luxury we have of redefining what that means every time a machine beats us to it. And then of slapping labels on things.
