
Go still isn't solved (neither is chess), we just have a machine good at knowing which parts of the search space are worth checking.


I think achieving superiority over humans is practically solving the problem though. Solving chess or go by going through a complete search space seems more like a hardware/computational goal than a practical ml/ai goal.


It all hinges on your definition of "solved".

"Solved" in AI/game theory has a very strict definition. It indicates that you have formally proven that one of the players can guarantee an outcome from the very beginning of the game.

The less-strict definition being thrown around here in the comments is more like "This AI can always beat this human because it is much stronger."


I think most people discussing this mean the latter, less pedantic option. I mean, that's the spirit of AI. Can we make it think like a human, or even more so? We are the yardstick.


That is a silly misuse of the term, and pointing that out is not being pedantic. A problem isn't solved just because you beat the existing solution (i.e. human players). As long as there is the potential for a better solution that can beat your solution, there is work to be done.


You don't have to go through the complete search space if it turns out optimal strategies are sparse. What do I mean by that? Take a second-price auction: the dominant strategy here is to always bid your true value. Meanwhile, the search space for this would be any real number in between 0 and your true value. What does this mean for computational games like Chess or Go? It may mean while the search space is exponential, there may exist computationally trivial strategies that work. I would compare this to Kolmogorov complexity, except instead of having a program as your output, it's a strategy.
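The second-price auction point can be made concrete with a small simulation. This is a minimal sketch (the function names and the 10,000-scenario check are my own, not anything from the thread): against any rival bid, bidding your true value never does worse than shading down or bidding up, even though the bid space is a continuum.

```python
import random

# Sketch of why a huge search space can have a trivial solution: in a
# second-price (Vickrey) auction, bidding your true value is a dominant
# strategy, so the "strategy" is far simpler than the space of bids.

def utility(my_bid, my_value, rival_bid):
    # The winner pays the second-highest bid (here, the rival's bid).
    return my_value - rival_bid if my_bid > rival_bid else 0.0

random.seed(0)
my_value = 10.0
for _ in range(10_000):
    rival = random.uniform(0, 20)
    truthful = utility(my_value, my_value, rival)
    shaded = utility(random.uniform(0, my_value), my_value, rival)
    over = utility(random.uniform(my_value, 20), my_value, rival)
    # Truthful bidding is never beaten by a one-off deviation.
    assert truthful >= shaded and truthful >= over

print("truthful bidding never did worse in 10,000 random scenarios")
```

The analogy to Go would be that an exponentially large game tree does not rule out a compactly describable optimal strategy, in the spirit of Kolmogorov complexity.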


Any substandard statistical model fitted by a simple computer program is superior to what an unaided human could achieve with pen and paper, but few such models can claim to have practically "solved" the problem just because they beat the crude heuristics proposed by humans, who are not good calculating machines.

An algorithm can't claim to have "solved" Go, when future versions of the algorithm are expected to achieve vastly superior results, never mind any formal mathematical proof of optimality. What it has demonstrated is that humans aren't very good at Go. Given that Go involves estimating Nash equilibrium responses in a perfect information game with a finite, knowable but extremely large range of possible outcomes, it's perhaps not surprising that Go is the sort of problem that humans are not very good at trying to solve and that computers can incrementally improve on our attempted solutions. Perhaps the more interesting finding from AlphaGoZero is that humans were so bad at Go that not training on human games and theory actually improved performance.


We've just created a tool people can use to play Go better than a person without the tool. Until something emerges from the rain forest, or comes down from space, that can throw down and win, I'd say humans are still the best Go players in the known universe.


That's like when the whole class fails a test, but the prof. grades on a curve. Someone gets an A, but not really. edit: some grammar.


"Solved", in this case, means "computers can play the game at levels no human can beat."


That's not the normal meaning of solved in regards to game theory.


I believe that with respect to game theory, solving a game like Go would require finding a strategy that obeys the one-shot deviation principle [1]. The result would be rather boring to watch, however, because every game played under this strategy would end in either a draw or a win determined solely by which player moves first.

[1] https://en.wikipedia.org/wiki/One-shot_deviation_principle
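To make the one-shot deviation check concrete, here is a toy illustration on a game small enough to solve outright (take 1 or 2 stones; whoever takes the last stone wins). Go is far too large for this, so treat the names here (`mover_wins`, `strategy`) as illustrative, not as anything standard.

```python
N = 20  # largest pile size we solve

def mover_wins(pile, strat):
    """Play out the game from `pile` under `strat`; return True if the
    player who moves first from `pile` takes the last stone."""
    moves = 0
    while pile > 0:
        pile -= strat[pile]
        moves += 1
    return moves % 2 == 1  # odd move count: the first mover moved last

# Backward induction: win[p] is True iff the player to move wins from p.
win, strategy = {0: False}, {}
for p in range(1, N + 1):
    legal = [m for m in (1, 2) if m <= p]
    winning = [m for m in legal if not win[p - m]]
    strategy[p] = winning[0] if winning else legal[0]
    win[p] = bool(winning)

# One-shot deviation check: deviating at a single position p (and
# following `strategy` everywhere else) never turns a loss into a win.
for p in range(1, N + 1):
    for m in (1, 2):
        if m > p or m == strategy[p]:
            continue
        deviated = {**strategy, p: m}
        assert mover_wins(p, strategy) or not mover_wins(p, deviated)

print("no profitable one-shot deviation up to pile size", N)
```

Since the pile only shrinks, a position is never revisited, so modifying the strategy at one pile size really is a one-shot deviation.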


But it is what "solved" means in deep learning. Common terms very often acquire different technical meanings in different fields. Or even technical terms! Whether a single hydrogen atom is a "molecule" depends on whether you're talking to a physicist or a chemist. And "functor" means something very different in Java programming patterns than it does in math.


I have to concur with the other poster here. You're not using the term in the usual way. Traditionally a game is called solved when its search space has been searched exhaustively, or there is some other analytic solution that allows you to determine who wins and who loses from every position. Tic-tac-toe is solved, for instance.

https://en.wikipedia.org/wiki/Solved_game
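For a sense of what "solved" means here, tic-tac-toe can be settled by exhaustive minimax in a few lines. This is a minimal sketch (the `value` function and line table are my own): it proves the value of the starting position, a draw under perfect play, rather than merely beating some opponent.

```python
from functools import lru_cache

# Winning lines of a 3x3 board stored as a 9-character string.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X can force a win, -1 if O can, 0 if perfect play draws."""
    w = winner(board)
    if w == "X": return 1
    if w == "O": return -1
    if "." not in board: return 0
    nxt = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i+1:], nxt)
            for i, ch in enumerate(board) if ch == "."]
    return max(vals) if player == "X" else min(vals)

print(value("." * 9, "X"))  # 0: tic-tac-toe is a draw with perfect play
```

The same idea applied to Go would require visiting an astronomically larger state space, which is why Go remains unsolved in this sense.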


Regardless, we need a term for “computers can play the game at levels no human can beat.”


Well, don't use one that has an existing meaning in that exact context.


Superhuman


This is the performance vs understanding dialectic. A bunch of humans built a machine that is superior at chess, but that machine can't teach humans what it knows.


Humans can and are absorbing some of what the machine has demonstrated.

I think Kasparov has it right when he says that the best player isn't a human or a machine, but a human using a machine as a tool. The machine can help the player optimize and reduce mistakes, but machines don't yet know how to ask questions and explore in the same way. Maybe they never will.


There's a name for this approach, they call human-AI teams "centaurs". It's a fascinating concept. I am deeply curious if eventually that will be outstripped by pure AI too. I believe so.


The optimal strategy for human-AI teams has been really close to "defer to the computer for every move" for a while now. They're only interesting because they're adversarial.


Chess computers teach by sparring rather than lecture. Humans still learn.


Alpha *, at least, also learns by sparring, so there is a nice symmetry there.


Reminds me how sometimes geniuses cannot translate the way their mind works to non-geniuses. It's such an implicit talent that it's not even .. "reified" in their upper brain. It all happens in cache.




