or Artificial Intelligence Bull Shitake
There are a lot of claims being made, and as this article points out, not many of them are supported by strong evidence/math.
In Rebooting AI, Ernie Davis and I made six recommendations, each geared toward helping readers – and journalists, and researchers – assess each new result, asking the same set of questions, ideally in a limitations section in the discussion of each paper:
Stripping away the rhetoric, what does the AI system actually do? Does a “reading system” really read?
How general is the result? (Could a driving system that works in Phoenix work as well in Mumbai? Would a Rubik’s cube system work at opening bottles? How much retraining would be required?)
Is there a demo where interested readers can probe for themselves?
If an AI system is allegedly better than humans, then which humans, and how much better? (A comparison with low-wage workers who have little incentive to do well may not truly probe the limits of human ability.)
How far does succeeding at the particular task actually take us toward building genuine AI?
How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? AlphaGo works fine on a 19×19 board, but would need to be retrained to play on a rectangular board; the lack of transfer is telling.