Mark T. B. Carroll ([personal profile] mtbc) wrote 2021-07-21 11:38 pm (UTC)

I'm a big fan of the classic symbolic AI approaches that explicitly represent various things. The modern trend is very much toward connectionist deep learning and the like, for example the networks I mention in my follow-up entry. Tomorrow I have a meeting about a project that applies PyTorch to predictive simulation, along the lines of https://arxiv.org/abs/2010.03409, which is cool: I had no idea that we could use neural networks to derive models of physical systems. Beyond that, neuromorphic computing, where they actually run timed pulses along simulated axons, now shows promise. It's all somewhat black-boxy, though: there's a temptation to see a problem, throw a Learning Thing at it, observe pleasing results arising from its mysterious means, and come away with a possibly useful thing but not much more understanding than before. That view's highly personal, though.
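
As a toy illustration of that last idea (entirely my own sketch, nothing like the mesh-based graph networks in the paper above): a small PyTorch network can learn a one-step surrogate of a damped oscillator purely from simulated data, and you can then roll it forward as if it were the physics model. The dynamics, network shape and constants here are made up just for the example.

    # Toy sketch: learn a surrogate for one integration step of a
    # damped oscillator, i.e. derive a "physics model" from data alone.
    # (Illustrative only; not the mesh-based graph-network approach.)
    import torch
    import torch.nn as nn

    def step(state, dt=0.05, k=1.0, c=0.1):
        # Ground-truth dynamics: x' = v, v' = -k x - c v.
        x, v = state[..., 0], state[..., 1]
        return torch.stack([x + dt * v, v + dt * (-k * x - c * v)], dim=-1)

    torch.manual_seed(0)
    states = torch.rand(4096, 2) * 4 - 2   # random (x, v) starting points
    targets = step(states)                  # where the true physics takes them

    model = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for epoch in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(states), targets)
        loss.backward()
        opt.step()

    # Roll the learned model forward alongside the true trajectory.
    s_true = s_pred = torch.tensor([[1.5, 0.0]])
    for _ in range(100):
        s_true, s_pred = step(s_true), model(s_pred).detach()
    print(s_true, s_pred)

The point being: nothing in the trained network tells you it has learnt a damped oscillator; you only find out by comparing its rollouts against the real thing, which is exactly the black-boxiness I mean.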

I'm not aware of much interest or progress in AIs setting their own goals, except of course as intermediate steps toward, however implicitly, extrinsic goals. One could teach systems to be simply curious, even giving them a limited ability to hypothesize and then design experiments accordingly, but in general I suspect that the state of the field would underwhelm you from a philosophical point of view, however impressive it is as a look at a cool thing. Personally I'm intrigued by and sympathetic to Douglas Hofstadter's Analogy as the Core of Cognition suggestion, but I don't know what to do with it.
