Mark T. B. Carroll ([personal profile] mtbc) wrote, 2021-07-20 07:51 pm

A creative block on Artificial Intelligence research projects

I have a decent background in artificial intelligence in terms of both education and experience. I have conceived and played a leading role in AI research projects in a wide variety of problem domains. After a few days' thinking, I can normally come up with an original, effective approach, then show that it works.

Last week, my new employer put out an opportunity for staff like me to propose a risky, different, modest AI project that would be funded internally. Much of what they are looking for matches my experience well, and doing a good job of this kind of project is probably a significant part of what they are paying me for. My career here will be very much what I make it.

I find myself oddly stymied. I am used to looking for projects that fit particular approaches, whereas this opportunity is more open, though I should note that we have impressive supercomputers at hand. I am also used to facing specific problems. With a concrete problem and an existing toolkit in hand, ideas crystallize much more readily.

Further, at work there are many experts in all kinds of fields. In previous jobs, no colleague knew much more than I did about almost anything I proposed, so it made sense for me to take the lead, since I was willing and able to buckle down and study when necessary. Now it feels odd to propose something that would probably fall within a colleague's field. Indeed, other teams are already working on many of the interesting problems.

If I knew a bit more about which problems remain unsolved and what relevant pieces we already have in place at work, maybe I could think about those and come up with something. However, my employer is large, so this feels unlikely to happen before the proposal deadline, especially as I should limit the time for which I am distracted from work that is already funded. I also don't want to propose further work arising from a project at a previous job: if nothing else, the legal and moral aspects are more than I feel ready to face right now.

Perhaps I will be lucky enough to remember some other problem soon and then find something relevant at work: some particular person I can ask to collaborate because, say, they have a relevant simulation model.

[personal profile] aldabra 2021-07-21 11:18 am (UTC)
ISTM (I stopped following AI some time ago and may be entirely wrong) that there is a great chunk missing around emotion and appetite and intent. I can see how a paperclip maximiser might appropriate the world to turn it into paperclips, but I don't see how you get anything which is meaningfully a hunger for power for its own sake. (Probably this is a good thing and we don't want it.) How good are AIs at setting their own independent goals, rather than coming up with heuristics to achieve externally-defined goals? Why do they get out of bed in the morning?

If you had a kid without emotion and appetite and intent, how would you begin to teach them anything?

[personal profile] aldabra 2021-07-22 09:19 am (UTC)
Mmm. I've been to seminars about AI in medicine, and they have massive, massive problems with the medical regulators because they can't explain how things work, and even greater problems with things that might keep learning after they've been deployed.

I like Hofstadter and also predictive layers; I wonder how they mesh. Does one construct higher predictive layers out of more abstract analogies?