Entry tags:
A creative block on Artificial Intelligence research projects
I have a decent background in artificial intelligence in terms of both education and experience. I have conceived and played a leading role in AI research projects in a wide variety of problem domains. After a few days' thinking, I can normally come up with an original, effective approach, then show that it works.
At my new employer, last week they put out an opportunity for staff like me to propose a risky, different, modest AI project that would be funded internally. Much of what they are looking for matches my experience well, and doing a good job of this kind of project is probably a significant aspect of what they are paying me for. My career here will be very much what I make it.
I find myself oddly stymied. I am used to looking for projects that fit particular approaches; this opportunity is more open-ended, though I should note that we have impressive supercomputers at hand. I am also used to facing specific problems. With a problem and an existing toolkit, there is more to fertilize the crystallization of ideas.
Further, at work, there are many experts in all kinds of fields. In previous jobs, no colleague knew much more than I did about anything I proposed, so it made sense for me to take the lead, as I was willing and able to buckle down and study when necessary. Now, it feels weird to propose something in what would probably be a colleague's field. Indeed, for many interesting problems, there are already other teams working on them.
If I knew a bit more detail about problems that remain unsolved and what relevant pieces we have in place at work, maybe I could think about those and come up with something. However, my employer is large, so this feels unlikely to happen before the proposal deadline, especially as I should limit the time for which I am distracted from work that is already funded. I also don't want to propose some relevant further work from a project at a previous job: if nothing else, the legal and moral aspects are more than I feel ready to face right now.
Perhaps I will be lucky enough to remember some other problem soon and be able to look for something relevant at work, where there is some particular person whom I can ask to collaborate, e.g., because they have a relevant simulation model.
no subject
If you had a kid without emotion and appetite and intent, how would you begin to teach them anything?
no subject
I'm not aware of much interest or progress in AIs setting their own goals, except of course as intermediates to, however implicitly, extrinsic goals. One could build systems that are simply curious, even with a limited ability to hypothesize and then design experiments accordingly, but, in general, I suspect that the state of the field would underwhelm you from a philosophical point of view. Personally I'm intrigued by and sympathetic to Douglas Hofstadter's suggestion, but I don't know what to do with it.
no subject
I like Hofstadter and also predictive layers; I wonder how they mesh. Does one construct higher predictive layers out of more abstract analogies?
no subject
Abstractly (ha) it makes sense that the reason why one thing is analogous to another, the essence of the analogy, may live in a higher layer. Recognizing that it matches a new concrete thing feels like a tough problem, though perhaps not categorically harder than, say, image classification. The predictive aspect further challenges the matching: the new instance may not yet have all the parts, with the missing piece of the partially fitting analogy becoming a prediction. I don't yet have enough experience to know what deep neural networks are capable of.