[personal profile] mtbc
I have a decent background in artificial intelligence in terms of both education and experience. I have conceived and played a leading role in AI research projects in a wide variety of problem domains. After a few days' thinking, I can normally come up with an original, effective approach, then show that it works.

Last week, my new employer put out an opportunity for staff like me to propose a risky, unconventional, modest AI project that would be funded internally. Much of what they are looking for matches my experience well, and doing a good job of this kind of project is probably a significant part of what they are paying me for. My career here will be very much what I make it.

I find myself oddly stymied. I am used to looking for projects that fit particular approaches; this opportunity is more open, though I should note that we have impressive supercomputers at hand. I am also used to facing specific problems. With a concrete problem and an existing toolkit, there is more to seed the crystallization of ideas.

Further, at work there are many experts in all kinds of fields. In previous jobs, no colleague knew much more than I did about anything I proposed, so it made sense for me to take the lead, as I was willing and able to buckle down and study when necessary. Now it feels odd to propose something in what would probably be a colleague's field. Indeed, other teams are already working on many of the interesting problems.

If I knew a bit more about which problems remain unsolved and what relevant pieces we already have in place at work, maybe I could think about those and come up with something. However, my employer is large, so this seems unlikely to happen before the proposal deadline, especially as I should limit how long I am distracted from work that is already funded. I also don't want to propose further work arising from a project at a previous job: if nothing else, the legal and moral aspects are more than I feel ready to face right now.

Perhaps I will be lucky enough to remember some other problem soon and be able to look for something relevant at work: some particular person whom I can ask to collaborate, e.g., because they have a relevant simulation model.

Date: 2021-07-21 11:18 am (UTC)
From: [personal profile] aldabra
ISTM (I stopped following AI some time ago and may be entirely wrong) that there is a great chunk missing around emotion, appetite, and intent. I can see how a paperclip maximiser might appropriate the world to turn it into paperclips, but I don't see how you get anything that is meaningfully a hunger for power for its own sake. (Probably this is a good thing and we don't want it.) How good are AIs at setting their own independent goals, rather than coming up with heuristics to achieve externally-defined goals? Why do they get out of bed in the morning?

If you had a kid without emotion and appetite and intent, how would you begin to teach them anything?
