Hell yeah
Oh wow, you work in safety now? That's great! What are you up to?
freelance writing mostly! I've written for FLI, Bluedot and a few other places. also doing an AI governance research fellowship in London next month. trying to decide between direct policy work and sticking with this whole 'trying to communicate AI risks to a broad audience' gig
what did you do before?
Fellow AI safety lurker here. I've reached the point where I'm like "I may not know as much as the people working in the field, but I know I'm not wrong to think there's a problem everyone should know about, and I know enough to explain it, who the major players are, and what our options are". And I feel like that's valuable, because the people who are smarter than me and work on AI safety full time are, um, a little busy at the moment. Hearing Nathan Labenz say he can't keep up with the rate of new developments, I'm like "Ok, I'm not going to be able to keep up either, in my spare time, best just accept it and do the best I can". And doing the best I can and sharing information seems super valuable - the state of knowledge of a lot of people is still "Is it LLM, or LMM, or something, that the kids are using these days?"*, and just explaining the very basic facts of the situation, keeping abreast of changes, and helping others do the same is quite useful, in my opinion.
Your experience of just asking the smarter people things is a key takeaway for me; I've been reluctant to waste their time. Subscribed, and will be very interested to follow how your work goes.
*This is not an exaggeration. It's based on a real-life situation from late 2024, where it became clear to me that not one of a group of 20+ cybersecurity professionals I work with, who were discussing a magazine article pontificating about the effects of AI on the job market, knew what "LLM" stands for - nor did the supposed expert who had written the article they were all nodding along to, who kept mixing up LLM and LMM.
Just discovered you--great post! Can you say more about what your full-time work in AI safety is or link to where you explain this elsewhere? I'm very curious as I am still searching for ways that I can work in AI safety full-time. Thanks!
my only comment as a fellow ai safety lurker is that Eliezer might be right about ai safety but is generally quite unpleasant to interact with, and you shouldn't take his opinions to heart. glad to see you working in ai safety full time: it's a question i keep asking myself
I don't think this! I think his comment was actually quite kind and was echoing the same sentiment I'd put out. my point is that if you self-deprecate in public, you can't be upset when people take you at your word