I just love Anthropic for the research they do and share with us on AI safety. I feel like they are the real torchbearers when it comes to ensuring that AGI remains on the good side of humanity. Guardrails need to be put in place before things go awry. Thanks for putting this together, Jonas!
Your look at rogue AI agents really opened my eyes. I appreciate how you apply lessons from high-frequency trading to the problem. Critical stuff for anyone in AI development!
This article is wonderfully written, and I loved the comparison with the use of algorithms in HFT. It has nudged me to do a bit more research on misalignment, something I had been taking for granted until now!
Fascinating piece. As someone heavily invested in AI this is super interesting 🤓
Thanks Chris!