Sharan Maiya

I'm a second-year PhD student in the Language Technology Lab at the University of Cambridge, where I work on aligning AI assistants. I'm supervised by Anna Korhonen.

I care very deeply that AI goes well, and I think it can go very very well. Fast, raw intelligence can accelerate science and help address global challenges, including climate change, and this is part of what I work on with my co-supervisor Ramit Debnath at the Collective Intelligence & Design Group.

But intelligence isn't all that, and I think these artifacts we're creating can also help us cultivate wisdom in a way we haven't been able to before. Part of that is teaching AI assistants what it means to be profoundly good: I started working on character training during MATS 7.0 with Evan Hubinger, and I continue that work with Nathan Lambert. Check out our recent paper! This is a very nascent area of research, so get in touch if you'd like to get involved.

Among my other interests is making AI systems more robust and trustworthy, particularly through interpretability research. One way I'm pursuing this is at Cadenza Labs, where we work on AI truthfulness.

With the rest of my time I like to sit at new tables (restaurants), sit on the floor (meditate), sit on a train (home to Scotland and better tap water), or sit at a stand (a music one, while holding a saxophone), so reach out to talk about any of those (or about research, I guess). You can do that at sm2783[at]cam[dot]ac[dot]uk or through the links below.