Register by October 17 to Secure Your Spot!
| Registration Type | Member Price | Non-Member Price |
| --- | --- | --- |
| Early Bird Registration (Sept. 11-Oct. 3) | $750 | $850 |
| General Registration (Oct. 4-Oct. 17) | $850 | $950 |
Not a member? We'd love to have you join us for this event and become part of the Chorus America community! Visit our membership page to learn more, and feel free to contact us with any questions at membership@chorusamerica.org.
Think you should be logged in to a member account? Make sure the email address you used to log in is the same as the one that appears on your membership information. Have questions? Email us at membership@chorusamerica.org.
| Registration Type | Price |
| --- | --- |
| Individual Session | $30 each |
| All Four (4) Sessions | $110 |

*Replays with captioning will remain available for registrants to watch until November 1 at 11:59pm EDT.
Member Professional Development Days are specially designed for Chorus America members. If you're not currently a member, we'd love to welcome you to this event, and into the Chorus America community! Visit our membership page to learn more about becoming a member of Chorus America, and please don't hesitate to reach out to us with any questions at membership@chorusamerica.org.
An Interview with Beth Kanter and Maggie Vo
Trained as a classical flutist, Beth Kanter started working in orchestra management before becoming a technology consultant and one of the very first nonprofit bloggers in the early days of the internet. She now works as a consultant, author, and thought leader in digital transformation and wellbeing in the nonprofit workplace. She’s thinking and writing about AI now, she said, “because it has been my career to look at the emerging tech for the social sector and try to be a translator and try to encourage thoughtful adoption.”
Maggie Vo’s varied career path includes academic research, a stint as a youth librarian, and professional choral singing. When she started playing around with early AI technologies like ChatGPT, she quickly became both fascinated by AI’s power and concerned about potential risks—including the fact that she was able to break models in ways she didn’t want other people to be able to replicate. She’s currently the head of technical education and enablement at AI research and safety company Anthropic, developer of the AI assistant Claude, where she feels an alignment with the company’s focus on the “stewardship of safe and fruitful AI futures for humanity.”
In conversation at the 2024 Chorus America Conference, Kanter and Vo will share how AI technology can benefit choral organizations of all sizes and capacity levels as part of the plenary workshop Harmonizing AI. Before the plenary, they spoke with Chorus America president and CEO Catherine Dehoney about the best starting points for organizations new to AI, managing risk, and how AI can help choruses have a greater impact on the people and communities they serve.
CD: Beth, in your book The Smart Nonprofit, you and Allison Fine write about the idea of nonprofits moving away from busyness and doing all the time to more thinking, and planning, and dreaming. How do you see AI as supporting this utopia?
Beth Kanter: If we in the nonprofit sector are able to use this technology well and ethically, we believe strongly that it will create the dividend of time. This technology is able to not just automate things, but to actually work with us as a thought partner, in addition to being our intern.
And this should free up nonprofit staff time from some of this granular, overwhelming work. Busyness, right? The hamster wheel. And refocus that time on mission-driven tasks which are really about human relationships.
Let’s think about major gift officers right now. If they haven't embraced this technology, they may be spending 10 hours of their week doing what we call desk work, which could be something like researching prospects. That's a use case where the technology can help, summarizing that research. And let's just say it's done well, and they're using the tools ethically, and it frees up five hours of their time per week. How are they going to repurpose that time? Take the prospect out to lunch. Call the donor on the phone. When do we ever do that?
So we have this opportunity of a dividend of time in the nonprofit sector. But we also need leadership behind this, and the capacity to lead us through this transformation and the adoption of this technology.
CD: And to stop long enough to learn enough to lead it, right?
Maggie, I was looking at Anthropic’s description as an AI research and safety company that builds “reliable, interpretable, and steerable” AI systems. That also gets at this question of ethical use. Could you share a little bit about what each of these three terms means?
Maggie Vo: The whole reliability, interpretability, and steerability thing is centered around our mission of trust and how you can trust AI and trust the companies behind them. Reliability means that you can trust the actual model. It means that you have a very low hallucination rate, which is when the model makes up answers when it shouldn't, and also a low jailbreakability. Jailbreaking is when you can creatively prompt the model to do harmful tasks, such as producing instructions for how to build a bomb or outputting racist content.
Interpretability is going to be increasingly important as models get bigger because it's about understanding how the models think on a fundamental, really transparent level. Anthropic, I'm very proud to say, leads the industry in this area, because we've always been a research company. Since our founding, we've been doing research and publishing findings on novel interpretability methodology that has pushed the frontier of how to understand these models.
And then, lastly, steerability is basically the principle of making the models extremely easy to control. So if you say you want the model to do something, it should do it. You can trust the model to follow your instructions and you can trust that the model works on your behalf. That’s the key of it there: that there's no nefarious underpinning, and that when you will it to do something, it doesn't do something else instead. Steerability is all about making sure that the model really is your tool.
These three aspects together build the foundations of what we believe is a trustworthy future for AI, where you can believe—and not even just believe, but verify for yourself—that the models are actually aligned to human ethics and values, and your organization's way of working.
CD: That is something I had not thought about before: looking at the companies behind the AI to make sure they're in line with your values as well. That's such an important point and I think it can help reduce our worrying about the myths and fears around AI. Speaking of, do you have a favorite myth that we can debunk in this interview? Or are we really all going to be run by robot overlords?
BK: I hear this a lot working with nonprofit folks: “The robots are going to eat my job and I’m going to be unemployed.” There's a lot of fear around that. It’s not going to eat your job. It's going to change the skills within your job or certain parts of it—not completely automate your whole job. Leaders and organizations need to focus on calming people's fears around this, having conversations about that at the beginning. That's a first step.
The second piece is that there’s going to be a shift to focusing more on human skills. The most in-demand skills will be interpersonal skills, communication skills, and leading with empathy. Creative problem framing is a big one. AI tools are better at problem solutions, and they'll keep getting better at that, but you have to frame how you're going to ask the questions. Critical thinking. All of these are things that we want to think about as we're building our staff or our volunteers and giving people training and support.
It starts at redesigning our tasks, then redesigning our workflows, then redesigning our jobs and departments and structures and the way our organizations work.
MV: I always progress from talking to people about various task-oriented things to the whole mindset of how to orient their work structure around this new paradigm, because it is a new paradigm.
CD: So where do you start? What is the entry point, then?
BK: Tasks are the entry point. And then I think there's a continuum of risk when it comes to ethics, and it begins with the lowest amount of risk. It's a fallacy to think that we've eliminated all the risk—there's going to be some amount of risk.
The lowest risk is individual use for tasks like generative AI for writing or analysis. And that's a great place to start. Then there's the mid-size risk. I classify this as when we're using it internally to support our meetings, our collaboration, HR, finances, all of those internal operations, and here I'm talking about artificial intelligence more broadly.
The highest amount of risk is when you're putting it on the front line to interact with your constituents, whether that’s your donors or the people you serve. And it depends on the type of nonprofit. A nonprofit delivering mental health services for suicide prevention for youth wanting to use this technology—that kind of a project is set up to be really high risk. That's not a place to start. Start with the smallest Legos, which are tasks, and then start to build your castles.
MV: The phrase in the industry is “human in the loop.” You should always keep a human in the loop across your processes. For the lowest risk use cases, you are the human that's always in the loop because you're asking it to do tasks on your behalf.
At the end of the day, whatever you're doing, you should always read it over. The way I like to think about it is that AI can write your first draft, but you should always be the one to hit publish.
BK: I've been advising people not to put any personally identifiable information into a public model. You don't want to upload your employee’s performance evaluation with your organization's name and their email and their social security number and all that stuff. You really have to have some policy about what data you can share in your prompts. And what shouldn't you share? Another piece is knowing what counts as confidential information, and how to handle it if it is confidential.
Those are the three guidelines: Don't fall asleep at the wheel. Always check it. Consider privacy and confidentiality.
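To make that guidance concrete, here is a minimal sketch of what a pre-prompt scrub can look like. The patterns are illustrative assumptions, not a complete PII list, and they don't replace the kind of data policy Kanter describes:

```python
# Minimal sketch of a pre-prompt scrub for personally identifiable information.
# The patterns below are illustrative assumptions, not a complete PII list;
# a real data policy also needs human review.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US Social Security numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),   # US phone numbers
]

def scrub(text: str) -> str:
    """Replace obvious PII with placeholders before the text goes into a prompt."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Reach Jane at jane.doe@example.org or 555-867-5309, SSN 123-45-6789."))
# -> Reach Jane at [EMAIL] or [PHONE], SSN [SSN].
```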
MV: AI organizations have really different policies around how they protect and use data when you pass it into their models. At Anthropic, we have such strict rules and it's all outlined in very plain language in our user agreements. Not only have we never trained models on any user data, period, but we never even see it—like, I could not even look at any data if I wanted to.
That means that at least where Claude is concerned, if you were ever to do something like put your performance review content into our models—I mean, we do that ourselves! I'm currently putting performance review content into Claude so it can help draft things for me, because I trust our data security. I understand very well how secure our backend is and the fact that we don't keep prompts in storage beyond a standard 30 days to ensure that we have time to investigate anything our system flags as blatantly harmful. There are a lot of safeguards we put in place that make me much more comfortable with using our models. Different companies deal with data differently.
CD: In your experience, where are nonprofits starting with AI? What is the task?
MV: Writing everything: ad copy, marketing copy, solicitation emails, proposals. Anything that you need to do at scale, that you could just give it some examples of: “Here's some previous versions I've written that are really good and please make this draft better.” And allowing organizations to get more of a global reach, because at least Claude has the ability to write in over 200 languages. So you're able to start getting a first draft of some things that you might not be able to otherwise. Hiring someone to fact check your final copy in Japanese is a very different thing than hiring someone to write it for you in terms of how much it costs your organization.
Another thing that is really helpful to a lot of organizations is the ability to free up mental space when it comes to meetings. None of the meetings at Anthropic have any notetakers. We just transcribe a meeting and record it. And then we ask Claude to make a memo email out of the meeting, including next steps and action items and all that stuff. It allows you to free up your human brain space to participate and be present in the conversation.
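For readers who want to picture that workflow, here is a minimal sketch using Anthropic's Python SDK. The model name, file name, and prompt wording are illustrative assumptions, not a description of Anthropic's internal setup:

```python
# Minimal sketch: turning a meeting transcript into a memo with action items.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set;
# the model name, file name, and prompt are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("meeting_transcript.txt") as f:
    transcript = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # any current Claude model works here
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Turn this meeting transcript into a memo email with a short "
            "summary, decisions made, and a bulleted list of action items "
            "with owners.\n\n" + transcript
        ),
    }],
)

print(message.content[0].text)  # the drafted memo
```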
BK: I see three work tasks. The writing tasks, like Maggie said. Maybe it’s helping me write first drafts. Maybe it’s helping me brainstorm titles for my blog post or subject lines for my email. Maybe I'm stuck on the last sentence of a paragraph, and I'm going to ask it to rewrite that sentence a whole bunch of times in different styles.
Then there are the analysis tasks, the summary tasks. There's a lot of stuff we have to keep up with in our work. AI can translate a 100-page academic study to the sixth-grade level so you can easily grasp it. That's big for a lot of executive directors who want to keep up in their field.
And then there's survey analysis: analyzing the open-ended comments. A lot of people spend hours on open-ended data from their feedback forms. Claude is really good at that—you can put in comments and do a theme analysis or sentiment analysis, and it helps people get off that hamster wheel of data.
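A theme-and-sentiment pass over survey comments looks similar. This is a sketch under the same assumptions (illustrative model name and prompt, with comments pasted in from your own export):

```python
# Minimal sketch: theming open-ended survey comments.
# The comments, model name, and prompt are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()

comments = [
    "Loved the concert, but parking was a nightmare.",
    "The new venue felt so much more welcoming.",
    # ...the rest of your survey export
]

message = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=2048,
    system="You are helping a chorus analyze audience survey feedback.",
    messages=[{
        "role": "user",
        "content": (
            "Group the following comments into recurring themes. For each "
            "theme, give a name, an overall sentiment (positive, negative, "
            "or mixed), and one representative quote.\n\n"
            + "\n".join(comments)
        ),
    }],
)

print(message.content[0].text)  # themes with sentiment and quotes
```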
CD: We’ve been talking about the organizational use of AI. I know the topic of AI and music creation would be a whole other session and probably be a whole other interview as well! But for the artistic people in the room, do you have a few thoughts you could share about using AI creatively and how we might expect that to affect the choral world going forward?
MV: I have thoughts here oriented around my experience as a choral singer. I think a lot about why we make music and why we listen to music. That is informed a lot by human connection, and also by something indescribable and ethereal about making or listening to music that comes from human connection. I think that part is pretty insulated from AI.
Having AI generate a kind of synthesized choral music or something that sounds very choral is definitely in the cards, but choral music relies a lot on phrasing and emotion and expression in a way that is going to take AI a lot longer to master.
BK: When I was in Melbourne two weeks ago, I got a tour of the Moving Image Museum. There was this one conceptual piece about how our society is so overloaded with information: an installation of a waterfall with words being projected onto it. When I read the description, it was actually an AI algorithm that was pulling words from headlines and putting them together in a random way to then project on the waterfall. So the artist was using AI as a kind of paint, if you will.
As opposed to trying to emulate something that's human, let’s try to push the boundaries of art-making by using AI as a tool instead of as the end work. It seems like we, especially creative artists like composers, have to think about it in that way. And what are the creative opportunities that open up to us because of that?
CD: Last question: What makes you the most excited about the use of AI—for nonprofits particularly?
MV: Out of all the classes of organizations that make the world go around, I think of nonprofits as the most human-oriented. I think that the heart of a nonprofit is about making the world better, and doing so usually relies on connecting with people and brainstorming highly creative and lateral ways to approach things like funding or reach. Those things require that people have time on their hands, away from drudgery and menial work.
AI can be a force multiplier for making nonprofit work better, more powerful. That's what I'm most excited about: to take away all of the boring stuff and then to allow nonprofits to be a lot more powerful, at a scale much larger than what their current staffing would otherwise allow.
BK: We could shift from a sector that operates on time poverty and time scarcity to one that operates on abundance. By freeing up that time so that we have time to plan, time to take a breath, time to find joy in doing the work and connecting with other people, as Maggie said. Which in turn can help us scale some of our work, and maybe then have a greater impact on the people being served.
I know we have this loneliness epidemic in America and I'm not saying AI could solve that. But could it help choruses have more time to do more outreach, to get more people in to have that communal experience singing, and in some ways begin to solve that problem? And if other nonprofits in other areas are also able to do that, could we come out with a better, more connected, more optimistic society? I take the optimistic view.