With increasing reliance on AI for tasks such as data analysis and decision support, how is Moderna ensuring responsible and ethical use of AI technology within its operations?
That’s a great question; it’s always at the top of our priorities. We’re not in charge of designing the technology itself. We use APIs or products that other companies, the hyperscalers, have designed, and those companies have their own responsibilities. Ours are primarily focused on how we use it, and how we use it properly.

First, we published a code of conduct. It’s a published document, by the way, and it’s part of our company policy at Moderna. It’s an evolving document, and we’re learning along the way what the right code of conduct should be. But it’s a combination of what is legal, of course, and what is desirable in order to use AI responsibly. For example, we want to make sure that we respect human integrity and human diversity in every part of the company. For us especially, as a life sciences company, that’s very important. It means providing universal products that can serve people and save lives in every part of society and in every country in the world. We have a stronger desire and ambition for diversity than any other company I know of; for us, diversity is intrinsic. AI may carry biases inherited from its training datasets, but that is not a new topic; it comes with the industry. The care we take in how we use AI is the buffer between the training corpora and datasets, with their inherent biases, and the way we leverage AI with a mindset consistent with respecting human integrity and human diversity.
From that code of conduct, a set of principles, we expand and layer it into a more detailed user policy, which you must read, understand, and demonstrate that you have understood before you can access the AI product. So we make the AI products accessible to everyone, but you have to be trained on the user policy before you are granted access. You need to understand what to do and what not to do with the AI, because with that access comes responsibility. So we’ve talked about two levels, the code of conduct and the user policy; the third level is governance. How detailed the governance of a use case needs to be depends on its scope. If you are running a GPT just for yourself, the code of conduct and the user policy are the foundation. If you are running a GPT for a team, have your manager approve that GPT, have your team give input on how it is built and organized, and review the data. And if you are creating a GPT that is going to be business critical and is going to impact the company, you want real governance for that GPT. I am looking at this, but I don’t want to do it for 1,000 GPTs; that would be like using a sledgehammer to drive a thumbtack into the wall, and it doesn’t make sense.
But we want to be mindful of the dozens of GPTs that may be important to who we are and how we work as a company in the future. For those, we want to provide the right level of governance on how they are designed, how they are trained, who owns them, and how and when they are updated. Because they are products. I call ChatGPT a product, and that’s true, and AI agents are products, too. That’s why my team is about products and platforms. You can also think of ChatGPT as a platform that delivers products, so each AI agent is a product, and we need to apply a product mindset to it and make the same demands of that product as we do of any other technology in our company. But we can’t do this for everything someone builds. We need to be mindful of priorities and give people room to experiment, the freedom to try things with their own personal use cases. So we’re learning as we go, and there’s still a lot of research and thought going into this. We’re learning every day on three levels: code of conduct, user policies, and fine-grained governance. What makes sense? How do we keep ourselves and our company safe? And how do we ensure that, as our reliance on AI increases, and we don’t see that momentum slowing down, we stay in an environment where we can safely learn with AI all the time and keep getting the most out of it? This is no small topic. Mitigating the risks of AI and continuing to use it safely is incredibly important to us, both in how we work and in how we use AI in different parts of the company.