The Prompt, Ep 7 — Why Companies Need AI Ethics Committees (and How to Start One)

Samyutha: Hi everyone, welcome to today's episode of The Prompt. Today we're joined by Jasper's very own internal ethics committee, Meghan Keaney Anderson and John. I'll pass it off to them to introduce themselves, but again, I'm Samyutha Reddy. I lead enterprise marketing at Jasper, and I'm the host of The Prompt. Meghan, why don't we start with you?

Meghan: Sounds good. As you said, I'm Meghan Keaney Anderson. I run the marketing organization at Jasper, and I get to be in the unique position of both being part of building Jasper as a company and being one of its biggest users. We use AI pretty heavily across our marketing strategy, so I'm coming at this both as a Jasper builder and a Jasper user.

Samyutha: Great, thanks Meghan.

John: Thanks Samyutha. I'm John. I lead the security initiatives at Jasper, so I do security, compliance, and privacy, and help where I can with other topics like ethics. I've been at Jasper for over a year, and I've been doing security for 15 or so years in various industries, but I love the fact that I get to work with AI in this emerging technology space. It's super cool.

Samyutha: Love it, thanks you two. John and Meghan make up the bulk of Jasper's ethics council; the two of you are its steering heads. Can you tell us a little more about how this committee came to be? Whose brainchild was it? Did you inherit it, or was it thrust upon you? How did this all start at Jasper? John, why don't you dive in? You were there from the beginning, and it's really a pretty robust, diverse collection of people from our product organization and from all aspects of the company.

John: I actually joined a bit late. We started this very early, and as with a lot of things in emerging spaces, it started very simply: we wanted a place to take the questions that come up in the AI space that we don't have answers to right away. So we wanted to get together a diverse group of people from inside the company who could answer some of those questions. It started out with questions about how we talk about AI to people outside the company, how we talk about AI inside the company, how we use AI responsibly ourselves, what models we use, and how we ensure those models are diverse and don't have extreme biases in them. But there were also questions of decision making around that stuff internally, so that it's not just one or two people making a decision and we can draw on more diverse backgrounds. Since then it's gotten bigger, and we cover a lot more decisions now. We've even published some things that I think we'll get to in the discussion. As with everything at Jasper, it's been great to watch it start very small and evolve into a much larger and more structured part of the organization.

Samyutha: That makes sense. Can we get even more specific? Who exactly is on the council at Jasper, in terms of functional heads of different areas, and how did you select them? What was the rationale behind it?

John: Partially we accepted volunteers: whoever wanted to be on it, who had a voice, who felt strongly about ethics and about ensuring that we get this right. But we really wanted to make sure we had representation. I represent the security side of the discussion. We have members from our people team to represent the employee base of the company. Meghan is there from marketing and a more public-facing perspective. We also have people from customer success, and one of our founders is a member of the ethics committee to give an overall view. I'm sure I'm missing some perspectives there. Did you have anything to add?

Meghan: Our counsel is in it as well. He brings a really good perspective on legality and how regulations are developing, and that's an important factor too.

Samyutha: Absolutely. We have a lot of pressure on us, but being chief legal counsel of an AI application company? Woof. I hope that man's taking his PTO, because that just sounds so high pressure. It would be great to talk through how it works when you disagree on things, and maybe even take a step back: how are issues introduced? How do you figure out what's on the agenda of each meeting? And taking it one step further, as you talk things through, what are the checks and balances if folks disagree with each other? You have a pretty diverse and, like you said, robust group of people; I'm sure there are different points of view and disagreements.

Meghan: I can take a stab at that. First of all, there's a natural maturation that happens with these committees. As John was saying, in the beginning it starts with a group of passionate people who are drawn to these questions, who want to understand them, who want to tip the strategy of the company in the right direction. So it's a lot of interest, questions, and curiosity at first. The next stage is about defining roles, in a way: understanding what each person brings to the equation, like a legal perspective, a voice in communications, or the actual ability to build and make changes in the product. And then it's, okay, how do we shift from discussion and decision making to action and, as John was saying, to a more structured approach? I think that's important to say, because most companies starting to put together ethics councils right now are somewhere in those early stages, and it has to be a very intentional decision to move from discussion to "what are we here to do, what's our charter, and how do we operationalize this?" So don't get frustrated if you're still stuck in the amorphous phase, but do try to think through what that next stage needs to look like.

As far as debates and disagreements go, that's healthy, right? If we're all agreeing on everything, then maybe we don't have a diverse enough group in the committee to actually make these decisions, because the whole point of ethics is that it's complicated. If it were easy, you wouldn't need to have these hard debates and this discourse around it. So you welcome the disagreements, and you find a structured way to work through them and get to a reliable decision-making process. You think through the criteria by which you make decisions: do we factor in who is most affected by this decision? Do we factor in the size and scope of the impact? It's not quite a scoring mechanism, but what are the filters through which we try to come to the best possible conclusion? You give yourself a map to get through that conflict. I don't know, John, we haven't had a ton of those moments internally where there's been a lot of debate, but has anything jumped to mind, or any other advice?

John: I look at a lot of my job through the lens of security, and we talk about this a lot on the security team: there are a lot of right ways to do security, and I think there are several right ways to do an internal ethics committee. I don't think we've had a lot of knock-down fights about anything; it's more a discussion about what the best approach is, since there are a lot of perfectly fine approaches. What can we do that will have the biggest impact, that will serve the most people, that will create a system we can be proud of, that we can talk about publicly, and that will make our customers proud as well? That's more the focus: taking the ideals everybody brings to the issues we talk about and coming up with the best solution.

Samyutha: That makes a ton of sense, and something you said earlier resonated with me, Meghan, where you said you don't necessarily have a scoring system just yet. Is there some sort of voting mechanism or scoring system you think a council like this might start to establish as you get bigger and face more nuanced issues, issues with real disagreements? Have you thought through any of those logistics?

Meghan: We haven't had to come to that just yet; we're early ourselves. But I will say that at past companies I've been on committees where we had to make decisions about terminating a customer's account for misuse, content moderation committees, that kind of thing. One of the things I learned from that experience is that you actually want to anonymize the incident as much as possible and get it down to the bare facts, to look at whether a violation of the terms or an ethical breach has actually happened, because everybody comes to those discussions with their own biases. I remember a specific case, and I'll leave the details out, around questioning whether someone was using a former platform for hate speech; this was prior to my time at Jasper. We had to take out a lot of the details about the customer and about their political stance in order to fairly ask: if the situation had been flipped and we were dealing with somebody on the opposite end of the political spectrum, with an opposing viewpoint but still using that kind of language, would the language still qualify? So there's a stripping down that has to happen so you can see clearly, and even then it's tough; it's a judgment call. But those are the sorts of things we would try to do to get to the cleanest decision we could make. I'd imagine that as we encounter things like that at Jasper, we'll get together as a group and figure out what devices we're going to use to reach the best decision we can.

Samyutha: That makes a ton of sense. Something that's coming up for me now is education. Ethics councils have existed outside of AI, in every industry and every vertical. What's different with AI is the speed at which the industry is innovating, the market is changing, and users are adopting the technology, figuring out workarounds, and amplifying it. How do you all stay educated on this committee? For so many people, the precursors to wanting to be part of the council were interest in the issues and a desire to be part of change, but a big portion is making sure you're educated and articulate coming into these meetings. How do you two make sure you arrive armed with all the contextual knowledge you need for a thoughtful and fruitful discussion?

Meghan: There's a lot of information sharing that happens, a lot of article sharing, and I think that's actually the piece that works really well, because there's a natural interest there. People are catching things, sharing them, having discussions, and unpacking them. One point I'd love to make here, though, is that I think we go astray if that only happens inside the ethics committee. Part of this is asking how you increase AI literacy and knowledge of these topics across the entire company. That's even why the people office is in there. I was just talking yesterday with our head of people ops about whether we can have more discussion groups, maybe a book group on AI, to help surface some of these issues so that across the company there's an elevation of knowledge in the space. But yes, it's a fire hose. You find a few people you think are producing really helpful content and you build out from there.

John: Can I share a little bit? Meghan, your example about general AI knowledge is great; it applies to the entire space, which is really hard to keep up with. One of the things we talked about recently is this company Lakera, I think that's how you say the name. They created a little game called Gandalf where you can try prompt injection: through the game, you try to get a password out of the system. It's a really cool, gamified example of one of the issues and concerns around AI, like poisoning prompts and that kind of thing. We were able to use it as an example to bring to the committee: is this a concern for us? What protections do we have in place so users can't exploit the system this way? And it sparked a larger discussion about how big a problem this is. That's just one example. The company actually now has tools to help prevent this, and it's interesting to see that as this space evolves, this is how the evolution takes place: issues are identified, somebody raises a concern, we get proof-of-concept systems that demonstrate the concern, and then we put in place protections and controls to not only protect a system like Jasper but also help users make the right choices and protect themselves.

Samyutha: That makes a ton of sense. My last thought on this topic, in terms of the actual structure of the AI council: at what point do you get your customers involved? Are you consulting with executives at your top customers, or even individual users there, to help make some of these decisions or to factor them into the conversation? How does that work?

Meghan: We try to hear from customers as much as possible across the board, and you find that, unlike with other software perhaps, the ethical questions are mixed right in with the tactical questions of "how do I use this thing?" We've talked with customers who are trying to roll this out responsibly across their company, and they're worried about things like: what happens if somebody uses this exclusively, doesn't do any editing, and puts out junk on the internet? What if there's an inaccuracy and we publish it? Are we then liable, are we in trouble? We had a lot of questions early on around plagiarism, and a lot of questions around whether training a model on existing content is stealing from that content in some way, shape, or form. So unlike with other software, we get those ethical questions right off the bat, even in our earliest conversations with prospective buyers. That role is there all the way through, and it's a big part of the reason we try to tackle it head-on: we hear where their concerns are, give them a way of thinking about it, and help them ask the right questions to reach their own conclusions. Recently John and I worked on a template that we put up on our website at jasper.ai/ethics. You can use it as a starting place to have those discussions internally: do we want a transparency statement on our website showing that we use AI? How do we think about standards of use across AI, what we use it for and what we don't? Do we use it for performance evaluations or not? It's a set of tools for having those conversations internally and drawing up your own standards, because that is part of rolling out this kind of software.

John: Let me add an example to that point. I was on a call just yesterday with a customer who was mostly asking security questions, the kind of questions I get a lot, but intermingled with those were questions about ethics: questions about abuse of the platform, and about the ethics of using Jasper to generate content used in very specific cases, like Meghan was describing. So we get a lot of feedback from customers, and my approach in those conversations is usually to say: okay, here's how we're thinking about it; now tell me how that's wrong, tell me what you can add, tell me what we can do better around those considerations. I really try to make it a two-way street in those discussions.

Samyutha: Completely. John, I think you're in a really interesting position here, because it sounds like you're involved in some of those prospect conversations, the security conversations around "is Jasper the right platform for us as end users," but you're also involved in the security and procurement questions around which models Jasper is going to partner with, what will be our driving AI engine. How do you reconcile that? Technically the models are the ones trained on data; they're the ones giving you output. So whose responsibility is ethics? I think people can treat it as if it's punted to the models and lives there, but ultimately our customers pay us and expect a level of ethical oversight. How do you reconcile all of that?

John: That's a great question. If you want to join my team and help me figure that out, that would be great. But no, it's a question we get from customers and prospects a lot, around exactly that. We see the value Jasper adds on top of the base models used to generate the outputs, and part of that value add is security, but also ethics considerations and ensuring that those models are the best available for the specific use case. I don't have a silver bullet to answer everything there, but I think what people looking at generative AI really want is a partner that takes the time to consider those questions: to make sure it works as well as possible, that the data they enter is secure and safe, that their employee data stays private and we have privacy controls around it, that the generations in the system are as good as they can be, and that the content they create is as high quality as they can get. That's how you get the multiples on productivity that we see from clients, and that's how we succeed. It's a really tough question to answer, and a lot to cover, but we take it seriously and try to answer all of those questions individually as best we can.

Meghan: I can add to that a little. Yes, there are pieces of this that happen in the model: a lot of the filtering to prevent or reduce hate speech happens at the model level, and a lot of the choices around efficiency and environmental impact happen at the model level. Those are things we don't control but can influence through the model providers we choose to work with. But then there are also choices all the way through, seemingly small choices, that have an impact on responsible use. I've seen our team do a really nice job of being opinionated in the choices they make in the product and correcting things when they feel like, hey, maybe this is leading to bad use. A good example: we used to have a template called the one-shot blog post, because you could give it a prompt and it would write a very extensive, end-to-end blog post. But we're also trying to guide people: you don't want to just hit publish on something that is AI-only. You want human editing, you want checks, you want to look for things like bias or inaccuracies and shape it as a human; that's an important role. So recently we changed it to the one-shot blog draft. It's a little guidance change, but that word "draft" is really important; it's carrying a huge load as a cue to people that there's a role here for responsible use. You'll see us, and we don't always get it right the first time, always trying to evolve, not just in how the product works but in how we talk to our prospective buyers and customers and in the education we do. This is more than just technology; it's an entirely new strategy, so we're trying to help steer it in a way that produces the best outcomes from an ethical and responsible standpoint.

Samyutha: I love all of those points. An observation from me is that generative AI feels like this huge, omnipotent thing: it's everywhere, everyone's talking about it, and everyone has an opinion on it. When I think about how the media portrays AI and AI ethics, I think the moves of a few key CEOs and a few key companies, in how they articulate their stance on AI and ethics, actually create much of the buzz. It's really a smaller group of people, and their viewpoints and the policies they put out directly correlate with what the media reports on. And importantly, what the media reports on directly correlates with some of the conversations happening at the policy and government level, in terms of how we think about regulation and the doomsday talk tracks that come out of clickbait headlines, out of companies perhaps speaking irresponsibly or not articulating themselves well. So, to underline it, these ethics councils are incredibly impactful even beyond how Jasper interacts with its customers; they make Jasper a bigger player in the generative AI space. With that said, maybe my last question here: is there a concern you two talk about internally a lot, and it doesn't have to be purely ethical, just a component of generative AI that comes up often, that you don't feel the media is reporting on? Something where you think, "why is no one talking about this? It comes up in our meetings all the time." Anything that immediately comes to mind?

John: I've actually got one; let me share it. I've talked about this with a few people. It's easy in the media, as we talk about how cool generative AI is, to think of it as magic that will just do everything for you. I was in a discussion with other security leaders about this just this week: the idea that we'll get to a point where this generative AI technology is going to rule the world. What we're actually talking about is very limited and not related to that type of discussion. Generative AI is awesome; it's cool technology. I say this often: it's probably the coolest technology I'll ever work with in my career. But it does have its limits, and to Meghan's point earlier, you shouldn't just take the output and post it anywhere. A step further than that, I don't think it's necessarily wise to let it do its own generations, decide on its own what it should be generating, and just run unattended. I think we'd see a quick degradation in the quality of the output, but we also run the risk of generative AI on its own being less representative and less ethical in its decisions. We've seen examples of this in the past with other machine learning and AI systems. So a little bit of control around how we use the outputs and what we use them for is really what takes it from a very cool technology into that magical realm. You can't get rid of the personal interaction you need in order to make that transformation.

Samyutha: So John, if I'm hearing you correctly, I think what you're saying is that it's often an all-or-nothing framing you hear in the media and in people's opinions. Either it's "keep me away from AI, it's going to take my job and destroy everything I'm passionate about in the world," or you hear the other side, sometimes the Silicon Valley echo chamber, of "this promises to revolutionize everyone's world, this is the complete, sweet future," where the role of the human almost disappears. Your viewpoint is that we're somewhere in the middle, and being in the middle is actually the sweet spot; we're not working toward either extreme.

John: Yeah, I totally agree. I think the human component is super important. Sorry, Meghan, go ahead.

Meghan: No, that's good. The other thing that jumped out at me, John, is this idea that we have to demystify this thing. Meredith Broussard is the author of a book called More Than a Glitch, which is about bias and AI, and she has this really great line: the more we treat this thing as a magician, as omniscient, as you said, as an un-understandable petri dish, the harder it's going to be for us to make those judgments about what's right and what's not. Often when I see the media talk about AI, it's with this awe or this disdain, to your point, Samyutha. They treat it as something we can't really explain: there's a lack of explainability, a lack of traceability, so it just magically produces these answers. I think we need to push a little further, not just accept this thing as a black box, and hold ourselves accountable for asking how that decision process happens within AI. It's far more complex than an algorithm or a traditional Google search result, but there is some explainability and traceability that can happen there. So we should not keep it in the realm of the mystical, and we should keep asking those questions.

Samyutha: I love that. Another question, around the future of an AI council and how this might evolve as the market, the industry, and the technology mature even further: do you anticipate that something like an AI ethics role will sit on boards one day? How do you think this will manifest in the future? Are you seeing roles for this sort of person at AI companies, and even elsewhere? Even for companies that aren't AI-native, every company is now becoming an AI-first company. So how do you think job creation will play into the emphasis on AI ethics?

Meghan: It's funny, even beyond AI, there should be ethicists at companies. Certain companies, even outside of AI, can affect the society around them in major ways through their decisions, and they should have ethicists on staff or on the board. And if not on the board, there's outside help: I was actually talking the other day with a woman named Olivia Gambelin, who runs a community and consultancy called Ethical Intelligence. She goes into big businesses and helps them think through ethical standards and practices, even specific instances they're facing, and gives them the frameworks to make those decisions. So whether it's internal, or you're bringing in ethicists from the outside who can help create an operating model for dealing with these questions, I absolutely think it's going to be an ever-increasing role, and it probably should have been a role for many years, even before AI.

Samyutha: Absolutely. Well, if anyone out there is an AI ethicist, I want you on The Prompt, and I want to know everything about doing that as your full-time role. In the meantime, Meghan and John, thank you so much for joining us and giving us a sneak peek into what Jasper's ethics council looks like.

Meghan: Thanks for having us.

John: Thanks so much for having us. This was great.

As found on YouTube
