Okay, hello everybody. Welcome to this week's brown bag event. If you don't know me, my name is Mark Riedl; I'm a professor in Interactive Computing here. It is my absolute pleasure to bring to you today, and to introduce to you, Vera Liao. She's a Principal Researcher at Microsoft Research Montreal in the FATE group. If you don't know it, this is a group spanning New York and now Montreal that looks at fairness, accountability, transparency, and ethics in AI and computing systems — a really great initiative they have going up there. Prior to that, Vera was a researcher at IBM Research. She earned her PhD at the University of Illinois. You can guess from the fact that she works in the FATE group that she works on issues related to fairness, accountability, transparency, and ethics. Specifically, she works on human-centered explainable artificial intelligence systems as well as conversational interfaces — very much how users and AI systems are going to interact with each other. She has numerous awards from the CHI community as well as the Intelligent User Interfaces conference, which is a great conference that looks at these issues. And if you haven't come across her work yet, you will, because she is an associate editor for the International Journal of Human-Computer Studies, she's on the editorial board of the ACM Transactions on Interactive Intelligent Systems, and she is going to be a guest editor of an upcoming special issue of Transactions on Interactive Intelligent Systems focusing on explainable AI systems. So I'd love for you all to give her a warm welcome — and this is how we sign applause. I'll hand it off to Vera, but just to say: you all know the drill. If you have questions, you can put them in the Q&A system. I'll also be monitoring the chat, and we'll get to your questions at the end of the talk. So there you are — the floor is yours. We can't wait to hear what you have to bring to us.

Yeah, thanks, Mark. Thanks for that kind introduction. I'm going to start sharing my slides, and I hope you all can see them. Thank you for having me here. I visited Georgia Tech during CHI 2010 when I was a first-year master's student, so I'm very excited to be back, even though we're interacting through screens.

Let me set up the stage. As Mark mentioned, this has been a pretty interesting, exciting period for me personally. I just joined Microsoft Research two weeks ago after spending five years at IBM Research. I'm part of the FATE group, and today we're going to focus on the T of the four letters: transparency. While I was enjoying the transition period and wrapping up my old projects, I got to think a little bit about what to work on next and to read more papers. I was also asked to write a book chapter giving an overview of recent HCI work on the topic of explainable AI. I wrote the chapter with my former colleague Kush Varshney; we put it on arXiv, and I very much welcome feedback at this point. This talk is going to be based on that book chapter. I want to discuss some of my previous projects with my collaborators, but I also want to look broadly at recent HCI work and try to answer this broader question: what are human-centered approaches doing for explainable AI? Chances are you have heard the term explainable AI, or related terms like interpretability and transparency.
The backdrop of the rise of this topic is the popularity of opaque-box AI models, for example deep neural networks. Society has recognized that it is really dangerous for this kind of powerful technology, which can be used to scale up decision-making tasks in many high-stakes domains, to remain opaque. Meanwhile, technology creators are also very motivated to fix this lack of transparency, so that users will not be turned away and will be able to trust and adopt the new technology.

The technical community has already made great strides. In the past five to six years we've seen an explosive number of papers published in AI venues producing algorithms with the goal of making AI models more understandable for people. I won't get into the many definitions of explainability and interpretability; I think the common denominator is the goal: make AI more understandable for people, regardless of how that is delivered. Meanwhile, in the past two or three years, we're also seeing papers published in HCI and social science venues looking at how people interact with, use, or are impacted by explainability features. For example, the term human-centered explainable AI was actually coined by Upol and Mark here at Georgia Tech; others use the term user-centric XAI; and this year at CHI, a workshop we organized used the term human-centered perspectives. So there's really an emerging community within HCI looking at the topic of explainable AI, and these are what I broadly refer to as human-centered approaches to XAI.

This is necessary, even inevitable, work, because explainability is inherently a human-centered property: its success depends on whether the person receiving the explanation can understand the AI, not on how much detail you can reveal about the model. So I'm interested in the current trends and the important problems. I'm also interested in seeing explainability as a task that is as much a human problem as a technical problem, and in reflecting on how the AI and HCI communities can better work together — hopefully by the end of my talk we can do some of that reflection together.

My answer to this question is influenced by my own background. By training I'm an HCI researcher with a strong focus on cognitive aspects, meaning I'm often interested in how people process information, make judgments, and make decisions. In my role as a researcher in industrial labs, I often intersect with AI researchers who are developing new algorithms, and I also interact with product teams a lot — I refer to the designers, data scientists, and engineers working on product teams as AI practitioners. Probably the most defining experience for me was working on AI Explainability 360, an open-source toolkit we released in 2019. It's an off-the-shelf solution: if you are a data scientist, you can plug in your own model and leverage some of the state-of-the-art algorithms to build a more interpretable model or to generate explanations for your own model. AIX360 is only one of many — there are actually a handful of toolkits on the market right now. Microsoft has its version called InterpretML, and almost all the big tech companies,
as well as some startups, have their own versions of this kind of toolkit. A lot of my work and experience has also been shaped by trying to get practitioners to use these toolkits, to make appropriate use of them, and to get them to think about explainability in general.

Let me very quickly map the technical landscape. There are many different algorithms; they're not the focus of this talk. If you're interested, I encourage you to check out this website — we taught a course at CHI this year where I gave an overview of what the different algorithms look like and what their outputs look like, with links to the code library AIX360. Just to give you the background: if we look at the hundreds of algorithms in the literature — we have a dozen of them in this kind of toolkit — they generally fall into two camps. One is to build directly interpretable models. The traditional way is to use simpler models like linear regression or a decision tree; to be directly interpretable, you show the model's internal decision process, which is relatively intuitive. More recent algorithmic research develops new algorithms that perform better — for example, more sophisticated rule-based models — while still preserving this directly interpretable property. On the other side we have what's called post-hoc explainability: you already have an opaque-box model, let's say a deep neural network, and then you use another set of algorithms to generate explanations. There are many different algorithms, but they generally fall into three areas, or three purposes. Global explanation gives you an overview of the general model logic; a popular approach is called model distillation, which produces an approximation of the complex model in a simpler form, like a decision tree, to give you an overview of its logic. Then we have local explanation, which focuses on explaining a particular prediction made on a particular instance; a very popular approach is feature importance, which highlights the features of this instance — for example, patches of an image — that really contributed to the decision of whether the model sees a cat or a dog. We also have counterfactual explanation, a relatively new, emerging area that gives you an idea of how the input would have to change for the output to change — for example, telling you which features to focus on changing or improving so you can get a different, often more desirable, outcome.

Then let's come back to human-centered approaches. The way I conceptualize this work is that I see it as bridging from algorithmic research to user experiences, and also bridging from the current reality, where algorithms are developed and toolkits are put out, to a desirable future where we will see many real-world XAI systems — AI systems that build in explainability as a core part of the user experience. These systems will work in many domains and serve many user groups; they are also not just built in the lab, they are going to be built by practitioners. Once we take this kind of bridging view and focus on practitioners, one framing I find really helpful is to zoom out and see this technical work as producing a toolbox of techniques. Of course there are incredibly interesting technical challenges, but for practitioners, at the end of the day they want a good toolbox: they want to know when to use what kind of tool and how to use them effectively to build AI systems.
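To make the post-hoc categories just described a bit more concrete, here is a minimal, purely illustrative sketch — not from the talk and not the AIX360 or InterpretML API — of a global surrogate ("model distillation"), a crude perturbation-based local feature importance, and a nearest-unlike-neighbor counterfactual, using scikit-learn on made-up data.

```python
# A minimal sketch, assuming scikit-learn and synthetic data, of the three
# post-hoc explanation styles described above. Illustrative only; the
# dataset and feature names are made up.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
names = [f"x{i}" for i in range(X.shape[1])]
opaque = RandomForestClassifier(random_state=0).fit(X, y)      # the "opaque box"

# 1) Global explanation via model distillation: a shallow tree that mimics the model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, opaque.predict(X))
print(export_text(surrogate, feature_names=names))             # overview of the model logic

# 2) Local explanation via feature importance for one instance: measure how much
#    the predicted probability drops when each feature is replaced by its mean
#    (a crude occlusion-style attribution).
inst = X[:1].copy()
base = opaque.predict_proba(inst)[0, 1]
for i, name in enumerate(names):
    perturbed = inst.copy()
    perturbed[0, i] = X[:, i].mean()
    print(f"{name}: contribution ≈ {base - opaque.predict_proba(perturbed)[0, 1]:+.3f}")

# 3) Counterfactual explanation as a nearest "unlike" neighbor: the closest
#    training example that the model classifies differently from this instance.
pred = opaque.predict(inst)[0]
others = X[opaque.predict(X) != pred]
cf = others[np.linalg.norm(others - inst, axis=1).argmin()]
print("Changes that would flip the outcome:", dict(zip(names, np.round(cf - inst[0], 2))))
```

Real toolkits of course package far more careful algorithms behind these ideas, but the division of labor — directly interpretable models on one side, and global, local, and counterfactual post-hoc explainers on the other — is the same.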
With this conceptualization of a toolbox of techniques, I summarize human-centered approaches into three areas. The first area is to help navigate the toolbox — to inform when to use what kind of tool, and to drive technical choices by understanding user needs. The second is to assess the toolbox through empirical studies, to really understand where the limitations are, where things fall short, and where the pitfalls of existing methods are. And lastly, we want to expand the toolbox: to inform new methods and also to provide conceptual frameworks for more human-compatible XAI. I will walk you through each of the three areas and discuss some of our work and other recent work.

So the first area is to help navigate this toolbox — to answer when to use what kind of algorithm. A starting point is to characterize the space of different explainability needs, which may help form guidelines for when to use what. Many works try to answer the question: who are the typical users of XAI? I'm really glad the field has moved beyond its initial focus on model developers who want explanations to inspect and debug the model. If we think about real-world XAI systems, lots of AI systems fall into the realm of decision support, so an important user group is decision-makers — the direct users of these AI systems. They want to look at explanations and make more informed decisions. There are also impacted groups, who may not directly interact with the AI system but whose lives can be impacted by the AI's decisions. One example: if you are a loan applicant and your application will be assessed by an AI system, even though you're not directly interacting with it, you might want an explanation of why your loan application was rejected, to be able to seek recourse in the future or even to contest the AI. Then we also have business owners, who want to assess the capability of the AI system and determine whether they are going to use it in their organization. There are also regulatory bodies, who want explanations to be able to audit models for issues such as bias, safety, and privacy.

Having these personas of XAI users is very useful, but we should also recognize that the broad personas might not be enough. Take the example of a decision-maker: in the onboarding stage, they might want explanations to understand the system and form appropriate trust; when they're making a particular decision, they may want to understand the rationale behind the decision so they know how to take better action. So even for the same user, at different points in the user journey they may have different objectives for seeking explanations. A new paper at CHI this year by Suresh et al. characterizes this space of explainability needs by users' objectives. I find it really useful to think of users' objectives, but my personal view is that even that may not be granular enough. One approach I find really useful is to think of users' explainability needs — what kind of explanation they need — in terms of what kinds of questions they ask of the AI. There is a broad body of HCI and social science literature showing that people's explanatory goals can be expressed by the questions they ask. Take again this model-debugging example: a model developer may ask, "why is the performance of the AI not good enough?" They may also ask, "how does the AI make predictions — what is its overall logic?"
They may also look at a particular mistake the AI made and ask why it made such a mistake. For a how question, a why question, or a what-if question, they will need different kinds of explanation. So the takeaway here is that this is a very nuanced space: what kinds of questions people ask depends on their goals and background, and also on the context. Having this kind of top-down framework is really useful, but at the end of the day we also need a user-centered design process to understand users' needs specific to an application and to a particular interaction.

This has been my own research focus in the past few years: we have been developing a user-centered design process that product teams at IBM could use, centered around what kinds of questions users ask, and using that to drive both technical development and design choices. I want to give you an overview of this work. Where we started was a formative study looking into current design practice. We wanted to talk to designers and practitioners, first to understand the design space of XAI user experience — a little bit forward-looking, given that it's still not an established practice — and also to understand the design challenges. This question framing was really helpful: we were able to focus on asking our designers what kinds of questions users have for understanding the AI. We also had pre-prepared question cards to walk through whether users would ask those questions, where they would come up, and why they would ask them. That way we could ground our discussion in questions without getting into the technical details of the different kinds of algorithms and their outputs, which the designers may not be familiar with. One reason for doing that work was to understand the design space based on real-world AI systems and bring those insights back into algorithmic research, to inform where there are opportunities for new methods. We also wanted to understand how these explainability needs emerge, and to derive some high-level guidelines for designers to think about different kinds of explanation.

The details are in this paper. One thing we tried to contribute from this work is the XAI Question Bank. Recall that we asked our designers to come up with user questions. Through this quote-unquote designer-sourced method, we gathered a list of questions, performed content analysis, and summarized them into this question bank, categorized into nine different categories. They represent a common space of users' explainability needs. One interesting thing to note in the question bank is that users' needs are indeed very broad: although when I talk about algorithms they tend to focus on the right side — the why and why-not questions — users are also interested in data, output, and performance. They want a holistic understanding of the AI system. In another piece of work, we took those main categories we had summarized and mapped them to algorithms that can address those questions. Given my focus on practitioners, we particularly focused on algorithms that are available in open-source toolkits, including AIX360. By doing that mapping, we also derived a set of guidelines, in the middle of the table, for which explanations can answer which categories of user questions, grounded in current technical feasibility.
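Just to illustrate the flavor of such a mapping — this is a hypothetical sketch I'm making up here, not the published Question Bank table or guidelines — you can imagine it as a simple lookup from question category to candidate technique families:

```python
# Hypothetical, simplified question-to-technique mapping, for illustration only.
# The real XAI Question Bank and mapping guide are richer than this sketch.
QUESTION_TO_TECHNIQUES = {
    "How (overall logic)":   ["directly interpretable model", "global surrogate / model distillation"],
    "Why (this prediction)": ["local feature importance", "example-based explanation"],
    "Why not / What if":     ["contrastive explanation", "counterfactual explanation"],
    "How to change outcome": ["counterfactual / recourse methods"],
    "What data":             ["training data documentation", "data distribution summaries"],
    "Performance":           ["disaggregated accuracy metrics", "uncertainty / confidence display"],
}

def candidate_techniques(user_questions):
    """Given user questions grouped into categories, list candidate techniques
    for designers and data scientists to discuss together."""
    return {q: QUESTION_TO_TECHNIQUES.get(q, ["needs new methods"]) for q in user_questions}

print(candidate_techniques(["Why (this prediction)", "Performance"]))
```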
So there are two core ideas behind this mapping. One is that we want to reframe the technical space: instead of thinking about global versus local explanation, we map the methods to the kinds of user questions they answer. That also encourages practitioners to foreground user needs instead of technical feasibility. The other idea is that we want to create a boundary object. Looking at the table, designers — perhaps through user research — can get a clear understanding of what the user questions are and why users are asking them. Data scientists can look at the same table, follow the links to the technical details, and consider what model and what data they're using. So they can work together to identify appropriate techniques for their products.

The reason I developed this method, this table, was also informed by the study. One thing that really opened my eyes from talking to designers is the process-oriented challenges they face. Designers are very eager to advocate for explainability, but they face some barriers. One is that they have difficulty navigating the technical space: they're often not trained in machine learning and don't necessarily know the different kinds of techniques, yet their task is really to find the right pairing between what's right for the user and what is feasible from a technical point of view. They also need to convince the data scientists, who are burdened with the implementation. Because of a communication barrier, or the lack of a shared process, they also have problems just getting the team — getting the data scientists — to prioritize explainability.

To meet those challenges, we developed a design process which we call question-driven XAI design. I still haven't had time to properly publish this work — we again put it on arXiv — but this year I spent most of my time at IBM running workshops and educational events to get our product teams to know about this process and adopt it in their product development. The design process consists of four steps. We encourage teams to first start with identifying user questions. This can be a lightweight elicitation exercise or user research: right after you scope the tool and set the vision, follow up with the question, "if we have this kind of AI system, what kinds of questions might you have?" Once you've gathered the users' questions, the second step is to analyze them: group similar questions and identify the priorities; you can use the Question Bank as a reference. Step three is where designers and data scientists really sit down together and find the right technical solution; they can again use the mapping guide as a reference to find candidate solutions. Once they have the candidate solutions, data scientists can proceed with the implementation and designers can start creating a version of the design. We also emphasize that the design process should be iterative: you should evaluate whether you've addressed user needs and user preferences, and iterate to fill the gaps.

Without getting into too much detail, this is the example we describe in the paper where we actually practiced this process. This was a collaboration with a team in Watson Health. They were trying to develop an AI system for adverse event prediction.
A doctor looking at the system can be told that a patient might have a high risk of unplanned hospitalization, so the doctor can maybe come up with a better care plan. You can see we started with a round of user research, interviewing doctors and gathering the questions they have. A lot of them asked the why question — why is this patient given a high-risk prediction. Most of them were also interested in performance, as well as the data — whether the training data aligns with their patient population. And you can see on the right side the design we came up with, with multiple kinds of explainability features that have a clear correspondence to the questions we gathered from the user research.

Moving on to the second part, I want to talk about some of our work and others' work on empirical and experimental studies that reveal certain limitations and pitfalls of existing XAI methods. I want to say that this is not to negate the whole field — of course, there is a lot of empirical research showing the promise of explainable AI — but I do want to call attention to some of the pitfalls revealed in recent work. I summarize these pitfalls into two areas of disconnect in how the algorithms are currently developed.

The first disconnect is with user objectives and contexts in deployment. In the first part we talked about the many different kinds of use cases and user objectives, but they're often not in the mind of researchers when they develop their algorithms. More often than not, they may think about someone like themselves, who wants explainability to inspect the model or debug the model. But if the explanation is for a downstream end user who uses it for other purposes — say, to support a decision — it may not work. There is a term, "the inmates running the asylum," and I think it's pretty precise: algorithms are developed, and explainability is defined, in a vacuum, without considering the intended usage. There is also a dangerous practice in the evaluation tasks currently used by AI researchers: they have limited evaluative power, they do not consider these different kinds of usage, and recent HCI work shows that they may fail to predict actual success in the deployment context. I want to take a step back and say it is big progress that it is now common practice to run user studies to evaluate the algorithms. But the task is often simplified, using what's called a proxy task. For example, it's common for researchers to run a Mechanical Turk study and ask the Turkers which of two explanations is better — where "better" is either unspecified or not grounded in the end goal of the user. Another common proxy task used to evaluate XAI algorithms is what's called a simulatability task. The idea is to give the participants — the Mechanical Turkers — an instance and an explanation, and ask them to simulate what the model's prediction would be. The rationale is that if an explanation helps them understand the model's rationale, then they can simulate the outcome. But look at this kind of task. This is an AI system that uses image recognition to predict whether this is a high-fat meal, and it shows an explanation. If you just look at this part and it tells you that the AI is recognizing avocado and bacon, then it's an easy judgment: yes, the model will say this is a high-fat meal.
But the problem is that this is not the real task a user would perform with an explanation. More often than not, a user of a decision support system wants the explanation to be able to rely on the AI appropriately. When they see this explanation, they want not only to understand how and why the AI is making this prediction, but also to judge whether it is a reliable prediction. To make that judgment, it's not enough to see that the AI is recognizing bacon; you also have to recognize when it is mistakenly identifying something else on the plate as bacon. The user has to have the knowledge to recognize that, and also the time and the cognitive resources to make that judgment, which is often not the case in a real-world decision context. And in that kind of usage context — take this high-fat diet prediction — what the user might really want is an answer to the question, "how can I improve my diet? What part of the plate should I focus on improving so that my meal will not be considered high-fat?" Again, the recourse action the user wants to take cannot be answered by this kind of explanation.

The second disconnect I'm seeing in current work is with cognitive processes — how people actually perceive and receive explanations. A pretty robust finding in recent empirical studies is that explanations can lead to unwarranted trust and confidence. We had a paper published at the FAccT conference in 2020 (then called the FAT* conference) with an experiment using a decision support system. The use case is that the AI looks at a customer profile and predicts whether this is a high-income customer, and the participant, acting as the decision-maker, can either accept the AI's decision or make a different prediction. We found that showing an explanation — this kind of feature importance explanation — versus not showing one actually reduced people's decision accuracy to some extent. The reason is that in the low-confidence cases, where the model has low confidence, people really should be cautious and not over-rely on the AI's decision. The explanation can even hint at that: if you look at the bars, it is a pretty ambivalent prediction. But participants were not really paying attention to that when making their judgment; having the explanation made them more likely to rely on the prediction, even for these low-confidence predictions. In another study, my colleagues at MSR looked at how data scientists use interpretability tools — these kinds of visualization tools meant to help them inspect and debug the model. They found that having access to this kind of tool actually led to overconfidence: people didn't really understand what was going on, but thought the model was ready for deployment. Studies have also shown that even placebic explanations — explanations that don't really tell you anything useful — can increase users' trust.

These phenomena, I think, point to a blind spot in current XAI work, which is the plurality of people's cognitive processes. You've probably read the book Thinking, Fast and Slow, so you know that when people process information and make judgments, there are two kinds of processes. We have the System 2 process, which is slow thinking: deliberating on the information and making an analytical judgment.
But more likely than not, people resort to System 1 thinking, which is fast thinking that often involves invoking heuristics, or rules of thumb, to make a judgment. The underlying assumption of current XAI work is that there is only the ideal user, who will read the explanation carefully and be able to understand it. But real users interacting with an AI system may rely on System 1 thinking; they are going to invoke all kinds of heuristics, and if a heuristic is applied inappropriately, that leads to biases. There is a body of social science literature on predicting when people are more likely to engage in System 1 thinking, and the overarching variable is that when people lack either the ability or the motivation, they are more likely to resort to it. That points to another pitfall of current XAI systems: they can lead to inequality of experience for people who don't have this ideal profile of ability or motivation. They may have a very different experience, and XAI may even lead to negative consequences, or even harms, for them. There are already empirical studies showing that novices, compared to experts, get less performance gain but more illusory satisfaction from explainability. There are also studies showing that people gain less from explanations if they are in a cognitively resource-constrained setting. In our own work we also looked into personality traits. We looked at the trait of need for cognition, which basically tells you whether someone enjoys effortful thinking. We found that people who have low need for cognition have a different experience with XAI; it can even decrease their overall satisfaction with the system.

So now that we've talked about these design challenges and these different kinds of pitfalls, how do we move forward? I'm approaching the end of my talk, so I just want to point to a few emerging directions I'm seeing in recent work that aim to expand the toolbox — to move us from algorithmic explanations to actually achieving actionable understanding for end users in deployment contexts. One path forward, since we were discussing the disconnect with cognitive processes, is to develop more cognitively compatible explainable AI. In the past few months we're seeing more and more new work looking at what kinds of heuristics are involved in explainable AI, and in human-AI interaction in general — I'm seeing more papers, for example at IUI this year. We've also had a collaboration with Upol and Mark here, looking at what kinds of heuristics are invoked when people interact with XAI, and how the heuristics differ for people with or without an AI background. I also want to point out that heuristics are not a bad thing; they are an indispensable part of people's cognition. So we also need to think about how to enumerate heuristics and cultivate warranted heuristics. The heuristic of associating being explainable directly with being capable is an unwarranted heuristic. But we can envision a future where, for example, a third-party authority or experts inspect the model and give an endorsement; then novices looking at that feature can resort to an authority heuristic, which is a more warranted heuristic.
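Since both of the reliance studies I mentioned a moment ago come down to measuring over-reliance, here is a minimal sketch — with entirely fabricated data, not the actual study data or analysis code — of the kind of measure involved: how often participants follow the AI when it is wrong, split by whether they saw an explanation and by model confidence.

```python
# Hypothetical analysis sketch: over-reliance = following the AI when it is wrong.
# All data below are fabricated for illustration; this is not the study's code.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
saw_explanation = rng.integers(0, 2, n).astype(bool)   # condition assignment
low_confidence = rng.random(n) < 0.3                    # model confidence flag
ai_correct = rng.random(n) < 0.85                       # whether the AI was right
followed_ai = rng.random(n) < 0.7                       # whether the participant agreed

def over_reliance_rate(mask):
    """Share of AI-wrong trials in `mask` where the participant still followed the AI."""
    wrong = mask & ~ai_correct
    return followed_ai[wrong].mean() if wrong.any() else float("nan")

for expl in (False, True):
    for low in (False, True):
        rate = over_reliance_rate((saw_explanation == expl) & (low_confidence == low))
        print(f"explanation={expl}, low_confidence={low}: over-reliance={rate:.2f}")
```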
Other work looks into how we can improve System 2 thinking with explainable AI. One way is to develop interventions for better System 2 processing — for example, getting people to slow down, or leading them through looking at the explanation and at its different parts. I also think there are technical opportunities to develop XAI with a lower cognitive workload — for example, algorithms that can balance the trade-off between the accuracy of the explanation and the cognitive workload. I think what this work points to is an under-explored design space of XAI communication: once we have an algorithmic explanation, how do we communicate it? What is the format, what is the modality, what level of detail, and how do we influence people's cognitive processes as they receive and process the explanation? These are what I consider questions of communication.

The second direction is that we need to take a sociotechnical approach. I don't have time to talk in detail about Upol and Mark's paper, but I think this is very important, visionary work. I talked about the context- and objective-dependent nature of explainability; inherent in that is a need to think of AI systems as sociotechnical. They are often situated in social and organizational contexts, many people interact with them, and there are social and organizational norms and behaviors around their usage. So to understand explainability — to whom, and through what sense-making process — we need to take a sociotechnical view. This was a very inspirational paper for me, and a good outcome is that we started collaborating with Upol and Mark here at Georgia Tech. One result is that we proposed the concept of social transparency, which is to make transparent not just the technical system, but the sociotechnical system. The way we propose to do so is to present past users' interactions and their reasoning with the AI system — for example, why they rejected or accepted a recommendation. We published this paper at CHI this year, and I encourage you to check it out. We found that this kind of social transparency feature improved decision-making and also improved the collective experience within the sociotechnical system.

And the last part — I think this part probably deserves its own talk — is that I'm seeing more and more work that really builds on theories of human explanation: the process of how humans produce explanations, how they select explanations, how humans consume explanations, as well as explanation dialogues. We're seeing work that starts to build computational models, or design frameworks, based on these theories.

To conclude my talk, what are some lessons from thinking about this work — how do I do human-centered explainable AI work, and in general how can HCI and AI researchers better work together? There are a few lessons I want to point out. One is that a human-centered approach is not just about designing and evaluating systems with the user; it requires us to fundamentally shift our view, to reframe the technical space not in terms of technical affordances but in terms of what human goals, values, and needs they serve. That will also encourage practitioners to foreground user needs in deployment. The second is that a human-centered approach means we need to make more responsible use of the technical toolbox: we need to carefully examine the limitations and pitfalls of the techniques we produce.
And also, for practitioners, no solution is purely technical; we want to expand the toolbox with design tools, whether in the form of design guidelines, new design frameworks, or design spaces. Lastly, and most importantly, I think as HCI researchers we do have a responsibility to deeply engage with deployment contexts and with people's lived experiences, and to bring that back into the technical community — whether through critical work, through developing better evaluation methods, or by providing new conceptual frameworks that can inspire a new set of technologies that are more human-compatible. And with that, I'd like to conclude my talk. I want to thank my collaborators and the human-centered AI community, and I want to open the floor — I think we have a few minutes to take questions. Thank you.

This is something that on the surface seems very simple, and then you open it up and discover — opening up the black box of a neural network, opening up the black box of this topic of explainability — that there are so many different dimensions to it. We do have some great questions. I'm going to try to get to some of these; I'll have to prioritize them, but I'll start with one that I think speaks directly to your experience. In terms of the question-driven XAI design approach, do you have any advice or tips for researchers who want to ask these questions of potential users who don't know what AI technologies are at all, such as a patient who will be impacted by a health AI technology?

Right, that's a really good question, and one that I also reflected on a lot when I tried to introduce this process to different product teams and different designers. We have some reflections on this in the paper on arXiv as well. One approach we found really useful, and really important: when I say the first step is to elicit questions as an exercise, you really want to set up the stage to a point where users are able to come up with questions. There are many different ways to set up the stage, depending on what kind of AI system you're developing. For example, if you're developing a system that's relatively near-term, something the user can imagine, you can have the user first work with you to define what kinds of tasks the AI could perform, and by defining the tasks you can start asking what questions come up. If you have users who are just not familiar with AI at all, there are a lot of HCI tools to support this kind of elicitation exercise: you can create a lo-fi prototype, a scenario, or a user story, and those are places where you can start the elicitation. On the flip side, you need to consider what your users' current understanding is and what is a good starting point for getting those questions out of what they can think of. I hope that answers the question.

Okay, yeah — a longer conversation is probably necessary there. I'm going to jump to a related topic. Taylor asks: it sounds like just because an AI is explainable doesn't mean it's worthy of a user's trust. What are other ways to help a non-AI expert evaluate whether an AI is trustworthy?

That's a great question.
There is a lot to unpack when we talk about trust, and different ways to approach it. One important point I often make is that we need to decouple the trustworthiness of the AI system from trust as the user's judgment. We shouldn't talk about trust in a vacuum: we need to first think about whether the AI system itself is trustworthy, and what dimensions guarantee its trustworthiness. Then you should think about what design features actually calibrate that perception — if it's trustworthy, users will perceive it as trustworthy; if it's less trustworthy, users will be able to identify that it's not. There is also a literature on how we judge trust — trust of the web, trust of digital media in general — that again draws on this dual-process view. One way to think about it: if we want to encourage people to perform more analytical judgments, what are ways to scaffold that? You can provide people with a checklist or prompts to encourage inspection and to guide them through how to judge whether the system is trustworthy. On the other hand, we can have features that resort to invoking heuristics — and again, warranted heuristics. On a website you may have things like other people's thumbs-up, and that will evoke certain heuristics, a social or bandwagon heuristic, that help you make that kind of judgment. So for novices — again, if they're not able to perform the more analytical judgment — maybe we need to resort to these kinds of heuristic features, and also think responsibly about which design features can support warranted trust. So that's how I think about this space.

That's worth a long discussion. I love these questions because I think they get at the heart of the issue, which is that there's still a lot of research to do on why explanations work and don't work. My guess is some people are going to have to run to class now, so I'm going to shift gears a little and ask maybe one more question. Tony asks: what are your thoughts about applying XAI methods to massive neural networks such as GPT-3, which are broadly and generally capable? Are these systems explainable, and is there an effective limit to the complexity of systems that can be explained?

That's an interesting question. I don't know if I have an immediate answer — this is of course an emerging research area. But my belief is always: think about how we explain things to other people. We leverage very diverse communication devices. We don't necessarily give the whole causal chain of how we came to a decision; sometimes we draw an analogy, sometimes we may just give the determining causes. So I don't think of explanation as a uniform feature where we're simply going to make everything about the model transparent. We can draw on other forms of communication — a variety: example-based explanation, going into the data, or other kinds of forms. If we take this broader view, I do believe there are many approaches, many communication devices, that can help people get a better understanding of the neural network, regardless of how complex it is. So yeah, my personal view is that I'm pretty optimistic.

Good to know. I'm going to ask one more question. This is an easy one:
are you able to make your slides available?

Yes, I will send them to Mark.

Okay — send them to me, and then if anyone wants a copy of the slides, just get in touch with me and we'll make that happen. I also want to point folks to the fact that, if you rewind the video, Vera pointed to her CHI tutorial, and I'm sure a lot of these resources are also pointed to there. All right, Vera, this was amazing. Thank you again for coming and sharing this fabulous research with us. I wish you could be here in person; I understand the circumstances are such that that cannot happen. I hope everyone will reach out to you — reach out to me if you want the slides, but reach out to Vera if you're interested in figuring out how to get involved with these processes. And should I mention that FATE is hiring interns?

Oh, yes — the Montreal lab is hiring.

Okay, great. Now you're going to get lots and lots of applications. Thank you again very much, it was great talking to you, and I look forward to talking to you again. Thanks for coming.