# Four AI trends to watch in 2024

*AI was a hot topic at this week's annual meeting of the World Economic Forum in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket via Getty Images)*

By [Jovana Jankovic](/news/authors-reporters/jovana-jankovic) and [Daniel Browne](/news/authors-reporters/daniel-browne)
January 19, 2024
hreflang="en">Faculty of Law</a></div> <div class="field__item"><a href="/news/tags/global" hreflang="en">Global</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> <div class="field__item"><a href="/news/tags/research-innovation" hreflang="en">Research &amp; Innovation</a></div> <div class="field__item"><a href="/news/tags/rotman-school-management" hreflang="en">Rotman School of Management</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">管家婆免费开奖大全 Mississauga</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">鈥淭he advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions鈥</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>As artificial intelligence continues to develop rapidly, the world is watching with excitement and apprehension 鈥 as evidenced by the <a href="https://www.washingtonpost.com/technology/2024/01/18/davos-ai-world-economic-forum/">AI buzz in Davos this week at the World Economic Forum鈥檚 annual meeting</a>.</p> <p>管家婆免费开奖大全 researchers are using AI to <a href="/news/u-t-receives-200-million-grant-support-acceleration-consortium-s-self-driving-labs-research">advance scientific discovery</a> and <a href="https://tcairem.utoronto.ca/">improve health-care delivery</a>, <a href="/news/who-owns-your-face-scholars-u-t-s-schwartz-reisman-institute-explore-tech-s-thorniest-questions">exploring how to mitigate potential harms</a> and finding new ways to ensure the technology <a href="/news/achieving-alignment-how-u-t-researchers-are-working-keep-ai-track">aligns with human values</a>.&nbsp;</p> <p>鈥淭he advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions,鈥 says <strong>Monique Crichlow</strong>, executive director of the Schwartz Reisman Institute for Technology and Society (SRI). 鈥淩esearchers at SRI and across the university are tackling how to build and regulate AI systems for safer outcomes, as well as the social impacts of these powerful technologies.鈥</p> <p>鈥淔rom health-care delivery to accessible financial and legal services, AI has the potential to benefit society in many ways and tackle inequality around the world. But we have real work to do in 2024 to ensure that happens safely.鈥</p> <p>As AI continues to reshape industries and challenge many aspects of society, here are four emerging themes 管家婆免费开奖大全 researchers are keeping their eyes on in 2024:</p> <hr> <h3>1. AI regulation is on its way</h3> <figure role="group" class="caption caption-drupal-media align-center"> <div> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/styles/scale_image_750_width_/public/2024-01/GettyImages-1754158756-crop.jpg?itok=IvlN2HdV" width="750" height="500" alt="&quot;&quot;" class="image-style-scale-image-750-width-"> </div> </div> <figcaption><em>U.S. Vice President Kamala Harris applauds as U.S. President Joe Biden signs an executive order on the safe, secure, and trustworthy development and use of artificial intelligence on Oct. 
*U.S. Vice President Kamala Harris applauds as U.S. President Joe Biden signs an executive order on the safe, secure and trustworthy development and use of artificial intelligence on Oct. 30, 2023 (photo by Brendan Smialowski/AFP/Getty Images)*

As a technology with a wide range of potential applications, AI stands to affect all aspects of society — and regulators around the world are scrambling to catch up.

Set to pass later this year, the [*Artificial Intelligence and Data Act*](https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act) (AIDA) is the Canadian government's first attempt to comprehensively regulate AI. Similar attempts by [other governments](https://srinstitute.utoronto.ca/news/global-ai-safety-and-governance) include the European Union's [*AI Act*](https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence) and the [*Algorithmic Accountability Act*](https://www.congress.gov/bill/117th-congress/house-bill/6580/text) in the United States.

But [there is still much to be done](https://srinstitute.utoronto.ca/news/ai-regulation-in-canada-is-moving-forward-heres-what-needs-to-come-next).

In the coming year, legislators and policymakers in Canada will tackle many questions, including what counts as fair use when it comes to training data and what privacy means in the 21st century. Is it illegal for companies to train AI systems on copyrighted data, as [a recent lawsuit](https://www.cbc.ca/news/business/new-york-times-openai-lawsuit-copyright-1.7069701) from the *New York Times* alleges? Who owns the rights to AI-generated artworks? Will Canada's new privacy bill sufficiently [protect citizens' biometric data](https://srinstitute.utoronto.ca/news/to-guarantee-our-rights-canadas-privacy-legislation-must-protect-our-biometric-data)?

On top of this, AI's entry into other sectors and industries will increasingly affect and transform how we regulate other products and services. As **Gillian Hadfield**, a professor in the Faculty of Law and the Schwartz Reisman Chair in Technology and Society, policy researcher **Jamie Sandhu** and Faculty of Law doctoral candidate **Noam Kolt** explore in [a recent policy brief for CIFAR](https://srinstitute.utoronto.ca/news/cifar-ai-insights-policy-regulatory-transformation) (formerly the Canadian Institute for Advanced Research), a focus on regulating AI through its harms and risks alone "obscures the bigger picture" of how these systems will transform other industries and society as a whole. For example: are current car safety regulations adequate for self-driving vehicles powered by AI?
### 2. The use of generative AI will continue to stir up controversy

*Microsoft Bing Image Creator is displayed on a smartphone (photo by Jonathan Raa/NurPhoto/Getty Images)*

From AI-generated text and pictures to videos and music, use of generative AI has exploded over the past year — and so have questions surrounding issues such as academic integrity, misinformation and the displacement of creative workers.

In the classroom, teachers are seeking to understand how [education is evolving in the age of machine learning](https://magazine.utoronto.ca/campus/education-is-evolving-in-the-age-of-ai/). Instructors will need to find new ways to embrace these tools — or perhaps opt to reject them altogether — and students will continue to discover new ways to learn alongside these systems.

At the same time, AI systems [created more than 15 billion images last year](https://journal.everypixel.com/ai-image-statistics) by some counts — more than the entire 150-year history of photography. Online content will increasingly lack human authorship, and some researchers have proposed that by 2026 [as much as 90 per cent of internet text could be generated by AI](https://thelivinglib.org/experts-90-of-online-content-will-be-ai-generated-by-2026/). Risks around disinformation will increase, and new methods to label content as trustworthy will be essential.

Many workers — including writers, translators, illustrators and designers — are worried about job losses. But a tidal wave of machine-generated text could also harm AI development itself. In a recent study, **Nicolas Papernot**, an assistant professor in the Edward S. Rogers Sr. department of electrical and computer engineering in the Faculty of Applied Science & Engineering and an SRI faculty affiliate, and his co-authors found that [training AI on machine-generated text led to the system becoming less reliable](/news/training-ai-machine-generated-text-could-lead-model-collapse-researchers-warn) and subject to "model collapse."
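The dynamic behind model collapse can be illustrated with a toy simulation. The sketch below is our own minimal analogue, not the method of the Papernot study (which concerns large learned models, not Gaussians): each generation fits a simple model to the previous generation's output, then the next generation trains only on samples drawn from that fit.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50  # training examples per generation; kept small so estimation error is visible
data = rng.normal(0.0, 1.0, N)  # generation 0: "human-written" data from the true distribution

for gen in range(1, 201):
    # "Train" a model on the current data: here, fit a Gaussian by estimating its parameters.
    mu, sigma = data.mean(), data.std(ddof=1)
    # The next generation sees only samples from that fitted model, never the original data.
    data = rng.normal(mu, sigma, N)
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean = {mu:+.2f}, std = {sigma:.2f}")
```

Because each generation estimates its parameters from a finite sample of the last one, estimation error compounds: the fitted spread tends to drift downward and the tails of the original distribution are lost first, a toy version of the degradation the researchers describe.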
### 3. Public perception and trust of AI is shifting

*A person walks past a temporary AI stall in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket/Getty Images)*

Can we trust AI? Is our data secure?

Emerging research on public trust of AI is shedding light on changing preferences, desires and viewpoints. **Peter Loewen** — the director of the [Munk School of Global Affairs & Public Policy](https://munkschool.utoronto.ca/), SRI's associate director and the director of the Munk School's [Policy, Elections & Representation Lab](https://munkschool.utoronto.ca/pearl) (PEARL) — is developing an index measuring public perceptions of and attitudes towards AI technologies.

Loewen's team conducted a representative survey of more than 23,000 people across 21 countries, examining attitudes towards regulation, AI development, perceived personal and societal economic impacts, specific emerging technologies such as ChatGPT, and the use of AI by government. They plan to release their results soon.

Meanwhile, 2024 is being called ["the biggest election year in history,"](https://www.forbes.com/sites/siladityaray/2024/01/03/2024-is-the-biggest-election-year-in-history-here-are-the-countries-going-to-the-polls-this-year/?sh=6c930f8265f9) with more than 50 countries headed to the polls, and [experts expect interference and misinformation to hit an all-time high](https://foreignpolicy.com/2024/01/03/2024-elections-ai-tech-social-media-disinformation/) thanks to AI. How will citizens know which information, candidates and policies to trust?

In response, some researchers are investigating the foundations of trust itself. **Beth Coleman**, an associate professor at U of T Mississauga's Institute of Communication, Culture, Information and Technology and the Faculty of Information, and an SRI research lead, is leading [an interdisciplinary working group](https://srinstitute.utoronto.ca/news/call-for-applicants-trust-working-group) on the role of trust in interactions between humans and AI systems, examining how trust is conceptualized, earned and maintained in our interactions with the pivotal technology of our time.
### 4. AI will increasingly transform labour, markets and industries

*A protester in London holds a placard during a rally in Leicester Square (photo by Vuk Valcic/SOPA Images/LightRocket via Getty Images)*

**Kristina McElheran**, an assistant professor in the Rotman School of Management and an SRI researcher, and her collaborators may have recently found [a gap between AI buzz in the workplace and the businesses actually using it](https://www.nbcnews.com/data-graphics/wide-gap-ais-hype-use-business-rcna127210) — but there remains a real possibility that labour, markets and industries will undergo massive transformation.

U of T researchers who have published books on how AI will transform industry include Rotman faculty members **Ajay Agrawal**, **Joshua Gans** and **Avi Goldfarb**, whose [*Power and Prediction: The Disruptive Economics of Artificial Intelligence*](https://www.predictionmachines.ai/power-prediction) argues that "old ways of doing things will be upended" as AI predictions improve; and the Faculty of Law's **Benjamin Alarie** and **Abdi Aidid**, who propose in [*The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better*](https://utorontopress.com/9781487529420/the-legal-singularity/) that AI will improve legal services by increasing ease of access and fairness for individuals.

In 2024, institutions — public and private — will create more guidelines and rules around how AI systems can and cannot be used in their operations. Disruptors will challenge the hierarchy of the current marketplace.

The coming year promises to be transformative for AI as it continues to find new applications across society.
Experts and citizens must stay alert to the changes AI will bring and continue to advocate that ethical and responsible practices guide the development of this powerful technology.

---

# Power and prediction: U of T's Avi Goldfarb on the disruptive economics of artificial intelligence

*Avi Goldfarb, a professor at the Rotman School of Management and research lead at the Schwartz Reisman Institute for Technology and Society, says the AI revolution is well underway — but that system-level change takes time (supplied images)*

By [Daniel Browne](/news/authors-reporters/daniel-browne)
January 20, 2023
hreflang="en">Health</a></div> <div class="field__item"><a href="/news/tags/rotman-school-management" hreflang="en">Rotman School of Management</a></div> <div class="field__item"><a href="/news/tags/startups" hreflang="en">Startups</a></div> <div class="field__item"><a href="/news/tags/technology" hreflang="en">Technology</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>In the new book <a href="https://store.hbr.org/product/power-and-prediction-the-disruptive-economics-of-artificial-intelligence/10580?sku=10580E-KND-ENG"><em>Power and Prediction: The Disruptive Economics of Artificial Intelligence</em></a>, co-author&nbsp;<strong><a href="https://www.avigoldfarb.com/">Avi Goldfarb</a></strong>&nbsp;argues we live in the 鈥淏etween Times鈥: after discovering the potential of AI, but before its widespread adoption.</p> <p>Delays in implementation are an essential part of any technology with the power to truly reshape society, says Goldfarb, a professor of marketing and the Rotman Chair in Artificial Intelligence and Healthcare at the 管家婆免费开奖大全's Rotman School of Management and research lead at the <a href="/news/tags/schwartz-reisman-institute-technology-and-society">Schwartz Reisman Institute for Technology and Society</a>.</p> <p>He makes the case for how AI innovation will evolve in <em>Power and Prediction</em>, his latest&nbsp;book co-authored with fellow Rotman professors <strong>Ajay Agrawal</strong> and <strong>Joshua Gans</strong>. The trio, who also wrote 2018鈥檚 <a href="https://store.hbr.org/product/prediction-machines-updated-and-expanded-the-simple-economics-of-artificial-intelligence/10598"><em>Prediction Machines: The Simple Economics of Artificial Intelligence</em></a>, are the&nbsp;co-founders of the&nbsp;<a href="https://creativedestructionlab.com/">Creative Destruction Lab</a>, a non-profit organization that helps science- and technology-based startups scale.</p> <p>Goldfarb will give a talk at the Rotman School of Management <a href="https://srinstitute.utoronto.ca/events-archive/seminar-2023-avi-goldfarb">on Jan. 25</a> as part of the SRI Seminar Series. He&nbsp;spoke with the Schwartz Reisman Institute to discuss how the evolution of AI innovation will require systems-level changes to the ways that organizations make decisions.</p> <p><em>(The&nbsp;interview has been condensed for length and clarity.)</em></p> <hr> <p><strong>What changed in your understanding of the landscape of AI innovation since your last book?</strong></p> <p>We wrote <em>Prediction Machines</em> thinking that a revolution was about to happen, and we saw that revolution happening at a handful of companies like Google, Amazon and others. But when it came to most businesses we interacted with, by 2021 we started to feel a sense of disappointment. Yes, there was all this potential, but it hadn鈥檛 affected their bottom line yet 鈥 the uses that they鈥檇 found had been incremental, rather than transformational. And that got us trying to understand what went wrong.</p> <p>One potential thing that could have gone wrong, of course, was that AI wasn鈥檛 as exciting as we thought. Another was that the technology was potentially as big a deal as the major revolutions of the past 200 years 鈥 innovations like steam, electricity, computing 鈥 and the issue was system-level implementation. 
For every major technological innovation, it took a long time to figure out how to make that change affect society at scale.

The core idea of *Power and Prediction* is that AI is an exciting technology — but it's going to take time to see its effects, because a lot of complementary innovation has to happen as well. Now, some might respond that's not very helpful, because we don't want to wait. And part of our agenda in the book is to accelerate the timeline of this innovation from 40 years to 10, or even less. To get there, we then need to think through what this innovation is going to look like. We can't just say it's going to take time — that's not constructive.

**What sort of changes are needed for organizations to harness AI's full potential?**

Here, we lean on three key ideas. The first idea is that AI today is not artificial general intelligence (AGI) — it's prediction technology. The second is that a prediction is useful because it helps you make decisions. A prediction without a decision is useless. So, what AI really does is allow you to unbundle the prediction from the rest of the decision, and that can lead to all sorts of transformation. Finally, the third key idea is that decisions don't happen in isolation.

What prediction machines do is allow you to change who makes decisions and when those decisions are made. There are all sorts of examples of what seems like an automated decision, but what it actually does is take some human's decision — typically at headquarters — and scale it. For organizations to succeed, they require a whole bunch of people working in concert. It's not about one decision — it's about decisions working together.

One example is health care — at the emergency department, there is somebody on triage, who gives a prediction about the severity of what's going on. They might send a patient immediately for tests or ask them to wait. Right now, AIs are used in triage at SickKids in Toronto and other hospitals, and they are making it more effective. But to really take advantage of the prediction, they need to coordinate with the next step. If triage is sending people for a particular test more frequently, then there need to be other decisions made about staffing for those tests, and where to offer them. And, if your predictions are good enough, there's an even different decision to be made — maybe you don't even need the tests. If your prediction that somebody's having a heart attack is good enough, you don't need to send them for that extra test and waste that time or money. Instead, you'll send them directly to treatment, and that requires coordination between what's happening upstream on the triage side and what's happening downstream in terms of the testing or treatment side.
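Goldfarb's "unbundling" of prediction from decision can be made concrete with a small sketch. Everything below is hypothetical and invented for illustration (the threshold, costs and action names come from neither the interview nor any clinical protocol): the model contributes only a probability, while a separate decision layer encodes the costs the organization cares about.

```python
def triage_decision(p_heart_attack: float) -> str:
    """Turn a model's prediction into an action by weighing costs.

    The prediction (a probability) is unbundled from the decision rule,
    which encodes judgment about the cost of errors. All numbers are
    illustrative only, not clinical guidance.
    """
    COST_MISSED_CASE = 200.0  # hypothetical cost of sending a true emergency to the queue
    COST_EXTRA_TEST = 1.0     # hypothetical cost of one unnecessary test

    if p_heart_attack >= 0.95:
        # The prediction is good enough to skip the confirmatory test entirely.
        return "send directly to treatment"
    if p_heart_attack * COST_MISSED_CASE > (1 - p_heart_attack) * COST_EXTRA_TEST:
        # Expected cost of skipping the test exceeds the cost of running it.
        return "order the test now"
    return "routine queue"

for p in (0.99, 0.30, 0.001):
    print(f"p = {p:.3f}: {triage_decision(p)}")
```

Note where the judgment lives: deciding what COST_MISSED_CASE should be, and who is allowed to change it, is a decision-layer question, and coordinating the staffing that follows from each action is exactly the system-level work the book argues takes time.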
*AI is as exciting a technology as electricity and computing, but it will take time to see its effects, Avi Goldfarb says.*

**Will certain sectors have greater ease in adopting system-level changes than others?**

There is a real opportunity here for startups, because when building a new system from scratch, it's often easier to start with nothing. You don't have to convince people to come along with your changes, so it becomes a less political process — at least within your organization. If you're trying to change a huge established company or organization, it's going to be harder.

I'm very excited about the potential for AI and health care, but health care is complicated; there are so many different decision-makers. There are the patients, the payers — sometimes government, sometimes insurance companies, sometimes a combination of the above — and then there are doctors, who have certain interests, medical administrators who might have different interests, and nurses.

AI has the potential to supercharge nurses, because a key distinction between a doctor and a nurse in terms of training is diagnosis, which is a prediction problem. If AI is helping with diagnosis, that has the potential to make nurses more central to how we structure the system. But that's going to require all sorts of changes, and we have to get used to that as patients. And so, while I think the 30-year vision for what health care could look like is extraordinary, the five-year timeline is really, really hard.

**What are some of the other important barriers to AI adoption?**

A lot of the challenges to AI adoption come from ambiguity about what's allowed or not in terms of regulation. In health-care contexts, we are seeing lots of people trying to identify incremental point solutions that don't require regulatory approval. We may have an AI that can replace a human in some medical process, but to do it is going to be a 10-year, multibillion-dollar process to get approval — so they'll implement it in an app that people can use at home with a warning that it's not real medical advice.

The regulatory resistance to change, and the ambiguity of what's allowed, is a real barrier. As we start thinking about system changes, there is an important role for government through legislation and regulation, as well as through its coordinating function as the country's biggest buyer of stuff, to help push us toward new AI-based systems.

There are also real concerns about data and bias, especially in the short term. However, in the long run, I'm very optimistic about AI's ability to help with discrimination and bias. While a lot of the resistance to AI implementation right now is coming from people who are worried about [people who will be negatively impacted by] bias [in the data], I think that pretty soon this will flip around.

There's a story we discuss in the book, where Major League Baseball brought in a machine that could say whether a pitch was a strike or a ball, and the people who resisted it turned out to be the superstars. Why? Well, the best hitters tended to be favoured by umpires and face smaller strike zones, and the best pitchers also tended to be favoured and had bigger strike zones. The superstars benefited from this human bias, and when they brought in a fairer system, the superstars got hurt. So, we should expect that people who currently benefit from bias are going to resist machine systems that can overcome it.
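A quick simulation (our own toy numbers, not MLB data) shows the mechanism he describes: if umpires apply a tighter zone to star hitters, stars enjoy a lower called-strike rate than everyone else, and a machine that applies one zone to all batters removes that advantage.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Horizontal pitch location; a pitch with |loc| <= 1.0 is truly a strike.
loc = rng.uniform(-1.5, 1.5, n)
star = rng.random(n) < 0.2  # suppose 20% of pitches are thrown to star hitters

# Hypothetical biased umpire: a tighter zone when a star is at the plate.
umpire_zone = np.where(star, 0.85, 1.0)
umpire_strike = np.abs(loc) <= umpire_zone

machine_strike = np.abs(loc) <= 1.0  # the machine applies one zone to everyone

print("called-strike rate, stars (umpire):  ", round(umpire_strike[star].mean(), 3))
print("called-strike rate, others (umpire): ", round(umpire_strike[~star].mean(), 3))
print("called-strike rate, anyone (machine):", round(machine_strike.mean(), 3))
```

Under these invented numbers the stars' called-strike rate rises once the machine takes over, which is exactly why the beneficiaries of the old bias resisted it.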
**What do you look for to indicate where disruptions from AI innovation will occur?**

We're seeing this change already in a handful of industries tech is paying attention to, such as advertising. Advertising had a very *Mad Men* vibe until recently: there was a lot of seeming magic in terms of whether an ad worked, how to hire an agency and how the industry operated — a lot of charm and fancy dinners. That hasn't completely gone away, but advertising is largely an algorithm-based industry now. The most powerful players are big tech companies — they're no longer the historical publishers who worked on Madison Avenue. We've seen the disruption — it's happened.

Think through the mission of any industry or company. Once you understand the mission, think through all the ways that mission is compromised because of bad prediction. Once you see where the mission doesn't align with the ways in which an organization is actually operating, those are going to be the cases where either the organization is going to need to disrupt itself, or someone's going to come along and do what it does better.

### [Read the full Q&A at the Schwartz Reisman Institute for Technology and Society](https://srinstitute.utoronto.ca/news/power-and-prediction-avi-goldfarb-on-the-disruptive-economics-of-ai)

---

# U of T expert on human-centered data science — and the problem with the motto 'move fast and break things'

*Shion Guha, a faculty affiliate at the Schwartz Reisman Institute for Technology and Society, advocates incorporating human-centered design practices into data science to avoid problems like biased algorithms (photo courtesy of Guha)*

By [Daniel Browne](/news/authors-reporters/daniel-browne)
March 2, 2022
field__items"> <div class="field__item"><a href="/news/tags/institutional-strategic-initiatives" hreflang="en">Institutional Strategic Initiatives</a></div> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/faculty-information" hreflang="en">Faculty of Information</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>鈥淢ove fast and break things鈥 has become a clich茅 in entrepreneurship and computer science circles.&nbsp;&nbsp;</p> <p>But <strong>Shion Guha</strong>, an&nbsp;assistant professor at the 管家婆免费开奖大全's Faculty of Information and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society, says the motto 鈥 which was once Facebook's internal motto 鈥 is a bad fit for the technology sector since algorithms are susceptible to biases that can affect human lives.&nbsp;Instead, Guha advocates a human-centered approach to data science that prioritizes the best outcomes for people.&nbsp;</p> <p>鈥淚 believe in worlds where data-driven decision-making has positive outcomes, but I don't believe in a world where we do this uncritically,鈥 he said. 鈥淚 don't believe in a world where you just throw stuff at the wall and see what sticks, because that hasn鈥檛 worked out at all.鈥</p> <p>Guha,&nbsp;the co-author of a new textbook on human-centered data science, spoke to the Schwartz Reisman Institute鈥檚 <strong>Daniel Browne </strong>about the need for a more deliberate and&nbsp;compassionate approach to data science.</p> <hr> <p><strong>Can you tell us about your background?</strong></p> <p>My academic background is primarily in statistics and machine learning. I graduated with my PhD from Cornell in 2016, and then was an assistant professor at Marquette University for five years before joining the Faculty of Information last year. 管家婆免费开奖大全 is one of the first universities in the world to launch an academic program in human-centered data science, so I was nudged to apply.&nbsp;</p> <p>My co-authors on the book [<em>Human-Centered Data Science: An Introduction,</em> MIT Press March 2022]<i>&nbsp;</i>and I are some of the first people to have talked about the concept of human-centered data science, in a workshop at one of our main conferences in 2016. We decided to write a textbook about the field because we felt there was a missing link between what is taught in the classroom and what happens in practice. In the last few years, the field has talked a lot about algorithmic biases and unforeseen consequences of technology on society. And so, we decided that instead of writing an academic monograph, we wanted to write a practical textbook for students.&nbsp;</p> <p><strong>What does it mean for data science to be 鈥渉uman-centered,鈥 and how does this approach differ from other methodologies?</strong></p> <p>The main idea is to incorporate human-centered design practices into data science 鈥 to develop human-centered algorithms. Human-centered design is not a new thing;&nbsp;it鈥檚 something that has been talked about a lot in the fields of design, human-computer interaction and so on. 
But those fields have always been a little divorced from AI, machine learning and data science.

Now, with the advent of this tremendous growth in data science jobs came all of these criticisms around algorithmic bias, which raises the question of whether we are training students properly. Are we teaching them to be cognizant of potential critical issues down the line? Are we teaching them how to examine a system critically? Most computer scientists tend to adopt a very positivist approach. But the fact is that we need multiple approaches, and human-centered data science encourages these practices. Right now, a lot of data science is very model-centered — the conversation is always around what model can most accurately predict something. Instead, the conversation should be: "What can we do so that people have the best outcomes?" It's a slightly different conversation; the values are different.

Human-centered data science starts off by developing a critical understanding of the socio-technical system under investigation. So, whether it's Facebook developing a new recommendation system, or the federal government trying to decide on facial recognition policy, understanding the system critically is often the first step. And we've actually failed a generation of computer science and statistics students because we never trained them about any of this. I believe in worlds where data-driven decision-making has positive outcomes, but I don't believe in a world where we do this uncritically. I don't believe in a world where you just throw stuff at the wall and see what sticks, because that hasn't worked out at all.

Next, we engage in a human-centered design process, which can be understood through three different lenses. First, there's theoretical design: the model should be drawn from existing theory — what do we know about how people are interacting in a system? For instance, a lot of my work is centered around how algorithms are used to make decisions in child welfare. So, I need to ensure whatever algorithm I develop draws from the best theories about social work and child welfare.

Second, there's something called participatory design, which means inviting all the stakeholders into the process to let them interpret the model. I might not know everything about child welfare, but my models are interpreted by specialists in that area. Participatory design ensures that the people who are affected by the system make the decisions about its interpretation and design.

The third process is called speculative design, which is about thinking outside the box. Let's think about a world where this model doesn't exist, but something else exists. How do we align this model with that world? One of the best ways to describe speculative approaches is the [British TV] series *Black Mirror*, which depicts technologies and systems that could happen.

Human-centered design practices are about taking these three aspects and incorporating them in the design of algorithms. But we don't stop there, because you can't just put something into society without extensive testing; you need to do longitudinal field evaluation. And I'm not talking about six-week evaluations, which are common — I'm talking about six months to a year before putting something into practice.
So, all of this is a more critical and slowed-down design process.

**What helps you to collaborate successfully with researchers in other disciplines?**

I think one of the major impediments to collaboration between disciplines, or even sub-disciplines, is the different values people have. For instance, in my work in child welfare, the government has a set of values — to optimize between spending money and ensuring kids have positive outcomes — while the people who work in the system have different values — they want each child to have a positive outcome. When I come in as the data scientist, I'm trying to make sure the model I build reconciles these values.

My success story has been in working with child welfare services in Wisconsin. When they came to us, I cautioned them that we needed to engage with each other through ongoing conversations to make something successful. We had many stakeholders: researchers in child welfare, department heads and street-level case workers. I brought them together many times to figure out how to reconcile their values, and that was one of the hardest things that I ever did, because people talk about their objectives, but don't often talk about their values. It's a hard thing to say: OK, this is what I really believe about how the system should work.

We conducted workshops for about a year to understand what they needed, and what we eventually realized was that they were not interested in building an algorithm that predicted risk-based probabilities. They were interested in something else: how to make sense of narratives, such as how to describe the story of a child in the system.

If a new child comes into the system, how can we look back and consider how this child displays the same features as other historical case studies? What positive outcomes can we draw upon to ensure this new child gets the services they need? It's a very different and holistic process — it's not a number, it's not a classification model.

If I had just been given some data, I would have developed a risk-based system that would have ultimately yielded poor outcomes. But because we engaged in that difficult community-building process, we figured out that what they really wanted was not what they told me they wanted. And this was because of a value mismatch.

Similarly, when I go to machine learning conferences, there's a different kind of value mismatch. People are more interested in discussing the theoretical underpinnings of models. I am interested in that, but I'm also interested in telling the story of child welfare; I'm interested in pushing that boundary.
But a lot of my colleagues are not interested in that — their part of academia values optimizing quantitative models, which is fine, but then you can't claim you're doing all these big things for society if that's really what your values are.

**It's interesting to note how much initial effort is required, involving a lot of development that many wouldn't necessarily consider part of system design.**

You know, the worst slogan that I've ever heard in the technology sector, even though people seem to really like it for some reason, is "move fast and break things." Maybe for product recommendations that's fine, but you don't want to do that if you've got the lives of people on the line. You can't do that. I really think we need to slow down and be critical about these things. That doesn't mean that we don't build data-driven models. It means that we build them thoughtfully, and we recognize the various risks and potential issues down the line, and how to deal with them. Not everything can be dealt with quantitatively.

Issues around algorithmic fairness have become very popular and are the hottest field of machine learning right now. The problem is that we look at this from a very positivist, quantitative perspective, by seeking to make algorithms that are mathematically fair, so different minority groups do not have disproportionate outcomes. Well, you can prove a theorem saying that and put it into practice, but here's the problem: models are not used in isolation. If you take a mathematically fair model and put it where people are biased, the interaction between biased people and unbiased algorithms will make the outcomes biased anyway.

Human-AI interaction is really important. We can't pretend our systems are used in isolation. Most problems happen because the algorithmic decision-making process itself is poorly understood, and how people make a particular decision from the output of an AI system is something we don't yet understand well. This creates a lot of issues, yet the field of machine learning doesn't value that. The field values mathematical solutions, except it's a solution only if you view it in the context of a reductionist framework. It has nothing to do with reality.
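That interaction effect is easy to reproduce in miniature. In the hypothetical sketch below (group labels, rates and thresholds are all invented), the algorithm applies one threshold to everyone, so its flag rates are equal across groups by construction; simulated reviewers then override negative decisions more often for one group, and the disparity reappears in the final outcomes.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
group = rng.integers(0, 2, n)  # 0 = majority, 1 = minority (labels purely illustrative)
score = rng.random(n)          # model score, independent of group by construction

algo_flag = score > 0.5        # "mathematically fair" rule: one threshold for all

# Hypothetical biased reviewers: more likely to override a "no" for group 1.
override_prob = np.where(group == 1, 0.20, 0.05)
final_flag = algo_flag | ((rng.random(n) < override_prob) & ~algo_flag)

for g in (0, 1):
    print(f"group {g}: algorithm flags {algo_flag[group == g].mean():.3f}, "
          f"after human review {final_flag[group == g].mean():.3f}")
```

The model satisfies a parity criterion on its own output, yet the deployed system does not, which is the point: a fairness proof about a model in isolation says little about the socio-technical system around it.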
**What are some of the challenges around the use of algorithmic decision-making?**

My co-authors and I identify three key dimensions of algorithmic decision-making. One dimension is that decisions are mediated by the specific bureaucratic laws, policies and regulations that are inherent to that system. So, there are certain things you can do, and can't do, that are mandated by law. The second dimension is very important; we call it human discretion. For example, police may see a minor offense like jaywalking but choose to selectively ignore it because they are focused on more significant crimes. So, while the law itself is rigid, inside the confines of the law there is discretion.

The same thing happens with algorithmically mediated systems, where an algorithm gives an output, but a person might choose to ignore it. A case worker might know more about a factor that the algorithm failed to pick up on. This works the other way too, where a person might be unsure and go along with an algorithmic decision because they trust the system. So, there's a spectrum of discretion.

The third aspect is algorithmic literacy. How do people make decisions from numbers? Every system gives a separate visualization or output, and an average social worker on the ground might not have the training to interpret that data. What kinds of training are we going to give people who will implement these decisions?

Now, when we take these three components together, these are the main dimensions of how people make decisions from algorithms. Our group was the first to unpack this in the case of public services, and it has major implications for AI systems going forward. For instance, how you set up the system affects what kinds of opportunities the user has for exercising discretion. Can everyone override it? Can supervisors override it? How do we look at agreements and disagreements and keep a record of that? If I have a lot of experience and think that the algorithm's decision is wrong, I might disagree. However, I might also be afraid that if I don't agree, my supervisor will punish me.
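One concrete way to "look at agreements and disagreements and keep a record of that" is to log every recommendation next to the human decision. The following is a minimal sketch; the record fields and names are our own invention, not a schema from Guha's deployments.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    case_id: str
    model_recommendation: str  # what the algorithm suggested
    worker_decision: str       # what the case worker actually did
    rationale: str             # free text, required whenever the two differ
    when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    @property
    def overridden(self) -> bool:
        # An "override" is any disagreement between model and worker.
        return self.model_recommendation != self.worker_decision

log = [
    DecisionRecord("case-001", "flag for review", "flag for review", ""),
    DecisionRecord("case-002", "flag for review", "no action",
                   "family context the model had no data on"),
]

override_rate = sum(r.overridden for r in log) / len(log)
print(f"override rate: {override_rate:.0%}")
```

Aggregated by worker, team or case type, records like these would make the spectrum of discretion measurable without presuming in advance whether the human or the algorithm was right.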
Studying the algorithmic decision-making process has been crucial for us in setting up the next series of problems and research questions. One of the things that I'm very interested in is changes in policy. For example, my work in Wisconsin was utilized to make changes that had positive outcomes. But a critical drawback is that I haven't engaged with legal scholars or the family court system.

One of the things I like about SRI is that it brings together legal scholars and data scientists, and I'm interested in collaborating with legal scholars to think about how to write AI legislation that will affect algorithmic decision-making processes. I think it demands a radical rethinking of how laws are drafted. I don't think we can engage in the same process anymore; we need to think beyond that and engage in some speculative design.

**What is the most important thing that people need to know about data science today, and what are the challenges that lie ahead for the discipline?**

Obviously, I'm very invested in human-centered data science. I really think this process works well, and since U of T began its program, the field has expanded to other universities and is gaining momentum. I really want to bring this to the education of our professional data science students — those who are going to immediately go out into industry and start applying these principles.

Broadly, the challenges for the discipline are the problems I've alluded to, and human-centered data science responds to these issues. We should not be moving fast, we should not be breaking things — not when it comes to making decisions about people. It doesn't have to be high stakes, like child welfare. You can imagine something like Facebook or Twitter algorithms, where ostensibly you're doing recommendation systems, but that really has ramifications for democracy. There are lots of small things that have major unintended consequences down the line, even something like algorithms in the classroom to predict whether a child is doing well or not.

The other main challenge is this value mismatch problem I described. We need to teach our next generation of students to be more compassionate, to encourage them to think from other perspectives, and to center other people's values and opinions without centering their own. So how do we get better? Again, human-centered design has worked very well in other areas, and we can learn what worked well and apply it here. Why should we pretend that we have nothing to learn from other areas?