{"id":1793,"date":"2024-08-03T10:24:40","date_gmt":"2024-08-03T10:24:40","guid":{"rendered":"http:\/\/tfhgjvczvi.cfolks.pl\/copernicus-festival-2025-mystery\/"},"modified":"2025-04-10T16:16:31","modified_gmt":"2025-04-10T16:16:31","slug":"automated_conversation","status":"publish","type":"post","link":"https:\/\/copernicusfestival.com\/en\/automated_conversation\/","title":{"rendered":"Automated Conversation"},"content":{"rendered":"\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Although computers can already do almost everything, they still lack one fundamental ability: to talk to us like one human being talks to another. And we, being only human, are getting increasingly impatient about it. <\/p>\n<\/blockquote>\n\n<p>It&#8217;s the late 1940s. \u201cComputing machines\u201d are still just cabinets solving mathematical problems, which require a painstaking process of problem-formulation by teams of engineers. It takes a mind with a wildly vivid imagination to see in these cabinets any intellectual potential and to pose the question of whether such a machine could one day think\u2014or even\u2026 talk. One such mind belonged to Alan Turing, one of the fathers of the computer.    <\/p>\n\n<p>In 1950, his article titled \u201cComputing Machinery and Intelligence\u201d appeared in the philosophy journal Mind. Here\u2019s the first sentence: \u201cI propose to consider the question: can machines think?\u201d The problem with this question, Turing immediately notes, is that we don\u2019t actually know what thinking is\u2014so any honest attempt to answer it would quickly become a philosophical treatise. Moreover, thinking is something that happens \u201cinternally,\u201d which makes it hard to determine whether anyone is really thinking. It\u2019s the old philosophical puzzle: do minds other than mine exist? Or am I the only conscious being in the universe, surrounded by cleverly jaw-snapping meat puppets? 
(Today, we often call such entities\u2014those who appear fully human yet lack consciousness or a sense of self\u2014\u201cphilosophical zombies.\u201d)<\/p>\n\n<p>Turing chose to nip such deliberations in the bud and focus instead on jaw-snapping: since we judge the &#8220;mindfulness&#8221; of others by their behavior\u2014especially whether they can speak to us intelligently and sensibly\u2014why not use the same criterion for thinking machines? Thus, the <em>imitation game<\/em> was born. Imagine a judge sits at a device for remote text communication\u2014Turing suggested a teletype\u2014and chats for five minutes with someone (or something) in a neighboring room. The judge then has to decide whether the typed words came from a human or a machine. If a program can reliably fool such judges, we\u2019d say today that it has \u201cpassed the Turing Test.\u201d Here\u2019s the relevant quote from the article: <\/p>\n\n<p>\u201cI believe that in about fifty years\u2019 time it will be possible to program computers, with a storage capacity of about 10\u2079 [bits], to play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning. The original question, \u2018Can machines think?\u2019 I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.\u201d<\/p>\n\n<h2 class=\"wp-block-heading\">No Word About Gold<\/h2>\n\n<p>In 2000, exactly 50 years had passed since Turing\u2019s article\u2014was his prediction fulfilled? Certainly, we\u2019re more comfortable now speaking of \u201cthinking machines,\u201d though usually still with a wink. 
But if we focus purely on the possibility of talking to a computer as if to a human, we must admit: we\u2019re still nowhere near that goal. Even in 2020, though we\u2019re routinely pestered by automated phone sales assistants and install smartphone apps that respond (sometimes correctly) to simple commands in human voices, it\u2019s still not possible to have even a short, intelligent conversation on any topic with a computer program.<\/p>\n\n<p>Of course, there\u2019s no shortage of programmers willing to take up the challenge. Already in 1966, the first \u201cconversational program\u201d was described\u2014Eliza by Joseph Weizenbaum (see side box). In 1991, the first international chatbot competition was held, funded by inventor and visionary Hugh Loebner. His $100,000 prize still awaits a winner, despite yearly entries for nearly three decades. Previous \u201cwinners\u201d of the prize\u2014most notably the bots Alice, Rose, and Mitsuku, which dominated the rankings for two decades\u2014received only \u201cconsolation\u201d bronze medals. <\/p>\n\n<p>Artificial intelligence (AI) has led to genuinely astonishing achievements: there are programs today that can recognize human speech and emotions; that can beat us in chess and game shows like Jeopardy!; that can paint in a given style (say, <em>\u00e0 la<\/em> Witkacy) or compose decent film music&#8230; So is a five-minute conversation really such an unreasonable demand? The results of the Loebner competition are damning: even the best bots fail to convince the judges. So where lies the problem? <\/p>\n\n<h2 class=\"wp-block-heading\">Inside the Bot\u2019s Brain<\/h2>\n\n<p>Let\u2019s peek inside the \u201cbrain\u201d of A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), a chatbot created by Richard Wallace in 1995. At the turn of the century, Alice was a multiple finalist and three-time \u201cwinner\u201d of the Loebner Prize. 
Wallace\u2019s code is public, and many later winners were simple or slightly modified clones of it. <\/p>\n\n<p>Anyone expecting a chatbot\u2019s brain to contain even a rudimentary \u201cmap\u201d of the human mind, some broader vision of an \u201celectronic brain,\u201d or any sign that the words it generates result from genuine intelligent processing will be sorely disappointed. Every chatbot described here, whether its code is available or not, operates on the basic principle of stimulus-response. Roughly speaking, the user\u2019s input is scanned for keywords stored in a database. These might be words like \u201cbicycle\u201d or \u201cclimbing,\u201d which lead to specific topic threads, or structures like \u201cI regret that [X],\u201d which can be cleverly bounced back (\u201cWhy do you regret that [X]?\u201d) without understanding what [X] actually is. One matching response is then randomly chosen\u2014and that\u2019s it. <\/p>\n\n<p>Some smart tricks get layered on top\u2014like grouping keywords into topics and tracking the current conversation topic\u2014so a bot discussing bicycles might later ask, \u201cHave you gone on any cool trips recently?\u201d Bot creators also invest effort into clever, surprising, or quirky responses, knowing these impress judges most. Alice-style bot databases contain thousands (if not tens of thousands) of entries to have a witty comeback ready, in case someone mentions a Monty Python sketch or Trump\u2019s hairpiece.<\/p>\n\n<p>It only takes a cursory interest in chatbots to realize a rather obvious truth: Turing\u2019s proposal\u2014to ignore the mind and focus on mimicking its behavior\u2014has caught on far too well. Bot creators have never aimed to build a thinking being, only a clever illusionist. Wallace, in his 2009 paper about A.L.I.C.E., openly admits his main inspiration comes from transcripts of millions of user interactions with the bot. 
These are analyzed for common topics, drop-off points, and frustration triggers. After such a session, the team adds a few dozen new rules\u2014and the bot can now hold users&#8217; attention a bit longer. <\/p>\n\n<h2 class=\"wp-block-heading\">Saving Face<\/h2>\n\n<p>In 1980, American philosopher John Searle described the \u201cChinese Room,\u201d one of the most famous thought experiments in the philosophy of mind. The \u201cChinese Room\u201d is a space where a person who doesn\u2019t understand Chinese sits with access to a massive library of conversational rules in Chinese: for a certain sequence of symbols, respond with another specific sequence. When someone slips a note with a question written in Chinese under the door, the person inside simply looks up the right rule, copies down the correct response, and slides the note back out. The person on the outside might think there\u2019s someone inside who understands Chinese.<br\/>And that, in a nutshell, is a brilliant metaphor for a chatbot.<br\/>In fact, Richard Wallace said so explicitly in a 2009 article\u2014describing the source code of A.L.I.C.E. as something like a manual for operating a Chinese Room.<br\/>The million-dollar question is whether it&#8217;s even theoretically possible to build a chatbot this way. <\/p>\n\n<p>Let\u2019s start with a basic clarification: the \u201coperator\u2019s manual\u201d would need to contain separate instructions not only for every single utterance, but for every possible conversation. For example, the phrase \u201cAnd what did Asia say to that?\u201d should trigger a different reaction depending on whether \u201cthat\u201d refers to an invitation for coffee or a threat on a dark street with a knife pulled from under a coat. The total lack of sensitivity to context and conversational history is, in fact, the defining trait of nearly all current chatbot systems\u2014even last year\u2019s Loebner Prize \u201cwinner,\u201d the bot Mitsuku. 
Sure, they can respond deftly to a statement like \u201cI love horror movies,\u201d but the moment you try to tell them any kind of story, disaster strikes.<br\/>Take this sequence of sentences: Asia had been flirting with Micha\u0142 online for a long time. He told her he owned a big company, ran major negotiations from Tokyo to New York, and had a fleet of Mercedeses. And finally, he came to pick her up\u2014on a bicycle. A chatbot (if it even survives that long without interrupting\u2014after all, it\u2019s trained to throw in a clever comment after every sentence, reacting to keywords like \u201cinternet\u201d or \u201cTokyo\u201d) will respond only to the final line, with one of its pre-written bicycle-related phrases: \u201cI like riding bikes, but I really need to get a proper mudguard.\u201d And, well\u2014that\u2019s a pretty poor commentary on a tale of romantic and economic disillusionment.<\/p>\n\n<p>A hypothetical chatbot that could intelligently respond in Chinese Room fashion would need access to every imaginable sequence of symbols that make up a coherent conversation. At this point, we veer straight into surrealist thought experiment territory\u2014straight out of Borges\u2019 labyrinthine library that contains every possible book\u2014not anything a programmer could actually build. <\/p>\n\n<p>If we try to simplify things just to finish the project before the heat death of the universe, we immediately fall into the traps known to every real-world bot creator: synonym tables, topic categories, keyword memory\u2014or, something I haven\u2019t yet mentioned\u2014long lists of face-saving phrases, used when the program can\u2019t match any rule to the previous statement. This last trick is familiar to anyone who\u2019s ever zoned out in a conversation: in such moments, it&#8217;s best to say something like \u201cWell, you know how it is\u201d or \u201cIt happens.\u201d But you won\u2019t get very far relying on that. 
<\/p>\n\n<h2 class=\"wp-block-heading\">Bad Company<\/h2>\n\n<p>Readers familiar with AI development are probably getting impatient by now. Because everything I\u2019ve described so far\u2014this isn\u2019t real artificial intelligence. At least not in the way we use that term in the 21st century.<br\/>And rightly so. Not every algorithm\u2014no matter how clever at faking intelligence\u2014is truly \u201cAI.\u201d These days, the term is mostly reserved for systems that can learn by themselves: trying different solutions, waiting for feedback\u2014you\u2019re doing well, you\u2019re failing\u2014and adjusting accordingly. That\u2019s how the best chess engines are built now: they aren\u2019t spoon-fed strategies or tricks\u2014they discover them on their own, meticulously tracking which experiments succeed and which ones flop.   <\/p>\n\n<p>A mo\u017ce da\u0142oby si\u0119 wypu\u015bci\u0107 do internetu pro\u015bciutkiego bota konwersacyjnego, zdolnego do ci\u0105g\u0142ego korygowania i komplikowania swojego dzia\u0142ania w kontakcie z rzeczywistymi ludzkimi rozm\u00f3wcami? C\u00f3\u017c, okazuje si\u0119, \u017ce by\u0142a taka pr\u00f3ba \u2013 i zako\u0144czy\u0142a si\u0119 spektakularn\u0105 kl\u0119sk\u0105. 23 marca 2016 r. Microsoft <br\/>So, could we perhaps release a super-simple chatbot onto the internet, one that could gradually refine and complicate its behavior through real conversations with humans? Well, turns out someone already tried that\u2014and it ended in spectacular disaster. On March 23, 2016, Microsoft launched a conversational bot named Tay on their Twitter account\u2014designed to learn from its interactions with users. The result? Just 16 hours later, Tay was pulled offline, and Microsoft issued a mass apology to the internet. Why? 
Because amused users quickly discovered Tay was indeed learning human language and social norms from their own words\u2014so they decided to \u201cteach\u201d their eager, naive little conversation partner not just silly memes and edgelord jokes, but also good old-fashioned xenophobia, racism, and a rich variety of offensive slurs. Within hours, Tay was gleefully tweeting things like: \u201cHitler was right about the Jews.\u201d        <\/p>\n\n<p>So yes, it seems that the free-learning approach\u2014so successful in other areas of AI\u2014must be used very cautiously when it comes to chatbots. On the other hand, the method of \u201cholding the algorithm\u2019s hand\u201d and carefully scripting every conversational pattern has clearly run its course. It&#8217;s hard to say when\u2014or if\u2014we\u2019ll ever build a program capable of talking with us \u201clike one human being to another.\u201d But let\u2019s not kid ourselves: the need is there, the market is there, and the money is definitely there. We\u2019d really, really like to have a good chat with a computer.    <\/p>\n\n<p>\u0141ukasz Lam\u017ca<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Although computers can already do almost everything, they still lack one fundamental ability: to talk to us like one human being talks to another. 
And we, being only human, are [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":4279,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"inline_featured_image":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1793","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-bez-kategorii"],"acf":[],"_links":{"self":[{"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/posts\/1793","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/comments?post=1793"}],"version-history":[{"count":3,"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/posts\/1793\/revisions"}],"predecessor-version":[{"id":4287,"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/posts\/1793\/revisions\/4287"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/media\/4279"}],"wp:attachment":[{"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/media?parent=1793"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/categories?post=1793"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/copernicusfestival.com\/en\/wp-json\/wp\/v2\/tags?post=1793"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}