<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Copernicus Festival</title>
	<atom:link href="https://copernicusfestival.com/en/feed/" rel="self" type="application/rss+xml" />
	<link>https://copernicusfestival.com/en/</link>
	<description></description>
	<lastBuildDate>Tue, 06 May 2025 19:50:48 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://copernicusfestival.com/wp-content/uploads/2024/07/FAVICON-01.svg</url>
	<title>Copernicus Festival</title>
	<link>https://copernicusfestival.com/en/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Join the Copernicus Festival 2025 volunteer team and help us unravel this year&#8217;s mystery!</title>
		<link>https://copernicusfestival.com/en/more-than-a-metaphor-2/</link>
					<comments>https://copernicusfestival.com/en/more-than-a-metaphor-2/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Mon, 05 May 2025 16:20:00 +0000</pubDate>
				<category><![CDATA[Bez kategorii]]></category>
		<guid isPermaLink="false">https://copernicusfestival.com/?p=5151</guid>

					<description><![CDATA[What will the volunteer&#8217;s work consist of? A Copernicus Festival volunteer helps prepare and execute festival events. These are not difficult tasks, but it is through their performance that our [&#8230;]]]></description>
										<content:encoded><![CDATA[
<p>What will the volunteer&#8217;s work consist of?</p>



<p>A Copernicus Festival volunteer helps prepare and run festival events. The tasks are not difficult, but it is thanks to the volunteers who carry them out that our festival can take place at all.</p>



<p>Who are we looking for?</p>



<p>People who are available during May 19&#8211;25: responsible, communicative souls, full of positive energy.</p>



<p>What do we offer?</p>



<p>First of all, the opportunity to take part in creating one of the best-known festivals in Krakow. Together we create an atmosphere and a space for working in a group and learning new things. In addition, at the end of the festival each volunteer can receive a certificate of volunteer service. During the festival there is also an opportunity for student volunteer internships; please notify the volunteer coordinators in advance if you would like to do one. We will also provide snacks and gifts.</p>



<p>How to become a volunteer?</p>



<p>Nothing difficult! Just fill out the recruitment form and we will take care of the rest. Once we have reviewed the applications, we will select the volunteer team and inform you by email. Please note that submitting the application form does not in itself constitute acceptance as a volunteer.</p>



<p>How to contact us?</p>



<p>If you have any questions, email us at: wolontariat@copernicuscenter.edu.pl</p>






]]></content:encoded>
					
					<wfw:commentRss>https://copernicusfestival.com/en/more-than-a-metaphor-2/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>More Than a Metaphor</title>
		<link>https://copernicusfestival.com/en/more-than-a-metaphor/</link>
					<comments>https://copernicusfestival.com/en/more-than-a-metaphor/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 22 Feb 2025 12:47:28 +0000</pubDate>
				<category><![CDATA[Bez kategorii]]></category>
		<guid isPermaLink="false">https://dev.copernicusfestival.com/more-than-a-metaphor/</guid>

					<description><![CDATA[Though some may scoff and grumble, it’s impossible to deny: the human brain resembles a computer. It serves, after all, to process information. Computationalism continues to spark heated, almost ideological [&#8230;]]]></description>
										<content:encoded><![CDATA[<blockquote>
<p>Though some may scoff and grumble, it’s impossible to deny: the human brain resembles a computer. It serves, after all, to process information. </p>
</blockquote>
<p>The nervous system can be studied for many purposes — not only to uncover how cognitive or, more broadly, mental processes unfold. A toxicologist hired by conspirators might, for instance, be interested in the fastest way to poison James Bond and halt the workings of his brain (otherwise Bond might activate a special gadget that&#8230;). And while scheming to dominate the world may be more thrilling — and arguably more lucrative — here I’ll focus on the more modest aim of explaining mental processes.  </p>
<p>Since the second half of the 20th century, researchers have used computational models — in other words, computer simulations — to pursue this goal. The typical justification for this research practice is straightforward: computer simulations can accurately predict and explain cognitive processes because, in essence, the brain is a computer. To be more precise, one should rather say: the nervous system serves to process information. This view is known as computationalism.   </p>
<p>Computationalism continues to spark heated, almost ideological debate. Many of these arguments resurface in discussions of artificial intelligence. Critics often point out that the brain is a biological system, that it doesn&#8217;t resemble a typical computer loaded with software, that artificial intelligence has yet to outwit good old common sense&#8230; Philosophical daredevils — and not just any daredevils, but thinkers like the renowned American philosopher John Searle — go so far as to claim that computers don’t really exist at all, because whether something is a computer is merely a matter of rather liberal interpretation of reality.  </p>
<p>Another great, recently departed philosopher, Hilary Putnam, once said much the same — even backing his point with mathematical reasoning. A skeptic like Saul Kripke would chime in: we can&#8217;t even be sure computers are computers, i.e., that they truly carry out what the programmer intended. </p>
<p>I call these philosophers daredevils, because surely they don’t take such claims entirely seriously. Searle probably doesn’t write his texts while staring at the wall of his office, even though he once said the wall could, in principle, be described as a computer running the WordStar text editor. Kripke likely doesn’t genuinely doubt that a mobile phone differs from a book, even though the book could just as well be described as running Android software badly, failing whenever one tries to place a call on it. Surprisingly, these daredevil arguments are difficult to refute — but here, instead of wrestling with them, I’ll briefly reconstruct the reasons why computationalism remains, and will remain, unmatched in the study of how the mind works.   </p>
<h2 class="wp-block-heading">What is Cognition?</h2>
<p>It is hard to say what precisely characterizes cognitive processes — for example, in contrast to emotional processes or body temperature regulation, which the brain is also responsible for. One might say that to know anything, one must think. And thoughts must have some content. Yet this brings us to a challenging concept — content — that some researchers of cognitive processes prefer not to touch at all. What to do? Let’s lower the bar. If they won’t talk about content, they are unlikely to deny that cognitive processes must involve the reception and processing of information. Granted, the concept of information is also tricky, but it generates fewer controversies among cognitive scientists or philosophers of mind.       </p>
<p>In its simplest, structural sense, we can speak of information when something, namely the information carrier, can change — be in at least two states. These changes must be recorded by another system — only then do they make a meaningful difference. If they are recorded (by any receiver), the states of the carrier can carry information. This is the minimal concept of information: we cannot yet say what the information is about or whether it is true — but true information and information about something are also structural. Structural information is, therefore, the smallest common denominator of all more complex conceptions of information.    </p>
<p>Let’s combine both issues: cognitive systems must interact with their environment in such a way that they receive information from it if they are to know that environment. If they are to know their own states, they must also receive information about them, for example about the position of the head from the sense of balance. However, incoming information does not immediately cause changes in behavior or reasoning: whenever I look at my uncomfortable chair, I do not always reach the same conclusion (&#8220;Time to buy something comfortable!&#8221;). Sometimes I think about avoiding tripping over it, other times about where to temporarily put down a book. For that variety to be possible, information must be processed — that is, it must regularly trigger changes in other carriers of information. Only then can we speak of information processing.     </p>
<h2 class="wp-block-heading">From Mathematical Machines to Computers</h2>
<p>The term &#8220;computer&#8221; became common in Polish relatively late; its use spread in the 1980s with the rise of home computers. Before that, people spoke of mathematical machines, which should not be confused with mathematical models of how computing devices operate.  </p>
<p>A computer, then, is a device — a system with a specific organization — that performs calculations. It has parts that interact with each other — for example, permanent memory carriers. We can say that a computer performs a certain model of calculation only when its operation — everything that can be obtained from it by operating it — can be correctly predicted and explained in terms of that model.  </p>
<p>Imagine Faust is given the &#8220;Mephistopheles 2.0&#8221; computer for analysis; according to the manual, its function is &#8220;always to deny.&#8221; Faust posits a hypothesis, expressed as a formal model of computation: after entering the word &#8220;YES,&#8221; Mephistopheles will output &#8220;NO.&#8221; Indeed, even after entering &#8220;NO,&#8221; Mephistopheles still answers &#8220;NO,&#8221; so the model&#8217;s predictions hold true. It is important that Mephistopheles 2.0 is a device structured to perform such calculations — that is, it has been designed to compute this, and not something else. The result of the calculations should also be usable to control some other device (where we also count influencing the user cognitively as such use).     </p>
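<p>Faust&#8217;s procedure can be sketched in a few lines of code (a hypothetical illustration; the names and prompts are invented here): the formal model predicts a constant output, and we check the device&#8217;s observable behavior against that prediction.</p>

```python
# Hypothetical sketch: checking a formal model of computation against a device.

def mephistopheles(prompt):
    """Stand-in for the physical device, whose function is "always to deny"."""
    return "NO"

def model_prediction(prompt):
    """Faust's formal model: a constant function."""
    return "NO"

# The device "performs" the model only if every observation matches the model.
observations_match = all(
    mephistopheles(p) == model_prediction(p) for p in ["YES", "NO", "MAYBE"]
)
print(observations_match)  # True: the model's predictions hold
```

<p>If any input produced a mismatch, the model would be refuted and Faust would need a different formal hypothesis about the device.</p>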
<p>We also have the mathematical theory of computation, the foundation of which is based on the work of the brilliant mathematician Alan Turing (1912–1954). His mathematical model of a machine — later known as the &#8220;Turing machine&#8221; — formalized the concept of &#8220;algorithm&#8221; that mathematicians had intuitively used for centuries. An &#8220;algorithm&#8221; is understood strictly as a set of operations that a Turing machine can perform using a very limited set of basic operations. These correspond to what a clerk might mechanically do using an almost unlimited amount of paper, a pencil, and an eraser: writing, erasing, moving left or right on the paper (or tape). Turing demonstrated that there can also be a universal machine: one that can perform any calculation available to all other Turing machines (each of which performs a set of predetermined operations). He claimed that such a machine could compute anything that could be computed effectively — although there are those who argue that physically possible computations might exist for functions that a Turing machine cannot compute. These are called &#8220;hypercomputational functions.&#8221;      </p>
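<p>The clerk&#8217;s mechanical operations can be made concrete with a minimal Turing machine simulator (an illustrative sketch with invented names, not Turing&#8217;s own formalism): a transition table maps each (state, symbol) pair to a symbol to write, a direction to move, and a next state.</p>

```python
# A minimal Turing machine simulator (illustrative sketch; the names and the
# example machine are assumptions, not taken from Turing's paper).
# The transition table maps (state, symbol) -> (symbol to write, move, next state).

def run_turing_machine(tape_str, transitions, state="q0", blank="_"):
    tape = dict(enumerate(tape_str))  # sparse, effectively unbounded tape
    head = 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine computing the unary successor, n -> n + 1:
successor = {
    ("q0", "1"): ("1", "R", "q0"),    # scan right across the 1s
    ("q0", "_"): ("1", "R", "halt"),  # append one more 1, then stop
}

print(run_turing_machine("111", successor))  # prints "1111"
```

<p>The dictionary stands in for the clerk&#8217;s unlimited paper tape; a richer machine differs only in the size of its transition table.</p>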
<p>A lot of ink has been spilled trying to describe all the conditions that physical systems must meet to be considered &#8220;computers,&#8221; but we can remain at a fairly general level. It is important for a physical system to have a calculating function and a proper structure. It must be a mechanism used to compute according to some mathematical model of computation, and its operations cannot be explained more simply and elegantly. Thus, a bucket of water is not a computer, because the water level does not constitute the expected result unless the bucket is used to perform calculations. Why this qualification? A computer could be hydraulic — for instance, in the mid-20th century, there was a model of the Keynesian economy implemented as a complicated water-based computer, MONIAC. There are also (somewhat humorous) theoretical works showing the possibility of using buckets of water to calculate logical functions that support pattern recognition by neural-like networks. Similarly, a regular stone is not a computer, because when it lies in the meadow, it doesn&#8217;t compute even a simple constant function (e.g., the fact that it lies to the left of the apple tree doesn&#8217;t mean it yields the result &#8220;12&#8221;). Its lying there can be explained more simply: by gravity.        </p>
<h2 class="wp-block-heading">Biological and Electronic Brains</h2>
<p>Now that we have an approximate understanding of what it means for a physical device to be a computer, let’s turn to brains. As early as the 1950s, mathematical machines were referred to as &#8220;electronic brains.&#8221; It was noted that computers solve problems whose nature cannot be predetermined, just like the universal Turing machine. Moreover, a machine&#8217;s capabilities stem from a large number of elementary operations — just as the brain’s capabilities arise from the interactions of a large number of relatively simple neurons. At the same time, it was pointed out that machines can use descriptions of other machines, process symbols, or process information.     </p>
<p>In short, this argument draws on several characteristics of brains: flexibility or universality; complexity arising from neuron interactions; and information processing. Even if computers have these characteristics, they are not enough to determine whether the brain is a computer. This overlooks whether the brain actually has an internal structure corresponding to a specific model of computation, and whether this structure has been selected — perhaps through natural selection — to perform tasks related to information processing.  </p>
<p>Why would the brain serve this purpose? Because cognitive processes, unlike metabolic processes, do not simply involve supplying energy to the organism. Predicting future states of the environment is crucial for the survival of any organism that can somehow prepare for these future states. We cannot explain how organisms prepare for future states — if they are completely new — except by referring to information about the environment.   </p>
<p>This suggests that the informational functioning of the brain could indeed be a biological adaptation, ensuring better adaptation to the environment and, for example, allowing animals to plan their movements better. Hence, the more demanding the movement planning, the larger the brain tends to be. (Compare the brain of an elephant with the information-processing system of a moving plant, such as a sundew.)  </p>
<h2 class="wp-block-heading">Simulations and Speculations</h2>
<p>Hypotheses about the computational nature of the brain have inspired many speculations that could not be directly confirmed by experimental methods. However, one could create computer simulations that provide data so precise that they couldn’t be obtained from studies of biological brains. While the first computational models of the brain — proposed by cybernetics pioneers Warren McCulloch and Walter Pitts in 1943 — may seem like curiosities today, more precise models of both individual neurons and entire neural networks have developed over time.   </p>
<p>On the one hand, we have increasingly precise electrophysiological studies of animal and human brains. Laboratories also cultivate cultures of nerve cells, studying their behavior as neural networks (thus directly serving as computers). There are more and more experimental results — and some, like the 2014 Nobel Prize-winning discoveries of cognitive maps in rats’ brains (John O&#8217;Keefe, May-Britt Moser, and Edvard Moser), are particularly spectacular. Cognitive maps are simulated computationally, and the results of these simulations are compared with experimental results in animals. Anatomical research is also accelerating — with increasingly detailed attempts to trace the entire neural network in various organisms (the so-called connectome, the map of all neuronal connections). We now know the connectome of the roundworm C. elegans, as well as fragments of the fruit fly&#8217;s connectome, the retina, and the primary visual cortex of mice.      </p>
<p>On the other hand, computational models — both of individual neurons and of the whole brain — are becoming more precise. There are very accurate models describing the electrophysiological properties of individual neurons, which are used to study, for example, the origin of brain waves that can be recorded by an electroencephalograph. There are also cognitive simulations that approximately describe the workings of the entire brain. The largest and most successful such simulation is SPAUN — a brain model developed in the laboratory of Canadian neuroscientist and philosopher Chris Eliasmith. SPAUN can perform a variety of cognitive tasks, including solving intelligence tests, recognizing handwriting, and learning by conditioning.     </p>
<p>Thanks to research that includes both computer models and empirical results, the early cybernetic speculations have transformed into systematic research in computational neuroscience. Computationalism remains an incredibly fertile working hypothesis, and it can only be disproven in one way: by showing that the brain does not process information at all, including information about the environment. </p>
<p>But this is a task for daredevils.</p>
<p>Marcin Miłkowski</p>

]]></content:encoded>
					
					<wfw:commentRss>https://copernicusfestival.com/en/more-than-a-metaphor/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Automated Conversation</title>
		<link>https://copernicusfestival.com/en/automated_conversation/</link>
					<comments>https://copernicusfestival.com/en/automated_conversation/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 03 Aug 2024 10:24:40 +0000</pubDate>
				<category><![CDATA[Bez kategorii]]></category>
		<guid isPermaLink="false">http://tfhgjvczvi.cfolks.pl/copernicus-festival-2025-mystery/</guid>

					<description><![CDATA[Although computers can already do almost everything, they still lack one fundamental ability: to talk to us like one human being talks to another. And we, being only human, are [&#8230;]]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Although computers can already do almost everything, they still lack one fundamental ability: to talk to us like one human being talks to another. And we, being only human, are getting increasingly impatient about it. </p>
</blockquote>

<p>It&#8217;s the late 1940s. “Computing machines” are still just cabinets solving mathematical problems, which require a painstaking process of problem-formulation by teams of engineers. It takes a mind with a wildly vivid imagination to see in these cabinets any intellectual potential and to pose the question of whether such a machine could one day think—or even… talk. One such mind belonged to Alan Turing, one of the fathers of the computer.    </p>

<p>In 1950, his article titled “Computing Machinery and Intelligence” appeared in the philosophy journal Mind. Here’s the first sentence: “I propose to consider the question: can machines think?” The problem with this question, Turing immediately notes, is that we don’t actually know what thinking is—so any honest attempt to answer it would quickly become a philosophical treatise. Moreover, thinking is something that happens “internally,” which makes it hard to determine whether anyone is really thinking. It’s the old philosophical puzzle: do minds other than mine exist? Or am I the only conscious being in the universe, surrounded by cleverly jaw-snapping meat puppets? (Today, we often call such entities—those who appear fully human yet lack consciousness or a sense of self—“philosophical zombies.”)</p>

<p>Turing chose to cut off such deliberations at the root and focus instead on jaw-snapping: since we judge the &#8220;mindfulness&#8221; of others by their behavior—especially whether they can speak to us intelligently and soberly—why not use the same criterion for thinking machines? Thus, the<em> imitation game</em> was born. Imagine a judge sits at a device for remote text communication—Turing suggested a teletype—and chats for five minutes with someone (or something) in a neighboring room. The judge then has to decide whether the typed words came from a human or a machine. If a program can reliably fool such judges, we’d say today that it has “passed the Turing Test.” Here’s the relevant quote from the article:     </p>

<p>“I believe that in about fifty years’ time it will be possible to program computers, with a storage capacity of about 10⁹ [bits], to play the imitation game so well that an average interrogator will not have more than a 70 percent chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”</p>

<h2 class="wp-block-heading">No Word About Gold</h2>

<p>In 2000, exactly 50 years had passed since Turing’s article—was his prediction fulfilled? Certainly, we’re more comfortable now speaking of “thinking machines,” though usually still with a wink. But if we focus purely on the possibility of talking to a computer as if to a human, we must admit: we’re still far from out of the woods. Even in 2020, though we’re routinely pestered by automated phone sales assistants and install smartphone apps that respond (sometimes correctly) to simple commands in human voices, it’s still not possible to have even a short, intelligent conversation on any topic with a computer program.</p>

<p>Of course, there’s no shortage of programmers willing to take up the challenge. Already in 1966, the first “conversational program” was described—Eliza by Joseph Weizenbaum (see side box). In 1990, the first international chatbot competition was held, funded by inventor and visionary Hugh Loebner. His $100,000 prize still awaits a winner, despite yearly entries for over 30 years. Previous “winners” of the prize—most notably the bots Alice, Rose, and Mitsuku, which dominated the rankings for two decades—received only “consolation” bronze medals.      </p>

<p>Artificial intelligence (AI) has led to genuinely astonishing achievements: there are programs today that can recognize human speech and emotions; that can beat us in chess and game shows like Jeopardy!; that can paint in a given style (say,<em> à la </em>Witkacy) or compose decent film music&#8230; So is a five-minute conversation really such an unreasonable demand? The results of the Loebner competition are damning: even the best bots fail to convince the judges. So where lies the problem?  </p>

<h2 class="wp-block-heading">Inside the Bot’s Brain</h2>

<p>Let’s peek inside the “brain” of A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), a chatbot created by Richard Wallace in 1995. At the turn of the century, Alice was a multiple finalist and three-time “winner” of the Loebner Prize. Wallace’s code is public, and many later winners were simple or slightly modified clones of it. </p>

<p>Anyone expecting that a chatbot’s brain contains even a rudimentary “map” of the human mind—or any broader vision of an “electronic brain,” or that the words generated are the result of genuine intelligent processing—will be sorely disappointed. Every single described chatbot, whether its code is available or not, operates on the basic principle of stimulus-response. Roughly speaking, the user’s input is scanned for keywords stored in a database. These might be words like “bicycle” or “climbing,” which lead to specific topic threads, or structures like “I regret that [X],” which can be cleverly bounced back (“Why do you regret that [X]?”) without understanding what [X] actually is. One matching response is then randomly chosen—and that’s it.     </p>
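<p>The stimulus&#8211;response principle described above can be sketched in a few lines (a toy illustration with invented rules, not Wallace&#8217;s actual A.L.I.C.E. rule base): scan the input for a keyword or pattern, pick one canned reply at random, and bounce any captured fragment back without understanding its content.</p>

```python
import random
import re

# A toy stimulus-response chatbot (an invented sketch in the style described
# in the text, not Wallace's actual A.L.I.C.E. code). Each rule pairs a
# pattern with canned replies; "{0}" echoes back a captured fragment.

RULES = [
    (re.compile(r"\bI regret that (.+)", re.IGNORECASE),
     ["Why do you regret that {0}?", "How long have you regretted that {0}?"]),
    (re.compile(r"\bbicycle\b", re.IGNORECASE),
     ["I like riding bikes.", "Have you gone on any cool trips recently?"]),
]

# face-saving phrases for when no rule matches the input
FACE_SAVERS = ["Well, you know how it is.", "It happens."]

def reply(user_input):
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            # bounce the captured fragment back without understanding it
            return random.choice(responses).format(*match.groups())
    return random.choice(FACE_SAVERS)

print(reply("I regret that I sold my bicycle"))
# e.g. "Why do you regret that I sold my bicycle?"
```

<p>Real rule bases simply contain thousands of such entries; the principle does not change.</p>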

<p>Some smart tricks get layered on top—like grouping keywords into topics and tracking the current conversation topic—so a bot discussing bicycles might later ask, “Have you gone on any cool trips recently?” Bot creators also invest effort into clever, surprising, or quirky responses, knowing these impress judges most. Alice-style bot databases contain thousands (if not tens of thousands) of entries to have a witty comeback ready, in case someone mentions a Monty Python sketch or Trump’s hairpiece.</p>

<p>It only takes a cursory interest in chatbots to realize a rather obvious truth: Turing’s proposal—to ignore the mind and focus on mimicking its behavior—has caught on far too well. Bot creators have never aimed to build a thinking being, only a clever illusionist. Wallace, in his 2009 paper about A.L.I.C.E., openly admits his main inspiration comes from transcripts of millions of user interactions with the bot. These are analyzed for common topics, drop-off points, and frustration triggers. After such a session, the team adds a few dozen new rules—and the bot can now hold users&#8217; attention a bit longer.     </p>

<h2 class="wp-block-heading">Saving Face</h2>

<p>In 1980, American philosopher John Searle described the “Chinese Room,” one of the most famous thought experiments in the philosophy of mind. The “Chinese Room” is a space where a person who doesn’t understand Chinese sits with access to a massive library of conversational rules in Chinese: for a certain sequence of symbols, respond with another specific sequence. When someone slips a note with a question written in Chinese under the door, the person inside simply looks up the right rule, copies down the correct response, and slides the note back out. The person on the outside might think there’s someone inside who understands Chinese.<br/>And that, in a nutshell, is a brilliant metaphor for a chatbot.<br/>In fact, Richard Wallace said so explicitly in a 2009 article—describing the source code of A.L.I.C.E. as something like a manual for operating a Chinese Room.<br/>The million-dollar question is whether it&#8217;s even theoretically possible to build a chatbot this way.        </p>

<p>Let’s start with a basic clarification: the “operator’s manual” would need to contain separate instructions not only for every single utterance, but for every possible conversation. For example, the question “And what did Asia say to that?” should trigger a different reaction depending on whether “that” refers to an invitation for coffee or a threat on a dark street with a knife pulled from under a coat. The total lack of sensitivity to context and conversational history is, in fact, the defining trait of nearly all current chatbot systems—even last year’s Loebner Prize “winner,” the bot Mitsuku. Sure, they can respond deftly to a statement like “I love horror movies”, but the moment you try to tell them any kind of story, disaster strikes.<br/>Take this sequence of sentences: Asia had been flirting with Michał online for a long time. He told her he owned a big company, ran major negotiations from Tokyo to New York, and had a fleet of Mercedeses. And finally, he came to pick her up—on a bicycle. A chatbot (if it even survives that long without interrupting—after all, it’s trained to throw in a clever comment after every sentence, reacting to keywords like “internet” or “Tokyo”) will respond only to the final line, with one of its pre-written bicycle-related phrases: I like riding bikes, but I really need to get a proper mudguard. And, well—that’s a pretty poor commentary on a tale of romantic and economic disillusionment.</p>

<p>A hypothetical chatbot that could intelligently respond in Chinese Room fashion would need access to every imaginable sequence of symbols that make up a coherent conversation. At this point, we veer straight into surrealist thought experiment territory—straight out of Borges’ labyrinthine library that contains every possible book—not anything a programmer could actually build. </p>

<p>If we try to simplify things just to finish the project before the heat death of the universe, we immediately fall into the traps known to every real-world bot creator: synonym tables, topic categories, keyword memory—or, something I haven’t yet mentioned—long lists of face-saving phrases, used when the program can’t match any rule to the previous statement. This last trick is familiar to anyone who’s ever zoned out in a conversation: in such moments, it&#8217;s best to say something like “Well, you know how it is” or “It happens”. But you won’t get very far relying on that.  </p>

<h2 class="wp-block-heading">Bad Company</h2>

<p>Readers familiar with AI development are probably getting impatient by now. Because everything I’ve described so far—this isn’t real artificial intelligence. At least not in the way we use that term in the 21st century.<br/>And rightly so. Not every algorithm—no matter how clever at faking intelligence—is truly “AI.” These days, the term is mostly reserved for systems that can learn by themselves: trying different solutions, waiting for feedback—you’re doing well, you’re failing—and adjusting accordingly. That’s how the best chess engines are built now: they aren’t spoon-fed strategies or tricks—they discover them on their own, meticulously tracking which experiments succeed and which ones flop.   </p>
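<p>The learn-by-feedback idea can be illustrated with a minimal trial-and-error learner (a toy sketch of the general principle, with invented action names; real chess engines are vastly more sophisticated): the agent tries actions, records the rewards it receives, and gradually favors whatever has paid off best.</p>

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def reward(action):
    """Toy environment: "good_move" pays off far more often than "bad_move"."""
    p = 0.8 if action == "good_move" else 0.2
    return 1.0 if random.random() < p else 0.0

def learn(actions, reward_fn, rounds=1000, epsilon=0.1):
    totals = {a: 0.0 for a in actions}   # cumulative reward per action
    counts = {a: 0 for a in actions}     # how often each action was tried
    avg = lambda a: totals[a] / counts[a] if counts[a] else 0.0
    for _ in range(rounds):
        if random.random() < epsilon:    # explore: try something at random
            a = random.choice(actions)
        else:                            # exploit: use the best action so far
            a = max(actions, key=avg)
        totals[a] += reward_fn(a)
        counts[a] += 1
    return max(actions, key=avg)

best = learn(["good_move", "bad_move"], reward)
print(best)  # the learner settles on the higher-payoff action
```

<p>Nothing here is spoon-fed a strategy; the preference for the better action emerges purely from the recorded feedback.</p>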

<p>So, could we perhaps release a super-simple chatbot onto the internet, one that could gradually refine and complicate its behavior through real conversations with humans? Well, turns out someone already tried that—and it ended in spectacular disaster. On March 23, 2016, Microsoft launched a conversational bot named Tay on their Twitter account—designed to learn from its interactions with users. The result? Just 16 hours later, Tay was pulled offline, and Microsoft issued a mass apology to the internet. Why? Because amused users quickly discovered Tay was indeed learning human language and social norms from their own words—so they decided to “teach” their eager, naive little conversation partner not just silly memes and edgelord jokes, but also good old-fashioned xenophobia, racism, and a rich variety of offensive slurs. Within hours, Tay was gleefully tweeting things like: “Hitler was right about the Jews.”        </p>

<p>So yes, it seems the free-learning approach—so successful in other areas of AI—must be used very cautiously when it comes to chatbots. On the other hand, the method of “holding the algorithm’s hand” and carefully scripting every conversational pattern has clearly run its course. It&#8217;s hard to say when—or if—we’ll ever build a program capable of talking with us “like one human being to another.” But let’s not kid ourselves: the need is there, the market is there, and the money is definitely there. We’d really, really like to have a good chat with a computer.</p>

<p>Łukasz Lamża</p>
]]></content:encoded>
					
					<wfw:commentRss>https://copernicusfestival.com/en/automated_conversation/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What Came First: Words or Gestures?</title>
		<link>https://copernicusfestival.com/en/what-came-first-words-or-gestures/</link>
					<comments>https://copernicusfestival.com/en/what-came-first-words-or-gestures/#respond</comments>
		
		<dc:creator><![CDATA[admin]]></dc:creator>
		<pubDate>Sat, 03 Aug 2024 10:23:27 +0000</pubDate>
				<category><![CDATA[Bez kategorii]]></category>
		<guid isPermaLink="false">https://dev.copernicusfestival.com/what-came-first-words-or-gestures/</guid>

					<description><![CDATA[Communication is a widespread phenomenon in the animal world, occurring across the entire “tree of life” – and it takes many forms. It can be expressed through gestures (e.g., in [&#8230;]]]></description>
										<content:encoded><![CDATA[
<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>We still know embarrassingly little about the origin of perhaps the most important evolutionary product for our species &#8211; natural language.</p>
</blockquote>

<p>Communication is a widespread phenomenon in the animal world, occurring across the entire “tree of life” – and it takes many forms. It can be expressed through gestures (e.g., in great apes), sequences of movements (like birds’ courtship rituals), scents (pheromones released by insects), sounds (e.g., frogs croaking), changes in body coloration (as in cuttlefish), bioluminescence (light emission – as in fireflies or deep-sea fish), facial expressions (including eye gaze), and even vibrations traveling through water, soil, leaves, or spider webs. Naturally, many species use several modes and channels of communication, each with its own evolutionary story.  </p>

<p>In the broadest sense, we speak of communication when a message is sent from a sender to a receiver – though the sender doesn’t always act intentionally or even know the receiver is present. Signals can be sent involuntarily, triggered by environmental stimuli or emotional states. Most animal communication seems to work like this. And it&#8217;s not too surprising, since the vast majority of animal signals are alarm calls (warnings about predators), mating calls (like birdsong), territorial threats (like the duets of gibbons), or signals about food sources (like the famous bee dance). Each of these messages has a specific environmental or emotional context. This type of communication isn’t flexible – an animal can’t send a message without the appropriate context – nor can it express arbitrary content. The advantage? It requires little cognitive effort – natural selection could hardwire it even into creatures we’d hardly call intentional or conscious.      </p>

<p>The meanings of such messages are typically hardwired genetically, and animals don&#8217;t need to learn them (though some do – for example, impalas, zebras, and wildebeests have been shown to understand alarm calls not only from their own species but from other species as well).</p>

<p><strong>A Lack of Continuity?</strong></p>

<p>Human natural language stands out against this backdrop: it’s highly flexible, it allows us to convey virtually any content (even fictional or absurd), it’s symbolic (we must learn the meanings of words – they’re not inborn), and above all – we generally use it intentionally.</p>

<p>With many human anatomical traits, practices, and behaviors, we can speak of evolutionary continuity. Take bipedal posture or tool use – once thought to define humanity; the fossil record shows how they evolved in our ancestors. Related species show similar traits (great apes can walk on two legs, though not for long distances; many monkeys use tools, like stones to crack nuts). But language seems trickier – for a long time, researchers believed no form of animal communication resembled human language enough to qualify as an evolutionary precursor. The fossil record is of little help – since words and gestures don’t fossilize, we’re left with indirect clues, like the social complexity or geographic spread of hominid species.</p>

<p>Daniel Everett, for instance, argues that Homo erectus (who emerged about 2 million years ago and went extinct 100,000 years ago) must have used language – they built campsites with clear divisions of labor: areas for resting, meat processing, and plant processing. They also reached distant islands (like Crete), suggesting knowledge of seafaring. According to Everett, achieving all this without advanced language would have been impossible.</p>

<p><strong>No Consensus</strong></p>

<p>Back to great apes: psychologist Michael Tomasello believes they have a rich repertoire of vocalizations (e.g., when spotting predators, encountering threats, or finding food), but their use is not flexible. Apes vocalize more or less unconsciously. Hence, it&#8217;s hard to see these calls as the evolutionary roots of human words. Tomasello argues that gesture-based communication came first – something between pantomime and modern sign language – and this form paved the way for the development of brain circuits later adapted for processing speech, which he views as a cultural invention. In support of this, apes sometimes use gestures intentionally: to get someone’s attention (e.g., by slapping the ground or throwing a stone) or to express intention (e.g., pulling another individual’s arm in the direction they want to go).    </p>

<p>However, recent years have brought new discoveries in the realm of natural primate communication. Thanks to research in “primate linguistics” – applying methods of traditional linguistics to primate vocalizations – we now know that primate vocal communication is more complex than previously thought. Playback experiments have shown that the meaning of vocalizations can be modified by syntax; each call can be broken into components (syllables), and changing their order affects the listener’s response.  </p>

<p>We also know that apes possess a theory of mind – meaning they perceive others as intentional beings, worth communicating with deliberately. Overall, vocalizations seem to play a more central role in their lives than gestures. Therefore, more and more scholars today argue that we shouldn’t look only to ape gestures for the roots of our own language – primate vocalizations must also be part of the story.   </p>

<p>Still, we must remember: we&#8217;re only just beginning the journey. Solving one of the great mysteries of science will require more discoveries – not only in animal communication, but also in paleoanthropology and neurobiology.</p>

<p>Łukasz Kwiatek</p>
]]></content:encoded>
					
					<wfw:commentRss>https://copernicusfestival.com/en/what-came-first-words-or-gestures/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
