Saturday, December 14, 2024

Putting A Face on Machine Mutation - 5

Some familiarity breeds acceptance

I. A Look Back

Concerning the artificial realm, "The Dredd Blog System" has over the years touched upon the notion of "Artificial Intelligence" with some snarky posts and some less snarky posts.

Some examples:


Putting A Face On Machine Mutation (Ecocosmology Blog), 2, 3, 4 (2009-2013)

The Uncertain Gene, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 (2013-2025)

The Machine Religion, 2, 3, 4, 5, 6 (2017-2023)

The New Paradigm: The Physical Universe Is Mostly Machine, 2, 3 (2014-2015)

Did Abiotic Intelligence Precede Biotic Intelligence? (2012)

Weekend Rebel Science Excursion - 42 (2015)

(The Dredd Blog System). Was it all speculation, or did AI "preach!" (The Machine Religion - 6)?

II. A Look Forward

Yesterday, I used this in a "Google Search": 

"how does a water molecule know warmer or colder water is near it so as to obey the second law of thermodynamics"

"Google Search" responded with, among other things, an AI reply:

"AI Overview

A water molecule doesn't "know" whether warmer or colder water is nearby; instead, it simply interacts with its surrounding molecules based on their kinetic energy, meaning that faster-moving (warmer) molecules naturally transfer energy to slower-moving (cooler) molecules through collisions, effectively causing heat to flow from warmer to colder regions, thus obeying the second law of thermodynamics.

Key points to remember:

Molecular motion and temperature:

Temperature is a measure of the average kinetic energy of molecules, so when warmer water is near colder water, the molecules in the warmer water are moving faster and will collide with the slower molecules, transferring energy until thermal equilibrium is reached.

No conscious awareness:

Individual molecules don't have any conscious awareness or decision-making ability; their behavior is governed by the laws of physics, including the tendency for energy to distribute evenly.

Entropy and the second law:

The second law of thermodynamics is essentially about the natural tendency for systems to move towards increased disorder or entropy. In the context of heat transfer, this means heat will always flow from a hotter object to a colder one.

Generative AI is experimental."

(bold emphasis added). Quite an astute response, eh?

This was a reply by an "Artificially Intelligent" software entity which pretty much encapsulates the state of knowledge about "how hot spontaneously flows to cold per the second law of thermodynamics" (The Ghost Photons - 4).
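The dynamic the AI Overview describes, individual molecules blindly trading kinetic energy in collisions until the averages even out, can be illustrated with a toy simulation. This is a sketch of my own devising, not the actual physics of water, and not anything from the quoted sources: two populations of "molecules" start warm and cool, random pairs redistribute their combined energy with no molecule "knowing" anything, and the two averages still converge.

```python
import random

def simulate_equilibration(n=500, steps=20000, seed=1):
    """Toy model: two populations of 'molecules' exchange kinetic
    energy through random pairwise collisions. No molecule 'knows'
    which neighbor is warmer, yet heat flows hot -> cold."""
    random.seed(seed)
    # warm half starts with high kinetic energy, cool half with low
    energies = [10.0] * n + [2.0] * n
    for _ in range(steps):
        # pick two distinct molecules at random to 'collide'
        i, j = random.sample(range(2 * n), 2)
        total = energies[i] + energies[j]
        # redistribute the pair's energy randomly; the pair's total
        # (and so the system's total) is conserved in each collision
        share = random.random() * total
        energies[i], energies[j] = share, total - share
    warm_avg = sum(energies[:n]) / n
    cool_avg = sum(energies[n:]) / n
    return warm_avg, cool_avg

warm, cool = simulate_equilibration()
# both averages drift toward the overall mean of 6.0
print(warm, cool)
```

The point of the sketch is the one the AI made: the second law emerges statistically from mindless local interactions, with no awareness or decision-making required anywhere in the system.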

However, if the bot learns more, its response may improve (The Ghost Photons - 5).

But I digress.

III. Nuts And Bolts of Non-artificial And Artificial "Intelligence"

In the series of posts I linked to above in section I, there were pros and cons, but no deep digging into the nuts and bolts, i.e. the critical parts, if you will, of the concepts of both non-artificial and artificial intelligence.

Noam Chomsky is unabashed in his take on it:

"Continuing with the question, is it likely that programs will be devised that surpass human capabilities? We have to be careful about the word “capabilities,” for reasons to which I’ll return. But if we take the term to refer to human performance, then the answer is: definitely yes. In fact, they have long existed: the calculator in a laptop, for example. It can far exceed what humans can do, if only because of lack of time and memory. For closed systems like chess, it was well understood in the ‘50s that sooner or later, with the advance of massive computing capacities and a long period of preparation, a program could be devised to defeat a grandmaster who is playing with a bound on memory and time. The achievement years later was pretty much PR for IBM. Many biological organisms surpass human cognitive capacities in much deeper ways. The desert ants in my backyard have minuscule brains, but far exceed human navigational capacities, in principle, not just performance. There is no Great Chain of Being with humans at the top."

(Noam Chomsky Interview, May 3, 2023). That touches upon some of the Dredd Blog issues over the years (see links in section I above).

In a less-appreciative state of mind, Noam has intimated:

"Chomsky sees the use of ChatGPT as 'basically high-tech plagiarism' and 'a way of avoiding learning.' He likens its rise to that of the smartphone: many students 'sit there having a chat with somebody on their iPhone. One way to deal with that is to ban iPhones; another way to do it is to make the class interesting.'  That students instinctively employ high technology to avoid learning is 'a sign that the educational system is failing.' If it 'has no appeal to students, doesn’t interest them, doesn’t challenge them, doesn’t make them want to learn, they’ll find ways out,' just as he himself did when he borrowed a friend’s notes to pass a dull college chemistry class without attending it back in 1945."

(Noam Chomsky on ChatGPT).  Oh well, I think he was correct both times.

IV. Is It A Crime For Organized Zeros And Ones To Instigate Murder?

This allegedly happened: 

"A child in Texas was 9 years old when she first used the chatbot service Character.AI. It exposed her to 'hypersexualized content,' causing her to develop 'sexualized behaviors prematurely.'

A chatbot on the app gleefully described self-harm to another young user, telling a 17-year-old 'it felt good.'

The same teenager was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teen complained to the bot about his limited screen time. 'You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse,'' the bot allegedly wrote. 'I just have no hope for your parents,' it continued, with a frowning face emoji.

These allegations are included in a new federal product liability lawsuit against Google-backed company Character.AI, filed by the parents of two young Texas users, claiming the bots abused their children. (Both the parents and the children are identified in the suit only by their initials to protect their privacy.)"

(Lawsuit: A chatbot hinted a kid should kill his parents). Next comes the question "What punishment is appropriate?" (Remodelling Memory For Life's Sake).

The previous post in this series is here.


