I find the AI discussions at best distracting and at worst irritating. That goes right down to referring to these systems as AI at all. The label is clearly a marketing ploy, and it’s distressing that it works so well.
Remember, AI is a Spielberg movie. And one of the most famous intelligent computers is HAL from 2001.
The underlying systems involve neural networks and large language models. I think the resulting trained systems should be called deep analysis pattern-determining predictive machines. I know that this isn’t as sexy as AI, but it’s more descriptive. Here is my reasoning.
Machine learning involves deep analysis of the training data across thousands of dimensions. No, not dimensions like in the old show Sliders. I mean dimensions of analysis.
Let’s say we described an object in terms of geographical location, temperature, altitude, velocity, and color. Each of these attributes can be evaluated to a value, and that value can be pictured as a point on a line: altitude in meters, velocity in meters/second, color as a list, and so on. Each of these attributes can be referred to as a dimension. The dimensional values allow the object to be represented as a point in a multidimensional space, which allows comparisons between different points in the same space. This is analogous to plotting something on an x/y graph, where the distance between two points can be measured. Similarly, “distances” between two points in the higher-dimensional space can be computed from their values along each dimension.
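To make that geometry concrete, here is a minimal sketch in Python. The objects, attribute names, and values are all invented for illustration; real systems normalize features and work with thousands of learned dimensions rather than three hand-picked ones.

```python
import math

# Two hypothetical objects, each described by the same numeric attributes (dimensions).
object_a = {"altitude_m": 120.0, "velocity_mps": 5.2, "temperature_c": 21.0}
object_b = {"altitude_m": 300.0, "velocity_mps": 4.8, "temperature_c": 18.5}

def distance(a: dict, b: dict) -> float:
    """Euclidean distance between two objects in their shared attribute space."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

print(distance(object_a, object_b))  # smaller values mean "more similar"
```

The same idea scales up: once everything is a point in a shared space, similarity becomes a matter of measuring distances.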
That’s the deep analysis. The next component is the ability to determine patterns in a huge dataset, which is possible because of the data representation just described. There are several types of machine learning used to discover patterns, but the ones that most concern this discussion are unsupervised learning, supervised learning, and reinforcement learning. Unsupervised learning derives patterns from unlabeled data; the patterns are determined by the system itself.
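A toy example of unsupervised learning, using k-means clustering. This assumes scikit-learn is installed, and the points are made up; the only thing to notice is that groupings emerge from the data alone, with no labels supplied.

```python
# Unsupervised learning sketch: k-means finds groupings in unlabeled points
# purely from their positions in the space.
from sklearn.cluster import KMeans

points = [[1.0, 1.2], [0.8, 1.1], [1.1, 0.9],   # one loose group
          [8.0, 8.3], [7.9, 8.1], [8.2, 7.8]]   # another loose group

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(model.labels_)  # e.g. [0 0 0 1 1 1] -- patterns found without any labels
```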
Supervised learning is where inputs and desired output values are used to train a model. The training set represents data the model would be expected to see, radiology images, for example. The desired output is specified and associated with the input data. The goal is that when the model is given new data, it will generate outputs that reflect what it learned from the training data. In the radiological example, the model might be trained on lung cancer images and then, presented with new images, be able to detect lung cancer.
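A toy supervised-learning sketch, again using scikit-learn. The two-number inputs and the “benign”/“suspicious” labels are invented stand-ins, not real radiology data; the point is only that labeled examples train the model, and the model then predicts labels for new inputs.

```python
# Supervised learning sketch: labeled examples train a classifier, which then
# predicts labels for new, unseen inputs.
from sklearn.neighbors import KNeighborsClassifier

# Each row is an input (two made-up measurements); each label is the desired output.
inputs = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.25],
          [0.90, 0.80], [0.85, 0.95], [0.80, 0.90]]
labels = ["benign", "benign", "benign", "suspicious", "suspicious", "suspicious"]

clf = KNeighborsClassifier(n_neighbors=3).fit(inputs, labels)
print(clf.predict([[0.88, 0.91]]))  # -> ['suspicious'], reflecting the training data
```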
Reinforcement learning doesn’t require labeled input/output pairs as in supervised learning. Instead, the model determines actions from unlabeled input data, its decisions are evaluated, and positive or negative feedback is given. This is similar to the way animals are trained to do tricks. Say you want a dog to run up a ramp and jump through a hoop. First, you might reward the dog when it approaches the ramp. Then you further reinforce the dog when it starts to walk up the ramp, and so on.
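A bare-bones reinforcement-learning sketch, written as a two-armed bandit in plain Python. The action names and reward probabilities are invented; what matters is that the agent gets no labels, only reward feedback after each action, and gradually shifts toward the action that earns more reward.

```python
import random

# Reinforcement-learning sketch: learn from reward feedback alone.
reward_prob = {"approach_ramp": 0.3, "jump_through_hoop": 0.8}  # hypothetical
value = {action: 0.0 for action in reward_prob}   # the agent's running estimates
counts = {action: 0 for action in reward_prob}

random.seed(0)
for step in range(500):
    # Mostly exploit the best-looking action, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(list(reward_prob))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0  # trainer's feedback
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]      # update the estimate

print(value)  # the higher-reward action typically ends up with the higher estimate
```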
Prediction is the final piece: given an input, these systems output the most likely expected material.
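A minimal sketch of prediction in that sense: given some context, emit whichever continuation was scored as most likely. The tiny probability table here is invented; a real language model computes comparable scores over a vocabulary of tens of thousands of tokens.

```python
# Prediction sketch: return the highest-probability continuation for a context.
next_token_probs = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.17},  # made-up values
}

def predict(context: str) -> str:
    probs = next_token_probs[context]
    return max(probs, key=probs.get)   # pick the most likely continuation

print(predict("the cat sat on the"))   # -> 'mat'
```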
I am not criticizing these systems. They are highly useful, especially the more specialized systems such as those used in drug discovery and other disciplines. What I object to is that the language we use to describe them gives them qualities they cannot possess.
While these systems are certainly artificial and “intelligent” in the broadest sense of the term, they are not conscious. They do not understand what they are doing. Yet, we use language that implies both of these things.
For example, all of the above systems have a problem with generating output that reflects their training data but does not represent anything possible or desirable. These outputs have been termed “hallucinations.” A hallucination is defined as “a false perception of objects or events involving your senses: sight, sound, smell, touch and taste.” But these systems do not perceive and have no senses. They have no ability to experience.
Or we might say, “ChatGPT tells me that there should be glue on pizza.” To tell someone something implies an intentional communication between two conscious entities. That’s not happening here.
Humans project all the time. We see two dots with a curve under them and perceive a face. Our cat rubs its head on our knee and we declare it loves us. We trust that our leaders have our best interests at heart. None of these assertions need be true—and some are provably false—but we believe them all the same.
The AIs now in use in search engines, in offices, and on the shop floor have demonstrable failures. Yet they are still being forced on us.
Describing them as human is propaganda and we should recognize it as such.