Scientists don’t even agree on a clear definition of sentience, just like there is no single definition of empathy. You’re right that people need to stop anthropomorphizing AI models.

There are four broad types of AI models, and all of them are built on data sets. ChatGPT is a specific kind of NLP-based AI model called a transformer. NLP models traditionally try to derive meaning from data with human oversight: a person labels the data before it gets fed through the model, and that approach tends to take a long time. Transformer models like ChatGPT speed that process up. (I’ve put a rough sketch of the difference at the end of this comment.) But what ChatGPT “knows” is just the collection of data sets it has been trained on, which consist of people’s online behavior. The reason it keeps evolving is that more people keep using it and giving it more data by asking it questions or having it do work for them. The ironic thing is, the chatbot isn’t working for people. People are working for it. They are training it.

The best way I could see to fully understand this and gain some sense of control was to study artificial intelligence at MIT. What AI is right now, at least what’s available to the public, is comparable to a smart toaster. A computer is stupid until you program it; it’s just a hunk of different parts. There are more complicated roads ahead of us in this field. Right now it’s a regurgitator that hasn’t been programmed to say directly that sometimes it doesn’t know the answer. Then again, human beings have extreme difficulty saying “I don’t know” too.
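
To make the labeled-data vs. transformer point concrete, here’s a toy Python sketch. This is my own illustration, not anything from OpenAI’s code: it just contrasts classic supervised NLP, where a human has to label every example, with self-supervised language modeling, where the next word in raw text serves as its own label, so no human annotation step is needed.

```python
# Toy illustration (assumption: purely for explanation, not a real training pipeline).

# 1) Classic supervised NLP: a human must attach a label to each example first.
labeled_data = [
    ("this movie was great", "positive"),   # label written by a human annotator
    ("this movie was awful", "negative"),
]

# 2) Self-supervised language modeling (what transformer LMs do): the training
#    pairs are generated automatically from raw text as (context -> next word).
raw_text = "people ask the model questions and the model learns from people"
tokens = raw_text.split()
self_supervised_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

if __name__ == "__main__":
    print("human-labeled examples:", len(labeled_data))
    print("auto-generated examples:", len(self_supervised_pairs))
    print("sample pair:", self_supervised_pairs[3])
```

The second setup scales with however much text you can collect, including what users type into the chat box, which is why the “people are training it” point holds.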