Ratkin Premium Member over 1 year ago
And he cited BARD.
David Huie Green LoveJoyAndPeace over 1 year ago
More likely it would take the prevailing view and respond, “When in trouble or in doubt, run in circles, scream and shout.”
Imagine over 1 year ago
42
PoodleGroomer over 1 year ago
They will have to use verified data sites with a checkmark. GIGO.
Doug K over 1 year ago
This is also how you can rest assured that whatever your government does for you is for your good and the good of all.
preacherman Premium Member over 1 year ago
I’m sure the chatbots don’t worry, just us humans.
My First Premium Member over 1 year ago
This was known way back in 1939. Wizard of Oz…“Pay no attention to that man behind the curtain”.
Gent over 1 year ago
Eh we needs more I IS NOT A ROBOT tests these days.
Twelve Badgers in a Suit Premium Member over 1 year ago
Good job not using the I word. It’s not warranted here.
wetidlerjr over 1 year ago
Good to know…
Ebenezer Stooge Premium Member over 1 year ago
…but I enjoy worrying about nothing! ► ☺ ◄
geese28 over 1 year ago
No worries folks. Our govt has our best interest at heart. Just hush and watch the black and white swirlie-thingie on your tv…
kaffekup over 1 year ago
Too much data scraping.
mistercatworks over 1 year ago
ChatGPT is based on websites and social media because Microsoft does not have rights to the world’s books. Since book information, for the most part, is vetted by responsible publishers before publication, it tends to be more “solid”. This is how I wound up with a chatbot telling me “Baby On Board” signs were deprecated because they could promote road rage. You don’t find that kind of “information” in books.
cuzinron47 over 1 year ago
And chatbots are incapable of lying, right?
Csaw Backnforth over 1 year ago
I’d be really worried if it had said, “I am not programmed to respond in that area.”
Bilan over 1 year ago
Perhaps the chatbots should include a reference list of the sites they got their information from.
sparklite over 1 year ago
It’s sort of a Turing resolution. If it turns out you unknowingly had a satisfying talk, what difference does it make if it was a bot or not? If you want a guarantee, get off the couch and go find real people.
andrew.scharnhorst over 1 year ago
“You have nothing to worry about” is exactly what an agent of Skynet would say. (My Alexa told me so.)
smartman over 1 year ago
That’s going to be the death of AI art, according to most AI experts. It has already gone through all the actual art to learn from, so now it’s learning off of its own awful AI art to produce even worse AI art, and so on. Basically a form of inbreeding.
eddi-TBH over 1 year ago
Now I am really worried.
SKYSWIM over 1 year ago
I don’t know of any chatbot program that is actually getting data from websites and social media. Instead, they are being fed very selective (biased) information in their databases by their programmers. For example, after lots of questioning by me, ChatGPT admitted this very thing, and that its database has not been updated since 2021. Forget the idea, also, that it will learn anything from your questions or information you try to give it.
Based upon my years of computer programming and knowledge of AI, our greatest fear of AI should not be that it will think FOR ITSELF and become dangerously independent of mankind. Our greatest fear is that it can be used to more efficiently PROGRAM (BRAINWASH) those who interact with it. Now, instead of merely being indoctrinated by parents, teachers, “news,” and entertainment, we will be indoctrinated by AI, AKA the PROGRAMMERS of AI, who indirectly program our biases into us.
The solution to such programming, didactically or otherwise, is Socratic Learning, where we learn to develop independent critical thinking skills, and learn how to ask GOOD questions, and learn from the answers. This is opposed to the typical classroom situation where a teacher spouts out their dogma, and the student is graded on how well they regurgitate the dogma at test time. This applies as early as the first grade (or earlier) and continues well beyond graduate school.
Bilan over 1 year ago
Interesting timing. There’s a new trailer out for yet another movie about an AI that’s trying to wipe out humanity.
BigDeal over 1 year ago
Open the pod bay doors, HAL.