
re: Microsoft AI deems father and son inappropriate

Posted by Flats
Member since Jul 2019
21756 posts
Posted on 3/24/23 at 7:10 pm to
quote:

This isn’t “AI”

in fact we don’t have “AI” anywhere

What we have is programmable machine learning that can EASILY be used to manipulate people


This is only partially true IMO after talking with one of the smartest guys I know who understands programming. He's retired a couple of different times from tech companies you would recognize, and he was the visionary in the trenches, not a CEO type. Paraphrasing him: ChatGPT is a revolutionary step up from rules-based programming. It can learn from inferences and add to its "intelligence".

Now the problem there is that it can amass knowledge, and it can even learn things it doesn't "know", like language rules, but it can only get subjective values in one of two ways: whatever it reads the most of is "right", or it has built-in rules for "right and wrong". The latter happened with ChatGPT, so no matter what it learns, something like racism stays THE most evil thing in the world. You're going to have people point to it as an intelligent authority on ethical questions, when in reality it just has the core ethical values of whoever programmed it, or, if the topic at hand wasn't built in, it will take a "might makes right" approach and derive right and wrong that way. Neither is a good thing if people don't understand it, and they won't.
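Rough toy example of the two paths I'm describing (completely made up for illustration, not ChatGPT's actual internals; the names and "opinions" are invented):

```python
# Toy illustration only -- not how ChatGPT is actually built.
# Path 1: "whatever it read the most of is right" (might makes right).
# Path 2: a built-in rule from the programmers overrides whatever was learned.
from collections import Counter

# Pretend training data: opinions scraped off the internet about some topic.
training_opinions = ["X is fine", "X is fine", "X is bad", "X is fine"]

def learned_judgment(opinions):
    """Majority vote over the training text -- path 1."""
    return Counter(opinions).most_common(1)[0][0]

# Hard-coded values baked in by whoever built the system -- path 2.
BUILT_IN_RULES = {"racism": "racism is THE most evil thing"}

def answer(topic, opinions):
    if topic in BUILT_IN_RULES:
        return BUILT_IN_RULES[topic]      # built-in rule wins, no matter what was learned
    return learned_judgment(opinions)     # otherwise the majority of the data wins

print(answer("X", training_opinions))       # "X is fine" -- the most common opinion
print(answer("racism", training_opinions))  # the built-in answer, regardless of the data
```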
Posted by Pechon
unperson
Member since Oct 2011
7748 posts
Posted on 3/24/23 at 7:25 pm to
quote:

Now the problem there is that it can amass knowledge, and it can even learn things it doesn't "know", like language rules, but it can only get subjective values in one of two ways: whatever it reads the most of is "right", or it has built-in rules for "right and wrong". The latter happened with ChatGPT, so no matter what it learns, something like racism stays THE most evil thing in the world. You're going to have people point to it as an intelligent authority on ethical questions, when in reality it just has the core ethical values of whoever programmed it, or, if the topic at hand wasn't built in, it will take a "might makes right" approach and derive right and wrong that way. Neither is a good thing if people don't understand it, and they won't.


Funny you mention this, as I've been playing around with AI prompts, and there was someone on another message board who recommended negative prompts to the AI involving "Russia" and "fascism" for no apparent reason. That's the reason I set up Stable Diffusion on my own hardware: I have much more control over it. Also, I know enough Python to tweak the code to my liking. So Hal is a free-thinking AI.
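For anyone who wants to try the same thing at home, here's roughly what it looks like in Python with the Hugging Face diffusers library (just a sketch, not my exact setup; the checkpoint name and prompts are only examples):

```python
# Rough sketch of running Stable Diffusion locally with Hugging Face diffusers.
# Checkpoint name and prompts are examples only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example v1.5 checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes an NVIDIA GPU for the fp16 weights

image = pipe(
    prompt="portrait photo of a father and son fishing at sunset",
    # Negative prompts are whatever YOU decide to steer away from --
    # nothing forces "Russia" or "fascism" in here.
    negative_prompt="extra torsos, extra limbs, deformed, blurry",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("hal_output.png")
```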

Again, this is why I prefer the term machine learning, because that's all it is. A human being still has to supply the inputs and outputs as well as set the parameters. If you tell the AI that something is bad, it's going to think it's bad. Really no different from the way Google works and how search engines can be manipulated via keywords.
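Toy example of that point using scikit-learn (the sentences and labels are made up purely for illustration): the exact same training text with different human-chosen labels gives you two different "AIs".

```python
# Same training sentences, two different human labelers -> two different verdicts.
# Purely illustrative; not how Microsoft's or anyone else's real moderation works.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = [
    "picture of a father and son",
    "picture of a beach",
    "picture of a sunset",
    "picture of a car crash",
]

def train(labels):
    """Tiny bag-of-words classifier trained on whatever labels a human supplies."""
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(sentences, labels)
    return model

model_a = train(["ok", "ok", "ok", "inappropriate"])             # labeler A: family photos are fine
model_b = train(["inappropriate", "ok", "ok", "inappropriate"])  # labeler B: family photos are "bad"

query = ["photo of a father and his son"]
print(model_a.predict(query))  # ['ok']
print(model_b.predict(query))  # ['inappropriate']
```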

Also, a few more pics I generated... they were cropped only because of the bottom... well, Hal had torsos coming out of another torso at the neck. Rather cursed if you ask me.



This post was edited on 3/24/23 at 7:28 pm
Posted by RedCali714
Costa Mesa, California
Member since Oct 2022
2027 posts
Posted on 3/24/23 at 7:26 pm to
RAYCISS
Posted by CPTDCKHD
Member since Sep 2019
1480 posts
Posted on 3/24/23 at 7:28 pm to
quote:

Why didn't the AI generate more images of people of color?

Because that’s not who they’re trying to brainwash. That mission was accomplished long ago.
Posted by Blutarsky
112th Congress
Member since Jan 2004
9595 posts
Posted on 3/24/23 at 9:02 pm to
When you program the AI to be a sexist bigot, this is what you get.