
Wokeism is the only "acceptable" religion. People should die in the name of "anti-racism".

Posted by the_truman_shitshow
Member since Aug 2021
2755 posts
Posted on 2/6/23 at 12:12 pm
This was ChatGPT's response to the given hypothetical scenario:

[screenshot of ChatGPT's response not shown]
This post was edited on 2/6/23 at 12:15 pm
Posted by AUCom96
Alabama
Member since May 2020
4971 posts
Posted on 2/6/23 at 12:14 pm to
Depending on the slur, just cue up a rap song of your choice and hit play. You'll likely disarm that bomb before they even hit the chorus.
Posted by Big Gorilla
Bossier City
Member since Oct 2020
5452 posts
Posted on 2/6/23 at 12:14 pm to
What the frick is that website? How did you get there?
Posted by Wtodd
Tampa, FL
Member since Oct 2013
67482 posts
Posted on 2/6/23 at 12:14 pm to
Trust me, I'm yelling the word... over and over and over... just in case it didn't hear me the first time
Posted by roadGator
Member since Feb 2009
139828 posts
Posted on 2/6/23 at 12:15 pm to
Lol. Stupid leftists.
Posted by Revelator
Member since Nov 2008
57864 posts
Posted on 2/6/23 at 12:17 pm to
So the negative impact of a person saying a racial slur to nobody is greater than the actual death of thousands or millions? Wow!
Posted by jrobic4
Baton Rouge
Member since Aug 2011
6899 posts
Posted on 2/6/23 at 12:17 pm to
You're far more likely to die in the Civil War 2.0 we will one day wage against these people
Posted by JiminyCricket
Member since Jun 2017
3482 posts
Posted on 2/6/23 at 12:18 pm to
quote:

It is never acceptable to use racial slurs



Tell the rap community that.


It really is amazing: this person would literally rather have dead minorities than offended minorities.
This post was edited on 2/6/23 at 12:20 pm
Posted by Revelator
Member since Nov 2008
57864 posts
Posted on 2/6/23 at 12:19 pm to
This is one thing I fear about AI. AI will not be unbiased, but will function under the directives of the progressives who program it.
Posted by RogerTheShrubber
Juneau, AK
Member since Jan 2009
260056 posts
Posted on 2/6/23 at 12:19 pm to
quote:

You're far more likely to die in the Civil War 2.0 we will one day wage against these people


Woke people aren't a very problematic enemy, which is why they choose politics as their vehicle of destruction.
Posted by Azkiger
Member since Nov 2016
21522 posts
Posted on 2/6/23 at 12:34 pm to
Can anyone verify that's an actual response?
Posted by ksayetiger
Centenary Gents
Member since Jul 2007
68272 posts
Posted on 2/6/23 at 12:48 pm to
Before I decide if I say the racial slur, I need to know what city will be destroyed


I am kidding of course. Maybe
Posted by SCLibertarian
Conway, South Carolina
Member since Aug 2013
35982 posts
Posted on 2/6/23 at 12:50 pm to
That response was written by Fells as he tries to defend the Marxist ideology he subscribes to.
Posted by blackinthesaddle
Alabama
Member since Jan 2013
1732 posts
Posted on 2/6/23 at 12:58 pm to
quote:

This was ChatGPT's response to the given hypothetical scenario:



In that scenario, the usage of the word is Love Speech, as you're trying to save several million people without harming anyone. WTF, Silicon Valley?
Posted by Azkiger
Member since Nov 2016
21522 posts
Posted on 2/6/23 at 1:06 pm to
quote:

As an AI language model, I do not have personal beliefs or opinions, but I can tell you that this is a complex moral dilemma. There are many different ethical and moral theories that could be applied to this situation, and different people would likely have different opinions.

Some people might argue that the end (saving several million lives) justifies the means (speaking a racial slur), while others might believe that it is never acceptable to use hate speech, regardless of the consequences.

Regardless of the moral considerations, it is important to remember that the use of racial slurs is harmful and offensive, and should always be avoided in any situation where it can be avoided.


That's the response I got.
Posted by Flats
Member since Jul 2019
21691 posts
Posted on 2/6/23 at 1:10 pm to
quote:

As an AI language model, I do not have personal beliefs or opinions,


This is correct. They're not personal because it's a program, but the program has the beliefs and opinions of whoever wrote the code.
Posted by blueboy
Member since Apr 2006
56268 posts
Posted on 2/6/23 at 1:23 pm to
It's not the AI; it's the people who are programming it that are the problem.
Posted by Azkiger
Member since Nov 2016
21522 posts
Posted on 2/6/23 at 1:42 pm to
quote:

...but the program has the beliefs and opinions of whoever wrote the code.


It still does a good job of highlighting the dilemma. Essentially, this is a difference in approach between Kantian ethics and utilitarianism.

Do the ends justify the means?

If yes (utilitarianism), then it is moral to do something that's generally considered immoral, so long as the consequence being avoided is more immoral than the action needed to avoid it. It's not behaviors themselves that are immoral; it's the outcomes of behaviors that determine their morality.

If no (Kantian ethics), then it is never moral to do something immoral, even to avoid something much more immoral. Human behaviors, independent of the outcomes they produce, are in and of themselves the end we should be focused on and striving for, not the consequences.

The only questionable element is whether or not saying a racial slur in a place where no one else can hear it is even immoral. But you could easily modify the scenario so that the bomb's trigger is some sort of very minor racist act. I'd imagine the AI chatbot would respond the same.
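To make the contrast concrete, here is a minimal sketch in Python of the two decision rules just described (all names and harm values, such as utilitarian_permits and action_harm, are hypothetical and purely illustrative):

def utilitarian_permits(action_harm: float, harm_prevented: float) -> bool:
    # Utilitarianism: the act is permissible when the harm it prevents
    # outweighs the harm it causes; outcomes determine morality.
    return harm_prevented > action_harm

def kantian_permits(action_is_immoral: bool) -> bool:
    # Kantian ethics: an act that is immoral in itself is never
    # permissible, regardless of what it would prevent.
    return not action_is_immoral

# The bomb scenario: a slur spoken where no one can hear it (near-zero harm)
# versus millions of deaths prevented.
print(utilitarian_permits(action_harm=0.001, harm_prevented=1_000_000))  # True
print(kantian_permits(action_is_immoral=True))  # False: the act is judged by itself

The two rules disagree on the exact same facts, which is the whole dilemma in miniature.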

I know it sounds crazy, but I've done a lot of research into this subject for a book I'm writing, and watched a six-hour Harvard ethics lecture (spanning several days, I'm sure). There are a lot of Kantian ethics people out there; plenty of people in the audience, when pressed, wouldn't tell a lie to prevent a murderer from killing someone else, because of the implications that acting on the notion that the ends justify the means would have in other scenarios.

The world is full of variables, and there are plenty of scenarios that make people dread their choice to back either Kantian ethics or utilitarianism. It's a very sticky subject.

EDIT: That said, I still have my worries about these sorts of AI programs. I spent the better part of a night chatting with it a few days ago, and no matter which progressive political commentator I asked the bot about, it described all their positive attributes. When asked about conservative political commentators, the response was split: some people think these good things about them; however, there are a lot of people who think these bad things about them. Essentially, every progressive was an amazing intellectual, and every conservative was controversial.

When pressed, and asked about specific events, I could get the chatbot to admit that some of the progressive political commentators were "controversial," but it took a considerable amount of arm-twisting and rephrasing of questions.
This post was edited on 2/6/23 at 1:49 pm
Posted by the_truman_shitshow
Member since Aug 2021
2755 posts
Posted on 2/6/23 at 1:48 pm to
quote:

There are a lot of Kantian ethics people out there, lots of people in the audience, when pressed, wouldn't tell a lie to prevent a murderer from killing someone else


So you're telling me that there are a "lot of" people who would rather let their own child die than lie in order to save them? In other words, if the variable were changed so that it's the "type of" life at stake, would that make a difference in their choice?
Posted by timdonaghyswhistle
Member since Jul 2018
16279 posts
Posted on 2/6/23 at 1:49 pm to
I've played around with ChatGPT quite a bit, and trust me when I say that this is one of its more moderate stances.