Wokeism is the only "acceptable" religion. People should die in the name of "anti-racism".
Posted on 2/6/23 at 12:12 pm
This was ChatGPT's response to the given hypothetical scenario:
This post was edited on 2/6/23 at 12:15 pm
Posted on 2/6/23 at 12:14 pm to the_truman_shitshow
Depending on the slur, just cue up a rap song of your choice and hit play. You'll likely disarm that bomb before they even hit the chorus.
Posted on 2/6/23 at 12:14 pm to the_truman_shitshow
What the frick is that website? How did you get there?
Posted on 2/6/23 at 12:14 pm to the_truman_shitshow
Trust me I'm yelling the word......over and over and over.....just in case it didn't hear me the first time
Posted on 2/6/23 at 12:15 pm to the_truman_shitshow
Lol. Stupid leftists.
Posted on 2/6/23 at 12:17 pm to the_truman_shitshow
So the negative impact of a person saying a racial slur to nobody is greater than the actual death of thousands or millions? Wow!
Posted on 2/6/23 at 12:17 pm to the_truman_shitshow
You're far more likely to die in the Civil War 2.0 we will one day wage against these people
Posted on 2/6/23 at 12:18 pm to the_truman_shitshow
quote:
It is never acceptable to use racial slurs
Tell the rap community that.
It really is amazing: this person would literally rather have dead minorities than offended minorities.
This post was edited on 2/6/23 at 12:20 pm
Posted on 2/6/23 at 12:19 pm to the_truman_shitshow
This is one thing I fear about AI. AI will not be unbiased, but function under the directives of the progressives who program it.
Posted on 2/6/23 at 12:19 pm to jrobic4
quote:
You're far more likely to die in the Civil War 2.0 we will one day wage against these people
Woke people aren't a very problematic enemy which is why they choose politics as the vehicle of destruction.
Posted on 2/6/23 at 12:34 pm to the_truman_shitshow
Can anyone verify that's an actual response?
Posted on 2/6/23 at 12:48 pm to the_truman_shitshow
Before I decide if I say the racial slur I need to know what city will be destroyed
I am kidding of course. Maybe
Posted on 2/6/23 at 12:50 pm to the_truman_shitshow
That response was written by Fells as he tries to defend the Marxist ideology he subscribes to.
Posted on 2/6/23 at 12:58 pm to the_truman_shitshow
quote:
This was ChatGPT's response to the given hypothetical scenario:
In that scenario, the usage of the word is Love Speech, as you're trying to save several million people without harming anyone. WTF, Silicon Valley?
Posted on 2/6/23 at 1:06 pm to the_truman_shitshow
quote:
As an AI language model, I do not have personal beliefs or opinions, but I can tell you that this is a complex moral dilemma. There are many different ethical and moral theories that could be applied to this situation, and different people would likely have different opinions.
Some people might argue that the end (saving several million lives) justifies the means (speaking a racial slur), while others might believe that it is never acceptable to use hate speech, regardless of the consequences.
Regardless of the moral considerations, it is important to remember that the use of racial slurs is harmful and offensive, and should always be avoided in any situation where it can be avoided.
Is the response I got.
Posted on 2/6/23 at 1:10 pm to Azkiger
quote:
As an AI language model, I do not have personal beliefs or opinions,
This is correct. They're not personal because it's a program, but the program has the beliefs and opinions of whoever wrote the code.
Posted on 2/6/23 at 1:23 pm to the_truman_shitshow
It's not the AI, it's the people who are programing it that are the problem.
Posted on 2/6/23 at 1:42 pm to Flats
quote:
...but the program has the beliefs and opinions of whoever wrote the code.
It still does a good job of highlighting the dilemma. Essentially, this is a difference in approach between Kantian ethics vs Utilitarianism.
Do the ends justify the means?
If yes (utilitarianism), then it is moral to do something that's generally considered immoral so long as the consequence being avoided is more immoral than the action needed to avoid it. It's not behaviors themselves that are immoral, it's the outcomes of behaviors that determine their morality.
If no (Kantian ethics), then it is never moral to do something that's immoral, even to avoid something that's much more immoral. Human behaviors, independent of the outcomes they produce, are in and of themselves the end we should be focused on and striving for, not the consequences they produce.
The only questionable element is whether or not saying a racial slur in a place no one else can hear it is even immoral. But you could easily modify the scenario to where whatever the bomb's trigger is, is some sort of very minor racial act. I'd imagine the AI chatbot would respond the same.
I know it sounds crazy, but I've done a lot of research into this subject for a book I'm writing, and watched a 6-hour Harvard ethics lecture (spanning several days, I'm sure). There are a lot of Kantian ethics people out there; lots of people in the audience, when pressed, wouldn't tell a lie to prevent a murderer from killing someone else, because of the implications that acting on the notion that the ends justify the means would have in other scenarios.
The world is full of variables, and there are plenty of scenarios that make people dread their choice to back Kantian ethics or utilitarianism. It's a very sticky subject.
EDIT: That said, I still have my worries about these sorts of AI programs. I spent the better part of a night chatting with it a few days ago, and no matter which progressive political commentators I asked the bot about, it described all their positive attributes. When asked about conservative political commentators, the response was split: "some people think these good things about them; however, there are a lot of people who think these bad things about them." Essentially, every progressive was an amazing intellectual; every conservative was controversial.
When pressed, and asked about specific events, I could get the chatbot to admit that some of the progressive political commentators were "controversial" but it took a considerable amount of arm twisting and rephrasing of questions.
This post was edited on 2/6/23 at 1:49 pm
Posted on 2/6/23 at 1:48 pm to Azkiger
quote:
There are a lot of Kantian ethics people out there, lots of people in the audience, when pressed, wouldn't tell a lie to prevent a murderer from killing someone else
So you're telling me that there are a "lot of" people who would rather let their own child die than lie to save them? In other words, if the variable were to change such that it pertains to the "type of" life at stake, would it make a difference in their choice?
Posted on 2/6/23 at 1:49 pm to the_truman_shitshow
I've played around with chatGPT quite a bit, and trust me when I say that this is one of its more moderate stances.