locked post

ChatGPT might not be coming for all of us quite yet...

Posted on 5/10/23 at 12:42 pm
Posted by LSUFanHouston
NOLA
Member since Jul 2009
39180 posts
From an Accounting Today article:

ChatGPT, the AI chatbot that's taken the world by storm, has already conquered numerous tests: the Wharton MBA exam, the bar exam, and several AP exams, among others. But the talking bot met its match when Accounting Today ran it through the CPA exam as an experiment: ChatGPT failed utterly in all four sections.

The experiment took place at the Arizent office in New York City's financial district on April 13 in collaboration with Surgent CPA Review. We used two laptops, each running a separate ChatGPT 3.5 Pro account (metering on free accounts, or on GPT 4, would have made the experiment impractical). One laptop ran the BEC and FAR sections. The other ran the REG and AUD sections.

When all test sections were completed, its scores were:

REG: 39%
AUD: 46%
FAR: 35%
BEC: 48%

A 75% score is required to pass a section.
Posted by H2O Tiger
Delta Sky Club
Member since May 2021
6848 posts
Posted on 5/10/23 at 12:43 pm to
Big 4 employees never gonna get any relief
Posted by CocomoLSU
Inside your dome.
Member since Feb 2004
153804 posts
Posted on 5/10/23 at 12:46 pm to
Saw an interview on Rogan yesterday (YouTube), and it was an Asian guy talking about things like ChatGPT. He was saying how literally all it does is scour the internet for information and present it. It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is it literally was written by someone before. It's basically one huge plagiarizer.

I had never thought about it in that way, but it made perfect sense.
This post was edited on 5/10/23 at 6:14 pm
Posted by bad93ex
Walnut Cove
Member since Sep 2018
30808 posts
Posted on 5/10/23 at 12:46 pm to
It is also unable to answer whether a man's fart in College Station sounds different.
Posted by LegendInMyMind
Member since Apr 2019
66261 posts
Posted on 5/10/23 at 12:49 pm to
Yeah, but... how's it do on the Wonderlic?

And, AI Board.
Posted by TacoNash
Member since Mar 2020
715 posts
Posted on 5/10/23 at 12:51 pm to
quote:

each running a separate ChatGPT 3.5 Pro account


Why would they not use ChatGPT 4? It is significantly better.
Posted by Cdawg
TigerFred's Living Room
Member since Sep 2003
60777 posts
Posted on 5/10/23 at 12:51 pm to
quote:

He was saying how literally all it does is scour the internet for information and present it. It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is it literally was written by someone before. It's basically one huge plagiarizer.

Figured that out when I was trying to use it for my daughter's paper. It had a lot of incorrect information.
Posted by LSUFanHouston
NOLA
Member since Jul 2009
39180 posts
Posted on 5/10/23 at 12:52 pm to
quote:

He was saying how literally all it does is scour the internet for information and present it. It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is it literally was written by someone before. It's basically one huge plagiarizer.


There's some company trying to start a service where people pay $10 or $20 to ask a ChatGPT bot tax questions and get answers.

My immediate question is... how do they know it is a correct answer?
Posted by LSUFanHouston
NOLA
Member since Jul 2009
39180 posts
Posted on 5/10/23 at 12:54 pm to
They said that was impractical; I don't know what that means here.
Posted by Ancient Astronaut
Member since May 2015
36225 posts
Posted on 5/10/23 at 12:55 pm to
Michio Kaku?
This post was edited on 5/10/23 at 12:56 pm
Posted by TacoNash
Member since Mar 2020
715 posts
Posted on 5/10/23 at 12:57 pm to
quote:

They said that was impractical, I don't know what that means here.


Interesting. It may have been too expensive, or OpenAI may have limited the quota too much for it to be usable.
Posted by Roy Curado
Member since Jul 2021
1359 posts
Posted on 5/10/23 at 1:04 pm to
AI technology is rapidly expanding and enhancing every day now. ChatGPT is so late 2022 now.

AutoGPT is the tech that many believe "will take over humans," not a chatbot like ChatGPT.
Posted by TigerinATL
Member since Feb 2005
62439 posts
Posted on 5/10/23 at 1:05 pm to
First, 3.5 vs 4 is a huge difference and kind of makes the test pointless.

Second, the quality of the tech isn’t what is preventing it from taking a lot of jobs today, it’s the lack of integration. As we all use Microsoft Copilot over the next 3-5 years we will essentially be training our replacements and helping companies integrate AI into their workflows.

The job losses are going to come through gradual attrition, not mass layoffs. It will just be harder to get new jobs as nobody will be hiring.
Posted by armsdealer
Member since Feb 2016
11969 posts
Posted on 5/10/23 at 1:07 pm to
I pulled some medical case studies from biochem that I had already completed, and it answered all of the questions correctly, study after study.

I started using it to fill in objectives for exams and to turn textbook-style writing into plain English to help with memorization.

It is also great for fluffing a bio: just write what you want to say in a normal writing style and it adds all the PR/HR-style fluff for you.
Posted by lostinbr
Baton Rouge, LA
Member since Oct 2017
11832 posts
Posted on 5/10/23 at 1:12 pm to
quote:

He was saying how literally all it does is scour the internet for information and present it.

That’s a pretty massive oversimplification. It’s not actively scouring the internet. Its “knowledge” is limited to the dataset that was used for training (which is, admittedly, massive).

That being said, the conversation around where GPT gets its "information" largely misses the point. Large language models (LLMs) like GPT are designed to learn language, first and foremost, by consuming human-written language and developing an understanding of relationships between words. It's not that much different from how an infant learns language - it builds associations, tries them out, and is then corrected (via training) when it's wrong.

I bring this up because people often talk about GPT in terms of a “search engine,” saying it doesn’t actually understand the meaning of its words. Creating better search engines is certainly one application for LLMs but that’s not why they are important.

LLMs aren’t currently expected to be artificial general intelligence. They are just a piece of AGI. It would be like a human who could read and write words on a page but had no other senses. No vision beyond the page, no hearing, no ability to touch/manipulate objects. The human would only “understand” the words on the page as.. well.. words.

But the real jump in AI happens when you combine these LLMs with other AI research. You merge GPT with an image recognition AI and a camera. Now it has sight and can associate words with images. You merge it with a speech recognition program. Now it can have a conversation with a human.

And then you get to the scary part - you give it access to the internet. Now it can actively search out knowledge it doesn't already possess AND potentially interact with the outside world in a more uncontrolled environment. That means it can take actions much more freely and see the results of those actions. The logical next step is to give it a directive and see what it does. Maybe it cures cancer, maybe it kills us all.

The thing is, there’s an argument that the language part is the most complex and difficult to achieve. That’s why people think GPT and other LLMs are such a big deal.
Posted by hubertcumberdale
Member since Nov 2009
6707 posts
Posted on 5/10/23 at 1:20 pm to
The thing about AI is that it is perpetually improving with time/training
Posted by StringedInstruments
Member since Oct 2013
19818 posts
Posted on 5/10/23 at 1:22 pm to
quote:

The thing about AI is that it is perpetually improving with time/training


Unlike humanity.
Posted by BigCheese2001x
Member since Aug 2012
309 posts
Posted on 5/10/23 at 1:25 pm to
quote:

quote:
He was saying how literally all it does is scour the internet for information and present it.

That’s a pretty massive oversimplification. It’s not actively scouring the internet. Its “knowledge” is limited to the dataset that was used for training (which is, admittedly, massive).

That being said, the conversation around where GPT gets its "information" largely misses the point. Large language models (LLMs) like GPT are designed to learn language, first and foremost, by consuming human-written language and developing an understanding of relationships between words. It's not that much different from how an infant learns language - it builds associations, tries them out, and is then corrected (via training) when it's wrong.

I bring this up because people often talk about GPT in terms of a “search engine,” saying it doesn’t actually understand the meaning of its words. Creating better search engines is certainly one application for LLMs but that’s not why they are important.

LLMs aren’t currently expected to be artificial general intelligence. They are just a piece of AGI. It would be like a human who could read and write words on a page but had no other senses. No vision beyond the page, no hearing, no ability to touch/manipulate objects. The human would only “understand” the words on the page as.. well.. words.

But the real jump in AI happens when you combine these LLMs with other AI research. You merge GPT with an image recognition AI and a camera. Now it has sight and can associate words with images. You merge it with a speech recognition program. Now it can have a conversation with a human.

And then you get to the scary part - you give it access to the internet. Now it can actively search out knowledge it doesn't already possess AND potentially interact with the outside world in a more uncontrolled environment. That means it can take actions much more freely and see the results of those actions. The logical next step is to give it a directive and see what it does. Maybe it cures cancer, maybe it kills us all.

The thing is, there’s an argument that the language part is the most complex and difficult to achieve. That’s why people think GPT and other LLMs are such a big deal.



This was written in ChatGPT...
Posted by Hester Carries
Member since Sep 2012
24190 posts
Posted on 5/10/23 at 1:30 pm to
quote:

Figured that out when I was trying to use it for my daughter's paper. It had a lot of incorrect information.

So your daughter is so lazy and dumb she couldn’t even plagiarize her own paper and had to get you to do it? Haha.
Posted by BregmansWheelbarrow
Member since Mar 2020
2996 posts
Posted on 5/10/23 at 1:35 pm to
quote:

It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is it literally was written by someone before. It's basically one huge plagiarizer.


It seems to be roughly the same technology, though obviously more advanced, that was used for plagiarism detection when I was in college. All my history and English papers and exams had to be uploaded as a digital file to a service that would search the internet and various archives to check whether the work was plagiarized.
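The comparison step those checkers run can be sketched very roughly: split each document into overlapping word n-grams and measure how many are shared. This is a hypothetical toy, not any real service's algorithm; real checkers match against massive archives and try to catch paraphrasing too.

```python
# Toy text-matching sketch: the fraction of a document's word trigrams
# that also appear in a reference document.
def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(doc, reference, n=3):
    g_doc, g_ref = ngrams(doc, n), ngrams(reference, n)
    return len(g_doc & g_ref) / max(len(g_doc), 1)

original = "the quick brown fox jumps over the lazy dog"
copied = "the quick brown fox jumps over a sleeping cat"
print(round(overlap(copied, original), 2))  # prints 0.57
```

A high overlap score flags the document for human review; it doesn't prove copying on its own, since common phrases overlap naturally.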