ChatGPT might not be coming for all of us quite yet...
Posted on 5/10/23 at 12:42 pm
From an Accounting Today article:
ChatGPT, the AI chatbot that's taken the world by storm, has already conquered numerous tests–the Wharton MBA exam, the bar exam, and several AP exams among others. But the talking bot met its match when Accounting Today ran it through the CPA exam as an experiment: ChatGPT failed utterly in all four sections.
—
The experiment took place at the Arizent office in New York City's financial district on April 13 in collaboration with Surgent CPA Review. We used two laptops, each running a separate ChatGPT 3.5 Pro account (metering on free accounts, or on GPT 4, would have made the experiment impractical). One laptop ran the BEC and FAR section. The other ran the REG and AUD section.
When all test sections were completed, its scores were:
REG: 39%
AUD: 46%
FAR: 35%
BEC: 48%
A 75% score is required to pass a section.
Posted on 5/10/23 at 12:43 pm to LSUFanHouston
Big 4 employees never gonna get any relief
Posted on 5/10/23 at 12:46 pm to LSUFanHouston
Saw an interview on Rogan yesterday (YouTube) with an Asian guy talking about things like ChatGPT. He was saying how literally all it does is scour the internet for information and present it. It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is that it literally was written by someone before. It's basically one huge plagiarizer.
I had never thought about it in that way, but it made perfect sense.
This post was edited on 5/10/23 at 6:14 pm
Posted on 5/10/23 at 12:46 pm to LSUFanHouston
It is also unable to answer whether a man's fart in College Station sounds different.
Posted on 5/10/23 at 12:49 pm to LSUFanHouston
Yeah, but... how's it do on the Wonderlic?
And, AI Board.
Posted on 5/10/23 at 12:51 pm to LSUFanHouston
quote:
each running a separate ChatGPT 3.5 Pro account
Why would they not use ChatGPT 4? It is significantly better.
Posted on 5/10/23 at 12:51 pm to CocomoLSU
quote:
He was saying how literally all it does is scour the internet for information and present it. It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is that it literally was written by someone before. It's basically one huge plagiarizer.
Figured that out when I was trying to use it for my daughter's paper. It had a lot of incorrect information.
Posted on 5/10/23 at 12:52 pm to CocomoLSU
quote:
He was saying how literally all it does is scour the internet for information and present it. It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is that it literally was written by someone before. It's basically one huge plagiarizer.
There's some company trying to start a service where people pay $10 or $20 to ask a ChatGPT bot tax questions and get answers.
My immediate question is... how do they know it is a correct answer?
Posted on 5/10/23 at 12:54 pm to TacoNash
They said that was impractical; I don't know what that means here.
Posted on 5/10/23 at 12:55 pm to CocomoLSU
Michio Kaku?
This post was edited on 5/10/23 at 12:56 pm
Posted on 5/10/23 at 12:57 pm to LSUFanHouston
quote:
They said that was impractical, I don't know what that means here.
Interesting. It may have been too expensive, or OpenAI may have limited the quota too much for them to use it.
Posted on 5/10/23 at 1:04 pm to LSUFanHouston
AI technology is rapidly expanding and improving every day now. ChatGPT is so late 2022 now.
AutoGPT is the tech that many believe "will take over humans," not a chatbot like ChatGPT.
Posted on 5/10/23 at 1:05 pm to LSUFanHouston
First, 3.5 vs 4 is a huge difference and kind of makes the test pointless.
Second, the quality of the tech isn’t what is preventing it from taking a lot of jobs today, it’s the lack of integration. As we all use Microsoft Copilot over the next 3-5 years we will essentially be training our replacements and helping companies integrate AI into their workflows.
The job losses are going to come through gradual attrition, not mass layoffs. It will just be harder to get new jobs as nobody will be hiring.
Posted on 5/10/23 at 1:07 pm to LSUFanHouston
I pulled some medical case studies from biochem that I had already completed, and it answered all of the questions correctly, study after study.
I started using it to fill in objectives for exams and to turn textbook-style writing into plain English to help with memorization.
It is also great for fluffing a bio: just write what you want to say in a normal writing style and it adds all the PR/HR-style fluff for you.
Posted on 5/10/23 at 1:12 pm to CocomoLSU
quote:
He was saying how literally all it does is scour the internet for information and present it.
That’s a pretty massive oversimplification. It’s not actively scouring the internet. Its “knowledge” is limited to the dataset that was used for training (which is, admittedly, massive).
That being said, the conversation around where GPT gets its "information" largely misses the point. Large language models (LLMs) like GPT are designed to learn language, first and foremost, by consuming human-written language and developing an understanding of relationships between words. It's not that much different from how an infant learns language - it builds associations, tries them out, and is then corrected (via training) when it's wrong.
I bring this up because people often talk about GPT in terms of a “search engine,” saying it doesn’t actually understand the meaning of its words. Creating better search engines is certainly one application for LLMs but that’s not why they are important.
LLMs aren’t currently expected to be artificial general intelligence. They are just a piece of AGI. It would be like a human who could read and write words on a page but had no other senses. No vision beyond the page, no hearing, no ability to touch/manipulate objects. The human would only “understand” the words on the page as.. well.. words.
But the real jump in AI happens when you combine these LLMs with other AI research. You merge GPT with an image recognition AI and a camera. Now it has sight and can associate words with images. You merge it with a speech recognition program. Now it can have a conversation with a human.
And then you get to the scary part - you give it access to the internet. Now it can actively search out knowledge it doesn't already possess AND potentially interact with the outside world in a more uncontrolled environment. That means it can take actions much more freely and see the results of those actions. The logical next step is to give it a directive and see what it does. Maybe it cures cancer, maybe it kills us all.
The thing is, there’s an argument that the language part is the most complex and difficult to achieve. That’s why people think GPT and other LLMs are such a big deal.
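To make the "learned associations between words" idea from the post above concrete, here's a toy sketch in Python. It's nothing like GPT's actual neural-network architecture (which predicts over huge contexts, not single words), but it shows the basic point: the model learns statistics about which words follow which during training, then generates from those statistics rather than looking anything up on the internet.

```python
# Toy next-word predictor: count which words follow which in a tiny
# training corpus, then predict from those counts. A crude stand-in
# for what an LLM learns at vastly larger scale.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For each word, count the words observed to follow it (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on' ("sat" was always followed by "on")
print(predict_next("the"))  # one of the words seen after "the"
```

The key takeaway: nothing here is copied and pasted from a source at answer time. The counts are baked in during "training," and predictions come from them alone, which is also why such a model can confidently produce wrong answers.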
Posted on 5/10/23 at 1:20 pm to LSUFanHouston
The thing about AI is that it is perpetually improving with time/training.
Posted on 5/10/23 at 1:22 pm to hubertcumberdale
quote:
The thing about AI is that it is perpetually improving with time/training.
Unlike humanity.
Posted on 5/10/23 at 1:25 pm to lostinbr
quote:
quote:
He was saying how literally all it does is scour the internet for information and present it.
That’s a pretty massive oversimplification. It’s not actively scouring the internet. Its “knowledge” is limited to the dataset that was used for training (which is, admittedly, massive).
That being said, the conversation around where GPT gets its "information" largely misses the point. Large language models (LLMs) like GPT are designed to learn language, first and foremost, by consuming human-written language and developing an understanding of relationships between words. It's not that much different from how an infant learns language - it builds associations, tries them out, and is then corrected (via training) when it's wrong.
I bring this up because people often talk about GPT in terms of a “search engine,” saying it doesn’t actually understand the meaning of its words. Creating better search engines is certainly one application for LLMs but that’s not why they are important.
LLMs aren’t currently expected to be artificial general intelligence. They are just a piece of AGI. It would be like a human who could read and write words on a page but had no other senses. No vision beyond the page, no hearing, no ability to touch/manipulate objects. The human would only “understand” the words on the page as.. well.. words.
But the real jump in AI happens when you combine these LLMs with other AI research. You merge GPT with an image recognition AI and a camera. Now it has sight and can associate words with images. You merge it with a speech recognition program. Now it can have a conversation with a human.
And then you get to the scary part - you give it access to the internet. Now it can actively search out knowledge it doesn't already possess AND potentially interact with the outside world in a more uncontrolled environment. That means it can take actions much more freely and see the results of those actions. The logical next step is to give it a directive and see what it does. Maybe it cures cancer, maybe it kills us all.
The thing is, there’s an argument that the language part is the most complex and difficult to achieve. That’s why people think GPT and other LLMs are such a big deal.
This was written by ChatGPT...
Posted on 5/10/23 at 1:30 pm to Cdawg
quote:
Figured that out when I was trying to use it for my daughter's paper. It had a lot of incorrect information.
So your daughter is so lazy and dumb she couldn’t even plagiarize her own paper and had to get you to do it? Haha.
Posted on 5/10/23 at 1:35 pm to CocomoLSU
quote:
It doesn't care if the information is right or wrong, it just copies it and pastes it in whatever order it determines. He said the reason people say "That sounds like it was written by a human" is that it literally was written by someone before. It's basically one huge plagiarizer.
It seems to be roughly the same technology, though obviously more advanced, as what was used for plagiarism detection when I was in college. All my history and English papers and exams had to be uploaded as digital files to a service that would search the internet and many different archives to see if the papers and tests were plagiarized.