re: Creator of node.js says the era of human computer coding is OVER
Posted on 1/21/26 at 9:18 pm to Tiger985
quote:
Much more rapidly than people realize. I think the critics of AI who today make valid criticisms of some of the current product are in denial about how rapidly these changes are coming. Imagine the changes we have seen in the Internet and smartphones over the last 25 years, multiply it by 5 and compress it into about 24 months. That's what's happening.
I don’t see that as an apt comparison. I’ve yet to see anyone so much as attempt to explain how we go from LLM to AGI. As far as I can tell, one doesn’t evolve into the other.
Posted on 1/21/26 at 9:22 pm to Joshjrn
quote:
I don’t see that as an apt comparison. I’ve yet to see anyone so much as attempt to explain how we go from LLM to AGI. As far as I can tell, one doesn’t evolve into the other.
Yeah, it's not a good comparison.
Bayesian statistics and artificial intelligence research have been around for a while, with several intractable problems. LLMs have a long way to go. By their nature they will grow, but still controlled by inputs from humans, and reviewed by humans.
Posted on 1/21/26 at 9:53 pm to Tiger985
The internet was also prophesied to be a failure after the dot-com bust, even by some at the WSJ and in the tech industry. Obviously, hindsight is 20/20, and I can see AI going through a similar evolution. There's so much fear that it will completely destroy SWE, but more likely engineering teams will end up reduced headcount-wise, with engineering following more of a horizontal growth model rather than a vertical one. AI will help cover more ground and open up avenues for skilled labor that aren't yet fully understood.
Plenty of highly intelligent folks making the fear claims may be more than correct, and their opinions hold a lot of weight, but there is a certain economics to adoption of AI that hopefully will course correct these fears.
Or tech workers are screwed in 10 years or less and we’ll end up “working in the fields” instead.
Posted on 1/21/26 at 10:06 pm to tiggerthetooth
John Henry Vs the Steam Drill.
Paul Bunyan vs the chain saw
AI has the relentless engine but humans still have the heart
The A.I Polka
Posted on 1/21/26 at 11:07 pm to bignuss18
quote:
AI will help cover more ground and open up avenues for skilled labor that’s not yet fully understood.
This is what I think is most likely to happen; it's what happened with the rise of the personal computer and the internet.
quote:
there is a certain economics to adoption of AI that hopefully will course correct these fears
When I see the reported earnings for the AI companies, I wonder how they are going to make money long term. If OpenAI is resorting to ads, that doesn't seem like a good sign. Supposedly Anthropic is projected to turn a profit this year.
quote:
Or tech workers are screwed in 10 years or less and we’ll end up “working in the fields” instead.
Whether it's tech, politics, the economy, or the weather, fear and doomcasting sell and get clicks.
If tech goes away, I'll go get a real job.
Posted on 1/22/26 at 4:45 am to GetMeOutOfHere
OpenAI is an interesting case. It almost feels like it's going the way of Ask Jeeves. They also just inked a huge deal with ServiceNow (this is one of my supervised PITAs). Ten years ago this might've been good for both, but now both companies are predatory af on their own scales.
I continue to believe audit/risk around AI will still have a larger human footprint once AI is mature. With my finserv hat on: there will be more clients and regulatory bodies than not who don't want AI auditing AI. It's a recipe for failure.
Posted on 1/22/26 at 6:04 am to tiggerthetooth
Learn to coal, bitches!
Posted on 1/22/26 at 7:39 am to Tiger985
quote:
Imagine the changes we have seen in the Internet and smartphones over the last 25 years
Has it actually changed all that much?
Posted on 1/22/26 at 8:15 am to Joshjrn
quote:
I don’t see that as an apt comparison. I’ve yet to see anyone so much as attempt to explain how we go from LLM to AGI. As far as I can tell, one doesn’t evolve into the other.
Part of the problem is that people can’t agree on the definition of AGI in the first place.
That being said, if we can create models that match or surpass human performance across an array of individual tasks, we should be able to combine those models via the mixture-of-experts approach, with a core LLM likely acting as the human-machine interface.
I don't see LLMs as "evolving" into AGI so much as being the glue that makes AGI functionally possible. They'll have to be combined with other types of models (for example, speech recognition, computer vision, data analysis, etc.) to extend their capabilities. We are already seeing steps in that direction with the public models available today.
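The "LLM as glue" idea above can be sketched as a toy dispatcher: a core router classifies an incoming task and hands it to the matching specialist. This is a minimal illustration only — the specialist functions and the keyword-based router below are hypothetical stand-ins for real models, not any actual API.

```python
from typing import Callable, Dict

# Hypothetical specialist "models" -- in a real system each would be a
# separate neural network (vision, speech, data analysis, etc.).
def vision_model(task: str) -> str:
    return f"[vision] described image for: {task}"

def speech_model(task: str) -> str:
    return f"[speech] transcribed audio for: {task}"

def analysis_model(task: str) -> str:
    return f"[analysis] computed statistics for: {task}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "image": vision_model,
    "audio": speech_model,
    "data": analysis_model,
}

def core_router(task: str) -> str:
    """Stand-in for the core LLM: route to the expert whose domain the task mentions,
    otherwise handle the task directly."""
    for keyword, expert in SPECIALISTS.items():
        if keyword in task.lower():
            return expert(task)
    return f"[core LLM] handled directly: {task}"

print(core_router("Describe this image of a tiger"))
print(core_router("Summarize this data table"))
```

In a real system the router would itself be a learned model (and routing decisions would be soft, not keyword matches), but the shape is the same: one coordinating model in front, specialists behind it.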
Ultimately I think the technical barriers are:
1. Identifying cognitive tasks/functions that are not adequately addressed by current AI models, and developing new models to attack them.
2. Creating the core model that handles goal setting, planning, and execution. Right now it looks (to me) like this is something LLMs might be able to accomplish, but it could end up being something novel.
3. Connecting everything together and giving it the freedom to step out “into the real world.”
I don’t think any of these are insurmountable with the trajectory we are on right now. I think the real (practical) question is scalability. Building AGI is one thing. Scaling it is another, and I wouldn’t be surprised if we hit a major bottleneck in compute power before it’s all said and done.