re: Why Google shutdown it's revolutionary, cutting edge, quantum computer chip, Willow
Posted on 2/14/25 at 2:14 am to CIGAR_cigarillo
I thought we weren't permitted to believe in invisible things.
Posted on 2/14/25 at 6:41 am to lake chuck fan

"Skynet is the virus!"
Posted on 2/14/25 at 9:29 am to RiverCityTider
quote:
The same way Deepseek cost a thousand times less than ChatGPT, but with better results.
No one has been able to replicate the training claims; the costs were misrepresented.
It has also been shown to be much worse than the American models on anything outside the benchmarks.
They trained specifically on the known benchmarks, which is a form of overfitting.
Posted on 2/14/25 at 9:33 am to Narax
quote:
They trained specifically on known benchmarks
So it's like giving a kid the test questions beforehand and praising him for getting an A.
Posted on 2/14/25 at 10:09 am to LordSaintly
Yup, they are all publicly known and can be trained on.
quote:
The tested benchmarks are as follows:
AIME 2024: A set of problems from the 2024 edition of the American Invitational Mathematics Examination.
CodeForces: A competitive-programming benchmark designed to evaluate the reasoning capabilities of LLMs with human-comparable standardized Elo ratings.
GPQA Diamond: A subset of the larger Graduate-Level Google-Proof Q&A dataset of challenging questions that domain experts consistently answer correctly, but non-experts struggle to answer accurately, even with extensive internet access.
MATH-500: This tests the ability to solve challenging high-school-level mathematical problems, typically requiring significant logical reasoning and multi-step solutions.
MMLU: Massive Multitask Language Understanding is a benchmark designed to measure knowledge acquired during pretraining, by evaluating LLMs exclusively in zero-shot and few-shot settings.
SWE-bench: This assesses an LLM’s ability to complete real-world software engineering tasks, specifically how the model can resolve GitHub issues from popular open-source Python repositories.
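The overfitting point above can be illustrated with a toy sketch (all Q&A pairs here are made up for the example): a "model" that has simply memorized a benchmark's test set scores perfectly on that benchmark while being useless on unseen questions.

```python
# Toy illustration of benchmark contamination / overfitting.
# The "benchmark" and "novel" Q&A pairs are invented for this sketch.

benchmark = {"2+2": "4", "capital of France": "Paris"}
novel = {"3+5": "8", "capital of Spain": "Madrid"}

# "Training" here is just memorizing the benchmark verbatim.
memorized = dict(benchmark)

def answer(question):
    # Look up a memorized answer; fail on anything unseen.
    return memorized.get(question, "I don't know")

def score(dataset):
    # Fraction of questions answered correctly.
    correct = sum(answer(q) == a for q, a in dataset.items())
    return correct / len(dataset)

print(score(benchmark))  # 1.0 -- perfect on the leaked test set
print(score(novel))      # 0.0 -- useless on anything new
```

This is the "giving the kid the test questions beforehand" scenario: a perfect benchmark score says nothing about general ability once the test set has leaked into training.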
Posted on 2/14/25 at 10:15 am to lake chuck fan
quote:
producing strange anomalies such as symbols and equations that couldn't be explained. These closely resembled ancient symbols believed to be lost languages,
Is LookSquirrel in the thread yet?
Posted on 2/14/25 at 10:43 am to TheHarahanian
quote:
Pulling the plug will defeat any secret encryption it came up with.
Unless it replicates itself somewhere else. Then you might be playing whack-a-mole.
Posted on 2/14/25 at 11:08 am to lake chuck fan
I pay some attention to news about quantum computing developments in computer security, especially the hype circulating in financial news.
However, based on quotes from credible people such as Dario Gil (Trump-appointed, formerly head of IBM's quantum team) and security experts such as Bruce Schneier, I believe we are not as far along as the financial news and fearmongering would have you believe.
This kind of hype seems concentrated in the financial news sector.