Mr. Chicken, are text posts generated exclusively by Deepseek or other AI applications ok?
Posted on 1/29/25 at 6:53 pm
And by "ok" I mean is that considered using another's intellectual property and thus in violation of the board's guidelines? Should attribution be given when it's done?
Besides that, it sure is boring to read when I know some computer wrote the post, and it would be easy to lose interest in reading the board if that practice is allowed to proliferate.
Thank you.
Posted on 1/29/25 at 10:34 pm to LSURussian
Is there a reason why you are asking this question?
Posted on 1/30/25 at 8:36 am to Chicken
Here are posts from the Political Board that were copied and pasted from the AI app Deepseek since that app was made available 3 days ago. I remember reading one other Deepseek-generated post, but I don't remember who posted it, so I couldn't find it using the Search function.
Perhaps the novelty of using AI to generate posts will wear off quickly and there won't be any issues with it.
None of the AI posts were formatted as quotes from an outside source, nor was there any attribution given.
(I had to use two posts to quote the AI-generated posts because of the 7,900-character limit for a post.)
quote:
1. **Understanding Deepseek**: If Deepseek is a legitimate AI tool or service, its adoption likely reflects user interest in its features, whether for productivity, research, or curiosity. Labeling users as "morons" disregards their autonomy in choosing tools that meet their needs.
2. **Criticism Validity**:
- If concerns about Deepseek (e.g., privacy, effectiveness) exist, these should be addressed constructively. Users may prioritize convenience, lack awareness of alternatives, or trust marketing claims—common behaviors globally, not uniquely American.
- Without evidence of harm or deceit, dismissing users as "morons" is hyperbolic and unproductive.
3. **Cultural Stereotyping**: Singling out Americans reflects bias. Technology adoption varies widely, and critiques should focus on the product, not nationality. If Deepseek has issues, they likely affect all users, not just Americans.
4. **Ethical Labeling**: Using derogatory terms like "moron" stifles meaningful discourse. Educating users about potential risks or alternatives is more constructive than insults.
**Conclusion**: The claim is a harmful overgeneralization. Rather than assigning blame, fostering informed decision-making and addressing specific concerns about the software (if any) would be more appropriate. Respectful dialogue encourages better understanding and safer tech practices.
quote:
The crash of an F-35 fighter jet at Eielson Air Force Base in Alaska on **January 29, 2025**, carries significant political implications, intersecting with national security priorities, defense spending debates, and public trust in military technology. Below is a detailed analysis of its political dimensions:
---
### **1. Scrutiny of the F-35 Program’s Cost and Effectiveness**
The F-35 Lightning II, developed by Lockheed Martin, is the **most expensive military program in U.S. history**, with a projected lifetime cost of $1.7 trillion. Critics have long questioned its value, citing recurring technical failures, maintenance challenges, and crashes. The Alaska incident—the **third F-35 crash in under a year**—fuels arguments that the program is a "flop" despite its astronomical price tag.
- **Political fallout**: Lawmakers and watchdog groups, such as the Government Accountability Office (GAO), may intensify calls for audits or funding cuts, especially as opponents contrast the F-35 with rival aircraft like Russia’s Su-57, which is perceived as more cost-effective.
- **Public perception**: Viral footage of the crash (e.g., the jet "dropping from the sky") amplifies skepticism about taxpayer-funded military investments, pressuring elected officials to address accountability.
---
### **2. Strategic Implications for Arctic and Indo-Pacific Defense**
Eielson Air Force Base is a **strategic hub for Arctic operations** and Indo-Pacific Command (INDOPACOM) missions. The base underwent a $500 million expansion to host 54 F-35s, positioning it as a critical node for countering threats from China and Russia.
- **Political symbolism**: Repeated crashes undermine confidence in the U.S. military’s ability to project power in contested regions. Critics argue that operational failures risk ceding dominance to adversaries, especially as China advances its own stealth technology.
- **Bipartisan concerns**: Both Democrats and Republicans prioritize Arctic security, but incidents like this could spark debates about whether the F-35 is the right tool for the mission or if alternatives should be explored.
---
### **3. Military Accountability and Transparency**
The crash investigation’s findings—or lack thereof—could trigger political battles over military transparency. For example:
- **Past precedents**: The 2023 South Carolina F-35B crash, where a pilot ejected unnecessarily and the jet flew unmanned for 64 miles, led to accusations of poor training and flawed manuals. Similar scrutiny may follow the Alaska crash, particularly if mechanical failures or pilot error are confirmed.
- **Congressional pressure**: Lawmakers like Rep. Nancy Mace (R-S.C.) have previously criticized the military’s handling of F-35 incidents, demanding clearer communication and accountability. The Alaska crash may reignite these calls, especially if the Air Force downplays systemic issues.
---
### **4. Defense Contractor Influence and Lobbying**
Lockheed Martin’s lobbying power and its ties to policymakers are a recurring political issue. The F-35’s troubled history has not deterred continued funding, raising questions about corporate influence over defense priorities.
- **Contractor accountability**: Critics argue that Lockheed Martin has faced insufficient consequences for delays and technical flaws, despite the program’s importance to national security.
- **Jobs vs. safety**: The F-35 supports thousands of jobs across congressional districts, creating political incentives to sustain funding even amid safety concerns. For example, Eielson’s expansion brought 3,500 personnel to Alaska, bolstering local economies. Lawmakers may prioritize economic benefits over program reforms.
---
### **5. Environmental and Fiscal Impact**
The crash’s aftermath—including environmental cleanup and the loss of a $100 million aircraft—adds to the political debate over military sustainability.
- **Environmental costs**: Past F-35 crashes required extensive remediation efforts, such as removing contaminated soil and vegetation. Environmental groups may leverage this to oppose expanded military operations in sensitive Arctic regions.
- **Budget priorities**: With the U.S. facing domestic fiscal challenges, opponents could argue that F-35 funds would be better spent on infrastructure, healthcare, or climate initiatives.
---
### **Conclusion**
The Alaska F-35 crash is politically charged because it intersects with **defense spending**, **strategic credibility**, **military-industrial complex influence**, and **public accountability**. As investigations proceed, lawmakers will face pressure to balance national security needs with fiscal responsibility and transparency. The incident underscores broader tensions between innovation, cost, and reliability in U.S. defense policy—a debate likely to intensify as global threats evolve.
Posted on 1/30/25 at 8:36 am to LSURussian
quote:
### **1. Purpose and Functionality**
- **DeepSeek**:
A Chinese AI company focused on developing **open-source, cost-efficient language models** for tasks like mathematical reasoning, coding, and complex problem-solving. Its models (e.g., DeepSeek-R1, V3) emphasize transparency, long-chain reasoning, and affordability for developers and enterprises.
Example: DeepSeek-R1 can solve math competition problems with detailed "thought processes" and outperforms models like GPT-4o in certain benchmarks.
- **"Deep Dream" (Hypothetical Bourne Analogue)**:
If imagined as a Bourne-style surveillance system, "Deep Dream" might represent a **closed, government-controlled AI** for tracking targets, hacking networks, or predicting human behavior—common tropes in spy thrillers. Unlike DeepSeek, it would prioritize secrecy, real-time threat analysis, and covert operations.
---
### **2. Technological Approach**
- **DeepSeek**:
Uses **reinforcement learning (RL)** and innovative training methods like "pure RL" without pre-defined templates. For example, DeepSeek-R1-Zero learns through trial-and-error rewards, achieving "aha moments" akin to human problem-solving. Its architecture is transparent, with open-source models and APIs.
Cost: Trained for ~$5.58 million, far cheaper than competitors.
- **"Deep Dream"**:
Likely depicted as a **black-box system** with classified algorithms, designed for military or intelligence use. It might employ neural networks for facial recognition, predictive analytics, or autonomous decision-making—aligned with Hollywood’s portrayal of omnipotent, ethically ambiguous AI.
---
### **3. Ethical and Operational Implications**
- **DeepSeek**:
Promotes **democratized AI** through low-cost access and open-source frameworks. It prioritizes user privacy (e.g., local deployment options) and aims to "make AI more inclusive". Criticisms include slower response times and occasional overthinking in simple tasks.
- **"Deep Dream"**:
Would raise **ethical concerns** typical of fictional espionage tools: mass surveillance, lack of accountability, and potential misuse. Its closed nature contrasts sharply with DeepSeek’s transparency, mirroring real-world debates about AI governance.
---
### **4. Real-World vs. Fictional Impact**
- **DeepSeek**:
Impacts industries like education, coding, and research. For example, its models assist in solving complex math problems and generate code with human-like reasoning. Its affordability ($0.5 per million input tokens) disrupts the market dominated by OpenAI and Anthropic.
- **"Deep Dream"**:
In fiction, such systems often symbolize **dystopian risks**—AI as a tool for control or warfare. The Bourne series’ Treadstone project (a covert assassin program) exemplifies this, though it lacks direct AI parallels.
### **Conclusion**
While DeepSeek represents a leap in practical, ethical AI development, the hypothetical "Deep Dream" reflects Hollywood’s cautionary tales about unchecked technological power. The contrast highlights how real-world advancements (like DeepSeek’s open models) aim to avoid the pitfalls often dramatized in fiction.